merge: have merge.update use a matcher instead of partial fn...
Augie Fackler
r27344:43c00ca8 default

The requested changes are too big and content was truncated.
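The commit summary above says ``merge.update`` now takes a matcher instead of a partial function. As a rough illustration only (the names and signatures below are invented for this sketch, not Mercurial's real API), a matcher object is richer than a bare predicate: callers can still call it like a function, but they can also inspect it:

```python
# Hypothetical sketch -- NOT Mercurial's actual merge.update API.
# Shows why a matcher object beats a bare functools.partial predicate.
from functools import partial

def path_in(prefix, path):
    # True when `path` lives under directory `prefix`
    return path.startswith(prefix + '/')

# Old style: an opaque callable; callers can only invoke it.
matchfn = partial(path_in, 'src')

class PrefixMatcher(object):
    """Callable like matchfn, but carries inspectable state."""
    def __init__(self, prefix):
        self.prefix = prefix
    def __call__(self, path):
        return path.startswith(self.prefix + '/')
    def always(self):
        # a caller can cheaply skip filtering when a matcher
        # advertises that it matches everything
        return False

m = PrefixMatcher('src')
print(matchfn('src/merge.py'), m('src/merge.py'))  # True True
print(m('tests/test-merge.t'))                     # False
```

Both styles answer "does this path match?", but only the matcher lets code downstream ask questions like `always()` without calling it on every file, which is the kind of flexibility the commit message alludes to.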

@@ -1,1414 +1,1414 b''
# histedit.py - interactive history editing for mercurial
#
# Copyright 2009 Augie Fackler <raf@durin42.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""interactive history editing

With this extension installed, Mercurial gains one new command: histedit. Usage
is as follows, assuming the following history::

 @  3[tip]   7c2fd3b9020c   2009-04-27 18:04 -0500   durin42
 |    Add delta
 |
 o  2   030b686bedc4   2009-04-27 18:04 -0500   durin42
 |    Add gamma
 |
 o  1   c561b4e977df   2009-04-27 18:04 -0500   durin42
 |    Add beta
 |
 o  0   d8d2fcd0e319   2009-04-27 18:04 -0500   durin42
      Add alpha

If you were to run ``hg histedit c561b4e977df``, you would see the following
file open in your editor::

 pick c561b4e977df Add beta
 pick 030b686bedc4 Add gamma
 pick 7c2fd3b9020c Add delta

 # Edit history between c561b4e977df and 7c2fd3b9020c
 #
 # Commits are listed from least to most recent
 #
 # Commands:
 #  p, pick = use commit
 #  e, edit = use commit, but stop for amending
 #  f, fold = use commit, but combine it with the one above
 #  r, roll = like fold, but discard this commit's description
 #  d, drop = remove commit from history
 #  m, mess = edit commit message without changing commit content
 #

In this file, lines beginning with ``#`` are ignored. You must specify a rule
for each revision in your history. For example, if you had meant to add gamma
before beta, and then wanted to add delta in the same revision as beta, you
would reorganize the file to look like this::

 pick 030b686bedc4 Add gamma
 pick c561b4e977df Add beta
 fold 7c2fd3b9020c Add delta

 # Edit history between c561b4e977df and 7c2fd3b9020c
 #
 # Commits are listed from least to most recent
 #
 # Commands:
 #  p, pick = use commit
 #  e, edit = use commit, but stop for amending
 #  f, fold = use commit, but combine it with the one above
 #  r, roll = like fold, but discard this commit's description
 #  d, drop = remove commit from history
 #  m, mess = edit commit message without changing commit content
 #

At which point you close the editor and ``histedit`` starts working. When you
specify a ``fold`` operation, ``histedit`` will open an editor when it folds
those revisions together, offering you a chance to clean up the commit message::

 Add beta
 ***
 Add delta

Edit the commit message to your liking, then close the editor. For
this example, let's assume that the commit message was changed to
``Add beta and delta.`` After histedit has run and had a chance to
remove any old or temporary revisions it needed, the history looks
like this::

 @  2[tip]   989b4d060121   2009-04-27 18:04 -0500   durin42
 |    Add beta and delta.
 |
 o  1   081603921c3f   2009-04-27 18:04 -0500   durin42
 |    Add gamma
 |
 o  0   d8d2fcd0e319   2009-04-27 18:04 -0500   durin42
      Add alpha

Note that ``histedit`` does *not* remove any revisions (even its own temporary
ones) until after it has completed all the editing operations, so it will
probably perform several strip operations when it's done. For the above example,
it had to run strip twice. Strip can be slow depending on a variety of factors,
so you might need to be a little patient. You can choose to keep the original
revisions by passing the ``--keep`` flag.

The ``edit`` operation will drop you back to a command prompt,
allowing you to edit files freely, or even use ``hg record`` to commit
some changes as a separate commit. When you're done, any remaining
uncommitted changes will be committed as well. When done, run ``hg
histedit --continue`` to finish this step. You'll be prompted for a
new commit message, but the default commit message will be the
original message for the ``edit``-ed revision.

The ``message`` operation will give you a chance to revise a commit
message without changing the contents. It's a shortcut for doing
``edit`` immediately followed by ``hg histedit --continue``.

If ``histedit`` encounters a conflict when moving a revision (while
handling ``pick`` or ``fold``), it'll stop in a similar manner to
``edit`` with the difference that it won't prompt you for a commit
message when done. If you decide at this point that you don't like how
much work it will be to rearrange history, or that you made a mistake,
you can use ``hg histedit --abort`` to abandon the new changes you
have made and return to the state before you attempted to edit your
history.

If we clone the histedit-ed example repository above and add four more
changes, such that we have the following history::

 @  6[tip]   038383181893   2009-04-27 18:04 -0500   stefan
 |    Add theta
 |
 o  5   140988835471   2009-04-27 18:04 -0500   stefan
 |    Add eta
 |
 o  4   122930637314   2009-04-27 18:04 -0500   stefan
 |    Add zeta
 |
 o  3   836302820282   2009-04-27 18:04 -0500   stefan
 |    Add epsilon
 |
 o  2   989b4d060121   2009-04-27 18:04 -0500   durin42
 |    Add beta and delta.
 |
 o  1   081603921c3f   2009-04-27 18:04 -0500   durin42
 |    Add gamma
 |
 o  0   d8d2fcd0e319   2009-04-27 18:04 -0500   durin42
      Add alpha

If you run ``hg histedit --outgoing`` on the clone then it is the same
as running ``hg histedit 836302820282``. If you plan to push to a
repository that Mercurial does not detect to be related to the source
repo, you can add a ``--force`` option.

Histedit rule lines are truncated to 80 characters by default. You
can customize this behavior by setting a different length in your
configuration file::

 [histedit]
 linelen = 120      # truncate rule lines at 120 characters

``hg histedit`` attempts to automatically choose an appropriate base
revision to use. To change which base revision is used, define a
revset in your configuration file::

 [histedit]
 defaultrev = only(.) & draft()
"""

try:
    import cPickle as pickle
    pickle.dump # import now
except ImportError:
    import pickle
import errno
import os
import sys

from mercurial import bundle2
from mercurial import cmdutil
from mercurial import discovery
from mercurial import error
from mercurial import copies
from mercurial import context
from mercurial import destutil
from mercurial import exchange
from mercurial import extensions
from mercurial import hg
from mercurial import node
from mercurial import repair
from mercurial import scmutil
from mercurial import util
from mercurial import obsolete
from mercurial import merge as mergemod
from mercurial.lock import release
from mercurial.i18n import _

cmdtable = {}
command = cmdutil.command(cmdtable)

class _constraints(object):
    # aborts if there are multiple rules for one node
    noduplicates = 'noduplicates'
    # abort if the node does belong to the edited stack
    forceother = 'forceother'
    # abort if the node doesn't belong to the edited stack
    noother = 'noother'

    @classmethod
    def known(cls):
        return set([v for k, v in cls.__dict__.items() if k[0] != '_'])

# Note for extension authors: ONLY specify testedwith = 'internal' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'internal'

# i18n: command names and abbreviations must remain untranslated
editcomment = _("""# Edit history between %s and %s
#
# Commits are listed from least to most recent
#
# Commands:
#  p, pick = use commit
#  e, edit = use commit, but stop for amending
#  f, fold = use commit, but combine it with the one above
#  r, roll = like fold, but discard this commit's description
#  d, drop = remove commit from history
#  m, mess = edit commit message without changing commit content
#
""")

class histeditstate(object):
    def __init__(self, repo, parentctxnode=None, actions=None, keep=None,
                 topmost=None, replacements=None, lock=None, wlock=None):
        self.repo = repo
        self.actions = actions
        self.keep = keep
        self.topmost = topmost
        self.parentctxnode = parentctxnode
        self.lock = lock
        self.wlock = wlock
        self.backupfile = None
        if replacements is None:
            self.replacements = []
        else:
            self.replacements = replacements

    def read(self):
        """Load histedit state from disk and set fields appropriately."""
        try:
            fp = self.repo.vfs('histedit-state', 'r')
        except IOError as err:
            if err.errno != errno.ENOENT:
                raise
            raise error.Abort(_('no histedit in progress'))

        try:
            data = pickle.load(fp)
            parentctxnode, rules, keep, topmost, replacements = data
            backupfile = None
        except pickle.UnpicklingError:
            data = self._load()
            parentctxnode, rules, keep, topmost, replacements, backupfile = data

        self.parentctxnode = parentctxnode
        rules = "\n".join(["%s %s" % (verb, rest) for [verb, rest] in rules])
        actions = parserules(rules, self)
        self.actions = actions
        self.keep = keep
        self.topmost = topmost
        self.replacements = replacements
        self.backupfile = backupfile

    def write(self):
        fp = self.repo.vfs('histedit-state', 'w')
        fp.write('v1\n')
        fp.write('%s\n' % node.hex(self.parentctxnode))
        fp.write('%s\n' % node.hex(self.topmost))
        fp.write('%s\n' % self.keep)
        fp.write('%d\n' % len(self.actions))
        for action in self.actions:
            fp.write('%s\n' % action.tostate())
        fp.write('%d\n' % len(self.replacements))
        for replacement in self.replacements:
            fp.write('%s%s\n' % (node.hex(replacement[0]), ''.join(node.hex(r)
                for r in replacement[1])))
        backupfile = self.backupfile
        if not backupfile:
            backupfile = ''
        fp.write('%s\n' % backupfile)
        fp.close()

    def _load(self):
        fp = self.repo.vfs('histedit-state', 'r')
        lines = [l[:-1] for l in fp.readlines()]

        index = 0
        lines[index] # version number
        index += 1

        parentctxnode = node.bin(lines[index])
        index += 1

        topmost = node.bin(lines[index])
        index += 1

        keep = lines[index] == 'True'
        index += 1

        # Rules
        rules = []
        rulelen = int(lines[index])
        index += 1
        for i in xrange(rulelen):
            ruleaction = lines[index]
            index += 1
            rule = lines[index]
            index += 1
            rules.append((ruleaction, rule))

        # Replacements
        replacements = []
        replacementlen = int(lines[index])
        index += 1
        for i in xrange(replacementlen):
            replacement = lines[index]
            original = node.bin(replacement[:40])
            succ = [node.bin(replacement[i:i + 40]) for i in
                    range(40, len(replacement), 40)]
            replacements.append((original, succ))
            index += 1

        backupfile = lines[index]
        index += 1

        fp.close()

        return parentctxnode, rules, keep, topmost, replacements, backupfile

    def clear(self):
        if self.inprogress():
            self.repo.vfs.unlink('histedit-state')

    def inprogress(self):
        return self.repo.vfs.exists('histedit-state')


class histeditaction(object):
    def __init__(self, state, node):
        self.state = state
        self.repo = state.repo
        self.node = node

    @classmethod
    def fromrule(cls, state, rule):
        """Parses the given rule, returning an instance of the histeditaction.
        """
        rulehash = rule.strip().split(' ', 1)[0]
        return cls(state, node.bin(rulehash))

    def verify(self):
        """Verifies semantic correctness of the rule"""
        repo = self.repo
        ha = node.hex(self.node)
        try:
            self.node = repo[ha].node()
        except error.RepoError:
            raise error.Abort(_('unknown changeset %s listed')
                              % ha[:12])

    def torule(self):
        """build a histedit rule line for an action

        by default lines are in the form:
        <hash> <rev> <summary>
        """
        ctx = self.repo[self.node]
        summary = ''
        if ctx.description():
            summary = ctx.description().splitlines()[0]
        line = '%s %s %d %s' % (self.verb, ctx, ctx.rev(), summary)
        # trim to 75 columns by default so it's not stupidly wide in my editor
        # (the 5 more are left for verb)
        maxlen = self.repo.ui.configint('histedit', 'linelen', default=80)
        maxlen = max(maxlen, 22) # avoid truncating hash
        return util.ellipsis(line, maxlen)

    def tostate(self):
        """Print an action in the format used by histedit state files
        (the first line is a verb, the remainder is the second)
        """
        return "%s\n%s" % (self.verb, node.hex(self.node))

    def constraints(self):
        """Return a set of constraints that this action should be verified for
        """
        return set([_constraints.noduplicates, _constraints.noother])

    def nodetoverify(self):
        """Returns a node associated with the action that will be used for
        verification purposes.

        If the action doesn't correspond to a node it should return None
        """
        return self.node

    def run(self):
        """Runs the action. The default behavior is to simply apply the
        action's rulectx onto the current parentctx."""
        self.applychange()
        self.continuedirty()
        return self.continueclean()

    def applychange(self):
        """Applies the changes from this action's rulectx onto the current
        parentctx, but does not commit them."""
        repo = self.repo
        rulectx = repo[self.node]
        hg.update(repo, self.state.parentctxnode)
        stats = applychanges(repo.ui, repo, rulectx, {})
        if stats and stats[3] > 0:
            raise error.InterventionRequired(_('Fix up the change and run '
                                               'hg histedit --continue'))

    def continuedirty(self):
        """Continues the action when changes have been applied to the working
        copy. The default behavior is to commit the dirty changes."""
        repo = self.repo
        rulectx = repo[self.node]

        editor = self.commiteditor()
        commit = commitfuncfor(repo, rulectx)

        commit(text=rulectx.description(), user=rulectx.user(),
               date=rulectx.date(), extra=rulectx.extra(), editor=editor)

    def commiteditor(self):
        """The editor to be used to edit the commit message."""
        return False

    def continueclean(self):
        """Continues the action when the working copy is clean. The default
        behavior is to accept the current commit as the new version of the
        rulectx."""
        ctx = self.repo['.']
        if ctx.node() == self.state.parentctxnode:
            self.repo.ui.warn(_('%s: empty changeset\n') %
                              node.short(self.node))
            return ctx, [(self.node, tuple())]
        if ctx.node() == self.node:
            # Nothing changed
            return ctx, []
        return ctx, [(self.node, (ctx.node(),))]

448 def commitfuncfor(repo, src):
448 def commitfuncfor(repo, src):
449 """Build a commit function for the replacement of <src>
449 """Build a commit function for the replacement of <src>
450
450
451 This function ensure we apply the same treatment to all changesets.
451 This function ensure we apply the same treatment to all changesets.
452
452
453 - Add a 'histedit_source' entry in extra.
453 - Add a 'histedit_source' entry in extra.
454
454
455 Note that fold has its own separated logic because its handling is a bit
455 Note that fold has its own separated logic because its handling is a bit
456 different and not easily factored out of the fold method.
456 different and not easily factored out of the fold method.
457 """
457 """
458 phasemin = src.phase()
458 phasemin = src.phase()
459 def commitfunc(**kwargs):
459 def commitfunc(**kwargs):
460 phasebackup = repo.ui.backupconfig('phases', 'new-commit')
460 phasebackup = repo.ui.backupconfig('phases', 'new-commit')
461 try:
461 try:
462 repo.ui.setconfig('phases', 'new-commit', phasemin,
462 repo.ui.setconfig('phases', 'new-commit', phasemin,
463 'histedit')
463 'histedit')
464 extra = kwargs.get('extra', {}).copy()
464 extra = kwargs.get('extra', {}).copy()
465 extra['histedit_source'] = src.hex()
465 extra['histedit_source'] = src.hex()
466 kwargs['extra'] = extra
466 kwargs['extra'] = extra
467 return repo.commit(**kwargs)
467 return repo.commit(**kwargs)
468 finally:
468 finally:
469 repo.ui.restoreconfig(phasebackup)
469 repo.ui.restoreconfig(phasebackup)
470 return commitfunc
470 return commitfunc

def applychanges(ui, repo, ctx, opts):
    """Merge changeset from ctx (only) into the current working directory"""
    wcpar = repo.dirstate.parents()[0]
    if ctx.p1().node() == wcpar:
        # edits are "in place"; we do not need to make any merge,
        # just apply the changes to the parent for editing
        cmdutil.revert(ui, repo, ctx, (wcpar, node.nullid), all=True)
        stats = None
    else:
        try:
            # ui.forcemerge is an internal variable, do not document
            repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                              'histedit')
            stats = mergemod.graft(repo, ctx, ctx.p1(), ['local', 'histedit'])
        finally:
            repo.ui.setconfig('ui', 'forcemerge', '', 'histedit')
    return stats

def collapse(repo, first, last, commitopts, skipprompt=False):
    """collapse the set of revisions from first to last as a new one.

    Expected commit options are:
    - message
    - date
    - username
    Commit message is edited in all cases.

    This function works in memory."""
    ctxs = list(repo.set('%d::%d', first, last))
    if not ctxs:
        return None
    for c in ctxs:
        if not c.mutable():
            raise error.Abort(
                _("cannot fold into public change %s") % node.short(c.node()))
    base = first.parents()[0]

    # commit a new version of the old changeset, including the update
    # collect all files which might be affected
    files = set()
    for ctx in ctxs:
        files.update(ctx.files())

    # Recompute copies (avoid recording a -> b -> a)
    copied = copies.pathcopies(base, last)

    # prune files which were reverted by the updates
    def samefile(f):
        if f in last.manifest():
            a = last.filectx(f)
            if f in base.manifest():
                b = base.filectx(f)
                return (a.data() == b.data()
                        and a.flags() == b.flags())
            else:
                return False
        else:
            return f not in base.manifest()
    files = [f for f in files if not samefile(f)]
    # commit version of these files as defined by head
    headmf = last.manifest()
    def filectxfn(repo, ctx, path):
        if path in headmf:
            fctx = last[path]
            flags = fctx.flags()
            mctx = context.memfilectx(repo,
                                      fctx.path(), fctx.data(),
                                      islink='l' in flags,
                                      isexec='x' in flags,
                                      copied=copied.get(path))
            return mctx
        return None

    if commitopts.get('message'):
        message = commitopts['message']
    else:
        message = first.description()
    user = commitopts.get('user')
    date = commitopts.get('date')
    extra = commitopts.get('extra')

    parents = (first.p1().node(), first.p2().node())
    editor = None
    if not skipprompt:
        editor = cmdutil.getcommiteditor(edit=True, editform='histedit.fold')
    new = context.memctx(repo,
                         parents=parents,
                         text=message,
                         files=files,
                         filectxfn=filectxfn,
                         user=user,
                         date=date,
                         extra=extra,
                         editor=editor)
    return repo.commitctx(new)

def _isdirtywc(repo):
    return repo[None].dirty(missing=True)

def abortdirty():
    raise error.Abort(_('working copy has pending changes'),
                      hint=_('amend, commit, or revert them and run histedit '
                             '--continue, or abort with histedit --abort'))


actiontable = {}
actionlist = []

def addhisteditaction(verbs):
    def wrap(cls):
        cls.verb = verbs[0]
        for verb in verbs:
            actiontable[verb] = cls
        actionlist.append(cls)
        return cls
    return wrap


@addhisteditaction(['pick', 'p'])
class pick(histeditaction):
    def run(self):
        rulectx = self.repo[self.node]
        if rulectx.parents()[0].node() == self.state.parentctxnode:
            self.repo.ui.debug('node %s unchanged\n' % node.short(self.node))
            return rulectx, []

        return super(pick, self).run()

@addhisteditaction(['edit', 'e'])
class edit(histeditaction):
    def run(self):
        repo = self.repo
        rulectx = repo[self.node]
        hg.update(repo, self.state.parentctxnode)
        applychanges(repo.ui, repo, rulectx, {})
        raise error.InterventionRequired(
            _('Make changes as needed, you may commit or record as needed '
              'now.\nWhen you are finished, run hg histedit --continue to '
              'resume.'))

    def commiteditor(self):
        return cmdutil.getcommiteditor(edit=True, editform='histedit.edit')

@addhisteditaction(['fold', 'f'])
class fold(histeditaction):
    def continuedirty(self):
        repo = self.repo
        rulectx = repo[self.node]

        commit = commitfuncfor(repo, rulectx)
        commit(text='fold-temp-revision %s' % node.short(self.node),
               user=rulectx.user(), date=rulectx.date(),
               extra=rulectx.extra())

    def continueclean(self):
        repo = self.repo
        ctx = repo['.']
        rulectx = repo[self.node]
        parentctxnode = self.state.parentctxnode
        if ctx.node() == parentctxnode:
            repo.ui.warn(_('%s: empty changeset\n') %
                         node.short(self.node))
            return ctx, [(self.node, (parentctxnode,))]

        parentctx = repo[parentctxnode]
        newcommits = set(c.node() for c in repo.set('(%d::. - %d)', parentctx,
                                                    parentctx))
        if not newcommits:
            repo.ui.warn(_('%s: cannot fold - working copy is not a '
                           'descendant of previous commit %s\n') %
                         (node.short(self.node), node.short(parentctxnode)))
            return ctx, [(self.node, (ctx.node(),))]

        middlecommits = newcommits.copy()
        middlecommits.discard(ctx.node())

        return self.finishfold(repo.ui, repo, parentctx, rulectx, ctx.node(),
                               middlecommits)

    def skipprompt(self):
        """Returns true if the rule should skip the message editor.

        For example, 'fold' wants to show an editor, but 'rollup'
        doesn't want to.
        """
        return False

    def mergedescs(self):
        """Returns true if the rule should merge messages of multiple changes.

        This exists mainly so that 'rollup' rules can be a subclass of
        'fold'.
        """
        return True

    def finishfold(self, ui, repo, ctx, oldctx, newnode, internalchanges):
        parent = ctx.parents()[0].node()
        hg.update(repo, parent)
        ### prepare new commit data
        commitopts = {}
        commitopts['user'] = ctx.user()
        # commit message
        if not self.mergedescs():
            newmessage = ctx.description()
        else:
            newmessage = '\n***\n'.join(
                [ctx.description()] +
                [repo[r].description() for r in internalchanges] +
                [oldctx.description()]) + '\n'
        commitopts['message'] = newmessage
        # date
        commitopts['date'] = max(ctx.date(), oldctx.date())
        extra = ctx.extra().copy()
        # histedit_source
        # note: ctx is likely a temporary commit but that's the best we can
        # do here. This is sufficient to solve issue3681 anyway.
        extra['histedit_source'] = '%s,%s' % (ctx.hex(), oldctx.hex())
        commitopts['extra'] = extra
        phasebackup = repo.ui.backupconfig('phases', 'new-commit')
        try:
            phasemin = max(ctx.phase(), oldctx.phase())
            repo.ui.setconfig('phases', 'new-commit', phasemin, 'histedit')
            n = collapse(repo, ctx, repo[newnode], commitopts,
                         skipprompt=self.skipprompt())
        finally:
            repo.ui.restoreconfig(phasebackup)
        if n is None:
            return ctx, []
        hg.update(repo, n)
        replacements = [(oldctx.node(), (newnode,)),
                        (ctx.node(), (n,)),
                        (newnode, (n,)),
                        ]
        for ich in internalchanges:
            replacements.append((ich, (n,)))
        return repo[n], replacements

class base(histeditaction):
    def constraints(self):
        return set([_constraints.forceother])

    def run(self):
        if self.repo['.'].node() != self.node:
            mergemod.update(self.repo, self.node, False, True)
            # branchmerge, force)
        return self.continueclean()

    def continuedirty(self):
        abortdirty()

    def continueclean(self):
        basectx = self.repo['.']
        return basectx, []

@addhisteditaction(['_multifold'])
class _multifold(fold):
    """fold subclass used for when multiple folds happen in a row

    We only want to fire the editor for the folded message once when
    (say) four changes are folded down into a single change. This is
    similar to rollup, but we should preserve both messages so that
    when the last fold operation runs we can show the user all the
    commit messages in their editor.
    """
    def skipprompt(self):
        return True

@addhisteditaction(["roll", "r"])
class rollup(fold):
    def mergedescs(self):
        return False

    def skipprompt(self):
        return True

@addhisteditaction(["drop", "d"])
class drop(histeditaction):
    def run(self):
        parentctx = self.repo[self.state.parentctxnode]
        return parentctx, [(self.node, tuple())]

@addhisteditaction(["mess", "m"])
class message(histeditaction):
    def commiteditor(self):
        return cmdutil.getcommiteditor(edit=True, editform='histedit.mess')

def findoutgoing(ui, repo, remote=None, force=False, opts=None):
    """utility function to find the first outgoing changeset

    Used by initialization code"""
    if opts is None:
        opts = {}
    dest = ui.expandpath(remote or 'default-push', remote or 'default')
    dest, revs = hg.parseurl(dest, None)[:2]
    ui.status(_('comparing with %s\n') % util.hidepassword(dest))

    revs, checkout = hg.addbranchrevs(repo, repo, revs, None)
    other = hg.peer(repo, opts, dest)

    if revs:
        revs = [repo.lookup(rev) for rev in revs]

    outgoing = discovery.findcommonoutgoing(repo, other, revs, force=force)
    if not outgoing.missing:
        raise error.Abort(_('no outgoing ancestors'))
    roots = list(repo.revs("roots(%ln)", outgoing.missing))
    if 1 < len(roots):
        msg = _('there are ambiguous outgoing revisions')
        hint = _('see "hg help histedit" for more detail')
        raise error.Abort(msg, hint=hint)
    return repo.lookup(roots[0])


@command('histedit',
    [('', 'commands', '',
      _('read history edits from the specified file'), _('FILE')),
     ('c', 'continue', False, _('continue an edit already in progress')),
     ('', 'edit-plan', False, _('edit remaining actions list')),
     ('k', 'keep', False,
      _("don't strip old nodes after edit is complete")),
     ('', 'abort', False, _('abort an edit in progress')),
     ('o', 'outgoing', False, _('changesets not found in destination')),
     ('f', 'force', False,
      _('force outgoing even for unrelated repositories')),
     ('r', 'rev', [], _('first revision to be edited'), _('REV'))],
    _("[ANCESTOR] | --outgoing [URL]"))
def histedit(ui, repo, *freeargs, **opts):
    """interactively edit changeset history

    This command edits changesets between an ANCESTOR and the parent of
    the working directory.

    The value from the "histedit.defaultrev" config option is used as a
    revset to select the base revision when ANCESTOR is not specified.
    The first revision returned by the revset is used. By default, this
    selects the editable history that is unique to the ancestry of the
    working directory.

    With --outgoing, this edits changesets not found in the
    destination repository. If the URL of the destination is omitted, the
    'default-push' (or 'default') path will be used.

    For safety, this command is also aborted if there are ambiguous
    outgoing revisions which may confuse users: for example, if there
    are multiple branches containing outgoing revisions.

    Use "min(outgoing() and ::.)" or a similar revset specification
    instead of --outgoing to specify the edit target revision exactly in
    such an ambiguous situation. See :hg:`help revsets` for details about
    selecting revisions.

    .. container:: verbose

       Examples:

       - A number of changes have been made.
         Revision 3 is no longer needed.

         Start history editing from revision 3::

           hg histedit -r 3

         An editor opens, containing the list of revisions,
         with specific actions specified::

           pick 5339bf82f0ca 3 Zworgle the foobar
           pick 8ef592ce7cc4 4 Bedazzle the zerlog
           pick 0a9639fcda9d 5 Morgify the cromulancy

         Additional information about the possible actions
         to take appears below the list of revisions.

         To remove revision 3 from the history,
         its action (at the beginning of the relevant line)
         is changed to 'drop'::

           drop 5339bf82f0ca 3 Zworgle the foobar
           pick 8ef592ce7cc4 4 Bedazzle the zerlog
           pick 0a9639fcda9d 5 Morgify the cromulancy

       - A number of changes have been made.
         Revisions 2 and 4 need to be swapped.

         Start history editing from revision 2::

           hg histedit -r 2

         An editor opens, containing the list of revisions,
         with specific actions specified::

           pick 252a1af424ad 2 Blorb a morgwazzle
           pick 5339bf82f0ca 3 Zworgle the foobar
           pick 8ef592ce7cc4 4 Bedazzle the zerlog

         To swap revisions 2 and 4, their lines are swapped
         in the editor::

           pick 8ef592ce7cc4 4 Bedazzle the zerlog
           pick 5339bf82f0ca 3 Zworgle the foobar
           pick 252a1af424ad 2 Blorb a morgwazzle

    Returns 0 on success, 1 if user intervention is required (not only
    for the intentional "edit" command, but also for resolving unexpected
    conflicts).
    """
    state = histeditstate(repo)
    try:
        state.wlock = repo.wlock()
        state.lock = repo.lock()
        _histedit(ui, repo, state, *freeargs, **opts)
    except error.Abort:
        if repo.vfs.exists('histedit-last-edit.txt'):
            ui.warn(_('warning: histedit rules saved '
                      'to: .hg/histedit-last-edit.txt\n'))
        raise
    finally:
        release(state.lock, state.wlock)

def _histedit(ui, repo, state, *freeargs, **opts):
    # TODO only abort if we try to histedit mq patches, not just
    # blanket if mq patches are applied somewhere
    mq = getattr(repo, 'mq', None)
    if mq and mq.applied:
        raise error.Abort(_('source has mq patches applied'))

    # basic argument incompatibility processing
    outg = opts.get('outgoing')
    cont = opts.get('continue')
    editplan = opts.get('edit_plan')
    abort = opts.get('abort')
    force = opts.get('force')
    rules = opts.get('commands', '')
    revs = opts.get('rev', [])
    goal = 'new' # This invocation goal, in new, continue, abort
    if force and not outg:
        raise error.Abort(_('--force only allowed with --outgoing'))
    if cont:
        if any((outg, abort, revs, freeargs, rules, editplan)):
            raise error.Abort(_('no arguments allowed with --continue'))
        goal = 'continue'
    elif abort:
        if any((outg, revs, freeargs, rules, editplan)):
            raise error.Abort(_('no arguments allowed with --abort'))
        goal = 'abort'
    elif editplan:
        if any((outg, revs, freeargs)):
            raise error.Abort(_('only --commands argument allowed with '
                                '--edit-plan'))
        goal = 'edit-plan'
    else:
        if os.path.exists(os.path.join(repo.path, 'histedit-state')):
            raise error.Abort(_('history edit already in progress, try '
                                '--continue or --abort'))
        if outg:
            if revs:
                raise error.Abort(_('no revisions allowed with --outgoing'))
            if len(freeargs) > 1:
                raise error.Abort(
                    _('only one repo argument allowed with --outgoing'))
        else:
            revs.extend(freeargs)
            if len(revs) == 0:
                defaultrev = destutil.desthistedit(ui, repo)
                if defaultrev is not None:
                    revs.append(defaultrev)

            if len(revs) != 1:
                raise error.Abort(
                    _('histedit requires exactly one ancestor revision'))


    replacements = []
    state.keep = opts.get('keep', False)
    supportsmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)

    # rebuild state
    if goal == 'continue':
        state.read()
        state = bootstrapcontinue(ui, state, opts)
    elif goal == 'edit-plan':
        state.read()
        if not rules:
            comment = editcomment % (node.short(state.parentctxnode),
                                     node.short(state.topmost))
            rules = ruleeditor(repo, ui, state.actions, comment)
        else:
            if rules == '-':
                f = sys.stdin
            else:
                f = open(rules)
            rules = f.read()
            f.close()
        actions = parserules(rules, state)
        ctxs = [repo[act.nodetoverify()] \
                for act in state.actions if act.nodetoverify()]
        verifyactions(actions, state, ctxs)
        state.actions = actions
        state.write()
        return
    elif goal == 'abort':
        try:
            state.read()
            tmpnodes, leafs = newnodestoabort(state)
            ui.debug('restore wc to old parent %s\n'
                     % node.short(state.topmost))

            # Recover our old commits if necessary
            if not state.topmost in repo and state.backupfile:
                backupfile = repo.join(state.backupfile)
                f = hg.openpath(ui, backupfile)
                gen = exchange.readbundle(ui, f, backupfile)
                tr = repo.transaction('histedit.abort')
                try:
                    if not isinstance(gen, bundle2.unbundle20):
                        gen.apply(repo, 'histedit', 'bundle:' + backupfile)
                    if isinstance(gen, bundle2.unbundle20):
                        bundle2.applybundle(repo, gen, tr,
                                            source='histedit',
                                            url='bundle:' + backupfile)
                    tr.close()
                finally:
                    tr.release()

                os.remove(backupfile)

            # check whether we should update away
            if repo.unfiltered().revs('parents() and (%n or %ln::)',
                                      state.parentctxnode, leafs | tmpnodes):
                hg.clean(repo, state.topmost)
            cleanupnode(ui, repo, 'created', tmpnodes)
1002 cleanupnode(ui, repo, 'temp', leafs)
1002 cleanupnode(ui, repo, 'temp', leafs)
1003 except Exception:
1003 except Exception:
1004 if state.inprogress():
1004 if state.inprogress():
1005 ui.warn(_('warning: encountered an exception during histedit '
1005 ui.warn(_('warning: encountered an exception during histedit '
1006 '--abort; the repository may not have been completely '
1006 '--abort; the repository may not have been completely '
1007 'cleaned up\n'))
1007 'cleaned up\n'))
1008 raise
1008 raise
1009 finally:
1009 finally:
1010 state.clear()
1010 state.clear()
1011 return
1011 return
1012 else:
1012 else:
1013 cmdutil.checkunfinished(repo)
1013 cmdutil.checkunfinished(repo)
1014 cmdutil.bailifchanged(repo)
1014 cmdutil.bailifchanged(repo)
1015
1015
1016 if repo.vfs.exists('histedit-last-edit.txt'):
1016 if repo.vfs.exists('histedit-last-edit.txt'):
1017 repo.vfs.unlink('histedit-last-edit.txt')
1017 repo.vfs.unlink('histedit-last-edit.txt')
1018 topmost, empty = repo.dirstate.parents()
1018 topmost, empty = repo.dirstate.parents()
1019 if outg:
1019 if outg:
1020 if freeargs:
1020 if freeargs:
1021 remote = freeargs[0]
1021 remote = freeargs[0]
1022 else:
1022 else:
1023 remote = None
1023 remote = None
1024 root = findoutgoing(ui, repo, remote, force, opts)
1024 root = findoutgoing(ui, repo, remote, force, opts)
1025 else:
1025 else:
1026 rr = list(repo.set('roots(%ld)', scmutil.revrange(repo, revs)))
1026 rr = list(repo.set('roots(%ld)', scmutil.revrange(repo, revs)))
1027 if len(rr) != 1:
1027 if len(rr) != 1:
1028 raise error.Abort(_('The specified revisions must have '
1028 raise error.Abort(_('The specified revisions must have '
1029 'exactly one common root'))
1029 'exactly one common root'))
1030 root = rr[0].node()
1030 root = rr[0].node()
1031
1031
1032 revs = between(repo, root, topmost, state.keep)
1032 revs = between(repo, root, topmost, state.keep)
1033 if not revs:
1033 if not revs:
1034 raise error.Abort(_('%s is not an ancestor of working directory') %
1034 raise error.Abort(_('%s is not an ancestor of working directory') %
1035 node.short(root))
1035 node.short(root))
1036
1036
1037 ctxs = [repo[r] for r in revs]
1037 ctxs = [repo[r] for r in revs]
1038 if not rules:
1038 if not rules:
1039 comment = editcomment % (node.short(root), node.short(topmost))
1039 comment = editcomment % (node.short(root), node.short(topmost))
1040 actions = [pick(state, r) for r in revs]
1040 actions = [pick(state, r) for r in revs]
1041 rules = ruleeditor(repo, ui, actions, comment)
1041 rules = ruleeditor(repo, ui, actions, comment)
1042 else:
1042 else:
1043 if rules == '-':
1043 if rules == '-':
1044 f = sys.stdin
1044 f = sys.stdin
1045 else:
1045 else:
1046 f = open(rules)
1046 f = open(rules)
1047 rules = f.read()
1047 rules = f.read()
1048 f.close()
1048 f.close()
1049 actions = parserules(rules, state)
1049 actions = parserules(rules, state)
1050 verifyactions(actions, state, ctxs)
1050 verifyactions(actions, state, ctxs)
1051
1051
1052 parentctxnode = repo[root].parents()[0].node()
1052 parentctxnode = repo[root].parents()[0].node()
1053
1053
1054 state.parentctxnode = parentctxnode
1054 state.parentctxnode = parentctxnode
1055 state.actions = actions
1055 state.actions = actions
1056 state.topmost = topmost
1056 state.topmost = topmost
1057 state.replacements = replacements
1057 state.replacements = replacements
1058
1058
1059 # Create a backup so we can always abort completely.
1059 # Create a backup so we can always abort completely.
1060 backupfile = None
1060 backupfile = None
1061 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1061 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1062 backupfile = repair._bundle(repo, [parentctxnode], [topmost], root,
1062 backupfile = repair._bundle(repo, [parentctxnode], [topmost], root,
1063 'histedit')
1063 'histedit')
1064 state.backupfile = backupfile
1064 state.backupfile = backupfile
1065
1065
1066 # preprocess rules so that we can hide inner folds from the user
1066 # preprocess rules so that we can hide inner folds from the user
1067 # and only show one editor
1067 # and only show one editor
1068 actions = state.actions[:]
1068 actions = state.actions[:]
1069 for idx, (action, nextact) in enumerate(
1069 for idx, (action, nextact) in enumerate(
1070 zip(actions, actions[1:] + [None])):
1070 zip(actions, actions[1:] + [None])):
1071 if action.verb == 'fold' and nextact and nextact.verb == 'fold':
1071 if action.verb == 'fold' and nextact and nextact.verb == 'fold':
1072 state.actions[idx].__class__ = _multifold
1072 state.actions[idx].__class__ = _multifold
1073
1073
1074 while state.actions:
1074 while state.actions:
1075 state.write()
1075 state.write()
1076 actobj = state.actions.pop(0)
1076 actobj = state.actions.pop(0)
1077 ui.debug('histedit: processing %s %s\n' % (actobj.verb,\
1077 ui.debug('histedit: processing %s %s\n' % (actobj.verb,\
1078 actobj.torule()))
1078 actobj.torule()))
1079 parentctx, replacement_ = actobj.run()
1079 parentctx, replacement_ = actobj.run()
1080 state.parentctxnode = parentctx.node()
1080 state.parentctxnode = parentctx.node()
1081 state.replacements.extend(replacement_)
1081 state.replacements.extend(replacement_)
1082 state.write()
1082 state.write()
1083
1083
1084 hg.update(repo, state.parentctxnode)
1084 hg.update(repo, state.parentctxnode)
1085
1085
1086 mapping, tmpnodes, created, ntm = processreplacement(state)
1086 mapping, tmpnodes, created, ntm = processreplacement(state)
1087 if mapping:
1087 if mapping:
1088 for prec, succs in mapping.iteritems():
1088 for prec, succs in mapping.iteritems():
1089 if not succs:
1089 if not succs:
1090 ui.debug('histedit: %s is dropped\n' % node.short(prec))
1090 ui.debug('histedit: %s is dropped\n' % node.short(prec))
1091 else:
1091 else:
1092 ui.debug('histedit: %s is replaced by %s\n' % (
1092 ui.debug('histedit: %s is replaced by %s\n' % (
1093 node.short(prec), node.short(succs[0])))
1093 node.short(prec), node.short(succs[0])))
1094 if len(succs) > 1:
1094 if len(succs) > 1:
1095 m = 'histedit: %s'
1095 m = 'histedit: %s'
1096 for n in succs[1:]:
1096 for n in succs[1:]:
1097 ui.debug(m % node.short(n))
1097 ui.debug(m % node.short(n))
1098
1098
1099 if supportsmarkers:
1099 if supportsmarkers:
1100 # Only create markers if the temp nodes weren't already removed.
1100 # Only create markers if the temp nodes weren't already removed.
1101 obsolete.createmarkers(repo, ((repo[t],()) for t in sorted(tmpnodes)
1101 obsolete.createmarkers(repo, ((repo[t],()) for t in sorted(tmpnodes)
1102 if t in repo))
1102 if t in repo))
1103 else:
1103 else:
1104 cleanupnode(ui, repo, 'temp', tmpnodes)
1104 cleanupnode(ui, repo, 'temp', tmpnodes)
1105
1105
1106 if not state.keep:
1106 if not state.keep:
1107 if mapping:
1107 if mapping:
1108 movebookmarks(ui, repo, mapping, state.topmost, ntm)
1108 movebookmarks(ui, repo, mapping, state.topmost, ntm)
1109 # TODO update mq state
1109 # TODO update mq state
1110 if supportsmarkers:
1110 if supportsmarkers:
1111 markers = []
1111 markers = []
1112 # sort by revision number because it sound "right"
1112 # sort by revision number because it sound "right"
1113 for prec in sorted(mapping, key=repo.changelog.rev):
1113 for prec in sorted(mapping, key=repo.changelog.rev):
1114 succs = mapping[prec]
1114 succs = mapping[prec]
1115 markers.append((repo[prec],
1115 markers.append((repo[prec],
1116 tuple(repo[s] for s in succs)))
1116 tuple(repo[s] for s in succs)))
1117 if markers:
1117 if markers:
1118 obsolete.createmarkers(repo, markers)
1118 obsolete.createmarkers(repo, markers)
1119 else:
1119 else:
1120 cleanupnode(ui, repo, 'replaced', mapping)
1120 cleanupnode(ui, repo, 'replaced', mapping)
1121
1121
1122 state.clear()
1122 state.clear()
1123 if os.path.exists(repo.sjoin('undo')):
1123 if os.path.exists(repo.sjoin('undo')):
1124 os.unlink(repo.sjoin('undo'))
1124 os.unlink(repo.sjoin('undo'))
1125
1125
1126 def bootstrapcontinue(ui, state, opts):
1126 def bootstrapcontinue(ui, state, opts):
1127 repo = state.repo
1127 repo = state.repo
1128 if state.actions:
1128 if state.actions:
1129 actobj = state.actions.pop(0)
1129 actobj = state.actions.pop(0)
1130
1130
1131 if _isdirtywc(repo):
1131 if _isdirtywc(repo):
1132 actobj.continuedirty()
1132 actobj.continuedirty()
1133 if _isdirtywc(repo):
1133 if _isdirtywc(repo):
1134 abortdirty()
1134 abortdirty()
1135
1135
1136 parentctx, replacements = actobj.continueclean()
1136 parentctx, replacements = actobj.continueclean()
1137
1137
1138 state.parentctxnode = parentctx.node()
1138 state.parentctxnode = parentctx.node()
1139 state.replacements.extend(replacements)
1139 state.replacements.extend(replacements)
1140
1140
1141 return state
1141 return state
1142
1142
1143 def between(repo, old, new, keep):
1143 def between(repo, old, new, keep):
1144 """select and validate the set of revision to edit
1144 """select and validate the set of revision to edit
1145
1145
1146 When keep is false, the specified set can't have children."""
1146 When keep is false, the specified set can't have children."""
1147 ctxs = list(repo.set('%n::%n', old, new))
1147 ctxs = list(repo.set('%n::%n', old, new))
1148 if ctxs and not keep:
1148 if ctxs and not keep:
1149 if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
1149 if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
1150 repo.revs('(%ld::) - (%ld)', ctxs, ctxs)):
1150 repo.revs('(%ld::) - (%ld)', ctxs, ctxs)):
1151 raise error.Abort(_('cannot edit history that would orphan nodes'))
1151 raise error.Abort(_('cannot edit history that would orphan nodes'))
1152 if repo.revs('(%ld) and merge()', ctxs):
1152 if repo.revs('(%ld) and merge()', ctxs):
1153 raise error.Abort(_('cannot edit history that contains merges'))
1153 raise error.Abort(_('cannot edit history that contains merges'))
1154 root = ctxs[0] # list is already sorted by repo.set
1154 root = ctxs[0] # list is already sorted by repo.set
1155 if not root.mutable():
1155 if not root.mutable():
1156 raise error.Abort(_('cannot edit public changeset: %s') % root,
1156 raise error.Abort(_('cannot edit public changeset: %s') % root,
1157 hint=_('see "hg help phases" for details'))
1157 hint=_('see "hg help phases" for details'))
1158 return [c.node() for c in ctxs]
1158 return [c.node() for c in ctxs]
1159
1159
1160 def ruleeditor(repo, ui, actions, editcomment=""):
1160 def ruleeditor(repo, ui, actions, editcomment=""):
1161 """open an editor to edit rules
1161 """open an editor to edit rules
1162
1162
1163 rules are in the format [ [act, ctx], ...] like in state.rules
1163 rules are in the format [ [act, ctx], ...] like in state.rules
1164 """
1164 """
1165 rules = '\n'.join([act.torule() for act in actions])
1165 rules = '\n'.join([act.torule() for act in actions])
1166 rules += '\n\n'
1166 rules += '\n\n'
1167 rules += editcomment
1167 rules += editcomment
1168 rules = ui.edit(rules, ui.username(), {'prefix': 'histedit'})
1168 rules = ui.edit(rules, ui.username(), {'prefix': 'histedit'})
1169
1169
1170 # Save edit rules in .hg/histedit-last-edit.txt in case
1170 # Save edit rules in .hg/histedit-last-edit.txt in case
1171 # the user needs to ask for help after something
1171 # the user needs to ask for help after something
1172 # surprising happens.
1172 # surprising happens.
1173 f = open(repo.join('histedit-last-edit.txt'), 'w')
1173 f = open(repo.join('histedit-last-edit.txt'), 'w')
1174 f.write(rules)
1174 f.write(rules)
1175 f.close()
1175 f.close()
1176
1176
1177 return rules
1177 return rules
1178
1178
1179 def parserules(rules, state):
1179 def parserules(rules, state):
1180 """Read the histedit rules string and return list of action objects """
1180 """Read the histedit rules string and return list of action objects """
1181 rules = [l for l in (r.strip() for r in rules.splitlines())
1181 rules = [l for l in (r.strip() for r in rules.splitlines())
1182 if l and not l.startswith('#')]
1182 if l and not l.startswith('#')]
1183 actions = []
1183 actions = []
1184 for r in rules:
1184 for r in rules:
1185 if ' ' not in r:
1185 if ' ' not in r:
1186 raise error.Abort(_('malformed line "%s"') % r)
1186 raise error.Abort(_('malformed line "%s"') % r)
1187 verb, rest = r.split(' ', 1)
1187 verb, rest = r.split(' ', 1)
1188
1188
1189 if verb not in actiontable:
1189 if verb not in actiontable:
1190 raise error.Abort(_('unknown action "%s"') % verb)
1190 raise error.Abort(_('unknown action "%s"') % verb)
1191
1191
1192 action = actiontable[verb].fromrule(state, rest)
1192 action = actiontable[verb].fromrule(state, rest)
1193 actions.append(action)
1193 actions.append(action)
1194 return actions
1194 return actions
1195
1195
1196 def verifyactions(actions, state, ctxs):
1196 def verifyactions(actions, state, ctxs):
1197 """Verify that there exists exactly one action per given changeset and
1197 """Verify that there exists exactly one action per given changeset and
1198 other constraints.
1198 other constraints.
1199
1199
1200 Will abort if there are to many or too few rules, a malformed rule,
1200 Will abort if there are to many or too few rules, a malformed rule,
1201 or a rule on a changeset outside of the user-given range.
1201 or a rule on a changeset outside of the user-given range.
1202 """
1202 """
1203 expected = set(c.hex() for c in ctxs)
1203 expected = set(c.hex() for c in ctxs)
1204 seen = set()
1204 seen = set()
1205 for action in actions:
1205 for action in actions:
1206 action.verify()
1206 action.verify()
1207 constraints = action.constraints()
1207 constraints = action.constraints()
1208 for constraint in constraints:
1208 for constraint in constraints:
1209 if constraint not in _constraints.known():
1209 if constraint not in _constraints.known():
1210 raise error.Abort(_('unknown constraint "%s"') % constraint)
1210 raise error.Abort(_('unknown constraint "%s"') % constraint)
1211
1211
1212 nodetoverify = action.nodetoverify()
1212 nodetoverify = action.nodetoverify()
1213 if nodetoverify is not None:
1213 if nodetoverify is not None:
1214 ha = node.hex(nodetoverify)
1214 ha = node.hex(nodetoverify)
1215 if _constraints.noother in constraints and ha not in expected:
1215 if _constraints.noother in constraints and ha not in expected:
1216 raise error.Abort(
1216 raise error.Abort(
1217 _('may not use "%s" with changesets '
1217 _('may not use "%s" with changesets '
1218 'other than the ones listed') % action.verb)
1218 'other than the ones listed') % action.verb)
1219 if _constraints.forceother in constraints and ha in expected:
1219 if _constraints.forceother in constraints and ha in expected:
1220 raise error.Abort(
1220 raise error.Abort(
1221 _('may not use "%s" with changesets '
1221 _('may not use "%s" with changesets '
1222 'within the edited list') % action.verb)
1222 'within the edited list') % action.verb)
1223 if _constraints.noduplicates in constraints and ha in seen:
1223 if _constraints.noduplicates in constraints and ha in seen:
1224 raise error.Abort(_('duplicated command for changeset %s') %
1224 raise error.Abort(_('duplicated command for changeset %s') %
1225 ha[:12])
1225 ha[:12])
1226 seen.add(ha)
1226 seen.add(ha)
1227 missing = sorted(expected - seen) # sort to stabilize output
1227 missing = sorted(expected - seen) # sort to stabilize output
1228 if missing:
1228 if missing:
1229 raise error.Abort(_('missing rules for changeset %s') %
1229 raise error.Abort(_('missing rules for changeset %s') %
1230 missing[0][:12],
1230 missing[0][:12],
1231 hint=_('use "drop %s" to discard the change') % missing[0][:12])
1231 hint=_('use "drop %s" to discard the change') % missing[0][:12])
1232
1232
1233 def newnodestoabort(state):
1233 def newnodestoabort(state):
1234 """process the list of replacements to return
1234 """process the list of replacements to return
1235
1235
1236 1) the list of final node
1236 1) the list of final node
1237 2) the list of temporary node
1237 2) the list of temporary node
1238
1238
1239 This meant to be used on abort as less data are required in this case.
1239 This meant to be used on abort as less data are required in this case.
1240 """
1240 """
1241 replacements = state.replacements
1241 replacements = state.replacements
1242 allsuccs = set()
1242 allsuccs = set()
1243 replaced = set()
1243 replaced = set()
1244 for rep in replacements:
1244 for rep in replacements:
1245 allsuccs.update(rep[1])
1245 allsuccs.update(rep[1])
1246 replaced.add(rep[0])
1246 replaced.add(rep[0])
1247 newnodes = allsuccs - replaced
1247 newnodes = allsuccs - replaced
1248 tmpnodes = allsuccs & replaced
1248 tmpnodes = allsuccs & replaced
1249 return newnodes, tmpnodes
1249 return newnodes, tmpnodes
1250
1250
1251
1251
1252 def processreplacement(state):
1252 def processreplacement(state):
1253 """process the list of replacements to return
1253 """process the list of replacements to return
1254
1254
1255 1) the final mapping between original and created nodes
1255 1) the final mapping between original and created nodes
1256 2) the list of temporary node created by histedit
1256 2) the list of temporary node created by histedit
1257 3) the list of new commit created by histedit"""
1257 3) the list of new commit created by histedit"""
1258 replacements = state.replacements
1258 replacements = state.replacements
1259 allsuccs = set()
1259 allsuccs = set()
1260 replaced = set()
1260 replaced = set()
1261 fullmapping = {}
1261 fullmapping = {}
1262 # initialize basic set
1262 # initialize basic set
1263 # fullmapping records all operations recorded in replacement
1263 # fullmapping records all operations recorded in replacement
1264 for rep in replacements:
1264 for rep in replacements:
1265 allsuccs.update(rep[1])
1265 allsuccs.update(rep[1])
1266 replaced.add(rep[0])
1266 replaced.add(rep[0])
1267 fullmapping.setdefault(rep[0], set()).update(rep[1])
1267 fullmapping.setdefault(rep[0], set()).update(rep[1])
1268 new = allsuccs - replaced
1268 new = allsuccs - replaced
1269 tmpnodes = allsuccs & replaced
1269 tmpnodes = allsuccs & replaced
1270 # Reduce content fullmapping into direct relation between original nodes
1270 # Reduce content fullmapping into direct relation between original nodes
1271 # and final node created during history edition
1271 # and final node created during history edition
1272 # Dropped changeset are replaced by an empty list
1272 # Dropped changeset are replaced by an empty list
1273 toproceed = set(fullmapping)
1273 toproceed = set(fullmapping)
1274 final = {}
1274 final = {}
1275 while toproceed:
1275 while toproceed:
1276 for x in list(toproceed):
1276 for x in list(toproceed):
1277 succs = fullmapping[x]
1277 succs = fullmapping[x]
1278 for s in list(succs):
1278 for s in list(succs):
1279 if s in toproceed:
1279 if s in toproceed:
1280 # non final node with unknown closure
1280 # non final node with unknown closure
1281 # We can't process this now
1281 # We can't process this now
1282 break
1282 break
1283 elif s in final:
1283 elif s in final:
1284 # non final node, replace with closure
1284 # non final node, replace with closure
1285 succs.remove(s)
1285 succs.remove(s)
1286 succs.update(final[s])
1286 succs.update(final[s])
1287 else:
1287 else:
1288 final[x] = succs
1288 final[x] = succs
1289 toproceed.remove(x)
1289 toproceed.remove(x)
1290 # remove tmpnodes from final mapping
1290 # remove tmpnodes from final mapping
1291 for n in tmpnodes:
1291 for n in tmpnodes:
1292 del final[n]
1292 del final[n]
1293 # we expect all changes involved in final to exist in the repo
1293 # we expect all changes involved in final to exist in the repo
1294 # turn `final` into list (topologically sorted)
1294 # turn `final` into list (topologically sorted)
1295 nm = state.repo.changelog.nodemap
1295 nm = state.repo.changelog.nodemap
1296 for prec, succs in final.items():
1296 for prec, succs in final.items():
1297 final[prec] = sorted(succs, key=nm.get)
1297 final[prec] = sorted(succs, key=nm.get)
1298
1298
1299 # computed topmost element (necessary for bookmark)
1299 # computed topmost element (necessary for bookmark)
1300 if new:
1300 if new:
1301 newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
1301 newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
1302 elif not final:
1302 elif not final:
1303 # Nothing rewritten at all. we won't need `newtopmost`
1303 # Nothing rewritten at all. we won't need `newtopmost`
1304 # It is the same as `oldtopmost` and `processreplacement` know it
1304 # It is the same as `oldtopmost` and `processreplacement` know it
1305 newtopmost = None
1305 newtopmost = None
1306 else:
1306 else:
1307 # every body died. The newtopmost is the parent of the root.
1307 # every body died. The newtopmost is the parent of the root.
1308 r = state.repo.changelog.rev
1308 r = state.repo.changelog.rev
1309 newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()
1309 newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()
1310
1310
1311 return final, tmpnodes, new, newtopmost
1311 return final, tmpnodes, new, newtopmost
1312
1312
1313 def movebookmarks(ui, repo, mapping, oldtopmost, newtopmost):
1313 def movebookmarks(ui, repo, mapping, oldtopmost, newtopmost):
1314 """Move bookmark from old to newly created node"""
1314 """Move bookmark from old to newly created node"""
1315 if not mapping:
1315 if not mapping:
1316 # if nothing got rewritten there is not purpose for this function
1316 # if nothing got rewritten there is not purpose for this function
1317 return
1317 return
1318 moves = []
1318 moves = []
1319 for bk, old in sorted(repo._bookmarks.iteritems()):
1319 for bk, old in sorted(repo._bookmarks.iteritems()):
1320 if old == oldtopmost:
1320 if old == oldtopmost:
1321 # special case ensure bookmark stay on tip.
1321 # special case ensure bookmark stay on tip.
1322 #
1322 #
1323 # This is arguably a feature and we may only want that for the
1323 # This is arguably a feature and we may only want that for the
1324 # active bookmark. But the behavior is kept compatible with the old
1324 # active bookmark. But the behavior is kept compatible with the old
1325 # version for now.
1325 # version for now.
1326 moves.append((bk, newtopmost))
1326 moves.append((bk, newtopmost))
1327 continue
1327 continue
1328 base = old
1328 base = old
1329 new = mapping.get(base, None)
1329 new = mapping.get(base, None)
1330 if new is None:
1330 if new is None:
1331 continue
1331 continue
1332 while not new:
1332 while not new:
1333 # base is killed, trying with parent
1333 # base is killed, trying with parent
1334 base = repo[base].p1().node()
1334 base = repo[base].p1().node()
1335 new = mapping.get(base, (base,))
1335 new = mapping.get(base, (base,))
1336 # nothing to move
1336 # nothing to move
1337 moves.append((bk, new[-1]))
1337 moves.append((bk, new[-1]))
1338 if moves:
1338 if moves:
1339 lock = tr = None
1339 lock = tr = None
1340 try:
1340 try:
1341 lock = repo.lock()
1341 lock = repo.lock()
1342 tr = repo.transaction('histedit')
1342 tr = repo.transaction('histedit')
1343 marks = repo._bookmarks
1343 marks = repo._bookmarks
1344 for mark, new in moves:
1344 for mark, new in moves:
1345 old = marks[mark]
1345 old = marks[mark]
1346 ui.note(_('histedit: moving bookmarks %s from %s to %s\n')
1346 ui.note(_('histedit: moving bookmarks %s from %s to %s\n')
1347 % (mark, node.short(old), node.short(new)))
1347 % (mark, node.short(old), node.short(new)))
1348 marks[mark] = new
1348 marks[mark] = new
1349 marks.recordchange(tr)
1349 marks.recordchange(tr)
1350 tr.close()
1350 tr.close()
1351 finally:
1351 finally:
1352 release(tr, lock)
1352 release(tr, lock)
1353
1353
1354 def cleanupnode(ui, repo, name, nodes):
1354 def cleanupnode(ui, repo, name, nodes):
1355 """strip a group of nodes from the repository
1355 """strip a group of nodes from the repository
1356
1356
1357 The set of node to strip may contains unknown nodes."""
1357 The set of node to strip may contains unknown nodes."""
1358 ui.debug('should strip %s nodes %s\n' %
1358 ui.debug('should strip %s nodes %s\n' %
1359 (name, ', '.join([node.short(n) for n in nodes])))
1359 (name, ', '.join([node.short(n) for n in nodes])))
1360 lock = None
1360 lock = None
1361 try:
1361 try:
1362 lock = repo.lock()
1362 lock = repo.lock()
1363 # do not let filtering get in the way of the cleanse
1363 # do not let filtering get in the way of the cleanse
1364 # we should probably get rid of obsolescence marker created during the
1364 # we should probably get rid of obsolescence marker created during the
1365 # histedit, but we currently do not have such information.
1365 # histedit, but we currently do not have such information.
1366 repo = repo.unfiltered()
1366 repo = repo.unfiltered()
1367 # Find all nodes that need to be stripped
1367 # Find all nodes that need to be stripped
1368 # (we use %lr instead of %ln to silently ignore unknown items)
1368 # (we use %lr instead of %ln to silently ignore unknown items)
1369 nm = repo.changelog.nodemap
1369 nm = repo.changelog.nodemap
1370 nodes = sorted(n for n in nodes if n in nm)
1370 nodes = sorted(n for n in nodes if n in nm)
1371 roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
1371 roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
1372 for c in roots:
1372 for c in roots:
1373 # We should process node in reverse order to strip tip most first.
1373 # We should process node in reverse order to strip tip most first.
1374 # but this trigger a bug in changegroup hook.
1374 # but this trigger a bug in changegroup hook.
1375 # This would reduce bundle overhead
1375 # This would reduce bundle overhead
1376 repair.strip(ui, repo, c)
1376 repair.strip(ui, repo, c)
1377 finally:
1377 finally:
1378 release(lock)
1378 release(lock)
1379
1379
1380 def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
1380 def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
1381 if isinstance(nodelist, str):
1381 if isinstance(nodelist, str):
1382 nodelist = [nodelist]
1382 nodelist = [nodelist]
1383 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
1383 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
1384 state = histeditstate(repo)
1384 state = histeditstate(repo)
1385 state.read()
1385 state.read()
1386 histedit_nodes = set([action.nodetoverify() for action
1386 histedit_nodes = set([action.nodetoverify() for action
1387 in state.actions if action.nodetoverify()])
1387 in state.actions if action.nodetoverify()])
1388 strip_nodes = set([repo[n].node() for n in nodelist])
1388 strip_nodes = set([repo[n].node() for n in nodelist])
1389 common_nodes = histedit_nodes & strip_nodes
1389 common_nodes = histedit_nodes & strip_nodes
1390 if common_nodes:
1390 if common_nodes:
1391 raise error.Abort(_("histedit in progress, can't strip %s")
1391 raise error.Abort(_("histedit in progress, can't strip %s")
1392 % ', '.join(node.short(x) for x in common_nodes))
1392 % ', '.join(node.short(x) for x in common_nodes))
1393 return orig(ui, repo, nodelist, *args, **kwargs)
1393 return orig(ui, repo, nodelist, *args, **kwargs)
1394
1394
1395 extensions.wrapfunction(repair, 'strip', stripwrapper)
1395 extensions.wrapfunction(repair, 'strip', stripwrapper)
1396
1396
1397 def summaryhook(ui, repo):
1397 def summaryhook(ui, repo):
1398 if not os.path.exists(repo.join('histedit-state')):
1398 if not os.path.exists(repo.join('histedit-state')):
1399 return
1399 return
1400 state = histeditstate(repo)
1400 state = histeditstate(repo)
1401 state.read()
1401 state.read()
1402 if state.actions:
1402 if state.actions:
1403 # i18n: column positioning for "hg summary"
1403 # i18n: column positioning for "hg summary"
1404 ui.write(_('hist: %s (histedit --continue)\n') %
1404 ui.write(_('hist: %s (histedit --continue)\n') %
1405 (ui.label(_('%d remaining'), 'histedit.remaining') %
1405 (ui.label(_('%d remaining'), 'histedit.remaining') %
1406 len(state.actions)))
1406 len(state.actions)))
1407
1407
1408 def extsetup(ui):
1408 def extsetup(ui):
1409 cmdutil.summaryhooks.add('histedit', summaryhook)
1409 cmdutil.summaryhooks.add('histedit', summaryhook)
1410 cmdutil.unfinishedstates.append(
1410 cmdutil.unfinishedstates.append(
1411 ['histedit-state', False, True, _('histedit in progress'),
1411 ['histedit-state', False, True, _('histedit in progress'),
1412 _("use 'hg histedit --continue' or 'hg histedit --abort'")])
1412 _("use 'hg histedit --continue' or 'hg histedit --abort'")])
1413 if ui.configbool("experimental", "histeditng"):
1413 if ui.configbool("experimental", "histeditng"):
1414 globals()['base'] = addhisteditaction(['base', 'b'])(base)
1414 globals()['base'] = addhisteditaction(['base', 'b'])(base)
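
As an aside, the tokenizing done by `parserules` above (skip blanks and `#` comments, split each rule into a verb and the rest, reject unknown verbs) can be sketched standalone. This is an illustrative sketch, not part of histedit.py: the `KNOWN_VERBS` set is an assumed stand-in for histedit's real `actiontable`, and plain `ValueError` replaces `error.Abort`.

```python
# Stand-in for histedit's actiontable keys (assumed subset for illustration).
KNOWN_VERBS = {'pick', 'drop', 'fold', 'roll', 'edit', 'mess'}

def parse_rules(text):
    """Return (verb, rest) pairs from a histedit-style rules string,
    skipping blank lines and '#' comments."""
    lines = [l for l in (r.strip() for r in text.splitlines())
             if l and not l.startswith('#')]
    actions = []
    for r in lines:
        if ' ' not in r:
            raise ValueError('malformed line "%s"' % r)
        verb, rest = r.split(' ', 1)
        if verb not in KNOWN_VERBS:
            raise ValueError('unknown action "%s"' % verb)
        actions.append((verb, rest))
    return actions
```

For example, feeding it the editor buffer from the help text at the top of the file yields one `('pick', 'c561b4e977df Add beta')` pair per rule line.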
@@ -1,1430 +1,1433 b''
# Copyright 2009-2010 Gregory P. Ward
# Copyright 2009-2010 Intelerad Medical Systems Incorporated
# Copyright 2010-2011 Fog Creek Software
# Copyright 2010-2011 Unity Technologies
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''Overridden Mercurial commands and functions for the largefiles extension'''

import os
import copy

from mercurial import hg, util, cmdutil, scmutil, match as match_, \
    archival, pathutil, revset, error
from mercurial.i18n import _

import lfutil
import lfcommands
import basestore

# -- Utility functions: commonly/repeatedly needed functionality ---------------

def composelargefilematcher(match, manifest):
    '''create a matcher that matches only the largefiles in the original
    matcher'''
    m = copy.copy(match)
    lfile = lambda f: lfutil.standin(f) in manifest
    m._files = filter(lfile, m._files)
    m._fileroots = set(m._files)
    m._always = False
    origmatchfn = m.matchfn
    m.matchfn = lambda f: lfile(f) and origmatchfn(f)
    return m

def composenormalfilematcher(match, manifest, exclude=None):
    excluded = set()
    if exclude is not None:
        excluded.update(exclude)

    m = copy.copy(match)
    notlfile = lambda f: not (lfutil.isstandin(f) or lfutil.standin(f) in
                              manifest or f in excluded)
    m._files = filter(notlfile, m._files)
    m._fileroots = set(m._files)
    m._always = False
    origmatchfn = m.matchfn
    m.matchfn = lambda f: notlfile(f) and origmatchfn(f)
    return m

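The two compose* helpers above share one pattern: copy the matcher, narrow its static file list with a predicate, and wrap `matchfn` so dynamic queries are filtered the same way. A minimal self-contained sketch with a toy `Matcher` class (hypothetical, standing in for Mercurial's real match object):

```python
import copy

class Matcher(object):
    """Toy stand-in for Mercurial's match object (hypothetical)."""
    def __init__(self, files):
        self._files = list(files)
        self.matchfn = lambda f: f in self._files

def narrow(match, predicate):
    # Copy the matcher, keep only the files passing the predicate, and
    # wrap the original matchfn so both paths apply the same filter.
    m = copy.copy(match)
    m._files = [f for f in m._files if predicate(f)]
    origmatchfn = m.matchfn
    m.matchfn = lambda f: predicate(f) and origmatchfn(f)
    return m

m = Matcher(['a.txt', 'big.bin'])
large = narrow(m, lambda f: f.endswith('.bin'))
print(large._files)                                   # narrowed file list
print(large.matchfn('big.bin'), large.matchfn('a.txt'))
```

The shallow copy matters: the original matcher keeps its full `_files` and `matchfn`, so callers holding the unwrapped matcher are unaffected.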
def installnormalfilesmatchfn(manifest):
    '''installmatchfn with a matchfn that ignores all largefiles'''
    def overridematch(ctx, pats=(), opts=None, globbed=False,
                      default='relpath', badfn=None):
        if opts is None:
            opts = {}
        match = oldmatch(ctx, pats, opts, globbed, default, badfn=badfn)
        return composenormalfilematcher(match, manifest)
    oldmatch = installmatchfn(overridematch)

def installmatchfn(f):
    '''monkey patch the scmutil module with a custom match function.
    Warning: it is monkey patching the _module_ on runtime! Not thread safe!'''
    oldmatch = scmutil.match
    setattr(f, 'oldmatch', oldmatch)
    scmutil.match = f
    return oldmatch

def restorematchfn():
    '''restores scmutil.match to what it was before installmatchfn
    was called. no-op if scmutil.match is its original function.

    Note that n calls to installmatchfn will require n calls to
    restore the original matchfn.'''
    scmutil.match = getattr(scmutil.match, 'oldmatch')

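`installmatchfn`/`restorematchfn` use a stash-on-the-replacement idiom: the old function is stored as an attribute of its replacement, so repeated installs form a chain and each restore pops one link. A self-contained sketch with a toy module object instead of `scmutil` (names are illustrative, not Mercurial's):

```python
class mod(object):
    """Toy stand-in for the scmutil module."""
    @staticmethod
    def match(f):
        return 'original:' + f

def install(f):
    # Stash the currently installed function on the replacement itself,
    # then swap the replacement in.
    oldmatch = mod.match
    setattr(f, 'oldmatch', oldmatch)
    mod.match = f
    return oldmatch

def restore():
    # Pop one level of the chain: each install needs a matching restore.
    mod.match = getattr(mod.match, 'oldmatch')

def override(f):
    return 'override:' + f

install(override)
print(mod.match('x'))   # override:x
restore()
print(mod.match('x'))   # original:x
```

Because the old function rides along on the new one, no extra global bookkeeping is needed, but as the docstring above warns, mutating a module at runtime like this is not thread safe.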
def installmatchandpatsfn(f):
    oldmatchandpats = scmutil.matchandpats
    setattr(f, 'oldmatchandpats', oldmatchandpats)
    scmutil.matchandpats = f
    return oldmatchandpats

def restorematchandpatsfn():
    '''restores scmutil.matchandpats to what it was before
    installmatchandpatsfn was called. No-op if scmutil.matchandpats
    is its original function.

    Note that n calls to installmatchandpatsfn will require n calls
    to restore the original matchfn.'''
    scmutil.matchandpats = getattr(scmutil.matchandpats, 'oldmatchandpats',
                                   scmutil.matchandpats)

def addlargefiles(ui, repo, isaddremove, matcher, **opts):
    large = opts.get('large')
    lfsize = lfutil.getminsize(
        ui, lfutil.islfilesrepo(repo), opts.get('lfsize'))

    lfmatcher = None
    if lfutil.islfilesrepo(repo):
        lfpats = ui.configlist(lfutil.longname, 'patterns', default=[])
        if lfpats:
            lfmatcher = match_.match(repo.root, '', list(lfpats))

    lfnames = []
    m = matcher

    wctx = repo[None]
    for f in repo.walk(match_.badmatch(m, lambda x, y: None)):
        exact = m.exact(f)
        lfile = lfutil.standin(f) in wctx
        nfile = f in wctx
        exists = lfile or nfile

        # addremove in core gets fancy with the name, add doesn't
        if isaddremove:
            name = m.uipath(f)
        else:
            name = m.rel(f)

        # Don't warn the user when they attempt to add a normal tracked file.
        # The normal add code will do that for us.
        if exact and exists:
            if lfile:
                ui.warn(_('%s already a largefile\n') % name)
            continue

        if (exact or not exists) and not lfutil.isstandin(f):
            # In case the file was removed previously, but not committed
            # (issue3507)
            if not repo.wvfs.exists(f):
                continue

            abovemin = (lfsize and
                        repo.wvfs.lstat(f).st_size >= lfsize * 1024 * 1024)
            if large or abovemin or (lfmatcher and lfmatcher(f)):
                lfnames.append(f)
                if ui.verbose or not exact:
                    ui.status(_('adding %s as a largefile\n') % name)

    bad = []

    # Need to lock, otherwise there could be a race condition between
    # when standins are created and added to the repo.
    wlock = repo.wlock()
    try:
        if not opts.get('dry_run'):
            standins = []
            lfdirstate = lfutil.openlfdirstate(ui, repo)
            for f in lfnames:
                standinname = lfutil.standin(f)
                lfutil.writestandin(repo, standinname, hash='',
                                    executable=lfutil.getexecutable(repo.wjoin(f)))
                standins.append(standinname)
                if lfdirstate[f] == 'r':
                    lfdirstate.normallookup(f)
                else:
                    lfdirstate.add(f)
            lfdirstate.write()
            bad += [lfutil.splitstandin(f)
                    for f in repo[None].add(standins)
                    if f in m.files()]

        added = [f for f in lfnames if f not in bad]
    finally:
        wlock.release()
    return added, bad

def removelargefiles(ui, repo, isaddremove, matcher, **opts):
    after = opts.get('after')
    m = composelargefilematcher(matcher, repo[None].manifest())
    try:
        repo.lfstatus = True
        s = repo.status(match=m, clean=not isaddremove)
    finally:
        repo.lfstatus = False
    manifest = repo[None].manifest()
    modified, added, deleted, clean = [[f for f in list
                                        if lfutil.standin(f) in manifest]
                                       for list in (s.modified, s.added,
                                                    s.deleted, s.clean)]

    def warn(files, msg):
        for f in files:
            ui.warn(msg % m.rel(f))
        return int(len(files) > 0)

    result = 0

    if after:
        remove = deleted
        result = warn(modified + added + clean,
                      _('not removing %s: file still exists\n'))
    else:
        remove = deleted + clean
        result = warn(modified, _('not removing %s: file is modified (use -f'
                                  ' to force removal)\n'))
        result = warn(added, _('not removing %s: file has been marked for add'
                               ' (use forget to undo)\n')) or result

    # Need to lock because standin files are deleted then removed from the
    # repository and we could race in-between.
    wlock = repo.wlock()
    try:
        lfdirstate = lfutil.openlfdirstate(ui, repo)
        for f in sorted(remove):
            if ui.verbose or not m.exact(f):
                # addremove in core gets fancy with the name, remove doesn't
                if isaddremove:
                    name = m.uipath(f)
                else:
                    name = m.rel(f)
                ui.status(_('removing %s\n') % name)

            if not opts.get('dry_run'):
                if not after:
                    util.unlinkpath(repo.wjoin(f), ignoremissing=True)

        if opts.get('dry_run'):
            return result

        remove = [lfutil.standin(f) for f in remove]
        # If this is being called by addremove, let the original addremove
        # function handle this.
        if not isaddremove:
            for f in remove:
                util.unlinkpath(repo.wjoin(f), ignoremissing=True)
            repo[None].forget(remove)

        for f in remove:
            lfutil.synclfdirstate(repo, lfdirstate, lfutil.splitstandin(f),
                                  False)

        lfdirstate.write()
    finally:
        wlock.release()

    return result

# For overriding mercurial.hgweb.webcommands so that largefiles will
# appear at their right place in the manifests.
def decodepath(orig, path):
    return lfutil.splitstandin(path) or path

# -- Wrappers: modify existing commands --------------------------------

def overrideadd(orig, ui, repo, *pats, **opts):
    if opts.get('normal') and opts.get('large'):
        raise error.Abort(_('--normal cannot be used with --large'))
    return orig(ui, repo, *pats, **opts)

def cmdutiladd(orig, ui, repo, matcher, prefix, explicitonly, **opts):
    # The --normal flag short circuits this override
    if opts.get('normal'):
        return orig(ui, repo, matcher, prefix, explicitonly, **opts)

    ladded, lbad = addlargefiles(ui, repo, False, matcher, **opts)
    normalmatcher = composenormalfilematcher(matcher, repo[None].manifest(),
                                             ladded)
    bad = orig(ui, repo, normalmatcher, prefix, explicitonly, **opts)

    bad.extend(f for f in lbad)
    return bad

def cmdutilremove(orig, ui, repo, matcher, prefix, after, force, subrepos):
    normalmatcher = composenormalfilematcher(matcher, repo[None].manifest())
    result = orig(ui, repo, normalmatcher, prefix, after, force, subrepos)
    return removelargefiles(ui, repo, False, matcher, after=after,
                            force=force) or result

def overridestatusfn(orig, repo, rev2, **opts):
    try:
        repo._repo.lfstatus = True
        return orig(repo, rev2, **opts)
    finally:
        repo._repo.lfstatus = False

def overridestatus(orig, ui, repo, *pats, **opts):
    try:
        repo.lfstatus = True
        return orig(ui, repo, *pats, **opts)
    finally:
        repo.lfstatus = False

def overridedirty(orig, repo, ignoreupdate=False):
    try:
        repo._repo.lfstatus = True
        return orig(repo, ignoreupdate)
    finally:
        repo._repo.lfstatus = False

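The three status wrappers above all follow one shape: set a flag on the repo, delegate to the wrapped command, and clear the flag in `finally` so an exception cannot leave it stuck on. A self-contained sketch with a toy repo object (hypothetical names, not Mercurial's API):

```python
class Repo(object):
    """Toy repo carrying the flag the wrapped command reads."""
    lfstatus = False

def status(repo):
    # Stands in for the wrapped command; it sees the flag's value.
    return 'lfstatus=%s' % repo.lfstatus

def overridestatus(orig, repo):
    try:
        repo.lfstatus = True          # enable largefile-aware behavior
        return orig(repo)             # delegate to the original command
    finally:
        repo.lfstatus = False         # always restored, even on error

repo = Repo()
print(overridestatus(status, repo))   # lfstatus=True
print(repo.lfstatus)                  # False
```

The `try`/`finally` (rather than a plain reset after the call) is what makes the flag safe when the wrapped command raises.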
def overridelog(orig, ui, repo, *pats, **opts):
    def overridematchandpats(ctx, pats=(), opts=None, globbed=False,
                             default='relpath', badfn=None):
        """Matcher that merges root directory with .hglf, suitable for log.
        It is still possible to match .hglf directly.
        For any listed files run log on the standin too.
        matchfn tries both the given filename and with .hglf stripped.
        """
        if opts is None:
            opts = {}
        matchandpats = oldmatchandpats(ctx, pats, opts, globbed, default,
                                       badfn=badfn)
        m, p = copy.copy(matchandpats)

        if m.always():
            # We want to match everything anyway, so there's no benefit trying
            # to add standins.
            return matchandpats

        pats = set(p)

        def fixpats(pat, tostandin=lfutil.standin):
            if pat.startswith('set:'):
                return pat

            kindpat = match_._patsplit(pat, None)

            if kindpat[0] is not None:
                return kindpat[0] + ':' + tostandin(kindpat[1])
            return tostandin(kindpat[1])

        if m._cwd:
            hglf = lfutil.shortname
            back = util.pconvert(m.rel(hglf)[:-len(hglf)])

            def tostandin(f):
                # The file may already be a standin, so truncate the back
                # prefix and test before mangling it. This avoids turning
                # 'glob:../.hglf/foo*' into 'glob:../.hglf/../.hglf/foo*'.
                if f.startswith(back) and lfutil.splitstandin(f[len(back):]):
                    return f

                # An absolute path is from outside the repo, so truncate the
                # path to the root before building the standin. Otherwise cwd
                # is somewhere in the repo, relative to root, and needs to be
                # prepended before building the standin.
                if os.path.isabs(m._cwd):
                    f = f[len(back):]
                else:
                    f = m._cwd + '/' + f
                return back + lfutil.standin(f)

            pats.update(fixpats(f, tostandin) for f in p)
        else:
            def tostandin(f):
                if lfutil.splitstandin(f):
                    return f
                return lfutil.standin(f)
            pats.update(fixpats(f, tostandin) for f in p)

        for i in range(0, len(m._files)):
            # Don't add '.hglf' to m.files, since that is already covered by '.'
            if m._files[i] == '.':
                continue
            standin = lfutil.standin(m._files[i])
            # If the "standin" is a directory, append instead of replace to
            # support naming a directory on the command line with only
            # largefiles. The original directory is kept to support normal
            # files.
            if standin in repo[ctx.node()]:
                m._files[i] = standin
            elif m._files[i] not in repo[ctx.node()] \
                    and repo.wvfs.isdir(standin):
                m._files.append(standin)

        m._fileroots = set(m._files)
        m._always = False
        origmatchfn = m.matchfn
        def lfmatchfn(f):
            lf = lfutil.splitstandin(f)
            if lf is not None and origmatchfn(lf):
                return True
            r = origmatchfn(f)
            return r
        m.matchfn = lfmatchfn

        ui.debug('updated patterns: %s\n' % sorted(pats))
        return m, pats

    # For hg log --patch, the match object is used in two different senses:
    # (1) to determine what revisions should be printed out, and
    # (2) to determine what files to print out diffs for.
    # The magic matchandpats override should be used for case (1) but not for
    # case (2).
    def overridemakelogfilematcher(repo, pats, opts, badfn=None):
        wctx = repo[None]
        match, pats = oldmatchandpats(wctx, pats, opts, badfn=badfn)
        return lambda rev: match

    oldmatchandpats = installmatchandpatsfn(overridematchandpats)
    oldmakelogfilematcher = cmdutil._makenofollowlogfilematcher
    setattr(cmdutil, '_makenofollowlogfilematcher', overridemakelogfilematcher)

    try:
        return orig(ui, repo, *pats, **opts)
    finally:
        restorematchandpatsfn()
        setattr(cmdutil, '_makenofollowlogfilematcher', oldmakelogfilematcher)

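The `fixpats` helper in `overridelog` above rewrites `kind:pattern` strings so only the pattern part is mapped to its standin, leaving the kind prefix and `set:` filesets untouched. A minimal sketch with a toy `patsplit` and a hypothetical `standin()` (the real code uses `match._patsplit` and `lfutil.standin`, which handle more cases):

```python
def standin(f):
    # Hypothetical stand-in for lfutil.standin: prefix with the store dir.
    return '.hglf/' + f

def patsplit(pat):
    # Minimal stand-in for match._patsplit: peel an optional kind prefix.
    if ':' in pat:
        kind, rest = pat.split(':', 1)
        return kind, rest
    return None, pat

def fixpats(pat, tostandin=standin):
    if pat.startswith('set:'):
        return pat                    # filesets are left alone
    kind, rest = patsplit(pat)
    if kind is not None:
        return kind + ':' + tostandin(rest)
    return tostandin(rest)

print(fixpats('glob:sub/*.bin'))  # glob:.hglf/sub/*.bin
print(fixpats('set:binary()'))    # set:binary()
print(fixpats('foo.bin'))         # .hglf/foo.bin
```

This toy `patsplit` would mis-handle a bare Windows drive path like `C:\x`; the real `_patsplit` validates the kind against a known list, which is why `fixpats` delegates to it.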
def overrideverify(orig, ui, repo, *pats, **opts):
    large = opts.pop('large', False)
    all = opts.pop('lfa', False)
    contents = opts.pop('lfc', False)

    result = orig(ui, repo, *pats, **opts)
    if large or all or contents:
        result = result or lfcommands.verifylfiles(ui, repo, all, contents)
    return result

def overridedebugstate(orig, ui, repo, *pats, **opts):
    large = opts.pop('large', False)
    if large:
        class fakerepo(object):
            dirstate = lfutil.openlfdirstate(ui, repo)
        orig(ui, fakerepo, *pats, **opts)
    else:
        orig(ui, repo, *pats, **opts)

# Before starting the manifest merge, merge.updates will call
# _checkunknownfile to check if there are any files in the merged-in
# changeset that collide with unknown files in the working copy.
#
# The largefiles are seen as unknown, so this prevents us from merging
# in a file 'foo' if we already have a largefile with the same name.
#
# The overridden function filters the unknown files by removing any
# largefiles. This makes the merge proceed and we can then handle this
# case further in the overridden calculateupdates function below.
def overridecheckunknownfile(origfn, repo, wctx, mctx, f, f2=None):
    if lfutil.standin(repo.dirstate.normalize(f)) in wctx:
        return False
    return origfn(repo, wctx, mctx, f, f2)

# The manifest merge handles conflicts on the manifest level. We want
# to handle changes in largefile-ness of files at this level too.
#
# The strategy is to run the original calculateupdates and then process
# the action list it outputs. There are two cases we need to deal with:
#
# 1. Normal file in p1, largefile in p2. Here the largefile is
#    detected via its standin file, which will enter the working copy
#    with a "get" action. It is not "merge" since the standin is all
#    Mercurial is concerned with at this level -- the link to the
#    existing normal file is not relevant here.
#
# 2. Largefile in p1, normal file in p2. Here we get a "merge" action
#    since the largefile will be present in the working copy and
#    different from the normal file in p2. Mercurial therefore
#    triggers a merge action.
#
# In both cases, we prompt the user and emit new actions to either
# remove the standin (if the normal file was kept) or to remove the
# normal file and get the standin (if the largefile was kept). The
# default prompt answer is to use the largefile version since it was
# presumably changed on purpose.
#
# Finally, the merge.applyupdates function will then take care of
# writing the files into the working copy and lfcommands.updatelfiles
# will update the largefiles.
def overridecalculateupdates(origfn, repo, p1, p2, pas, branchmerge, force,
                             partial, acceptremote, followcopies):
    overwrite = force and not branchmerge
    actions, diverge, renamedelete = origfn(
        repo, p1, p2, pas, branchmerge, force, partial, acceptremote,
        followcopies)

    if overwrite:
        return actions, diverge, renamedelete

    # Convert to dictionary with filename as key and action as value.
    lfiles = set()
    for f in actions:
        splitstandin = f and lfutil.splitstandin(f)
        if splitstandin in p1:
            lfiles.add(splitstandin)
        elif lfutil.standin(f) in p1:
            lfiles.add(f)

    for lfile in lfiles:
        standin = lfutil.standin(lfile)
        (lm, largs, lmsg) = actions.get(lfile, (None, None, None))
        (sm, sargs, smsg) = actions.get(standin, (None, None, None))
        if sm in ('g', 'dc') and lm != 'r':
            if sm == 'dc':
                f1, f2, fa, move, anc = sargs
                sargs = (p2[f2].flags(),)
            # Case 1: normal file in the working copy, largefile in
            # the second parent
            usermsg = _('remote turned local normal file %s into a largefile\n'
                        'use (l)argefile or keep (n)ormal file?'
                        '$$ &Largefile $$ &Normal file') % lfile
            if repo.ui.promptchoice(usermsg, 0) == 0: # pick remote largefile
                actions[lfile] = ('r', None, 'replaced by standin')
                actions[standin] = ('g', sargs, 'replaces standin')
            else: # keep local normal file
                actions[lfile] = ('k', None, 'replaces standin')
                if branchmerge:
                    actions[standin] = ('k', None, 'replaced by non-standin')
                else:
                    actions[standin] = ('r', None, 'replaced by non-standin')
        elif lm in ('g', 'dc') and sm != 'r':
            if lm == 'dc':
                f1, f2, fa, move, anc = largs
                largs = (p2[f2].flags(),)
            # Case 2: largefile in the working copy, normal file in
506 # the second parent
506 # the second parent
507 usermsg = _('remote turned local largefile %s into a normal file\n'
507 usermsg = _('remote turned local largefile %s into a normal file\n'
508 'keep (l)argefile or use (n)ormal file?'
508 'keep (l)argefile or use (n)ormal file?'
509 '$$ &Largefile $$ &Normal file') % lfile
509 '$$ &Largefile $$ &Normal file') % lfile
510 if repo.ui.promptchoice(usermsg, 0) == 0: # keep local largefile
510 if repo.ui.promptchoice(usermsg, 0) == 0: # keep local largefile
511 if branchmerge:
511 if branchmerge:
512 # largefile can be restored from standin safely
512 # largefile can be restored from standin safely
513 actions[lfile] = ('k', None, 'replaced by standin')
513 actions[lfile] = ('k', None, 'replaced by standin')
514 actions[standin] = ('k', None, 'replaces standin')
514 actions[standin] = ('k', None, 'replaces standin')
515 else:
515 else:
516 # "lfile" should be marked as "removed" without
516 # "lfile" should be marked as "removed" without
517 # removal of itself
517 # removal of itself
518 actions[lfile] = ('lfmr', None,
518 actions[lfile] = ('lfmr', None,
519 'forget non-standin largefile')
519 'forget non-standin largefile')
520
520
521 # linear-merge should treat this largefile as 're-added'
521 # linear-merge should treat this largefile as 're-added'
522 actions[standin] = ('a', None, 'keep standin')
522 actions[standin] = ('a', None, 'keep standin')
523 else: # pick remote normal file
523 else: # pick remote normal file
524 actions[lfile] = ('g', largs, 'replaces standin')
524 actions[lfile] = ('g', largs, 'replaces standin')
525 actions[standin] = ('r', None, 'replaced by non-standin')
525 actions[standin] = ('r', None, 'replaced by non-standin')
526
526
527 return actions, diverge, renamedelete
527 return actions, diverge, renamedelete
528
528
529 def mergerecordupdates(orig, repo, actions, branchmerge):
529 def mergerecordupdates(orig, repo, actions, branchmerge):
530 if 'lfmr' in actions:
530 if 'lfmr' in actions:
531 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
531 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
532 for lfile, args, msg in actions['lfmr']:
532 for lfile, args, msg in actions['lfmr']:
533 # this should be executed before 'orig', to execute 'remove'
533 # this should be executed before 'orig', to execute 'remove'
534 # before all other actions
534 # before all other actions
535 repo.dirstate.remove(lfile)
535 repo.dirstate.remove(lfile)
536 # make sure lfile doesn't get synclfdirstate'd as normal
536 # make sure lfile doesn't get synclfdirstate'd as normal
537 lfdirstate.add(lfile)
537 lfdirstate.add(lfile)
538 lfdirstate.write()
538 lfdirstate.write()
539
539
540 return orig(repo, actions, branchmerge)
540 return orig(repo, actions, branchmerge)


# Override filemerge to prompt the user about how they wish to merge
# largefiles. This will handle identical edits without prompting the user.
def overridefilemerge(origfn, premerge, repo, mynode, orig, fcd, fco, fca,
                      labels=None):
    if not lfutil.isstandin(orig) or fcd.isabsent() or fco.isabsent():
        return origfn(premerge, repo, mynode, orig, fcd, fco, fca,
                      labels=labels)

    ahash = fca.data().strip().lower()
    dhash = fcd.data().strip().lower()
    ohash = fco.data().strip().lower()
    if (ohash != ahash and
        ohash != dhash and
        (dhash == ahash or
         repo.ui.promptchoice(
             _('largefile %s has a merge conflict\nancestor was %s\n'
               'keep (l)ocal %s or\ntake (o)ther %s?'
               '$$ &Local $$ &Other') %
             (lfutil.splitstandin(orig), ahash, dhash, ohash),
             0) == 1)):
        repo.wwrite(fcd.path(), fco.data(), fco.flags())
    return True, 0, False

def copiespathcopies(orig, ctx1, ctx2, match=None):
    copies = orig(ctx1, ctx2, match=match)
    updated = {}

    for k, v in copies.iteritems():
        updated[lfutil.splitstandin(k) or k] = lfutil.splitstandin(v) or v

    return updated

# Copy first changes the matchers to match standins instead of
# largefiles. Then it overrides util.copyfile; in that function it
# checks whether the destination largefile already exists. It also keeps a
# list of copied files so that the largefiles can be copied and the
# dirstate updated.
def overridecopy(orig, ui, repo, pats, opts, rename=False):
    # doesn't remove largefile on rename
    if len(pats) < 2:
        # this isn't legal, let the original function deal with it
        return orig(ui, repo, pats, opts, rename)

    # This could copy both lfiles and normal files in one command,
    # but we don't want to do that. First replace their matcher to
    # only match normal files and run it, then replace it to just
    # match largefiles and run it again.
    nonormalfiles = False
    nolfiles = False
    installnormalfilesmatchfn(repo[None].manifest())
    try:
        result = orig(ui, repo, pats, opts, rename)
    except error.Abort as e:
        if str(e) != _('no files to copy'):
            raise e
        else:
            nonormalfiles = True
            result = 0
    finally:
        restorematchfn()

    # The first rename can cause our current working directory to be removed.
    # In that case there is nothing left to copy/rename so just quit.
    try:
        repo.getcwd()
    except OSError:
        return result

    def makestandin(relpath):
        path = pathutil.canonpath(repo.root, repo.getcwd(), relpath)
        return os.path.join(repo.wjoin(lfutil.standin(path)))

    fullpats = scmutil.expandpats(pats)
    dest = fullpats[-1]

    if os.path.isdir(dest):
        if not os.path.isdir(makestandin(dest)):
            os.makedirs(makestandin(dest))

    try:
        # When we call orig below it creates the standins but we don't add
        # them to the dir state until later so lock during that time.
        wlock = repo.wlock()

        manifest = repo[None].manifest()
        def overridematch(ctx, pats=(), opts=None, globbed=False,
                default='relpath', badfn=None):
            if opts is None:
                opts = {}
            newpats = []
            # The patterns were previously mangled to add the standin
            # directory; we need to remove that now
            for pat in pats:
                if match_.patkind(pat) is None and lfutil.shortname in pat:
                    newpats.append(pat.replace(lfutil.shortname, ''))
                else:
                    newpats.append(pat)
            match = oldmatch(ctx, newpats, opts, globbed, default, badfn=badfn)
            m = copy.copy(match)
            lfile = lambda f: lfutil.standin(f) in manifest
            m._files = [lfutil.standin(f) for f in m._files if lfile(f)]
            m._fileroots = set(m._files)
            origmatchfn = m.matchfn
            m.matchfn = lambda f: (lfutil.isstandin(f) and
                                   (f in manifest) and
                                   origmatchfn(lfutil.splitstandin(f)) or
                                   None)
            return m
        oldmatch = installmatchfn(overridematch)
        listpats = []
        for pat in pats:
            if match_.patkind(pat) is not None:
                listpats.append(pat)
            else:
                listpats.append(makestandin(pat))

        try:
            origcopyfile = util.copyfile
            copiedfiles = []
            def overridecopyfile(src, dest):
                if (lfutil.shortname in src and
                    dest.startswith(repo.wjoin(lfutil.shortname))):
                    destlfile = dest.replace(lfutil.shortname, '')
                    if not opts['force'] and os.path.exists(destlfile):
                        raise IOError('',
                            _('destination largefile already exists'))
                copiedfiles.append((src, dest))
                origcopyfile(src, dest)

            util.copyfile = overridecopyfile
            result += orig(ui, repo, listpats, opts, rename)
        finally:
            util.copyfile = origcopyfile

        lfdirstate = lfutil.openlfdirstate(ui, repo)
        for (src, dest) in copiedfiles:
            if (lfutil.shortname in src and
                dest.startswith(repo.wjoin(lfutil.shortname))):
                srclfile = src.replace(repo.wjoin(lfutil.standin('')), '')
                destlfile = dest.replace(repo.wjoin(lfutil.standin('')), '')
                destlfiledir = os.path.dirname(repo.wjoin(destlfile)) or '.'
                if not os.path.isdir(destlfiledir):
                    os.makedirs(destlfiledir)
                if rename:
                    os.rename(repo.wjoin(srclfile), repo.wjoin(destlfile))

                    # The file is gone, but this deletes any empty parent
                    # directories as a side-effect.
                    util.unlinkpath(repo.wjoin(srclfile), True)
                    lfdirstate.remove(srclfile)
                else:
                    util.copyfile(repo.wjoin(srclfile),
                                  repo.wjoin(destlfile))

                lfdirstate.add(destlfile)
        lfdirstate.write()
    except error.Abort as e:
        if str(e) != _('no files to copy'):
            raise e
        else:
            nolfiles = True
    finally:
        restorematchfn()
        wlock.release()

    if nolfiles and nonormalfiles:
        raise error.Abort(_('no files to copy'))

    return result

# When the user calls revert, we have to be careful to not revert any
# changes to other largefiles accidentally. This means we have to keep
# track of the largefiles that are being reverted so we only pull down
# the necessary largefiles.
#
# Standins are only updated (to match the hash of largefiles) before
# commits. Update the standins then run the original revert, changing
# the matcher to hit standins instead of largefiles. Based on the
# resulting standins update the largefiles.
def overriderevert(orig, ui, repo, ctx, parents, *pats, **opts):
    # Because we put the standins in a bad state (by updating them)
    # and then return them to a correct state we need to lock to
    # prevent others from changing them in their incorrect state.
    wlock = repo.wlock()
    try:
        lfdirstate = lfutil.openlfdirstate(ui, repo)
        s = lfutil.lfdirstatestatus(lfdirstate, repo)
        lfdirstate.write()
        for lfile in s.modified:
            lfutil.updatestandin(repo, lfutil.standin(lfile))
        for lfile in s.deleted:
            if (os.path.exists(repo.wjoin(lfutil.standin(lfile)))):
                os.unlink(repo.wjoin(lfutil.standin(lfile)))

        oldstandins = lfutil.getstandinsstate(repo)

        def overridematch(mctx, pats=(), opts=None, globbed=False,
                default='relpath', badfn=None):
            if opts is None:
                opts = {}
            match = oldmatch(mctx, pats, opts, globbed, default, badfn=badfn)
            m = copy.copy(match)

            # revert supports recursing into subrepos, and though largefiles
            # currently doesn't work correctly in that case, this match is
            # called, so the lfdirstate above may not be the correct one for
            # this invocation of match.
            lfdirstate = lfutil.openlfdirstate(mctx.repo().ui, mctx.repo(),
                                               False)

            def tostandin(f):
                standin = lfutil.standin(f)
                if standin in ctx or standin in mctx:
                    return standin
                elif standin in repo[None] or lfdirstate[f] == 'r':
                    return None
                return f
            m._files = [tostandin(f) for f in m._files]
            m._files = [f for f in m._files if f is not None]
            m._fileroots = set(m._files)
            origmatchfn = m.matchfn
            def matchfn(f):
                if lfutil.isstandin(f):
                    return (origmatchfn(lfutil.splitstandin(f)) and
                            (f in ctx or f in mctx))
                return origmatchfn(f)
            m.matchfn = matchfn
            return m
        oldmatch = installmatchfn(overridematch)
        try:
            orig(ui, repo, ctx, parents, *pats, **opts)
        finally:
            restorematchfn()

        newstandins = lfutil.getstandinsstate(repo)
        filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
        # lfdirstate should be 'normallookup'-ed for updated files,
        # because reverting doesn't touch dirstate for 'normal' files
        # when target revision is explicitly specified: in such case,
        # 'n' and valid timestamp in dirstate doesn't ensure 'clean'
        # of target (standin) file.
        lfcommands.updatelfiles(ui, repo, filelist, printmessage=False,
                                normallookup=True)

    finally:
        wlock.release()

# after pulling changesets, we need to take some extra care to get
# largefiles updated remotely
def overridepull(orig, ui, repo, source=None, **opts):
    revsprepull = len(repo)
    if not source:
        source = 'default'
    repo.lfpullsource = source
    result = orig(ui, repo, source, **opts)
    revspostpull = len(repo)
    lfrevs = opts.get('lfrev', [])
    if opts.get('all_largefiles'):
        lfrevs.append('pulled()')
    if lfrevs and revspostpull > revsprepull:
        numcached = 0
        repo.firstpulled = revsprepull # for pulled() revset expression
        try:
            for rev in scmutil.revrange(repo, lfrevs):
                ui.note(_('pulling largefiles for revision %s\n') % rev)
                (cached, missing) = lfcommands.cachelfiles(ui, repo, rev)
                numcached += len(cached)
        finally:
            del repo.firstpulled
        ui.status(_("%d largefiles cached\n") % numcached)
    return result

def pulledrevsetsymbol(repo, subset, x):
    """``pulled()``
    Changesets that have just been pulled.

    Only available with largefiles from pull --lfrev expressions.

    .. container:: verbose

      Some examples:

      - pull largefiles for all new changesets::

          hg pull --lfrev "pulled()"

      - pull largefiles for all new branch heads::

          hg pull --lfrev "head(pulled()) and not closed()"

    """

    try:
        firstpulled = repo.firstpulled
    except AttributeError:
        raise error.Abort(_("pulled() only available in --lfrev"))
    return revset.baseset([r for r in subset if r >= firstpulled])
841 def overrideclone(orig, ui, source, dest=None, **opts):
841 def overrideclone(orig, ui, source, dest=None, **opts):
842 d = dest
842 d = dest
843 if d is None:
843 if d is None:
844 d = hg.defaultdest(source)
844 d = hg.defaultdest(source)
845 if opts.get('all_largefiles') and not hg.islocal(d):
845 if opts.get('all_largefiles') and not hg.islocal(d):
846 raise error.Abort(_(
846 raise error.Abort(_(
847 '--all-largefiles is incompatible with non-local destination %s') %
847 '--all-largefiles is incompatible with non-local destination %s') %
848 d)
848 d)
849
849
850 return orig(ui, source, dest, **opts)
850 return orig(ui, source, dest, **opts)
851
851
852 def hgclone(orig, ui, opts, *args, **kwargs):
852 def hgclone(orig, ui, opts, *args, **kwargs):
853 result = orig(ui, opts, *args, **kwargs)
853 result = orig(ui, opts, *args, **kwargs)
854
854
855 if result is not None:
855 if result is not None:
856 sourcerepo, destrepo = result
856 sourcerepo, destrepo = result
857 repo = destrepo.local()
857 repo = destrepo.local()
858
858
859 # When cloning to a remote repo (like through SSH), no repo is available
859 # When cloning to a remote repo (like through SSH), no repo is available
860 # from the peer. Therefore the largefiles can't be downloaded and the
860 # from the peer. Therefore the largefiles can't be downloaded and the
861 # hgrc can't be updated.
861 # hgrc can't be updated.
862 if not repo:
862 if not repo:
863 return result
863 return result
864
864
865 # If largefiles is required for this repo, permanently enable it locally
865 # If largefiles is required for this repo, permanently enable it locally
866 if 'largefiles' in repo.requirements:
866 if 'largefiles' in repo.requirements:
867 fp = repo.vfs('hgrc', 'a', text=True)
867 fp = repo.vfs('hgrc', 'a', text=True)
868 try:
868 try:
869 fp.write('\n[extensions]\nlargefiles=\n')
869 fp.write('\n[extensions]\nlargefiles=\n')
870 finally:
870 finally:
871 fp.close()
871 fp.close()
872
872
873 # Caching is implicitly limited to 'rev' option, since the dest repo was
873 # Caching is implicitly limited to 'rev' option, since the dest repo was
874 # truncated at that point. The user may expect a download count with
874 # truncated at that point. The user may expect a download count with
875 # this option, so attempt whether or not this is a largefile repo.
875 # this option, so attempt whether or not this is a largefile repo.
876 if opts.get('all_largefiles'):
876 if opts.get('all_largefiles'):
877 success, missing = lfcommands.downloadlfiles(ui, repo, None)
877 success, missing = lfcommands.downloadlfiles(ui, repo, None)
878
878
879 if missing != 0:
879 if missing != 0:
880 return None
880 return None
881
881
882 return result
882 return result
883
883
884 def overriderebase(orig, ui, repo, **opts):
884 def overriderebase(orig, ui, repo, **opts):
885 if not util.safehasattr(repo, '_largefilesenabled'):
885 if not util.safehasattr(repo, '_largefilesenabled'):
886 return orig(ui, repo, **opts)
886 return orig(ui, repo, **opts)
887
887
888 resuming = opts.get('continue')
888 resuming = opts.get('continue')
889 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
889 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
890 repo._lfstatuswriters.append(lambda *msg, **opts: None)
890 repo._lfstatuswriters.append(lambda *msg, **opts: None)
891 try:
891 try:
892 return orig(ui, repo, **opts)
892 return orig(ui, repo, **opts)
893 finally:
893 finally:
894 repo._lfstatuswriters.pop()
894 repo._lfstatuswriters.pop()
895 repo._lfcommithooks.pop()
895 repo._lfcommithooks.pop()

def overridearchivecmd(orig, ui, repo, dest, **opts):
    repo.unfiltered().lfstatus = True

    try:
        return orig(ui, repo.unfiltered(), dest, **opts)
    finally:
        repo.unfiltered().lfstatus = False

def hgwebarchive(orig, web, req, tmpl):
    web.repo.lfstatus = True

    try:
        return orig(web, req, tmpl)
    finally:
        web.repo.lfstatus = False

def overridearchive(orig, repo, dest, node, kind, decode=True, matchfn=None,
                    prefix='', mtime=None, subrepos=None):
    # For some reason setting repo.lfstatus in hgwebarchive only changes the
    # unfiltered repo's attr, so check that as well.
    if not repo.lfstatus and not repo.unfiltered().lfstatus:
        return orig(repo, dest, node, kind, decode, matchfn, prefix, mtime,
                    subrepos)

    # No need to lock because we are only reading history and
    # largefile caches, neither of which are modified.
    if node is not None:
        lfcommands.cachelfiles(repo.ui, repo, node)

    if kind not in archival.archivers:
        raise error.Abort(_("unknown archive type '%s'") % kind)

    ctx = repo[node]

    if kind == 'files':
        if prefix:
            raise error.Abort(
                _('cannot give prefix when archiving to files'))
    else:
        prefix = archival.tidyprefix(dest, kind, prefix)

    def write(name, mode, islink, getdata):
        if matchfn and not matchfn(name):
            return
        data = getdata()
        if decode:
            data = repo.wwritedata(name, data)
        archiver.addfile(prefix + name, mode, islink, data)

    archiver = archival.archivers[kind](dest, mtime or ctx.date()[0])

    if repo.ui.configbool("ui", "archivemeta", True):
        write('.hg_archival.txt', 0o644, False,
              lambda: archival.buildmetadata(ctx))

    for f in ctx:
        ff = ctx.flags(f)
        getdata = ctx[f].data
        if lfutil.isstandin(f):
            if node is not None:
                path = lfutil.findfile(repo, getdata().strip())

                if path is None:
                    raise error.Abort(
                       _('largefile %s not found in repo store or system cache')
                       % lfutil.splitstandin(f))
            else:
                path = lfutil.splitstandin(f)

            f = lfutil.splitstandin(f)

            def getdatafn():
                fd = None
                try:
                    fd = open(path, 'rb')
                    return fd.read()
                finally:
                    if fd:
                        fd.close()

            getdata = getdatafn
        write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)

    if subrepos:
        for subpath in sorted(ctx.substate):
            sub = ctx.workingsub(subpath)
            submatch = match_.narrowmatcher(subpath, matchfn)
984 sub._repo.lfstatus = True
984 sub._repo.lfstatus = True
985 sub.archive(archiver, prefix, submatch)
985 sub.archive(archiver, prefix, submatch)
986
986
987 archiver.done()
987 archiver.done()
988
988
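The write() helper above derives archive permission bits from the flags string with the `'x' in ff and 0o755 or 0o644` idiom ('x' marks an executable, 'l' a symlink). A minimal standalone sketch of that mapping (the `filemode` name is hypothetical, not part of the extension):

```python
def filemode(flags):
    # executable entries get 0o755, everything else (including symlink
    # entries, whose link-ness is carried separately) gets 0o644
    return 0o755 if 'x' in flags else 0o644
```

The `and/or` form in the source predates the conditional expression but is equivalent here, since 0o755 is truthy.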
def hgsubrepoarchive(orig, repo, archiver, prefix, match=None):
    if not repo._repo.lfstatus:
        return orig(repo, archiver, prefix, match)

    repo._get(repo._state + ('hg',))
    rev = repo._state[1]
    ctx = repo._repo[rev]

    if ctx.node() is not None:
        lfcommands.cachelfiles(repo.ui, repo._repo, ctx.node())

    def write(name, mode, islink, getdata):
        # At this point, the standin has been replaced with the largefile name,
        # so the normal matcher works here without the lfutil variants.
        if match and not match(f):
            return
        data = getdata()

        archiver.addfile(prefix + repo._path + '/' + name, mode, islink, data)

    for f in ctx:
        ff = ctx.flags(f)
        getdata = ctx[f].data
        if lfutil.isstandin(f):
            if ctx.node() is not None:
                path = lfutil.findfile(repo._repo, getdata().strip())

                if path is None:
                    raise error.Abort(
                       _('largefile %s not found in repo store or system cache')
                       % lfutil.splitstandin(f))
            else:
                path = lfutil.splitstandin(f)

            f = lfutil.splitstandin(f)

            def getdatafn():
                fd = None
                try:
                    fd = open(os.path.join(prefix, path), 'rb')
                    return fd.read()
                finally:
                    if fd:
                        fd.close()

            getdata = getdatafn

        write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)

    for subpath in sorted(ctx.substate):
        sub = ctx.workingsub(subpath)
        submatch = match_.narrowmatcher(subpath, match)
        sub._repo.lfstatus = True
        sub.archive(archiver, prefix + repo._path + '/', submatch)

# If a largefile is modified, the change is not reflected in its
# standin until a commit. cmdutil.bailifchanged() raises an exception
# if the repo has uncommitted changes. Wrap it to also check if
# largefiles were changed. This is used by bisect, backout and fetch.
def overridebailifchanged(orig, repo, *args, **kwargs):
    orig(repo, *args, **kwargs)
    repo.lfstatus = True
    s = repo.status()
    repo.lfstatus = False
    if s.modified or s.added or s.removed or s.deleted:
        raise error.Abort(_('uncommitted changes'))

def cmdutilforget(orig, ui, repo, match, prefix, explicitonly):
    normalmatcher = composenormalfilematcher(match, repo[None].manifest())
    bad, forgot = orig(ui, repo, normalmatcher, prefix, explicitonly)
    m = composelargefilematcher(match, repo[None].manifest())

    try:
        repo.lfstatus = True
        s = repo.status(match=m, clean=True)
    finally:
        repo.lfstatus = False
    forget = sorted(s.modified + s.added + s.deleted + s.clean)
    forget = [f for f in forget if lfutil.standin(f) in repo[None].manifest()]

    for f in forget:
        if lfutil.standin(f) not in repo.dirstate and not \
                repo.wvfs.isdir(lfutil.standin(f)):
            ui.warn(_('not removing %s: file is already untracked\n')
                    % m.rel(f))
            bad.append(f)

    for f in forget:
        if ui.verbose or not m.exact(f):
            ui.status(_('removing %s\n') % m.rel(f))

    # Need to lock because standin files are deleted then removed from the
    # repository and we could race in-between.
    wlock = repo.wlock()
    try:
        lfdirstate = lfutil.openlfdirstate(ui, repo)
        for f in forget:
            if lfdirstate[f] == 'a':
                lfdirstate.drop(f)
            else:
                lfdirstate.remove(f)
        lfdirstate.write()
        standins = [lfutil.standin(f) for f in forget]
        for f in standins:
            util.unlinkpath(repo.wjoin(f), ignoremissing=True)
        rejected = repo[None].forget(standins)
    finally:
        wlock.release()

    bad.extend(f for f in rejected if f in m.files())
    forgot.extend(f for f in forget if f not in rejected)
    return bad, forgot

def _getoutgoings(repo, other, missing, addfunc):
    """get pairs of filename and largefile hash in outgoing revisions
    in 'missing'.

    largefiles already existing on 'other' repository are ignored.

    'addfunc' is invoked with each unique pair of filename and
    largefile hash value.
    """
    knowns = set()
    lfhashes = set()
    def dedup(fn, lfhash):
        k = (fn, lfhash)
        if k not in knowns:
            knowns.add(k)
            lfhashes.add(lfhash)
    lfutil.getlfilestoupload(repo, missing, dedup)
    if lfhashes:
        lfexists = basestore._openstore(repo, other).exists(lfhashes)
        for fn, lfhash in knowns:
            if not lfexists[lfhash]: # lfhash doesn't exist on "other"
                addfunc(fn, lfhash)

def outgoinghook(ui, repo, other, opts, missing):
    if opts.pop('large', None):
        lfhashes = set()
        if ui.debugflag:
            toupload = {}
            def addfunc(fn, lfhash):
                if fn not in toupload:
                    toupload[fn] = []
                toupload[fn].append(lfhash)
                lfhashes.add(lfhash)
            def showhashes(fn):
                for lfhash in sorted(toupload[fn]):
                    ui.debug(' %s\n' % (lfhash))
        else:
            toupload = set()
            def addfunc(fn, lfhash):
                toupload.add(fn)
                lfhashes.add(lfhash)
            def showhashes(fn):
                pass
        _getoutgoings(repo, other, missing, addfunc)

        if not toupload:
            ui.status(_('largefiles: no files to upload\n'))
        else:
            ui.status(_('largefiles to upload (%d entities):\n')
                      % (len(lfhashes)))
            for file in sorted(toupload):
                ui.status(lfutil.splitstandin(file) + '\n')
                showhashes(file)
            ui.status('\n')

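The dedup() closure in _getoutgoings records each (filename, hash) pair only once while also tracking the set of distinct hashes, so the store existence check can be batched. A standalone sketch of the pattern (the `collectpairs` wrapper is hypothetical, added only to make the closure testable):

```python
def collectpairs(pairs):
    # mirror of the dedup() closure: record each unique (fn, hash) pair
    # and the set of distinct largefile hashes seen
    knowns = set()
    lfhashes = set()
    def dedup(fn, lfhash):
        k = (fn, lfhash)
        if k not in knowns:
            knowns.add(k)
            lfhashes.add(lfhash)
    for fn, lfhash in pairs:
        dedup(fn, lfhash)
    return knowns, lfhashes
```

Batching the hashes into one `exists()` query, then filtering `knowns` by the result, avoids one remote round-trip per file.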
def summaryremotehook(ui, repo, opts, changes):
    largeopt = opts.get('large', False)
    if changes is None:
        if largeopt:
            return (False, True) # only outgoing check is needed
        else:
            return (False, False)
    elif largeopt:
        url, branch, peer, outgoing = changes[1]
        if peer is None:
            # i18n: column positioning for "hg summary"
            ui.status(_('largefiles: (no remote repo)\n'))
            return

        toupload = set()
        lfhashes = set()
        def addfunc(fn, lfhash):
            toupload.add(fn)
            lfhashes.add(lfhash)
        _getoutgoings(repo, peer, outgoing.missing, addfunc)

        if not toupload:
            # i18n: column positioning for "hg summary"
            ui.status(_('largefiles: (no files to upload)\n'))
        else:
            # i18n: column positioning for "hg summary"
            ui.status(_('largefiles: %d entities for %d files to upload\n')
                      % (len(lfhashes), len(toupload)))

def overridesummary(orig, ui, repo, *pats, **opts):
    try:
        repo.lfstatus = True
        orig(ui, repo, *pats, **opts)
    finally:
        repo.lfstatus = False

def scmutiladdremove(orig, repo, matcher, prefix, opts=None, dry_run=None,
                     similarity=None):
    if opts is None:
        opts = {}
    if not lfutil.islfilesrepo(repo):
        return orig(repo, matcher, prefix, opts, dry_run, similarity)
    # Get the list of missing largefiles so we can remove them
    lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
    unsure, s = lfdirstate.status(match_.always(repo.root, repo.getcwd()), [],
                                  False, False, False)

    # Call into the normal remove code, but leave the removal of the standin
    # to the original addremove. Monkey patching here makes sure we don't
    # remove the standin in the largefiles code, preventing a very confused
    # state later.
    if s.deleted:
        m = copy.copy(matcher)

        # The m._files and m._map attributes are not changed to the deleted
        # list because that affects the m.exact() test, which in turn governs
        # whether or not the file name is printed, and how. Simply limit the
        # original matches to those in the deleted status list.
        matchfn = m.matchfn
        m.matchfn = lambda f: f in s.deleted and matchfn(f)

        removelargefiles(repo.ui, repo, True, m, **opts)
    # Call into the normal add code, and any files that *should* be added as
    # largefiles will be.
    added, bad = addlargefiles(repo.ui, repo, True, matcher, **opts)
    # Now that we've handled largefiles, hand off to the original addremove
    # function to take care of the rest. Make sure it doesn't do anything with
    # largefiles by passing a matcher that will ignore them.
    matcher = composenormalfilematcher(matcher, repo[None].manifest(), added)
    return orig(repo, matcher, prefix, opts, dry_run, similarity)

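The matchfn-wrapping trick above (copy the matcher, then AND the copy's matchfn with the deleted set, leaving `_files` and thus `exact()`-style behavior untouched) can be sketched against a toy matcher. The `Matcher` class here is a hypothetical stand-in, not Mercurial's real matcher API:

```python
import copy

class Matcher(object):
    # hypothetical stand-in for a Mercurial matcher: membership is
    # delegated to a replaceable matchfn attribute
    def __init__(self, files):
        self._files = set(files)
        self.matchfn = self._files.__contains__
    def __call__(self, f):
        return self.matchfn(f)

m = Matcher(['a', 'b', 'c'])
deleted = {'b', 'c'}

# shallow-copy the matcher, then restrict only the copy's matchfn to the
# deleted files; the original matcher is unaffected
limited = copy.copy(m)
matchfn = limited.matchfn
limited.matchfn = lambda f: f in deleted and matchfn(f)
```

Capturing the old matchfn in a local before reassigning avoids infinite recursion in the new lambda.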
# Calling purge with --all will cause the largefiles to be deleted.
# Override repo.status to prevent this from happening.
def overridepurge(orig, ui, repo, *dirs, **opts):
    # XXX Monkey patching a repoview will not work. The assigned attribute will
    # be set on the unfiltered repo, but we will only lookup attributes in the
    # unfiltered repo if the lookup in the repoview object itself fails. As the
    # monkey patched method exists on the repoview class the lookup will not
    # fail. As a result, the original version will shadow the monkey patched
    # one, defeating the monkey patch.
    #
    # As a work around we use an unfiltered repo here. We should do something
    # cleaner instead.
    repo = repo.unfiltered()
    oldstatus = repo.status
    def overridestatus(node1='.', node2=None, match=None, ignored=False,
                       clean=False, unknown=False, listsubrepos=False):
        r = oldstatus(node1, node2, match, ignored, clean, unknown,
                      listsubrepos)
        lfdirstate = lfutil.openlfdirstate(ui, repo)
        unknown = [f for f in r.unknown if lfdirstate[f] == '?']
        ignored = [f for f in r.ignored if lfdirstate[f] == '?']
        return scmutil.status(r.modified, r.added, r.removed, r.deleted,
                              unknown, ignored, r.clean)
    repo.status = overridestatus
    orig(ui, repo, *dirs, **opts)
    repo.status = oldstatus
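The repoview caveat in the XXX comment can be reproduced in a few lines: when assignments through the view are forwarded to the underlying repo, but the method being patched also exists on the view's class, attribute lookup finds the class method first and the patch is invisible through the view. This is a toy model, not Mercurial's actual repoview implementation:

```python
class Repo(object):
    def status(self):
        return 'original'

class RepoView(object):
    def __init__(self, repo):
        object.__setattr__(self, '_unfiltered', repo)
    def status(self):                    # also defined on the view class
        return 'original'
    def __getattr__(self, name):         # only reached on *failed* lookups
        return getattr(self._unfiltered, name)
    def __setattr__(self, name, value):  # assignments land on the repo
        setattr(self._unfiltered, name, value)

repo = Repo()
view = RepoView(repo)
view.status = lambda: 'patched'  # forwarded to the unfiltered repo
# repo.status() now returns the patch, but view.status() still resolves
# to the class method, so the monkey patch is shadowed through the view
```

This is why overridepurge patches `repo.unfiltered()` directly instead of the (possibly filtered) repoview it was handed.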

def overriderollback(orig, ui, repo, **opts):
    wlock = repo.wlock()
    try:
        before = repo.dirstate.parents()
        orphans = set(f for f in repo.dirstate
                      if lfutil.isstandin(f) and repo.dirstate[f] != 'r')
        result = orig(ui, repo, **opts)
        after = repo.dirstate.parents()
        if before == after:
            return result # no need to restore standins

        pctx = repo['.']
        for f in repo.dirstate:
            if lfutil.isstandin(f):
                orphans.discard(f)
                if repo.dirstate[f] == 'r':
                    repo.wvfs.unlinkpath(f, ignoremissing=True)
                elif f in pctx:
                    fctx = pctx[f]
                    repo.wwrite(f, fctx.data(), fctx.flags())
                else:
                    # content of standin is not so important in 'a',
                    # 'm' or 'n' (coming from the 2nd parent) cases
                    lfutil.writestandin(repo, f, '', False)
        for standin in orphans:
            repo.wvfs.unlinkpath(standin, ignoremissing=True)

        lfdirstate = lfutil.openlfdirstate(ui, repo)
        orphans = set(lfdirstate)
        lfiles = lfutil.listlfiles(repo)
        for file in lfiles:
            lfutil.synclfdirstate(repo, lfdirstate, file, True)
            orphans.discard(file)
        for lfile in orphans:
            lfdirstate.drop(lfile)
        lfdirstate.write()
    finally:
        wlock.release()
    return result

def overridetransplant(orig, ui, repo, *revs, **opts):
    resuming = opts.get('continue')
    repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
    repo._lfstatuswriters.append(lambda *msg, **opts: None)
    try:
        result = orig(ui, repo, *revs, **opts)
    finally:
        repo._lfstatuswriters.pop()
        repo._lfcommithooks.pop()
    return result

def overridecat(orig, ui, repo, file1, *pats, **opts):
    ctx = scmutil.revsingle(repo, opts.get('rev'))
    err = 1
    notbad = set()
    m = scmutil.match(ctx, (file1,) + pats, opts)
    origmatchfn = m.matchfn
    def lfmatchfn(f):
        if origmatchfn(f):
            return True
        lf = lfutil.splitstandin(f)
        if lf is None:
            return False
        notbad.add(lf)
        return origmatchfn(lf)
    m.matchfn = lfmatchfn
    origbadfn = m.bad
    def lfbadfn(f, msg):
        if f not in notbad:
            origbadfn(f, msg)
    m.bad = lfbadfn

    origvisitdirfn = m.visitdir
    def lfvisitdirfn(dir):
        if dir == lfutil.shortname:
            return True
        ret = origvisitdirfn(dir)
        if ret:
            return ret
        lf = lfutil.splitstandin(dir)
        if lf is None:
            return False
        return origvisitdirfn(lf)
    m.visitdir = lfvisitdirfn

    for f in ctx.walk(m):
        fp = cmdutil.makefileobj(repo, opts.get('output'), ctx.node(),
                                 pathname=f)
        lf = lfutil.splitstandin(f)
        if lf is None or origmatchfn(f):
            # duplicating unreachable code from commands.cat
            data = ctx[f].data()
            if opts.get('decode'):
                data = repo.wwritedata(f, data)
            fp.write(data)
        else:
            hash = lfutil.readstandin(repo, lf, ctx.rev())
            if not lfutil.inusercache(repo.ui, hash):
                store = basestore._openstore(repo)
                success, missing = store.get([(lf, hash)])
                if len(success) != 1:
                    raise error.Abort(
                        _('largefile %s is not in cache and could not be '
                          'downloaded') % lf)
            path = lfutil.usercachepath(repo.ui, hash)
            fpin = open(path, "rb")
            for chunk in util.filechunkiter(fpin, 128 * 1024):
                fp.write(chunk)
            fpin.close()
        fp.close()
        err = 0
    return err

-def mergeupdate(orig, repo, node, branchmerge, force, partial,
+def mergeupdate(orig, repo, node, branchmerge, force,
                 *args, **kwargs):
+    matcher = kwargs.get('matcher', None)
+    # note if this is a partial update
+    partial = matcher and not matcher.always()
     wlock = repo.wlock()
     try:
         # branch |       |         |
         # merge  | force | partial | action
         # -------+-------+---------+--------------
         #    x   |   x   |    x    | linear-merge
         #    o   |   x   |    x    | branch-merge
         #    x   |   o   |    x    | overwrite (as clean update)
         #    o   |   o   |    x    | force-branch-merge (*1)
         #    x   |   x   |    o    |   (*)
         #    o   |   x   |    o    |   (*)
         #    x   |   o   |    o    | overwrite (as revert)
         #    o   |   o   |    o    |   (*)
         #
         # (*) don't care
         # (*1) deprecated, but used internally (e.g: "rebase --collapse")

         lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
         unsure, s = lfdirstate.status(match_.always(repo.root,
                                                     repo.getcwd()),
                                       [], False, False, False)
         pctx = repo['.']
         for lfile in unsure + s.modified:
             lfileabs = repo.wvfs.join(lfile)
             if not os.path.exists(lfileabs):
                 continue
             lfhash = lfutil.hashrepofile(repo, lfile)
             standin = lfutil.standin(lfile)
             lfutil.writestandin(repo, standin, lfhash,
                                 lfutil.getexecutable(lfileabs))
             if (standin in pctx and
                 lfhash == lfutil.readstandin(repo, lfile, '.')):
                 lfdirstate.normal(lfile)
         for lfile in s.added:
             lfutil.updatestandin(repo, lfutil.standin(lfile))
         lfdirstate.write()

         oldstandins = lfutil.getstandinsstate(repo)

-        result = orig(repo, node, branchmerge, force, partial, *args, **kwargs)
+        result = orig(repo, node, branchmerge, force, *args, **kwargs)

         newstandins = lfutil.getstandinsstate(repo)
         filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
         if branchmerge or force or partial:
             filelist.extend(s.deleted + s.removed)

         lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
                                 normallookup=partial)

         return result
     finally:
         wlock.release()

1422 def scmutilmarktouched(orig, repo, files, *args, **kwargs):
1425 def scmutilmarktouched(orig, repo, files, *args, **kwargs):
1423 result = orig(repo, files, *args, **kwargs)
1426 result = orig(repo, files, *args, **kwargs)
1424
1427
1425 filelist = [lfutil.splitstandin(f) for f in files if lfutil.isstandin(f)]
1428 filelist = [lfutil.splitstandin(f) for f in files if lfutil.isstandin(f)]
1426 if filelist:
1429 if filelist:
1427 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1430 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1428 printmessage=False, normallookup=True)
1431 printmessage=False, normallookup=True)
1429
1432
1430 return result
1433 return result
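The hunk above reflects this changeset's API shift: `merge.update` no longer receives a `partial` callable as a positional argument; callers pass a matcher through the keyword arguments instead, and "partial update" behaviour is derived from whether the matcher matches everything. The sketch below illustrates that calling convention with stand-in matcher classes (these are simplified stand-ins, not Mercurial's real `match` objects or `merge.update` signature):

```python
class AlwaysMatcher:
    """Stand-in for an always-matcher: selects every file."""
    def always(self):
        return True
    def __call__(self, path):
        return True

class PrefixMatcher:
    """Stand-in for a narrowing matcher that selects one subtree."""
    def __init__(self, prefix):
        self.prefix = prefix
    def always(self):
        return False
    def __call__(self, path):
        return path.startswith(self.prefix)

def update(node, branchmerge, force, matcher=None):
    # After this change, "partial" is no longer a positional callable;
    # it is derived from the optional matcher keyword argument.
    partial = matcher is not None and not matcher.always()
    return partial

# No matcher (or an always-matcher) means a full, non-partial update.
assert update('tip', False, False) is False
assert update('tip', False, False, matcher=AlwaysMatcher()) is False
# A narrowing matcher makes the update partial.
assert update('tip', False, False, matcher=PrefixMatcher('subdir/')) is True
```

The advantage of the matcher form is that callers and wrappers (like the largefiles override above) no longer need to thread an opaque function through every call site; the matcher carries enough structure to answer "is this a full update?" directly.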
@@ -1,1249 +1,1249 @@
# rebase.py - rebasing feature for mercurial
#
# Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''command to move sets of revisions to a different ancestor

This extension lets you rebase changesets in an existing Mercurial
repository.

For more information:
https://mercurial-scm.org/wiki/RebaseExtension
'''

from mercurial import hg, util, repair, merge, cmdutil, commands, bookmarks
from mercurial import extensions, patch, scmutil, phases, obsolete, error
from mercurial import copies, repoview, revset
from mercurial.commands import templateopts
from mercurial.node import nullrev, nullid, hex, short
from mercurial.lock import release
from mercurial.i18n import _
import os, errno

# The following constants are used throughout the rebase module. The ordering of
# their values must be maintained.

# Indicates that a revision needs to be rebased
revtodo = -1
nullmerge = -2
revignored = -3
# successor in rebase destination
revprecursor = -4
# plain prune (no successor)
revpruned = -5
revskipped = (revignored, revprecursor, revpruned)

cmdtable = {}
command = cmdutil.command(cmdtable)
# Note for extension authors: ONLY specify testedwith = 'internal' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'internal'

def _nothingtorebase():
    return 1

def _makeextrafn(copiers):
    """make an extrafn out of the given copy-functions.

    A copy function takes a context and an extra dict, and mutates the
    extra dict as needed based on the given context.
    """
    def extrafn(ctx, extra):
        for c in copiers:
            c(ctx, extra)
    return extrafn

def _destrebase(repo):
    # Destination defaults to the latest revision in the
    # current branch
    branch = repo[None].branch()
    return repo[branch].rev()

def _revsetdestrebase(repo, subset, x):
    # ``_rebasedefaultdest()``

    # default destination for rebase.
    # # XXX: Currently private because I expect the signature to change.
    # # XXX: - taking rev as arguments,
    # # XXX: - bailing out in case of ambiguity vs returning all data.
    # # XXX: - probably merging with the merge destination.
    # i18n: "_rebasedefaultdest" is a keyword
    revset.getargs(x, 0, 0, _("_rebasedefaultdest takes no arguments"))
    return subset & revset.baseset([_destrebase(repo)])

@command('rebase',
    [('s', 'source', '',
     _('rebase the specified changeset and descendants'), _('REV')),
    ('b', 'base', '',
     _('rebase everything from branching point of specified changeset'),
     _('REV')),
    ('r', 'rev', [],
     _('rebase these revisions'),
     _('REV')),
    ('d', 'dest', '',
     _('rebase onto the specified changeset'), _('REV')),
    ('', 'collapse', False, _('collapse the rebased changesets')),
    ('m', 'message', '',
     _('use text as collapse commit message'), _('TEXT')),
    ('e', 'edit', False, _('invoke editor on commit messages')),
    ('l', 'logfile', '',
     _('read collapse commit message from file'), _('FILE')),
    ('k', 'keep', False, _('keep original changesets')),
    ('', 'keepbranches', False, _('keep original branch names')),
    ('D', 'detach', False, _('(DEPRECATED)')),
    ('i', 'interactive', False, _('(DEPRECATED)')),
    ('t', 'tool', '', _('specify merge tool')),
    ('c', 'continue', False, _('continue an interrupted rebase')),
    ('a', 'abort', False, _('abort an interrupted rebase'))] +
     templateopts,
    _('[-s REV | -b REV] [-d REV] [OPTION]'))
def rebase(ui, repo, **opts):
    """move changeset (and descendants) to a different branch

    Rebase uses repeated merging to graft changesets from one part of
    history (the source) onto another (the destination). This can be
    useful for linearizing *local* changes relative to a master
    development tree.

    You should not rebase changesets that have already been shared
    with others. Doing so will force everybody else to perform the
    same rebase or they will end up with duplicated changesets after
    pulling in your rebased changesets.

    In its default configuration, Mercurial will prevent you from
    rebasing published changes. See :hg:`help phases` for details.

    If you don't specify a destination changeset (``-d/--dest``),
    rebase uses the current branch tip as the destination. (The
    destination changeset is not modified by rebasing, but new
    changesets are added as its descendants.)

    You can specify which changesets to rebase in two ways: as a
    "source" changeset or as a "base" changeset. Both are shorthand
    for a topologically related set of changesets (the "source
    branch"). If you specify source (``-s/--source``), rebase will
    rebase that changeset and all of its descendants onto dest. If you
    specify base (``-b/--base``), rebase will select ancestors of base
    back to but not including the common ancestor with dest. Thus,
    ``-b`` is less precise but more convenient than ``-s``: you can
    specify any changeset in the source branch, and rebase will select
    the whole branch. If you specify neither ``-s`` nor ``-b``, rebase
    uses the parent of the working directory as the base.

    For advanced usage, a third way is available through the ``--rev``
    option. It allows you to specify an arbitrary set of changesets to
    rebase. Descendants of revs you specify with this option are not
    automatically included in the rebase.

    By default, rebase recreates the changesets in the source branch
    as descendants of dest and then destroys the originals. Use
    ``--keep`` to preserve the original source changesets. Some
    changesets in the source branch (e.g. merges from the destination
    branch) may be dropped if they no longer contribute any change.

    One result of the rules for selecting the destination changeset
    and source branch is that, unlike ``merge``, rebase will do
    nothing if you are at the branch tip of a named branch
    with two heads. You need to explicitly specify source and/or
    destination (or ``update`` to the other head, if it's the head of
    the intended source branch).

    If a rebase is interrupted to manually resolve a merge, it can be
    continued with --continue/-c or aborted with --abort/-a.

    .. container:: verbose

      Examples:

      - move "local changes" (current commit back to branching point)
        to the current branch tip after a pull::

          hg rebase

      - move a single changeset to the stable branch::

          hg rebase -r 5f493448 -d stable

      - splice a commit and all its descendants onto another part of history::

          hg rebase --source c0c3 --dest 4cf9

      - rebase everything on a branch marked by a bookmark onto the
        default branch::

          hg rebase --base myfeature --dest default

      - collapse a sequence of changes into a single commit::

          hg rebase --collapse -r 1520:1525 -d .

      - move a named branch while preserving its name::

          hg rebase -r "branch(featureX)" -d 1.3 --keepbranches

    Returns 0 on success, 1 if nothing to rebase or there are
    unresolved conflicts.

    """
    originalwd = target = None
    activebookmark = None
    external = nullrev
    # Mapping between the old revision id and either what is the new rebased
    # revision or what needs to be done with the old revision. The state dict
    # will be what contains most of the rebase progress state.
    state = {}
    skipped = set()
    targetancestors = set()


    lock = wlock = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()

        # Validate input and define rebasing points
        destf = opts.get('dest', None)
        srcf = opts.get('source', None)
        basef = opts.get('base', None)
        revf = opts.get('rev', [])
        contf = opts.get('continue')
        abortf = opts.get('abort')
        collapsef = opts.get('collapse', False)
        collapsemsg = cmdutil.logmessage(ui, opts)
        date = opts.get('date', None)
        e = opts.get('extrafn') # internal, used by e.g. hgsubversion
        extrafns = []
        if e:
            extrafns = [e]
        keepf = opts.get('keep', False)
        keepbranchesf = opts.get('keepbranches', False)
        # keepopen is not meant for use on the command line, but by
        # other extensions
        keepopen = opts.get('keepopen', False)

        if opts.get('interactive'):
            try:
                if extensions.find('histedit'):
                    enablehistedit = ''
            except KeyError:
                enablehistedit = " --config extensions.histedit="
            help = "hg%s help -e histedit" % enablehistedit
            msg = _("interactive history editing is supported by the "
                    "'histedit' extension (see \"%s\")") % help
            raise error.Abort(msg)

        if collapsemsg and not collapsef:
            raise error.Abort(
                _('message can only be specified with collapse'))

        if contf or abortf:
            if contf and abortf:
                raise error.Abort(_('cannot use both abort and continue'))
            if collapsef:
                raise error.Abort(
                    _('cannot use collapse with continue or abort'))
            if srcf or basef or destf:
                raise error.Abort(
                    _('abort and continue do not allow specifying revisions'))
            if abortf and opts.get('tool', False):
                ui.warn(_('tool option will be ignored\n'))

            try:
                (originalwd, target, state, skipped, collapsef, keepf,
                 keepbranchesf, external, activebookmark) = restorestatus(repo)
            except error.RepoLookupError:
                if abortf:
                    clearstatus(repo)
                    repo.ui.warn(_('rebase aborted (no revision is removed,'
                                   ' only broken state is cleared)\n'))
                    return 0
                else:
                    msg = _('cannot continue inconsistent rebase')
                    hint = _('use "hg rebase --abort" to clear broken state')
                    raise error.Abort(msg, hint=hint)
            if abortf:
                return abort(repo, originalwd, target, state,
                             activebookmark=activebookmark)
        else:
            if srcf and basef:
                raise error.Abort(_('cannot specify both a '
                                    'source and a base'))
            if revf and basef:
                raise error.Abort(_('cannot specify both a '
                                    'revision and a base'))
            if revf and srcf:
                raise error.Abort(_('cannot specify both a '
                                    'revision and a source'))

            cmdutil.checkunfinished(repo)
            cmdutil.bailifchanged(repo)

            if destf:
                dest = scmutil.revsingle(repo, destf)
            else:
                dest = repo[_destrebase(repo)]
                destf = str(dest)

            if revf:
                rebaseset = scmutil.revrange(repo, revf)
                if not rebaseset:
                    ui.status(_('empty "rev" revision set - '
                                'nothing to rebase\n'))
                    return _nothingtorebase()
            elif srcf:
                src = scmutil.revrange(repo, [srcf])
                if not src:
                    ui.status(_('empty "source" revision set - '
                                'nothing to rebase\n'))
                    return _nothingtorebase()
                rebaseset = repo.revs('(%ld)::', src)
                assert rebaseset
            else:
                base = scmutil.revrange(repo, [basef or '.'])
                if not base:
                    ui.status(_('empty "base" revision set - '
                                "can't compute rebase set\n"))
                    return _nothingtorebase()
                commonanc = repo.revs('ancestor(%ld, %d)', base, dest).first()
                if commonanc is not None:
                    rebaseset = repo.revs('(%d::(%ld) - %d)::',
                                          commonanc, base, commonanc)
                else:
                    rebaseset = []

                if not rebaseset:
                    # transform to list because smartsets are not comparable to
                    # lists. This should be improved to honor laziness of
                    # smartset.
                    if list(base) == [dest.rev()]:
                        if basef:
                            ui.status(_('nothing to rebase - %s is both "base"'
                                        ' and destination\n') % dest)
                        else:
                            ui.status(_('nothing to rebase - working directory '
                                        'parent is also destination\n'))
                    elif not repo.revs('%ld - ::%d', base, dest):
                        if basef:
                            ui.status(_('nothing to rebase - "base" %s is '
                                        'already an ancestor of destination '
                                        '%s\n') %
                                      ('+'.join(str(repo[r]) for r in base),
                                       dest))
                        else:
                            ui.status(_('nothing to rebase - working '
                                        'directory parent is already an '
                                        'ancestor of destination %s\n') % dest)
                    else: # can it happen?
                        ui.status(_('nothing to rebase from %s to %s\n') %
                                  ('+'.join(str(repo[r]) for r in base), dest))
                    return _nothingtorebase()

            allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
            if (not (keepf or allowunstable)
                  and repo.revs('first(children(%ld) - %ld)',
                                rebaseset, rebaseset)):
                raise error.Abort(
                    _("can't remove original changesets with"
                      " unrebased descendants"),
                    hint=_('use --keep to keep original changesets'))

            obsoletenotrebased = {}
            if ui.configbool('experimental', 'rebaseskipobsolete'):
                rebasesetrevs = set(rebaseset)
                obsoletenotrebased = _computeobsoletenotrebased(repo,
                                                                rebasesetrevs,
                                                                dest)

                # - plain prune (no successor) changesets are rebased
                # - split changesets are not rebased if at least one of the
                #   changeset resulting from the split is an ancestor of dest
                rebaseset = rebasesetrevs - set(obsoletenotrebased)
            result = buildstate(repo, dest, rebaseset, collapsef,
                                obsoletenotrebased)

            if not result:
                # Empty state built, nothing to rebase
                ui.status(_('nothing to rebase\n'))
                return _nothingtorebase()

            root = min(rebaseset)
            if not keepf and not repo[root].mutable():
                raise error.Abort(_("can't rebase public changeset %s")
                                  % repo[root],
                                  hint=_('see "hg help phases" for details'))

            originalwd, target, state = result
            if collapsef:
                targetancestors = repo.changelog.ancestors([target],
                                                           inclusive=True)
                external = externalparent(repo, state, targetancestors)

            if dest.closesbranch() and not keepbranchesf:
                ui.status(_('reopening closed branch head %s\n') % dest)

        if keepbranchesf and collapsef:
            branches = set()
            for rev in state:
                branches.add(repo[rev].branch())
                if len(branches) > 1:
                    raise error.Abort(_('cannot collapse multiple named '
                                        'branches'))

        # Rebase
        if not targetancestors:
            targetancestors = repo.changelog.ancestors([target], inclusive=True)

        # Keep track of the current bookmarks in order to reset them later
        currentbookmarks = repo._bookmarks.copy()
        activebookmark = activebookmark or repo._activebookmark
        if activebookmark:
            bookmarks.deactivate(repo)

        extrafn = _makeextrafn(extrafns)

        sortedstate = sorted(state)
        total = len(sortedstate)
        pos = 0
        for rev in sortedstate:
            ctx = repo[rev]
            desc = '%d:%s "%s"' % (ctx.rev(), ctx,
                                   ctx.description().split('\n', 1)[0])
            names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
            if names:
                desc += ' (%s)' % ' '.join(names)
            pos += 1
            if state[rev] == revtodo:
                ui.status(_('rebasing %s\n') % desc)
                ui.progress(_("rebasing"), pos, ("%d:%s" % (rev, ctx)),
                            _('changesets'), total)
                p1, p2, base = defineparents(repo, rev, target, state,
                                             targetancestors)
                storestatus(repo, originalwd, target, state, collapsef, keepf,
                            keepbranchesf, external, activebookmark)
                if len(repo[None].parents()) == 2:
                    repo.ui.debug('resuming interrupted rebase\n')
                else:
                    try:
                        ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                                     'rebase')
                        stats = rebasenode(repo, rev, p1, base, state,
                                           collapsef, target)
                        if stats and stats[3] > 0:
                            raise error.InterventionRequired(
                                _('unresolved conflicts (see hg '
                                  'resolve, then hg rebase --continue)'))
                    finally:
                        ui.setconfig('ui', 'forcemerge', '', 'rebase')
                if not collapsef:
                    merging = p2 != nullrev
                    editform = cmdutil.mergeeditform(merging, 'rebase')
                    editor = cmdutil.getcommiteditor(editform=editform, **opts)
                    newnode = concludenode(repo, rev, p1, p2, extrafn=extrafn,
                                           editor=editor,
                                           keepbranches=keepbranchesf,
                                           date=date)
                else:
                    # Skip commit if we are collapsing
                    repo.dirstate.beginparentchange()
                    repo.setparents(repo[p1].node())
                    repo.dirstate.endparentchange()
                    newnode = None
                # Update the state
                if newnode is not None:
                    state[rev] = repo[newnode].rev()
                    ui.debug('rebased as %s\n' % short(newnode))
                else:
                    if not collapsef:
                        ui.warn(_('note: rebase of %d:%s created no changes '
                                  'to commit\n') % (rev, ctx))
                    skipped.add(rev)
                    state[rev] = p1
                    ui.debug('next revision set to %s\n' % p1)
            elif state[rev] == nullmerge:
                ui.debug('ignoring null merge rebase of %s\n' % rev)
468 ui.debug('ignoring null merge rebase of %s\n' % rev)
469 elif state[rev] == revignored:
469 elif state[rev] == revignored:
470 ui.status(_('not rebasing ignored %s\n') % desc)
470 ui.status(_('not rebasing ignored %s\n') % desc)
471 elif state[rev] == revprecursor:
471 elif state[rev] == revprecursor:
472 targetctx = repo[obsoletenotrebased[rev]]
472 targetctx = repo[obsoletenotrebased[rev]]
473 desctarget = '%d:%s "%s"' % (targetctx.rev(), targetctx,
473 desctarget = '%d:%s "%s"' % (targetctx.rev(), targetctx,
474 targetctx.description().split('\n', 1)[0])
474 targetctx.description().split('\n', 1)[0])
475 msg = _('note: not rebasing %s, already in destination as %s\n')
475 msg = _('note: not rebasing %s, already in destination as %s\n')
476 ui.status(msg % (desc, desctarget))
476 ui.status(msg % (desc, desctarget))
477 elif state[rev] == revpruned:
477 elif state[rev] == revpruned:
478 msg = _('note: not rebasing %s, it has no successor\n')
478 msg = _('note: not rebasing %s, it has no successor\n')
479 ui.status(msg % desc)
479 ui.status(msg % desc)
480 else:
480 else:
481 ui.status(_('already rebased %s as %s\n') %
481 ui.status(_('already rebased %s as %s\n') %
482 (desc, repo[state[rev]]))
482 (desc, repo[state[rev]]))
483
483
484 ui.progress(_('rebasing'), None)
484 ui.progress(_('rebasing'), None)
485 ui.note(_('rebase merging completed\n'))
485 ui.note(_('rebase merging completed\n'))
486
486
487 if collapsef and not keepopen:
487 if collapsef and not keepopen:
488 p1, p2, _base = defineparents(repo, min(state), target,
488 p1, p2, _base = defineparents(repo, min(state), target,
489 state, targetancestors)
489 state, targetancestors)
490 editopt = opts.get('edit')
490 editopt = opts.get('edit')
491 editform = 'rebase.collapse'
491 editform = 'rebase.collapse'
492 if collapsemsg:
492 if collapsemsg:
493 commitmsg = collapsemsg
493 commitmsg = collapsemsg
494 else:
494 else:
495 commitmsg = 'Collapsed revision'
495 commitmsg = 'Collapsed revision'
496 for rebased in state:
496 for rebased in state:
497 if rebased not in skipped and state[rebased] > nullmerge:
497 if rebased not in skipped and state[rebased] > nullmerge:
498 commitmsg += '\n* %s' % repo[rebased].description()
498 commitmsg += '\n* %s' % repo[rebased].description()
499 editopt = True
499 editopt = True
500 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
500 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
501 newnode = concludenode(repo, rev, p1, external, commitmsg=commitmsg,
501 newnode = concludenode(repo, rev, p1, external, commitmsg=commitmsg,
502 extrafn=extrafn, editor=editor,
502 extrafn=extrafn, editor=editor,
503 keepbranches=keepbranchesf,
503 keepbranches=keepbranchesf,
504 date=date)
504 date=date)
505 if newnode is None:
505 if newnode is None:
506 newrev = target
506 newrev = target
507 else:
507 else:
508 newrev = repo[newnode].rev()
508 newrev = repo[newnode].rev()
509 for oldrev in state.iterkeys():
509 for oldrev in state.iterkeys():
510 if state[oldrev] > nullmerge:
510 if state[oldrev] > nullmerge:
511 state[oldrev] = newrev
511 state[oldrev] = newrev
512
512
513 if 'qtip' in repo.tags():
513 if 'qtip' in repo.tags():
514 updatemq(repo, state, skipped, **opts)
514 updatemq(repo, state, skipped, **opts)
515
515
516 if currentbookmarks:
516 if currentbookmarks:
517 # Nodeids are needed to reset bookmarks
517 # Nodeids are needed to reset bookmarks
518 nstate = {}
518 nstate = {}
519 for k, v in state.iteritems():
519 for k, v in state.iteritems():
520 if v > nullmerge:
520 if v > nullmerge:
521 nstate[repo[k].node()] = repo[v].node()
521 nstate[repo[k].node()] = repo[v].node()
522 # XXX this is the same as dest.node() for the non-continue path --
522 # XXX this is the same as dest.node() for the non-continue path --
523 # this should probably be cleaned up
523 # this should probably be cleaned up
524 targetnode = repo[target].node()
524 targetnode = repo[target].node()
525
525
526 # restore original working directory
526 # restore original working directory
527 # (we do this before stripping)
527 # (we do this before stripping)
528 newwd = state.get(originalwd, originalwd)
528 newwd = state.get(originalwd, originalwd)
529 if newwd < 0:
529 if newwd < 0:
530 # original directory is a parent of rebase set root or ignored
530 # original directory is a parent of rebase set root or ignored
531 newwd = originalwd
531 newwd = originalwd
532 if newwd not in [c.rev() for c in repo[None].parents()]:
532 if newwd not in [c.rev() for c in repo[None].parents()]:
533 ui.note(_("update back to initial working directory parent\n"))
533 ui.note(_("update back to initial working directory parent\n"))
534 hg.updaterepo(repo, newwd, False)
534 hg.updaterepo(repo, newwd, False)
535
535
536 if not keepf:
536 if not keepf:
537 collapsedas = None
537 collapsedas = None
538 if collapsef:
538 if collapsef:
539 collapsedas = newnode
539 collapsedas = newnode
540 clearrebased(ui, repo, state, skipped, collapsedas)
540 clearrebased(ui, repo, state, skipped, collapsedas)
541
541
542 tr = None
542 tr = None
543 try:
543 try:
544 tr = repo.transaction('bookmark')
544 tr = repo.transaction('bookmark')
545 if currentbookmarks:
545 if currentbookmarks:
546 updatebookmarks(repo, targetnode, nstate, currentbookmarks, tr)
546 updatebookmarks(repo, targetnode, nstate, currentbookmarks, tr)
547 if activebookmark not in repo._bookmarks:
547 if activebookmark not in repo._bookmarks:
548 # active bookmark was divergent one and has been deleted
548 # active bookmark was divergent one and has been deleted
549 activebookmark = None
549 activebookmark = None
550 tr.close()
550 tr.close()
551 finally:
551 finally:
552 release(tr)
552 release(tr)
553 clearstatus(repo)
553 clearstatus(repo)
554
554
555 ui.note(_("rebase completed\n"))
555 ui.note(_("rebase completed\n"))
556 util.unlinkpath(repo.sjoin('undo'), ignoremissing=True)
556 util.unlinkpath(repo.sjoin('undo'), ignoremissing=True)
557 if skipped:
557 if skipped:
558 ui.note(_("%d revisions have been skipped\n") % len(skipped))
558 ui.note(_("%d revisions have been skipped\n") % len(skipped))
559
559
560 if (activebookmark and
560 if (activebookmark and
561 repo['.'].node() == repo._bookmarks[activebookmark]):
561 repo['.'].node() == repo._bookmarks[activebookmark]):
562 bookmarks.activate(repo, activebookmark)
562 bookmarks.activate(repo, activebookmark)
563
563
564 finally:
564 finally:
565 release(lock, wlock)
565 release(lock, wlock)
566
566
def externalparent(repo, state, targetancestors):
    """Return the revision that should be used as the second parent
    when the revisions in state are collapsed on top of targetancestors.
    Abort if there is more than one parent.
    """
    parents = set()
    source = min(state)
    for rev in state:
        if rev == source:
            continue
        for p in repo[rev].parents():
            if (p.rev() not in state
                and p.rev() not in targetancestors):
                parents.add(p.rev())
    if not parents:
        return nullrev
    if len(parents) == 1:
        return parents.pop()
    raise error.Abort(_('unable to collapse on top of %s, there is more '
                        'than one external parent: %s') %
                      (max(targetancestors),
                       ', '.join(str(p) for p in sorted(parents))))

def concludenode(repo, rev, p1, p2, commitmsg=None, editor=None, extrafn=None,
                 keepbranches=False, date=None):
    '''Commit the wd changes with parents p1 and p2. Reuse commit info from rev
    but also store useful information in extra.
    Return node of committed revision.'''
    dsguard = cmdutil.dirstateguard(repo, 'rebase')
    try:
        repo.setparents(repo[p1].node(), repo[p2].node())
        ctx = repo[rev]
        if commitmsg is None:
            commitmsg = ctx.description()
        keepbranch = keepbranches and repo[p1].branch() != ctx.branch()
        extra = ctx.extra().copy()
        if not keepbranches:
            del extra['branch']
        extra['rebase_source'] = ctx.hex()
        if extrafn:
            extrafn(ctx, extra)

        backup = repo.ui.backupconfig('phases', 'new-commit')
        try:
            targetphase = max(ctx.phase(), phases.draft)
            repo.ui.setconfig('phases', 'new-commit', targetphase, 'rebase')
            if keepbranch:
                repo.ui.setconfig('ui', 'allowemptycommit', True)
            # Commit might fail if unresolved files exist
            if date is None:
                date = ctx.date()
            newnode = repo.commit(text=commitmsg, user=ctx.user(),
                                  date=date, extra=extra, editor=editor)
        finally:
            repo.ui.restoreconfig(backup)

        repo.dirstate.setbranch(repo[newnode].branch())
        dsguard.close()
        return newnode
    finally:
        release(dsguard)

def rebasenode(repo, rev, p1, base, state, collapse, target):
    'Rebase a single revision rev on top of p1 using base as merge ancestor'
    # Merge phase
    # Update to target and merge it with local
    if repo['.'].rev() != p1:
        repo.ui.debug(" update to %d:%s\n" % (p1, repo[p1]))
        merge.update(repo, p1, False, True)
    else:
        repo.ui.debug(" already in target\n")
    repo.dirstate.write(repo.currenttransaction())
    repo.ui.debug(" merge against %d:%s\n" % (rev, repo[rev]))
    if base is not None:
        repo.ui.debug(" detach base %d:%s\n" % (base, repo[base]))
    # When collapsing in-place, the parent is the common ancestor, we
    # have to allow merging with it.
    stats = merge.update(repo, rev, True, True, base, collapse,
                         labels=['dest', 'source'])
    if collapse:
        copies.duplicatecopies(repo, rev, target)
    else:
        # If we're not using --collapse, we need to
        # duplicate copies between the revision we're
        # rebasing and its first parent, but *not*
        # duplicate any copies that have already been
        # performed in the destination.
        p1rev = repo[rev].p1().rev()
        copies.duplicatecopies(repo, rev, p1rev, skiprev=target)
    return stats

def nearestrebased(repo, rev, state):
    """return the nearest ancestors of rev in the rebase result"""
    rebased = [r for r in state if state[r] > nullmerge]
    candidates = repo.revs('max(%ld and (::%d))', rebased, rev)
    if candidates:
        return state[candidates.first()]
    else:
        return None

def defineparents(repo, rev, target, state, targetancestors):
    'Return the new parent relationship of the revision that will be rebased'
    parents = repo[rev].parents()
    p1 = p2 = nullrev

    p1n = parents[0].rev()
    if p1n in targetancestors:
        p1 = target
    elif p1n in state:
        if state[p1n] == nullmerge:
            p1 = target
        elif state[p1n] in revskipped:
            p1 = nearestrebased(repo, p1n, state)
            if p1 is None:
                p1 = target
        else:
            p1 = state[p1n]
    else: # p1n external
        p1 = target
        p2 = p1n

    if len(parents) == 2 and parents[1].rev() not in targetancestors:
        p2n = parents[1].rev()
        # interesting second parent
        if p2n in state:
            if p1 == target: # p1n in targetancestors or external
                p1 = state[p2n]
            elif state[p2n] in revskipped:
                p2 = nearestrebased(repo, p2n, state)
                if p2 is None:
                    # no ancestors rebased yet, detach
                    p2 = target
            else:
                p2 = state[p2n]
        else: # p2n external
            if p2 != nullrev: # p1n external too => rev is a merged revision
                raise error.Abort(_('cannot use revision %d as base, result '
                                    'would have 3 parents') % rev)
            p2 = p2n
    repo.ui.debug(" future parents are %d and %d\n" %
                  (repo[p1].rev(), repo[p2].rev()))

    if rev == min(state):
        # Case (1) initial changeset of a non-detaching rebase.
        # Let the merge mechanism find the base itself.
        base = None
    elif not repo[rev].p2():
        # Case (2) detaching the node with a single parent, use this parent
        base = repo[rev].p1().rev()
    else:
        # Assuming there is a p1, this is the case where there also is a p2.
        # We are thus rebasing a merge and need to pick the right merge base.
        #
        # Imagine we have:
        # - M: current rebase revision in this step
        # - A: one parent of M
        # - B: other parent of M
        # - D: destination of this merge step (p1 var)
        #
        # Consider the case where D is a descendant of A or B and the other is
        # 'outside'. In this case, the right merge base is the D ancestor.
        #
        # An informal proof, assuming A is 'outside' and B is the D ancestor:
        #
        # If we pick B as the base, the merge involves:
        # - changes from B to M (actual changeset payload)
        # - changes from B to D (induced by rebase, as D is a rebased
        #   version of B)
        # which exactly represents the rebase operation.
        #
        # If we pick A as the base, the merge involves:
        # - changes from A to M (actual changeset payload)
        # - changes from A to D (which include changes between the unrelated A
        #   and B, plus changes induced by rebase)
        # which does not represent anything sensible and creates a lot of
        # conflicts. A is thus not the right choice - B is.
        #
        # Note: The base found in this 'proof' is only correct in the specified
        # case. This base does not make sense if D is not a descendant of A or
        # B, or if the other parent is not 'outside' (especially not if the
        # other parent has been rebased). The current implementation does not
        # make it feasible to consider different cases separately. In these
        # other cases we currently just leave it to the user to correctly
        # resolve an impossible merge using a wrong ancestor.
        for p in repo[rev].parents():
            if state.get(p.rev()) == p1:
                base = p.rev()
                break
        else: # fallback when base not found
            base = None

            # Raise because this function is called wrong (see issue 4106)
            raise AssertionError('no base found to rebase on '
                                 '(defineparents called wrong)')
    return p1, p2, base

def isagitpatch(repo, patchname):
    'Return true if the given patch is in git format'
    mqpatch = os.path.join(repo.mq.path, patchname)
    for line in patch.linereader(file(mqpatch, 'rb')):
        if line.startswith('diff --git'):
            return True
    return False

def updatemq(repo, state, skipped, **opts):
    'Update rebased mq patches - finalize and then import them'
    mqrebase = {}
    mq = repo.mq
    original_series = mq.fullseries[:]
    skippedpatches = set()

    for p in mq.applied:
        rev = repo[p.node].rev()
        if rev in state:
            repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
                          (rev, p.name))
            mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
        else:
            # Applied but not rebased, not sure this should happen
            skippedpatches.add(p.name)

    if mqrebase:
        mq.finish(repo, mqrebase.keys())

        # We must start import from the newest revision
        for rev in sorted(mqrebase, reverse=True):
            if rev not in skipped:
                name, isgit = mqrebase[rev]
                repo.ui.note(_('updating mq patch %s to %s:%s\n') %
                             (name, state[rev], repo[state[rev]]))
                mq.qimport(repo, (), patchname=name, git=isgit,
                           rev=[str(state[rev])])
            else:
                # Rebased and skipped
                skippedpatches.add(mqrebase[rev][0])

        # Patches were either applied and rebased and imported in
        # order, applied and removed or unapplied. Discard the removed
        # ones while preserving the original series order and guards.
        newseries = [s for s in original_series
                     if mq.guard_re.split(s, 1)[0] not in skippedpatches]
        mq.fullseries[:] = newseries
        mq.seriesdirty = True
        mq.savedirty()

def updatebookmarks(repo, targetnode, nstate, originalbookmarks, tr):
    'Move bookmarks to their correct changesets, and delete divergent ones'
    marks = repo._bookmarks
    for k, v in originalbookmarks.iteritems():
        if v in nstate:
            # update the bookmarks for revs that have moved
            marks[k] = nstate[v]
            bookmarks.deletedivergent(repo, [targetnode], k)
    marks.recordchange(tr)

def storestatus(repo, originalwd, target, state, collapse, keep, keepbranches,
                external, activebookmark):
    'Store the current status to allow recovery'
    f = repo.vfs("rebasestate", "w")
    f.write(repo[originalwd].hex() + '\n')
    f.write(repo[target].hex() + '\n')
    f.write(repo[external].hex() + '\n')
    f.write('%d\n' % int(collapse))
    f.write('%d\n' % int(keep))
    f.write('%d\n' % int(keepbranches))
    f.write('%s\n' % (activebookmark or ''))
    for d, v in state.iteritems():
        oldrev = repo[d].hex()
        if v >= 0:
            newrev = repo[v].hex()
        elif v == revtodo:
            # To maintain format compatibility, we have to use nullid.
            # Please do remove this special case when upgrading the format.
            newrev = hex(nullid)
        else:
            newrev = v
        f.write("%s:%s\n" % (oldrev, newrev))
    f.close()
    repo.ui.debug('rebase status stored\n')

def clearstatus(repo):
    'Remove the status files'
    _clearrebasesetvisibiliy(repo)
    util.unlinkpath(repo.join("rebasestate"), ignoremissing=True)

852 def restorestatus(repo):
852 def restorestatus(repo):
853 'Restore a previously stored status'
853 'Restore a previously stored status'
854 keepbranches = None
854 keepbranches = None
855 target = None
855 target = None
856 collapse = False
856 collapse = False
857 external = nullrev
857 external = nullrev
858 activebookmark = None
858 activebookmark = None
859 state = {}
859 state = {}
860
860
861 try:
861 try:
862 f = repo.vfs("rebasestate")
862 f = repo.vfs("rebasestate")
863 for i, l in enumerate(f.read().splitlines()):
863 for i, l in enumerate(f.read().splitlines()):
864 if i == 0:
864 if i == 0:
865 originalwd = repo[l].rev()
865 originalwd = repo[l].rev()
866 elif i == 1:
866 elif i == 1:
867 target = repo[l].rev()
867 target = repo[l].rev()
868 elif i == 2:
868 elif i == 2:
869 external = repo[l].rev()
869 external = repo[l].rev()
870 elif i == 3:
871 collapse = bool(int(l))
872 elif i == 4:
873 keep = bool(int(l))
874 elif i == 5:
875 keepbranches = bool(int(l))
876 elif i == 6 and not (len(l) == 81 and ':' in l):
877 # line 6 is a recent addition, so for backwards compatibility
878 # check that the line doesn't look like the oldrev:newrev lines
879 activebookmark = l
880 else:
881 oldrev, newrev = l.split(':')
882 if newrev in (str(nullmerge), str(revignored),
883 str(revprecursor), str(revpruned)):
884 state[repo[oldrev].rev()] = int(newrev)
885 elif newrev == nullid:
886 state[repo[oldrev].rev()] = revtodo
887 # Legacy compat special case
888 else:
889 state[repo[oldrev].rev()] = repo[newrev].rev()
890 
891 except IOError as err:
892 if err.errno != errno.ENOENT:
893 raise
894 raise error.Abort(_('no rebase in progress'))
895 
896 if keepbranches is None:
897 raise error.Abort(_('.hg/rebasestate is incomplete'))
898 
899 skipped = set()
900 # recompute the set of skipped revs
901 if not collapse:
902 seen = set([target])
903 for old, new in sorted(state.items()):
904 if new != revtodo and new in seen:
905 skipped.add(old)
906 seen.add(new)
907 repo.ui.debug('computed skipped revs: %s\n' %
908 (' '.join(str(r) for r in sorted(skipped)) or None))
909 repo.ui.debug('rebase status resumed\n')
910 _setrebasesetvisibility(repo, state.keys())
911 return (originalwd, target, state, skipped,
912 collapse, keep, keepbranches, external, activebookmark)
913 
914 def needupdate(repo, state):
915 '''check whether we should `update --clean` away from a merge, or if
916 somehow the working dir got forcibly updated, e.g. by older hg'''
917 parents = [p.rev() for p in repo[None].parents()]
918 
919 # Are we in a merge state at all?
920 if len(parents) < 2:
921 return False
922 
923 # We should be standing on the first as-of-yet unrebased commit.
924 firstunrebased = min([old for old, new in state.iteritems()
925 if new == nullrev])
926 if firstunrebased in parents:
927 return True
928 
929 return False
930 
931 def abort(repo, originalwd, target, state, activebookmark=None):
932 '''Restore the repository to its original state. Additional args:
933 
934 activebookmark: the name of the bookmark that should be active after the
935 restore'''
936 
937 try:
938 # If the first commits in the rebased set get skipped during the rebase,
939 # their values within the state mapping will be the target rev id. The
940 # dstates list must not contain the target rev (issue4896)
941 dstates = [s for s in state.values() if s >= 0 and s != target]
942 immutable = [d for d in dstates if not repo[d].mutable()]
943 cleanup = True
944 if immutable:
945 repo.ui.warn(_("warning: can't clean up public changesets %s\n")
946 % ', '.join(str(repo[r]) for r in immutable),
947 hint=_('see "hg help phases" for details'))
948 cleanup = False
949 
950 descendants = set()
951 if dstates:
952 descendants = set(repo.changelog.descendants(dstates))
953 if descendants - set(dstates):
954 repo.ui.warn(_("warning: new changesets detected on target branch, "
955 "can't strip\n"))
956 cleanup = False
957 
958 if cleanup:
959 # Update away from the rebase if necessary
960 if needupdate(repo, state):
961 merge.update(repo, originalwd, False, True, False)
961 merge.update(repo, originalwd, False, True)
962 
963 # Strip from the first rebased revision
964 rebased = filter(lambda x: x >= 0 and x != target, state.values())
965 if rebased:
966 strippoints = [
967 c.node() for c in repo.set('roots(%ld)', rebased)]
968 # no backup of rebased cset versions needed
969 repair.strip(repo.ui, repo, strippoints)
970 
971 if activebookmark and activebookmark in repo._bookmarks:
972 bookmarks.activate(repo, activebookmark)
973 
974 finally:
975 clearstatus(repo)
976 repo.ui.warn(_('rebase aborted\n'))
977 return 0
978 
979 def buildstate(repo, dest, rebaseset, collapse, obsoletenotrebased):
980 '''Define which revisions are going to be rebased and where
981 
982 repo: repo
983 dest: context
984 rebaseset: set of rev
985 '''
986 _setrebasesetvisibility(repo, rebaseset)
987 
988 # This check isn't strictly necessary, since mq detects commits over an
989 # applied patch. But it prevents messing up the working directory when
990 # a partially completed rebase is blocked by mq.
991 if 'qtip' in repo.tags() and (dest.node() in
992 [s.node for s in repo.mq.applied]):
993 raise error.Abort(_('cannot rebase onto an applied mq patch'))
994 
995 roots = list(repo.set('roots(%ld)', rebaseset))
996 if not roots:
997 raise error.Abort(_('no matching revisions'))
998 roots.sort()
999 state = {}
1000 detachset = set()
1001 for root in roots:
1002 commonbase = root.ancestor(dest)
1003 if commonbase == root:
1004 raise error.Abort(_('source is ancestor of destination'))
1005 if commonbase == dest:
1006 samebranch = root.branch() == dest.branch()
1007 if not collapse and samebranch and root in dest.children():
1008 repo.ui.debug('source is a child of destination\n')
1009 return None
1010 
1011 repo.ui.debug('rebase onto %d starting from %s\n' % (dest, root))
1012 state.update(dict.fromkeys(rebaseset, revtodo))
1013 # Rebase tries to turn <dest> into a parent of <root> while
1014 # preserving the number of parents of rebased changesets:
1015 #
1016 # - A changeset with a single parent will always be rebased as a
1017 # changeset with a single parent.
1018 #
1019 # - A merge will be rebased as merge unless its parents are both
1020 # ancestors of <dest> or are themselves in the rebased set and
1021 # pruned while rebased.
1022 #
1023 # If one parent of <root> is an ancestor of <dest>, the rebased
1024 # version of this parent will be <dest>. This is always true with
1025 # --base option.
1026 #
1027 # Otherwise, we need to *replace* the original parents with
1028 # <dest>. This "detaches" the rebased set from its former location
1029 # and rebases it onto <dest>. Changes introduced by ancestors of
1030 # <root> not common with <dest> (the detachset, marked as
1031 # nullmerge) are "removed" from the rebased changesets.
1032 #
1033 # - If <root> has a single parent, set it to <dest>.
1034 #
1035 # - If <root> is a merge, we cannot decide which parent to
1036 # replace, the rebase operation is not clearly defined.
1037 #
1038 # The table below sums up this behavior:
1039 #
1040 # +------------------+----------------------+-------------------------+
1041 # | | one parent | merge |
1042 # +------------------+----------------------+-------------------------+
1043 # | parent in | new parent is <dest> | parents in ::<dest> are |
1044 # | ::<dest> | | remapped to <dest> |
1045 # +------------------+----------------------+-------------------------+
1046 # | unrelated source | new parent is <dest> | ambiguous, abort |
1047 # +------------------+----------------------+-------------------------+
1048 #
1049 # The actual abort is handled by `defineparents`
1050 if len(root.parents()) <= 1:
1051 # ancestors of <root> not ancestors of <dest>
1052 detachset.update(repo.changelog.findmissingrevs([commonbase.rev()],
1053 [root.rev()]))
1054 for r in detachset:
1055 if r not in state:
1056 state[r] = nullmerge
1057 if len(roots) > 1:
1058 # If we have multiple roots, we may have "holes" in the rebase set.
1059 # Rebase roots that descend from those "holes" should not be detached as
1060 # other roots are. We use the special `revignored` to inform rebase that
1061 # the revision should be ignored but that `defineparents` should search
1062 # a rebase destination that makes sense regarding rebased topology.
1063 rebasedomain = set(repo.revs('%ld::%ld', rebaseset, rebaseset))
1064 for ignored in set(rebasedomain) - set(rebaseset):
1065 state[ignored] = revignored
1066 for r in obsoletenotrebased:
1067 if obsoletenotrebased[r] is None:
1068 state[r] = revpruned
1069 else:
1070 state[r] = revprecursor
1071 return repo['.'].rev(), dest.rev(), state
1072 
1073 def clearrebased(ui, repo, state, skipped, collapsedas=None):
1074 """dispose of rebased revisions at the end of the rebase
1075 
1076 If `collapsedas` is not None, the rebase was a collapse whose result is the
1077 `collapsedas` node."""
1078 if obsolete.isenabled(repo, obsolete.createmarkersopt):
1079 markers = []
1080 for rev, newrev in sorted(state.items()):
1081 if newrev >= 0:
1082 if rev in skipped:
1083 succs = ()
1084 elif collapsedas is not None:
1085 succs = (repo[collapsedas],)
1086 else:
1087 succs = (repo[newrev],)
1088 markers.append((repo[rev], succs))
1089 if markers:
1090 obsolete.createmarkers(repo, markers)
1091 else:
1092 rebased = [rev for rev in state if state[rev] > nullmerge]
1093 if rebased:
1094 stripped = []
1095 for root in repo.set('roots(%ld)', rebased):
1096 if set(repo.changelog.descendants([root.rev()])) - set(state):
1097 ui.warn(_("warning: new changesets detected "
1098 "on source branch, not stripping\n"))
1099 else:
1100 stripped.append(root.node())
1101 if stripped:
1102 # backup the old csets by default
1103 repair.strip(ui, repo, stripped, "all")
1104 
1105 
1106 def pullrebase(orig, ui, repo, *args, **opts):
1107 'Call rebase after pull if the latter has been invoked with --rebase'
1108 ret = None
1109 if opts.get('rebase'):
1110 wlock = lock = None
1111 try:
1112 wlock = repo.wlock()
1113 lock = repo.lock()
1114 if opts.get('update'):
1115 del opts['update']
1116 ui.debug('--update and --rebase are not compatible, ignoring '
1117 'the update flag\n')
1118 
1119 movemarkfrom = repo['.'].node()
1120 revsprepull = len(repo)
1121 origpostincoming = commands.postincoming
1122 def _dummy(*args, **kwargs):
1123 pass
1124 commands.postincoming = _dummy
1125 try:
1126 ret = orig(ui, repo, *args, **opts)
1127 finally:
1128 commands.postincoming = origpostincoming
1129 revspostpull = len(repo)
1130 if revspostpull > revsprepull:
1131 # --rev option from pull conflicts with rebase's own --rev;
1132 # drop it
1133 if 'rev' in opts:
1134 del opts['rev']
1135 # positional argument from pull conflicts with rebase's own
1136 # --source.
1137 if 'source' in opts:
1138 del opts['source']
1139 rebase(ui, repo, **opts)
1140 branch = repo[None].branch()
1141 dest = repo[branch].rev()
1142 if dest != repo['.'].rev():
1143 # there was nothing to rebase, so we force an update
1144 hg.update(repo, dest)
1145 if bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
1146 ui.status(_("updating bookmark %s\n")
1147 % repo._activebookmark)
1148 finally:
1149 release(lock, wlock)
1150 else:
1151 if opts.get('tool'):
1152 raise error.Abort(_('--tool can only be used with --rebase'))
1153 ret = orig(ui, repo, *args, **opts)
1154 
1155 return ret
1156 
1157 def _setrebasesetvisibility(repo, revs):
1158 """store the currently rebased set on the repo object
1158 """store the currently rebased set on the repo object
1159
1159
1160 This is used by another function to prevent rebased revision to because
1160 This is used by another function to prevent rebased revision to because
1161 hidden (see issue4505)"""
1161 hidden (see issue4505)"""
1162 repo = repo.unfiltered()
1163 revs = set(revs)
1164 repo._rebaseset = revs
1165 # invalidate cache if visibility changes
1166 hiddens = repo.filteredrevcache.get('visible', set())
1167 if revs & hiddens:
1168 repo.invalidatevolatilesets()
1169 
1170 def _clearrebasesetvisibiliy(repo):
1171 """remove rebaseset data from the repo"""
1172 repo = repo.unfiltered()
1173 if '_rebaseset' in vars(repo):
1174 del repo._rebaseset
1175 
1176 def _rebasedvisible(orig, repo):
1177 """ensure rebased revs stay visible (see issue4505)"""
1178 blockers = orig(repo)
1179 blockers.update(getattr(repo, '_rebaseset', ()))
1180 return blockers
1181 
1182 def _computeobsoletenotrebased(repo, rebasesetrevs, dest):
1183 """return a mapping obsolete => successor for all obsolete nodes to be
1183 """return a mapping obsolete => successor for all obsolete nodes to be
1184 rebased that have a successors in the destination
1184 rebased that have a successors in the destination
1185
1185
1186 obsolete => None entries in the mapping indicate nodes with no succesor"""
1186 obsolete => None entries in the mapping indicate nodes with no succesor"""
1187 obsoletenotrebased = {}
1188 
1189 # Build a mapping successor => obsolete nodes for the obsolete
1190 # nodes to be rebased
1191 allsuccessors = {}
1192 cl = repo.changelog
1193 for r in rebasesetrevs:
1194 n = repo[r]
1195 if n.obsolete():
1196 node = cl.node(r)
1197 for s in obsolete.allsuccessors(repo.obsstore, [node]):
1198 try:
1199 allsuccessors[cl.rev(s)] = cl.rev(node)
1200 except LookupError:
1201 pass
1202 
1203 if allsuccessors:
1204 # Look for successors of obsolete nodes to be rebased among
1205 # the ancestors of dest
1206 ancs = cl.ancestors([repo[dest].rev()],
1207 stoprev=min(allsuccessors),
1208 inclusive=True)
1209 for s in allsuccessors:
1210 if s in ancs:
1211 obsoletenotrebased[allsuccessors[s]] = s
1212 elif (s == allsuccessors[s] and
1213 allsuccessors.values().count(s) == 1):
1214 # plain prune
1215 obsoletenotrebased[s] = None
1216 
1217 return obsoletenotrebased
1218 
1219 def summaryhook(ui, repo):
1220 if not os.path.exists(repo.join('rebasestate')):
1221 return
1222 try:
1223 state = restorestatus(repo)[2]
1224 except error.RepoLookupError:
1225 # i18n: column positioning for "hg summary"
1226 msg = _('rebase: (use "hg rebase --abort" to clear broken state)\n')
1227 ui.write(msg)
1228 return
1229 numrebased = len([i for i in state.itervalues() if i >= 0])
1230 # i18n: column positioning for "hg summary"
1231 ui.write(_('rebase: %s, %s (rebase --continue)\n') %
1232 (ui.label(_('%d rebased'), 'rebase.rebased') % numrebased,
1233 ui.label(_('%d remaining'), 'rebase.remaining') %
1234 (len(state) - numrebased)))
1235 
1236 def uisetup(ui):
1237 # Replace pull with a decorator to provide --rebase option
1238 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
1239 entry[1].append(('', 'rebase', None,
1240 _("rebase working directory to branch head")))
1241 entry[1].append(('t', 'tool', '',
1242 _("specify merge tool for rebase")))
1243 cmdutil.summaryhooks.add('rebase', summaryhook)
1244 cmdutil.unfinishedstates.append(
1245 ['rebasestate', False, False, _('rebase in progress'),
1246 _("use 'hg rebase --continue' or 'hg rebase --abort'")])
1247 # ensure rebased revs are not hidden
1248 extensions.wrapfunction(repoview, '_getdynamicblockers', _rebasedvisible)
1249 revset.symbols['_destrebase'] = _revsetdestrebase
@@ -1,721 +1,721 b''
1 # Patch transplanting extension for Mercurial
1 # Patch transplanting extension for Mercurial
2 #
2 #
3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''command to transplant changesets from another branch
8 '''command to transplant changesets from another branch
9
9
10 This extension allows you to transplant changes to another parent revision,
10 This extension allows you to transplant changes to another parent revision,
11 possibly in another repository. The transplant is done using 'diff' patches.
11 possibly in another repository. The transplant is done using 'diff' patches.
12
12
13 Transplanted patches are recorded in .hg/transplant/transplants, as a
13 Transplanted patches are recorded in .hg/transplant/transplants, as a
14 map from a changeset hash to its hash in the source repository.
14 map from a changeset hash to its hash in the source repository.
15 '''
15 '''
16
16
17 from mercurial.i18n import _
17 from mercurial.i18n import _
18 import os, tempfile
18 import os, tempfile
19 from mercurial.node import short
19 from mercurial.node import short
20 from mercurial import bundlerepo, hg, merge, match
20 from mercurial import bundlerepo, hg, merge, match
21 from mercurial import patch, revlog, scmutil, util, error, cmdutil
21 from mercurial import patch, revlog, scmutil, util, error, cmdutil
22 from mercurial import revset, templatekw, exchange
22 from mercurial import revset, templatekw, exchange
23 from mercurial import lock as lockmod
23 from mercurial import lock as lockmod
24
24
25 class TransplantError(error.Abort):
25 class TransplantError(error.Abort):
26 pass
26 pass
27
27
28 cmdtable = {}
28 cmdtable = {}
29 command = cmdutil.command(cmdtable)
29 command = cmdutil.command(cmdtable)
30 # Note for extension authors: ONLY specify testedwith = 'internal' for
30 # Note for extension authors: ONLY specify testedwith = 'internal' for
31 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
31 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
32 # be specifying the version(s) of Mercurial they are tested with, or
32 # be specifying the version(s) of Mercurial they are tested with, or
33 # leave the attribute unspecified.
33 # leave the attribute unspecified.
34 testedwith = 'internal'
34 testedwith = 'internal'
35
35
36 class transplantentry(object):
36 class transplantentry(object):
37 def __init__(self, lnode, rnode):
37 def __init__(self, lnode, rnode):
38 self.lnode = lnode
38 self.lnode = lnode
39 self.rnode = rnode
39 self.rnode = rnode
40
40
41 class transplants(object):
41 class transplants(object):
42 def __init__(self, path=None, transplantfile=None, opener=None):
42 def __init__(self, path=None, transplantfile=None, opener=None):
43 self.path = path
43 self.path = path
44 self.transplantfile = transplantfile
44 self.transplantfile = transplantfile
45 self.opener = opener
45 self.opener = opener
46
46
47 if not opener:
47 if not opener:
48 self.opener = scmutil.opener(self.path)
48 self.opener = scmutil.opener(self.path)
49 self.transplants = {}
49 self.transplants = {}
50 self.dirty = False
50 self.dirty = False
51 self.read()
51 self.read()
52
52
53 def read(self):
53 def read(self):
54 abspath = os.path.join(self.path, self.transplantfile)
54 abspath = os.path.join(self.path, self.transplantfile)
55 if self.transplantfile and os.path.exists(abspath):
55 if self.transplantfile and os.path.exists(abspath):
56 for line in self.opener.read(self.transplantfile).splitlines():
56 for line in self.opener.read(self.transplantfile).splitlines():
57 lnode, rnode = map(revlog.bin, line.split(':'))
57 lnode, rnode = map(revlog.bin, line.split(':'))
58 list = self.transplants.setdefault(rnode, [])
58 list = self.transplants.setdefault(rnode, [])
59 list.append(transplantentry(lnode, rnode))
59 list.append(transplantentry(lnode, rnode))
60
60
61 def write(self):
61 def write(self):
62 if self.dirty and self.transplantfile:
62 if self.dirty and self.transplantfile:
63 if not os.path.isdir(self.path):
63 if not os.path.isdir(self.path):
64 os.mkdir(self.path)
64 os.mkdir(self.path)
65 fp = self.opener(self.transplantfile, 'w')
65 fp = self.opener(self.transplantfile, 'w')
66 for list in self.transplants.itervalues():
66 for list in self.transplants.itervalues():
67 for t in list:
67 for t in list:
68 l, r = map(revlog.hex, (t.lnode, t.rnode))
68 l, r = map(revlog.hex, (t.lnode, t.rnode))
69 fp.write(l + ':' + r + '\n')
69 fp.write(l + ':' + r + '\n')
70 fp.close()
70 fp.close()
71 self.dirty = False
71 self.dirty = False
72
72
73 def get(self, rnode):
73 def get(self, rnode):
74 return self.transplants.get(rnode) or []
74 return self.transplants.get(rnode) or []
75
75
76 def set(self, lnode, rnode):
76 def set(self, lnode, rnode):
77 list = self.transplants.setdefault(rnode, [])
77 list = self.transplants.setdefault(rnode, [])
78 list.append(transplantentry(lnode, rnode))
78 list.append(transplantentry(lnode, rnode))
79 self.dirty = True
79 self.dirty = True
80
80
81 def remove(self, transplant):
81 def remove(self, transplant):
82 list = self.transplants.get(transplant.rnode)
82 list = self.transplants.get(transplant.rnode)
83 if list:
83 if list:
84 del list[list.index(transplant)]
84 del list[list.index(transplant)]
85 self.dirty = True
85 self.dirty = True
86
86
class transplanter(object):
    def __init__(self, ui, repo, opts):
        self.ui = ui
        self.path = repo.join('transplant')
        self.opener = scmutil.opener(self.path)
        self.transplants = transplants(self.path, 'transplants',
                                       opener=self.opener)
        def getcommiteditor():
            editform = cmdutil.mergeeditform(repo[None], 'transplant')
            return cmdutil.getcommiteditor(editform=editform, **opts)
        self.getcommiteditor = getcommiteditor

    def applied(self, repo, node, parent):
        '''returns True if a node is already an ancestor of parent
        or is parent or has already been transplanted'''
        if hasnode(repo, parent):
            parentrev = repo.changelog.rev(parent)
        if hasnode(repo, node):
            rev = repo.changelog.rev(node)
            reachable = repo.changelog.ancestors([parentrev], rev,
                                                 inclusive=True)
            if rev in reachable:
                return True
        for t in self.transplants.get(node):
            # it might have been stripped
            if not hasnode(repo, t.lnode):
                self.transplants.remove(t)
                return False
            lnoderev = repo.changelog.rev(t.lnode)
            if lnoderev in repo.changelog.ancestors([parentrev], lnoderev,
                                                    inclusive=True):
                return True
        return False

    def apply(self, repo, source, revmap, merges, opts=None):
        '''apply the revisions in revmap one by one in revision order'''
        if opts is None:
            opts = {}
        revs = sorted(revmap)
        p1, p2 = repo.dirstate.parents()
        pulls = []
        diffopts = patch.difffeatureopts(self.ui, opts)
        diffopts.git = True

        lock = tr = None
        try:
            lock = repo.lock()
            tr = repo.transaction('transplant')
            for rev in revs:
                node = revmap[rev]
                revstr = '%s:%s' % (rev, short(node))

                if self.applied(repo, node, p1):
                    self.ui.warn(_('skipping already applied revision %s\n') %
                                 revstr)
                    continue

                parents = source.changelog.parents(node)
                if not (opts.get('filter') or opts.get('log')):
                    # If the changeset parent is the same as the
                    # wdir's parent, just pull it.
                    if parents[0] == p1:
                        pulls.append(node)
                        p1 = node
                        continue
                    if pulls:
                        if source != repo:
                            exchange.pull(repo, source.peer(), heads=pulls)
                        merge.update(repo, pulls[-1], False, False)
                        p1, p2 = repo.dirstate.parents()
                        pulls = []

                domerge = False
                if node in merges:
                    # pulling all the merge revs at once would mean we
                    # couldn't transplant after the latest even if
                    # transplants before them fail.
                    domerge = True
                    if not hasnode(repo, node):
                        exchange.pull(repo, source.peer(), heads=[node])

                skipmerge = False
                if parents[1] != revlog.nullid:
                    if not opts.get('parent'):
                        self.ui.note(_('skipping merge changeset %s:%s\n')
                                     % (rev, short(node)))
                        skipmerge = True
                    else:
                        parent = source.lookup(opts['parent'])
                        if parent not in parents:
                            raise error.Abort(_('%s is not a parent of %s') %
                                              (short(parent), short(node)))
                else:
                    parent = parents[0]

                if skipmerge:
                    patchfile = None
                else:
                    fd, patchfile = tempfile.mkstemp(prefix='hg-transplant-')
                    fp = os.fdopen(fd, 'w')
                    gen = patch.diff(source, parent, node, opts=diffopts)
                    for chunk in gen:
                        fp.write(chunk)
                    fp.close()

                del revmap[rev]
                if patchfile or domerge:
                    try:
                        try:
                            n = self.applyone(repo, node,
                                              source.changelog.read(node),
                                              patchfile, merge=domerge,
                                              log=opts.get('log'),
                                              filter=opts.get('filter'))
                        except TransplantError:
                            # Do not rollback, it is up to the user to
                            # fix the merge or cancel everything
                            tr.close()
                            raise
                        if n and domerge:
                            self.ui.status(_('%s merged at %s\n') % (revstr,
                                           short(n)))
                        elif n:
                            self.ui.status(_('%s transplanted to %s\n')
                                           % (short(node),
                                              short(n)))
                    finally:
                        if patchfile:
                            os.unlink(patchfile)
            tr.close()
            if pulls:
                exchange.pull(repo, source.peer(), heads=pulls)
                merge.update(repo, pulls[-1], False, False)
        finally:
            self.saveseries(revmap, merges)
            self.transplants.write()
            if tr:
                tr.release()
            if lock:
                lock.release()

    def filter(self, filter, node, changelog, patchfile):
        '''arbitrarily rewrite changeset before applying it'''

        self.ui.status(_('filtering %s\n') % patchfile)
        user, date, msg = (changelog[1], changelog[2], changelog[4])
        fd, headerfile = tempfile.mkstemp(prefix='hg-transplant-')
        fp = os.fdopen(fd, 'w')
        fp.write("# HG changeset patch\n")
        fp.write("# User %s\n" % user)
        fp.write("# Date %d %d\n" % date)
        fp.write(msg + '\n')
        fp.close()

        try:
            self.ui.system('%s %s %s' % (filter, util.shellquote(headerfile),
                                         util.shellquote(patchfile)),
                           environ={'HGUSER': changelog[1],
                                    'HGREVISION': revlog.hex(node),
                                    },
                           onerr=error.Abort, errprefix=_('filter failed'))
            user, date, msg = self.parselog(file(headerfile))[1:4]
        finally:
            os.unlink(headerfile)

        return (user, date, msg)

    def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
                 filter=None):
        '''apply the patch in patchfile to the repository as a transplant'''
        (manifest, user, (time, timezone), files, message) = cl[:5]
        date = "%d %d" % (time, timezone)
        extra = {'transplant_source': node}
        if filter:
            (user, date, message) = self.filter(filter, node, cl, patchfile)

        if log:
            # we don't translate messages inserted into commits
            message += '\n(transplanted from %s)' % revlog.hex(node)

        self.ui.status(_('applying %s\n') % short(node))
        self.ui.note('%s %s\n%s\n' % (user, date, message))

        if not patchfile and not merge:
            raise error.Abort(_('can only omit patchfile if merging'))
        if patchfile:
            try:
                files = set()
                patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
                files = list(files)
            except Exception as inst:
                seriespath = os.path.join(self.path, 'series')
                if os.path.exists(seriespath):
                    os.unlink(seriespath)
                p1 = repo.dirstate.p1()
                p2 = node
                self.log(user, date, message, p1, p2, merge=merge)
                self.ui.write(str(inst) + '\n')
                raise TransplantError(_('fix up the merge and run '
                                        'hg transplant --continue'))
        else:
            files = None
        if merge:
            p1, p2 = repo.dirstate.parents()
            repo.setparents(p1, node)
            m = match.always(repo.root, '')
        else:
            m = match.exact(repo.root, '', files)

        n = repo.commit(message, user, date, extra=extra, match=m,
                        editor=self.getcommiteditor())
        if not n:
            self.ui.warn(_('skipping emptied changeset %s\n') % short(node))
            return None
        if not merge:
            self.transplants.set(n, node)

        return n

    def resume(self, repo, source, opts):
        '''recover last transaction and apply remaining changesets'''
        if os.path.exists(os.path.join(self.path, 'journal')):
            n, node = self.recover(repo, source, opts)
            if n:
                self.ui.status(_('%s transplanted as %s\n') % (short(node),
                                                               short(n)))
            else:
                self.ui.status(_('%s skipped due to empty diff\n')
                               % (short(node),))
        seriespath = os.path.join(self.path, 'series')
        if not os.path.exists(seriespath):
            self.transplants.write()
            return
        nodes, merges = self.readseries()
        revmap = {}
        for n in nodes:
            revmap[source.changelog.rev(n)] = n
        os.unlink(seriespath)

        self.apply(repo, source, revmap, merges, opts)

    def recover(self, repo, source, opts):
        '''commit working directory using journal metadata'''
        node, user, date, message, parents = self.readlog()
        merge = False

        if not user or not date or not message or not parents[0]:
            raise error.Abort(_('transplant log file is corrupt'))

        parent = parents[0]
        if len(parents) > 1:
            if opts.get('parent'):
                parent = source.lookup(opts['parent'])
                if parent not in parents:
                    raise error.Abort(_('%s is not a parent of %s') %
                                      (short(parent), short(node)))
            else:
                merge = True

        extra = {'transplant_source': node}
        try:
            p1, p2 = repo.dirstate.parents()
            if p1 != parent:
                raise error.Abort(_('working directory not at transplant '
                                    'parent %s') % revlog.hex(parent))
            if merge:
                repo.setparents(p1, parents[1])
            modified, added, removed, deleted = repo.status()[:4]
            if merge or modified or added or removed or deleted:
                n = repo.commit(message, user, date, extra=extra,
                                editor=self.getcommiteditor())
                if not n:
                    raise error.Abort(_('commit failed'))
                if not merge:
                    self.transplants.set(n, node)
            else:
                n = None
            self.unlog()

            return n, node
        finally:
            # TODO: get rid of this meaningless try/finally enclosing.
            # this is kept only to reduce changes in a patch.
            pass

    def readseries(self):
        nodes = []
        merges = []
        cur = nodes
        for line in self.opener.read('series').splitlines():
            if line.startswith('# Merges'):
                cur = merges
                continue
            cur.append(revlog.bin(line))

        return (nodes, merges)

    def saveseries(self, revmap, merges):
        if not revmap:
            return

        if not os.path.isdir(self.path):
            os.mkdir(self.path)
        series = self.opener('series', 'w')
        for rev in sorted(revmap):
            series.write(revlog.hex(revmap[rev]) + '\n')
        if merges:
            series.write('# Merges\n')
            for m in merges:
                series.write(revlog.hex(m) + '\n')
        series.close()

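The `series` file written by `saveseries` and consumed by `readseries` is equally simple: hex nodes one per line, with a `# Merges` marker line separating ordinary transplants from nodes to be merged. A standalone sketch of the format (hex strings stand in for the binary nodes the extension actually stores; helper names are illustrative):

```python
def writeseries(nodes, merges):
    # nodes/merges are lists of hex node strings, in apply order.
    lines = list(nodes)
    if merges:
        lines.append('# Merges')   # everything after this marker is a merge
        lines.extend(merges)
    return '\n'.join(lines) + '\n'

def readseries(data):
    nodes, merges = [], []
    cur = nodes
    for line in data.splitlines():
        if line.startswith('# Merges'):
            cur = merges
            continue
        cur.append(line)
    return nodes, merges
```

This is what lets `hg transplant --continue` pick up the remaining work: anything still in `revmap` when `apply` unwinds is saved here and re-read by `resume`.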
    def parselog(self, fp):
        parents = []
        message = []
        node = revlog.nullid
        inmsg = False
        user = None
        date = None
        for line in fp.read().splitlines():
            if inmsg:
                message.append(line)
            elif line.startswith('# User '):
                user = line[7:]
            elif line.startswith('# Date '):
                date = line[7:]
            elif line.startswith('# Node ID '):
                node = revlog.bin(line[10:])
            elif line.startswith('# Parent '):
                parents.append(revlog.bin(line[9:]))
            elif not line.startswith('# '):
                inmsg = True
                message.append(line)
        if None in (user, date):
            raise error.Abort(_("filter corrupted changeset (no user or date)"))
        return (node, user, date, '\n'.join(message), parents)

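`parselog` reads the same header convention as an exported changeset patch: `# User`, `# Date`, `# Node ID`, and `# Parent` lines, followed by the commit message. A minimal re-implementation on a plain string, keeping nodes as hex for clarity (a sketch, not the extension's code):

```python
def parsejournal(data):
    # Parse '# User'/'# Date'/'# Node ID'/'# Parent' headers, then treat
    # the first non-header line (and everything after it) as the message.
    user = date = node = None
    parents, message = [], []
    inmsg = False
    for line in data.splitlines():
        if inmsg:
            message.append(line)
        elif line.startswith('# User '):
            user = line[7:]
        elif line.startswith('# Date '):
            date = line[7:]
        elif line.startswith('# Node ID '):
            node = line[10:]
        elif line.startswith('# Parent '):
            parents.append(line[9:])
        elif not line.startswith('# '):
            inmsg = True
            message.append(line)
    return node, user, date, '\n'.join(message), parents
```

A merge journal simply carries two `# Parent` lines, which is how `recover` decides whether to restore the second parent before committing.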
    def log(self, user, date, message, p1, p2, merge=False):
        '''journal changelog metadata for later recover'''

        if not os.path.isdir(self.path):
            os.mkdir(self.path)
        fp = self.opener('journal', 'w')
        fp.write('# User %s\n' % user)
        fp.write('# Date %s\n' % date)
        fp.write('# Node ID %s\n' % revlog.hex(p2))
        fp.write('# Parent ' + revlog.hex(p1) + '\n')
        if merge:
            fp.write('# Parent ' + revlog.hex(p2) + '\n')
        fp.write(message.rstrip() + '\n')
        fp.close()

    def readlog(self):
        return self.parselog(self.opener('journal'))

    def unlog(self):
        '''remove changelog journal'''
        absdst = os.path.join(self.path, 'journal')
        if os.path.exists(absdst):
            os.unlink(absdst)

    def transplantfilter(self, repo, source, root):
        def matchfn(node):
            if self.applied(repo, node, root):
                return False
            if source.changelog.parents(node)[1] != revlog.nullid:
                return False
            extra = source.changelog.read(node)[5]
            cnode = extra.get('transplant_source')
            if cnode and self.applied(repo, cnode, root):
                return False
            return True

        return matchfn

def hasnode(repo, node):
    try:
        return repo.changelog.rev(node) is not None
    except error.RevlogError:
        return False

def browserevs(ui, repo, nodes, opts):
    '''interactively transplant changesets'''
    displayer = cmdutil.show_changeset(ui, repo, opts)
    transplants = []
    merges = []
    prompt = _('apply changeset? [ynmpcq?]:'
               '$$ &yes, transplant this changeset'
               '$$ &no, skip this changeset'
               '$$ &merge at this changeset'
               '$$ show &patch'
               '$$ &commit selected changesets'
               '$$ &quit and cancel transplant'
               '$$ &? (show this help)')
    for node in nodes:
        displayer.show(repo[node])
        action = None
        while not action:
            action = 'ynmpcq?'[ui.promptchoice(prompt)]
            if action == '?':
                for c, t in ui.extractchoices(prompt)[1]:
                    ui.write('%s: %s\n' % (c, t))
                action = None
            elif action == 'p':
                parent = repo.changelog.parents(node)[0]
                for chunk in patch.diff(repo, parent, node):
                    ui.write(chunk)
                action = None
        if action == 'y':
            transplants.append(node)
        elif action == 'm':
            merges.append(node)
        elif action == 'c':
            break
        elif action == 'q':
            transplants = ()
            merges = ()
            break
    displayer.close()
    return (transplants, merges)

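The prompt string in `browserevs` follows Mercurial's `promptchoice` convention: the question, then one `$$`-separated segment per choice, with `&` marking the single-letter response key. A simplified sketch of how such a prompt can be split into choices (the real `ui.extractchoices` handles more cases; this is only to illustrate the format):

```python
def extractchoices(prompt):
    # Split a promptchoice-style string: the question comes first, then
    # one '$$ &x description' segment per choice; '&' marks the key.
    parts = prompt.split('$$')
    question = parts[0].strip()
    choices = []
    for seg in parts[1:]:
        seg = seg.strip()
        i = seg.index('&')
        key = seg[i + 1]                      # letter after the ampersand
        choices.append((key, seg.replace('&', '', 1)))
    return question, choices
```

This is why `browserevs` can both drive `ui.promptchoice(prompt)` (which returns the index of the chosen segment) and print a help listing from `ui.extractchoices(prompt)[1]` using the same string.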
@command('transplant',
    [('s', 'source', '', _('transplant changesets from REPO'), _('REPO')),
    ('b', 'branch', [], _('use this source changeset as head'), _('REV')),
    ('a', 'all', None, _('pull all changesets up to the --branch revisions')),
    ('p', 'prune', [], _('skip over REV'), _('REV')),
    ('m', 'merge', [], _('merge at REV'), _('REV')),
    ('', 'parent', '',
     _('parent to choose when transplanting merge'), _('REV')),
    ('e', 'edit', False, _('invoke editor on commit messages')),
    ('', 'log', None, _('append transplant info to log message')),
    ('c', 'continue', None, _('continue last transplant session '
                              'after fixing conflicts')),
    ('', 'filter', '',
     _('filter changesets through command'), _('CMD'))],
    _('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
      '[-m REV] [REV]...'))
def transplant(ui, repo, *revs, **opts):
    '''transplant changesets from another branch

    Selected changesets will be applied on top of the current working
    directory with the log of the original changeset. The changesets
    are copied and will thus appear twice in the history with different
    identities.

    Consider using the graft command if everything is inside the same
    repository - it will use merges and will usually give a better result.
    Use the rebase extension if the changesets are unpublished and you want
    to move them instead of copying them.

    If --log is specified, log messages will have a comment appended
    of the form::

      (transplanted from CHANGESETHASH)

    You can rewrite the changelog message with the --filter option.
    Its argument will be invoked with the current changelog message as
    $1 and the patch as $2.

    --source/-s specifies another repository to use for selecting changesets,
    just as if it temporarily had been pulled.
    If --branch/-b is specified, these revisions will be used as
    heads when deciding which changesets to transplant, just as if only
    these revisions had been pulled.
    If --all/-a is specified, all the revisions up to the heads specified
    with --branch will be transplanted.

    Example:

    - transplant all changes up to REV on top of your current revision::

        hg transplant --branch REV --all

    You can optionally mark selected transplanted changesets as merge
    changesets. You will not be prompted to transplant any ancestors
    of a merged transplant, and you can merge descendants of them
    normally instead of transplanting them.

    Merge changesets may be transplanted directly by specifying the
    proper parent changeset by calling :hg:`transplant --parent`.

    If no merges or revisions are provided, :hg:`transplant` will
    start an interactive changeset browser.

    If a changeset application fails, you can fix the merge by hand
    and then resume where you left off by calling :hg:`transplant
    --continue/-c`.
    '''
    wlock = None
    try:
        wlock = repo.wlock()
        return _dotransplant(ui, repo, *revs, **opts)
    finally:
        lockmod.release(wlock)

def _dotransplant(ui, repo, *revs, **opts):
    def incwalk(repo, csets, match=util.always):
        for node in csets:
            if match(node):
                yield node

    def transplantwalk(repo, dest, heads, match=util.always):
        '''Yield all nodes that are ancestors of a head but not ancestors
        of dest.
        If no heads are specified, the heads of repo will be used.'''
        if not heads:
            heads = repo.heads()
        ancestors = []
        ctx = repo[dest]
        for head in heads:
            ancestors.append(ctx.ancestor(repo[head]).node())
        for node in repo.changelog.nodesbetween(ancestors, heads)[0]:
            if match(node):
                yield node

    def checkopts(opts, revs):
        if opts.get('continue'):
            if opts.get('branch') or opts.get('all') or opts.get('merge'):
                raise error.Abort(_('--continue is incompatible with '
                                    '--branch, --all and --merge'))
            return
        if not (opts.get('source') or revs or
                opts.get('merge') or opts.get('branch')):
            raise error.Abort(_('no source URL, branch revision, or revision '
                                'list provided'))
        if opts.get('all'):
            if not opts.get('branch'):
                raise error.Abort(_('--all requires a branch revision'))
            if revs:
                raise error.Abort(_('--all is incompatible with a '
                                    'revision list'))

    checkopts(opts, revs)

    if not opts.get('log'):
        # deprecated config: transplant.log
        opts['log'] = ui.config('transplant', 'log')
    if not opts.get('filter'):
        # deprecated config: transplant.filter
        opts['filter'] = ui.config('transplant', 'filter')

    tp = transplanter(ui, repo, opts)

    cmdutil.checkunfinished(repo)
630 cmdutil.checkunfinished(repo)
631 p1, p2 = repo.dirstate.parents()
631 p1, p2 = repo.dirstate.parents()
632 if len(repo) > 0 and p1 == revlog.nullid:
632 if len(repo) > 0 and p1 == revlog.nullid:
633 raise error.Abort(_('no revision checked out'))
633 raise error.Abort(_('no revision checked out'))
634 if not opts.get('continue'):
634 if not opts.get('continue'):
635 if p2 != revlog.nullid:
635 if p2 != revlog.nullid:
636 raise error.Abort(_('outstanding uncommitted merges'))
636 raise error.Abort(_('outstanding uncommitted merges'))
637 m, a, r, d = repo.status()[:4]
637 m, a, r, d = repo.status()[:4]
638 if m or a or r or d:
638 if m or a or r or d:
639 raise error.Abort(_('outstanding local changes'))
639 raise error.Abort(_('outstanding local changes'))
640
640
641 sourcerepo = opts.get('source')
641 sourcerepo = opts.get('source')
642 if sourcerepo:
642 if sourcerepo:
643 peer = hg.peer(repo, opts, ui.expandpath(sourcerepo))
643 peer = hg.peer(repo, opts, ui.expandpath(sourcerepo))
644 heads = map(peer.lookup, opts.get('branch', ()))
644 heads = map(peer.lookup, opts.get('branch', ()))
645 target = set(heads)
645 target = set(heads)
646 for r in revs:
646 for r in revs:
647 try:
647 try:
648 target.add(peer.lookup(r))
648 target.add(peer.lookup(r))
649 except error.RepoError:
649 except error.RepoError:
650 pass
650 pass
651 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, peer,
651 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, peer,
652 onlyheads=sorted(target), force=True)
652 onlyheads=sorted(target), force=True)
653 else:
653 else:
654 source = repo
654 source = repo
655 heads = map(source.lookup, opts.get('branch', ()))
655 heads = map(source.lookup, opts.get('branch', ()))
656 cleanupfn = None
656 cleanupfn = None
657
657
658 try:
658 try:
659 if opts.get('continue'):
659 if opts.get('continue'):
660 tp.resume(repo, source, opts)
660 tp.resume(repo, source, opts)
661 return
661 return
662
662
663 tf = tp.transplantfilter(repo, source, p1)
663 tf = tp.transplantfilter(repo, source, p1)
664 if opts.get('prune'):
664 if opts.get('prune'):
665 prune = set(source.lookup(r)
665 prune = set(source.lookup(r)
666 for r in scmutil.revrange(source, opts.get('prune')))
666 for r in scmutil.revrange(source, opts.get('prune')))
667 matchfn = lambda x: tf(x) and x not in prune
667 matchfn = lambda x: tf(x) and x not in prune
668 else:
668 else:
669 matchfn = tf
669 matchfn = tf
670 merges = map(source.lookup, opts.get('merge', ()))
670 merges = map(source.lookup, opts.get('merge', ()))
671 revmap = {}
671 revmap = {}
672 if revs:
672 if revs:
673 for r in scmutil.revrange(source, revs):
673 for r in scmutil.revrange(source, revs):
674 revmap[int(r)] = source.lookup(r)
674 revmap[int(r)] = source.lookup(r)
675 elif opts.get('all') or not merges:
675 elif opts.get('all') or not merges:
676 if source != repo:
676 if source != repo:
677 alltransplants = incwalk(source, csets, match=matchfn)
677 alltransplants = incwalk(source, csets, match=matchfn)
678 else:
678 else:
679 alltransplants = transplantwalk(source, p1, heads,
679 alltransplants = transplantwalk(source, p1, heads,
680 match=matchfn)
680 match=matchfn)
681 if opts.get('all'):
681 if opts.get('all'):
682 revs = alltransplants
682 revs = alltransplants
683 else:
683 else:
684 revs, newmerges = browserevs(ui, source, alltransplants, opts)
684 revs, newmerges = browserevs(ui, source, alltransplants, opts)
685 merges.extend(newmerges)
685 merges.extend(newmerges)
686 for r in revs:
686 for r in revs:
687 revmap[source.changelog.rev(r)] = r
687 revmap[source.changelog.rev(r)] = r
688 for r in merges:
688 for r in merges:
689 revmap[source.changelog.rev(r)] = r
689 revmap[source.changelog.rev(r)] = r
690
690
691 tp.apply(repo, source, revmap, merges, opts)
691 tp.apply(repo, source, revmap, merges, opts)
692 finally:
692 finally:
693 if cleanupfn:
693 if cleanupfn:
694 cleanupfn()
694 cleanupfn()
695
695
696 def revsettransplanted(repo, subset, x):
696 def revsettransplanted(repo, subset, x):
697 """``transplanted([set])``
697 """``transplanted([set])``
698 Transplanted changesets in set, or all transplanted changesets.
698 Transplanted changesets in set, or all transplanted changesets.
699 """
699 """
700 if x:
700 if x:
701 s = revset.getset(repo, subset, x)
701 s = revset.getset(repo, subset, x)
702 else:
702 else:
703 s = subset
703 s = subset
704 return revset.baseset([r for r in s if
704 return revset.baseset([r for r in s if
705 repo[r].extra().get('transplant_source')])
705 repo[r].extra().get('transplant_source')])
706
706
707 def kwtransplanted(repo, ctx, **args):
707 def kwtransplanted(repo, ctx, **args):
708 """:transplanted: String. The node identifier of the transplanted
708 """:transplanted: String. The node identifier of the transplanted
709 changeset if any."""
709 changeset if any."""
710 n = ctx.extra().get('transplant_source')
710 n = ctx.extra().get('transplant_source')
711 return n and revlog.hex(n) or ''
711 return n and revlog.hex(n) or ''
712
712
713 def extsetup(ui):
713 def extsetup(ui):
714 revset.symbols['transplanted'] = revsettransplanted
714 revset.symbols['transplanted'] = revsettransplanted
715 templatekw.keywords['transplanted'] = kwtransplanted
715 templatekw.keywords['transplanted'] = kwtransplanted
716 cmdutil.unfinishedstates.append(
716 cmdutil.unfinishedstates.append(
717 ['series', True, False, _('transplant in progress'),
717 ['series', True, False, _('transplant in progress'),
718 _("use 'hg transplant --continue' or 'hg update' to abort")])
718 _("use 'hg transplant --continue' or 'hg update' to abort")])
719
719
720 # tell hggettext to extract docstrings from these functions:
720 # tell hggettext to extract docstrings from these functions:
721 i18nfunctions = [revsettransplanted, kwtransplanted]
721 i18nfunctions = [revsettransplanted, kwtransplanted]
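The `transplanted()` predicate above simply filters a subset of revisions by the presence of a `transplant_source` key in each changeset's extras. The same filtering pattern can be sketched outside Mercurial with plain dictionaries; everything here is an illustrative stand-in, not a Mercurial API:

```python
# Minimal stand-in for the filtering pattern used by revsettransplanted():
# keep only revisions whose 'extra' dict carries a transplant_source marker.
extras = {
    0: {},                                  # ordinary commit
    1: {'transplant_source': '\x12\x34'},   # transplanted commit
    2: {},
    3: {'transplant_source': '\xab\xcd'},   # transplanted commit
}

def transplanted(subset):
    """Return the members of subset marked as transplanted."""
    return [r for r in subset if extras[r].get('transplant_source')]

print(transplanted([0, 1, 2, 3]))  # -> [1, 3]
```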
@@ -1,3422 +1,3422 b''
# cmdutil.py - help for command processing in mercurial
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from node import hex, bin, nullid, nullrev, short
from i18n import _
import os, sys, errno, re, tempfile, cStringIO, shutil
import util, scmutil, templater, patch, error, templatekw, revlog, copies
import match as matchmod
import repair, graphmod, revset, phases, obsolete, pathutil
import changelog
import bookmarks
import encoding
import formatter
import crecord as crecordmod
import lock as lockmod

def ishunk(x):
    hunkclasses = (crecordmod.uihunk, patch.recordhunk)
    return isinstance(x, hunkclasses)

def newandmodified(chunks, originalchunks):
    newlyaddedandmodifiedfiles = set()
    for chunk in chunks:
        if ishunk(chunk) and chunk.header.isnewfile() and chunk not in \
            originalchunks:
            newlyaddedandmodifiedfiles.add(chunk.header.filename())
    return newlyaddedandmodifiedfiles

def parsealiases(cmd):
    return cmd.lstrip("^").split("|")

def setupwrapcolorwrite(ui):
    # wrap ui.write so diff output can be labeled/colorized
    def wrapwrite(orig, *args, **kw):
        label = kw.pop('label', '')
        for chunk, l in patch.difflabel(lambda: args):
            orig(chunk, label=label + l)

    oldwrite = ui.write
    def wrap(*args, **kwargs):
        return wrapwrite(oldwrite, *args, **kwargs)
    setattr(ui, 'write', wrap)
    return oldwrite

def filterchunks(ui, originalhunks, usecurses, testfile, operation=None):
    if usecurses:
        if testfile:
            recordfn = crecordmod.testdecorator(testfile,
                                                crecordmod.testchunkselector)
        else:
            recordfn = crecordmod.chunkselector

        return crecordmod.filterpatch(ui, originalhunks, recordfn, operation)

    else:
        return patch.filterpatch(ui, originalhunks, operation)

def recordfilter(ui, originalhunks, operation=None):
    """Prompt the user to filter the originalhunks and return a list of
    selected hunks.
    *operation* is used for ui purposes to indicate to the user
    what kind of filtering they are doing: reverting, committing, shelving, etc.
    *operation* has to be a translated string.
    """
    usecurses = ui.configbool('experimental', 'crecord', False)
    testfile = ui.config('experimental', 'crecordtest', None)
    oldwrite = setupwrapcolorwrite(ui)
    try:
        newchunks, newopts = filterchunks(ui, originalhunks, usecurses,
                                          testfile, operation)
    finally:
        ui.write = oldwrite
    return newchunks, newopts

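`setupwrapcolorwrite` temporarily monkeypatches `ui.write` and hands the original back so the caller can restore it, which is exactly what `recordfilter` does in its `finally` clause. The wrap-and-restore idiom in isolation, with generic objects standing in for Mercurial's `ui` (all names here are illustrative):

```python
class Out:
    """Stand-in for a ui-like object with a write method."""
    def write(self, s):
        return s

def setupwrap(out):
    # Capture the original bound method, install a wrapper in its place,
    # and return the original so the caller can restore it later.
    oldwrite = out.write
    def wrap(s):
        return "[wrapped] " + oldwrite(s)
    out.write = wrap          # instance attribute shadows the method
    return oldwrite

out = Out()
oldwrite = setupwrap(out)
try:
    print(out.write("hello"))  # -> [wrapped] hello
finally:
    out.write = oldwrite       # restore, as recordfilter does
print(out.write("hello"))      # -> hello
```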
def dorecord(ui, repo, commitfunc, cmdsuggest, backupall,
            filterfn, *pats, **opts):
    import merge as mergemod

    if not ui.interactive():
        if cmdsuggest:
            msg = _('running non-interactively, use %s instead') % cmdsuggest
        else:
            msg = _('running non-interactively')
        raise error.Abort(msg)

    # make sure username is set before going interactive
    if not opts.get('user'):
        ui.username() # raise exception, username not provided

    def recordfunc(ui, repo, message, match, opts):
        """This is the generic record driver.

        Its job is to interactively filter local changes, and
        accordingly prepare the working directory into a state in which the
        job can be delegated to a non-interactive commit command such as
        'commit' or 'qrefresh'.

        After the actual job is done by the non-interactive command, the
        working directory is restored to its original state.

        In the end we'll record the interesting changes, and everything else
        will be left in place, so the user can continue working.
        """

        checkunfinished(repo, commit=True)
        merge = len(repo[None].parents()) > 1
        if merge:
            raise error.Abort(_('cannot partially commit a merge '
                               '(use "hg commit" instead)'))

        status = repo.status(match=match)
        diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True)
        diffopts.nodates = True
        diffopts.git = True
        originaldiff = patch.diff(repo, changes=status, opts=diffopts)
        originalchunks = patch.parsepatch(originaldiff)

        # 1. filter the patch, so we have the intended-to-apply subset of it
        try:
            chunks, newopts = filterfn(ui, originalchunks)
        except patch.PatchError as err:
            raise error.Abort(_('error parsing patch: %s') % err)
        opts.update(newopts)

        # We need to keep a backup of files that have been newly added and
        # modified during the recording process, because there is a previous
        # version without the edit in the workdir
        newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
        contenders = set()
        for h in chunks:
            try:
                contenders.update(set(h.files()))
            except AttributeError:
                pass

        changed = status.modified + status.added + status.removed
        newfiles = [f for f in changed if f in contenders]
        if not newfiles:
            ui.status(_('no changes to record\n'))
            return 0

        modified = set(status.modified)

        # 2. backup changed files, so we can restore them in the end

        if backupall:
            tobackup = changed
        else:
            tobackup = [f for f in newfiles if f in modified or f in \
                        newlyaddedandmodifiedfiles]
        backups = {}
        if tobackup:
            backupdir = repo.join('record-backups')
            try:
                os.mkdir(backupdir)
            except OSError as err:
                if err.errno != errno.EEXIST:
                    raise
        try:
            # backup continues
            for f in tobackup:
                fd, tmpname = tempfile.mkstemp(prefix=f.replace('/', '_')+'.',
                                               dir=backupdir)
                os.close(fd)
                ui.debug('backup %r as %r\n' % (f, tmpname))
                util.copyfile(repo.wjoin(f), tmpname)
                shutil.copystat(repo.wjoin(f), tmpname)
                backups[f] = tmpname

            fp = cStringIO.StringIO()
            for c in chunks:
                fname = c.filename()
                if fname in backups:
                    c.write(fp)
            dopatch = fp.tell()
            fp.seek(0)

            [os.unlink(repo.wjoin(c)) for c in newlyaddedandmodifiedfiles]
            # 3a. apply filtered patch to clean repo (clean)
            if backups:
                # Equivalent to hg.revert
-               choices = lambda key: key in backups
+               m = scmutil.matchfiles(repo, backups.keys())
                mergemod.update(repo, repo.dirstate.p1(),
-                               False, True, choices)
+                               False, True, matcher=m)

            # 3b. (apply)
            if dopatch:
                try:
                    ui.debug('applying patch\n')
                    ui.debug(fp.getvalue())
                    patch.internalpatch(ui, repo, fp, 1, eolmode=None)
                except patch.PatchError as err:
                    raise error.Abort(str(err))
            del fp

            # 4. We prepared the working directory according to the filtered
            # patch. Now is the time to delegate the job to
            # commit/qrefresh or the like!

            # Make all of the pathnames absolute.
            newfiles = [repo.wjoin(nf) for nf in newfiles]
            return commitfunc(ui, repo, *newfiles, **opts)
        finally:
            # 5. finally restore backed-up files
            try:
                dirstate = repo.dirstate
                for realname, tmpname in backups.iteritems():
                    ui.debug('restoring %r to %r\n' % (tmpname, realname))

                    if dirstate[realname] == 'n':
                        # without normallookup, restoring timestamp
                        # may cause partially committed files
                        # to be treated as unmodified
                        dirstate.normallookup(realname)

                    util.copyfile(tmpname, repo.wjoin(realname))
                    # Our calls to copystat() here and above are a
                    # hack to trick any editors that have f open that
                    # we haven't modified them.
                    #
                    # Also note that this is racy, as an editor could
                    # notice the file's mtime before we've finished
                    # writing it.
                    shutil.copystat(tmpname, repo.wjoin(realname))
                    os.unlink(tmpname)
                if tobackup:
                    os.rmdir(backupdir)
            except OSError:
                pass

    def recordinwlock(ui, repo, message, match, opts):
        wlock = repo.wlock()
        try:
            return recordfunc(ui, repo, message, match, opts)
        finally:
            wlock.release()

    return commit(ui, repo, recordinwlock, pats, opts)

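The backup/restore dance in `recordfunc` (steps 2 and 5) can be illustrated standalone: copy a file aside with `tempfile.mkstemp`, let some operation mutate the original, then restore it in a `finally` block. This is only a sketch of the pattern, not Mercurial code, and `with_backup` is a hypothetical helper:

```python
import os
import shutil
import tempfile

def with_backup(path, mutate):
    """Run mutate(path), then restore the file's original contents."""
    # step 2: back up into a uniquely named temp file (close the fd;
    # mkstemp returns an open descriptor the caller must close)
    fd, tmpname = tempfile.mkstemp(prefix=os.path.basename(path) + '.')
    os.close(fd)
    shutil.copyfile(path, tmpname)
    try:
        mutate(path)                    # steps 3-4: work on the real file
    finally:
        shutil.copyfile(tmpname, path)  # step 5: restore the backup
        os.unlink(tmpname)

# usage: scribble over a file, then observe the restore
fd, f = tempfile.mkstemp()
os.close(fd)
with open(f, 'w') as fh:
    fh.write('original')
with_backup(f, lambda p: open(p, 'w').write('scratch'))
with open(f) as fh:
    print(fh.read())  # -> original
os.unlink(f)
```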
def findpossible(cmd, table, strict=False):
    """
    Return cmd -> (aliases, command table entry)
    for each matching command.
    Return debug commands (or their aliases) only if no normal command matches.
    """
    choice = {}
    debugchoice = {}

    if cmd in table:
        # short-circuit exact matches, "log" alias beats "^log|history"
        keys = [cmd]
    else:
        keys = table.keys()

    allcmds = []
    for e in keys:
        aliases = parsealiases(e)
        allcmds.extend(aliases)
        found = None
        if cmd in aliases:
            found = cmd
        elif not strict:
            for a in aliases:
                if a.startswith(cmd):
                    found = a
                    break
        if found is not None:
            if aliases[0].startswith("debug") or found.startswith("debug"):
                debugchoice[found] = (aliases, table[e])
            else:
                choice[found] = (aliases, table[e])

    if not choice and debugchoice:
        choice = debugchoice

    return choice, allcmds

def findcmd(cmd, table, strict=True):
    """Return (aliases, command table entry) for command string."""
    choice, allcmds = findpossible(cmd, table, strict)

    if cmd in choice:
        return choice[cmd]

    if len(choice) > 1:
        clist = choice.keys()
        clist.sort()
        raise error.AmbiguousCommand(cmd, clist)

    if choice:
        return choice.values()[0]

    raise error.UnknownCommand(cmd, allcmds)

def findrepo(p):
    while not os.path.isdir(os.path.join(p, ".hg")):
        oldp, p = p, os.path.dirname(p)
        if p == oldp:
            return None

    return p

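`findrepo` walks up from a directory until it finds a `.hg` sibling or hits the filesystem root. The same upward walk is easy to sketch for any marker directory; the names below are generic stand-ins, not Mercurial's:

```python
import os
import tempfile

def findroot(p, marker=".hg"):
    """Walk the parents of p until a directory containing marker is found."""
    while not os.path.isdir(os.path.join(p, marker)):
        oldp, p = p, os.path.dirname(p)
        if p == oldp:        # dirname was a fixed point: filesystem root
            return None
    return p

# usage: build a fake tree and locate its "repository" root from a subdir
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, ".hg"))
nested = os.path.join(root, "a", "b")
os.makedirs(nested)
print(findroot(nested) == root)  # -> True
```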
307 def bailifchanged(repo, merge=True):
307 def bailifchanged(repo, merge=True):
308 if merge and repo.dirstate.p2() != nullid:
308 if merge and repo.dirstate.p2() != nullid:
309 raise error.Abort(_('outstanding uncommitted merge'))
309 raise error.Abort(_('outstanding uncommitted merge'))
310 modified, added, removed, deleted = repo.status()[:4]
310 modified, added, removed, deleted = repo.status()[:4]
311 if modified or added or removed or deleted:
311 if modified or added or removed or deleted:
312 raise error.Abort(_('uncommitted changes'))
312 raise error.Abort(_('uncommitted changes'))
313 ctx = repo[None]
313 ctx = repo[None]
314 for s in sorted(ctx.substate):
314 for s in sorted(ctx.substate):
315 ctx.sub(s).bailifchanged()
315 ctx.sub(s).bailifchanged()
316
316
317 def logmessage(ui, opts):
317 def logmessage(ui, opts):
318 """ get the log message according to -m and -l option """
318 """ get the log message according to -m and -l option """
319 message = opts.get('message')
319 message = opts.get('message')
320 logfile = opts.get('logfile')
320 logfile = opts.get('logfile')
321
321
322 if message and logfile:
322 if message and logfile:
323 raise error.Abort(_('options --message and --logfile are mutually '
323 raise error.Abort(_('options --message and --logfile are mutually '
324 'exclusive'))
324 'exclusive'))
325 if not message and logfile:
325 if not message and logfile:
326 try:
326 try:
327 if logfile == '-':
327 if logfile == '-':
328 message = ui.fin.read()
328 message = ui.fin.read()
329 else:
329 else:
330 message = '\n'.join(util.readfile(logfile).splitlines())
330 message = '\n'.join(util.readfile(logfile).splitlines())
331 except IOError as inst:
331 except IOError as inst:
332 raise error.Abort(_("can't read commit message '%s': %s") %
332 raise error.Abort(_("can't read commit message '%s': %s") %
333 (logfile, inst.strerror))
333 (logfile, inst.strerror))
334 return message
334 return message
335
335
336 def mergeeditform(ctxorbool, baseformname):
336 def mergeeditform(ctxorbool, baseformname):
337 """return appropriate editform name (referencing a committemplate)
337 """return appropriate editform name (referencing a committemplate)
338
338
339 'ctxorbool' is either a ctx to be committed, or a bool indicating whether
339 'ctxorbool' is either a ctx to be committed, or a bool indicating whether
340 merging is committed.
340 merging is committed.
341
341
342 This returns baseformname with '.merge' appended if it is a merge,
342 This returns baseformname with '.merge' appended if it is a merge,
343 otherwise '.normal' is appended.
343 otherwise '.normal' is appended.
344 """
344 """
345 if isinstance(ctxorbool, bool):
345 if isinstance(ctxorbool, bool):
346 if ctxorbool:
346 if ctxorbool:
347 return baseformname + ".merge"
347 return baseformname + ".merge"
348 elif 1 < len(ctxorbool.parents()):
348 elif 1 < len(ctxorbool.parents()):
349 return baseformname + ".merge"
349 return baseformname + ".merge"
350
350
351 return baseformname + ".normal"
351 return baseformname + ".normal"
352
352
353 def getcommiteditor(edit=False, finishdesc=None, extramsg=None,
353 def getcommiteditor(edit=False, finishdesc=None, extramsg=None,
354 editform='', **opts):
354 editform='', **opts):
355 """get appropriate commit message editor according to '--edit' option
355 """get appropriate commit message editor according to '--edit' option
356
356
357 'finishdesc' is a function to be called with edited commit message
357 'finishdesc' is a function to be called with edited commit message
358 (= 'description' of the new changeset) just after editing, but
358 (= 'description' of the new changeset) just after editing, but
359 before checking empty-ness. It should return actual text to be
359 before checking empty-ness. It should return actual text to be
360 stored into history. This allows to change description before
360 stored into history. This allows to change description before
361 storing.
361 storing.
362
362
363 'extramsg' is a extra message to be shown in the editor instead of
363 'extramsg' is a extra message to be shown in the editor instead of
364 'Leave message empty to abort commit' line. 'HG: ' prefix and EOL
364 'Leave message empty to abort commit' line. 'HG: ' prefix and EOL
365 is automatically added.
365 is automatically added.
366
366
367 'editform' is a dot-separated list of names, to distinguish
367 'editform' is a dot-separated list of names, to distinguish
368 the purpose of commit text editing.
368 the purpose of commit text editing.
369
369
370 'getcommiteditor' returns 'commitforceeditor' regardless of
370 'getcommiteditor' returns 'commitforceeditor' regardless of
371 'edit', if one of 'finishdesc' or 'extramsg' is specified, because
371 'edit', if one of 'finishdesc' or 'extramsg' is specified, because
372 they are specific for usage in MQ.
372 they are specific for usage in MQ.
373 """
373 """
374 if edit or finishdesc or extramsg:
374 if edit or finishdesc or extramsg:
375 return lambda r, c, s: commitforceeditor(r, c, s,
375 return lambda r, c, s: commitforceeditor(r, c, s,
376 finishdesc=finishdesc,
376 finishdesc=finishdesc,
377 extramsg=extramsg,
377 extramsg=extramsg,
378 editform=editform)
378 editform=editform)
379 elif editform:
379 elif editform:
380 return lambda r, c, s: commiteditor(r, c, s, editform=editform)
380 return lambda r, c, s: commiteditor(r, c, s, editform=editform)
381 else:
381 else:
382 return commiteditor
382 return commiteditor
383
383
def loglimit(opts):
    """get the log limit according to option -l/--limit"""
    limit = opts.get('limit')
    if limit:
        try:
            limit = int(limit)
        except ValueError:
            raise error.Abort(_('limit must be a positive integer'))
        if limit <= 0:
            raise error.Abort(_('limit must be positive'))
    else:
        limit = None
    return limit

def makefilename(repo, pat, node, desc=None,
                 total=None, seqno=None, revwidth=None, pathname=None):
    node_expander = {
        'H': lambda: hex(node),
        'R': lambda: str(repo.changelog.rev(node)),
        'h': lambda: short(node),
        'm': lambda: re.sub('[^\w]', '_', str(desc))
        }
    expander = {
        '%': lambda: '%',
        'b': lambda: os.path.basename(repo.root),
        }

    try:
        if node:
            expander.update(node_expander)
        if node:
            expander['r'] = (lambda:
                    str(repo.changelog.rev(node)).zfill(revwidth or 0))
        if total is not None:
            expander['N'] = lambda: str(total)
        if seqno is not None:
            expander['n'] = lambda: str(seqno)
        if total is not None and seqno is not None:
            expander['n'] = lambda: str(seqno).zfill(len(str(total)))
        if pathname is not None:
            expander['s'] = lambda: os.path.basename(pathname)
            expander['d'] = lambda: os.path.dirname(pathname) or '.'
            expander['p'] = lambda: pathname

        newname = []
        patlen = len(pat)
        i = 0
        while i < patlen:
            c = pat[i]
            if c == '%':
                i += 1
                c = pat[i]
                c = expander[c]()
            newname.append(c)
            i += 1
        return ''.join(newname)
    except KeyError as inst:
        raise error.Abort(_("invalid format spec '%%%s' in output filename") %
                          inst.args[0])

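The '%'-escape loop in makefilename reduces to a small table-driven expander. A minimal sketch (hypothetical `expandpat` name, not part of cmdutil's API): each `%x` is replaced by the result of a callable looked up in an expander table, and an unknown spec surfaces as a `KeyError`, which the real code converts to an abort.

```python
def expandpat(pat, expander):
    """Expand '%x' escapes in pat via callables in the expander dict."""
    out = []
    i = 0
    while i < len(pat):
        c = pat[i]
        if c == '%':
            i += 1
            c = expander[pat[i]]()  # KeyError here means an unknown %-spec
        out.append(c)
        i += 1
    return ''.join(out)
```

With `{'%': lambda: '%', 'b': lambda: 'repo', 'n': lambda: '01'}`, the pattern `'%b-%n.patch'` expands to `'repo-01.patch'`, and `'%%'` yields a literal `'%'`.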
def makefileobj(repo, pat, node=None, desc=None, total=None,
                seqno=None, revwidth=None, mode='wb', modemap=None,
                pathname=None):

    writable = mode not in ('r', 'rb')

    if not pat or pat == '-':
        if writable:
            fp = repo.ui.fout
        else:
            fp = repo.ui.fin
        if util.safehasattr(fp, 'fileno'):
            return os.fdopen(os.dup(fp.fileno()), mode)
        else:
            # if this fp can't be duped properly, return
            # a dummy object that can be closed
            class wrappedfileobj(object):
                noop = lambda x: None
                def __init__(self, f):
                    self.f = f
                def __getattr__(self, attr):
                    if attr == 'close':
                        return self.noop
                    else:
                        return getattr(self.f, attr)

            return wrappedfileobj(fp)
    if util.safehasattr(pat, 'write') and writable:
        return pat
    if util.safehasattr(pat, 'read') and 'r' in mode:
        return pat
    fn = makefilename(repo, pat, node, desc, total, seqno, revwidth, pathname)
    if modemap is not None:
        mode = modemap.get(fn, mode)
        if mode == 'wb':
            modemap[fn] = 'ab'
    return open(fn, mode)

def openrevlog(repo, cmd, file_, opts):
    """opens the changelog, manifest, a filelog or a given revlog"""
    cl = opts['changelog']
    mf = opts['manifest']
    dir = opts['dir']
    msg = None
    if cl and mf:
        msg = _('cannot specify --changelog and --manifest at the same time')
    elif cl and dir:
        msg = _('cannot specify --changelog and --dir at the same time')
    elif cl or mf:
        if file_:
            msg = _('cannot specify filename with --changelog or --manifest')
        elif not repo:
            msg = _('cannot specify --changelog or --manifest or --dir '
                    'without a repository')
    if msg:
        raise error.Abort(msg)

    r = None
    if repo:
        if cl:
            r = repo.unfiltered().changelog
        elif dir:
            if 'treemanifest' not in repo.requirements:
                raise error.Abort(_("--dir can only be used on repos with "
                                    "treemanifest enabled"))
            dirlog = repo.dirlog(file_)
            if len(dirlog):
                r = dirlog
        elif mf:
            r = repo.manifest
        elif file_:
            filelog = repo.file(file_)
            if len(filelog):
                r = filelog
    if not r:
        if not file_:
            raise error.CommandError(cmd, _('invalid arguments'))
        if not os.path.isfile(file_):
            raise error.Abort(_("revlog '%s' not found") % file_)
        r = revlog.revlog(scmutil.opener(os.getcwd(), audit=False),
                          file_[:-2] + ".i")
    return r

def copy(ui, repo, pats, opts, rename=False):
    # called with the repo lock held
    #
    # hgsep => pathname that uses "/" to separate directories
    # ossep => pathname that uses os.sep to separate directories
    cwd = repo.getcwd()
    targets = {}
    after = opts.get("after")
    dryrun = opts.get("dry_run")
    wctx = repo[None]

    def walkpat(pat):
        srcs = []
        if after:
            badstates = '?'
        else:
            badstates = '?r'
        m = scmutil.match(repo[None], [pat], opts, globbed=True)
        for abs in repo.walk(m):
            state = repo.dirstate[abs]
            rel = m.rel(abs)
            exact = m.exact(abs)
            if state in badstates:
                if exact and state == '?':
                    ui.warn(_('%s: not copying - file is not managed\n') % rel)
                if exact and state == 'r':
                    ui.warn(_('%s: not copying - file has been marked for'
                              ' remove\n') % rel)
                continue
            # abs: hgsep
            # rel: ossep
            srcs.append((abs, rel, exact))
        return srcs

    # abssrc: hgsep
    # relsrc: ossep
    # otarget: ossep
    def copyfile(abssrc, relsrc, otarget, exact):
        abstarget = pathutil.canonpath(repo.root, cwd, otarget)
        if '/' in abstarget:
            # We cannot normalize abstarget itself, this would prevent
            # case only renames, like a => A.
            abspath, absname = abstarget.rsplit('/', 1)
            abstarget = repo.dirstate.normalize(abspath) + '/' + absname
        reltarget = repo.pathto(abstarget, cwd)
        target = repo.wjoin(abstarget)
        src = repo.wjoin(abssrc)
        state = repo.dirstate[abstarget]

        scmutil.checkportable(ui, abstarget)

        # check for collisions
        prevsrc = targets.get(abstarget)
        if prevsrc is not None:
            ui.warn(_('%s: not overwriting - %s collides with %s\n') %
                    (reltarget, repo.pathto(abssrc, cwd),
                     repo.pathto(prevsrc, cwd)))
            return

        # check for overwrites
        exists = os.path.lexists(target)
        samefile = False
        if exists and abssrc != abstarget:
            if (repo.dirstate.normalize(abssrc) ==
                repo.dirstate.normalize(abstarget)):
                if not rename:
                    ui.warn(_("%s: can't copy - same file\n") % reltarget)
                    return
                exists = False
                samefile = True

        if not after and exists or after and state in 'mn':
            if not opts['force']:
                ui.warn(_('%s: not overwriting - file exists\n') %
                        reltarget)
                return

        if after:
            if not exists:
                if rename:
                    ui.warn(_('%s: not recording move - %s does not exist\n') %
                            (relsrc, reltarget))
                else:
                    ui.warn(_('%s: not recording copy - %s does not exist\n') %
                            (relsrc, reltarget))
                return
        elif not dryrun:
            try:
                if exists:
                    os.unlink(target)
                targetdir = os.path.dirname(target) or '.'
                if not os.path.isdir(targetdir):
                    os.makedirs(targetdir)
                if samefile:
                    tmp = target + "~hgrename"
                    os.rename(src, tmp)
                    os.rename(tmp, target)
                else:
                    util.copyfile(src, target)
                srcexists = True
            except IOError as inst:
                if inst.errno == errno.ENOENT:
                    ui.warn(_('%s: deleted in working directory\n') % relsrc)
                    srcexists = False
                else:
                    ui.warn(_('%s: cannot copy - %s\n') %
                            (relsrc, inst.strerror))
                    return True # report a failure

        if ui.verbose or not exact:
            if rename:
                ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
            else:
                ui.status(_('copying %s to %s\n') % (relsrc, reltarget))

        targets[abstarget] = abssrc

        # fix up dirstate
        scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
                             dryrun=dryrun, cwd=cwd)
        if rename and not dryrun:
            if not after and srcexists and not samefile:
                util.unlinkpath(repo.wjoin(abssrc))
            wctx.forget([abssrc])

    # pat: ossep
    # dest: ossep
    # srcs: list of (hgsep, hgsep, ossep, bool)
    # return: function that takes hgsep and returns ossep
    def targetpathfn(pat, dest, srcs):
        if os.path.isdir(pat):
            abspfx = pathutil.canonpath(repo.root, cwd, pat)
            abspfx = util.localpath(abspfx)
            if destdirexists:
                striplen = len(os.path.split(abspfx)[0])
            else:
                striplen = len(abspfx)
            if striplen:
                striplen += len(os.sep)
            res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
        elif destdirexists:
            res = lambda p: os.path.join(dest,
                                         os.path.basename(util.localpath(p)))
        else:
            res = lambda p: dest
        return res

    # pat: ossep
    # dest: ossep
    # srcs: list of (hgsep, hgsep, ossep, bool)
    # return: function that takes hgsep and returns ossep
    def targetpathafterfn(pat, dest, srcs):
        if matchmod.patkind(pat):
            # a mercurial pattern
            res = lambda p: os.path.join(dest,
                                         os.path.basename(util.localpath(p)))
        else:
            abspfx = pathutil.canonpath(repo.root, cwd, pat)
            if len(abspfx) < len(srcs[0][0]):
                # A directory. Either the target path contains the last
                # component of the source path or it does not.
                def evalpath(striplen):
                    score = 0
                    for s in srcs:
                        t = os.path.join(dest, util.localpath(s[0])[striplen:])
                        if os.path.lexists(t):
                            score += 1
                    return score

                abspfx = util.localpath(abspfx)
                striplen = len(abspfx)
                if striplen:
                    striplen += len(os.sep)
                if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
                    score = evalpath(striplen)
                    striplen1 = len(os.path.split(abspfx)[0])
                    if striplen1:
                        striplen1 += len(os.sep)
                    if evalpath(striplen1) > score:
                        striplen = striplen1
                res = lambda p: os.path.join(dest,
                                             util.localpath(p)[striplen:])
            else:
                # a file
                if destdirexists:
                    res = lambda p: os.path.join(dest,
                                        os.path.basename(util.localpath(p)))
                else:
                    res = lambda p: dest
        return res

    pats = scmutil.expandpats(pats)
    if not pats:
        raise error.Abort(_('no source or destination specified'))
    if len(pats) == 1:
        raise error.Abort(_('no destination specified'))
    dest = pats.pop()
    destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
    if not destdirexists:
        if len(pats) > 1 or matchmod.patkind(pats[0]):
            raise error.Abort(_('with multiple sources, destination must be an '
                                'existing directory'))
        if util.endswithsep(dest):
            raise error.Abort(_('destination %s is not a directory') % dest)

    tfn = targetpathfn
    if after:
        tfn = targetpathafterfn
    copylist = []
    for pat in pats:
        srcs = walkpat(pat)
        if not srcs:
            continue
        copylist.append((tfn(pat, dest, srcs), srcs))
    if not copylist:
        raise error.Abort(_('no files to copy'))

    errors = 0
    for targetpath, srcs in copylist:
        for abssrc, relsrc, exact in srcs:
            if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
                errors += 1

    if errors:
        ui.warn(_('(consider using --after)\n'))

    return errors != 0

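The collision check in copyfile keys the `targets` dict by canonical target path: the first source to claim a target wins, and later sources to the same target are reported and skipped. A hedged, filesystem-free sketch of that bookkeeping (hypothetical `plancopies` name, not cmdutil's API):

```python
def plancopies(pairs):
    """pairs: iterable of (src, target). Returns (planned, skipped) where
    planned maps target -> winning src and skipped lists collisions."""
    targets = {}
    skipped = []
    for src, target in pairs:
        prev = targets.get(target)
        if prev is not None:
            # mirrors the 'not overwriting - collides with' warning above
            skipped.append((src, target, prev))
            continue
        targets[target] = src
    return targets, skipped
```

For `[('a', 'x'), ('b', 'x'), ('c', 'y')]`, target `x` is claimed by `a`, so the `b -> x` copy is skipped as a collision.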
def service(opts, parentfn=None, initfn=None, runfn=None, logfile=None,
            runargs=None, appendpid=False):
    '''Run a command as a service.'''

    def writepid(pid):
        if opts['pid_file']:
            if appendpid:
                mode = 'a'
            else:
                mode = 'w'
            fp = open(opts['pid_file'], mode)
            fp.write(str(pid) + '\n')
            fp.close()

    if opts['daemon'] and not opts['daemon_pipefds']:
        # Signal child process startup with file removal
        lockfd, lockpath = tempfile.mkstemp(prefix='hg-service-')
        os.close(lockfd)
        try:
            if not runargs:
                runargs = util.hgcmd() + sys.argv[1:]
            runargs.append('--daemon-pipefds=%s' % lockpath)
            # Don't pass --cwd to the child process, because we've already
            # changed directory.
            for i in xrange(1, len(runargs)):
                if runargs[i].startswith('--cwd='):
                    del runargs[i]
                    break
                elif runargs[i].startswith('--cwd'):
                    del runargs[i:i + 2]
                    break
            def condfn():
                return not os.path.exists(lockpath)
            pid = util.rundetached(runargs, condfn)
            if pid < 0:
                raise error.Abort(_('child process failed to start'))
            writepid(pid)
        finally:
            try:
                os.unlink(lockpath)
            except OSError as e:
                if e.errno != errno.ENOENT:
                    raise
        if parentfn:
            return parentfn(pid)
        else:
            return

    if initfn:
        initfn()

    if not opts['daemon']:
        writepid(os.getpid())

    if opts['daemon_pipefds']:
        lockpath = opts['daemon_pipefds']
        try:
            os.setsid()
        except AttributeError:
            pass
        os.unlink(lockpath)
        util.hidewindow()
        sys.stdout.flush()
        sys.stderr.flush()

        nullfd = os.open(os.devnull, os.O_RDWR)
        logfilefd = nullfd
        if logfile:
            logfilefd = os.open(logfile, os.O_RDWR | os.O_CREAT | os.O_APPEND)
        os.dup2(nullfd, 0)
        os.dup2(logfilefd, 1)
        os.dup2(logfilefd, 2)
        if nullfd not in (0, 1, 2):
            os.close(nullfd)
        if logfile and logfilefd not in (0, 1, 2):
            os.close(logfilefd)

    if runfn:
        return runfn()

## facility to let extensions process additional data into an import patch
# list of identifiers to be executed in order
extrapreimport = [] # run before commit
extrapostimport = [] # run after commit
# mapping from identifier to actual import function
#
# 'preimport' are run before the commit is made and are provided the following
# arguments:
# - repo: the localrepository instance,
# - patchdata: data extracted from patch header (cf m.patch.patchheadermap),
# - extra: the future extra dictionary of the changeset, please mutate it,
# - opts: the import options.
# XXX ideally, we would just pass a ctx ready to be computed, that would allow
# mutation of in memory commit and more. Feel free to rework the code to get
# there.
extrapreimportmap = {}
# 'postimport' are run after the commit is made and are provided the following
# argument:
# - ctx: the changectx created by import.
extrapostimportmap = {}

def tryimportone(ui, repo, hunk, parents, opts, msgs, updatefunc):
    """Utility function used by commands.import to import a single patch

    This function is explicitly defined here to help the evolve extension
    wrap this part of the import logic.

    The API is currently a bit ugly because it is a simple code translation
    from the import command. Feel free to make it better.

    :hunk: a patch (as a binary string)
    :parents: nodes that will be parents of the created commit
    :opts: the full dict of options passed to the import command
    :msgs: list to save the commit message to
           (used in case we need to save it when failing)
    :updatefunc: a function that updates a repo to a given node
                 updatefunc(<repo>, <node>)
    """
    # avoid cycle context -> subrepo -> cmdutil
    import context
    extractdata = patch.extract(ui, hunk)
    tmpname = extractdata.get('filename')
    message = extractdata.get('message')
    user = extractdata.get('user')
    date = extractdata.get('date')
    branch = extractdata.get('branch')
    nodeid = extractdata.get('nodeid')
    p1 = extractdata.get('p1')
    p2 = extractdata.get('p2')

    update = not opts.get('bypass')
    strip = opts["strip"]
    prefix = opts["prefix"]
    sim = float(opts.get('similarity') or 0)
    if not tmpname:
        return (None, None, False)
    msg = _('applied to working directory')

    rejects = False

    try:
        cmdline_message = logmessage(ui, opts)
        if cmdline_message:
            # pickup the cmdline msg
            message = cmdline_message
        elif message:
            # pickup the patch msg
            message = message.strip()
        else:
            # launch the editor
            message = None
        ui.debug('message:\n%s\n' % message)

        if len(parents) == 1:
            parents.append(repo[nullid])
        if opts.get('exact'):
            if not nodeid or not p1:
                raise error.Abort(_('not a Mercurial patch'))
            p1 = repo[p1]
            p2 = repo[p2 or nullid]
        elif p2:
            try:
                p1 = repo[p1]
                p2 = repo[p2]
                # Without any options, consider p2 only if the
                # patch is being applied on top of the recorded
                # first parent.
                if p1 != parents[0]:
                    p1 = parents[0]
                    p2 = repo[nullid]
            except error.RepoError:
                p1, p2 = parents
            if p2.node() == nullid:
                ui.warn(_("warning: import the patch as a normal revision\n"
                          "(use --exact to import the patch as a merge)\n"))
        else:
            p1, p2 = parents

        n = None
        if update:
            if p1 != parents[0]:
                updatefunc(repo, p1.node())
            if p2 != parents[1]:
                repo.setparents(p1.node(), p2.node())

            if opts.get('exact') or opts.get('import_branch'):
                repo.dirstate.setbranch(branch or 'default')

            partial = opts.get('partial', False)
            files = set()
            try:
                patch.patch(ui, repo, tmpname, strip=strip, prefix=prefix,
                            files=files, eolmode=None, similarity=sim / 100.0)
            except patch.PatchError as e:
                if not partial:
                    raise error.Abort(str(e))
                if partial:
                    rejects = True

            files = list(files)
            if opts.get('no_commit'):
                if message:
                    msgs.append(message)
957 msgs.append(message)
958 else:
958 else:
959 if opts.get('exact') or p2:
959 if opts.get('exact') or p2:
960 # If you got here, you either use --force and know what
960 # If you got here, you either use --force and know what
961 # you are doing or used --exact or a merge patch while
961 # you are doing or used --exact or a merge patch while
962 # being updated to its first parent.
962 # being updated to its first parent.
963 m = None
963 m = None
964 else:
964 else:
965 m = scmutil.matchfiles(repo, files or [])
965 m = scmutil.matchfiles(repo, files or [])
966 editform = mergeeditform(repo[None], 'import.normal')
966 editform = mergeeditform(repo[None], 'import.normal')
967 if opts.get('exact'):
967 if opts.get('exact'):
968 editor = None
968 editor = None
969 else:
969 else:
970 editor = getcommiteditor(editform=editform, **opts)
970 editor = getcommiteditor(editform=editform, **opts)
971 allowemptyback = repo.ui.backupconfig('ui', 'allowemptycommit')
971 allowemptyback = repo.ui.backupconfig('ui', 'allowemptycommit')
972 extra = {}
972 extra = {}
973 for idfunc in extrapreimport:
973 for idfunc in extrapreimport:
974 extrapreimportmap[idfunc](repo, extractdata, extra, opts)
974 extrapreimportmap[idfunc](repo, extractdata, extra, opts)
975 try:
975 try:
976 if partial:
976 if partial:
977 repo.ui.setconfig('ui', 'allowemptycommit', True)
977 repo.ui.setconfig('ui', 'allowemptycommit', True)
978 n = repo.commit(message, opts.get('user') or user,
978 n = repo.commit(message, opts.get('user') or user,
979 opts.get('date') or date, match=m,
979 opts.get('date') or date, match=m,
980 editor=editor, extra=extra)
980 editor=editor, extra=extra)
981 for idfunc in extrapostimport:
981 for idfunc in extrapostimport:
982 extrapostimportmap[idfunc](repo[n])
982 extrapostimportmap[idfunc](repo[n])
983 finally:
983 finally:
984 repo.ui.restoreconfig(allowemptyback)
984 repo.ui.restoreconfig(allowemptyback)
985 else:
985 else:
986 if opts.get('exact') or opts.get('import_branch'):
986 if opts.get('exact') or opts.get('import_branch'):
987 branch = branch or 'default'
987 branch = branch or 'default'
988 else:
988 else:
989 branch = p1.branch()
989 branch = p1.branch()
990 store = patch.filestore()
990 store = patch.filestore()
991 try:
991 try:
992 files = set()
992 files = set()
993 try:
993 try:
994 patch.patchrepo(ui, repo, p1, store, tmpname, strip, prefix,
994 patch.patchrepo(ui, repo, p1, store, tmpname, strip, prefix,
995 files, eolmode=None)
995 files, eolmode=None)
996 except patch.PatchError as e:
996 except patch.PatchError as e:
997 raise error.Abort(str(e))
997 raise error.Abort(str(e))
998 if opts.get('exact'):
998 if opts.get('exact'):
999 editor = None
999 editor = None
1000 else:
1000 else:
1001 editor = getcommiteditor(editform='import.bypass')
1001 editor = getcommiteditor(editform='import.bypass')
1002 memctx = context.makememctx(repo, (p1.node(), p2.node()),
1002 memctx = context.makememctx(repo, (p1.node(), p2.node()),
1003 message,
1003 message,
1004 opts.get('user') or user,
1004 opts.get('user') or user,
1005 opts.get('date') or date,
1005 opts.get('date') or date,
1006 branch, files, store,
1006 branch, files, store,
1007 editor=editor)
1007 editor=editor)
1008 n = memctx.commit()
1008 n = memctx.commit()
1009 finally:
1009 finally:
1010 store.close()
1010 store.close()
1011 if opts.get('exact') and opts.get('no_commit'):
1011 if opts.get('exact') and opts.get('no_commit'):
1012 # --exact with --no-commit is still useful in that it does merge
1012 # --exact with --no-commit is still useful in that it does merge
1013 # and branch bits
1013 # and branch bits
1014 ui.warn(_("warning: can't check exact import with --no-commit\n"))
1014 ui.warn(_("warning: can't check exact import with --no-commit\n"))
1015 elif opts.get('exact') and hex(n) != nodeid:
1015 elif opts.get('exact') and hex(n) != nodeid:
1016 raise error.Abort(_('patch is damaged or loses information'))
1016 raise error.Abort(_('patch is damaged or loses information'))
1017 if n:
1017 if n:
1018 # i18n: refers to a short changeset id
1018 # i18n: refers to a short changeset id
1019 msg = _('created %s') % short(n)
1019 msg = _('created %s') % short(n)
1020 return (msg, n, rejects)
1020 return (msg, n, rejects)
1021 finally:
1021 finally:
1022 os.unlink(tmpname)
1022 os.unlink(tmpname)
1023
1023
# facility to let extensions include additional data in an exported patch
# list of identifiers to be executed in order
extraexport = []
# mapping from identifier to actual export function
# the function has to return a string to be added to the header, or None
# it is given two arguments (sequencenumber, changectx)
extraexportmap = {}

def export(repo, revs, template='hg-%h.patch', fp=None, switch_parent=False,
           opts=None, match=None):
    '''export changesets as hg patches.'''

    total = len(revs)
    revwidth = max([len(str(rev)) for rev in revs])
    filemode = {}

    def single(rev, seqno, fp):
        ctx = repo[rev]
        node = ctx.node()
        parents = [p.node() for p in ctx.parents() if p]
        branch = ctx.branch()
        if switch_parent:
            parents.reverse()

        if parents:
            prev = parents[0]
        else:
            prev = nullid

        shouldclose = False
        if not fp and len(template) > 0:
            desc_lines = ctx.description().rstrip().split('\n')
            desc = desc_lines[0] # commit always has a first line
            fp = makefileobj(repo, template, node, desc=desc, total=total,
                             seqno=seqno, revwidth=revwidth, mode='wb',
                             modemap=filemode)
            if fp != template:
                shouldclose = True
        if fp and fp != sys.stdout and util.safehasattr(fp, 'name'):
            repo.ui.note("%s\n" % fp.name)

        if not fp:
            write = repo.ui.write
        else:
            def write(s, **kw):
                fp.write(s)

        write("# HG changeset patch\n")
        write("# User %s\n" % ctx.user())
        write("# Date %d %d\n" % ctx.date())
        write("#      %s\n" % util.datestr(ctx.date()))
        if branch and branch != 'default':
            write("# Branch %s\n" % branch)
        write("# Node ID %s\n" % hex(node))
        write("# Parent  %s\n" % hex(prev))
        if len(parents) > 1:
            write("# Parent  %s\n" % hex(parents[1]))

        for headerid in extraexport:
            header = extraexportmap[headerid](seqno, ctx)
            if header is not None:
                write('# %s\n' % header)
        write(ctx.description().rstrip())
        write("\n\n")

        for chunk, label in patch.diffui(repo, prev, node, match, opts=opts):
            write(chunk, label=label)

        if shouldclose:
            fp.close()

    for seqno, rev in enumerate(revs):
        single(rev, seqno + 1, fp)

def diffordiffstat(ui, repo, diffopts, node1, node2, match,
                   changes=None, stat=False, fp=None, prefix='',
                   root='', listsubrepos=False):
    '''show diff or diffstat.'''
    if fp is None:
        write = ui.write
    else:
        def write(s, **kw):
            fp.write(s)

    if root:
        relroot = pathutil.canonpath(repo.root, repo.getcwd(), root)
    else:
        relroot = ''
    if relroot != '':
        # XXX relative roots currently don't work if the root is within a
        # subrepo
        uirelroot = match.uipath(relroot)
        relroot += '/'
        for matchroot in match.files():
            if not matchroot.startswith(relroot):
                ui.warn(_('warning: %s not inside relative root %s\n') % (
                    match.uipath(matchroot), uirelroot))

    if stat:
        diffopts = diffopts.copy(context=0)
        width = 80
        if not ui.plain():
            width = ui.termwidth()
        chunks = patch.diff(repo, node1, node2, match, changes, diffopts,
                            prefix=prefix, relroot=relroot)
        for chunk, label in patch.diffstatui(util.iterlines(chunks),
                                             width=width,
                                             git=diffopts.git):
            write(chunk, label=label)
    else:
        for chunk, label in patch.diffui(repo, node1, node2, match,
                                         changes, diffopts, prefix=prefix,
                                         relroot=relroot):
            write(chunk, label=label)

    if listsubrepos:
        ctx1 = repo[node1]
        ctx2 = repo[node2]
        for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
            tempnode2 = node2
            try:
                if node2 is not None:
                    tempnode2 = ctx2.substate[subpath][1]
            except KeyError:
                # A subrepo that existed in node1 was deleted between node1 and
                # node2 (inclusive). Thus, ctx2's substate won't contain that
                # subpath. The best we can do is to ignore it.
                tempnode2 = None
            submatch = matchmod.narrowmatcher(subpath, match)
            sub.diff(ui, diffopts, tempnode2, submatch, changes=changes,
                     stat=stat, fp=fp, prefix=prefix)

class changeset_printer(object):
    '''show changeset information when templating not requested.'''

    def __init__(self, ui, repo, matchfn, diffopts, buffered):
        self.ui = ui
        self.repo = repo
        self.buffered = buffered
        self.matchfn = matchfn
        self.diffopts = diffopts
        self.header = {}
        self.hunk = {}
        self.lastheader = None
        self.footer = None

    def flush(self, ctx):
        rev = ctx.rev()
        if rev in self.header:
            h = self.header[rev]
            if h != self.lastheader:
                self.lastheader = h
                self.ui.write(h)
            del self.header[rev]
        if rev in self.hunk:
            self.ui.write(self.hunk[rev])
            del self.hunk[rev]
            return 1
        return 0

    def close(self):
        if self.footer:
            self.ui.write(self.footer)

    def show(self, ctx, copies=None, matchfn=None, **props):
        if self.buffered:
            self.ui.pushbuffer(labeled=True)
            self._show(ctx, copies, matchfn, props)
            self.hunk[ctx.rev()] = self.ui.popbuffer()
        else:
            self._show(ctx, copies, matchfn, props)

    def _show(self, ctx, copies, matchfn, props):
        '''show a single changeset or file revision'''
        changenode = ctx.node()
        rev = ctx.rev()
        if self.ui.debugflag:
            hexfunc = hex
        else:
            hexfunc = short
        # as of now, wctx.node() and wctx.rev() return None, but we want to
        # show the same values as {node} and {rev} templatekw
        revnode = (scmutil.intrev(rev), hexfunc(bin(ctx.hex())))

        if self.ui.quiet:
            self.ui.write("%d:%s\n" % revnode, label='log.node')
            return

        date = util.datestr(ctx.date())

        # i18n: column positioning for "hg log"
        self.ui.write(_("changeset:   %d:%s\n") % revnode,
                      label='log.changeset changeset.%s' % ctx.phasestr())

        # branches are shown first before any other names due to backwards
        # compatibility
        branch = ctx.branch()
        # don't show the default branch name
        if branch != 'default':
            # i18n: column positioning for "hg log"
            self.ui.write(_("branch:      %s\n") % branch,
                          label='log.branch')

        for name, ns in self.repo.names.iteritems():
            # branches has special logic already handled above, so here we just
            # skip it
            if name == 'branches':
                continue
            # we will use the templatename as the color name since those two
            # should be the same
            for name in ns.names(self.repo, changenode):
                self.ui.write(ns.logfmt % name,
                              label='log.%s' % ns.colorname)
        if self.ui.debugflag:
            # i18n: column positioning for "hg log"
            self.ui.write(_("phase:       %s\n") % ctx.phasestr(),
                          label='log.phase')
        for pctx in scmutil.meaningfulparents(self.repo, ctx):
            label = 'log.parent changeset.%s' % pctx.phasestr()
            # i18n: column positioning for "hg log"
            self.ui.write(_("parent:      %d:%s\n")
                          % (pctx.rev(), hexfunc(pctx.node())),
                          label=label)

        if self.ui.debugflag and rev is not None:
            mnode = ctx.manifestnode()
            # i18n: column positioning for "hg log"
            self.ui.write(_("manifest:    %d:%s\n") %
                          (self.repo.manifest.rev(mnode), hex(mnode)),
                          label='ui.debug log.manifest')
        # i18n: column positioning for "hg log"
        self.ui.write(_("user:        %s\n") % ctx.user(),
                      label='log.user')
        # i18n: column positioning for "hg log"
        self.ui.write(_("date:        %s\n") % date,
                      label='log.date')

        if self.ui.debugflag:
            files = ctx.p1().status(ctx)[:3]
            for key, value in zip([# i18n: column positioning for "hg log"
                                   _("files:"),
                                   # i18n: column positioning for "hg log"
                                   _("files+:"),
                                   # i18n: column positioning for "hg log"
                                   _("files-:")], files):
                if value:
                    self.ui.write("%-12s %s\n" % (key, " ".join(value)),
                                  label='ui.debug log.files')
        elif ctx.files() and self.ui.verbose:
            # i18n: column positioning for "hg log"
            self.ui.write(_("files:       %s\n") % " ".join(ctx.files()),
                          label='ui.note log.files')
        if copies and self.ui.verbose:
            copies = ['%s (%s)' % c for c in copies]
            # i18n: column positioning for "hg log"
            self.ui.write(_("copies:      %s\n") % ' '.join(copies),
                          label='ui.note log.copies')

        extra = ctx.extra()
        if extra and self.ui.debugflag:
            for key, value in sorted(extra.items()):
                # i18n: column positioning for "hg log"
                self.ui.write(_("extra:       %s=%s\n")
                              % (key, value.encode('string_escape')),
                              label='ui.debug log.extra')

        description = ctx.description().strip()
        if description:
            if self.ui.verbose:
                self.ui.write(_("description:\n"),
                              label='ui.note log.description')
                self.ui.write(description,
                              label='ui.note log.description')
                self.ui.write("\n\n")
            else:
                # i18n: column positioning for "hg log"
                self.ui.write(_("summary:     %s\n") %
                              description.splitlines()[0],
                              label='log.summary')
        self.ui.write("\n")

        self.showpatch(ctx, matchfn)

    def showpatch(self, ctx, matchfn):
        if not matchfn:
            matchfn = self.matchfn
        if matchfn:
            stat = self.diffopts.get('stat')
            diff = self.diffopts.get('patch')
            diffopts = patch.diffallopts(self.ui, self.diffopts)
            node = ctx.node()
            prev = ctx.p1()
            if stat:
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=True)
            if diff:
                if stat:
                    self.ui.write("\n")
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=False)
            self.ui.write("\n")

class jsonchangeset(changeset_printer):
    '''format changeset information.'''

    def __init__(self, ui, repo, matchfn, diffopts, buffered):
        changeset_printer.__init__(self, ui, repo, matchfn, diffopts, buffered)
        self.cache = {}
        self._first = True

    def close(self):
        if not self._first:
            self.ui.write("\n]\n")
        else:
            self.ui.write("[]\n")

    def _show(self, ctx, copies, matchfn, props):
        '''show a single changeset or file revision'''
        rev = ctx.rev()
        if rev is None:
            jrev = jnode = 'null'
        else:
            jrev = str(rev)
            jnode = '"%s"' % hex(ctx.node())
        j = encoding.jsonescape

        if self._first:
            self.ui.write("[\n {")
            self._first = False
        else:
            self.ui.write(",\n {")

        if self.ui.quiet:
            self.ui.write('\n  "rev": %s' % jrev)
            self.ui.write(',\n  "node": %s' % jnode)
            self.ui.write('\n }')
            return

        self.ui.write('\n  "rev": %s' % jrev)
        self.ui.write(',\n  "node": %s' % jnode)
        self.ui.write(',\n  "branch": "%s"' % j(ctx.branch()))
        self.ui.write(',\n  "phase": "%s"' % ctx.phasestr())
        self.ui.write(',\n  "user": "%s"' % j(ctx.user()))
        self.ui.write(',\n  "date": [%d, %d]' % ctx.date())
        self.ui.write(',\n  "desc": "%s"' % j(ctx.description()))

        self.ui.write(',\n  "bookmarks": [%s]' %
                      ", ".join('"%s"' % j(b) for b in ctx.bookmarks()))
        self.ui.write(',\n  "tags": [%s]' %
                      ", ".join('"%s"' % j(t) for t in ctx.tags()))
        self.ui.write(',\n  "parents": [%s]' %
                      ", ".join('"%s"' % c.hex() for c in ctx.parents()))

        if self.ui.debugflag:
            if rev is None:
                jmanifestnode = 'null'
            else:
                jmanifestnode = '"%s"' % hex(ctx.manifestnode())
            self.ui.write(',\n  "manifest": %s' % jmanifestnode)

            self.ui.write(',\n  "extra": {%s}' %
                          ", ".join('"%s": "%s"' % (j(k), j(v))
                                    for k, v in ctx.extra().items()))

            files = ctx.p1().status(ctx)
            self.ui.write(',\n  "modified": [%s]' %
                          ", ".join('"%s"' % j(f) for f in files[0]))
            self.ui.write(',\n  "added": [%s]' %
                          ", ".join('"%s"' % j(f) for f in files[1]))
            self.ui.write(',\n  "removed": [%s]' %
                          ", ".join('"%s"' % j(f) for f in files[2]))

        elif self.ui.verbose:
            self.ui.write(',\n  "files": [%s]' %
                          ", ".join('"%s"' % j(f) for f in ctx.files()))

            if copies:
                self.ui.write(',\n  "copies": {%s}' %
                              ", ".join('"%s": "%s"' % (j(k), j(v))
                                        for k, v in copies))

        matchfn = self.matchfn
        if matchfn:
            stat = self.diffopts.get('stat')
            diff = self.diffopts.get('patch')
            diffopts = patch.difffeatureopts(self.ui, self.diffopts, git=True)
            node, prev = ctx.node(), ctx.p1().node()
            if stat:
                self.ui.pushbuffer()
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=True)
                self.ui.write(',\n  "diffstat": "%s"'
                              % j(self.ui.popbuffer()))
            if diff:
                self.ui.pushbuffer()
                diffordiffstat(self.ui, self.repo, diffopts, prev, node,
                               match=matchfn, stat=False)
                self.ui.write(',\n  "diff": "%s"' % j(self.ui.popbuffer()))

        self.ui.write("\n }")

1424 class changeset_templater(changeset_printer):
1424 class changeset_templater(changeset_printer):
1425 '''format changeset information.'''
1425 '''format changeset information.'''
1426
1426
1427 def __init__(self, ui, repo, matchfn, diffopts, tmpl, mapfile, buffered):
1427 def __init__(self, ui, repo, matchfn, diffopts, tmpl, mapfile, buffered):
1428 changeset_printer.__init__(self, ui, repo, matchfn, diffopts, buffered)
1428 changeset_printer.__init__(self, ui, repo, matchfn, diffopts, buffered)
1429 formatnode = ui.debugflag and (lambda x: x) or (lambda x: x[:12])
1429 formatnode = ui.debugflag and (lambda x: x) or (lambda x: x[:12])
1430 defaulttempl = {
1430 defaulttempl = {
1431 'parent': '{rev}:{node|formatnode} ',
1431 'parent': '{rev}:{node|formatnode} ',
1432 'manifest': '{rev}:{node|formatnode}',
1432 'manifest': '{rev}:{node|formatnode}',
1433 'file_copy': '{name} ({source})',
1433 'file_copy': '{name} ({source})',
1434 'extra': '{key}={value|stringescape}'
1434 'extra': '{key}={value|stringescape}'
1435 }
1435 }
1436 # filecopy is preserved for compatibility reasons
1436 # filecopy is preserved for compatibility reasons
1437 defaulttempl['filecopy'] = defaulttempl['file_copy']
1437 defaulttempl['filecopy'] = defaulttempl['file_copy']
1438 self.t = templater.templater(mapfile, {'formatnode': formatnode},
1438 self.t = templater.templater(mapfile, {'formatnode': formatnode},
1439 cache=defaulttempl)
1439 cache=defaulttempl)
1440 if tmpl:
1440 if tmpl:
1441 self.t.cache['changeset'] = tmpl
1441 self.t.cache['changeset'] = tmpl
1442
1442
1443 self.cache = {}
1443 self.cache = {}
1444
1444
1445 # find correct templates for current mode
1445 # find correct templates for current mode
1446 tmplmodes = [
1446 tmplmodes = [
1447 (True, None),
1447 (True, None),
1448 (self.ui.verbose, 'verbose'),
1448 (self.ui.verbose, 'verbose'),
1449 (self.ui.quiet, 'quiet'),
1449 (self.ui.quiet, 'quiet'),
1450 (self.ui.debugflag, 'debug'),
1450 (self.ui.debugflag, 'debug'),
1451 ]
1451 ]
1452
1452
1453 self._parts = {'header': '', 'footer': '', 'changeset': 'changeset',
1453 self._parts = {'header': '', 'footer': '', 'changeset': 'changeset',
1454 'docheader': '', 'docfooter': ''}
1454 'docheader': '', 'docfooter': ''}
1455 for mode, postfix in tmplmodes:
1455 for mode, postfix in tmplmodes:
1456 for t in self._parts:
1456 for t in self._parts:
1457 cur = t
1457 cur = t
1458 if postfix:
1458 if postfix:
1459 cur += "_" + postfix
1459 cur += "_" + postfix
1460 if mode and cur in self.t:
1460 if mode and cur in self.t:
1461 self._parts[t] = cur
1461 self._parts[t] = cur
1462
1462
1463 if self._parts['docheader']:
1463 if self._parts['docheader']:
1464 self.ui.write(templater.stringify(self.t(self._parts['docheader'])))
1464 self.ui.write(templater.stringify(self.t(self._parts['docheader'])))
1465
1465
1466 def close(self):
1466 def close(self):
1467 if self._parts['docfooter']:
1467 if self._parts['docfooter']:
1468 if not self.footer:
1468 if not self.footer:
1469 self.footer = ""
1469 self.footer = ""
1470 self.footer += templater.stringify(self.t(self._parts['docfooter']))
1470 self.footer += templater.stringify(self.t(self._parts['docfooter']))
1471 return super(changeset_templater, self).close()
1471 return super(changeset_templater, self).close()
1472
1472
1473 def _show(self, ctx, copies, matchfn, props):
1473 def _show(self, ctx, copies, matchfn, props):
1474 '''show a single changeset or file revision'''
1474 '''show a single changeset or file revision'''
1475 props = props.copy()
1475 props = props.copy()
1476 props.update(templatekw.keywords)
1476 props.update(templatekw.keywords)
1477 props['templ'] = self.t
1477 props['templ'] = self.t
1478 props['ctx'] = ctx
1478 props['ctx'] = ctx
1479 props['repo'] = self.repo
1479 props['repo'] = self.repo
1480 props['revcache'] = {'copies': copies}
1480 props['revcache'] = {'copies': copies}
1481 props['cache'] = self.cache
1481 props['cache'] = self.cache
1482
1482
1483 try:
1483 try:
1484 # write header
1484 # write header
1485 if self._parts['header']:
1485 if self._parts['header']:
1486 h = templater.stringify(self.t(self._parts['header'], **props))
1486 h = templater.stringify(self.t(self._parts['header'], **props))
1487 if self.buffered:
1487 if self.buffered:
1488 self.header[ctx.rev()] = h
1488 self.header[ctx.rev()] = h
1489 else:
1489 else:
1490 if self.lastheader != h:
1490 if self.lastheader != h:
1491 self.lastheader = h
1491 self.lastheader = h
1492 self.ui.write(h)
1492 self.ui.write(h)
1493
1493
1494 # write changeset metadata, then patch if requested
1494 # write changeset metadata, then patch if requested
1495 key = self._parts['changeset']
1495 key = self._parts['changeset']
1496 self.ui.write(templater.stringify(self.t(key, **props)))
1496 self.ui.write(templater.stringify(self.t(key, **props)))
1497 self.showpatch(ctx, matchfn)
1497 self.showpatch(ctx, matchfn)
1498
1498
1499 if self._parts['footer']:
1499 if self._parts['footer']:
1500 if not self.footer:
1500 if not self.footer:
1501 self.footer = templater.stringify(
1501 self.footer = templater.stringify(
1502 self.t(self._parts['footer'], **props))
1502 self.t(self._parts['footer'], **props))
1503 except KeyError as inst:
1503 except KeyError as inst:
1504 msg = _("%s: no key named '%s'")
1504 msg = _("%s: no key named '%s'")
1505 raise error.Abort(msg % (self.t.mapfile, inst.args[0]))
1505 raise error.Abort(msg % (self.t.mapfile, inst.args[0]))
1506 except SyntaxError as inst:
1506 except SyntaxError as inst:
1507 raise error.Abort('%s: %s' % (self.t.mapfile, inst.args[0]))
1507 raise error.Abort('%s: %s' % (self.t.mapfile, inst.args[0]))
1508
1508
def gettemplate(ui, tmpl, style):
    """
    Find the template matching the given template spec or style.
    """

    # ui settings
    if not tmpl and not style: # templates are stronger than styles
        tmpl = ui.config('ui', 'logtemplate')
        if tmpl:
            try:
                tmpl = templater.unquotestring(tmpl)
            except SyntaxError:
                pass
            return tmpl, None
        else:
            style = util.expandpath(ui.config('ui', 'style', ''))

    if not tmpl and style:
        mapfile = style
        if not os.path.split(mapfile)[0]:
            mapname = (templater.templatepath('map-cmdline.' + mapfile)
                       or templater.templatepath(mapfile))
            if mapname:
                mapfile = mapname
        return None, mapfile

    if not tmpl:
        return None, None

    return formatter.lookuptemplate(ui, 'changeset', tmpl)

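The precedence that `gettemplate` and `show_changeset` implement can be condensed into a standalone sketch: an explicit template wins, then an explicit style, then the `ui.logtemplate` setting, then `ui.style`. The function below is purely illustrative (its name and return convention are made up, not Mercurial API).

```python
# Hypothetical condensation of the template/style precedence above.
def picktemplate(tmpl, style, logtemplate, uistyle):
    if not tmpl and not style:       # templates are stronger than styles
        if logtemplate:
            return ('tmpl', logtemplate)
        style = uistyle
    if not tmpl and style:
        return ('mapfile', style)    # a style name resolves to a map file
    if not tmpl:
        return (None, None)
    return ('tmpl', tmpl)

print(picktemplate(None, None, '{rev}\n', 'compact'))  # logtemplate wins
print(picktemplate(None, 'compact', '{rev}\n', ''))    # explicit style wins
```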
def show_changeset(ui, repo, opts, buffered=False):
    """show one changeset using template or regular display.

    Display format will be the first non-empty hit of:
    1. option 'template'
    2. option 'style'
    3. [ui] setting 'logtemplate'
    4. [ui] setting 'style'
    If all of these values are either unset or the empty string,
    regular display via changeset_printer() is done.
    """
    # options
    matchfn = None
    if opts.get('patch') or opts.get('stat'):
        matchfn = scmutil.matchall(repo)

    if opts.get('template') == 'json':
        return jsonchangeset(ui, repo, matchfn, opts, buffered)

    tmpl, mapfile = gettemplate(ui, opts.get('template'), opts.get('style'))

    if not tmpl and not mapfile:
        return changeset_printer(ui, repo, matchfn, opts, buffered)

    try:
        t = changeset_templater(ui, repo, matchfn, opts, tmpl, mapfile,
                                buffered)
    except SyntaxError as inst:
        raise error.Abort(inst.args[0])
    return t

def showmarker(ui, marker):
    """utility function to display obsolescence marker in a readable way

    To be used by debug function."""
    ui.write(hex(marker.precnode()))
    for repl in marker.succnodes():
        ui.write(' ')
        ui.write(hex(repl))
    ui.write(' %X ' % marker.flags())
    parents = marker.parentnodes()
    if parents is not None:
        ui.write('{%s} ' % ', '.join(hex(p) for p in parents))
    ui.write('(%s) ' % util.datestr(marker.date()))
    ui.write('{%s}' % (', '.join('%r: %r' % t for t in
                                 sorted(marker.metadata().items())
                                 if t[0] != 'date')))
    ui.write('\n')

def finddate(ui, repo, date):
    """Find the tipmost changeset that matches the given date spec"""

    df = util.matchdate(date)
    m = scmutil.matchall(repo)
    results = {}

    def prep(ctx, fns):
        d = ctx.date()
        if df(d[0]):
            results[ctx.rev()] = d

    for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
        rev = ctx.rev()
        if rev in results:
            ui.status(_("found revision %s from %s\n") %
                      (rev, util.datestr(results[rev])))
            return str(rev)

    raise error.Abort(_("revision matching date not found"))

def increasingwindows(windowsize=8, sizelimit=512):
    while True:
        yield windowsize
        if windowsize < sizelimit:
            windowsize *= 2

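The window-size schedule above is easy to check in isolation: sizes double from the starting value and then plateau at the cap. The snippet below repeats the generator verbatim only so that it runs standalone.

```python
from itertools import islice

def increasingwindows(windowsize=8, sizelimit=512):
    # same logic as in cmdutil: double each window until the cap is
    # reached, then keep yielding the cap forever
    while True:
        yield windowsize
        if windowsize < sizelimit:
            windowsize *= 2

print(list(islice(increasingwindows(), 9)))
# -> [8, 16, 32, 64, 128, 256, 512, 512, 512]
```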
class FileWalkError(Exception):
    pass

def walkfilerevs(repo, match, follow, revs, fncache):
    '''Walks the file history for the matched files.

    Returns the changeset revs that are involved in the file history.

    Throws FileWalkError if the file history can't be walked using
    filelogs alone.
    '''
    wanted = set()
    copies = []
    minrev, maxrev = min(revs), max(revs)
    def filerevgen(filelog, last):
        """
        Only files, no patterns.  Check the history of each file.

        Examines filelog entries within minrev, maxrev linkrev range
        Returns an iterator yielding (linkrev, parentlinkrevs, copied)
        tuples in backwards order
        """
        cl_count = len(repo)
        revs = []
        for j in xrange(0, last + 1):
            linkrev = filelog.linkrev(j)
            if linkrev < minrev:
                continue
            # only yield rev for which we have the changelog, it can
            # happen while doing "hg log" during a pull or commit
            if linkrev >= cl_count:
                break

            parentlinkrevs = []
            for p in filelog.parentrevs(j):
                if p != nullrev:
                    parentlinkrevs.append(filelog.linkrev(p))
            n = filelog.node(j)
            revs.append((linkrev, parentlinkrevs,
                         follow and filelog.renamed(n)))

        return reversed(revs)
    def iterfiles():
        pctx = repo['.']
        for filename in match.files():
            if follow:
                if filename not in pctx:
                    raise error.Abort(_('cannot follow file not in parent '
                                        'revision: "%s"') % filename)
                yield filename, pctx[filename].filenode()
            else:
                yield filename, None
        for filename_node in copies:
            yield filename_node

    for file_, node in iterfiles():
        filelog = repo.file(file_)
        if not len(filelog):
            if node is None:
                # A zero count may be a directory or deleted file, so
                # try to find matching entries on the slow path.
                if follow:
                    raise error.Abort(
                        _('cannot follow nonexistent file: "%s"') % file_)
                raise FileWalkError("Cannot walk via filelog")
            else:
                continue

        if node is None:
            last = len(filelog) - 1
        else:
            last = filelog.rev(node)

        # keep track of all ancestors of the file
        ancestors = set([filelog.linkrev(last)])

        # iterate from latest to oldest revision
        for rev, flparentlinkrevs, copied in filerevgen(filelog, last):
            if not follow:
                if rev > maxrev:
                    continue
            else:
                # Note that last might not be the first interesting
                # rev to us:
                # if the file has been changed after maxrev, we'll
                # have linkrev(last) > maxrev, and we still need
                # to explore the file graph
                if rev not in ancestors:
                    continue
                # XXX insert 1327 fix here
                if flparentlinkrevs:
                    ancestors.update(flparentlinkrevs)

            fncache.setdefault(rev, []).append(file_)
            wanted.add(rev)
            if copied:
                copies.append(copied)

    return wanted

class _followfilter(object):
    def __init__(self, repo, onlyfirst=False):
        self.repo = repo
        self.startrev = nullrev
        self.roots = set()
        self.onlyfirst = onlyfirst

    def match(self, rev):
        def realparents(rev):
            if self.onlyfirst:
                return self.repo.changelog.parentrevs(rev)[0:1]
            else:
                return filter(lambda x: x != nullrev,
                              self.repo.changelog.parentrevs(rev))

        if self.startrev == nullrev:
            self.startrev = rev
            return True

        if rev > self.startrev:
            # forward: all descendants
            if not self.roots:
                self.roots.add(self.startrev)
            for parent in realparents(rev):
                if parent in self.roots:
                    self.roots.add(rev)
                    return True
        else:
            # backwards: all parents
            if not self.roots:
                self.roots.update(realparents(self.startrev))
            if rev in self.roots:
                self.roots.remove(rev)
                self.roots.update(realparents(rev))
                return True

        return False

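The descendant-tracking logic of `_followfilter` can be exercised without a real repository by feeding it a stub changelog. The snippet below is a condensed, self-contained copy of the class above plus fake repo objects (all `fake*` names are illustrative stand-ins, not Mercurial APIs); the stub history is 0 <- 1 <- 2 with rev 3 branching off 0, so a forward walk from rev 1 accepts 2 but rejects 3.

```python
nullrev = -1

class fakechangelog(object):
    # parentrevs(rev) -> (p1, p2); nullrev means "no parent"
    parents = {0: (-1, -1), 1: (0, -1), 2: (1, -1), 3: (0, -1)}
    def parentrevs(self, rev):
        return self.parents[rev]

class fakerepo(object):
    changelog = fakechangelog()

class followfilter(object):  # condensed copy of _followfilter above
    def __init__(self, repo, onlyfirst=False):
        self.repo = repo
        self.startrev = nullrev
        self.roots = set()
        self.onlyfirst = onlyfirst

    def match(self, rev):
        def realparents(rev):
            if self.onlyfirst:
                return self.repo.changelog.parentrevs(rev)[0:1]
            return [p for p in self.repo.changelog.parentrevs(rev)
                    if p != nullrev]
        if self.startrev == nullrev:
            self.startrev = rev   # first rev seen becomes the anchor
            return True
        if rev > self.startrev:
            # forward: accept revs with an already-accepted parent
            if not self.roots:
                self.roots.add(self.startrev)
            for parent in realparents(rev):
                if parent in self.roots:
                    self.roots.add(rev)
                    return True
        else:
            # backwards: walk the parent set
            if not self.roots:
                self.roots.update(realparents(self.startrev))
            if rev in self.roots:
                self.roots.remove(rev)
                self.roots.update(realparents(rev))
                return True
        return False

ff = followfilter(fakerepo())
print([r for r in (1, 2, 3) if ff.match(r)])  # -> [1, 2]
```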
def walkchangerevs(repo, match, opts, prepare):
    '''Iterate over files and the revs in which they changed.

    Callers most commonly need to iterate backwards over the history
    in which they are interested. Doing so has awful (quadratic-looking)
    performance, so we use iterators in a "windowed" way.

    We walk a window of revisions in the desired order. Within the
    window, we first walk forwards to gather data, then in the desired
    order (usually backwards) to display it.

    This function returns an iterator yielding contexts. Before
    yielding each context, the iterator will first call the prepare
    function on each context in the window in forward order.'''

    follow = opts.get('follow') or opts.get('follow_first')
    revs = _logrevs(repo, opts)
    if not revs:
        return []
    wanted = set()
    slowpath = match.anypats() or ((match.isexact() or match.prefix()) and
                                   opts.get('removed'))
    fncache = {}
    change = repo.changectx

    # First step is to fill wanted, the set of revisions that we want to yield.
    # When it does not induce extra cost, we also fill fncache for revisions in
    # wanted: a cache of filenames that were changed (ctx.files()) and that
    # match the file filtering conditions.

    if match.always():
        # No files, no patterns.  Display all revs.
        wanted = revs
    elif not slowpath:
        # We only have to read through the filelog to find wanted revisions

        try:
            wanted = walkfilerevs(repo, match, follow, revs, fncache)
        except FileWalkError:
            slowpath = True

            # We decided to fall back to the slowpath because at least one
            # of the paths was not a file. Check to see if at least one of them
            # existed in history, otherwise simply return
            for path in match.files():
                if path == '.' or path in repo.store:
                    break
            else:
                return []

    if slowpath:
        # We have to read the changelog to match filenames against
        # changed files

        if follow:
            raise error.Abort(_('can only follow copies/renames for explicit '
                                'filenames'))

        # The slow path checks files modified in every changeset.
        # This is really slow on large repos, so compute the set lazily.
        class lazywantedset(object):
            def __init__(self):
                self.set = set()
                self.revs = set(revs)

            # No need to worry about locality here because it will be accessed
            # in the same order as the increasing window below.
            def __contains__(self, value):
                if value in self.set:
                    return True
                elif not value in self.revs:
                    return False
                else:
                    self.revs.discard(value)
                    ctx = change(value)
                    matches = filter(match, ctx.files())
                    if matches:
                        fncache[value] = matches
                        self.set.add(value)
                        return True
                    return False

            def discard(self, value):
                self.revs.discard(value)
                self.set.discard(value)

        wanted = lazywantedset()

    # it might be worthwhile to do this in the iterator if the rev range
    # is descending and the prune args are all within that range
    for rev in opts.get('prune', ()):
        rev = repo[rev].rev()
        ff = _followfilter(repo)
        stop = min(revs[0], revs[-1])
        for x in xrange(rev, stop - 1, -1):
            if ff.match(x):
                wanted = wanted - [x]

    # Now that wanted is correctly initialized, we can iterate over the
    # revision range, yielding only revisions in wanted.
    def iterate():
        if follow and match.always():
            ff = _followfilter(repo, onlyfirst=opts.get('follow_first'))
            def want(rev):
                return ff.match(rev) and rev in wanted
        else:
            def want(rev):
                return rev in wanted

        it = iter(revs)
        stopiteration = False
        for windowsize in increasingwindows():
            nrevs = []
            for i in xrange(windowsize):
                rev = next(it, None)
                if rev is None:
                    stopiteration = True
                    break
                elif want(rev):
                    nrevs.append(rev)
            for rev in sorted(nrevs):
                fns = fncache.get(rev)
                ctx = change(rev)
                if not fns:
                    def fns_generator():
                        for f in ctx.files():
                            if match(f):
                                yield f
                    fns = fns_generator()
                prepare(ctx, fns)
            for rev in nrevs:
                yield change(rev)

            if stopiteration:
                break

    return iterate()
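Stripped of the `prepare`/`fncache` machinery, the windowed consumption in `iterate()` reduces to the sketch below (all names are illustrative): pull revisions from an iterator in exponentially growing batches, keep the wanted ones, and stop once the iterator runs dry.

```python
# Hypothetical reduction of the windowed traversal above: grow the
# batch size geometrically so early results appear quickly while the
# full walk still amortizes to linear cost.
def windowed(revs, want):
    it = iter(revs)
    size = 2
    result = []
    while True:
        exhausted = False
        window = []
        for _ in range(size):
            rev = next(it, None)
            if rev is None:       # source exhausted mid-window
                exhausted = True
                break
            if want(rev):
                window.append(rev)
        result.extend(window)     # emit the filtered window in order
        if exhausted:
            return result
        size *= 2                 # next window is twice as large

print(windowed(range(10), lambda r: r % 2 == 0))  # -> [0, 2, 4, 6, 8]
```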

def _makefollowlogfilematcher(repo, files, followfirst):
    # When displaying a revision with --patch --follow FILE, we have
    # to know which file of the revision must be diffed. With
    # --follow, we want the names of the ancestors of FILE in the
    # revision, stored in "fcache". "fcache" is populated by
    # reproducing the graph traversal already done by --follow revset
    # and relating linkrevs to file names (which is not "correct" but
    # good enough).
    fcache = {}
    fcacheready = [False]
    pctx = repo['.']

    def populate():
        for fn in files:
            for i in ((pctx[fn],), pctx[fn].ancestors(followfirst=followfirst)):
                for c in i:
                    fcache.setdefault(c.linkrev(), set()).add(c.path())

    def filematcher(rev):
        if not fcacheready[0]:
            # Lazy initialization
            fcacheready[0] = True
            populate()
        return scmutil.matchfiles(repo, fcache.get(rev, []))

    return filematcher

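`fcacheready = [False]` is the classic Python 2 workaround for the missing `nonlocal` keyword: a one-element list gives the nested function a mutable cell it can flip, so the expensive `populate()` runs exactly once, on first lookup. A minimal standalone illustration (the names below are made up for the example):

```python
# Hypothetical demo of the one-element-list idiom used by
# _makefollowlogfilematcher for lazy, run-once initialization.
def makematcher(populate):
    ready = [False]   # mutable cell the closure can write through
    cache = {}

    def matcher(key):
        if not ready[0]:
            ready[0] = True     # flip once; later calls skip populate()
            populate(cache)
        return cache.get(key, [])

    return matcher

calls = []
def fill(cache):
    calls.append('populated')
    cache[1] = ['a']

m = makematcher(fill)
print(m(1), m(2), len(calls))   # populate ran exactly once
```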
1919 def _makenofollowlogfilematcher(repo, pats, opts):
1919 def _makenofollowlogfilematcher(repo, pats, opts):
1920 '''hook for extensions to override the filematcher for non-follow cases'''
1920 '''hook for extensions to override the filematcher for non-follow cases'''
1921 return None
1921 return None
1922
1922
def _makelogrevset(repo, pats, opts, revs):
    """Return (expr, filematcher) where expr is a revset string built
    from log options and file patterns or None. If --stat or --patch
    are not passed filematcher is None. Otherwise it is a callable
    taking a revision number and returning a match object filtering
    the files to be detailed when displaying the revision.
    """
    opt2revset = {
        'no_merges': ('not merge()', None),
        'only_merges': ('merge()', None),
        '_ancestors': ('ancestors(%(val)s)', None),
        '_fancestors': ('_firstancestors(%(val)s)', None),
        '_descendants': ('descendants(%(val)s)', None),
        '_fdescendants': ('_firstdescendants(%(val)s)', None),
        '_matchfiles': ('_matchfiles(%(val)s)', None),
        'date': ('date(%(val)r)', None),
        'branch': ('branch(%(val)r)', ' or '),
        '_patslog': ('filelog(%(val)r)', ' or '),
        '_patsfollow': ('follow(%(val)r)', ' or '),
        '_patsfollowfirst': ('_followfirst(%(val)r)', ' or '),
        'keyword': ('keyword(%(val)r)', ' or '),
        'prune': ('not (%(val)r or ancestors(%(val)r))', ' and '),
        'user': ('user(%(val)r)', ' or '),
    }

    opts = dict(opts)
    # follow or not follow?
    follow = opts.get('follow') or opts.get('follow_first')
    if opts.get('follow_first'):
        followfirst = 1
    else:
        followfirst = 0
    # --follow with FILE behavior depends on revs...
    it = iter(revs)
    startrev = next(it)
    followdescendants = startrev < next(it, startrev)

    # branch and only_branch are really aliases and must be handled at
    # the same time
    opts['branch'] = opts.get('branch', []) + opts.get('only_branch', [])
    opts['branch'] = [repo.lookupbranch(b) for b in opts['branch']]
    # pats/include/exclude are passed to match.match() directly in
    # _matchfiles() revset but walkchangerevs() builds its matcher with
    # scmutil.match(). The difference is input pats are globbed on
    # platforms without shell expansion (Windows).
    wctx = repo[None]
    match, pats = scmutil.matchandpats(wctx, pats, opts)
    slowpath = match.anypats() or ((match.isexact() or match.prefix()) and
                                   opts.get('removed'))
    if not slowpath:
        for f in match.files():
            if follow and f not in wctx:
                # If the file exists, it may be a directory, so let it
                # take the slow path.
                if os.path.exists(repo.wjoin(f)):
                    slowpath = True
                    continue
                else:
                    raise error.Abort(_('cannot follow file not in parent '
                                        'revision: "%s"') % f)
            filelog = repo.file(f)
            if not filelog:
                # A zero count may be a directory or deleted file, so
                # try to find matching entries on the slow path.
                if follow:
                    raise error.Abort(
                        _('cannot follow nonexistent file: "%s"') % f)
                slowpath = True

    # We decided to fall back to the slowpath because at least one
    # of the paths was not a file. Check to see if at least one of them
    # existed in history - in that case, we'll continue down the
    # slowpath; otherwise, we can turn off the slowpath
    if slowpath:
        for path in match.files():
            if path == '.' or path in repo.store:
                break
        else:
            slowpath = False

    fpats = ('_patsfollow', '_patsfollowfirst')
    fnopats = (('_ancestors', '_fancestors'),
               ('_descendants', '_fdescendants'))
    if slowpath:
        # See walkchangerevs() slow path.
        #
        # pats/include/exclude cannot be represented as separate
        # revset expressions as their filtering logic applies at file
        # level. For instance "-I a -X a" matches a revision touching
        # "a" and "b" while "file(a) and not file(b)" does
        # not. Besides, filesets are evaluated against the working
        # directory.
        matchargs = ['r:', 'd:relpath']
        for p in pats:
            matchargs.append('p:' + p)
        for p in opts.get('include', []):
            matchargs.append('i:' + p)
        for p in opts.get('exclude', []):
            matchargs.append('x:' + p)
        matchargs = ','.join(('%r' % p) for p in matchargs)
        opts['_matchfiles'] = matchargs
        if follow:
            opts[fnopats[0][followfirst]] = '.'
    else:
        if follow:
            if pats:
                # follow() revset interprets its file argument as a
                # manifest entry, so use match.files(), not pats.
                opts[fpats[followfirst]] = list(match.files())
            else:
                op = fnopats[followdescendants][followfirst]
                opts[op] = 'rev(%d)' % startrev
        else:
            opts['_patslog'] = list(pats)

    filematcher = None
    if opts.get('patch') or opts.get('stat'):
        # When following files, track renames via a special matcher.
        # If we're forced to take the slowpath it means we're following
        # at least one pattern/directory, so don't bother with rename tracking.
        if follow and not match.always() and not slowpath:
            # _makefollowlogfilematcher expects its files argument to be
            # relative to the repo root, so use match.files(), not pats.
            filematcher = _makefollowlogfilematcher(repo, match.files(),
                                                    followfirst)
        else:
            filematcher = _makenofollowlogfilematcher(repo, pats, opts)
            if filematcher is None:
                filematcher = lambda rev: match

    expr = []
    for op, val in sorted(opts.iteritems()):
        if not val:
            continue
        if op not in opt2revset:
            continue
        revop, andor = opt2revset[op]
        if '%(val)' not in revop:
            expr.append(revop)
        else:
            if not isinstance(val, list):
                e = revop % {'val': val}
            else:
                e = '(' + andor.join((revop % {'val': v}) for v in val) + ')'
            expr.append(e)

    if expr:
        expr = '(' + ' and '.join(expr) + ')'
    else:
        expr = None
    return expr, filematcher

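# Example of the expression _makelogrevset() builds (an illustrative sketch,
# not an exhaustive grammar): a command line such as
#   hg log -k bug -u alice --no-merges
# populates opts so that the loop above emits, in sorted option order,
# roughly:
#   ((keyword('bug')) and not merge() and (user('alice')))
# which getlogrevs()/getgraphlogrevs() then hand to revset.match().
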
def _logrevs(repo, opts):
    # Default --rev value depends on --follow but --follow behavior
    # depends on revisions resolved from --rev...
    follow = opts.get('follow') or opts.get('follow_first')
    if opts.get('rev'):
        revs = scmutil.revrange(repo, opts['rev'])
    elif follow and repo.dirstate.p1() == nullid:
        revs = revset.baseset()
    elif follow:
        revs = repo.revs('reverse(:.)')
    else:
        revs = revset.spanset(repo)
        revs.reverse()
    return revs

def getgraphlogrevs(repo, pats, opts):
    """Return (revs, expr, filematcher) where revs is an iterable of
    revision numbers, expr is a revset string built from log options
    and file patterns or None, and used to filter 'revs'. If --stat or
    --patch are not passed filematcher is None. Otherwise it is a
    callable taking a revision number and returning a match object
    filtering the files to be detailed when displaying the revision.
    """
    limit = loglimit(opts)
    revs = _logrevs(repo, opts)
    if not revs:
        return revset.baseset(), None, None
    expr, filematcher = _makelogrevset(repo, pats, opts, revs)
    if opts.get('rev'):
        # User-specified revs might be unsorted, but don't sort before
        # _makelogrevset because it might depend on the order of revs
        revs.sort(reverse=True)
    if expr:
        # Revset matchers often operate faster on revisions in changelog
        # order, because most filters deal with the changelog.
        revs.reverse()
        matcher = revset.match(repo.ui, expr)
        # Revset matches can reorder revisions. "A or B" typically
        # returns the revision matching A then the revision matching B.
        # Sort again to fix that.
        revs = matcher(repo, revs)
        revs.sort(reverse=True)
    if limit is not None:
        limitedrevs = []
        for idx, rev in enumerate(revs):
            if idx >= limit:
                break
            limitedrevs.append(rev)
        revs = revset.baseset(limitedrevs)

    return revs, expr, filematcher

def getlogrevs(repo, pats, opts):
    """Return (revs, expr, filematcher) where revs is an iterable of
    revision numbers, expr is a revset string built from log options
    and file patterns or None, and used to filter 'revs'. If --stat or
    --patch are not passed filematcher is None. Otherwise it is a
    callable taking a revision number and returning a match object
    filtering the files to be detailed when displaying the revision.
    """
    limit = loglimit(opts)
    revs = _logrevs(repo, opts)
    if not revs:
        return revset.baseset([]), None, None
    expr, filematcher = _makelogrevset(repo, pats, opts, revs)
    if expr:
        # Revset matchers often operate faster on revisions in changelog
        # order, because most filters deal with the changelog.
        if not opts.get('rev'):
            revs.reverse()
        matcher = revset.match(repo.ui, expr)
        # Revset matches can reorder revisions. "A or B" typically
        # returns the revision matching A then the revision matching B.
        # Sort again to fix that.
        revs = matcher(repo, revs)
        if not opts.get('rev'):
            revs.sort(reverse=True)
    if limit is not None:
        limitedrevs = []
        for idx, r in enumerate(revs):
            if limit <= idx:
                break
            limitedrevs.append(r)
        revs = revset.baseset(limitedrevs)

    return revs, expr, filematcher

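# The limit loop in getlogrevs()/getgraphlogrevs() above is, in spirit,
# itertools.islice() done by hand so the result can be rewrapped in a
# revset.baseset (a sketch; itertools is not otherwise used here):
#   limitedrevs = list(itertools.islice(revs, limit))
#   revs = revset.baseset(limitedrevs)
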
def _graphnodeformatter(ui, displayer):
    spec = ui.config('ui', 'graphnodetemplate')
    if not spec:
        return templatekw.showgraphnode  # fast path for "{graphnode}"

    templ = formatter.gettemplater(ui, 'graphnode', spec)
    cache = {}
    if isinstance(displayer, changeset_templater):
        cache = displayer.cache  # reuse cache of slow templates
    props = templatekw.keywords.copy()
    props['templ'] = templ
    props['cache'] = cache
    def formatnode(repo, ctx):
        props['ctx'] = ctx
        props['repo'] = repo
        props['revcache'] = {}
        return templater.stringify(templ('graphnode', **props))
    return formatnode

def displaygraph(ui, repo, dag, displayer, edgefn, getrenamed=None,
                 filematcher=None):
    formatnode = _graphnodeformatter(ui, displayer)
    seen, state = [], graphmod.asciistate()
    for rev, type, ctx, parents in dag:
        char = formatnode(repo, ctx)
        copies = None
        if getrenamed and ctx.rev():
            copies = []
            for fn in ctx.files():
                rename = getrenamed(fn, ctx.rev())
                if rename:
                    copies.append((fn, rename[0]))
        revmatchfn = None
        if filematcher is not None:
            revmatchfn = filematcher(ctx.rev())
        displayer.show(ctx, copies=copies, matchfn=revmatchfn)
        lines = displayer.hunk.pop(rev).split('\n')
        if not lines[-1]:
            del lines[-1]
        displayer.flush(ctx)
        edges = edgefn(type, char, lines, seen, rev, parents)
        for type, char, lines, coldata in edges:
            graphmod.ascii(ui, state, type, char, lines, coldata)
    displayer.close()

def graphlog(ui, repo, *pats, **opts):
    # Parameters are identical to log command ones
    revs, expr, filematcher = getgraphlogrevs(repo, pats, opts)
    revdag = graphmod.dagwalker(repo, revs)

    getrenamed = None
    if opts.get('copies'):
        endrev = None
        if opts.get('rev'):
            endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
        getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
    displayer = show_changeset(ui, repo, opts, buffered=True)
    displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges, getrenamed,
                 filematcher)

def checkunsupportedgraphflags(pats, opts):
    for op in ["newest_first"]:
        if op in opts and opts[op]:
            raise error.Abort(_("-G/--graph option is incompatible with --%s")
                              % op.replace("_", "-"))

def graphrevs(repo, nodes, opts):
    limit = loglimit(opts)
    nodes.reverse()
    if limit is not None:
        nodes = nodes[:limit]
    return graphmod.nodes(repo, nodes)

def add(ui, repo, match, prefix, explicitonly, **opts):
    join = lambda f: os.path.join(prefix, f)
    bad = []

    badfn = lambda x, y: bad.append(x) or match.bad(x, y)
    names = []
    wctx = repo[None]
    cca = None
    abort, warn = scmutil.checkportabilityalert(ui)
    if abort or warn:
        cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)

    badmatch = matchmod.badmatch(match, badfn)
    dirstate = repo.dirstate
    # We don't want to just call wctx.walk here, since it would return a lot of
    # clean files, which we aren't interested in and takes time.
    for f in sorted(dirstate.walk(badmatch, sorted(wctx.substate),
                                  True, False, full=False)):
        exact = match.exact(f)
        if exact or not explicitonly and f not in wctx and repo.wvfs.lexists(f):
            if cca:
                cca(f)
            names.append(f)
            if ui.verbose or not exact:
                ui.status(_('adding %s\n') % match.rel(f))

    for subpath in sorted(wctx.substate):
        sub = wctx.sub(subpath)
        try:
            submatch = matchmod.narrowmatcher(subpath, match)
            if opts.get('subrepos'):
                bad.extend(sub.add(ui, submatch, prefix, False, **opts))
            else:
                bad.extend(sub.add(ui, submatch, prefix, True, **opts))
        except error.LookupError:
            ui.status(_("skipping missing subrepository: %s\n")
                      % join(subpath))

    if not opts.get('dry_run'):
        rejected = wctx.add(names, prefix)
        bad.extend(f for f in rejected if f in match.files())
    return bad

def forget(ui, repo, match, prefix, explicitonly):
    join = lambda f: os.path.join(prefix, f)
    bad = []
    badfn = lambda x, y: bad.append(x) or match.bad(x, y)
    wctx = repo[None]
    forgot = []

    s = repo.status(match=matchmod.badmatch(match, badfn), clean=True)
    forget = sorted(s[0] + s[1] + s[3] + s[6])
    if explicitonly:
        forget = [f for f in forget if match.exact(f)]

    for subpath in sorted(wctx.substate):
        sub = wctx.sub(subpath)
        try:
            submatch = matchmod.narrowmatcher(subpath, match)
            subbad, subforgot = sub.forget(submatch, prefix)
            bad.extend([subpath + '/' + f for f in subbad])
            forgot.extend([subpath + '/' + f for f in subforgot])
        except error.LookupError:
            ui.status(_("skipping missing subrepository: %s\n")
                      % join(subpath))

    if not explicitonly:
        for f in match.files():
            if f not in repo.dirstate and not repo.wvfs.isdir(f):
                if f not in forgot:
                    if repo.wvfs.exists(f):
                        # Don't complain if the exact case match wasn't given.
                        # But don't do this until after checking 'forgot', so
                        # that subrepo files aren't normalized, and this op is
                        # purely from data cached by the status walk above.
                        if repo.dirstate.normalize(f) in repo.dirstate:
                            continue
                        ui.warn(_('not removing %s: '
                                  'file is already untracked\n')
                                % match.rel(f))
                    bad.append(f)

    for f in forget:
        if ui.verbose or not match.exact(f):
            ui.status(_('removing %s\n') % match.rel(f))

    rejected = wctx.forget(forget, prefix)
    bad.extend(f for f in rejected if f in match.files())
    forgot.extend(f for f in forget if f not in rejected)
    return bad, forgot

def files(ui, ctx, m, fm, fmt, subrepos):
    rev = ctx.rev()
    ret = 1
    ds = ctx.repo().dirstate

    for f in ctx.matches(m):
        if rev is None and ds[f] == 'r':
            continue
        fm.startitem()
        if ui.verbose:
            fc = ctx[f]
            fm.write('size flags', '% 10d % 1s ', fc.size(), fc.flags())
        fm.data(abspath=f)
        fm.write('path', fmt, m.rel(f))
        ret = 0

    for subpath in sorted(ctx.substate):
        def matchessubrepo(subpath):
            return (m.always() or m.exact(subpath)
                    or any(f.startswith(subpath + '/') for f in m.files()))

        if subrepos or matchessubrepo(subpath):
            sub = ctx.sub(subpath)
            try:
                submatch = matchmod.narrowmatcher(subpath, m)
                if sub.printfiles(ui, submatch, fm, fmt, subrepos) == 0:
                    ret = 0
            except error.LookupError:
                ui.status(_("skipping missing subrepository: %s\n")
                          % m.abs(subpath))

    return ret

def remove(ui, repo, m, prefix, after, force, subrepos):
    join = lambda f: os.path.join(prefix, f)
    ret = 0
    s = repo.status(match=m, clean=True)
    modified, added, deleted, clean = s[0], s[1], s[3], s[6]

    wctx = repo[None]

    for subpath in sorted(wctx.substate):
        def matchessubrepo(matcher, subpath):
            if matcher.exact(subpath):
                return True
            for f in matcher.files():
                if f.startswith(subpath):
                    return True
            return False

        if subrepos or matchessubrepo(m, subpath):
            sub = wctx.sub(subpath)
            try:
                submatch = matchmod.narrowmatcher(subpath, m)
                if sub.removefiles(submatch, prefix, after, force, subrepos):
                    ret = 1
            except error.LookupError:
                ui.status(_("skipping missing subrepository: %s\n")
                          % join(subpath))

    # warn about failure to delete explicit files/dirs
    deleteddirs = util.dirs(deleted)
    for f in m.files():
        def insubrepo():
            for subpath in wctx.substate:
                if f.startswith(subpath):
                    return True
            return False

        isdir = f in deleteddirs or wctx.hasdir(f)
        if f in repo.dirstate or isdir or f == '.' or insubrepo():
            continue

        if repo.wvfs.exists(f):
            if repo.wvfs.isdir(f):
                ui.warn(_('not removing %s: no tracked files\n')
                        % m.rel(f))
            else:
                ui.warn(_('not removing %s: file is untracked\n')
2405 % m.rel(f))
2405 % m.rel(f))
2406 # missing files will generate a warning elsewhere
2406 # missing files will generate a warning elsewhere
2407 ret = 1
2407 ret = 1
2408
2408
2409 if force:
2409 if force:
2410 list = modified + deleted + clean + added
2410 list = modified + deleted + clean + added
2411 elif after:
2411 elif after:
2412 list = deleted
2412 list = deleted
2413 for f in modified + added + clean:
2413 for f in modified + added + clean:
2414 ui.warn(_('not removing %s: file still exists\n') % m.rel(f))
2414 ui.warn(_('not removing %s: file still exists\n') % m.rel(f))
2415 ret = 1
2415 ret = 1
2416 else:
2416 else:
2417 list = deleted + clean
2417 list = deleted + clean
2418 for f in modified:
2418 for f in modified:
2419 ui.warn(_('not removing %s: file is modified (use -f'
2419 ui.warn(_('not removing %s: file is modified (use -f'
2420 ' to force removal)\n') % m.rel(f))
2420 ' to force removal)\n') % m.rel(f))
2421 ret = 1
2421 ret = 1
2422 for f in added:
2422 for f in added:
2423 ui.warn(_('not removing %s: file has been marked for add'
2423 ui.warn(_('not removing %s: file has been marked for add'
2424 ' (use forget to undo)\n') % m.rel(f))
2424 ' (use forget to undo)\n') % m.rel(f))
2425 ret = 1
2425 ret = 1
2426
2426
2427 for f in sorted(list):
2427 for f in sorted(list):
2428 if ui.verbose or not m.exact(f):
2428 if ui.verbose or not m.exact(f):
2429 ui.status(_('removing %s\n') % m.rel(f))
2429 ui.status(_('removing %s\n') % m.rel(f))
2430
2430
2431 wlock = repo.wlock()
2431 wlock = repo.wlock()
2432 try:
2432 try:
2433 if not after:
2433 if not after:
2434 for f in list:
2434 for f in list:
2435 if f in added:
2435 if f in added:
2436 continue # we never unlink added files on remove
2436 continue # we never unlink added files on remove
2437 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
2437 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
2438 repo[None].forget(list)
2438 repo[None].forget(list)
2439 finally:
2439 finally:
2440 wlock.release()
2440 wlock.release()
2441
2441
2442 return ret
2442 return ret
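Both `matchessubrepo` helpers above reduce to a prefix test against the matcher's file list. A minimal standalone sketch of that check (function and variable names here are illustrative, not Mercurial API; note the first helper appends `'/'` to the prefix while the second does not):

```python
def matches_subrepo(requested_files, exact_paths, subpath):
    # A subrepo is selected when it is named exactly, or when some
    # requested file lies under its directory. The trailing '/' keeps
    # 'sub' from matching a sibling entry named 'subzero'.
    if subpath in exact_paths:
        return True
    return any(f.startswith(subpath + '/') for f in requested_files)

print(matches_subrepo(['sub/a.txt'], [], 'sub'))   # True
print(matches_subrepo(['subzero'], [], 'sub'))     # False
```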

def cat(ui, repo, ctx, matcher, prefix, **opts):
    err = 1

    def write(path):
        fp = makefileobj(repo, opts.get('output'), ctx.node(),
                         pathname=os.path.join(prefix, path))
        data = ctx[path].data()
        if opts.get('decode'):
            data = repo.wwritedata(path, data)
        fp.write(data)
        fp.close()

    # Automation often uses hg cat on single files, so special case it
    # for performance to avoid the cost of parsing the manifest.
    if len(matcher.files()) == 1 and not matcher.anypats():
        file = matcher.files()[0]
        mf = repo.manifest
        mfnode = ctx.manifestnode()
        if mfnode and mf.find(mfnode, file)[0]:
            write(file)
            return 0

    # Don't warn about "missing" files that are really in subrepos
    def badfn(path, msg):
        for subpath in ctx.substate:
            if path.startswith(subpath):
                return
        matcher.bad(path, msg)

    for abs in ctx.walk(matchmod.badmatch(matcher, badfn)):
        write(abs)
        err = 0

    for subpath in sorted(ctx.substate):
        sub = ctx.sub(subpath)
        try:
            submatch = matchmod.narrowmatcher(subpath, matcher)

            if not sub.cat(submatch, os.path.join(prefix, sub._path),
                           **opts):
                err = 0
        except error.RepoLookupError:
            ui.status(_("skipping missing subrepository: %s\n")
                      % os.path.join(prefix, subpath))

    return err

def commit(ui, repo, commitfunc, pats, opts):
    '''commit the specified files or all outstanding changes'''
    date = opts.get('date')
    if date:
        opts['date'] = util.parsedate(date)
    message = logmessage(ui, opts)
    matcher = scmutil.match(repo[None], pats, opts)

    # extract addremove carefully -- this function can be called from a command
    # that doesn't support addremove
    if opts.get('addremove'):
        if scmutil.addremove(repo, matcher, "", opts) != 0:
            raise error.Abort(
                _("failed to mark all new/missing files as added/removed"))

    return commitfunc(ui, repo, message, matcher, opts)

def amend(ui, repo, commitfunc, old, extra, pats, opts):
    # avoid cycle context -> subrepo -> cmdutil
    import context

    # amend will reuse the existing user if not specified, but the obsolete
    # marker creation requires that the current user's name is specified.
    if obsolete.isenabled(repo, obsolete.createmarkersopt):
        ui.username() # raise exception if username not set

    ui.note(_('amending changeset %s\n') % old)
    base = old.p1()
    createmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)

    wlock = lock = newid = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        tr = repo.transaction('amend')
        try:
            # See if we got a message from -m or -l, if not, open the editor
            # with the message of the changeset to amend
            message = logmessage(ui, opts)
            # ensure logfile does not conflict with later enforcement of the
            # message. potential logfile content has been processed by
            # `logmessage` anyway.
            opts.pop('logfile')
            # First, do a regular commit to record all changes in the working
            # directory (if there are any)
            ui.callhooks = False
            activebookmark = repo._activebookmark
            try:
                repo._activebookmark = None
                opts['message'] = 'temporary amend commit for %s' % old
                node = commit(ui, repo, commitfunc, pats, opts)
            finally:
                repo._activebookmark = activebookmark
                ui.callhooks = True
            ctx = repo[node]

            # Participating changesets:
            #
            # node/ctx o - new (intermediate) commit that contains changes
            #          |   from working dir to go into amending commit
            #          |   (or a workingctx if there were no changes)
            #          |
            # old      o - changeset to amend
            #          |
            # base     o - parent of amending changeset

            # Update extra dict from amended commit (e.g. to preserve graft
            # source)
            extra.update(old.extra())

            # Also update it from the intermediate commit or from the wctx
            extra.update(ctx.extra())

            if len(old.parents()) > 1:
                # ctx.files() isn't reliable for merges, so fall back to the
                # slower repo.status() method
                files = set([fn for st in repo.status(base, old)[:3]
                             for fn in st])
            else:
                files = set(old.files())

            # Second, we use either the commit we just did, or if there were no
            # changes the parent of the working directory as the version of the
            # files in the final amend commit
            if node:
                ui.note(_('copying changeset %s to %s\n') % (ctx, base))

                user = ctx.user()
                date = ctx.date()
                # Recompute copies (avoid recording a -> b -> a)
                copied = copies.pathcopies(base, ctx)
                if old.p2:
                    copied.update(copies.pathcopies(old.p2(), ctx))

                # Prune files which were reverted by the updates: if old
                # introduced file X and our intermediate commit, node,
                # renamed that file, then those two files are the same and
                # we can discard X from our list of files. Likewise if X
                # was deleted, it's no longer relevant
                files.update(ctx.files())

                def samefile(f):
                    if f in ctx.manifest():
                        a = ctx.filectx(f)
                        if f in base.manifest():
                            b = base.filectx(f)
                            return (not a.cmp(b)
                                    and a.flags() == b.flags())
                        else:
                            return False
                    else:
                        return f not in base.manifest()
                files = [f for f in files if not samefile(f)]

                def filectxfn(repo, ctx_, path):
                    try:
                        fctx = ctx[path]
                        flags = fctx.flags()
                        mctx = context.memfilectx(repo,
                                                  fctx.path(), fctx.data(),
                                                  islink='l' in flags,
                                                  isexec='x' in flags,
                                                  copied=copied.get(path))
                        return mctx
                    except KeyError:
                        return None
            else:
                ui.note(_('copying changeset %s to %s\n') % (old, base))

                # Use version of files as in the old cset
                def filectxfn(repo, ctx_, path):
                    try:
                        return old.filectx(path)
                    except KeyError:
                        return None

                user = opts.get('user') or old.user()
                date = opts.get('date') or old.date()
            editform = mergeeditform(old, 'commit.amend')
            editor = getcommiteditor(editform=editform, **opts)
            if not message:
                editor = getcommiteditor(edit=True, editform=editform)
                message = old.description()

            pureextra = extra.copy()
            if 'amend_source' in pureextra:
                del pureextra['amend_source']
            pureoldextra = old.extra()
            if 'amend_source' in pureoldextra:
                del pureoldextra['amend_source']
            extra['amend_source'] = old.hex()

            new = context.memctx(repo,
                                 parents=[base.node(), old.p2().node()],
                                 text=message,
                                 files=files,
                                 filectxfn=filectxfn,
                                 user=user,
                                 date=date,
                                 extra=extra,
                                 editor=editor)

            newdesc = changelog.stripdesc(new.description())
            if ((not node)
                and newdesc == old.description()
                and user == old.user()
                and date == old.date()
                and pureextra == pureoldextra):
                # nothing changed. continuing here would create a new node
                # anyway because of the amend_source noise.
                #
                # This is not what we expect from amend.
                return old.node()

            ph = repo.ui.config('phases', 'new-commit', phases.draft)
            try:
                if opts.get('secret'):
                    commitphase = 'secret'
                else:
                    commitphase = old.phase()
                repo.ui.setconfig('phases', 'new-commit', commitphase, 'amend')
                newid = repo.commitctx(new)
            finally:
                repo.ui.setconfig('phases', 'new-commit', ph, 'amend')
            if newid != old.node():
                # Reroute the working copy parent to the new changeset
                repo.setparents(newid, nullid)

                # Move bookmarks from old parent to amend commit
                bms = repo.nodebookmarks(old.node())
                if bms:
                    marks = repo._bookmarks
                    for bm in bms:
                        ui.debug('moving bookmarks %r from %s to %s\n' %
                                 (marks, old.hex(), hex(newid)))
                        marks[bm] = newid
                    marks.recordchange(tr)
            #commit the whole amend process
            if createmarkers:
                # mark the new changeset as successor of the rewritten one
                new = repo[newid]
                obs = [(old, (new,))]
                if node:
                    obs.append((ctx, ()))

                obsolete.createmarkers(repo, obs)
            tr.close()
        finally:
            tr.release()
        if not createmarkers and newid != old.node():
            # Strip the intermediate commit (if there was one) and the amended
            # commit
            if node:
                ui.note(_('stripping intermediate changeset %s\n') % ctx)
            ui.note(_('stripping amended changeset %s\n') % old)
            repair.strip(ui, repo, old.node(), topic='amend-backup')
    finally:
        lockmod.release(lock, wlock)
    return newid

def commiteditor(repo, ctx, subs, editform=''):
    if ctx.description():
        return ctx.description()
    return commitforceeditor(repo, ctx, subs, editform=editform,
                             unchangedmessagedetection=True)

def commitforceeditor(repo, ctx, subs, finishdesc=None, extramsg=None,
                      editform='', unchangedmessagedetection=False):
    if not extramsg:
        extramsg = _("Leave message empty to abort commit.")

    forms = [e for e in editform.split('.') if e]
    forms.insert(0, 'changeset')
    templatetext = None
    while forms:
        tmpl = repo.ui.config('committemplate', '.'.join(forms))
        if tmpl:
            templatetext = committext = buildcommittemplate(
                repo, ctx, subs, extramsg, tmpl)
            break
        forms.pop()
    else:
        committext = buildcommittext(repo, ctx, subs, extramsg)

    # run editor in the repository root
    olddir = os.getcwd()
    os.chdir(repo.root)

    # make in-memory changes visible to external process
    tr = repo.currenttransaction()
    repo.dirstate.write(tr)
    pending = tr and tr.writepending() and repo.root

    editortext = repo.ui.edit(committext, ctx.user(), ctx.extra(),
                              editform=editform, pending=pending)
    text = re.sub("(?m)^HG:.*(\n|$)", "", editortext)
    os.chdir(olddir)

    if finishdesc:
        text = finishdesc(text)
    if not text.strip():
        raise error.Abort(_("empty commit message"))
    if unchangedmessagedetection and editortext == templatetext:
        raise error.Abort(_("commit message unchanged"))

    return text
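`commitforceeditor` strips editor annotations with a single multiline regex: any line beginning with `HG:` is removed together with its newline, so no blank line is left behind. A standalone sketch of just that step (the sample message text is illustrative):

```python
import re

editortext = ("Fix the frobnicator\n"
              "HG: Enter commit message.\n"
              "HG: user: alice\n"
              "\n"
              "Details follow.\n")
# (?m) makes ^ match at the start of every line; (\n|$) consumes the
# matched line's newline so the surrounding lines join up cleanly.
text = re.sub("(?m)^HG:.*(\n|$)", "", editortext)
print(repr(text))
```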

def buildcommittemplate(repo, ctx, subs, extramsg, tmpl):
    ui = repo.ui
    tmpl, mapfile = gettemplate(ui, tmpl, None)

    try:
        t = changeset_templater(ui, repo, None, {}, tmpl, mapfile, False)
    except SyntaxError as inst:
        raise error.Abort(inst.args[0])

    for k, v in repo.ui.configitems('committemplate'):
        if k != 'changeset':
            t.t.cache[k] = v

    if not extramsg:
        extramsg = '' # ensure that extramsg is string

    ui.pushbuffer()
    t.show(ctx, extramsg=extramsg)
    return ui.popbuffer()

def hgprefix(msg):
    return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])

def buildcommittext(repo, ctx, subs, extramsg):
    edittext = []
    modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
    if ctx.description():
        edittext.append(ctx.description())
    edittext.append("")
    edittext.append("") # Empty line between message and comments.
    edittext.append(hgprefix(_("Enter commit message."
                               " Lines beginning with 'HG:' are removed.")))
    edittext.append(hgprefix(extramsg))
    edittext.append("HG: --")
    edittext.append(hgprefix(_("user: %s") % ctx.user()))
    if ctx.p2():
        edittext.append(hgprefix(_("branch merge")))
    if ctx.branch():
        edittext.append(hgprefix(_("branch '%s'") % ctx.branch()))
    if bookmarks.isactivewdirparent(repo):
        edittext.append(hgprefix(_("bookmark '%s'") % repo._activebookmark))
    edittext.extend([hgprefix(_("subrepo %s") % s) for s in subs])
    edittext.extend([hgprefix(_("added %s") % f) for f in added])
    edittext.extend([hgprefix(_("changed %s") % f) for f in modified])
    edittext.extend([hgprefix(_("removed %s") % f) for f in removed])
    if not added and not modified and not removed:
        edittext.append(hgprefix(_("no files changed")))
    edittext.append("")

    return "\n".join(edittext)
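`hgprefix` is self-contained and easy to exercise on its own: it drops empty lines and prefixes the rest, which is how `buildcommittext` turns commit metadata into the removable `HG:` comment lines that the regex in `commitforceeditor` later strips back out:

```python
def hgprefix(msg):
    # Same helper as above: prefix each non-empty line with "HG: ".
    return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])

out = hgprefix("user: alice\n\nbranch 'default'")
print(out)
# HG: user: alice
# HG: branch 'default'
```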
2808
2808
2809 def commitstatus(repo, node, branch, bheads=None, opts=None):
2809 def commitstatus(repo, node, branch, bheads=None, opts=None):
2810 if opts is None:
2810 if opts is None:
2811 opts = {}
2811 opts = {}
2812 ctx = repo[node]
2812 ctx = repo[node]
2813 parents = ctx.parents()
2813 parents = ctx.parents()
2814
2814
2815 if (not opts.get('amend') and bheads and node not in bheads and not
2815 if (not opts.get('amend') and bheads and node not in bheads and not
2816 [x for x in parents if x.node() in bheads and x.branch() == branch]):
2816 [x for x in parents if x.node() in bheads and x.branch() == branch]):
2817 repo.ui.status(_('created new head\n'))
2817 repo.ui.status(_('created new head\n'))
2818 # The message is not printed for initial roots. For the other
2818 # The message is not printed for initial roots. For the other
2819 # changesets, it is printed in the following situations:
2819 # changesets, it is printed in the following situations:
2820 #
2820 #
2821 # Par column: for the 2 parents with ...
2821 # Par column: for the 2 parents with ...
2822 # N: null or no parent
2822 # N: null or no parent
2823 # B: parent is on another named branch
2823 # B: parent is on another named branch
2824 # C: parent is a regular non head changeset
2824 # C: parent is a regular non head changeset
2825 # H: parent was a branch head of the current branch
2825 # H: parent was a branch head of the current branch
2826 # Msg column: whether we print "created new head" message
2826 # Msg column: whether we print "created new head" message
2827 # In the following, it is assumed that there already exists some
2827 # In the following, it is assumed that there already exists some
2828 # initial branch heads of the current branch, otherwise nothing is
2828 # initial branch heads of the current branch, otherwise nothing is
2829 # printed anyway.
2829 # printed anyway.
2830 #
2830 #
2831 # Par Msg Comment
2831 # Par Msg Comment
2832 # N N y additional topo root
2832 # N N y additional topo root
2833 #
2833 #
2834 # B N y additional branch root
2834 # B N y additional branch root
2835 # C N y additional topo head
2835 # C N y additional topo head
2836 # H N n usual case
2836 # H N n usual case
2837 #
2837 #
2838 # B B y weird additional branch root
2838 # B B y weird additional branch root
2839 # C B y branch merge
2839 # C B y branch merge
2840 # H B n merge with named branch
2840 # H B n merge with named branch
2841 #
2841 #
2842 # C C y additional head from merge
2842 # C C y additional head from merge
2843 # C H n merge with a head
2843 # C H n merge with a head
2844 #
2844 #
2845 # H H n head merge: head count decreases
2845 # H H n head merge: head count decreases
2846
2846
2847 if not opts.get('close_branch'):
2847 if not opts.get('close_branch'):
2848 for r in parents:
2848 for r in parents:
2849 if r.closesbranch() and r.branch() == branch:
2849 if r.closesbranch() and r.branch() == branch:
2850 repo.ui.status(_('reopening closed branch head %d\n') % r)
2850 repo.ui.status(_('reopening closed branch head %d\n') % r)
2851
2851
2852 if repo.ui.debugflag:
2852 if repo.ui.debugflag:
2853 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx.hex()))
2853 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx.hex()))
2854 elif repo.ui.verbose:
2854 elif repo.ui.verbose:
2855 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx))
2855 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx))
2856
2856
def revert(ui, repo, ctx, parents, *pats, **opts):
    parent, p2 = parents
    node = ctx.node()

    mf = ctx.manifest()
    if node == p2:
        parent = p2
    if node == parent:
        pmf = mf
    else:
        pmf = None

    # need all matching names in dirstate and manifest of target rev,
    # so have to walk both. do not print errors if files exist in one
    # but not the other. in both cases, filesets should be evaluated against
    # workingctx to get a consistent result (issue4497). this means 'set:**'
    # cannot be used to select missing files from target rev.

    # `names` is a mapping for all elements in working copy and target revision
    # The mapping is in the form:
    #   <abs path in repo> -> (<path from CWD>, <exactly specified by matcher?>)
    names = {}

    wlock = repo.wlock()
    try:
        ## filling of the `names` mapping
        # walk dirstate to fill `names`

        interactive = opts.get('interactive', False)
        wctx = repo[None]
        m = scmutil.match(wctx, pats, opts)

        # we'll need this later
        targetsubs = sorted(s for s in wctx.substate if m(s))

        if not m.always():
            for abs in repo.walk(matchmod.badmatch(m, lambda x, y: False)):
                names[abs] = m.rel(abs), m.exact(abs)

            # walk target manifest to fill `names`

            def badfn(path, msg):
                if path in names:
                    return
                if path in ctx.substate:
                    return
                path_ = path + '/'
                for f in names:
                    if f.startswith(path_):
                        return
                ui.warn("%s: %s\n" % (m.rel(path), msg))

            for abs in ctx.walk(matchmod.badmatch(m, badfn)):
                if abs not in names:
                    names[abs] = m.rel(abs), m.exact(abs)

            # Find the status of all files in `names`.
            m = scmutil.matchfiles(repo, names)

            changes = repo.status(node1=node, match=m,
                                  unknown=True, ignored=True, clean=True)
        else:
            changes = repo.status(node1=node, match=m)
            for kind in changes:
                for abs in kind:
                    names[abs] = m.rel(abs), m.exact(abs)

            m = scmutil.matchfiles(repo, names)

        modified = set(changes.modified)
        added = set(changes.added)
        removed = set(changes.removed)
        _deleted = set(changes.deleted)
        unknown = set(changes.unknown)
        unknown.update(changes.ignored)
        clean = set(changes.clean)
        modadded = set()

        # split between files known in target manifest and the others
        smf = set(mf)

        # determine the exact nature of the deleted changesets
        deladded = _deleted - smf
        deleted = _deleted - deladded

        # We need to account for the state of the file in the dirstate,
        # even when we revert against something other than the parent. This
        # will slightly alter the behavior of revert (doing a backup or not,
        # delete or just forget etc).
        if parent == node:
            dsmodified = modified
            dsadded = added
            dsremoved = removed
            # store all local modifications, useful later for rename detection
            localchanges = dsmodified | dsadded
            modified, added, removed = set(), set(), set()
        else:
            changes = repo.status(node1=parent, match=m)
            dsmodified = set(changes.modified)
            dsadded = set(changes.added)
            dsremoved = set(changes.removed)
            # store all local modifications, useful later for rename detection
            localchanges = dsmodified | dsadded

            # only take removes between the wc and the target into account
            clean |= dsremoved - removed
            dsremoved &= removed
            # distinguish between dirstate removes and the others
            removed -= dsremoved

            modadded = added & dsmodified
            added -= modadded

            # tell newly modified files apart
            dsmodified &= modified
            dsmodified |= modified & dsadded # dirstate added may need backup
            modified -= dsmodified

            # We need to wait for some post-processing to update this set
            # before making the distinction. The dirstate will be used for
            # that purpose.
            dsadded = added

        # in case of merge, files that are actually added can be reported as
        # modified, we need to post process the result
        if p2 != nullid:
            if pmf is None:
                # only need parent manifest in the merge case,
                # so do not read by default
                pmf = repo[parent].manifest()
            mergeadd = dsmodified - set(pmf)
            dsadded |= mergeadd
            dsmodified -= mergeadd

        # if f is a rename, update `names` to also revert the source
        cwd = repo.getcwd()
        for f in localchanges:
            src = repo.dirstate.copied(f)
            # XXX should we check for rename down to target node?
            if src and src not in names and repo.dirstate[src] == 'r':
                dsremoved.add(src)
                names[src] = (repo.pathto(src, cwd), True)

        # distinguish between files to forget and the others
        added = set()
        for abs in dsadded:
            if repo.dirstate[abs] != 'a':
                added.add(abs)
        dsadded -= added

        for abs in deladded:
            if repo.dirstate[abs] == 'a':
                dsadded.add(abs)
        deladded -= dsadded

        # For files marked as removed, we check if an unknown file is present
        # at the same path. If such a file exists it may need to be backed up.
        # Making the distinction at this stage helps have simpler backup
        # logic.
        removunk = set()
        for abs in removed:
            target = repo.wjoin(abs)
            if os.path.lexists(target):
                removunk.add(abs)
        removed -= removunk

        dsremovunk = set()
        for abs in dsremoved:
            target = repo.wjoin(abs)
            if os.path.lexists(target):
                dsremovunk.add(abs)
        dsremoved -= dsremovunk

        # action to be actually performed by revert
        # (<list of files>, <message>) tuple
        actions = {'revert': ([], _('reverting %s\n')),
                   'add': ([], _('adding %s\n')),
                   'remove': ([], _('removing %s\n')),
                   'drop': ([], _('removing %s\n')),
                   'forget': ([], _('forgetting %s\n')),
                   'undelete': ([], _('undeleting %s\n')),
                   'noop': (None, _('no changes needed to %s\n')),
                   'unknown': (None, _('file not managed: %s\n')),
                  }

        # "constants" that convey the backup strategy.
        # All are set to `discard` if `no-backup` is set, to avoid checking
        # no_backup lower in the code.
        # These values are ordered for comparison purposes
        backup = 2  # unconditionally do backup
        check = 1   # check if the existing file differs from target
        discard = 0 # never do backup
        if opts.get('no_backup'):
            backup = check = discard

        backupanddel = actions['remove']
        if not opts.get('no_backup'):
            backupanddel = actions['drop']

        disptable = (
            # dispatch table:
            #   file state
            #   action
            #   make backup

            ## Sets whose results will change files on disk
            # Modified compared to target, no local change
            (modified, actions['revert'], discard),
            # Modified compared to target, but local file is deleted
            (deleted, actions['revert'], discard),
            # Modified compared to target, local change
            (dsmodified, actions['revert'], backup),
            # Added since target
            (added, actions['remove'], discard),
            # Added in working directory
            (dsadded, actions['forget'], discard),
            # Added since target, have local modification
            (modadded, backupanddel, backup),
            # Added since target but file is missing in working directory
            (deladded, actions['drop'], discard),
            # Removed since target, before working copy parent
            (removed, actions['add'], discard),
            # Same as `removed` but an unknown file exists at the same path
            (removunk, actions['add'], check),
            # Removed since target, marked as such in working copy parent
            (dsremoved, actions['undelete'], discard),
            # Same as `dsremoved` but an unknown file exists at the same path
            (dsremovunk, actions['undelete'], check),
            ## the following sets do not result in any file changes
            # File with no modification
            (clean, actions['noop'], discard),
            # Existing file, not tracked anywhere
            (unknown, actions['unknown'], discard),
            )

        for abs, (rel, exact) in sorted(names.items()):
            # target file to be touched on disk (relative to cwd)
            target = repo.wjoin(abs)
            # search the entry in the dispatch table.
            # if the file is in any of these sets, it was touched in the working
            # directory parent and we are sure it needs to be reverted.
            for table, (xlist, msg), dobackup in disptable:
                if abs not in table:
                    continue
                if xlist is not None:
                    xlist.append(abs)
                    if dobackup and (backup <= dobackup
                                     or wctx[abs].cmp(ctx[abs])):
                        bakname = origpath(ui, repo, rel)
                        ui.note(_('saving current version of %s as %s\n') %
                                (rel, bakname))
                        if not opts.get('dry_run'):
                            if interactive:
                                util.copyfile(target, bakname)
                            else:
                                util.rename(target, bakname)
                    if ui.verbose or not exact:
                        if not isinstance(msg, basestring):
                            msg = msg(abs)
                        ui.status(msg % rel)
                elif exact:
                    ui.warn(msg % rel)
                break

        if not opts.get('dry_run'):
            needdata = ('revert', 'add', 'undelete')
            _revertprefetch(repo, ctx, *[actions[name][0] for name in needdata])
            _performrevert(repo, parents, ctx, actions, interactive)

        if targetsubs:
            # Revert the subrepos on the revert list
            for sub in targetsubs:
                try:
                    wctx.sub(sub).revert(ctx.substate[sub], *pats, **opts)
                except KeyError:
                    raise error.Abort("subrepository '%s' does not exist in %s!"
                                      % (sub, short(ctx.node())))
    finally:
        wlock.release()
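
The loop above is a first-match dispatch: each `disptable` entry pairs a file set with an action and a backup policy, and the first set containing the file decides what happens to it. A minimal standalone sketch of that pattern (the file names and actions here are made up for illustration, not taken from Mercurial):

```python
def dispatch(abs, disptable):
    # scan entries in order; the first file set containing `abs` wins
    for table, action, dobackup in disptable:
        if abs in table:
            return action, dobackup
    return None

# toy stand-ins for the status sets computed by revert()
modified = {'a.txt'}
removed = {'b.txt'}
disptable = (
    (modified, 'revert', False),  # modified vs. target: restore content
    (removed, 'add', True),       # removed vs. target: re-add, maybe back up
)
```

Ordering the entries from "changes files on disk" to "no-op" means more specific states shadow the catch-all ones, which is why the real table lists `dsmodified` before `clean`.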

def origpath(ui, repo, filepath):
    '''customize where .orig files are created

    Fetch user defined path from config file: [ui] origbackuppath = <path>
    Fall back to default (filepath) if not specified
    '''
    origbackuppath = ui.config('ui', 'origbackuppath', None)
    if origbackuppath is None:
        return filepath + ".orig"

    filepathfromroot = os.path.relpath(filepath, start=repo.root)
    fullorigpath = repo.wjoin(origbackuppath, filepathfromroot)

    origbackupdir = repo.vfs.dirname(fullorigpath)
    if not repo.vfs.exists(origbackupdir):
        ui.note(_('creating directory: %s\n') % origbackupdir)
        util.makedirs(origbackupdir)

    return fullorigpath + ".orig"
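
The path logic of `origpath` can be sketched without the `ui`/`repo` objects; this is a simplified stand-in (the function name and arguments are hypothetical, and it skips the directory-creation side effect):

```python
import os.path

def backup_path(filepath, repo_root, origbackuppath=None):
    """Sketch of origpath's path computation.

    `origbackuppath` plays the role of the [ui] origbackuppath config value.
    """
    if origbackuppath is None:
        # no configured backup dir: the .orig file sits next to the original
        return filepath + ".orig"
    # configured backup dir: mirror the repo-relative path underneath it
    rel = os.path.relpath(filepath, start=repo_root)
    return os.path.join(repo_root, origbackuppath, rel) + ".orig"
```

So with `origbackuppath = .hg/origbackups`, a backup of `a/b.txt` lands at `.hg/origbackups/a/b.txt.orig` inside the repository instead of cluttering the working directory.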

def _revertprefetch(repo, ctx, *files):
    """Let extensions that change the storage layer prefetch content"""
    pass

def _performrevert(repo, parents, ctx, actions, interactive=False):
    """function that actually performs all the actions computed for revert

    This is an independent function to let extensions plug in and react to
    the imminent revert.

    Make sure you have the working directory locked when calling this function.
    """
    parent, p2 = parents
    node = ctx.node()
    def checkout(f):
        fc = ctx[f]
        repo.wwrite(f, fc.data(), fc.flags())

    audit_path = pathutil.pathauditor(repo.root)
    for f in actions['forget'][0]:
        repo.dirstate.drop(f)
    for f in actions['remove'][0]:
        audit_path(f)
        try:
            util.unlinkpath(repo.wjoin(f))
        except OSError:
            pass
        repo.dirstate.remove(f)
    for f in actions['drop'][0]:
        audit_path(f)
        repo.dirstate.remove(f)

    normal = None
    if node == parent:
        # We're reverting to our parent. If possible, we'd like status
        # to report the file as clean. We have to use normallookup for
        # merges to avoid losing information about merged/dirty files.
        if p2 != nullid:
            normal = repo.dirstate.normallookup
        else:
            normal = repo.dirstate.normal

    newlyaddedandmodifiedfiles = set()
    if interactive:
        # Prompt the user for changes to revert
        torevert = [repo.wjoin(f) for f in actions['revert'][0]]
        m = scmutil.match(ctx, torevert, {})
        diffopts = patch.difffeatureopts(repo.ui, whitespace=True)
        diffopts.nodates = True
        diffopts.git = True
        reversehunks = repo.ui.configbool('experimental',
                                          'revertalternateinteractivemode',
                                          True)
        if reversehunks:
            diff = patch.diff(repo, ctx.node(), None, m, opts=diffopts)
        else:
            diff = patch.diff(repo, None, ctx.node(), m, opts=diffopts)
        originalchunks = patch.parsepatch(diff)

        try:

            chunks, opts = recordfilter(repo.ui, originalchunks)
            if reversehunks:
                chunks = patch.reversehunks(chunks)

        except patch.PatchError as err:
            raise error.Abort(_('error parsing patch: %s') % err)

        newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
        # Apply changes
        fp = cStringIO.StringIO()
        for c in chunks:
            c.write(fp)
        dopatch = fp.tell()
        fp.seek(0)
        if dopatch:
            try:
                patch.internalpatch(repo.ui, repo, fp, 1, eolmode=None)
            except patch.PatchError as err:
                raise error.Abort(str(err))
        del fp
    else:
        for f in actions['revert'][0]:
            checkout(f)
            if normal:
                normal(f)

    for f in actions['add'][0]:
        # Don't checkout modified files, they are already created by the diff
        if f not in newlyaddedandmodifiedfiles:
            checkout(f)
        repo.dirstate.add(f)

    normal = repo.dirstate.normallookup
    if node == parent and p2 == nullid:
        normal = repo.dirstate.normal
    for f in actions['undelete'][0]:
        checkout(f)
        normal(f)

    copied = copies.pathcopies(repo[parent], ctx)

    for f in actions['add'][0] + actions['undelete'][0] + actions['revert'][0]:
        if f in copied:
            repo.dirstate.copy(copied[f], f)

def command(table):
    """Returns a function object to be used as a decorator for making commands.

    This function receives a command table as its argument. The table should
    be a dict.

    The returned function can be used as a decorator for adding commands
    to that command table. This function accepts multiple arguments to define
    a command.

    The first argument is the command name.

    The options argument is an iterable of tuples defining command arguments.
    See ``mercurial.fancyopts.fancyopts()`` for the format of each tuple.

    The synopsis argument defines a short, one line summary of how to use the
    command. This shows up in the help output.

    The norepo argument defines whether the command does not require a
    local repository. Most commands operate against a repository, thus the
    default is False.

    The optionalrepo argument defines whether the command optionally requires
    a local repository.

    The inferrepo argument defines whether to try to find a repository from the
    command line arguments. If True, arguments will be examined for potential
    repository locations. See ``findrepo()``. If a repository is found, it
    will be used.
    """
    def cmd(name, options=(), synopsis=None, norepo=False, optionalrepo=False,
            inferrepo=False):
        def decorator(func):
            if synopsis:
                table[name] = func, list(options), synopsis
            else:
                table[name] = func, list(options)

            if norepo:
                # Avoid import cycle.
                import commands
                commands.norepo += ' %s' % ' '.join(parsealiases(name))

            if optionalrepo:
                import commands
                commands.optionalrepo += ' %s' % ' '.join(parsealiases(name))

            if inferrepo:
                import commands
                commands.inferrepo += ' %s' % ' '.join(parsealiases(name))

            return func
        return decorator

    return cmd
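
The docstring above describes a decorator factory. A stripped-down sketch of the same pattern, without the mercurial-specific `norepo`/`optionalrepo`/`inferrepo` bookkeeping (the `greet` command and its option tuple are invented for illustration):

```python
def make_command(table):
    # factory: returns a decorator bound to `table`
    def cmd(name, options=(), synopsis=None):
        def decorator(func):
            if synopsis:
                table[name] = func, list(options), synopsis
            else:
                table[name] = func, list(options)
            return func  # leave the function itself untouched
        return decorator
    return cmd

table = {}
cmdreg = make_command(table)

@cmdreg('greet', options=[('n', 'name', 'world', 'who to greet')],
        synopsis='greet [-n NAME]')
def greet(name='world'):
    return 'hello %s' % name
```

Registration is a side effect of decoration: after the module body runs, `table['greet']` holds the function, its option tuples, and the synopsis, which is exactly what dispatch and help generation consume.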

# a list of (ui, repo, otherpeer, opts, missing) functions called by
# commands.outgoing. "missing" is "missing" of the result of
# "findcommonoutgoing()"
outgoinghooks = util.hooks()

# a list of (ui, repo) functions called by commands.summary
summaryhooks = util.hooks()

# a list of (ui, repo, opts, changes) functions called by commands.summary.
#
# functions should return a tuple of booleans below, if 'changes' is None:
#  (whether-incomings-are-needed, whether-outgoings-are-needed)
#
# otherwise, 'changes' is a tuple of tuples below:
#  - (sourceurl, sourcebranch, sourcepeer, incoming)
#  - (desturl, destbranch, destpeer, outgoing)
summaryremotehooks = util.hooks()

# A list of state files kept by multistep operations like graft.
# Since graft cannot be aborted, it is considered 'clearable' by update.
# note: bisect is intentionally excluded
# (state file, clearable, allowcommit, error, hint)
unfinishedstates = [
    ('graftstate', True, False, _('graft in progress'),
     _("use 'hg graft --continue' or 'hg update' to abort")),
    ('updatestate', True, False, _('last update was interrupted'),
     _("use 'hg update' to get a consistent checkout"))
    ]
3347
3347
3348 def checkunfinished(repo, commit=False):
3348 def checkunfinished(repo, commit=False):
3349 '''Look for an unfinished multistep operation, like graft, and abort
3349 '''Look for an unfinished multistep operation, like graft, and abort
3350 if found. It's probably good to check this right before
3350 if found. It's probably good to check this right before
3351 bailifchanged().
3351 bailifchanged().
3352 '''
3352 '''
3353 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3353 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3354 if commit and allowcommit:
3354 if commit and allowcommit:
3355 continue
3355 continue
3356 if repo.vfs.exists(f):
3356 if repo.vfs.exists(f):
3357 raise error.Abort(msg, hint=hint)
3357 raise error.Abort(msg, hint=hint)
3358
3358
3359 def clearunfinished(repo):
3359 def clearunfinished(repo):
3360 '''Check for unfinished operations (as above), and clear the ones
3360 '''Check for unfinished operations (as above), and clear the ones
3361 that are clearable.
3361 that are clearable.
3362 '''
3362 '''
3363 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3363 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3364 if not clearable and repo.vfs.exists(f):
3364 if not clearable and repo.vfs.exists(f):
3365 raise error.Abort(msg, hint=hint)
3365 raise error.Abort(msg, hint=hint)
3366 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3366 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3367 if clearable and repo.vfs.exists(f):
3367 if clearable and repo.vfs.exists(f):
3368 util.unlink(repo.join(f))
3368 util.unlink(repo.join(f))
3369
3369
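The two functions above share one table: `checkunfinished` aborts when any state file exists (unless the state allows commits and a commit is in progress), while `clearunfinished` first aborts on any non-clearable state and only then deletes the clearable ones, so nothing is removed if the abort fires. A hypothetical, self-contained sketch of that same two-pass pattern using plain filesystem calls in place of `repo.vfs` (the `statedir` layout and `RuntimeError` stand in for Mercurial's repo object and `error.Abort`):

```python
import os
import tempfile

# (state file, clearable, allowcommit, error, hint) -- mirrors the
# shape of unfinishedstates above, with plain strings for messages.
UNFINISHED = [
    ('graftstate', True, False, 'graft in progress',
     "use 'hg graft --continue' or 'hg update' to abort"),
    ('updatestate', True, False, 'last update was interrupted',
     "use 'hg update' to get a consistent checkout"),
]

def checkunfinished(statedir, commit=False):
    # Abort if any unfinished-state file exists on disk.
    for f, clearable, allowcommit, msg, hint in UNFINISHED:
        if commit and allowcommit:
            continue
        if os.path.exists(os.path.join(statedir, f)):
            raise RuntimeError('%s (%s)' % (msg, hint))

def clearunfinished(statedir):
    # First pass: abort on any non-clearable state, touching nothing.
    for f, clearable, _ac, msg, hint in UNFINISHED:
        if not clearable and os.path.exists(os.path.join(statedir, f)):
            raise RuntimeError('%s (%s)' % (msg, hint))
    # Second pass: remove only the clearable state files.
    for f, clearable, _ac, _msg, _hint in UNFINISHED:
        path = os.path.join(statedir, f)
        if clearable and os.path.exists(path):
            os.unlink(path)

statedir = tempfile.mkdtemp()
open(os.path.join(statedir, 'graftstate'), 'w').close()
try:
    checkunfinished(statedir)
    aborted = False
except RuntimeError:
    aborted = True          # graftstate exists, so the check aborts
clearunfinished(statedir)   # graftstate is clearable, so it is removed
remains = os.path.exists(os.path.join(statedir, 'graftstate'))
```

The two-pass structure in `clearunfinished` is the point: by checking all non-clearable states before deleting anything, an abort leaves every state file in place.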
class dirstateguard(object):
    '''Restore dirstate at unexpected failure.

    At the construction, this class does:

    - write current ``repo.dirstate`` out, and
    - save ``.hg/dirstate`` into the backup file

    This restores ``.hg/dirstate`` from backup file, if ``release()``
    is invoked before ``close()``.

    This just removes the backup file at ``close()`` before ``release()``.
    '''

    def __init__(self, repo, name):
        self._repo = repo
        self._suffix = '.backup.%s.%d' % (name, id(self))
        repo.dirstate._savebackup(repo.currenttransaction(), self._suffix)
        self._active = True
        self._closed = False

    def __del__(self):
        if self._active: # still active
            # this may occur, even if this class is used correctly:
            # for example, releasing other resources like transaction
            # may raise exception before ``dirstateguard.release`` in
            # ``release(tr, ....)``.
            self._abort()

    def close(self):
        if not self._active: # already inactivated
            msg = (_("can't close already inactivated backup: dirstate%s")
                   % self._suffix)
            raise error.Abort(msg)

        self._repo.dirstate._clearbackup(self._repo.currenttransaction(),
                                         self._suffix)
        self._active = False
        self._closed = True

    def _abort(self):
        self._repo.dirstate._restorebackup(self._repo.currenttransaction(),
                                           self._suffix)
        self._active = False

    def release(self):
        if not self._closed:
            if not self._active: # already inactivated
                msg = (_("can't release already inactivated backup:"
                         " dirstate%s")
                       % self._suffix)
                raise error.Abort(msg)
            self._abort()
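The guard's lifecycle is: back up on construction, `close()` on success (drop the backup), `release()` in a finally-style cleanup (restore from the backup if `close()` never ran). A hypothetical, self-contained analogue that guards a single file instead of Mercurial's dirstate (the `fileguard` class and its file-copy backend are illustrative assumptions; the real class delegates to `dirstate._savebackup`/`_restorebackup`/`_clearbackup` against the current transaction):

```python
import os
import shutil
import tempfile

class fileguard(object):
    # Simplified analogue of dirstateguard: back a file up at
    # construction, restore it on release() unless close() ran first.
    def __init__(self, path, name):
        self._path = path
        self._backup = path + '.backup.%s.%d' % (name, id(self))
        shutil.copyfile(path, self._backup)   # save current contents
        self._active = True
        self._closed = False

    def close(self):
        # Success path: the new contents are wanted, drop the backup.
        if not self._active:
            raise RuntimeError("can't close already inactivated backup")
        os.unlink(self._backup)
        self._active = False
        self._closed = True

    def release(self):
        # Cleanup path: if close() never ran, restore from the backup.
        if self._closed:
            return
        if not self._active:
            raise RuntimeError("can't release already inactivated backup")
        shutil.copyfile(self._backup, self._path)
        os.unlink(self._backup)
        self._active = False

d = tempfile.mkdtemp()
state = os.path.join(d, 'dirstate')
with open(state, 'w') as fp:
    fp.write('original')
guard = fileguard(state, 'demo')
with open(state, 'w') as fp:
    fp.write('modified')          # simulate a failed operation
guard.release()                   # close() never ran: restore the backup
restored = open(state).read()
```

As with the real class, calling `release()` without a prior `close()` is the failure path, so the pre-operation contents come back; a successful operation would call `close()` first, making the later `release()` a no-op.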