merge: have merge.update use a matcher instead of partial fn...
Augie Fackler
r27344:43c00ca8 default
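This changeset drops the trailing "partial" argument from histedit's one direct mergemod.update() call (the lines marked with - and + in the hunk below). As a rough, hypothetical sketch of what that call-site change means for a caller doing an unfiltered working-copy update (not code from this changeset; repo and node stand in for objects the caller already has), the old and new conventions look like this:

    from mercurial import merge as mergemod

    def update_to(repo, node):
        # Old convention: a trailing "partial" file-selection function,
        # with False meaning "update all files":
        #     mergemod.update(repo, node, False, True, False)
        # After this change the partial fn is gone; an unfiltered update
        # simply drops the argument, and per the commit message callers
        # that need to restrict the update pass a matcher instead (the
        # exact keyword is not shown in this hunk).
        return mergemod.update(repo, node, False, True)  # branchmerge, force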
@@ -1,1414 +1,1414 @@
# histedit.py - interactive history editing for mercurial
#
# Copyright 2009 Augie Fackler <raf@durin42.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""interactive history editing

With this extension installed, Mercurial gains one new command: histedit. Usage
is as follows, assuming the following history::

 @ 3[tip] 7c2fd3b9020c 2009-04-27 18:04 -0500 durin42
 | Add delta
 |
 o 2 030b686bedc4 2009-04-27 18:04 -0500 durin42
 | Add gamma
 |
 o 1 c561b4e977df 2009-04-27 18:04 -0500 durin42
 | Add beta
 |
 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
   Add alpha

If you were to run ``hg histedit c561b4e977df``, you would see the following
file open in your editor::

 pick c561b4e977df Add beta
 pick 030b686bedc4 Add gamma
 pick 7c2fd3b9020c Add delta

 # Edit history between c561b4e977df and 7c2fd3b9020c
 #
 # Commits are listed from least to most recent
 #
 # Commands:
 #  p, pick = use commit
 #  e, edit = use commit, but stop for amending
 #  f, fold = use commit, but combine it with the one above
 #  r, roll = like fold, but discard this commit's description
 #  d, drop = remove commit from history
 #  m, mess = edit commit message without changing commit content
 #

In this file, lines beginning with ``#`` are ignored. You must specify a rule
for each revision in your history. For example, if you had meant to add gamma
before beta, and then wanted to add delta in the same revision as beta, you
would reorganize the file to look like this::

 pick 030b686bedc4 Add gamma
 pick c561b4e977df Add beta
 fold 7c2fd3b9020c Add delta

 # Edit history between c561b4e977df and 7c2fd3b9020c
 #
 # Commits are listed from least to most recent
 #
 # Commands:
 #  p, pick = use commit
 #  e, edit = use commit, but stop for amending
 #  f, fold = use commit, but combine it with the one above
 #  r, roll = like fold, but discard this commit's description
 #  d, drop = remove commit from history
 #  m, mess = edit commit message without changing commit content
 #

At which point you close the editor and ``histedit`` starts working. When you
specify a ``fold`` operation, ``histedit`` will open an editor when it folds
those revisions together, offering you a chance to clean up the commit message::

 Add beta
 ***
 Add delta

Edit the commit message to your liking, then close the editor. For
this example, let's assume that the commit message was changed to
``Add beta and delta.`` After histedit has run and had a chance to
remove any old or temporary revisions it needed, the history looks
like this::

 @ 2[tip] 989b4d060121 2009-04-27 18:04 -0500 durin42
 | Add beta and delta.
 |
 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
 | Add gamma
 |
 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
   Add alpha

Note that ``histedit`` does *not* remove any revisions (even its own temporary
ones) until after it has completed all the editing operations, so it will
probably perform several strip operations when it's done. For the above example,
it had to run strip twice. Strip can be slow depending on a variety of factors,
so you might need to be a little patient. You can choose to keep the original
revisions by passing the ``--keep`` flag.

The ``edit`` operation will drop you back to a command prompt,
allowing you to edit files freely, or even use ``hg record`` to commit
some changes as a separate commit. When you're done, any remaining
uncommitted changes will be committed as well. When done, run ``hg
histedit --continue`` to finish this step. You'll be prompted for a
new commit message, but the default commit message will be the
original message for the ``edit`` ed revision.

The ``message`` operation will give you a chance to revise a commit
message without changing the contents. It's a shortcut for doing
``edit`` immediately followed by ``hg histedit --continue``.

If ``histedit`` encounters a conflict when moving a revision (while
handling ``pick`` or ``fold``), it'll stop in a similar manner to
``edit`` with the difference that it won't prompt you for a commit
message when done. If you decide at this point that you don't like how
much work it will be to rearrange history, or that you made a mistake,
you can use ``hg histedit --abort`` to abandon the new changes you
have made and return to the state before you attempted to edit your
history.

If we clone the histedit-ed example repository above and add four more
changes, such that we have the following history::

 @ 6[tip] 038383181893 2009-04-27 18:04 -0500 stefan
 | Add theta
 |
 o 5 140988835471 2009-04-27 18:04 -0500 stefan
 | Add eta
 |
 o 4 122930637314 2009-04-27 18:04 -0500 stefan
 | Add zeta
 |
 o 3 836302820282 2009-04-27 18:04 -0500 stefan
 | Add epsilon
 |
 o 2 989b4d060121 2009-04-27 18:04 -0500 durin42
 | Add beta and delta.
 |
 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
 | Add gamma
 |
 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
   Add alpha

If you run ``hg histedit --outgoing`` on the clone then it is the same
as running ``hg histedit 836302820282``. If you plan to push to a
repository that Mercurial does not detect to be related to the source
repo, you can add a ``--force`` option.

Histedit rule lines are truncated to 80 characters by default. You
can customize this behavior by setting a different length in your
configuration file::

 [histedit]
 linelen = 120 # truncate rule lines at 120 characters

``hg histedit`` attempts to automatically choose an appropriate base
revision to use. To change which base revision is used, define a
revset in your configuration file::

 [histedit]
 defaultrev = only(.) & draft()
"""

try:
    import cPickle as pickle
    pickle.dump # import now
except ImportError:
    import pickle
import errno
import os
import sys

from mercurial import bundle2
from mercurial import cmdutil
from mercurial import discovery
from mercurial import error
from mercurial import copies
from mercurial import context
from mercurial import destutil
from mercurial import exchange
from mercurial import extensions
from mercurial import hg
from mercurial import node
from mercurial import repair
from mercurial import scmutil
from mercurial import util
from mercurial import obsolete
from mercurial import merge as mergemod
from mercurial.lock import release
from mercurial.i18n import _

cmdtable = {}
command = cmdutil.command(cmdtable)

class _constraints(object):
    # aborts if there are multiple rules for one node
    noduplicates = 'noduplicates'
    # aborts if the node does belong to the edited stack
    forceother = 'forceother'
    # aborts if the node doesn't belong to the edited stack
    noother = 'noother'

    @classmethod
    def known(cls):
        return set([v for k, v in cls.__dict__.items() if k[0] != '_'])

# Note for extension authors: ONLY specify testedwith = 'internal' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'internal'

# i18n: command names and abbreviations must remain untranslated
editcomment = _("""# Edit history between %s and %s
#
# Commits are listed from least to most recent
#
# Commands:
#  p, pick = use commit
#  e, edit = use commit, but stop for amending
#  f, fold = use commit, but combine it with the one above
#  r, roll = like fold, but discard this commit's description
#  d, drop = remove commit from history
#  m, mess = edit commit message without changing commit content
#
""")

class histeditstate(object):
    def __init__(self, repo, parentctxnode=None, actions=None, keep=None,
                 topmost=None, replacements=None, lock=None, wlock=None):
        self.repo = repo
        self.actions = actions
        self.keep = keep
        self.topmost = topmost
        self.parentctxnode = parentctxnode
        self.lock = lock
        self.wlock = wlock
        self.backupfile = None
        if replacements is None:
            self.replacements = []
        else:
            self.replacements = replacements

    def read(self):
        """Load histedit state from disk and set fields appropriately."""
        try:
            fp = self.repo.vfs('histedit-state', 'r')
        except IOError as err:
            if err.errno != errno.ENOENT:
                raise
            raise error.Abort(_('no histedit in progress'))

        try:
            data = pickle.load(fp)
            parentctxnode, rules, keep, topmost, replacements = data
            backupfile = None
        except pickle.UnpicklingError:
            data = self._load()
            parentctxnode, rules, keep, topmost, replacements, backupfile = data

        self.parentctxnode = parentctxnode
        rules = "\n".join(["%s %s" % (verb, rest) for [verb, rest] in rules])
        actions = parserules(rules, self)
        self.actions = actions
        self.keep = keep
        self.topmost = topmost
        self.replacements = replacements
        self.backupfile = backupfile

    def write(self):
        fp = self.repo.vfs('histedit-state', 'w')
        fp.write('v1\n')
        fp.write('%s\n' % node.hex(self.parentctxnode))
        fp.write('%s\n' % node.hex(self.topmost))
        fp.write('%s\n' % self.keep)
        fp.write('%d\n' % len(self.actions))
        for action in self.actions:
            fp.write('%s\n' % action.tostate())
        fp.write('%d\n' % len(self.replacements))
        for replacement in self.replacements:
            fp.write('%s%s\n' % (node.hex(replacement[0]), ''.join(node.hex(r)
                for r in replacement[1])))
        backupfile = self.backupfile
        if not backupfile:
            backupfile = ''
        fp.write('%s\n' % backupfile)
        fp.close()

    def _load(self):
        fp = self.repo.vfs('histedit-state', 'r')
        lines = [l[:-1] for l in fp.readlines()]

        index = 0
        lines[index] # version number
        index += 1

        parentctxnode = node.bin(lines[index])
        index += 1

        topmost = node.bin(lines[index])
        index += 1

        keep = lines[index] == 'True'
        index += 1

        # Rules
        rules = []
        rulelen = int(lines[index])
        index += 1
        for i in xrange(rulelen):
            ruleaction = lines[index]
            index += 1
            rule = lines[index]
            index += 1
            rules.append((ruleaction, rule))

        # Replacements
        replacements = []
        replacementlen = int(lines[index])
        index += 1
        for i in xrange(replacementlen):
            replacement = lines[index]
            original = node.bin(replacement[:40])
            succ = [node.bin(replacement[i:i + 40]) for i in
                    range(40, len(replacement), 40)]
            replacements.append((original, succ))
            index += 1

        backupfile = lines[index]
        index += 1

        fp.close()

        return parentctxnode, rules, keep, topmost, replacements, backupfile

    def clear(self):
        if self.inprogress():
            self.repo.vfs.unlink('histedit-state')

    def inprogress(self):
        return self.repo.vfs.exists('histedit-state')


class histeditaction(object):
    def __init__(self, state, node):
        self.state = state
        self.repo = state.repo
        self.node = node

    @classmethod
    def fromrule(cls, state, rule):
        """Parses the given rule, returning an instance of the histeditaction.
        """
        rulehash = rule.strip().split(' ', 1)[0]
        return cls(state, node.bin(rulehash))

    def verify(self):
        """ Verifies semantic correctness of the rule"""
        repo = self.repo
        ha = node.hex(self.node)
        try:
            self.node = repo[ha].node()
        except error.RepoError:
            raise error.Abort(_('unknown changeset %s listed')
                              % ha[:12])

    def torule(self):
        """build a histedit rule line for an action

        by default lines are in the form:
          <hash> <rev> <summary>
        """
        ctx = self.repo[self.node]
        summary = ''
        if ctx.description():
            summary = ctx.description().splitlines()[0]
        line = '%s %s %d %s' % (self.verb, ctx, ctx.rev(), summary)
        # trim to 75 columns by default so it's not stupidly wide in my editor
        # (the 5 more are left for verb)
        maxlen = self.repo.ui.configint('histedit', 'linelen', default=80)
        maxlen = max(maxlen, 22) # avoid truncating hash
        return util.ellipsis(line, maxlen)

    def tostate(self):
        """Print an action in format used by histedit state files
        (the first line is a verb, the remainder is the second)
        """
        return "%s\n%s" % (self.verb, node.hex(self.node))

    def constraints(self):
        """Return a set of constraints that this action should be verified for
        """
        return set([_constraints.noduplicates, _constraints.noother])

    def nodetoverify(self):
        """Returns a node associated with the action that will be used for
        verification purposes.

        If the action doesn't correspond to a node it should return None
        """
        return self.node

    def run(self):
        """Runs the action. The default behavior is to simply apply the
        action's rulectx onto the current parentctx."""
        self.applychange()
        self.continuedirty()
        return self.continueclean()

    def applychange(self):
        """Applies the changes from this action's rulectx onto the current
        parentctx, but does not commit them."""
        repo = self.repo
        rulectx = repo[self.node]
        hg.update(repo, self.state.parentctxnode)
        stats = applychanges(repo.ui, repo, rulectx, {})
        if stats and stats[3] > 0:
            raise error.InterventionRequired(_('Fix up the change and run '
                                               'hg histedit --continue'))

    def continuedirty(self):
        """Continues the action when changes have been applied to the working
        copy. The default behavior is to commit the dirty changes."""
        repo = self.repo
        rulectx = repo[self.node]

        editor = self.commiteditor()
        commit = commitfuncfor(repo, rulectx)

        commit(text=rulectx.description(), user=rulectx.user(),
               date=rulectx.date(), extra=rulectx.extra(), editor=editor)

    def commiteditor(self):
        """The editor to be used to edit the commit message."""
        return False

    def continueclean(self):
        """Continues the action when the working copy is clean. The default
        behavior is to accept the current commit as the new version of the
        rulectx."""
        ctx = self.repo['.']
        if ctx.node() == self.state.parentctxnode:
            self.repo.ui.warn(_('%s: empty changeset\n') %
                              node.short(self.node))
            return ctx, [(self.node, tuple())]
        if ctx.node() == self.node:
            # Nothing changed
            return ctx, []
        return ctx, [(self.node, (ctx.node(),))]

def commitfuncfor(repo, src):
    """Build a commit function for the replacement of <src>

    This function ensures we apply the same treatment to all changesets.

    - Add a 'histedit_source' entry in extra.

    Note that fold has its own separate logic because its handling is a bit
    different and not easily factored out of the fold method.
    """
    phasemin = src.phase()
    def commitfunc(**kwargs):
        phasebackup = repo.ui.backupconfig('phases', 'new-commit')
        try:
            repo.ui.setconfig('phases', 'new-commit', phasemin,
                              'histedit')
            extra = kwargs.get('extra', {}).copy()
            extra['histedit_source'] = src.hex()
            kwargs['extra'] = extra
            return repo.commit(**kwargs)
        finally:
            repo.ui.restoreconfig(phasebackup)
    return commitfunc

def applychanges(ui, repo, ctx, opts):
    """Merge changeset from ctx (only) in the current working directory"""
    wcpar = repo.dirstate.parents()[0]
    if ctx.p1().node() == wcpar:
        # edits are "in place"; we do not need to make any merge,
        # just apply changes on the parent for editing
        cmdutil.revert(ui, repo, ctx, (wcpar, node.nullid), all=True)
        stats = None
    else:
        try:
            # ui.forcemerge is an internal variable, do not document
            repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                              'histedit')
            stats = mergemod.graft(repo, ctx, ctx.p1(), ['local', 'histedit'])
        finally:
            repo.ui.setconfig('ui', 'forcemerge', '', 'histedit')
    return stats

def collapse(repo, first, last, commitopts, skipprompt=False):
    """collapse the set of revisions from first to last into a new one.

    Expected commit options are:
        - message
        - date
        - username
    Commit message is edited in all cases.

    This function works in memory."""
    ctxs = list(repo.set('%d::%d', first, last))
    if not ctxs:
        return None
    for c in ctxs:
        if not c.mutable():
            raise error.Abort(
                _("cannot fold into public change %s") % node.short(c.node()))
    base = first.parents()[0]

    # commit a new version of the old changeset, including the update
    # collect all files which might be affected
    files = set()
    for ctx in ctxs:
        files.update(ctx.files())

    # Recompute copies (avoid recording a -> b -> a)
    copied = copies.pathcopies(base, last)

    # prune files which were reverted by the updates
    def samefile(f):
        if f in last.manifest():
            a = last.filectx(f)
            if f in base.manifest():
                b = base.filectx(f)
                return (a.data() == b.data()
                        and a.flags() == b.flags())
            else:
                return False
        else:
            return f not in base.manifest()
    files = [f for f in files if not samefile(f)]
    # commit version of these files as defined by head
    headmf = last.manifest()
    def filectxfn(repo, ctx, path):
        if path in headmf:
            fctx = last[path]
            flags = fctx.flags()
            mctx = context.memfilectx(repo,
                                      fctx.path(), fctx.data(),
                                      islink='l' in flags,
                                      isexec='x' in flags,
                                      copied=copied.get(path))
            return mctx
        return None

    if commitopts.get('message'):
        message = commitopts['message']
    else:
        message = first.description()
    user = commitopts.get('user')
    date = commitopts.get('date')
    extra = commitopts.get('extra')

    parents = (first.p1().node(), first.p2().node())
    editor = None
    if not skipprompt:
        editor = cmdutil.getcommiteditor(edit=True, editform='histedit.fold')
    new = context.memctx(repo,
                         parents=parents,
                         text=message,
                         files=files,
                         filectxfn=filectxfn,
                         user=user,
                         date=date,
                         extra=extra,
                         editor=editor)
    return repo.commitctx(new)

def _isdirtywc(repo):
    return repo[None].dirty(missing=True)

def abortdirty():
    raise error.Abort(_('working copy has pending changes'),
                      hint=_('amend, commit, or revert them and run histedit '
                             '--continue, or abort with histedit --abort'))


actiontable = {}
actionlist = []

def addhisteditaction(verbs):
    def wrap(cls):
        cls.verb = verbs[0]
        for verb in verbs:
            actiontable[verb] = cls
        actionlist.append(cls)
        return cls
    return wrap


@addhisteditaction(['pick', 'p'])
class pick(histeditaction):
    def run(self):
        rulectx = self.repo[self.node]
        if rulectx.parents()[0].node() == self.state.parentctxnode:
            self.repo.ui.debug('node %s unchanged\n' % node.short(self.node))
            return rulectx, []

        return super(pick, self).run()

@addhisteditaction(['edit', 'e'])
class edit(histeditaction):
    def run(self):
        repo = self.repo
        rulectx = repo[self.node]
        hg.update(repo, self.state.parentctxnode)
        applychanges(repo.ui, repo, rulectx, {})
        raise error.InterventionRequired(
            _('Make changes as needed, you may commit or record as needed '
              'now.\nWhen you are finished, run hg histedit --continue to '
              'resume.'))

    def commiteditor(self):
        return cmdutil.getcommiteditor(edit=True, editform='histedit.edit')

@addhisteditaction(['fold', 'f'])
class fold(histeditaction):
    def continuedirty(self):
        repo = self.repo
        rulectx = repo[self.node]

        commit = commitfuncfor(repo, rulectx)
        commit(text='fold-temp-revision %s' % node.short(self.node),
               user=rulectx.user(), date=rulectx.date(),
               extra=rulectx.extra())

    def continueclean(self):
        repo = self.repo
        ctx = repo['.']
        rulectx = repo[self.node]
        parentctxnode = self.state.parentctxnode
        if ctx.node() == parentctxnode:
            repo.ui.warn(_('%s: empty changeset\n') %
                         node.short(self.node))
            return ctx, [(self.node, (parentctxnode,))]

        parentctx = repo[parentctxnode]
        newcommits = set(c.node() for c in repo.set('(%d::. - %d)', parentctx,
                                                    parentctx))
        if not newcommits:
            repo.ui.warn(_('%s: cannot fold - working copy is not a '
                           'descendant of previous commit %s\n') %
                         (node.short(self.node), node.short(parentctxnode)))
            return ctx, [(self.node, (ctx.node(),))]

        middlecommits = newcommits.copy()
        middlecommits.discard(ctx.node())

        return self.finishfold(repo.ui, repo, parentctx, rulectx, ctx.node(),
                               middlecommits)

    def skipprompt(self):
        """Returns true if the rule should skip the message editor.

        For example, 'fold' wants to show an editor, but 'rollup'
        doesn't want to.
        """
        return False

    def mergedescs(self):
        """Returns true if the rule should merge messages of multiple changes.

        This exists mainly so that 'rollup' rules can be a subclass of
        'fold'.
        """
        return True

    def finishfold(self, ui, repo, ctx, oldctx, newnode, internalchanges):
        parent = ctx.parents()[0].node()
        hg.update(repo, parent)
        ### prepare new commit data
        commitopts = {}
        commitopts['user'] = ctx.user()
        # commit message
        if not self.mergedescs():
            newmessage = ctx.description()
        else:
            newmessage = '\n***\n'.join(
                [ctx.description()] +
                [repo[r].description() for r in internalchanges] +
                [oldctx.description()]) + '\n'
        commitopts['message'] = newmessage
        # date
        commitopts['date'] = max(ctx.date(), oldctx.date())
        extra = ctx.extra().copy()
        # histedit_source
        # note: ctx is likely a temporary commit but that's the best we can do
        #       here. This is sufficient to solve issue3681 anyway.
        extra['histedit_source'] = '%s,%s' % (ctx.hex(), oldctx.hex())
        commitopts['extra'] = extra
        phasebackup = repo.ui.backupconfig('phases', 'new-commit')
        try:
            phasemin = max(ctx.phase(), oldctx.phase())
            repo.ui.setconfig('phases', 'new-commit', phasemin, 'histedit')
            n = collapse(repo, ctx, repo[newnode], commitopts,
                         skipprompt=self.skipprompt())
        finally:
            repo.ui.restoreconfig(phasebackup)
        if n is None:
            return ctx, []
        hg.update(repo, n)
        replacements = [(oldctx.node(), (newnode,)),
                        (ctx.node(), (n,)),
                        (newnode, (n,)),
                       ]
        for ich in internalchanges:
            replacements.append((ich, (n,)))
        return repo[n], replacements

class base(histeditaction):
    def constraints(self):
        return set([_constraints.forceother])

    def run(self):
        if self.repo['.'].node() != self.node:
-            mergemod.update(self.repo, self.node, False, True, False)
-            # branchmerge, force, partial)
+            mergemod.update(self.repo, self.node, False, True)
+            # branchmerge, force)
717 return self.continueclean()
717 return self.continueclean()
718
718
719 def continuedirty(self):
719 def continuedirty(self):
720 abortdirty()
720 abortdirty()
721
721
722 def continueclean(self):
722 def continueclean(self):
723 basectx = self.repo['.']
723 basectx = self.repo['.']
724 return basectx, []
724 return basectx, []
725
725
726 @addhisteditaction(['_multifold'])
726 @addhisteditaction(['_multifold'])
727 class _multifold(fold):
727 class _multifold(fold):
728 """fold subclass used for when multiple folds happen in a row
728 """fold subclass used for when multiple folds happen in a row
729
729
730 We only want to fire the editor for the folded message once when
730 We only want to fire the editor for the folded message once when
731 (say) four changes are folded down into a single change. This is
731 (say) four changes are folded down into a single change. This is
732 similar to rollup, but we should preserve both messages so that
732 similar to rollup, but we should preserve both messages so that
733 when the last fold operation runs we can show the user all the
733 when the last fold operation runs we can show the user all the
734 commit messages in their editor.
734 commit messages in their editor.
735 """
735 """
736 def skipprompt(self):
736 def skipprompt(self):
737 return True
737 return True
738
738
739 @addhisteditaction(["roll", "r"])
739 @addhisteditaction(["roll", "r"])
740 class rollup(fold):
740 class rollup(fold):
741 def mergedescs(self):
741 def mergedescs(self):
742 return False
742 return False
743
743
744 def skipprompt(self):
744 def skipprompt(self):
745 return True
745 return True
746
746
747 @addhisteditaction(["drop", "d"])
747 @addhisteditaction(["drop", "d"])
748 class drop(histeditaction):
748 class drop(histeditaction):
749 def run(self):
749 def run(self):
750 parentctx = self.repo[self.state.parentctxnode]
750 parentctx = self.repo[self.state.parentctxnode]
751 return parentctx, [(self.node, tuple())]
751 return parentctx, [(self.node, tuple())]
752
752
753 @addhisteditaction(["mess", "m"])
753 @addhisteditaction(["mess", "m"])
754 class message(histeditaction):
754 class message(histeditaction):
755 def commiteditor(self):
755 def commiteditor(self):
756 return cmdutil.getcommiteditor(edit=True, editform='histedit.mess')
756 return cmdutil.getcommiteditor(edit=True, editform='histedit.mess')
757
757
758 def findoutgoing(ui, repo, remote=None, force=False, opts=None):
758 def findoutgoing(ui, repo, remote=None, force=False, opts=None):
759 """utility function to find the first outgoing changeset
759 """utility function to find the first outgoing changeset
760
760
761 Used by initialization code"""
761 Used by initialization code"""
762 if opts is None:
762 if opts is None:
763 opts = {}
763 opts = {}
764 dest = ui.expandpath(remote or 'default-push', remote or 'default')
764 dest = ui.expandpath(remote or 'default-push', remote or 'default')
765 dest, revs = hg.parseurl(dest, None)[:2]
765 dest, revs = hg.parseurl(dest, None)[:2]
766 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
766 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
767
767
768 revs, checkout = hg.addbranchrevs(repo, repo, revs, None)
768 revs, checkout = hg.addbranchrevs(repo, repo, revs, None)
769 other = hg.peer(repo, opts, dest)
769 other = hg.peer(repo, opts, dest)
770
770
771 if revs:
771 if revs:
772 revs = [repo.lookup(rev) for rev in revs]
772 revs = [repo.lookup(rev) for rev in revs]
773
773
774 outgoing = discovery.findcommonoutgoing(repo, other, revs, force=force)
774 outgoing = discovery.findcommonoutgoing(repo, other, revs, force=force)
775 if not outgoing.missing:
775 if not outgoing.missing:
776 raise error.Abort(_('no outgoing ancestors'))
776 raise error.Abort(_('no outgoing ancestors'))
777 roots = list(repo.revs("roots(%ln)", outgoing.missing))
777 roots = list(repo.revs("roots(%ln)", outgoing.missing))
778 if 1 < len(roots):
778 if 1 < len(roots):
779 msg = _('there are ambiguous outgoing revisions')
779 msg = _('there are ambiguous outgoing revisions')
780 hint = _('see "hg help histedit" for more detail')
780 hint = _('see "hg help histedit" for more detail')
781 raise error.Abort(msg, hint=hint)
781 raise error.Abort(msg, hint=hint)
782 return repo.lookup(roots[0])
782 return repo.lookup(roots[0])
783
783
784
784
785 @command('histedit',
785 @command('histedit',
786 [('', 'commands', '',
786 [('', 'commands', '',
787 _('read history edits from the specified file'), _('FILE')),
787 _('read history edits from the specified file'), _('FILE')),
788 ('c', 'continue', False, _('continue an edit already in progress')),
788 ('c', 'continue', False, _('continue an edit already in progress')),
789 ('', 'edit-plan', False, _('edit remaining actions list')),
789 ('', 'edit-plan', False, _('edit remaining actions list')),
790 ('k', 'keep', False,
790 ('k', 'keep', False,
791 _("don't strip old nodes after edit is complete")),
791 _("don't strip old nodes after edit is complete")),
792 ('', 'abort', False, _('abort an edit in progress')),
792 ('', 'abort', False, _('abort an edit in progress')),
793 ('o', 'outgoing', False, _('changesets not found in destination')),
793 ('o', 'outgoing', False, _('changesets not found in destination')),
794 ('f', 'force', False,
794 ('f', 'force', False,
795 _('force outgoing even for unrelated repositories')),
795 _('force outgoing even for unrelated repositories')),
796 ('r', 'rev', [], _('first revision to be edited'), _('REV'))],
796 ('r', 'rev', [], _('first revision to be edited'), _('REV'))],
797 _("[ANCESTOR] | --outgoing [URL]"))
797 _("[ANCESTOR] | --outgoing [URL]"))
798 def histedit(ui, repo, *freeargs, **opts):
798 def histedit(ui, repo, *freeargs, **opts):
799 """interactively edit changeset history
799 """interactively edit changeset history
800
800
801 This command edits changesets between an ANCESTOR and the parent of
801 This command edits changesets between an ANCESTOR and the parent of
802 the working directory.
802 the working directory.
803
803
804 The value from the "histedit.defaultrev" config option is used as a
804 The value from the "histedit.defaultrev" config option is used as a
805 revset to select the base revision when ANCESTOR is not specified.
805 revset to select the base revision when ANCESTOR is not specified.
806 The first revision returned by the revset is used. By default, this
806 The first revision returned by the revset is used. By default, this
807 selects the editable history that is unique to the ancestry of the
807 selects the editable history that is unique to the ancestry of the
808 working directory.
808 working directory.
809
809
810 With --outgoing, this edits changesets not found in the
810 With --outgoing, this edits changesets not found in the
811 destination repository. If URL of the destination is omitted, the
811 destination repository. If URL of the destination is omitted, the
812 'default-push' (or 'default') path will be used.
812 'default-push' (or 'default') path will be used.
813
813
814 For safety, this command is also aborted if there are ambiguous
814 For safety, this command is also aborted if there are ambiguous
815 outgoing revisions which may confuse users: for example, if there
815 outgoing revisions which may confuse users: for example, if there
816 are multiple branches containing outgoing revisions.
816 are multiple branches containing outgoing revisions.
817
817
818 Use "min(outgoing() and ::.)" or similar revset specification
818 Use "min(outgoing() and ::.)" or similar revset specification
819 instead of --outgoing to specify edit target revision exactly in
819 instead of --outgoing to specify edit target revision exactly in
820 such ambiguous situation. See :hg:`help revsets` for detail about
820 such ambiguous situation. See :hg:`help revsets` for detail about
821 selecting revisions.
821 selecting revisions.
822
822
823 .. container:: verbose
823 .. container:: verbose
824
824
825 Examples:
825 Examples:
826
826
827 - A number of changes have been made.
827 - A number of changes have been made.
828 Revision 3 is no longer needed.
828 Revision 3 is no longer needed.
829
829
830 Start history editing from revision 3::
830 Start history editing from revision 3::
831
831
832 hg histedit -r 3
832 hg histedit -r 3
833
833
834 An editor opens, containing the list of revisions,
834 An editor opens, containing the list of revisions,
835 with specific actions specified::
835 with specific actions specified::
836
836
837 pick 5339bf82f0ca 3 Zworgle the foobar
837 pick 5339bf82f0ca 3 Zworgle the foobar
838 pick 8ef592ce7cc4 4 Bedazzle the zerlog
838 pick 8ef592ce7cc4 4 Bedazzle the zerlog
839 pick 0a9639fcda9d 5 Morgify the cromulancy
839 pick 0a9639fcda9d 5 Morgify the cromulancy
840
840
841 Additional information about the possible actions
841 Additional information about the possible actions
842 to take appears below the list of revisions.
842 to take appears below the list of revisions.
843
843
844 To remove revision 3 from the history,
844 To remove revision 3 from the history,
845 its action (at the beginning of the relevant line)
845 its action (at the beginning of the relevant line)
846 is changed to 'drop'::
846 is changed to 'drop'::
847
847
848 drop 5339bf82f0ca 3 Zworgle the foobar
848 drop 5339bf82f0ca 3 Zworgle the foobar
849 pick 8ef592ce7cc4 4 Bedazzle the zerlog
849 pick 8ef592ce7cc4 4 Bedazzle the zerlog
850 pick 0a9639fcda9d 5 Morgify the cromulancy
850 pick 0a9639fcda9d 5 Morgify the cromulancy
851
851
852 - A number of changes have been made.
852 - A number of changes have been made.
853 Revision 2 and 4 need to be swapped.
853 Revision 2 and 4 need to be swapped.
854
854
855 Start history editing from revision 2::
855 Start history editing from revision 2::
856
856
857 hg histedit -r 2
857 hg histedit -r 2
858
858
859 An editor opens, containing the list of revisions,
859 An editor opens, containing the list of revisions,
860 with specific actions specified::
860 with specific actions specified::
861
861
862 pick 252a1af424ad 2 Blorb a morgwazzle
862 pick 252a1af424ad 2 Blorb a morgwazzle
863 pick 5339bf82f0ca 3 Zworgle the foobar
863 pick 5339bf82f0ca 3 Zworgle the foobar
864 pick 8ef592ce7cc4 4 Bedazzle the zerlog
864 pick 8ef592ce7cc4 4 Bedazzle the zerlog
865
865
866 To swap revisions 2 and 4, their lines are swapped
866 To swap revisions 2 and 4, their lines are swapped
867 in the editor::
867 in the editor::
868
868
869 pick 8ef592ce7cc4 4 Bedazzle the zerlog
869 pick 8ef592ce7cc4 4 Bedazzle the zerlog
870 pick 5339bf82f0ca 3 Zworgle the foobar
870 pick 5339bf82f0ca 3 Zworgle the foobar
871 pick 252a1af424ad 2 Blorb a morgwazzle
871 pick 252a1af424ad 2 Blorb a morgwazzle
872
872
873 Returns 0 on success, 1 if user intervention is required (not only
873 Returns 0 on success, 1 if user intervention is required (not only
874 for an intentional "edit" command, but also for resolving unexpected
874 for an intentional "edit" command, but also for resolving unexpected
875 conflicts).
875 conflicts).
876 """
876 """
877 state = histeditstate(repo)
877 state = histeditstate(repo)
878 try:
878 try:
879 state.wlock = repo.wlock()
879 state.wlock = repo.wlock()
880 state.lock = repo.lock()
880 state.lock = repo.lock()
881 _histedit(ui, repo, state, *freeargs, **opts)
881 _histedit(ui, repo, state, *freeargs, **opts)
882 except error.Abort:
882 except error.Abort:
883 if repo.vfs.exists('histedit-last-edit.txt'):
883 if repo.vfs.exists('histedit-last-edit.txt'):
884 ui.warn(_('warning: histedit rules saved '
884 ui.warn(_('warning: histedit rules saved '
885 'to: .hg/histedit-last-edit.txt\n'))
885 'to: .hg/histedit-last-edit.txt\n'))
886 raise
886 raise
887 finally:
887 finally:
888 release(state.lock, state.wlock)
888 release(state.lock, state.wlock)
889
889
890 def _histedit(ui, repo, state, *freeargs, **opts):
890 def _histedit(ui, repo, state, *freeargs, **opts):
891 # TODO only abort if we try to histedit mq patches, not just
891 # TODO only abort if we try to histedit mq patches, not just
892 # blanket if mq patches are applied somewhere
892 # blanket if mq patches are applied somewhere
893 mq = getattr(repo, 'mq', None)
893 mq = getattr(repo, 'mq', None)
894 if mq and mq.applied:
894 if mq and mq.applied:
895 raise error.Abort(_('source has mq patches applied'))
895 raise error.Abort(_('source has mq patches applied'))
896
896
897 # basic argument incompatibility processing
897 # basic argument incompatibility processing
898 outg = opts.get('outgoing')
898 outg = opts.get('outgoing')
899 cont = opts.get('continue')
899 cont = opts.get('continue')
900 editplan = opts.get('edit_plan')
900 editplan = opts.get('edit_plan')
901 abort = opts.get('abort')
901 abort = opts.get('abort')
902 force = opts.get('force')
902 force = opts.get('force')
903 rules = opts.get('commands', '')
903 rules = opts.get('commands', '')
904 revs = opts.get('rev', [])
904 revs = opts.get('rev', [])
905 goal = 'new' # This invocation's goal: new, continue, edit-plan, or abort
905 goal = 'new' # This invocation's goal: new, continue, edit-plan, or abort
906 if force and not outg:
906 if force and not outg:
907 raise error.Abort(_('--force only allowed with --outgoing'))
907 raise error.Abort(_('--force only allowed with --outgoing'))
908 if cont:
908 if cont:
909 if any((outg, abort, revs, freeargs, rules, editplan)):
909 if any((outg, abort, revs, freeargs, rules, editplan)):
910 raise error.Abort(_('no arguments allowed with --continue'))
910 raise error.Abort(_('no arguments allowed with --continue'))
911 goal = 'continue'
911 goal = 'continue'
912 elif abort:
912 elif abort:
913 if any((outg, revs, freeargs, rules, editplan)):
913 if any((outg, revs, freeargs, rules, editplan)):
914 raise error.Abort(_('no arguments allowed with --abort'))
914 raise error.Abort(_('no arguments allowed with --abort'))
915 goal = 'abort'
915 goal = 'abort'
916 elif editplan:
916 elif editplan:
917 if any((outg, revs, freeargs)):
917 if any((outg, revs, freeargs)):
918 raise error.Abort(_('only --commands argument allowed with '
918 raise error.Abort(_('only --commands argument allowed with '
919 '--edit-plan'))
919 '--edit-plan'))
920 goal = 'edit-plan'
920 goal = 'edit-plan'
921 else:
921 else:
922 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
922 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
923 raise error.Abort(_('history edit already in progress, try '
923 raise error.Abort(_('history edit already in progress, try '
924 '--continue or --abort'))
924 '--continue or --abort'))
925 if outg:
925 if outg:
926 if revs:
926 if revs:
927 raise error.Abort(_('no revisions allowed with --outgoing'))
927 raise error.Abort(_('no revisions allowed with --outgoing'))
928 if len(freeargs) > 1:
928 if len(freeargs) > 1:
929 raise error.Abort(
929 raise error.Abort(
930 _('only one repo argument allowed with --outgoing'))
930 _('only one repo argument allowed with --outgoing'))
931 else:
931 else:
932 revs.extend(freeargs)
932 revs.extend(freeargs)
933 if len(revs) == 0:
933 if len(revs) == 0:
934 defaultrev = destutil.desthistedit(ui, repo)
934 defaultrev = destutil.desthistedit(ui, repo)
935 if defaultrev is not None:
935 if defaultrev is not None:
936 revs.append(defaultrev)
936 revs.append(defaultrev)
937
937
938 if len(revs) != 1:
938 if len(revs) != 1:
939 raise error.Abort(
939 raise error.Abort(
940 _('histedit requires exactly one ancestor revision'))
940 _('histedit requires exactly one ancestor revision'))
941
941
942
942
943 replacements = []
943 replacements = []
944 state.keep = opts.get('keep', False)
944 state.keep = opts.get('keep', False)
945 supportsmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)
945 supportsmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)
946
946
947 # rebuild state
947 # rebuild state
948 if goal == 'continue':
948 if goal == 'continue':
949 state.read()
949 state.read()
950 state = bootstrapcontinue(ui, state, opts)
950 state = bootstrapcontinue(ui, state, opts)
951 elif goal == 'edit-plan':
951 elif goal == 'edit-plan':
952 state.read()
952 state.read()
953 if not rules:
953 if not rules:
954 comment = editcomment % (node.short(state.parentctxnode),
954 comment = editcomment % (node.short(state.parentctxnode),
955 node.short(state.topmost))
955 node.short(state.topmost))
956 rules = ruleeditor(repo, ui, state.actions, comment)
956 rules = ruleeditor(repo, ui, state.actions, comment)
957 else:
957 else:
958 if rules == '-':
958 if rules == '-':
959 f = sys.stdin
959 f = sys.stdin
960 else:
960 else:
961 f = open(rules)
961 f = open(rules)
962 rules = f.read()
962 rules = f.read()
963 f.close()
963 f.close()
964 actions = parserules(rules, state)
964 actions = parserules(rules, state)
965 ctxs = [repo[act.nodetoverify()]
965 ctxs = [repo[act.nodetoverify()]
966 for act in state.actions if act.nodetoverify()]
966 for act in state.actions if act.nodetoverify()]
967 verifyactions(actions, state, ctxs)
967 verifyactions(actions, state, ctxs)
968 state.actions = actions
968 state.actions = actions
969 state.write()
969 state.write()
970 return
970 return
971 elif goal == 'abort':
971 elif goal == 'abort':
972 try:
972 try:
973 state.read()
973 state.read()
974 tmpnodes, leafs = newnodestoabort(state)
974 tmpnodes, leafs = newnodestoabort(state)
975 ui.debug('restore wc to old parent %s\n'
975 ui.debug('restore wc to old parent %s\n'
976 % node.short(state.topmost))
976 % node.short(state.topmost))
977
977
978 # Recover our old commits if necessary
978 # Recover our old commits if necessary
979 if not state.topmost in repo and state.backupfile:
979 if not state.topmost in repo and state.backupfile:
980 backupfile = repo.join(state.backupfile)
980 backupfile = repo.join(state.backupfile)
981 f = hg.openpath(ui, backupfile)
981 f = hg.openpath(ui, backupfile)
982 gen = exchange.readbundle(ui, f, backupfile)
982 gen = exchange.readbundle(ui, f, backupfile)
983 tr = repo.transaction('histedit.abort')
983 tr = repo.transaction('histedit.abort')
984 try:
984 try:
985 if not isinstance(gen, bundle2.unbundle20):
985 if not isinstance(gen, bundle2.unbundle20):
986 gen.apply(repo, 'histedit', 'bundle:' + backupfile)
986 gen.apply(repo, 'histedit', 'bundle:' + backupfile)
987 if isinstance(gen, bundle2.unbundle20):
987 if isinstance(gen, bundle2.unbundle20):
988 bundle2.applybundle(repo, gen, tr,
988 bundle2.applybundle(repo, gen, tr,
989 source='histedit',
989 source='histedit',
990 url='bundle:' + backupfile)
990 url='bundle:' + backupfile)
991 tr.close()
991 tr.close()
992 finally:
992 finally:
993 tr.release()
993 tr.release()
994
994
995 os.remove(backupfile)
995 os.remove(backupfile)
996
996
997 # check whether we should update away
997 # check whether we should update away
998 if repo.unfiltered().revs('parents() and (%n or %ln::)',
998 if repo.unfiltered().revs('parents() and (%n or %ln::)',
999 state.parentctxnode, leafs | tmpnodes):
999 state.parentctxnode, leafs | tmpnodes):
1000 hg.clean(repo, state.topmost)
1000 hg.clean(repo, state.topmost)
1001 cleanupnode(ui, repo, 'created', tmpnodes)
1001 cleanupnode(ui, repo, 'created', tmpnodes)
1002 cleanupnode(ui, repo, 'temp', leafs)
1002 cleanupnode(ui, repo, 'temp', leafs)
1003 except Exception:
1003 except Exception:
1004 if state.inprogress():
1004 if state.inprogress():
1005 ui.warn(_('warning: encountered an exception during histedit '
1005 ui.warn(_('warning: encountered an exception during histedit '
1006 '--abort; the repository may not have been completely '
1006 '--abort; the repository may not have been completely '
1007 'cleaned up\n'))
1007 'cleaned up\n'))
1008 raise
1008 raise
1009 finally:
1009 finally:
1010 state.clear()
1010 state.clear()
1011 return
1011 return
1012 else:
1012 else:
1013 cmdutil.checkunfinished(repo)
1013 cmdutil.checkunfinished(repo)
1014 cmdutil.bailifchanged(repo)
1014 cmdutil.bailifchanged(repo)
1015
1015
1016 if repo.vfs.exists('histedit-last-edit.txt'):
1016 if repo.vfs.exists('histedit-last-edit.txt'):
1017 repo.vfs.unlink('histedit-last-edit.txt')
1017 repo.vfs.unlink('histedit-last-edit.txt')
1018 topmost, empty = repo.dirstate.parents()
1018 topmost, empty = repo.dirstate.parents()
1019 if outg:
1019 if outg:
1020 if freeargs:
1020 if freeargs:
1021 remote = freeargs[0]
1021 remote = freeargs[0]
1022 else:
1022 else:
1023 remote = None
1023 remote = None
1024 root = findoutgoing(ui, repo, remote, force, opts)
1024 root = findoutgoing(ui, repo, remote, force, opts)
1025 else:
1025 else:
1026 rr = list(repo.set('roots(%ld)', scmutil.revrange(repo, revs)))
1026 rr = list(repo.set('roots(%ld)', scmutil.revrange(repo, revs)))
1027 if len(rr) != 1:
1027 if len(rr) != 1:
1028 raise error.Abort(_('The specified revisions must have '
1028 raise error.Abort(_('The specified revisions must have '
1029 'exactly one common root'))
1029 'exactly one common root'))
1030 root = rr[0].node()
1030 root = rr[0].node()
1031
1031
1032 revs = between(repo, root, topmost, state.keep)
1032 revs = between(repo, root, topmost, state.keep)
1033 if not revs:
1033 if not revs:
1034 raise error.Abort(_('%s is not an ancestor of working directory') %
1034 raise error.Abort(_('%s is not an ancestor of working directory') %
1035 node.short(root))
1035 node.short(root))
1036
1036
1037 ctxs = [repo[r] for r in revs]
1037 ctxs = [repo[r] for r in revs]
1038 if not rules:
1038 if not rules:
1039 comment = editcomment % (node.short(root), node.short(topmost))
1039 comment = editcomment % (node.short(root), node.short(topmost))
1040 actions = [pick(state, r) for r in revs]
1040 actions = [pick(state, r) for r in revs]
1041 rules = ruleeditor(repo, ui, actions, comment)
1041 rules = ruleeditor(repo, ui, actions, comment)
1042 else:
1042 else:
1043 if rules == '-':
1043 if rules == '-':
1044 f = sys.stdin
1044 f = sys.stdin
1045 else:
1045 else:
1046 f = open(rules)
1046 f = open(rules)
1047 rules = f.read()
1047 rules = f.read()
1048 f.close()
1048 f.close()
1049 actions = parserules(rules, state)
1049 actions = parserules(rules, state)
1050 verifyactions(actions, state, ctxs)
1050 verifyactions(actions, state, ctxs)
1051
1051
1052 parentctxnode = repo[root].parents()[0].node()
1052 parentctxnode = repo[root].parents()[0].node()
1053
1053
1054 state.parentctxnode = parentctxnode
1054 state.parentctxnode = parentctxnode
1055 state.actions = actions
1055 state.actions = actions
1056 state.topmost = topmost
1056 state.topmost = topmost
1057 state.replacements = replacements
1057 state.replacements = replacements
1058
1058
1059 # Create a backup so we can always abort completely.
1059 # Create a backup so we can always abort completely.
1060 backupfile = None
1060 backupfile = None
1061 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1061 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1062 backupfile = repair._bundle(repo, [parentctxnode], [topmost], root,
1062 backupfile = repair._bundle(repo, [parentctxnode], [topmost], root,
1063 'histedit')
1063 'histedit')
1064 state.backupfile = backupfile
1064 state.backupfile = backupfile
1065
1065
1066 # preprocess rules so that we can hide inner folds from the user
1066 # preprocess rules so that we can hide inner folds from the user
1067 # and only show one editor
1067 # and only show one editor
1068 actions = state.actions[:]
1068 actions = state.actions[:]
1069 for idx, (action, nextact) in enumerate(
1069 for idx, (action, nextact) in enumerate(
1070 zip(actions, actions[1:] + [None])):
1070 zip(actions, actions[1:] + [None])):
1071 if action.verb == 'fold' and nextact and nextact.verb == 'fold':
1071 if action.verb == 'fold' and nextact and nextact.verb == 'fold':
1072 state.actions[idx].__class__ = _multifold
1072 state.actions[idx].__class__ = _multifold
1073
1073
1074 while state.actions:
1074 while state.actions:
1075 state.write()
1075 state.write()
1076 actobj = state.actions.pop(0)
1076 actobj = state.actions.pop(0)
1077 ui.debug('histedit: processing %s %s\n' % (actobj.verb,
1077 ui.debug('histedit: processing %s %s\n' % (actobj.verb,
1078 actobj.torule()))
1078 actobj.torule()))
1079 parentctx, replacement_ = actobj.run()
1079 parentctx, replacement_ = actobj.run()
1080 state.parentctxnode = parentctx.node()
1080 state.parentctxnode = parentctx.node()
1081 state.replacements.extend(replacement_)
1081 state.replacements.extend(replacement_)
1082 state.write()
1082 state.write()
1083
1083
1084 hg.update(repo, state.parentctxnode)
1084 hg.update(repo, state.parentctxnode)
1085
1085
1086 mapping, tmpnodes, created, ntm = processreplacement(state)
1086 mapping, tmpnodes, created, ntm = processreplacement(state)
1087 if mapping:
1087 if mapping:
1088 for prec, succs in mapping.iteritems():
1088 for prec, succs in mapping.iteritems():
1089 if not succs:
1089 if not succs:
1090 ui.debug('histedit: %s is dropped\n' % node.short(prec))
1090 ui.debug('histedit: %s is dropped\n' % node.short(prec))
1091 else:
1091 else:
1092 ui.debug('histedit: %s is replaced by %s\n' % (
1092 ui.debug('histedit: %s is replaced by %s\n' % (
1093 node.short(prec), node.short(succs[0])))
1093 node.short(prec), node.short(succs[0])))
1094 if len(succs) > 1:
1094 if len(succs) > 1:
1095 m = 'histedit: %s'
1095 m = 'histedit: %s'
1096 for n in succs[1:]:
1096 for n in succs[1:]:
1097 ui.debug(m % node.short(n))
1097 ui.debug(m % node.short(n))
1098
1098
1099 if supportsmarkers:
1099 if supportsmarkers:
1100 # Only create markers if the temp nodes weren't already removed.
1100 # Only create markers if the temp nodes weren't already removed.
1101 obsolete.createmarkers(repo, ((repo[t],()) for t in sorted(tmpnodes)
1101 obsolete.createmarkers(repo, ((repo[t],()) for t in sorted(tmpnodes)
1102 if t in repo))
1102 if t in repo))
1103 else:
1103 else:
1104 cleanupnode(ui, repo, 'temp', tmpnodes)
1104 cleanupnode(ui, repo, 'temp', tmpnodes)
1105
1105
1106 if not state.keep:
1106 if not state.keep:
1107 if mapping:
1107 if mapping:
1108 movebookmarks(ui, repo, mapping, state.topmost, ntm)
1108 movebookmarks(ui, repo, mapping, state.topmost, ntm)
1109 # TODO update mq state
1109 # TODO update mq state
1110 if supportsmarkers:
1110 if supportsmarkers:
1111 markers = []
1111 markers = []
1112 # sort by revision number because it sounds "right"
1112 # sort by revision number because it sounds "right"
1113 for prec in sorted(mapping, key=repo.changelog.rev):
1113 for prec in sorted(mapping, key=repo.changelog.rev):
1114 succs = mapping[prec]
1114 succs = mapping[prec]
1115 markers.append((repo[prec],
1115 markers.append((repo[prec],
1116 tuple(repo[s] for s in succs)))
1116 tuple(repo[s] for s in succs)))
1117 if markers:
1117 if markers:
1118 obsolete.createmarkers(repo, markers)
1118 obsolete.createmarkers(repo, markers)
1119 else:
1119 else:
1120 cleanupnode(ui, repo, 'replaced', mapping)
1120 cleanupnode(ui, repo, 'replaced', mapping)
1121
1121
1122 state.clear()
1122 state.clear()
1123 if os.path.exists(repo.sjoin('undo')):
1123 if os.path.exists(repo.sjoin('undo')):
1124 os.unlink(repo.sjoin('undo'))
1124 os.unlink(repo.sjoin('undo'))
1125
1125
1126 def bootstrapcontinue(ui, state, opts):
1126 def bootstrapcontinue(ui, state, opts):
1127 repo = state.repo
1127 repo = state.repo
1128 if state.actions:
1128 if state.actions:
1129 actobj = state.actions.pop(0)
1129 actobj = state.actions.pop(0)
1130
1130
1131 if _isdirtywc(repo):
1131 if _isdirtywc(repo):
1132 actobj.continuedirty()
1132 actobj.continuedirty()
1133 if _isdirtywc(repo):
1133 if _isdirtywc(repo):
1134 abortdirty()
1134 abortdirty()
1135
1135
1136 parentctx, replacements = actobj.continueclean()
1136 parentctx, replacements = actobj.continueclean()
1137
1137
1138 state.parentctxnode = parentctx.node()
1138 state.parentctxnode = parentctx.node()
1139 state.replacements.extend(replacements)
1139 state.replacements.extend(replacements)
1140
1140
1141 return state
1141 return state
1142
1142
1143 def between(repo, old, new, keep):
1143 def between(repo, old, new, keep):
1144 """select and validate the set of revision to edit
1144 """select and validate the set of revision to edit
1145
1145
1146 When keep is false, the specified set can't have children."""
1146 When keep is false, the specified set can't have children."""
1147 ctxs = list(repo.set('%n::%n', old, new))
1147 ctxs = list(repo.set('%n::%n', old, new))
1148 if ctxs and not keep:
1148 if ctxs and not keep:
1149 if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
1149 if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
1150 repo.revs('(%ld::) - (%ld)', ctxs, ctxs)):
1150 repo.revs('(%ld::) - (%ld)', ctxs, ctxs)):
1151 raise error.Abort(_('cannot edit history that would orphan nodes'))
1151 raise error.Abort(_('cannot edit history that would orphan nodes'))
1152 if repo.revs('(%ld) and merge()', ctxs):
1152 if repo.revs('(%ld) and merge()', ctxs):
1153 raise error.Abort(_('cannot edit history that contains merges'))
1153 raise error.Abort(_('cannot edit history that contains merges'))
1154 root = ctxs[0] # list is already sorted by repo.set
1154 root = ctxs[0] # list is already sorted by repo.set
1155 if not root.mutable():
1155 if not root.mutable():
1156 raise error.Abort(_('cannot edit public changeset: %s') % root,
1156 raise error.Abort(_('cannot edit public changeset: %s') % root,
1157 hint=_('see "hg help phases" for details'))
1157 hint=_('see "hg help phases" for details'))
1158 return [c.node() for c in ctxs]
1158 return [c.node() for c in ctxs]
1159
1159
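Concretely (hypothetical revision numbers, drawn in the same graph style as the help text above), between() refuses a range whose rewrite would leave descendants behind::

    o  6  another child of revision 4, outside 3::5
    |
    | @  5  topmost (working directory parent)
    |/
    o  4
    |
    o  3  root of the edit

Without --keep, editing 3::5 here aborts with 'cannot edit history that would orphan nodes' (unless the allowunstable obsolescence option is enabled); merges inside the range are rejected the same way, and a root that is already public aborts regardless of --keep.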
1160 def ruleeditor(repo, ui, actions, editcomment=""):
1160 def ruleeditor(repo, ui, actions, editcomment=""):
1161 """open an editor to edit rules
1161 """open an editor to edit rules
1162
1162
1163 rules are in the format [ [act, ctx], ...] like in state.rules
1163 rules are in the format [ [act, ctx], ...] like in state.rules
1164 """
1164 """
1165 rules = '\n'.join([act.torule() for act in actions])
1165 rules = '\n'.join([act.torule() for act in actions])
1166 rules += '\n\n'
1166 rules += '\n\n'
1167 rules += editcomment
1167 rules += editcomment
1168 rules = ui.edit(rules, ui.username(), {'prefix': 'histedit'})
1168 rules = ui.edit(rules, ui.username(), {'prefix': 'histedit'})
1169
1169
1170 # Save edit rules in .hg/histedit-last-edit.txt in case
1170 # Save edit rules in .hg/histedit-last-edit.txt in case
1171 # the user needs to ask for help after something
1171 # the user needs to ask for help after something
1172 # surprising happens.
1172 # surprising happens.
1173 f = open(repo.join('histedit-last-edit.txt'), 'w')
1173 f = open(repo.join('histedit-last-edit.txt'), 'w')
1174 f.write(rules)
1174 f.write(rules)
1175 f.close()
1175 f.close()
1176
1176
1177 return rules
1177 return rules
1178
1178
1179 def parserules(rules, state):
1179 def parserules(rules, state):
1180 """Read the histedit rules string and return list of action objects """
1180 """Read the histedit rules string and return list of action objects """
1181 rules = [l for l in (r.strip() for r in rules.splitlines())
1181 rules = [l for l in (r.strip() for r in rules.splitlines())
1182 if l and not l.startswith('#')]
1182 if l and not l.startswith('#')]
1183 actions = []
1183 actions = []
1184 for r in rules:
1184 for r in rules:
1185 if ' ' not in r:
1185 if ' ' not in r:
1186 raise error.Abort(_('malformed line "%s"') % r)
1186 raise error.Abort(_('malformed line "%s"') % r)
1187 verb, rest = r.split(' ', 1)
1187 verb, rest = r.split(' ', 1)
1188
1188
1189 if verb not in actiontable:
1189 if verb not in actiontable:
1190 raise error.Abort(_('unknown action "%s"') % verb)
1190 raise error.Abort(_('unknown action "%s"') % verb)
1191
1191
1192 action = actiontable[verb].fromrule(state, rest)
1192 action = actiontable[verb].fromrule(state, rest)
1193 actions.append(action)
1193 actions.append(action)
1194 return actions
1194 return actions
1195
1195
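As a rough, self-contained model of the parsing contract above (plain tuples stand in for the real action objects, and the verb set is illustrative rather than exhaustive)::

    # Hypothetical stand-in for actiontable: the verbs a plan may use.
    KNOWN_VERBS = {'pick', 'edit', 'fold', 'roll', 'drop', 'mess'}

    def parse_plan(text):
        """Turn a histedit plan into (verb, rest) tuples, skipping comments."""
        actions = []
        for line in (raw.strip() for raw in text.splitlines()):
            if not line or line.startswith('#'):
                continue                          # blank lines and comments
            if ' ' not in line:
                raise ValueError('malformed line "%s"' % line)
            verb, rest = line.split(' ', 1)
            if verb not in KNOWN_VERBS:
                raise ValueError('unknown action "%s"' % verb)
            actions.append((verb, rest))
        return actions

    # parse_plan('pick 5339bf82f0ca 3 Zworgle the foobar')
    # -> [('pick', '5339bf82f0ca 3 Zworgle the foobar')]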
1196 def verifyactions(actions, state, ctxs):
1196 def verifyactions(actions, state, ctxs):
1197 """Verify that there exists exactly one action per given changeset and
1197 """Verify that there exists exactly one action per given changeset and
1198 other constraints.
1198 other constraints.
1199
1199
1200 Will abort if there are too many or too few rules, a malformed rule,
1200 Will abort if there are too many or too few rules, a malformed rule,
1201 or a rule on a changeset outside of the user-given range.
1201 or a rule on a changeset outside of the user-given range.
1202 """
1202 """
1203 expected = set(c.hex() for c in ctxs)
1203 expected = set(c.hex() for c in ctxs)
1204 seen = set()
1204 seen = set()
1205 for action in actions:
1205 for action in actions:
1206 action.verify()
1206 action.verify()
1207 constraints = action.constraints()
1207 constraints = action.constraints()
1208 for constraint in constraints:
1208 for constraint in constraints:
1209 if constraint not in _constraints.known():
1209 if constraint not in _constraints.known():
1210 raise error.Abort(_('unknown constraint "%s"') % constraint)
1210 raise error.Abort(_('unknown constraint "%s"') % constraint)
1211
1211
1212 nodetoverify = action.nodetoverify()
1212 nodetoverify = action.nodetoverify()
1213 if nodetoverify is not None:
1213 if nodetoverify is not None:
1214 ha = node.hex(nodetoverify)
1214 ha = node.hex(nodetoverify)
1215 if _constraints.noother in constraints and ha not in expected:
1215 if _constraints.noother in constraints and ha not in expected:
1216 raise error.Abort(
1216 raise error.Abort(
1217 _('may not use "%s" with changesets '
1217 _('may not use "%s" with changesets '
1218 'other than the ones listed') % action.verb)
1218 'other than the ones listed') % action.verb)
1219 if _constraints.forceother in constraints and ha in expected:
1219 if _constraints.forceother in constraints and ha in expected:
1220 raise error.Abort(
1220 raise error.Abort(
1221 _('may not use "%s" with changesets '
1221 _('may not use "%s" with changesets '
1222 'within the edited list') % action.verb)
1222 'within the edited list') % action.verb)
1223 if _constraints.noduplicates in constraints and ha in seen:
1223 if _constraints.noduplicates in constraints and ha in seen:
1224 raise error.Abort(_('duplicated command for changeset %s') %
1224 raise error.Abort(_('duplicated command for changeset %s') %
1225 ha[:12])
1225 ha[:12])
1226 seen.add(ha)
1226 seen.add(ha)
1227 missing = sorted(expected - seen) # sort to stabilize output
1227 missing = sorted(expected - seen) # sort to stabilize output
1228 if missing:
1228 if missing:
1229 raise error.Abort(_('missing rules for changeset %s') %
1229 raise error.Abort(_('missing rules for changeset %s') %
1230 missing[0][:12],
1230 missing[0][:12],
1231 hint=_('use "drop %s" to discard the change') % missing[0][:12])
1231 hint=_('use "drop %s" to discard the change') % missing[0][:12])
1232
1232
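Two hypothetical plans that this check rejects, assuming the edited range contains exactly the changesets 5339bf82f0ca and 8ef592ce7cc4::

    pick 5339bf82f0ca 3 Zworgle the foobar
    pick 5339bf82f0ca 3 Zworgle the foobar

aborts with 'duplicated command for changeset 5339bf82f0ca', while::

    pick 5339bf82f0ca 3 Zworgle the foobar

aborts with 'missing rules for changeset 8ef592ce7cc4' and hints that 'drop 8ef592ce7cc4' should be added if the omission was intentional.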
1233 def newnodestoabort(state):
1233 def newnodestoabort(state):
1234 """process the list of replacements to return
1234 """process the list of replacements to return
1235
1235
1236 1) the list of final nodes
1236 1) the list of final nodes
1237 2) the list of temporary nodes
1237 2) the list of temporary nodes
1238
1238
1239 This is meant to be used on abort, as less data is required in that case.
1239 This is meant to be used on abort, as less data is required in that case.
1240 """
1240 """
1241 replacements = state.replacements
1241 replacements = state.replacements
1242 allsuccs = set()
1242 allsuccs = set()
1243 replaced = set()
1243 replaced = set()
1244 for rep in replacements:
1244 for rep in replacements:
1245 allsuccs.update(rep[1])
1245 allsuccs.update(rep[1])
1246 replaced.add(rep[0])
1246 replaced.add(rep[0])
1247 newnodes = allsuccs - replaced
1247 newnodes = allsuccs - replaced
1248 tmpnodes = allsuccs & replaced
1248 tmpnodes = allsuccs & replaced
1249 return newnodes, tmpnodes
1249 return newnodes, tmpnodes
1250
1250
1251
1251
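A worked example of the set arithmetic above, with hypothetical single-letter names in place of binary node ids::

    # replacements as recorded by the actions: (old node, successors)
    replacements = [('A', {'A1'}), ('B', {'tmp'}), ('tmp', {'B1'})]
    allsuccs, replaced = set(), set()
    for old, succs in replacements:
        allsuccs.update(succs)
        replaced.add(old)
    newnodes = allsuccs - replaced   # {'A1', 'B1'}: commits created by the edit
    tmpnodes = allsuccs & replaced   # {'tmp'}: intermediate commits

On abort both sets are stripped, after the working copy has been moved off them if necessary.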
1252 def processreplacement(state):
1252 def processreplacement(state):
1253 """process the list of replacements to return
1253 """process the list of replacements to return
1254
1254
1255 1) the final mapping between original and created nodes
1255 1) the final mapping between original and created nodes
1256 2) the list of temporary nodes created by histedit
1256 2) the list of temporary nodes created by histedit
1257 3) the list of new commits created by histedit"""
1257 3) the list of new commits created by histedit"""
1258 replacements = state.replacements
1258 replacements = state.replacements
1259 allsuccs = set()
1259 allsuccs = set()
1260 replaced = set()
1260 replaced = set()
1261 fullmapping = {}
1261 fullmapping = {}
1262 # initialize basic set
1262 # initialize basic set
1263 # fullmapping records all operations recorded in replacement
1263 # fullmapping records all operations recorded in replacement
1264 for rep in replacements:
1264 for rep in replacements:
1265 allsuccs.update(rep[1])
1265 allsuccs.update(rep[1])
1266 replaced.add(rep[0])
1266 replaced.add(rep[0])
1267 fullmapping.setdefault(rep[0], set()).update(rep[1])
1267 fullmapping.setdefault(rep[0], set()).update(rep[1])
1268 new = allsuccs - replaced
1268 new = allsuccs - replaced
1269 tmpnodes = allsuccs & replaced
1269 tmpnodes = allsuccs & replaced
1270 # Reduce fullmapping into a direct relation between original nodes
1270 # Reduce fullmapping into a direct relation between original nodes
1271 # and the final nodes created during history editing
1271 # and the final nodes created during history editing
1272 # Dropped changesets are replaced by an empty list
1272 # Dropped changesets are replaced by an empty list
1273 toproceed = set(fullmapping)
1273 toproceed = set(fullmapping)
1274 final = {}
1274 final = {}
1275 while toproceed:
1275 while toproceed:
1276 for x in list(toproceed):
1276 for x in list(toproceed):
1277 succs = fullmapping[x]
1277 succs = fullmapping[x]
1278 for s in list(succs):
1278 for s in list(succs):
1279 if s in toproceed:
1279 if s in toproceed:
1280 # non final node with unknown closure
1280 # non final node with unknown closure
1281 # We can't process this now
1281 # We can't process this now
1282 break
1282 break
1283 elif s in final:
1283 elif s in final:
1284 # non final node, replace with closure
1284 # non final node, replace with closure
1285 succs.remove(s)
1285 succs.remove(s)
1286 succs.update(final[s])
1286 succs.update(final[s])
1287 else:
1287 else:
1288 final[x] = succs
1288 final[x] = succs
1289 toproceed.remove(x)
1289 toproceed.remove(x)
1290 # remove tmpnodes from final mapping
1290 # remove tmpnodes from final mapping
1291 for n in tmpnodes:
1291 for n in tmpnodes:
1292 del final[n]
1292 del final[n]
1293 # we expect all changes involved in final to exist in the repo
1293 # we expect all changes involved in final to exist in the repo
1294 # turn `final` into list (topologically sorted)
1294 # turn `final` into list (topologically sorted)
1295 nm = state.repo.changelog.nodemap
1295 nm = state.repo.changelog.nodemap
1296 for prec, succs in final.items():
1296 for prec, succs in final.items():
1297 final[prec] = sorted(succs, key=nm.get)
1297 final[prec] = sorted(succs, key=nm.get)
1298
1298
1299 # computed topmost element (necessary for bookmark)
1299 # computed topmost element (necessary for bookmark)
1300 if new:
1300 if new:
1301 newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
1301 newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
1302 elif not final:
1302 elif not final:
1303 # Nothing rewritten at all. We won't need `newtopmost`:
1303 # Nothing rewritten at all. We won't need `newtopmost`:
1304 # it is the same as `oldtopmost`, and callers of `processreplacement` know it
1304 # it is the same as `oldtopmost`, and callers of `processreplacement` know it
1305 newtopmost = None
1305 newtopmost = None
1306 else:
1306 else:
1307 # everybody died. The newtopmost is the parent of the root.
1307 # everybody died. The newtopmost is the parent of the root.
1308 r = state.repo.changelog.rev
1308 r = state.repo.changelog.rev
1309 newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()
1309 newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()
1310
1310
1311 return final, tmpnodes, new, newtopmost
1311 return final, tmpnodes, new, newtopmost
1312
1312
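To make the reduction above concrete, suppose (hypothetical names) the edit folded B into A, recording replacements [(A, {tmp}), (B, {tmp}), (tmp, {AB})]. Then fullmapping is {A: {tmp}, B: {tmp}, tmp: {AB}}; the loop first finalizes tmp -> {AB}, then substitutes that closure into A and B, and once temporary nodes are deleted from the mapping the result is::

    final      = {A: [AB], B: [AB]}   # both originals are superseded by AB
    tmpnodes   = {tmp}                # the intermediate fold commit
    new        = {AB}                 # commits that exist only after the edit
    newtopmost = AB

which is exactly what the caller needs to create obsolescence markers (or strip) and to move bookmarks.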
1313 def movebookmarks(ui, repo, mapping, oldtopmost, newtopmost):
1313 def movebookmarks(ui, repo, mapping, oldtopmost, newtopmost):
1314 """Move bookmark from old to newly created node"""
1314 """Move bookmark from old to newly created node"""
1315 if not mapping:
1315 if not mapping:
1316 # if nothing got rewritten there is no purpose for this function
1316 # if nothing got rewritten there is no purpose for this function
1317 return
1317 return
1318 moves = []
1318 moves = []
1319 for bk, old in sorted(repo._bookmarks.iteritems()):
1319 for bk, old in sorted(repo._bookmarks.iteritems()):
1320 if old == oldtopmost:
1320 if old == oldtopmost:
1321 # special case: ensure the bookmark stays on tip.
1321 # special case: ensure the bookmark stays on tip.
1322 #
1322 #
1323 # This is arguably a feature and we may only want that for the
1323 # This is arguably a feature and we may only want that for the
1324 # active bookmark. But the behavior is kept compatible with the old
1324 # active bookmark. But the behavior is kept compatible with the old
1325 # version for now.
1325 # version for now.
1326 moves.append((bk, newtopmost))
1326 moves.append((bk, newtopmost))
1327 continue
1327 continue
1328 base = old
1328 base = old
1329 new = mapping.get(base, None)
1329 new = mapping.get(base, None)
1330 if new is None:
1330 if new is None:
1331 continue
1331 continue
1332 while not new:
1332 while not new:
1333 # base is killed, trying with parent
1333 # base is killed, trying with parent
1334 base = repo[base].p1().node()
1334 base = repo[base].p1().node()
1335 new = mapping.get(base, (base,))
1335 new = mapping.get(base, (base,))
1336 # nothing to move
1336 # nothing to move
1337 moves.append((bk, new[-1]))
1337 moves.append((bk, new[-1]))
1338 if moves:
1338 if moves:
1339 lock = tr = None
1339 lock = tr = None
1340 try:
1340 try:
1341 lock = repo.lock()
1341 lock = repo.lock()
1342 tr = repo.transaction('histedit')
1342 tr = repo.transaction('histedit')
1343 marks = repo._bookmarks
1343 marks = repo._bookmarks
1344 for mark, new in moves:
1344 for mark, new in moves:
1345 old = marks[mark]
1345 old = marks[mark]
1346 ui.note(_('histedit: moving bookmarks %s from %s to %s\n')
1346 ui.note(_('histedit: moving bookmarks %s from %s to %s\n')
1347 % (mark, node.short(old), node.short(new)))
1347 % (mark, node.short(old), node.short(new)))
1348 marks[mark] = new
1348 marks[mark] = new
1349 marks.recordchange(tr)
1349 marks.recordchange(tr)
1350 tr.close()
1350 tr.close()
1351 finally:
1351 finally:
1352 release(tr, lock)
1352 release(tr, lock)
1353
1353
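For example (hypothetical nodes), with mapping = {D: [], C: [C1]} and D a child of C: a bookmark on the old topmost is pinned to the new topmost by the special case above, while a bookmark on D first resolves to the empty list (D was dropped), so the loop climbs to the parent C and the bookmark ends up on C1, the last of C's successors.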
1354 def cleanupnode(ui, repo, name, nodes):
1354 def cleanupnode(ui, repo, name, nodes):
1355 """strip a group of nodes from the repository
1355 """strip a group of nodes from the repository
1356
1356
1357 The set of nodes to strip may contain unknown nodes."""
1357 The set of nodes to strip may contain unknown nodes."""
1358 ui.debug('should strip %s nodes %s\n' %
1358 ui.debug('should strip %s nodes %s\n' %
1359 (name, ', '.join([node.short(n) for n in nodes])))
1359 (name, ', '.join([node.short(n) for n in nodes])))
1360 lock = None
1360 lock = None
1361 try:
1361 try:
1362 lock = repo.lock()
1362 lock = repo.lock()
1363 # do not let filtering get in the way of the cleanse
1363 # do not let filtering get in the way of the cleanse
1364 # we should probably get rid of obsolescence markers created during the
1364 # we should probably get rid of obsolescence markers created during the
1365 # histedit, but we currently do not have such information.
1365 # histedit, but we currently do not have such information.
1366 repo = repo.unfiltered()
1366 repo = repo.unfiltered()
1367 # Find all nodes that need to be stripped
1367 # Find all nodes that need to be stripped
1368 # (we use %lr instead of %ln to silently ignore unknown items)
1368 # (we use %lr instead of %ln to silently ignore unknown items)
1369 nm = repo.changelog.nodemap
1369 nm = repo.changelog.nodemap
1370 nodes = sorted(n for n in nodes if n in nm)
1370 nodes = sorted(n for n in nodes if n in nm)
1371 roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
1371 roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
1372 for c in roots:
1372 for c in roots:
1373 # We should process nodes in reverse order to strip the tipmost first,
1373 # We should process nodes in reverse order to strip the tipmost first,
1374 # but this triggers a bug in the changegroup hook.
1374 # but this triggers a bug in the changegroup hook.
1375 # It would also reduce bundle overhead.
1375 # It would also reduce bundle overhead.
1376 repair.strip(ui, repo, c)
1376 repair.strip(ui, repo, c)
1377 finally:
1377 finally:
1378 release(lock)
1378 release(lock)
1379
1379
1380 def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
1380 def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
1381 if isinstance(nodelist, str):
1381 if isinstance(nodelist, str):
1382 nodelist = [nodelist]
1382 nodelist = [nodelist]
1383 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
1383 if os.path.exists(os.path.join(repo.path, 'histedit-state')):
1384 state = histeditstate(repo)
1384 state = histeditstate(repo)
1385 state.read()
1385 state.read()
1386 histedit_nodes = set([action.nodetoverify() for action
1386 histedit_nodes = set([action.nodetoverify() for action
1387 in state.actions if action.nodetoverify()])
1387 in state.actions if action.nodetoverify()])
1388 strip_nodes = set([repo[n].node() for n in nodelist])
1388 strip_nodes = set([repo[n].node() for n in nodelist])
1389 common_nodes = histedit_nodes & strip_nodes
1389 common_nodes = histedit_nodes & strip_nodes
1390 if common_nodes:
1390 if common_nodes:
1391 raise error.Abort(_("histedit in progress, can't strip %s")
1391 raise error.Abort(_("histedit in progress, can't strip %s")
1392 % ', '.join(node.short(x) for x in common_nodes))
1392 % ', '.join(node.short(x) for x in common_nodes))
1393 return orig(ui, repo, nodelist, *args, **kwargs)
1393 return orig(ui, repo, nodelist, *args, **kwargs)
1394
1394
1395 extensions.wrapfunction(repair, 'strip', stripwrapper)
1395 extensions.wrapfunction(repair, 'strip', stripwrapper)
1396
1396
1397 def summaryhook(ui, repo):
1397 def summaryhook(ui, repo):
1398 if not os.path.exists(repo.join('histedit-state')):
1398 if not os.path.exists(repo.join('histedit-state')):
1399 return
1399 return
1400 state = histeditstate(repo)
1400 state = histeditstate(repo)
1401 state.read()
1401 state.read()
1402 if state.actions:
1402 if state.actions:
1403 # i18n: column positioning for "hg summary"
1403 # i18n: column positioning for "hg summary"
1404 ui.write(_('hist: %s (histedit --continue)\n') %
1404 ui.write(_('hist: %s (histedit --continue)\n') %
1405 (ui.label(_('%d remaining'), 'histedit.remaining') %
1405 (ui.label(_('%d remaining'), 'histedit.remaining') %
1406 len(state.actions)))
1406 len(state.actions)))
1407
1407
1408 def extsetup(ui):
1408 def extsetup(ui):
1409 cmdutil.summaryhooks.add('histedit', summaryhook)
1409 cmdutil.summaryhooks.add('histedit', summaryhook)
1410 cmdutil.unfinishedstates.append(
1410 cmdutil.unfinishedstates.append(
1411 ['histedit-state', False, True, _('histedit in progress'),
1411 ['histedit-state', False, True, _('histedit in progress'),
1412 _("use 'hg histedit --continue' or 'hg histedit --abort'")])
1412 _("use 'hg histedit --continue' or 'hg histedit --abort'")])
1413 if ui.configbool("experimental", "histeditng"):
1413 if ui.configbool("experimental", "histeditng"):
1414 globals()['base'] = addhisteditaction(['base', 'b'])(base)
1414 globals()['base'] = addhisteditaction(['base', 'b'])(base)
@@ -1,1430 +1,1433 b''
1 # Copyright 2009-2010 Gregory P. Ward
1 # Copyright 2009-2010 Gregory P. Ward
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
3 # Copyright 2010-2011 Fog Creek Software
3 # Copyright 2010-2011 Fog Creek Software
4 # Copyright 2010-2011 Unity Technologies
4 # Copyright 2010-2011 Unity Technologies
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 '''Overridden Mercurial commands and functions for the largefiles extension'''
9 '''Overridden Mercurial commands and functions for the largefiles extension'''
10
10
11 import os
11 import os
12 import copy
12 import copy
13
13
14 from mercurial import hg, util, cmdutil, scmutil, match as match_, \
14 from mercurial import hg, util, cmdutil, scmutil, match as match_, \
15 archival, pathutil, revset, error
15 archival, pathutil, revset, error
16 from mercurial.i18n import _
16 from mercurial.i18n import _
17
17
18 import lfutil
18 import lfutil
19 import lfcommands
19 import lfcommands
20 import basestore
20 import basestore
21
21
22 # -- Utility functions: commonly/repeatedly needed functionality ---------------
22 # -- Utility functions: commonly/repeatedly needed functionality ---------------
23
23
24 def composelargefilematcher(match, manifest):
24 def composelargefilematcher(match, manifest):
25 '''create a matcher that matches only the largefiles in the original
25 '''create a matcher that matches only the largefiles in the original
26 matcher'''
26 matcher'''
27 m = copy.copy(match)
27 m = copy.copy(match)
28 lfile = lambda f: lfutil.standin(f) in manifest
28 lfile = lambda f: lfutil.standin(f) in manifest
29 m._files = filter(lfile, m._files)
29 m._files = filter(lfile, m._files)
30 m._fileroots = set(m._files)
30 m._fileroots = set(m._files)
31 m._always = False
31 m._always = False
32 origmatchfn = m.matchfn
32 origmatchfn = m.matchfn
33 m.matchfn = lambda f: lfile(f) and origmatchfn(f)
33 m.matchfn = lambda f: lfile(f) and origmatchfn(f)
34 return m
34 return m
35
35
36 def composenormalfilematcher(match, manifest, exclude=None):
36 def composenormalfilematcher(match, manifest, exclude=None):
37 excluded = set()
37 excluded = set()
38 if exclude is not None:
38 if exclude is not None:
39 excluded.update(exclude)
39 excluded.update(exclude)
40
40
41 m = copy.copy(match)
41 m = copy.copy(match)
42 notlfile = lambda f: not (lfutil.isstandin(f) or lfutil.standin(f) in
42 notlfile = lambda f: not (lfutil.isstandin(f) or lfutil.standin(f) in
43 manifest or f in excluded)
43 manifest or f in excluded)
44 m._files = filter(notlfile, m._files)
44 m._files = filter(notlfile, m._files)
45 m._fileroots = set(m._files)
45 m._fileroots = set(m._files)
46 m._always = False
46 m._always = False
47 origmatchfn = m.matchfn
47 origmatchfn = m.matchfn
48 m.matchfn = lambda f: notlfile(f) and origmatchfn(f)
48 m.matchfn = lambda f: notlfile(f) and origmatchfn(f)
49 return m
49 return m
50
50
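A rough, pure-Python model of how these two helpers partition a matcher between largefiles and normal files (hypothetical file names; the real code works on Mercurial matcher objects and the manifest, and composenormalfilematcher additionally drops the standins themselves and an explicit exclude list)::

    manifest = {'.hglf/big.bin', 'small.txt'}   # standins live under .hglf/
    standin = lambda f: '.hglf/' + f
    islarge = lambda f: standin(f) in manifest

    files = ['big.bin', 'small.txt']
    largeonly  = [f for f in files if islarge(f)]        # ['big.bin']
    normalonly = [f for f in files if not islarge(f)]    # ['small.txt']

so a wrapped command sees every file exactly once, through one matcher or the other.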
51 def installnormalfilesmatchfn(manifest):
51 def installnormalfilesmatchfn(manifest):
52 '''installmatchfn with a matchfn that ignores all largefiles'''
52 '''installmatchfn with a matchfn that ignores all largefiles'''
53 def overridematch(ctx, pats=(), opts=None, globbed=False,
53 def overridematch(ctx, pats=(), opts=None, globbed=False,
54 default='relpath', badfn=None):
54 default='relpath', badfn=None):
55 if opts is None:
55 if opts is None:
56 opts = {}
56 opts = {}
57 match = oldmatch(ctx, pats, opts, globbed, default, badfn=badfn)
57 match = oldmatch(ctx, pats, opts, globbed, default, badfn=badfn)
58 return composenormalfilematcher(match, manifest)
58 return composenormalfilematcher(match, manifest)
59 oldmatch = installmatchfn(overridematch)
59 oldmatch = installmatchfn(overridematch)
60
60
61 def installmatchfn(f):
61 def installmatchfn(f):
62 '''monkey patch the scmutil module with a custom match function.
62 '''monkey patch the scmutil module with a custom match function.
63 Warning: it monkey patches the _module_ at runtime! Not thread safe!'''
63 Warning: it monkey patches the _module_ at runtime! Not thread safe!'''
64 oldmatch = scmutil.match
64 oldmatch = scmutil.match
65 setattr(f, 'oldmatch', oldmatch)
65 setattr(f, 'oldmatch', oldmatch)
66 scmutil.match = f
66 scmutil.match = f
67 return oldmatch
67 return oldmatch
68
68
69 def restorematchfn():
69 def restorematchfn():
70 '''restores scmutil.match to what it was before installmatchfn
70 '''restores scmutil.match to what it was before installmatchfn
71 was called. No-op if scmutil.match is its original function.
71 was called. No-op if scmutil.match is its original function.
72
72
73 Note that n calls to installmatchfn will require n calls to
73 Note that n calls to installmatchfn will require n calls to
74 restore the original matchfn.'''
74 restore the original matchfn.'''
75 scmutil.match = getattr(scmutil.match, 'oldmatch')
75 scmutil.match = getattr(scmutil.match, 'oldmatch')
76
76
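Because the patch is global and nests, callers are expected to pair each install with a restore, typically around a single wrapped command; a minimal sketch of that discipline (the body of the override is hypothetical)::

    def overridematch(ctx, pats=(), opts=None, globbed=False,
                      default='relpath', badfn=None):
        # post-process whatever the saved original returns
        return overridematch.oldmatch(ctx, pats, opts or {}, globbed,
                                      default, badfn=badfn)

    installmatchfn(overridematch)   # scmutil.match is now overridematch
    try:
        pass                        # run the wrapped command here
    finally:
        restorematchfn()            # pops exactly one level of patching

installnormalfilesmatchfn above is one such installer.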
77 def installmatchandpatsfn(f):
77 def installmatchandpatsfn(f):
78 oldmatchandpats = scmutil.matchandpats
78 oldmatchandpats = scmutil.matchandpats
79 setattr(f, 'oldmatchandpats', oldmatchandpats)
79 setattr(f, 'oldmatchandpats', oldmatchandpats)
80 scmutil.matchandpats = f
80 scmutil.matchandpats = f
81 return oldmatchandpats
81 return oldmatchandpats
82
82
83 def restorematchandpatsfn():
83 def restorematchandpatsfn():
84 '''restores scmutil.matchandpats to what it was before
84 '''restores scmutil.matchandpats to what it was before
85 installmatchandpatsfn was called. No-op if scmutil.matchandpats
85 installmatchandpatsfn was called. No-op if scmutil.matchandpats
86 is its original function.
86 is its original function.
87
87
88 Note that n calls to installmatchandpatsfn will require n calls
88 Note that n calls to installmatchandpatsfn will require n calls
89 to restore the original matchfn.'''
89 to restore the original matchfn.'''
90 scmutil.matchandpats = getattr(scmutil.matchandpats, 'oldmatchandpats',
90 scmutil.matchandpats = getattr(scmutil.matchandpats, 'oldmatchandpats',
91 scmutil.matchandpats)
91 scmutil.matchandpats)
92
92
93 def addlargefiles(ui, repo, isaddremove, matcher, **opts):
93 def addlargefiles(ui, repo, isaddremove, matcher, **opts):
94 large = opts.get('large')
94 large = opts.get('large')
95 lfsize = lfutil.getminsize(
95 lfsize = lfutil.getminsize(
96 ui, lfutil.islfilesrepo(repo), opts.get('lfsize'))
96 ui, lfutil.islfilesrepo(repo), opts.get('lfsize'))
97
97
98 lfmatcher = None
98 lfmatcher = None
99 if lfutil.islfilesrepo(repo):
99 if lfutil.islfilesrepo(repo):
100 lfpats = ui.configlist(lfutil.longname, 'patterns', default=[])
100 lfpats = ui.configlist(lfutil.longname, 'patterns', default=[])
101 if lfpats:
101 if lfpats:
102 lfmatcher = match_.match(repo.root, '', list(lfpats))
102 lfmatcher = match_.match(repo.root, '', list(lfpats))
103
103
104 lfnames = []
104 lfnames = []
105 m = matcher
105 m = matcher
106
106
107 wctx = repo[None]
107 wctx = repo[None]
108 for f in repo.walk(match_.badmatch(m, lambda x, y: None)):
108 for f in repo.walk(match_.badmatch(m, lambda x, y: None)):
109 exact = m.exact(f)
109 exact = m.exact(f)
110 lfile = lfutil.standin(f) in wctx
110 lfile = lfutil.standin(f) in wctx
111 nfile = f in wctx
111 nfile = f in wctx
112 exists = lfile or nfile
112 exists = lfile or nfile
113
113
114 # addremove in core gets fancy with the name, add doesn't
114 # addremove in core gets fancy with the name, add doesn't
115 if isaddremove:
115 if isaddremove:
116 name = m.uipath(f)
116 name = m.uipath(f)
117 else:
117 else:
118 name = m.rel(f)
118 name = m.rel(f)
119
119
120 # Don't warn the user when they attempt to add a normal tracked file.
120 # Don't warn the user when they attempt to add a normal tracked file.
121 # The normal add code will do that for us.
121 # The normal add code will do that for us.
122 if exact and exists:
122 if exact and exists:
123 if lfile:
123 if lfile:
124 ui.warn(_('%s already a largefile\n') % name)
124 ui.warn(_('%s already a largefile\n') % name)
125 continue
125 continue
126
126
127 if (exact or not exists) and not lfutil.isstandin(f):
127 if (exact or not exists) and not lfutil.isstandin(f):
128 # In case the file was removed previously, but not committed
128 # In case the file was removed previously, but not committed
129 # (issue3507)
129 # (issue3507)
130 if not repo.wvfs.exists(f):
130 if not repo.wvfs.exists(f):
131 continue
131 continue
132
132
133 abovemin = (lfsize and
133 abovemin = (lfsize and
134 repo.wvfs.lstat(f).st_size >= lfsize * 1024 * 1024)
134 repo.wvfs.lstat(f).st_size >= lfsize * 1024 * 1024)
135 if large or abovemin or (lfmatcher and lfmatcher(f)):
135 if large or abovemin or (lfmatcher and lfmatcher(f)):
136 lfnames.append(f)
136 lfnames.append(f)
137 if ui.verbose or not exact:
137 if ui.verbose or not exact:
138 ui.status(_('adding %s as a largefile\n') % name)
138 ui.status(_('adding %s as a largefile\n') % name)
139
139
140 bad = []
140 bad = []
141
141
142 # Need to lock, otherwise there could be a race condition between
142 # Need to lock, otherwise there could be a race condition between
143 # when standins are created and when they are added to the repo.
143 # when standins are created and when they are added to the repo.
144 wlock = repo.wlock()
144 wlock = repo.wlock()
145 try:
145 try:
146 if not opts.get('dry_run'):
146 if not opts.get('dry_run'):
147 standins = []
147 standins = []
148 lfdirstate = lfutil.openlfdirstate(ui, repo)
148 lfdirstate = lfutil.openlfdirstate(ui, repo)
149 for f in lfnames:
149 for f in lfnames:
150 standinname = lfutil.standin(f)
150 standinname = lfutil.standin(f)
151 lfutil.writestandin(repo, standinname, hash='',
151 lfutil.writestandin(repo, standinname, hash='',
152 executable=lfutil.getexecutable(repo.wjoin(f)))
152 executable=lfutil.getexecutable(repo.wjoin(f)))
153 standins.append(standinname)
153 standins.append(standinname)
154 if lfdirstate[f] == 'r':
154 if lfdirstate[f] == 'r':
155 lfdirstate.normallookup(f)
155 lfdirstate.normallookup(f)
156 else:
156 else:
157 lfdirstate.add(f)
157 lfdirstate.add(f)
158 lfdirstate.write()
158 lfdirstate.write()
159 bad += [lfutil.splitstandin(f)
159 bad += [lfutil.splitstandin(f)
160 for f in repo[None].add(standins)
160 for f in repo[None].add(standins)
161 if f in m.files()]
161 if f in m.files()]
162
162
163 added = [f for f in lfnames if f not in bad]
163 added = [f for f in lfnames if f not in bad]
164 finally:
164 finally:
165 wlock.release()
165 wlock.release()
166 return added, bad
166 return added, bad
167
167
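The choice of which walked files become largefiles reduces to the three-way test on the 'large or abovemin or (lfmatcher and lfmatcher(f))' line above: the explicit --large flag, the configured minimum size in megabytes, and the configured largefiles patterns. A standalone sketch of just that predicate (the 10 MB default is illustrative)::

    def wantslargefile(size_bytes, explicit_large=False,
                       minsize_mb=10, matches_pattern=False):
        # mirrors: large or abovemin or (lfmatcher and lfmatcher(f))
        abovemin = minsize_mb and size_bytes >= minsize_mb * 1024 * 1024
        return bool(explicit_large or abovemin or matches_pattern)

    wantslargefile(25 * 1024 * 1024)             # True: over the size limit
    wantslargefile(1024, matches_pattern=True)   # True: pattern match wins
    wantslargefile(1024)                         # False: stays a normal file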
168 def removelargefiles(ui, repo, isaddremove, matcher, **opts):
168 def removelargefiles(ui, repo, isaddremove, matcher, **opts):
169 after = opts.get('after')
169 after = opts.get('after')
170 m = composelargefilematcher(matcher, repo[None].manifest())
170 m = composelargefilematcher(matcher, repo[None].manifest())
171 try:
171 try:
172 repo.lfstatus = True
172 repo.lfstatus = True
173 s = repo.status(match=m, clean=not isaddremove)
173 s = repo.status(match=m, clean=not isaddremove)
174 finally:
174 finally:
175 repo.lfstatus = False
175 repo.lfstatus = False
176 manifest = repo[None].manifest()
176 manifest = repo[None].manifest()
177 modified, added, deleted, clean = [[f for f in list
177 modified, added, deleted, clean = [[f for f in list
178 if lfutil.standin(f) in manifest]
178 if lfutil.standin(f) in manifest]
179 for list in (s.modified, s.added,
179 for list in (s.modified, s.added,
180 s.deleted, s.clean)]
180 s.deleted, s.clean)]
181
181
182 def warn(files, msg):
182 def warn(files, msg):
183 for f in files:
183 for f in files:
184 ui.warn(msg % m.rel(f))
184 ui.warn(msg % m.rel(f))
185 return int(len(files) > 0)
185 return int(len(files) > 0)
186
186
187 result = 0
187 result = 0
188
188
189 if after:
189 if after:
190 remove = deleted
190 remove = deleted
191 result = warn(modified + added + clean,
191 result = warn(modified + added + clean,
192 _('not removing %s: file still exists\n'))
192 _('not removing %s: file still exists\n'))
193 else:
193 else:
194 remove = deleted + clean
194 remove = deleted + clean
195 result = warn(modified, _('not removing %s: file is modified (use -f'
195 result = warn(modified, _('not removing %s: file is modified (use -f'
196 ' to force removal)\n'))
196 ' to force removal)\n'))
197 result = warn(added, _('not removing %s: file has been marked for add'
197 result = warn(added, _('not removing %s: file has been marked for add'
198 ' (use forget to undo)\n')) or result
198 ' (use forget to undo)\n')) or result
199
199
200 # Need to lock because standin files are deleted then removed from the
200 # Need to lock because standin files are deleted then removed from the
201 # repository and we could race in-between.
201 # repository and we could race in-between.
202 wlock = repo.wlock()
202 wlock = repo.wlock()
203 try:
203 try:
204 lfdirstate = lfutil.openlfdirstate(ui, repo)
204 lfdirstate = lfutil.openlfdirstate(ui, repo)
205 for f in sorted(remove):
205 for f in sorted(remove):
206 if ui.verbose or not m.exact(f):
206 if ui.verbose or not m.exact(f):
207 # addremove in core gets fancy with the name, remove doesn't
207 # addremove in core gets fancy with the name, remove doesn't
208 if isaddremove:
208 if isaddremove:
209 name = m.uipath(f)
209 name = m.uipath(f)
210 else:
210 else:
211 name = m.rel(f)
211 name = m.rel(f)
212 ui.status(_('removing %s\n') % name)
212 ui.status(_('removing %s\n') % name)
213
213
214 if not opts.get('dry_run'):
214 if not opts.get('dry_run'):
215 if not after:
215 if not after:
216 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
216 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
217
217
218 if opts.get('dry_run'):
218 if opts.get('dry_run'):
219 return result
219 return result
220
220
221 remove = [lfutil.standin(f) for f in remove]
221 remove = [lfutil.standin(f) for f in remove]
222 # If this is being called by addremove, let the original addremove
222 # If this is being called by addremove, let the original addremove
223 # function handle this.
223 # function handle this.
224 if not isaddremove:
224 if not isaddremove:
225 for f in remove:
225 for f in remove:
226 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
226 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
227 repo[None].forget(remove)
227 repo[None].forget(remove)
228
228
229 for f in remove:
229 for f in remove:
230 lfutil.synclfdirstate(repo, lfdirstate, lfutil.splitstandin(f),
230 lfutil.synclfdirstate(repo, lfdirstate, lfutil.splitstandin(f),
231 False)
231 False)
232
232
233 lfdirstate.write()
233 lfdirstate.write()
234 finally:
234 finally:
235 wlock.release()
235 wlock.release()
236
236
237 return result
237 return result
238
238
239 # For overriding mercurial.hgweb.webcommands so that largefiles will
239 # For overriding mercurial.hgweb.webcommands so that largefiles will
240 # appear at their right place in the manifests.
240 # appear at their right place in the manifests.
241 def decodepath(orig, path):
241 def decodepath(orig, path):
242 return lfutil.splitstandin(path) or path
242 return lfutil.splitstandin(path) or path
243
243
244 # -- Wrappers: modify existing commands --------------------------------
244 # -- Wrappers: modify existing commands --------------------------------
245
245
246 def overrideadd(orig, ui, repo, *pats, **opts):
246 def overrideadd(orig, ui, repo, *pats, **opts):
247 if opts.get('normal') and opts.get('large'):
247 if opts.get('normal') and opts.get('large'):
248 raise error.Abort(_('--normal cannot be used with --large'))
248 raise error.Abort(_('--normal cannot be used with --large'))
249 return orig(ui, repo, *pats, **opts)
249 return orig(ui, repo, *pats, **opts)
250
250
251 def cmdutiladd(orig, ui, repo, matcher, prefix, explicitonly, **opts):
251 def cmdutiladd(orig, ui, repo, matcher, prefix, explicitonly, **opts):
252 # The --normal flag short circuits this override
252 # The --normal flag short circuits this override
253 if opts.get('normal'):
253 if opts.get('normal'):
254 return orig(ui, repo, matcher, prefix, explicitonly, **opts)
254 return orig(ui, repo, matcher, prefix, explicitonly, **opts)
255
255
256 ladded, lbad = addlargefiles(ui, repo, False, matcher, **opts)
256 ladded, lbad = addlargefiles(ui, repo, False, matcher, **opts)
257 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest(),
257 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest(),
258 ladded)
258 ladded)
259 bad = orig(ui, repo, normalmatcher, prefix, explicitonly, **opts)
259 bad = orig(ui, repo, normalmatcher, prefix, explicitonly, **opts)
260
260
261 bad.extend(f for f in lbad)
261 bad.extend(f for f in lbad)
262 return bad
262 return bad
263
263
264 def cmdutilremove(orig, ui, repo, matcher, prefix, after, force, subrepos):
264 def cmdutilremove(orig, ui, repo, matcher, prefix, after, force, subrepos):
265 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest())
265 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest())
266 result = orig(ui, repo, normalmatcher, prefix, after, force, subrepos)
266 result = orig(ui, repo, normalmatcher, prefix, after, force, subrepos)
267 return removelargefiles(ui, repo, False, matcher, after=after,
267 return removelargefiles(ui, repo, False, matcher, after=after,
268 force=force) or result
268 force=force) or result
269
269
270 def overridestatusfn(orig, repo, rev2, **opts):
270 def overridestatusfn(orig, repo, rev2, **opts):
271 try:
271 try:
272 repo._repo.lfstatus = True
272 repo._repo.lfstatus = True
273 return orig(repo, rev2, **opts)
273 return orig(repo, rev2, **opts)
274 finally:
274 finally:
275 repo._repo.lfstatus = False
275 repo._repo.lfstatus = False
276
276
277 def overridestatus(orig, ui, repo, *pats, **opts):
277 def overridestatus(orig, ui, repo, *pats, **opts):
278 try:
278 try:
279 repo.lfstatus = True
279 repo.lfstatus = True
280 return orig(ui, repo, *pats, **opts)
280 return orig(ui, repo, *pats, **opts)
281 finally:
281 finally:
282 repo.lfstatus = False
282 repo.lfstatus = False
283
283
284 def overridedirty(orig, repo, ignoreupdate=False):
284 def overridedirty(orig, repo, ignoreupdate=False):
285 try:
285 try:
286 repo._repo.lfstatus = True
286 repo._repo.lfstatus = True
287 return orig(repo, ignoreupdate)
287 return orig(repo, ignoreupdate)
288 finally:
288 finally:
289 repo._repo.lfstatus = False
289 repo._repo.lfstatus = False
290
290
def overridelog(orig, ui, repo, *pats, **opts):
    def overridematchandpats(ctx, pats=(), opts=None, globbed=False,
                             default='relpath', badfn=None):
        """Matcher that merges root directory with .hglf, suitable for log.
        It is still possible to match .hglf directly.
        For any listed file, run log on the standin too.
        matchfn tries both the given filename and with .hglf stripped.
        """
        if opts is None:
            opts = {}
        matchandpats = oldmatchandpats(ctx, pats, opts, globbed, default,
                                       badfn=badfn)
        m, p = copy.copy(matchandpats)

        if m.always():
            # We want to match everything anyway, so there's no benefit trying
            # to add standins.
            return matchandpats

        pats = set(p)

        def fixpats(pat, tostandin=lfutil.standin):
            if pat.startswith('set:'):
                return pat

            kindpat = match_._patsplit(pat, None)

            if kindpat[0] is not None:
                return kindpat[0] + ':' + tostandin(kindpat[1])
            return tostandin(kindpat[1])

        if m._cwd:
            hglf = lfutil.shortname
            back = util.pconvert(m.rel(hglf)[:-len(hglf)])

            def tostandin(f):
                # The file may already be a standin, so truncate the back
                # prefix and test before mangling it. This avoids turning
                # 'glob:../.hglf/foo*' into 'glob:../.hglf/../.hglf/foo*'.
                if f.startswith(back) and lfutil.splitstandin(f[len(back):]):
                    return f

                # An absolute path is from outside the repo, so truncate the
                # path to the root before building the standin. Otherwise cwd
                # is somewhere in the repo, relative to root, and needs to be
                # prepended before building the standin.
                if os.path.isabs(m._cwd):
                    f = f[len(back):]
                else:
                    f = m._cwd + '/' + f
                return back + lfutil.standin(f)

            pats.update(fixpats(f, tostandin) for f in p)
        else:
            def tostandin(f):
                if lfutil.splitstandin(f):
                    return f
                return lfutil.standin(f)
            pats.update(fixpats(f, tostandin) for f in p)

        for i in range(0, len(m._files)):
            # Don't add '.hglf' to m.files, since that is already covered by '.'
            if m._files[i] == '.':
                continue
            standin = lfutil.standin(m._files[i])
            # If the "standin" is a directory, append instead of replace to
            # support naming a directory on the command line with only
            # largefiles. The original directory is kept to support normal
            # files.
            if standin in repo[ctx.node()]:
                m._files[i] = standin
            elif m._files[i] not in repo[ctx.node()] \
                    and repo.wvfs.isdir(standin):
                m._files.append(standin)

        m._fileroots = set(m._files)
        m._always = False
        origmatchfn = m.matchfn
        def lfmatchfn(f):
            lf = lfutil.splitstandin(f)
            if lf is not None and origmatchfn(lf):
                return True
            r = origmatchfn(f)
            return r
        m.matchfn = lfmatchfn

        ui.debug('updated patterns: %s\n' % sorted(pats))
        return m, pats

    # For hg log --patch, the match object is used in two different senses:
    # (1) to determine what revisions should be printed out, and
    # (2) to determine what files to print out diffs for.
    # The magic matchandpats override should be used for case (1) but not for
    # case (2).
    def overridemakelogfilematcher(repo, pats, opts, badfn=None):
        wctx = repo[None]
        match, pats = oldmatchandpats(wctx, pats, opts, badfn=badfn)
        return lambda rev: match

    oldmatchandpats = installmatchandpatsfn(overridematchandpats)
    oldmakelogfilematcher = cmdutil._makenofollowlogfilematcher
    setattr(cmdutil, '_makenofollowlogfilematcher', overridemakelogfilematcher)

    try:
        return orig(ui, repo, *pats, **opts)
    finally:
        restorematchandpatsfn()
        setattr(cmdutil, '_makenofollowlogfilematcher', oldmakelogfilematcher)

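# For reference, the standin naming used throughout this module (this merely
# restates the .hglf handling above with example values):
#
#   lfutil.shortname                          -> '.hglf'
#   lfutil.standin('sub/big.bin')             -> '.hglf/sub/big.bin'
#   lfutil.splitstandin('.hglf/sub/big.bin')  -> 'sub/big.bin'
#   lfutil.splitstandin('sub/big.bin')        -> None
#
# which is why the matchers above try both the raw filename and its
# .hglf-stripped form.
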
def overrideverify(orig, ui, repo, *pats, **opts):
    large = opts.pop('large', False)
    all = opts.pop('lfa', False)
    contents = opts.pop('lfc', False)

    result = orig(ui, repo, *pats, **opts)
    if large or all or contents:
        result = result or lfcommands.verifylfiles(ui, repo, all, contents)
    return result

def overridedebugstate(orig, ui, repo, *pats, **opts):
    large = opts.pop('large', False)
    if large:
        class fakerepo(object):
            dirstate = lfutil.openlfdirstate(ui, repo)
        orig(ui, fakerepo, *pats, **opts)
    else:
        orig(ui, repo, *pats, **opts)

# Before starting the manifest merge, merge.updates will call
# _checkunknownfile to check if there are any files in the merged-in
# changeset that collide with unknown files in the working copy.
#
# The largefiles are seen as unknown, so this prevents us from merging
# in a file 'foo' if we already have a largefile with the same name.
#
# The overridden function filters the unknown files by removing any
# largefiles. This makes the merge proceed and we can then handle this
# case further in the overridden calculateupdates function below.
def overridecheckunknownfile(origfn, repo, wctx, mctx, f, f2=None):
    if lfutil.standin(repo.dirstate.normalize(f)) in wctx:
        return False
    return origfn(repo, wctx, mctx, f, f2)

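# Sketch of how overrides at this layer are typically installed (assumption:
# the actual wiring lives in the extension's uisetup(), not in this file).
# extensions.wrapfunction() swaps in the wrapper and passes the original
# function as the first argument:
#
#   from mercurial import extensions, merge as mergemod
#   extensions.wrapfunction(mergemod, '_checkunknownfile',
#                           overridecheckunknownfile)
#   extensions.wrapfunction(mergemod, 'calculateupdates',
#                           overridecalculateupdates)
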
# The manifest merge handles conflicts on the manifest level. We want
# to handle changes in largefile-ness of files at this level too.
#
# The strategy is to run the original calculateupdates and then process
# the action list it outputs. There are two cases we need to deal with:
#
# 1. Normal file in p1, largefile in p2. Here the largefile is
#    detected via its standin file, which will enter the working copy
#    with a "get" action. It is not "merge" since the standin is all
#    Mercurial is concerned with at this level -- the link to the
#    existing normal file is not relevant here.
#
# 2. Largefile in p1, normal file in p2. Here we get a "merge" action
#    since the largefile will be present in the working copy and
#    different from the normal file in p2. Mercurial therefore
#    triggers a merge action.
#
# In both cases, we prompt the user and emit new actions to either
# remove the standin (if the normal file was kept) or to remove the
# normal file and get the standin (if the largefile was kept). The
# default prompt answer is to use the largefile version since it was
# presumably changed on purpose.
#
# Finally, the merge.applyupdates function will then take care of
# writing the files into the working copy and lfcommands.updatelfiles
# will update the largefiles.
def overridecalculateupdates(origfn, repo, p1, p2, pas, branchmerge, force,
                             partial, acceptremote, followcopies):
    overwrite = force and not branchmerge
    actions, diverge, renamedelete = origfn(
        repo, p1, p2, pas, branchmerge, force, partial, acceptremote,
        followcopies)

    if overwrite:
        return actions, diverge, renamedelete

    # Convert to dictionary with filename as key and action as value.
    lfiles = set()
    for f in actions:
        splitstandin = f and lfutil.splitstandin(f)
        if splitstandin in p1:
            lfiles.add(splitstandin)
        elif lfutil.standin(f) in p1:
            lfiles.add(f)

    for lfile in lfiles:
        standin = lfutil.standin(lfile)
        (lm, largs, lmsg) = actions.get(lfile, (None, None, None))
        (sm, sargs, smsg) = actions.get(standin, (None, None, None))
        if sm in ('g', 'dc') and lm != 'r':
            if sm == 'dc':
                f1, f2, fa, move, anc = sargs
                sargs = (p2[f2].flags(),)
            # Case 1: normal file in the working copy, largefile in
            # the second parent
            usermsg = _('remote turned local normal file %s into a largefile\n'
                        'use (l)argefile or keep (n)ormal file?'
                        '$$ &Largefile $$ &Normal file') % lfile
            if repo.ui.promptchoice(usermsg, 0) == 0: # pick remote largefile
                actions[lfile] = ('r', None, 'replaced by standin')
                actions[standin] = ('g', sargs, 'replaces standin')
            else: # keep local normal file
                actions[lfile] = ('k', None, 'replaces standin')
                if branchmerge:
                    actions[standin] = ('k', None, 'replaced by non-standin')
                else:
                    actions[standin] = ('r', None, 'replaced by non-standin')
        elif lm in ('g', 'dc') and sm != 'r':
            if lm == 'dc':
                f1, f2, fa, move, anc = largs
                largs = (p2[f2].flags(),)
            # Case 2: largefile in the working copy, normal file in
            # the second parent
            usermsg = _('remote turned local largefile %s into a normal file\n'
                        'keep (l)argefile or use (n)ormal file?'
                        '$$ &Largefile $$ &Normal file') % lfile
            if repo.ui.promptchoice(usermsg, 0) == 0: # keep local largefile
                if branchmerge:
                    # largefile can be restored from standin safely
                    actions[lfile] = ('k', None, 'replaced by standin')
                    actions[standin] = ('k', None, 'replaces standin')
                else:
                    # "lfile" should be marked as "removed" without
                    # removal of itself
                    actions[lfile] = ('lfmr', None,
                                      'forget non-standin largefile')

                    # linear-merge should treat this largefile as 're-added'
                    actions[standin] = ('a', None, 'keep standin')
            else: # pick remote normal file
                actions[lfile] = ('g', largs, 'replaces standin')
                actions[standin] = ('r', None, 'replaced by non-standin')

    return actions, diverge, renamedelete

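# For reference, the shape of 'actions' as handled above: calculateupdates
# returns a mapping from filename to an (action, args, message) tuple, e.g.
#
#   actions['.hglf/big.dat'] = ('g', sargs, 'replaces standin')
#   actions['big.dat'] = ('r', None, 'replaced by standin')
#
# In the code above 'g' gets a file from p2, 'r' removes it, 'k' keeps the
# local version, 'a' marks it re-added, 'dc' is a prompt case whose args are
# converted into plain 'g' args, and 'lfmr' is the largefiles-specific
# "forget without deleting" action recorded by mergerecordupdates() below.
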
def mergerecordupdates(orig, repo, actions, branchmerge):
    if 'lfmr' in actions:
        lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
        for lfile, args, msg in actions['lfmr']:
            # this should be executed before 'orig', to execute 'remove'
            # before all other actions
            repo.dirstate.remove(lfile)
            # make sure lfile doesn't get synclfdirstate'd as normal
            lfdirstate.add(lfile)
        lfdirstate.write()

    return orig(repo, actions, branchmerge)


# Override filemerge to prompt the user about how they wish to merge
# largefiles. This will handle identical edits without prompting the user.
def overridefilemerge(origfn, premerge, repo, mynode, orig, fcd, fco, fca,
                      labels=None):
    if not lfutil.isstandin(orig) or fcd.isabsent() or fco.isabsent():
        return origfn(premerge, repo, mynode, orig, fcd, fco, fca,
                      labels=labels)

    ahash = fca.data().strip().lower()
    dhash = fcd.data().strip().lower()
    ohash = fco.data().strip().lower()
    if (ohash != ahash and
        ohash != dhash and
        (dhash == ahash or
         repo.ui.promptchoice(
             _('largefile %s has a merge conflict\nancestor was %s\n'
               'keep (l)ocal %s or\ntake (o)ther %s?'
               '$$ &Local $$ &Other') %
             (lfutil.splitstandin(orig), ahash, dhash, ohash),
             0) == 1)):
        repo.wwrite(fcd.path(), fco.data(), fco.flags())
    return True, 0, False

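# A worked example of the comparison above (derived directly from the code):
# a standin's content is the hash identifying the largefile revision, so with
# ancestor hash A, local hash D and other hash O:
#
#   O == A or O == D              -> nothing to do, keep the local largefile
#   O != A, O != D, and D == A    -> only the other side changed; take it
#                                    without prompting
#   otherwise                     -> both sides changed differently; prompt
#                                    the user to keep (l)ocal or take (o)ther
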
def copiespathcopies(orig, ctx1, ctx2, match=None):
    copies = orig(ctx1, ctx2, match=match)
    updated = {}

    for k, v in copies.iteritems():
        updated[lfutil.splitstandin(k) or k] = lfutil.splitstandin(v) or v

    return updated

# Copy first changes the matchers to match standins instead of
# largefiles. Then it overrides util.copyfile; in that override it
# checks whether the destination largefile already exists. It also keeps
# a list of copied files so that the largefiles can be copied and the
# dirstate updated.
def overridecopy(orig, ui, repo, pats, opts, rename=False):
    # doesn't remove largefile on rename
    if len(pats) < 2:
        # this isn't legal, let the original function deal with it
        return orig(ui, repo, pats, opts, rename)

    # This could copy both lfiles and normal files in one command,
    # but we don't want to do that. First replace their matcher to
    # only match normal files and run it, then replace it to just
    # match largefiles and run it again.
    nonormalfiles = False
    nolfiles = False
    installnormalfilesmatchfn(repo[None].manifest())
    try:
        result = orig(ui, repo, pats, opts, rename)
    except error.Abort as e:
        if str(e) != _('no files to copy'):
            raise e
        else:
            nonormalfiles = True
            result = 0
    finally:
        restorematchfn()

    # The first rename can cause our current working directory to be removed.
    # In that case there is nothing left to copy/rename so just quit.
    try:
        repo.getcwd()
    except OSError:
        return result

    def makestandin(relpath):
        path = pathutil.canonpath(repo.root, repo.getcwd(), relpath)
        return os.path.join(repo.wjoin(lfutil.standin(path)))

    fullpats = scmutil.expandpats(pats)
    dest = fullpats[-1]

    if os.path.isdir(dest):
        if not os.path.isdir(makestandin(dest)):
            os.makedirs(makestandin(dest))

    try:
        # When we call orig below it creates the standins but we don't add
        # them to the dir state until later so lock during that time.
        wlock = repo.wlock()

        manifest = repo[None].manifest()
        def overridematch(ctx, pats=(), opts=None, globbed=False,
                          default='relpath', badfn=None):
            if opts is None:
                opts = {}
            newpats = []
            # The patterns were previously mangled to add the standin
            # directory; we need to remove that now
            for pat in pats:
                if match_.patkind(pat) is None and lfutil.shortname in pat:
                    newpats.append(pat.replace(lfutil.shortname, ''))
                else:
                    newpats.append(pat)
            match = oldmatch(ctx, newpats, opts, globbed, default, badfn=badfn)
            m = copy.copy(match)
            lfile = lambda f: lfutil.standin(f) in manifest
            m._files = [lfutil.standin(f) for f in m._files if lfile(f)]
            m._fileroots = set(m._files)
            origmatchfn = m.matchfn
            m.matchfn = lambda f: (lfutil.isstandin(f) and
                                   (f in manifest) and
                                   origmatchfn(lfutil.splitstandin(f)) or
                                   None)
            return m
        oldmatch = installmatchfn(overridematch)
        listpats = []
        for pat in pats:
            if match_.patkind(pat) is not None:
                listpats.append(pat)
            else:
                listpats.append(makestandin(pat))

        try:
            origcopyfile = util.copyfile
            copiedfiles = []
            def overridecopyfile(src, dest):
                if (lfutil.shortname in src and
                    dest.startswith(repo.wjoin(lfutil.shortname))):
                    destlfile = dest.replace(lfutil.shortname, '')
                    if not opts['force'] and os.path.exists(destlfile):
                        raise IOError('',
                            _('destination largefile already exists'))
                copiedfiles.append((src, dest))
                origcopyfile(src, dest)

            util.copyfile = overridecopyfile
            result += orig(ui, repo, listpats, opts, rename)
        finally:
            util.copyfile = origcopyfile

        lfdirstate = lfutil.openlfdirstate(ui, repo)
        for (src, dest) in copiedfiles:
            if (lfutil.shortname in src and
                dest.startswith(repo.wjoin(lfutil.shortname))):
                srclfile = src.replace(repo.wjoin(lfutil.standin('')), '')
                destlfile = dest.replace(repo.wjoin(lfutil.standin('')), '')
                destlfiledir = os.path.dirname(repo.wjoin(destlfile)) or '.'
                if not os.path.isdir(destlfiledir):
                    os.makedirs(destlfiledir)
                if rename:
                    os.rename(repo.wjoin(srclfile), repo.wjoin(destlfile))

                    # The file is gone, but this deletes any empty parent
                    # directories as a side-effect.
                    util.unlinkpath(repo.wjoin(srclfile), True)
                    lfdirstate.remove(srclfile)
                else:
                    util.copyfile(repo.wjoin(srclfile),
                                  repo.wjoin(destlfile))

                lfdirstate.add(destlfile)
        lfdirstate.write()
    except error.Abort as e:
        if str(e) != _('no files to copy'):
            raise e
        else:
            nolfiles = True
    finally:
        restorematchfn()
        wlock.release()

    if nolfiles and nonormalfiles:
        raise error.Abort(_('no files to copy'))

    return result

# When the user calls revert, we have to be careful not to revert any
# changes to other largefiles accidentally. This means we have to keep
# track of the largefiles that are being reverted so we only pull down
# the necessary largefiles.
#
# Standins are only updated (to match the hash of largefiles) before
# commits. Update the standins then run the original revert, changing
# the matcher to hit standins instead of largefiles. Based on the
# resulting standins update the largefiles.
def overriderevert(orig, ui, repo, ctx, parents, *pats, **opts):
    # Because we put the standins in a bad state (by updating them)
    # and then return them to a correct state we need to lock to
    # prevent others from changing them in their incorrect state.
    wlock = repo.wlock()
    try:
        lfdirstate = lfutil.openlfdirstate(ui, repo)
        s = lfutil.lfdirstatestatus(lfdirstate, repo)
        lfdirstate.write()
        for lfile in s.modified:
            lfutil.updatestandin(repo, lfutil.standin(lfile))
        for lfile in s.deleted:
            if (os.path.exists(repo.wjoin(lfutil.standin(lfile)))):
                os.unlink(repo.wjoin(lfutil.standin(lfile)))

        oldstandins = lfutil.getstandinsstate(repo)

        def overridematch(mctx, pats=(), opts=None, globbed=False,
                          default='relpath', badfn=None):
            if opts is None:
                opts = {}
            match = oldmatch(mctx, pats, opts, globbed, default, badfn=badfn)
            m = copy.copy(match)

            # revert supports recursing into subrepos, and though largefiles
            # currently doesn't work correctly in that case, this match is
            # called, so the lfdirstate above may not be the correct one for
            # this invocation of match.
            lfdirstate = lfutil.openlfdirstate(mctx.repo().ui, mctx.repo(),
                                               False)

            def tostandin(f):
                standin = lfutil.standin(f)
                if standin in ctx or standin in mctx:
                    return standin
                elif standin in repo[None] or lfdirstate[f] == 'r':
                    return None
                return f
            m._files = [tostandin(f) for f in m._files]
            m._files = [f for f in m._files if f is not None]
            m._fileroots = set(m._files)
            origmatchfn = m.matchfn
            def matchfn(f):
                if lfutil.isstandin(f):
                    return (origmatchfn(lfutil.splitstandin(f)) and
                            (f in ctx or f in mctx))
                return origmatchfn(f)
            m.matchfn = matchfn
            return m
        oldmatch = installmatchfn(overridematch)
        try:
            orig(ui, repo, ctx, parents, *pats, **opts)
        finally:
            restorematchfn()

        newstandins = lfutil.getstandinsstate(repo)
        filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
        # lfdirstate should be 'normallookup'-ed for updated files,
        # because reverting doesn't touch the dirstate for 'normal' files
        # when a target revision is explicitly specified: in that case,
        # an 'n' state and a valid timestamp in the dirstate don't ensure
        # that the target (standin) file is clean.
        lfcommands.updatelfiles(ui, repo, filelist, printmessage=False,
                                normallookup=True)

    finally:
        wlock.release()

# after pulling changesets, we need to take some extra care to get
# largefiles updated remotely
def overridepull(orig, ui, repo, source=None, **opts):
    revsprepull = len(repo)
    if not source:
        source = 'default'
    repo.lfpullsource = source
    result = orig(ui, repo, source, **opts)
    revspostpull = len(repo)
    lfrevs = opts.get('lfrev', [])
    if opts.get('all_largefiles'):
        lfrevs.append('pulled()')
    if lfrevs and revspostpull > revsprepull:
        numcached = 0
        repo.firstpulled = revsprepull # for pulled() revset expression
        try:
            for rev in scmutil.revrange(repo, lfrevs):
                ui.note(_('pulling largefiles for revision %s\n') % rev)
                (cached, missing) = lfcommands.cachelfiles(ui, repo, rev)
                numcached += len(cached)
        finally:
            del repo.firstpulled
        ui.status(_("%d largefiles cached\n") % numcached)
    return result

def pulledrevsetsymbol(repo, subset, x):
    """``pulled()``
    Changesets that have just been pulled.

    Only available with largefiles from pull --lfrev expressions.

    .. container:: verbose

      Some examples:

      - pull largefiles for all new changesets::

          hg pull --lfrev "pulled()"

      - pull largefiles for all new branch heads::

          hg pull --lfrev "head(pulled()) and not closed()"

    """

    try:
        firstpulled = repo.firstpulled
    except AttributeError:
        raise error.Abort(_("pulled() only available in --lfrev"))
    return revset.baseset([r for r in subset if r >= firstpulled])

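# Sketch (assumption: the real registration happens in the extension's setup
# code, not in this file): 'pulled()' becomes usable in --lfrev expressions
# by adding this function to Mercurial's revset symbol table, e.g.
#
#   from mercurial import revset
#   revset.symbols['pulled'] = pulledrevsetsymbol
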
def overrideclone(orig, ui, source, dest=None, **opts):
    d = dest
    if d is None:
        d = hg.defaultdest(source)
    if opts.get('all_largefiles') and not hg.islocal(d):
        raise error.Abort(_(
            '--all-largefiles is incompatible with non-local destination %s') %
            d)

    return orig(ui, source, dest, **opts)

def hgclone(orig, ui, opts, *args, **kwargs):
    result = orig(ui, opts, *args, **kwargs)

    if result is not None:
        sourcerepo, destrepo = result
        repo = destrepo.local()

        # When cloning to a remote repo (like through SSH), no repo is available
        # from the peer. Therefore the largefiles can't be downloaded and the
        # hgrc can't be updated.
        if not repo:
            return result

        # If largefiles is required for this repo, permanently enable it locally
        if 'largefiles' in repo.requirements:
            fp = repo.vfs('hgrc', 'a', text=True)
            try:
                fp.write('\n[extensions]\nlargefiles=\n')
            finally:
                fp.close()

        # Caching is implicitly limited to the 'rev' option, since the
        # destination repo was truncated at that point. The user may expect a
        # download count with this option, so attempt the download whether or
        # not this is a largefiles repo.
        if opts.get('all_largefiles'):
            success, missing = lfcommands.downloadlfiles(ui, repo, None)

            if missing != 0:
                return None

    return result

def overriderebase(orig, ui, repo, **opts):
    if not util.safehasattr(repo, '_largefilesenabled'):
        return orig(ui, repo, **opts)

    resuming = opts.get('continue')
    repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
    repo._lfstatuswriters.append(lambda *msg, **opts: None)
    try:
        return orig(ui, repo, **opts)
    finally:
        repo._lfstatuswriters.pop()
        repo._lfcommithooks.pop()

def overridearchivecmd(orig, ui, repo, dest, **opts):
    repo.unfiltered().lfstatus = True

    try:
        return orig(ui, repo.unfiltered(), dest, **opts)
    finally:
        repo.unfiltered().lfstatus = False

def hgwebarchive(orig, web, req, tmpl):
    web.repo.lfstatus = True

    try:
        return orig(web, req, tmpl)
    finally:
        web.repo.lfstatus = False

def overridearchive(orig, repo, dest, node, kind, decode=True, matchfn=None,
                    prefix='', mtime=None, subrepos=None):
    # For some reason setting repo.lfstatus in hgwebarchive only changes the
    # unfiltered repo's attr, so check that as well.
    if not repo.lfstatus and not repo.unfiltered().lfstatus:
        return orig(repo, dest, node, kind, decode, matchfn, prefix, mtime,
                    subrepos)

    # No need to lock because we are only reading history and
    # largefile caches, neither of which are modified.
    if node is not None:
        lfcommands.cachelfiles(repo.ui, repo, node)

    if kind not in archival.archivers:
        raise error.Abort(_("unknown archive type '%s'") % kind)

    ctx = repo[node]

    if kind == 'files':
        if prefix:
            raise error.Abort(
                _('cannot give prefix when archiving to files'))
    else:
        prefix = archival.tidyprefix(dest, kind, prefix)

    def write(name, mode, islink, getdata):
        if matchfn and not matchfn(name):
            return
        data = getdata()
        if decode:
            data = repo.wwritedata(name, data)
        archiver.addfile(prefix + name, mode, islink, data)

    archiver = archival.archivers[kind](dest, mtime or ctx.date()[0])

    if repo.ui.configbool("ui", "archivemeta", True):
        write('.hg_archival.txt', 0o644, False,
              lambda: archival.buildmetadata(ctx))

    for f in ctx:
        ff = ctx.flags(f)
        getdata = ctx[f].data
        if lfutil.isstandin(f):
            if node is not None:
                path = lfutil.findfile(repo, getdata().strip())

                if path is None:
                    raise error.Abort(
                        _('largefile %s not found in repo store or system cache')
                        % lfutil.splitstandin(f))
            else:
                path = lfutil.splitstandin(f)

            f = lfutil.splitstandin(f)

            def getdatafn():
                fd = None
                try:
                    fd = open(path, 'rb')
                    return fd.read()
                finally:
                    if fd:
                        fd.close()

            getdata = getdatafn
        write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)

    if subrepos:
        for subpath in sorted(ctx.substate):
            sub = ctx.workingsub(subpath)
            submatch = match_.narrowmatcher(subpath, matchfn)
            sub._repo.lfstatus = True
            sub.archive(archiver, prefix, submatch)

    archiver.done()

def hgsubrepoarchive(orig, repo, archiver, prefix, match=None):
    if not repo._repo.lfstatus:
        return orig(repo, archiver, prefix, match)

    repo._get(repo._state + ('hg',))
    rev = repo._state[1]
    ctx = repo._repo[rev]

    if ctx.node() is not None:
        lfcommands.cachelfiles(repo.ui, repo._repo, ctx.node())

    def write(name, mode, islink, getdata):
        # At this point, the standin has been replaced with the largefile name,
        # so the normal matcher works here without the lfutil variants.
        if match and not match(f):
            return
        data = getdata()

        archiver.addfile(prefix + repo._path + '/' + name, mode, islink, data)

    for f in ctx:
        ff = ctx.flags(f)
        getdata = ctx[f].data
        if lfutil.isstandin(f):
            if ctx.node() is not None:
                path = lfutil.findfile(repo._repo, getdata().strip())

                if path is None:
                    raise error.Abort(
                        _('largefile %s not found in repo store or system cache')
                        % lfutil.splitstandin(f))
            else:
                path = lfutil.splitstandin(f)

            f = lfutil.splitstandin(f)

            def getdatafn():
                fd = None
                try:
                    fd = open(os.path.join(prefix, path), 'rb')
                    return fd.read()
                finally:
                    if fd:
                        fd.close()

            getdata = getdatafn

        write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)

    for subpath in sorted(ctx.substate):
        sub = ctx.workingsub(subpath)
        submatch = match_.narrowmatcher(subpath, match)
        sub._repo.lfstatus = True
        sub.archive(archiver, prefix + repo._path + '/', submatch)

# If a largefile is modified, the change is not reflected in its
# standin until a commit. cmdutil.bailifchanged() raises an exception
# if the repo has uncommitted changes. Wrap it to also check if
# largefiles were changed. This is used by bisect, backout and fetch.
def overridebailifchanged(orig, repo, *args, **kwargs):
    orig(repo, *args, **kwargs)
    repo.lfstatus = True
    s = repo.status()
    repo.lfstatus = False
    if s.modified or s.added or s.removed or s.deleted:
        raise error.Abort(_('uncommitted changes'))

def cmdutilforget(orig, ui, repo, match, prefix, explicitonly):
    normalmatcher = composenormalfilematcher(match, repo[None].manifest())
    bad, forgot = orig(ui, repo, normalmatcher, prefix, explicitonly)
    m = composelargefilematcher(match, repo[None].manifest())

    try:
        repo.lfstatus = True
        s = repo.status(match=m, clean=True)
    finally:
        repo.lfstatus = False
    forget = sorted(s.modified + s.added + s.deleted + s.clean)
    forget = [f for f in forget if lfutil.standin(f) in repo[None].manifest()]

    for f in forget:
        if lfutil.standin(f) not in repo.dirstate and not \
                repo.wvfs.isdir(lfutil.standin(f)):
            ui.warn(_('not removing %s: file is already untracked\n')
                    % m.rel(f))
            bad.append(f)

    for f in forget:
        if ui.verbose or not m.exact(f):
            ui.status(_('removing %s\n') % m.rel(f))

    # Need to lock because standin files are deleted then removed from the
    # repository and we could race in-between.
    wlock = repo.wlock()
    try:
        lfdirstate = lfutil.openlfdirstate(ui, repo)
        for f in forget:
            if lfdirstate[f] == 'a':
                lfdirstate.drop(f)
            else:
                lfdirstate.remove(f)
        lfdirstate.write()
        standins = [lfutil.standin(f) for f in forget]
        for f in standins:
            util.unlinkpath(repo.wjoin(f), ignoremissing=True)
        rejected = repo[None].forget(standins)
    finally:
        wlock.release()

    bad.extend(f for f in rejected if f in m.files())
    forgot.extend(f for f in forget if f not in rejected)
    return bad, forgot

1102 def _getoutgoings(repo, other, missing, addfunc):
1102 def _getoutgoings(repo, other, missing, addfunc):
1103 """get pairs of filename and largefile hash in outgoing revisions
1103 """get pairs of filename and largefile hash in outgoing revisions
1104 in 'missing'.
1104 in 'missing'.
1105
1105
1106 largefiles already existing on 'other' repository are ignored.
1106 largefiles already existing on 'other' repository are ignored.
1107
1107
1108 'addfunc' is invoked with each unique pairs of filename and
1108 'addfunc' is invoked with each unique pairs of filename and
1109 largefile hash value.
1109 largefile hash value.
1110 """
1110 """
1111 knowns = set()
1111 knowns = set()
1112 lfhashes = set()
1112 lfhashes = set()
1113 def dedup(fn, lfhash):
1113 def dedup(fn, lfhash):
1114 k = (fn, lfhash)
1114 k = (fn, lfhash)
1115 if k not in knowns:
1115 if k not in knowns:
1116 knowns.add(k)
1116 knowns.add(k)
1117 lfhashes.add(lfhash)
1117 lfhashes.add(lfhash)
1118 lfutil.getlfilestoupload(repo, missing, dedup)
1118 lfutil.getlfilestoupload(repo, missing, dedup)
1119 if lfhashes:
1119 if lfhashes:
1120 lfexists = basestore._openstore(repo, other).exists(lfhashes)
1120 lfexists = basestore._openstore(repo, other).exists(lfhashes)
1121 for fn, lfhash in knowns:
1121 for fn, lfhash in knowns:
1122 if not lfexists[lfhash]: # lfhash doesn't exist on "other"
1122 if not lfexists[lfhash]: # lfhash doesn't exist on "other"
1123 addfunc(fn, lfhash)
1123 addfunc(fn, lfhash)
1124
1124
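The docstring above describes a callback contract: _getoutgoings walks the outgoing changesets, deduplicates (filename, hash) pairs, drops anything the remote store already has, and hands the rest to 'addfunc'. A minimal sketch of a caller, assuming 'repo', 'other' and 'missing' were obtained from the usual outgoing computation (the helper name is invented for illustration):

def collectuploads(repo, other, missing):
    # Accumulate the (largefile name, hash) pairs that still need uploading;
    # _getoutgoings only invokes addfunc for hashes absent from 'other'.
    pending = []
    def addfunc(fn, lfhash):
        pending.append((fn, lfhash))
    _getoutgoings(repo, other, missing, addfunc)
    return pending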
1125 def outgoinghook(ui, repo, other, opts, missing):
1125 def outgoinghook(ui, repo, other, opts, missing):
1126 if opts.pop('large', None):
1126 if opts.pop('large', None):
1127 lfhashes = set()
1127 lfhashes = set()
1128 if ui.debugflag:
1128 if ui.debugflag:
1129 toupload = {}
1129 toupload = {}
1130 def addfunc(fn, lfhash):
1130 def addfunc(fn, lfhash):
1131 if fn not in toupload:
1131 if fn not in toupload:
1132 toupload[fn] = []
1132 toupload[fn] = []
1133 toupload[fn].append(lfhash)
1133 toupload[fn].append(lfhash)
1134 lfhashes.add(lfhash)
1134 lfhashes.add(lfhash)
1135 def showhashes(fn):
1135 def showhashes(fn):
1136 for lfhash in sorted(toupload[fn]):
1136 for lfhash in sorted(toupload[fn]):
1137 ui.debug(' %s\n' % (lfhash))
1137 ui.debug(' %s\n' % (lfhash))
1138 else:
1138 else:
1139 toupload = set()
1139 toupload = set()
1140 def addfunc(fn, lfhash):
1140 def addfunc(fn, lfhash):
1141 toupload.add(fn)
1141 toupload.add(fn)
1142 lfhashes.add(lfhash)
1142 lfhashes.add(lfhash)
1143 def showhashes(fn):
1143 def showhashes(fn):
1144 pass
1144 pass
1145 _getoutgoings(repo, other, missing, addfunc)
1145 _getoutgoings(repo, other, missing, addfunc)
1146
1146
1147 if not toupload:
1147 if not toupload:
1148 ui.status(_('largefiles: no files to upload\n'))
1148 ui.status(_('largefiles: no files to upload\n'))
1149 else:
1149 else:
1150 ui.status(_('largefiles to upload (%d entities):\n')
1150 ui.status(_('largefiles to upload (%d entities):\n')
1151 % (len(lfhashes)))
1151 % (len(lfhashes)))
1152 for file in sorted(toupload):
1152 for file in sorted(toupload):
1153 ui.status(lfutil.splitstandin(file) + '\n')
1153 ui.status(lfutil.splitstandin(file) + '\n')
1154 showhashes(file)
1154 showhashes(file)
1155 ui.status('\n')
1155 ui.status('\n')
1156
1156
1157 def summaryremotehook(ui, repo, opts, changes):
1157 def summaryremotehook(ui, repo, opts, changes):
1158 largeopt = opts.get('large', False)
1158 largeopt = opts.get('large', False)
1159 if changes is None:
1159 if changes is None:
1160 if largeopt:
1160 if largeopt:
1161 return (False, True) # only outgoing check is needed
1161 return (False, True) # only outgoing check is needed
1162 else:
1162 else:
1163 return (False, False)
1163 return (False, False)
1164 elif largeopt:
1164 elif largeopt:
1165 url, branch, peer, outgoing = changes[1]
1165 url, branch, peer, outgoing = changes[1]
1166 if peer is None:
1166 if peer is None:
1167 # i18n: column positioning for "hg summary"
1167 # i18n: column positioning for "hg summary"
1168 ui.status(_('largefiles: (no remote repo)\n'))
1168 ui.status(_('largefiles: (no remote repo)\n'))
1169 return
1169 return
1170
1170
1171 toupload = set()
1171 toupload = set()
1172 lfhashes = set()
1172 lfhashes = set()
1173 def addfunc(fn, lfhash):
1173 def addfunc(fn, lfhash):
1174 toupload.add(fn)
1174 toupload.add(fn)
1175 lfhashes.add(lfhash)
1175 lfhashes.add(lfhash)
1176 _getoutgoings(repo, peer, outgoing.missing, addfunc)
1176 _getoutgoings(repo, peer, outgoing.missing, addfunc)
1177
1177
1178 if not toupload:
1178 if not toupload:
1179 # i18n: column positioning for "hg summary"
1179 # i18n: column positioning for "hg summary"
1180 ui.status(_('largefiles: (no files to upload)\n'))
1180 ui.status(_('largefiles: (no files to upload)\n'))
1181 else:
1181 else:
1182 # i18n: column positioning for "hg summary"
1182 # i18n: column positioning for "hg summary"
1183 ui.status(_('largefiles: %d entities for %d files to upload\n')
1183 ui.status(_('largefiles: %d entities for %d files to upload\n')
1184 % (len(lfhashes), len(toupload)))
1184 % (len(lfhashes), len(toupload)))
1185
1185
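Judging only from the shape of summaryremotehook above, such a hook is called twice: first with changes=None to declare what "hg summary --remote" should compute (the return value being a needs-incoming/needs-outgoing pair), and then again with the computed data, where changes[1] unpacks to (url, branch, peer, outgoing). A hedged sketch of a hook following that shape; the registration line is an assumption about how extensions normally wire this up, not something taken from this file:

def mysummaryhook(ui, repo, opts, changes):
    if changes is None:
        # First pass: ask only for outgoing information, like largefiles does.
        return (False, True)
    # Second pass: report against the computed outgoing set, if any remote
    # repository was found.
    url, branch, peer, outgoing = changes[1]
    if peer is not None:
        ui.status('myext: %d outgoing changesets\n' % len(outgoing.missing))

# Hypothetical registration, typically done in an extension's uisetup():
# cmdutil.summaryremotehooks.add('myext', mysummaryhook)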
1186 def overridesummary(orig, ui, repo, *pats, **opts):
1186 def overridesummary(orig, ui, repo, *pats, **opts):
1187 try:
1187 try:
1188 repo.lfstatus = True
1188 repo.lfstatus = True
1189 orig(ui, repo, *pats, **opts)
1189 orig(ui, repo, *pats, **opts)
1190 finally:
1190 finally:
1191 repo.lfstatus = False
1191 repo.lfstatus = False
1192
1192
1193 def scmutiladdremove(orig, repo, matcher, prefix, opts=None, dry_run=None,
1193 def scmutiladdremove(orig, repo, matcher, prefix, opts=None, dry_run=None,
1194 similarity=None):
1194 similarity=None):
1195 if opts is None:
1195 if opts is None:
1196 opts = {}
1196 opts = {}
1197 if not lfutil.islfilesrepo(repo):
1197 if not lfutil.islfilesrepo(repo):
1198 return orig(repo, matcher, prefix, opts, dry_run, similarity)
1198 return orig(repo, matcher, prefix, opts, dry_run, similarity)
1199 # Get the list of missing largefiles so we can remove them
1199 # Get the list of missing largefiles so we can remove them
1200 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1200 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1201 unsure, s = lfdirstate.status(match_.always(repo.root, repo.getcwd()), [],
1201 unsure, s = lfdirstate.status(match_.always(repo.root, repo.getcwd()), [],
1202 False, False, False)
1202 False, False, False)
1203
1203
1204 # Call into the normal remove code, but we want the original addremove to
1204 # Call into the normal remove code, but we want the original addremove to
1205 # handle removing the standin. Monkey patching here makes sure
1205 # handle removing the standin. Monkey patching here makes sure
1206 # we don't remove the standin in the largefiles code, preventing a very
1206 # we don't remove the standin in the largefiles code, preventing a very
1207 # confused state later.
1207 # confused state later.
1208 if s.deleted:
1208 if s.deleted:
1209 m = copy.copy(matcher)
1209 m = copy.copy(matcher)
1210
1210
1211 # The m._files and m._map attributes are not changed to the deleted list
1211 # The m._files and m._map attributes are not changed to the deleted list
1212 # because that affects the m.exact() test, which in turn governs whether
1212 # because that affects the m.exact() test, which in turn governs whether
1213 # or not the file name is printed, and how. Simply limit the original
1213 # or not the file name is printed, and how. Simply limit the original
1214 # matches to those in the deleted status list.
1214 # matches to those in the deleted status list.
1215 matchfn = m.matchfn
1215 matchfn = m.matchfn
1216 m.matchfn = lambda f: f in s.deleted and matchfn(f)
1216 m.matchfn = lambda f: f in s.deleted and matchfn(f)
1217
1217
1218 removelargefiles(repo.ui, repo, True, m, **opts)
1218 removelargefiles(repo.ui, repo, True, m, **opts)
1219 # Call into the normal add code, and any files that *should* be added as
1219 # Call into the normal add code, and any files that *should* be added as
1220 # largefiles will be
1220 # largefiles will be
1221 added, bad = addlargefiles(repo.ui, repo, True, matcher, **opts)
1221 added, bad = addlargefiles(repo.ui, repo, True, matcher, **opts)
1222 # Now that we've handled largefiles, hand off to the original addremove
1222 # Now that we've handled largefiles, hand off to the original addremove
1223 # function to take care of the rest. Make sure it doesn't do anything with
1223 # function to take care of the rest. Make sure it doesn't do anything with
1224 # largefiles by passing a matcher that will ignore them.
1224 # largefiles by passing a matcher that will ignore them.
1225 matcher = composenormalfilematcher(matcher, repo[None].manifest(), added)
1225 matcher = composenormalfilematcher(matcher, repo[None].manifest(), added)
1226 return orig(repo, matcher, prefix, opts, dry_run, similarity)
1226 return orig(repo, matcher, prefix, opts, dry_run, similarity)
1227
1227
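The comment inside scmutiladdremove explains the trick of narrowing a matcher by wrapping only its matchfn, so that m.exact() (and therefore how file names are printed) still reflects what the user literally named. A standalone sketch of that pattern; the helper name and the 'allowed' set are illustrative, not part of largefiles:

import copy

def restrictmatcher(matcher, allowed):
    # Return a copy of 'matcher' that only matches paths in 'allowed',
    # leaving the original file list untouched so exact-match reporting
    # keeps working.
    m = copy.copy(matcher)
    origmatchfn = m.matchfn
    m.matchfn = lambda f: f in allowed and origmatchfn(f)
    return m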
1228 # Calling purge with --all will cause the largefiles to be deleted.
1228 # Calling purge with --all will cause the largefiles to be deleted.
1229 # Override repo.status to prevent this from happening.
1229 # Override repo.status to prevent this from happening.
1230 def overridepurge(orig, ui, repo, *dirs, **opts):
1230 def overridepurge(orig, ui, repo, *dirs, **opts):
1231 # XXX Monkey patching a repoview will not work. The assigned attribute will
1231 # XXX Monkey patching a repoview will not work. The assigned attribute will
1232 # be set on the unfiltered repo, but we will only look up attributes in the
1232 # be set on the unfiltered repo, but we will only look up attributes in the
1233 # unfiltered repo if the lookup in the repoview object itself fails. As the
1233 # unfiltered repo if the lookup in the repoview object itself fails. As the
1234 # monkey patched method exists on the repoview class, the lookup will not
1234 # monkey patched method exists on the repoview class, the lookup will not
1235 # fail. As a result, the original version will shadow the monkey patched
1235 # fail. As a result, the original version will shadow the monkey patched
1236 # one, defeating the monkey patch.
1236 # one, defeating the monkey patch.
1237 #
1237 #
1238 # As a workaround, we use an unfiltered repo here. We should do something
1238 # As a workaround, we use an unfiltered repo here. We should do something
1239 # cleaner instead.
1239 # cleaner instead.
1240 repo = repo.unfiltered()
1240 repo = repo.unfiltered()
1241 oldstatus = repo.status
1241 oldstatus = repo.status
1242 def overridestatus(node1='.', node2=None, match=None, ignored=False,
1242 def overridestatus(node1='.', node2=None, match=None, ignored=False,
1243 clean=False, unknown=False, listsubrepos=False):
1243 clean=False, unknown=False, listsubrepos=False):
1244 r = oldstatus(node1, node2, match, ignored, clean, unknown,
1244 r = oldstatus(node1, node2, match, ignored, clean, unknown,
1245 listsubrepos)
1245 listsubrepos)
1246 lfdirstate = lfutil.openlfdirstate(ui, repo)
1246 lfdirstate = lfutil.openlfdirstate(ui, repo)
1247 unknown = [f for f in r.unknown if lfdirstate[f] == '?']
1247 unknown = [f for f in r.unknown if lfdirstate[f] == '?']
1248 ignored = [f for f in r.ignored if lfdirstate[f] == '?']
1248 ignored = [f for f in r.ignored if lfdirstate[f] == '?']
1249 return scmutil.status(r.modified, r.added, r.removed, r.deleted,
1249 return scmutil.status(r.modified, r.added, r.removed, r.deleted,
1250 unknown, ignored, r.clean)
1250 unknown, ignored, r.clean)
1251 repo.status = overridestatus
1251 repo.status = overridestatus
1252 orig(ui, repo, *dirs, **opts)
1252 orig(ui, repo, *dirs, **opts)
1253 repo.status = oldstatus
1253 repo.status = oldstatus
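overridepurge temporarily swaps out repo.status and puts it back afterwards, and the XXX comment explains why the swap must happen on the unfiltered repository: an attribute assigned to a repoview is shadowed by the class-level method, so the override would never be seen. A condensed sketch of that save/override/restore pattern, assuming the same module-level lfutil and scmutil imports used throughout this file; the try/finally is an addition here, not part of the original function:

def withlargefilesquietstatus(ui, repo, body):
    # Hide largefiles-only dirstate entries from status while 'body' runs.
    repo = repo.unfiltered()       # assign on the real repo, not a repoview
    oldstatus = repo.status
    def overridestatus(*args, **kwargs):
        r = oldstatus(*args, **kwargs)
        lfdirstate = lfutil.openlfdirstate(ui, repo)
        unknown = [f for f in r.unknown if lfdirstate[f] == '?']
        ignored = [f for f in r.ignored if lfdirstate[f] == '?']
        return scmutil.status(r.modified, r.added, r.removed, r.deleted,
                              unknown, ignored, r.clean)
    repo.status = overridestatus
    try:
        body()
    finally:
        repo.status = oldstatus    # restore even if 'body' raises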
1254 def overriderollback(orig, ui, repo, **opts):
1254 def overriderollback(orig, ui, repo, **opts):
1255 wlock = repo.wlock()
1255 wlock = repo.wlock()
1256 try:
1256 try:
1257 before = repo.dirstate.parents()
1257 before = repo.dirstate.parents()
1258 orphans = set(f for f in repo.dirstate
1258 orphans = set(f for f in repo.dirstate
1259 if lfutil.isstandin(f) and repo.dirstate[f] != 'r')
1259 if lfutil.isstandin(f) and repo.dirstate[f] != 'r')
1260 result = orig(ui, repo, **opts)
1260 result = orig(ui, repo, **opts)
1261 after = repo.dirstate.parents()
1261 after = repo.dirstate.parents()
1262 if before == after:
1262 if before == after:
1263 return result # no need to restore standins
1263 return result # no need to restore standins
1264
1264
1265 pctx = repo['.']
1265 pctx = repo['.']
1266 for f in repo.dirstate:
1266 for f in repo.dirstate:
1267 if lfutil.isstandin(f):
1267 if lfutil.isstandin(f):
1268 orphans.discard(f)
1268 orphans.discard(f)
1269 if repo.dirstate[f] == 'r':
1269 if repo.dirstate[f] == 'r':
1270 repo.wvfs.unlinkpath(f, ignoremissing=True)
1270 repo.wvfs.unlinkpath(f, ignoremissing=True)
1271 elif f in pctx:
1271 elif f in pctx:
1272 fctx = pctx[f]
1272 fctx = pctx[f]
1273 repo.wwrite(f, fctx.data(), fctx.flags())
1273 repo.wwrite(f, fctx.data(), fctx.flags())
1274 else:
1274 else:
1275 # content of standin is not so important in 'a',
1275 # content of standin is not so important in 'a',
1276 # 'm' or 'n' (coming from the 2nd parent) cases
1276 # 'm' or 'n' (coming from the 2nd parent) cases
1277 lfutil.writestandin(repo, f, '', False)
1277 lfutil.writestandin(repo, f, '', False)
1278 for standin in orphans:
1278 for standin in orphans:
1279 repo.wvfs.unlinkpath(standin, ignoremissing=True)
1279 repo.wvfs.unlinkpath(standin, ignoremissing=True)
1280
1280
1281 lfdirstate = lfutil.openlfdirstate(ui, repo)
1281 lfdirstate = lfutil.openlfdirstate(ui, repo)
1282 orphans = set(lfdirstate)
1282 orphans = set(lfdirstate)
1283 lfiles = lfutil.listlfiles(repo)
1283 lfiles = lfutil.listlfiles(repo)
1284 for file in lfiles:
1284 for file in lfiles:
1285 lfutil.synclfdirstate(repo, lfdirstate, file, True)
1285 lfutil.synclfdirstate(repo, lfdirstate, file, True)
1286 orphans.discard(file)
1286 orphans.discard(file)
1287 for lfile in orphans:
1287 for lfile in orphans:
1288 lfdirstate.drop(lfile)
1288 lfdirstate.drop(lfile)
1289 lfdirstate.write()
1289 lfdirstate.write()
1290 finally:
1290 finally:
1291 wlock.release()
1291 wlock.release()
1292 return result
1292 return result
1293
1293
1294 def overridetransplant(orig, ui, repo, *revs, **opts):
1294 def overridetransplant(orig, ui, repo, *revs, **opts):
1295 resuming = opts.get('continue')
1295 resuming = opts.get('continue')
1296 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
1296 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
1297 repo._lfstatuswriters.append(lambda *msg, **opts: None)
1297 repo._lfstatuswriters.append(lambda *msg, **opts: None)
1298 try:
1298 try:
1299 result = orig(ui, repo, *revs, **opts)
1299 result = orig(ui, repo, *revs, **opts)
1300 finally:
1300 finally:
1301 repo._lfstatuswriters.pop()
1301 repo._lfstatuswriters.pop()
1302 repo._lfcommithooks.pop()
1302 repo._lfcommithooks.pop()
1303 return result
1303 return result
1304
1304
1305 def overridecat(orig, ui, repo, file1, *pats, **opts):
1305 def overridecat(orig, ui, repo, file1, *pats, **opts):
1306 ctx = scmutil.revsingle(repo, opts.get('rev'))
1306 ctx = scmutil.revsingle(repo, opts.get('rev'))
1307 err = 1
1307 err = 1
1308 notbad = set()
1308 notbad = set()
1309 m = scmutil.match(ctx, (file1,) + pats, opts)
1309 m = scmutil.match(ctx, (file1,) + pats, opts)
1310 origmatchfn = m.matchfn
1310 origmatchfn = m.matchfn
1311 def lfmatchfn(f):
1311 def lfmatchfn(f):
1312 if origmatchfn(f):
1312 if origmatchfn(f):
1313 return True
1313 return True
1314 lf = lfutil.splitstandin(f)
1314 lf = lfutil.splitstandin(f)
1315 if lf is None:
1315 if lf is None:
1316 return False
1316 return False
1317 notbad.add(lf)
1317 notbad.add(lf)
1318 return origmatchfn(lf)
1318 return origmatchfn(lf)
1319 m.matchfn = lfmatchfn
1319 m.matchfn = lfmatchfn
1320 origbadfn = m.bad
1320 origbadfn = m.bad
1321 def lfbadfn(f, msg):
1321 def lfbadfn(f, msg):
1322 if not f in notbad:
1322 if not f in notbad:
1323 origbadfn(f, msg)
1323 origbadfn(f, msg)
1324 m.bad = lfbadfn
1324 m.bad = lfbadfn
1325
1325
1326 origvisitdirfn = m.visitdir
1326 origvisitdirfn = m.visitdir
1327 def lfvisitdirfn(dir):
1327 def lfvisitdirfn(dir):
1328 if dir == lfutil.shortname:
1328 if dir == lfutil.shortname:
1329 return True
1329 return True
1330 ret = origvisitdirfn(dir)
1330 ret = origvisitdirfn(dir)
1331 if ret:
1331 if ret:
1332 return ret
1332 return ret
1333 lf = lfutil.splitstandin(dir)
1333 lf = lfutil.splitstandin(dir)
1334 if lf is None:
1334 if lf is None:
1335 return False
1335 return False
1336 return origvisitdirfn(lf)
1336 return origvisitdirfn(lf)
1337 m.visitdir = lfvisitdirfn
1337 m.visitdir = lfvisitdirfn
1338
1338
1339 for f in ctx.walk(m):
1339 for f in ctx.walk(m):
1340 fp = cmdutil.makefileobj(repo, opts.get('output'), ctx.node(),
1340 fp = cmdutil.makefileobj(repo, opts.get('output'), ctx.node(),
1341 pathname=f)
1341 pathname=f)
1342 lf = lfutil.splitstandin(f)
1342 lf = lfutil.splitstandin(f)
1343 if lf is None or origmatchfn(f):
1343 if lf is None or origmatchfn(f):
1344 # duplicating unreachable code from commands.cat
1344 # duplicating unreachable code from commands.cat
1345 data = ctx[f].data()
1345 data = ctx[f].data()
1346 if opts.get('decode'):
1346 if opts.get('decode'):
1347 data = repo.wwritedata(f, data)
1347 data = repo.wwritedata(f, data)
1348 fp.write(data)
1348 fp.write(data)
1349 else:
1349 else:
1350 hash = lfutil.readstandin(repo, lf, ctx.rev())
1350 hash = lfutil.readstandin(repo, lf, ctx.rev())
1351 if not lfutil.inusercache(repo.ui, hash):
1351 if not lfutil.inusercache(repo.ui, hash):
1352 store = basestore._openstore(repo)
1352 store = basestore._openstore(repo)
1353 success, missing = store.get([(lf, hash)])
1353 success, missing = store.get([(lf, hash)])
1354 if len(success) != 1:
1354 if len(success) != 1:
1355 raise error.Abort(
1355 raise error.Abort(
1356 _('largefile %s is not in cache and could not be '
1356 _('largefile %s is not in cache and could not be '
1357 'downloaded') % lf)
1357 'downloaded') % lf)
1358 path = lfutil.usercachepath(repo.ui, hash)
1358 path = lfutil.usercachepath(repo.ui, hash)
1359 fpin = open(path, "rb")
1359 fpin = open(path, "rb")
1360 for chunk in util.filechunkiter(fpin, 128 * 1024):
1360 for chunk in util.filechunkiter(fpin, 128 * 1024):
1361 fp.write(chunk)
1361 fp.write(chunk)
1362 fpin.close()
1362 fpin.close()
1363 fp.close()
1363 fp.close()
1364 err = 0
1364 err = 0
1365 return err
1365 return err
1366
1366
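overridecat above extends the matcher so that asking for a largefile also reaches its standin. For orientation: standins are the small tracked files kept under the '.hglf/' directory (lfutil.shortname), and standin()/splitstandin() convert between the two name spaces. A toy version of that mapping, simplified from the real lfutil helpers:

SHORTNAME = '.hglf'   # mirrors lfutil.shortname

def standin(filename):
    # largefile name -> tracked standin path
    return SHORTNAME + '/' + filename

def splitstandin(filename):
    # standin path -> largefile name, or None if this is not a standin
    if filename.startswith(SHORTNAME + '/'):
        return filename[len(SHORTNAME) + 1:]
    return None

# standin('data/big.bin')            == '.hglf/data/big.bin'
# splitstandin('.hglf/data/big.bin') == 'data/big.bin'
# splitstandin('README')             is None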
1367 def mergeupdate(orig, repo, node, branchmerge, force, partial,
1367 def mergeupdate(orig, repo, node, branchmerge, force,
1368 *args, **kwargs):
1368 *args, **kwargs):
1369 matcher = kwargs.get('matcher', None)
1370 # note if this is a partial update
1371 partial = matcher and not matcher.always()
1369 wlock = repo.wlock()
1372 wlock = repo.wlock()
1370 try:
1373 try:
1371 # branch | | |
1374 # branch | | |
1372 # merge | force | partial | action
1375 # merge | force | partial | action
1373 # -------+-------+---------+--------------
1376 # -------+-------+---------+--------------
1374 # x | x | x | linear-merge
1377 # x | x | x | linear-merge
1375 # o | x | x | branch-merge
1378 # o | x | x | branch-merge
1376 # x | o | x | overwrite (as clean update)
1379 # x | o | x | overwrite (as clean update)
1377 # o | o | x | force-branch-merge (*1)
1380 # o | o | x | force-branch-merge (*1)
1378 # x | x | o | (*)
1381 # x | x | o | (*)
1379 # o | x | o | (*)
1382 # o | x | o | (*)
1380 # x | o | o | overwrite (as revert)
1383 # x | o | o | overwrite (as revert)
1381 # o | o | o | (*)
1384 # o | o | o | (*)
1382 #
1385 #
1383 # (*) don't care
1386 # (*) don't care
1384 # (*1) deprecated, but used internally (e.g: "rebase --collapse")
1387 # (*1) deprecated, but used internally (e.g: "rebase --collapse")
1385
1388
1386 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1389 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1387 unsure, s = lfdirstate.status(match_.always(repo.root,
1390 unsure, s = lfdirstate.status(match_.always(repo.root,
1388 repo.getcwd()),
1391 repo.getcwd()),
1389 [], False, False, False)
1392 [], False, False, False)
1390 pctx = repo['.']
1393 pctx = repo['.']
1391 for lfile in unsure + s.modified:
1394 for lfile in unsure + s.modified:
1392 lfileabs = repo.wvfs.join(lfile)
1395 lfileabs = repo.wvfs.join(lfile)
1393 if not os.path.exists(lfileabs):
1396 if not os.path.exists(lfileabs):
1394 continue
1397 continue
1395 lfhash = lfutil.hashrepofile(repo, lfile)
1398 lfhash = lfutil.hashrepofile(repo, lfile)
1396 standin = lfutil.standin(lfile)
1399 standin = lfutil.standin(lfile)
1397 lfutil.writestandin(repo, standin, lfhash,
1400 lfutil.writestandin(repo, standin, lfhash,
1398 lfutil.getexecutable(lfileabs))
1401 lfutil.getexecutable(lfileabs))
1399 if (standin in pctx and
1402 if (standin in pctx and
1400 lfhash == lfutil.readstandin(repo, lfile, '.')):
1403 lfhash == lfutil.readstandin(repo, lfile, '.')):
1401 lfdirstate.normal(lfile)
1404 lfdirstate.normal(lfile)
1402 for lfile in s.added:
1405 for lfile in s.added:
1403 lfutil.updatestandin(repo, lfutil.standin(lfile))
1406 lfutil.updatestandin(repo, lfutil.standin(lfile))
1404 lfdirstate.write()
1407 lfdirstate.write()
1405
1408
1406 oldstandins = lfutil.getstandinsstate(repo)
1409 oldstandins = lfutil.getstandinsstate(repo)
1407
1410
1408 result = orig(repo, node, branchmerge, force, partial, *args, **kwargs)
1411 result = orig(repo, node, branchmerge, force, *args, **kwargs)
1409
1412
1410 newstandins = lfutil.getstandinsstate(repo)
1413 newstandins = lfutil.getstandinsstate(repo)
1411 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
1414 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
1412 if branchmerge or force or partial:
1415 if branchmerge or force or partial:
1413 filelist.extend(s.deleted + s.removed)
1416 filelist.extend(s.deleted + s.removed)
1414
1417
1415 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1418 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1416 normallookup=partial)
1419 normallookup=partial)
1417
1420
1418 return result
1421 return result
1419 finally:
1422 finally:
1420 wlock.release()
1423 wlock.release()
1421
1424
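The hunk above is the largefiles side of the change named in the commit message: merge.update no longer receives a 'partial' function, so the wrapper now reads an optional matcher from kwargs and treats the update as partial whenever that matcher is narrower than match-all (the branch/force/partial table still describes the resulting behaviour). A sketch of that convention; the matcher construction is shown only as an illustration of how a caller might limit an update to some files:

# Hypothetical caller limiting an overwrite-style update to two files:
# m = scmutil.matchfiles(repo, ['foo.txt', 'data/big.bin'])
# merge.update(repo, node, branchmerge=False, force=True, matcher=m)

def ispartial(matcher):
    # No matcher means "update everything"; so does an always-matcher.
    return bool(matcher) and not matcher.always()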
1422 def scmutilmarktouched(orig, repo, files, *args, **kwargs):
1425 def scmutilmarktouched(orig, repo, files, *args, **kwargs):
1423 result = orig(repo, files, *args, **kwargs)
1426 result = orig(repo, files, *args, **kwargs)
1424
1427
1425 filelist = [lfutil.splitstandin(f) for f in files if lfutil.isstandin(f)]
1428 filelist = [lfutil.splitstandin(f) for f in files if lfutil.isstandin(f)]
1426 if filelist:
1429 if filelist:
1427 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1430 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1428 printmessage=False, normallookup=True)
1431 printmessage=False, normallookup=True)
1429
1432
1430 return result
1433 return result
@@ -1,1249 +1,1249 b''
1 # rebase.py - rebasing feature for mercurial
1 # rebase.py - rebasing feature for mercurial
2 #
2 #
3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''command to move sets of revisions to a different ancestor
8 '''command to move sets of revisions to a different ancestor
9
9
10 This extension lets you rebase changesets in an existing Mercurial
10 This extension lets you rebase changesets in an existing Mercurial
11 repository.
11 repository.
12
12
13 For more information:
13 For more information:
14 https://mercurial-scm.org/wiki/RebaseExtension
14 https://mercurial-scm.org/wiki/RebaseExtension
15 '''
15 '''
16
16
17 from mercurial import hg, util, repair, merge, cmdutil, commands, bookmarks
17 from mercurial import hg, util, repair, merge, cmdutil, commands, bookmarks
18 from mercurial import extensions, patch, scmutil, phases, obsolete, error
18 from mercurial import extensions, patch, scmutil, phases, obsolete, error
19 from mercurial import copies, repoview, revset
19 from mercurial import copies, repoview, revset
20 from mercurial.commands import templateopts
20 from mercurial.commands import templateopts
21 from mercurial.node import nullrev, nullid, hex, short
21 from mercurial.node import nullrev, nullid, hex, short
22 from mercurial.lock import release
22 from mercurial.lock import release
23 from mercurial.i18n import _
23 from mercurial.i18n import _
24 import os, errno
24 import os, errno
25
25
26 # The following constants are used throughout the rebase module. The ordering of
26 # The following constants are used throughout the rebase module. The ordering of
27 # their values must be maintained.
27 # their values must be maintained.
28
28
29 # Indicates that a revision needs to be rebased
29 # Indicates that a revision needs to be rebased
30 revtodo = -1
30 revtodo = -1
31 nullmerge = -2
31 nullmerge = -2
32 revignored = -3
32 revignored = -3
33 # successor in rebase destination
33 # successor in rebase destination
34 revprecursor = -4
34 revprecursor = -4
35 # plain prune (no successor)
35 # plain prune (no successor)
36 revpruned = -5
36 revpruned = -5
37 revskipped = (revignored, revprecursor, revpruned)
37 revskipped = (revignored, revprecursor, revpruned)
38
38
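The comment above says the ordering of these values must be maintained; the reason is that later code separates entries that correspond (or will correspond) to a real changeset from pure bookkeeping entries with a single comparison against nullmerge, instead of testing each sentinel. A small illustration with made-up revision numbers, relying on the constants defined just above:

state = {
    10: 15,            # already rebased: now lives at revision 15
    11: revtodo,       # -1: still to be rebased
    12: nullmerge,     # -2: null merge, nothing to recreate
    13: revprecursor,  # -4: a successor already sits in the destination
}
# Real revisions (>= 0) and revtodo (-1) sort above nullmerge (-2); the
# ignore/precursor/prune codes sort below it, so this one test suffices:
assert sorted(r for r in state if state[r] > nullmerge) == [10, 11]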
39 cmdtable = {}
39 cmdtable = {}
40 command = cmdutil.command(cmdtable)
40 command = cmdutil.command(cmdtable)
41 # Note for extension authors: ONLY specify testedwith = 'internal' for
41 # Note for extension authors: ONLY specify testedwith = 'internal' for
42 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
42 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
43 # be specifying the version(s) of Mercurial they are tested with, or
43 # be specifying the version(s) of Mercurial they are tested with, or
44 # leave the attribute unspecified.
44 # leave the attribute unspecified.
45 testedwith = 'internal'
45 testedwith = 'internal'
46
46
47 def _nothingtorebase():
47 def _nothingtorebase():
48 return 1
48 return 1
49
49
50 def _makeextrafn(copiers):
50 def _makeextrafn(copiers):
51 """make an extrafn out of the given copy-functions.
51 """make an extrafn out of the given copy-functions.
52
52
53 A copy function takes a context and an extra dict, and mutates the
53 A copy function takes a context and an extra dict, and mutates the
54 extra dict as needed based on the given context.
54 extra dict as needed based on the given context.
55 """
55 """
56 def extrafn(ctx, extra):
56 def extrafn(ctx, extra):
57 for c in copiers:
57 for c in copiers:
58 c(ctx, extra)
58 c(ctx, extra)
59 return extrafn
59 return extrafn
60
60
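As the docstring above says, each copier just mutates the extra dict for a given context, and _makeextrafn folds several of them into one callback. A short example: 'rebase_source' is the key rebase itself records for rewritten changesets, while 'myext_note' and the second copier are invented for illustration:

def savesource(ctx, extra):
    # Remember which changeset this new commit was rebased from.
    extra['rebase_source'] = ctx.hex()

def tagmyext(ctx, extra):
    # Hypothetical extra data contributed by another extension.
    extra['myext_note'] = 'rewritten'

extrafn = _makeextrafn([savesource, tagmyext])
# For each rebased changeset, extrafn(ctx, extra) now fills in both keys
# before the replacement commit is created.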
61 def _destrebase(repo):
61 def _destrebase(repo):
62 # Destination defaults to the latest revision in the
62 # Destination defaults to the latest revision in the
63 # current branch
63 # current branch
64 branch = repo[None].branch()
64 branch = repo[None].branch()
65 return repo[branch].rev()
65 return repo[branch].rev()
66
66
67 def _revsetdestrebase(repo, subset, x):
67 def _revsetdestrebase(repo, subset, x):
68 # ``_rebasedefaultdest()``
68 # ``_rebasedefaultdest()``
69
69
70 # default destination for rebase.
70 # default destination for rebase.
71 # # XXX: Currently private because I expect the signature to change.
71 # # XXX: Currently private because I expect the signature to change.
72 # # XXX: - taking rev as arguments,
72 # # XXX: - taking rev as arguments,
73 # # XXX: - bailing out in case of ambiguity vs returning all data.
73 # # XXX: - bailing out in case of ambiguity vs returning all data.
74 # # XXX: - probably merging with the merge destination.
74 # # XXX: - probably merging with the merge destination.
75 # i18n: "_rebasedefaultdest" is a keyword
75 # i18n: "_rebasedefaultdest" is a keyword
76 revset.getargs(x, 0, 0, _("_rebasedefaultdest takes no arguments"))
76 revset.getargs(x, 0, 0, _("_rebasedefaultdest takes no arguments"))
77 return subset & revset.baseset([_destrebase(repo)])
77 return subset & revset.baseset([_destrebase(repo)])
78
78
79 @command('rebase',
79 @command('rebase',
80 [('s', 'source', '',
80 [('s', 'source', '',
81 _('rebase the specified changeset and descendants'), _('REV')),
81 _('rebase the specified changeset and descendants'), _('REV')),
82 ('b', 'base', '',
82 ('b', 'base', '',
83 _('rebase everything from branching point of specified changeset'),
83 _('rebase everything from branching point of specified changeset'),
84 _('REV')),
84 _('REV')),
85 ('r', 'rev', [],
85 ('r', 'rev', [],
86 _('rebase these revisions'),
86 _('rebase these revisions'),
87 _('REV')),
87 _('REV')),
88 ('d', 'dest', '',
88 ('d', 'dest', '',
89 _('rebase onto the specified changeset'), _('REV')),
89 _('rebase onto the specified changeset'), _('REV')),
90 ('', 'collapse', False, _('collapse the rebased changesets')),
90 ('', 'collapse', False, _('collapse the rebased changesets')),
91 ('m', 'message', '',
91 ('m', 'message', '',
92 _('use text as collapse commit message'), _('TEXT')),
92 _('use text as collapse commit message'), _('TEXT')),
93 ('e', 'edit', False, _('invoke editor on commit messages')),
93 ('e', 'edit', False, _('invoke editor on commit messages')),
94 ('l', 'logfile', '',
94 ('l', 'logfile', '',
95 _('read collapse commit message from file'), _('FILE')),
95 _('read collapse commit message from file'), _('FILE')),
96 ('k', 'keep', False, _('keep original changesets')),
96 ('k', 'keep', False, _('keep original changesets')),
97 ('', 'keepbranches', False, _('keep original branch names')),
97 ('', 'keepbranches', False, _('keep original branch names')),
98 ('D', 'detach', False, _('(DEPRECATED)')),
98 ('D', 'detach', False, _('(DEPRECATED)')),
99 ('i', 'interactive', False, _('(DEPRECATED)')),
99 ('i', 'interactive', False, _('(DEPRECATED)')),
100 ('t', 'tool', '', _('specify merge tool')),
100 ('t', 'tool', '', _('specify merge tool')),
101 ('c', 'continue', False, _('continue an interrupted rebase')),
101 ('c', 'continue', False, _('continue an interrupted rebase')),
102 ('a', 'abort', False, _('abort an interrupted rebase'))] +
102 ('a', 'abort', False, _('abort an interrupted rebase'))] +
103 templateopts,
103 templateopts,
104 _('[-s REV | -b REV] [-d REV] [OPTION]'))
104 _('[-s REV | -b REV] [-d REV] [OPTION]'))
105 def rebase(ui, repo, **opts):
105 def rebase(ui, repo, **opts):
106 """move changeset (and descendants) to a different branch
106 """move changeset (and descendants) to a different branch
107
107
108 Rebase uses repeated merging to graft changesets from one part of
108 Rebase uses repeated merging to graft changesets from one part of
109 history (the source) onto another (the destination). This can be
109 history (the source) onto another (the destination). This can be
110 useful for linearizing *local* changes relative to a master
110 useful for linearizing *local* changes relative to a master
111 development tree.
111 development tree.
112
112
113 You should not rebase changesets that have already been shared
113 You should not rebase changesets that have already been shared
114 with others. Doing so will force everybody else to perform the
114 with others. Doing so will force everybody else to perform the
115 same rebase or they will end up with duplicated changesets after
115 same rebase or they will end up with duplicated changesets after
116 pulling in your rebased changesets.
116 pulling in your rebased changesets.
117
117
118 In its default configuration, Mercurial will prevent you from
118 In its default configuration, Mercurial will prevent you from
119 rebasing published changes. See :hg:`help phases` for details.
119 rebasing published changes. See :hg:`help phases` for details.
120
120
121 If you don't specify a destination changeset (``-d/--dest``),
121 If you don't specify a destination changeset (``-d/--dest``),
122 rebase uses the current branch tip as the destination. (The
122 rebase uses the current branch tip as the destination. (The
123 destination changeset is not modified by rebasing, but new
123 destination changeset is not modified by rebasing, but new
124 changesets are added as its descendants.)
124 changesets are added as its descendants.)
125
125
126 You can specify which changesets to rebase in two ways: as a
126 You can specify which changesets to rebase in two ways: as a
127 "source" changeset or as a "base" changeset. Both are shorthand
127 "source" changeset or as a "base" changeset. Both are shorthand
128 for a topologically related set of changesets (the "source
128 for a topologically related set of changesets (the "source
129 branch"). If you specify source (``-s/--source``), rebase will
129 branch"). If you specify source (``-s/--source``), rebase will
130 rebase that changeset and all of its descendants onto dest. If you
130 rebase that changeset and all of its descendants onto dest. If you
131 specify base (``-b/--base``), rebase will select ancestors of base
131 specify base (``-b/--base``), rebase will select ancestors of base
132 back to but not including the common ancestor with dest. Thus,
132 back to but not including the common ancestor with dest. Thus,
133 ``-b`` is less precise but more convenient than ``-s``: you can
133 ``-b`` is less precise but more convenient than ``-s``: you can
134 specify any changeset in the source branch, and rebase will select
134 specify any changeset in the source branch, and rebase will select
135 the whole branch. If you specify neither ``-s`` nor ``-b``, rebase
135 the whole branch. If you specify neither ``-s`` nor ``-b``, rebase
136 uses the parent of the working directory as the base.
136 uses the parent of the working directory as the base.
137
137
138 For advanced usage, a third way is available through the ``--rev``
138 For advanced usage, a third way is available through the ``--rev``
139 option. It allows you to specify an arbitrary set of changesets to
139 option. It allows you to specify an arbitrary set of changesets to
140 rebase. Descendants of revs you specify with this option are not
140 rebase. Descendants of revs you specify with this option are not
141 automatically included in the rebase.
141 automatically included in the rebase.
142
142
143 By default, rebase recreates the changesets in the source branch
143 By default, rebase recreates the changesets in the source branch
144 as descendants of dest and then destroys the originals. Use
144 as descendants of dest and then destroys the originals. Use
145 ``--keep`` to preserve the original source changesets. Some
145 ``--keep`` to preserve the original source changesets. Some
146 changesets in the source branch (e.g. merges from the destination
146 changesets in the source branch (e.g. merges from the destination
147 branch) may be dropped if they no longer contribute any change.
147 branch) may be dropped if they no longer contribute any change.
148
148
149 One result of the rules for selecting the destination changeset
149 One result of the rules for selecting the destination changeset
150 and source branch is that, unlike ``merge``, rebase will do
150 and source branch is that, unlike ``merge``, rebase will do
151 nothing if you are at the branch tip of a named branch
151 nothing if you are at the branch tip of a named branch
152 with two heads. You need to explicitly specify source and/or
152 with two heads. You need to explicitly specify source and/or
153 destination (or ``update`` to the other head, if it's the head of
153 destination (or ``update`` to the other head, if it's the head of
154 the intended source branch).
154 the intended source branch).
155
155
156 If a rebase is interrupted to manually resolve a merge, it can be
156 If a rebase is interrupted to manually resolve a merge, it can be
157 continued with --continue/-c or aborted with --abort/-a.
157 continued with --continue/-c or aborted with --abort/-a.
158
158
159 .. container:: verbose
159 .. container:: verbose
160
160
161 Examples:
161 Examples:
162
162
163 - move "local changes" (current commit back to branching point)
163 - move "local changes" (current commit back to branching point)
164 to the current branch tip after a pull::
164 to the current branch tip after a pull::
165
165
166 hg rebase
166 hg rebase
167
167
168 - move a single changeset to the stable branch::
168 - move a single changeset to the stable branch::
169
169
170 hg rebase -r 5f493448 -d stable
170 hg rebase -r 5f493448 -d stable
171
171
172 - splice a commit and all its descendants onto another part of history::
172 - splice a commit and all its descendants onto another part of history::
173
173
174 hg rebase --source c0c3 --dest 4cf9
174 hg rebase --source c0c3 --dest 4cf9
175
175
176 - rebase everything on a branch marked by a bookmark onto the
176 - rebase everything on a branch marked by a bookmark onto the
177 default branch::
177 default branch::
178
178
179 hg rebase --base myfeature --dest default
179 hg rebase --base myfeature --dest default
180
180
181 - collapse a sequence of changes into a single commit::
181 - collapse a sequence of changes into a single commit::
182
182
183 hg rebase --collapse -r 1520:1525 -d .
183 hg rebase --collapse -r 1520:1525 -d .
184
184
185 - move a named branch while preserving its name::
185 - move a named branch while preserving its name::
186
186
187 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
187 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
188
188
189 Returns 0 on success, 1 if nothing to rebase or there are
189 Returns 0 on success, 1 if nothing to rebase or there are
190 unresolved conflicts.
190 unresolved conflicts.
191
191
192 """
192 """
193 originalwd = target = None
193 originalwd = target = None
194 activebookmark = None
194 activebookmark = None
195 external = nullrev
195 external = nullrev
196 # Mapping between the old revision id and either what is the new rebased
196 # Mapping between the old revision id and either what is the new rebased
197 # revision or what needs to be done with the old revision. The state dict
197 # revision or what needs to be done with the old revision. The state dict
198 # holds most of the rebase progress state.
198 # holds most of the rebase progress state.
199 state = {}
199 state = {}
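# Illustration (not from the original source): after buildstate() below, a
# three-changeset rebase starts out as {10: revtodo, 11: revtodo, 12: revtodo};
# as each revision is processed, revtodo is replaced either by the new
# revision number or by one of the sentinel codes defined at module level.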
200 skipped = set()
200 skipped = set()
201 targetancestors = set()
201 targetancestors = set()
202
202
203
203
204 lock = wlock = None
204 lock = wlock = None
205 try:
205 try:
206 wlock = repo.wlock()
206 wlock = repo.wlock()
207 lock = repo.lock()
207 lock = repo.lock()
208
208
209 # Validate input and define rebasing points
209 # Validate input and define rebasing points
210 destf = opts.get('dest', None)
210 destf = opts.get('dest', None)
211 srcf = opts.get('source', None)
211 srcf = opts.get('source', None)
212 basef = opts.get('base', None)
212 basef = opts.get('base', None)
213 revf = opts.get('rev', [])
213 revf = opts.get('rev', [])
214 contf = opts.get('continue')
214 contf = opts.get('continue')
215 abortf = opts.get('abort')
215 abortf = opts.get('abort')
216 collapsef = opts.get('collapse', False)
216 collapsef = opts.get('collapse', False)
217 collapsemsg = cmdutil.logmessage(ui, opts)
217 collapsemsg = cmdutil.logmessage(ui, opts)
218 date = opts.get('date', None)
218 date = opts.get('date', None)
219 e = opts.get('extrafn') # internal, used by e.g. hgsubversion
219 e = opts.get('extrafn') # internal, used by e.g. hgsubversion
220 extrafns = []
220 extrafns = []
221 if e:
221 if e:
222 extrafns = [e]
222 extrafns = [e]
223 keepf = opts.get('keep', False)
223 keepf = opts.get('keep', False)
224 keepbranchesf = opts.get('keepbranches', False)
224 keepbranchesf = opts.get('keepbranches', False)
225 # keepopen is not meant for use on the command line, but by
225 # keepopen is not meant for use on the command line, but by
226 # other extensions
226 # other extensions
227 keepopen = opts.get('keepopen', False)
227 keepopen = opts.get('keepopen', False)
228
228
229 if opts.get('interactive'):
229 if opts.get('interactive'):
230 try:
230 try:
231 if extensions.find('histedit'):
231 if extensions.find('histedit'):
232 enablehistedit = ''
232 enablehistedit = ''
233 except KeyError:
233 except KeyError:
234 enablehistedit = " --config extensions.histedit="
234 enablehistedit = " --config extensions.histedit="
235 help = "hg%s help -e histedit" % enablehistedit
235 help = "hg%s help -e histedit" % enablehistedit
236 msg = _("interactive history editing is supported by the "
236 msg = _("interactive history editing is supported by the "
237 "'histedit' extension (see \"%s\")") % help
237 "'histedit' extension (see \"%s\")") % help
238 raise error.Abort(msg)
238 raise error.Abort(msg)
239
239
240 if collapsemsg and not collapsef:
240 if collapsemsg and not collapsef:
241 raise error.Abort(
241 raise error.Abort(
242 _('message can only be specified with collapse'))
242 _('message can only be specified with collapse'))
243
243
244 if contf or abortf:
244 if contf or abortf:
245 if contf and abortf:
245 if contf and abortf:
246 raise error.Abort(_('cannot use both abort and continue'))
246 raise error.Abort(_('cannot use both abort and continue'))
247 if collapsef:
247 if collapsef:
248 raise error.Abort(
248 raise error.Abort(
249 _('cannot use collapse with continue or abort'))
249 _('cannot use collapse with continue or abort'))
250 if srcf or basef or destf:
250 if srcf or basef or destf:
251 raise error.Abort(
251 raise error.Abort(
252 _('abort and continue do not allow specifying revisions'))
252 _('abort and continue do not allow specifying revisions'))
253 if abortf and opts.get('tool', False):
253 if abortf and opts.get('tool', False):
254 ui.warn(_('tool option will be ignored\n'))
254 ui.warn(_('tool option will be ignored\n'))
255
255
256 try:
256 try:
257 (originalwd, target, state, skipped, collapsef, keepf,
257 (originalwd, target, state, skipped, collapsef, keepf,
258 keepbranchesf, external, activebookmark) = restorestatus(repo)
258 keepbranchesf, external, activebookmark) = restorestatus(repo)
259 except error.RepoLookupError:
259 except error.RepoLookupError:
260 if abortf:
260 if abortf:
261 clearstatus(repo)
261 clearstatus(repo)
262 repo.ui.warn(_('rebase aborted (no revision is removed,'
262 repo.ui.warn(_('rebase aborted (no revision is removed,'
263 ' only broken state is cleared)\n'))
263 ' only broken state is cleared)\n'))
264 return 0
264 return 0
265 else:
265 else:
266 msg = _('cannot continue inconsistent rebase')
266 msg = _('cannot continue inconsistent rebase')
267 hint = _('use "hg rebase --abort" to clear broken state')
267 hint = _('use "hg rebase --abort" to clear broken state')
268 raise error.Abort(msg, hint=hint)
268 raise error.Abort(msg, hint=hint)
269 if abortf:
269 if abortf:
270 return abort(repo, originalwd, target, state,
270 return abort(repo, originalwd, target, state,
271 activebookmark=activebookmark)
271 activebookmark=activebookmark)
272 else:
272 else:
273 if srcf and basef:
273 if srcf and basef:
274 raise error.Abort(_('cannot specify both a '
274 raise error.Abort(_('cannot specify both a '
275 'source and a base'))
275 'source and a base'))
276 if revf and basef:
276 if revf and basef:
277 raise error.Abort(_('cannot specify both a '
277 raise error.Abort(_('cannot specify both a '
278 'revision and a base'))
278 'revision and a base'))
279 if revf and srcf:
279 if revf and srcf:
280 raise error.Abort(_('cannot specify both a '
280 raise error.Abort(_('cannot specify both a '
281 'revision and a source'))
281 'revision and a source'))
282
282
283 cmdutil.checkunfinished(repo)
283 cmdutil.checkunfinished(repo)
284 cmdutil.bailifchanged(repo)
284 cmdutil.bailifchanged(repo)
285
285
286 if destf:
286 if destf:
287 dest = scmutil.revsingle(repo, destf)
287 dest = scmutil.revsingle(repo, destf)
288 else:
288 else:
289 dest = repo[_destrebase(repo)]
289 dest = repo[_destrebase(repo)]
290 destf = str(dest)
290 destf = str(dest)
291
291
292 if revf:
292 if revf:
293 rebaseset = scmutil.revrange(repo, revf)
293 rebaseset = scmutil.revrange(repo, revf)
294 if not rebaseset:
294 if not rebaseset:
295 ui.status(_('empty "rev" revision set - '
295 ui.status(_('empty "rev" revision set - '
296 'nothing to rebase\n'))
296 'nothing to rebase\n'))
297 return _nothingtorebase()
297 return _nothingtorebase()
298 elif srcf:
298 elif srcf:
299 src = scmutil.revrange(repo, [srcf])
299 src = scmutil.revrange(repo, [srcf])
300 if not src:
300 if not src:
301 ui.status(_('empty "source" revision set - '
301 ui.status(_('empty "source" revision set - '
302 'nothing to rebase\n'))
302 'nothing to rebase\n'))
303 return _nothingtorebase()
303 return _nothingtorebase()
304 rebaseset = repo.revs('(%ld)::', src)
304 rebaseset = repo.revs('(%ld)::', src)
305 assert rebaseset
305 assert rebaseset
306 else:
306 else:
307 base = scmutil.revrange(repo, [basef or '.'])
307 base = scmutil.revrange(repo, [basef or '.'])
308 if not base:
308 if not base:
309 ui.status(_('empty "base" revision set - '
309 ui.status(_('empty "base" revision set - '
310 "can't compute rebase set\n"))
310 "can't compute rebase set\n"))
311 return _nothingtorebase()
311 return _nothingtorebase()
312 commonanc = repo.revs('ancestor(%ld, %d)', base, dest).first()
312 commonanc = repo.revs('ancestor(%ld, %d)', base, dest).first()
313 if commonanc is not None:
313 if commonanc is not None:
314 rebaseset = repo.revs('(%d::(%ld) - %d)::',
314 rebaseset = repo.revs('(%d::(%ld) - %d)::',
315 commonanc, base, commonanc)
315 commonanc, base, commonanc)
316 else:
316 else:
317 rebaseset = []
317 rebaseset = []
318
318
319 if not rebaseset:
319 if not rebaseset:
320 # transform to list because smartsets are not comparable to
320 # transform to list because smartsets are not comparable to
321 # lists. This should be improved to honor laziness of
321 # lists. This should be improved to honor laziness of
322 # smartset.
322 # smartset.
323 if list(base) == [dest.rev()]:
323 if list(base) == [dest.rev()]:
324 if basef:
324 if basef:
325 ui.status(_('nothing to rebase - %s is both "base"'
325 ui.status(_('nothing to rebase - %s is both "base"'
326 ' and destination\n') % dest)
326 ' and destination\n') % dest)
327 else:
327 else:
328 ui.status(_('nothing to rebase - working directory '
328 ui.status(_('nothing to rebase - working directory '
329 'parent is also destination\n'))
329 'parent is also destination\n'))
330 elif not repo.revs('%ld - ::%d', base, dest):
330 elif not repo.revs('%ld - ::%d', base, dest):
331 if basef:
331 if basef:
332 ui.status(_('nothing to rebase - "base" %s is '
332 ui.status(_('nothing to rebase - "base" %s is '
333 'already an ancestor of destination '
333 'already an ancestor of destination '
334 '%s\n') %
334 '%s\n') %
335 ('+'.join(str(repo[r]) for r in base),
335 ('+'.join(str(repo[r]) for r in base),
336 dest))
336 dest))
337 else:
337 else:
338 ui.status(_('nothing to rebase - working '
338 ui.status(_('nothing to rebase - working '
339 'directory parent is already an '
339 'directory parent is already an '
340 'ancestor of destination %s\n') % dest)
340 'ancestor of destination %s\n') % dest)
341 else: # can it happen?
341 else: # can it happen?
342 ui.status(_('nothing to rebase from %s to %s\n') %
342 ui.status(_('nothing to rebase from %s to %s\n') %
343 ('+'.join(str(repo[r]) for r in base), dest))
343 ('+'.join(str(repo[r]) for r in base), dest))
344 return _nothingtorebase()
344 return _nothingtorebase()
345
345
346 allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
346 allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
347 if (not (keepf or allowunstable)
347 if (not (keepf or allowunstable)
348 and repo.revs('first(children(%ld) - %ld)',
348 and repo.revs('first(children(%ld) - %ld)',
349 rebaseset, rebaseset)):
349 rebaseset, rebaseset)):
350 raise error.Abort(
350 raise error.Abort(
351 _("can't remove original changesets with"
351 _("can't remove original changesets with"
352 " unrebased descendants"),
352 " unrebased descendants"),
353 hint=_('use --keep to keep original changesets'))
353 hint=_('use --keep to keep original changesets'))
354
354
355 obsoletenotrebased = {}
355 obsoletenotrebased = {}
356 if ui.configbool('experimental', 'rebaseskipobsolete'):
356 if ui.configbool('experimental', 'rebaseskipobsolete'):
357 rebasesetrevs = set(rebaseset)
357 rebasesetrevs = set(rebaseset)
358 obsoletenotrebased = _computeobsoletenotrebased(repo,
358 obsoletenotrebased = _computeobsoletenotrebased(repo,
359 rebasesetrevs,
359 rebasesetrevs,
360 dest)
360 dest)
361
361
362 # - plain prune (no successor) changesets are rebased
362 # - plain prune (no successor) changesets are rebased
363 # - split changesets are not rebased if at least one of the
363 # - split changesets are not rebased if at least one of the
364 # changesets resulting from the split is an ancestor of dest
364 # changesets resulting from the split is an ancestor of dest
365 rebaseset = rebasesetrevs - set(obsoletenotrebased)
365 rebaseset = rebasesetrevs - set(obsoletenotrebased)
366 result = buildstate(repo, dest, rebaseset, collapsef,
366 result = buildstate(repo, dest, rebaseset, collapsef,
367 obsoletenotrebased)
367 obsoletenotrebased)
368
368
369 if not result:
369 if not result:
370 # Empty state built, nothing to rebase
370 # Empty state built, nothing to rebase
371 ui.status(_('nothing to rebase\n'))
371 ui.status(_('nothing to rebase\n'))
372 return _nothingtorebase()
372 return _nothingtorebase()
373
373
374 root = min(rebaseset)
374 root = min(rebaseset)
375 if not keepf and not repo[root].mutable():
375 if not keepf and not repo[root].mutable():
376 raise error.Abort(_("can't rebase public changeset %s")
376 raise error.Abort(_("can't rebase public changeset %s")
377 % repo[root],
377 % repo[root],
378 hint=_('see "hg help phases" for details'))
378 hint=_('see "hg help phases" for details'))
379
379
380 originalwd, target, state = result
380 originalwd, target, state = result
381 if collapsef:
381 if collapsef:
382 targetancestors = repo.changelog.ancestors([target],
382 targetancestors = repo.changelog.ancestors([target],
383 inclusive=True)
383 inclusive=True)
384 external = externalparent(repo, state, targetancestors)
384 external = externalparent(repo, state, targetancestors)
385
385
386 if dest.closesbranch() and not keepbranchesf:
386 if dest.closesbranch() and not keepbranchesf:
387 ui.status(_('reopening closed branch head %s\n') % dest)
387 ui.status(_('reopening closed branch head %s\n') % dest)
388
388
389 if keepbranchesf and collapsef:
389 if keepbranchesf and collapsef:
390 branches = set()
390 branches = set()
391 for rev in state:
391 for rev in state:
392 branches.add(repo[rev].branch())
392 branches.add(repo[rev].branch())
393 if len(branches) > 1:
393 if len(branches) > 1:
394 raise error.Abort(_('cannot collapse multiple named '
394 raise error.Abort(_('cannot collapse multiple named '
395 'branches'))
395 'branches'))
396
396
397 # Rebase
397 # Rebase
398 if not targetancestors:
398 if not targetancestors:
399 targetancestors = repo.changelog.ancestors([target], inclusive=True)
399 targetancestors = repo.changelog.ancestors([target], inclusive=True)
400
400
401 # Keep track of the current bookmarks in order to reset them later
401 # Keep track of the current bookmarks in order to reset them later
402 currentbookmarks = repo._bookmarks.copy()
402 currentbookmarks = repo._bookmarks.copy()
403 activebookmark = activebookmark or repo._activebookmark
403 activebookmark = activebookmark or repo._activebookmark
404 if activebookmark:
404 if activebookmark:
405 bookmarks.deactivate(repo)
405 bookmarks.deactivate(repo)
406
406
407 extrafn = _makeextrafn(extrafns)
407 extrafn = _makeextrafn(extrafns)
408
408
409 sortedstate = sorted(state)
409 sortedstate = sorted(state)
410 total = len(sortedstate)
410 total = len(sortedstate)
411 pos = 0
411 pos = 0
412 for rev in sortedstate:
412 for rev in sortedstate:
413 ctx = repo[rev]
413 ctx = repo[rev]
414 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
414 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
415 ctx.description().split('\n', 1)[0])
415 ctx.description().split('\n', 1)[0])
416 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
416 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
417 if names:
417 if names:
418 desc += ' (%s)' % ' '.join(names)
418 desc += ' (%s)' % ' '.join(names)
419 pos += 1
419 pos += 1
420 if state[rev] == revtodo:
420 if state[rev] == revtodo:
421 ui.status(_('rebasing %s\n') % desc)
421 ui.status(_('rebasing %s\n') % desc)
422 ui.progress(_("rebasing"), pos, ("%d:%s" % (rev, ctx)),
422 ui.progress(_("rebasing"), pos, ("%d:%s" % (rev, ctx)),
423 _('changesets'), total)
423 _('changesets'), total)
424 p1, p2, base = defineparents(repo, rev, target, state,
424 p1, p2, base = defineparents(repo, rev, target, state,
425 targetancestors)
425 targetancestors)
426 storestatus(repo, originalwd, target, state, collapsef, keepf,
426 storestatus(repo, originalwd, target, state, collapsef, keepf,
427 keepbranchesf, external, activebookmark)
427 keepbranchesf, external, activebookmark)
428 if len(repo[None].parents()) == 2:
428 if len(repo[None].parents()) == 2:
429 repo.ui.debug('resuming interrupted rebase\n')
429 repo.ui.debug('resuming interrupted rebase\n')
430 else:
430 else:
431 try:
431 try:
432 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
432 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
433 'rebase')
433 'rebase')
434 stats = rebasenode(repo, rev, p1, base, state,
434 stats = rebasenode(repo, rev, p1, base, state,
435 collapsef, target)
435 collapsef, target)
436 if stats and stats[3] > 0:
436 if stats and stats[3] > 0:
437 raise error.InterventionRequired(
437 raise error.InterventionRequired(
438 _('unresolved conflicts (see hg '
438 _('unresolved conflicts (see hg '
439 'resolve, then hg rebase --continue)'))
439 'resolve, then hg rebase --continue)'))
440 finally:
440 finally:
441 ui.setconfig('ui', 'forcemerge', '', 'rebase')
441 ui.setconfig('ui', 'forcemerge', '', 'rebase')
442 if not collapsef:
442 if not collapsef:
443 merging = p2 != nullrev
443 merging = p2 != nullrev
444 editform = cmdutil.mergeeditform(merging, 'rebase')
444 editform = cmdutil.mergeeditform(merging, 'rebase')
445 editor = cmdutil.getcommiteditor(editform=editform, **opts)
445 editor = cmdutil.getcommiteditor(editform=editform, **opts)
446 newnode = concludenode(repo, rev, p1, p2, extrafn=extrafn,
446 newnode = concludenode(repo, rev, p1, p2, extrafn=extrafn,
447 editor=editor,
447 editor=editor,
448 keepbranches=keepbranchesf,
448 keepbranches=keepbranchesf,
449 date=date)
449 date=date)
450 else:
450 else:
451 # Skip commit if we are collapsing
451 # Skip commit if we are collapsing
452 repo.dirstate.beginparentchange()
452 repo.dirstate.beginparentchange()
453 repo.setparents(repo[p1].node())
453 repo.setparents(repo[p1].node())
454 repo.dirstate.endparentchange()
454 repo.dirstate.endparentchange()
455 newnode = None
455 newnode = None
456 # Update the state
456 # Update the state
457 if newnode is not None:
457 if newnode is not None:
458 state[rev] = repo[newnode].rev()
458 state[rev] = repo[newnode].rev()
459 ui.debug('rebased as %s\n' % short(newnode))
459 ui.debug('rebased as %s\n' % short(newnode))
460 else:
460 else:
461 if not collapsef:
461 if not collapsef:
462 ui.warn(_('note: rebase of %d:%s created no changes '
462 ui.warn(_('note: rebase of %d:%s created no changes '
463 'to commit\n') % (rev, ctx))
463 'to commit\n') % (rev, ctx))
464 skipped.add(rev)
464 skipped.add(rev)
465 state[rev] = p1
465 state[rev] = p1
466 ui.debug('next revision set to %s\n' % p1)
466 ui.debug('next revision set to %s\n' % p1)
467 elif state[rev] == nullmerge:
467 elif state[rev] == nullmerge:
468 ui.debug('ignoring null merge rebase of %s\n' % rev)
468 ui.debug('ignoring null merge rebase of %s\n' % rev)
469 elif state[rev] == revignored:
469 elif state[rev] == revignored:
470 ui.status(_('not rebasing ignored %s\n') % desc)
470 ui.status(_('not rebasing ignored %s\n') % desc)
471 elif state[rev] == revprecursor:
471 elif state[rev] == revprecursor:
472 targetctx = repo[obsoletenotrebased[rev]]
472 targetctx = repo[obsoletenotrebased[rev]]
473 desctarget = '%d:%s "%s"' % (targetctx.rev(), targetctx,
473 desctarget = '%d:%s "%s"' % (targetctx.rev(), targetctx,
474 targetctx.description().split('\n', 1)[0])
474 targetctx.description().split('\n', 1)[0])
475 msg = _('note: not rebasing %s, already in destination as %s\n')
475 msg = _('note: not rebasing %s, already in destination as %s\n')
476 ui.status(msg % (desc, desctarget))
476 ui.status(msg % (desc, desctarget))
477 elif state[rev] == revpruned:
477 elif state[rev] == revpruned:
478 msg = _('note: not rebasing %s, it has no successor\n')
478 msg = _('note: not rebasing %s, it has no successor\n')
479 ui.status(msg % desc)
479 ui.status(msg % desc)
480 else:
480 else:
481 ui.status(_('already rebased %s as %s\n') %
481 ui.status(_('already rebased %s as %s\n') %
482 (desc, repo[state[rev]]))
482 (desc, repo[state[rev]]))
483
483
484 ui.progress(_('rebasing'), None)
484 ui.progress(_('rebasing'), None)
485 ui.note(_('rebase merging completed\n'))
485 ui.note(_('rebase merging completed\n'))
486
486
487 if collapsef and not keepopen:
487 if collapsef and not keepopen:
488 p1, p2, _base = defineparents(repo, min(state), target,
488 p1, p2, _base = defineparents(repo, min(state), target,
489 state, targetancestors)
489 state, targetancestors)
490 editopt = opts.get('edit')
490 editopt = opts.get('edit')
491 editform = 'rebase.collapse'
491 editform = 'rebase.collapse'
492 if collapsemsg:
492 if collapsemsg:
493 commitmsg = collapsemsg
493 commitmsg = collapsemsg
494 else:
494 else:
495 commitmsg = 'Collapsed revision'
495 commitmsg = 'Collapsed revision'
496 for rebased in state:
496 for rebased in state:
497 if rebased not in skipped and state[rebased] > nullmerge:
497 if rebased not in skipped and state[rebased] > nullmerge:
498 commitmsg += '\n* %s' % repo[rebased].description()
498 commitmsg += '\n* %s' % repo[rebased].description()
499 editopt = True
499 editopt = True
500 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
500 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
501 newnode = concludenode(repo, rev, p1, external, commitmsg=commitmsg,
501 newnode = concludenode(repo, rev, p1, external, commitmsg=commitmsg,
502 extrafn=extrafn, editor=editor,
502 extrafn=extrafn, editor=editor,
503 keepbranches=keepbranchesf,
503 keepbranches=keepbranchesf,
504 date=date)
504 date=date)
505 if newnode is None:
505 if newnode is None:
506 newrev = target
506 newrev = target
507 else:
507 else:
508 newrev = repo[newnode].rev()
508 newrev = repo[newnode].rev()
509 for oldrev in state.iterkeys():
509 for oldrev in state.iterkeys():
510 if state[oldrev] > nullmerge:
510 if state[oldrev] > nullmerge:
511 state[oldrev] = newrev
511 state[oldrev] = newrev
512
512
513 if 'qtip' in repo.tags():
513 if 'qtip' in repo.tags():
514 updatemq(repo, state, skipped, **opts)
514 updatemq(repo, state, skipped, **opts)
515
515
516 if currentbookmarks:
516 if currentbookmarks:
517 # Nodeids are needed to reset bookmarks
517 # Nodeids are needed to reset bookmarks
518 nstate = {}
518 nstate = {}
519 for k, v in state.iteritems():
519 for k, v in state.iteritems():
520 if v > nullmerge:
520 if v > nullmerge:
521 nstate[repo[k].node()] = repo[v].node()
521 nstate[repo[k].node()] = repo[v].node()
522 # XXX this is the same as dest.node() for the non-continue path --
522 # XXX this is the same as dest.node() for the non-continue path --
523 # this should probably be cleaned up
523 # this should probably be cleaned up
524 targetnode = repo[target].node()
524 targetnode = repo[target].node()
525
525
526 # restore original working directory
526 # restore original working directory
527 # (we do this before stripping)
527 # (we do this before stripping)
528 newwd = state.get(originalwd, originalwd)
528 newwd = state.get(originalwd, originalwd)
529 if newwd < 0:
529 if newwd < 0:
530 # original directory is a parent of rebase set root or ignored
530 # original directory is a parent of rebase set root or ignored
531 newwd = originalwd
531 newwd = originalwd
532 if newwd not in [c.rev() for c in repo[None].parents()]:
532 if newwd not in [c.rev() for c in repo[None].parents()]:
533 ui.note(_("update back to initial working directory parent\n"))
533 ui.note(_("update back to initial working directory parent\n"))
534 hg.updaterepo(repo, newwd, False)
534 hg.updaterepo(repo, newwd, False)
535
535
536 if not keepf:
536 if not keepf:
537 collapsedas = None
537 collapsedas = None
538 if collapsef:
538 if collapsef:
539 collapsedas = newnode
539 collapsedas = newnode
540 clearrebased(ui, repo, state, skipped, collapsedas)
540 clearrebased(ui, repo, state, skipped, collapsedas)
541
541
542 tr = None
542 tr = None
543 try:
543 try:
544 tr = repo.transaction('bookmark')
544 tr = repo.transaction('bookmark')
545 if currentbookmarks:
545 if currentbookmarks:
546 updatebookmarks(repo, targetnode, nstate, currentbookmarks, tr)
546 updatebookmarks(repo, targetnode, nstate, currentbookmarks, tr)
547 if activebookmark not in repo._bookmarks:
547 if activebookmark not in repo._bookmarks:
548 # active bookmark was divergent one and has been deleted
548 # active bookmark was divergent one and has been deleted
549 activebookmark = None
549 activebookmark = None
550 tr.close()
550 tr.close()
551 finally:
551 finally:
552 release(tr)
552 release(tr)
553 clearstatus(repo)
553 clearstatus(repo)
554
554
555 ui.note(_("rebase completed\n"))
555 ui.note(_("rebase completed\n"))
556 util.unlinkpath(repo.sjoin('undo'), ignoremissing=True)
556 util.unlinkpath(repo.sjoin('undo'), ignoremissing=True)
557 if skipped:
557 if skipped:
558 ui.note(_("%d revisions have been skipped\n") % len(skipped))
558 ui.note(_("%d revisions have been skipped\n") % len(skipped))
559
559
560 if (activebookmark and
560 if (activebookmark and
561 repo['.'].node() == repo._bookmarks[activebookmark]):
561 repo['.'].node() == repo._bookmarks[activebookmark]):
562 bookmarks.activate(repo, activebookmark)
562 bookmarks.activate(repo, activebookmark)
563
563
564 finally:
564 finally:
565 release(lock, wlock)
565 release(lock, wlock)
566
566
567 def externalparent(repo, state, targetancestors):
567 def externalparent(repo, state, targetancestors):
568 """Return the revision that should be used as the second parent
568 """Return the revision that should be used as the second parent
569 when the revisions in state is collapsed on top of targetancestors.
569 when the revisions in state is collapsed on top of targetancestors.
570 Abort if there is more than one parent.
570 Abort if there is more than one parent.
571 """
571 """
572 parents = set()
572 parents = set()
573 source = min(state)
573 source = min(state)
574 for rev in state:
574 for rev in state:
575 if rev == source:
575 if rev == source:
576 continue
576 continue
577 for p in repo[rev].parents():
577 for p in repo[rev].parents():
578 if (p.rev() not in state
578 if (p.rev() not in state
579 and p.rev() not in targetancestors):
579 and p.rev() not in targetancestors):
580 parents.add(p.rev())
580 parents.add(p.rev())
581 if not parents:
581 if not parents:
582 return nullrev
582 return nullrev
583 if len(parents) == 1:
583 if len(parents) == 1:
584 return parents.pop()
584 return parents.pop()
585 raise error.Abort(_('unable to collapse on top of %s, there is more '
585 raise error.Abort(_('unable to collapse on top of %s, there is more '
586 'than one external parent: %s') %
586 'than one external parent: %s') %
587 (max(targetancestors),
587 (max(targetancestors),
588 ', '.join(str(p) for p in sorted(parents))))
588 ', '.join(str(p) for p in sorted(parents))))
589
589
590 def concludenode(repo, rev, p1, p2, commitmsg=None, editor=None, extrafn=None,
590 def concludenode(repo, rev, p1, p2, commitmsg=None, editor=None, extrafn=None,
591 keepbranches=False, date=None):
591 keepbranches=False, date=None):
592 '''Commit the wd changes with parents p1 and p2. Reuse commit info from rev
592 '''Commit the wd changes with parents p1 and p2. Reuse commit info from rev
593 but also store useful information in extra.
593 but also store useful information in extra.
594 Return node of committed revision.'''
594 Return node of committed revision.'''
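# The "useful information" stored in extra is primarily
# extra['rebase_source'] (set below to the hex of the original revision),
# plus whatever the optional extrafn callback adds.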
595 dsguard = cmdutil.dirstateguard(repo, 'rebase')
595 dsguard = cmdutil.dirstateguard(repo, 'rebase')
596 try:
596 try:
597 repo.setparents(repo[p1].node(), repo[p2].node())
597 repo.setparents(repo[p1].node(), repo[p2].node())
598 ctx = repo[rev]
598 ctx = repo[rev]
599 if commitmsg is None:
599 if commitmsg is None:
600 commitmsg = ctx.description()
600 commitmsg = ctx.description()
601 keepbranch = keepbranches and repo[p1].branch() != ctx.branch()
601 keepbranch = keepbranches and repo[p1].branch() != ctx.branch()
602 extra = ctx.extra().copy()
602 extra = ctx.extra().copy()
603 if not keepbranches:
603 if not keepbranches:
604 del extra['branch']
604 del extra['branch']
605 extra['rebase_source'] = ctx.hex()
605 extra['rebase_source'] = ctx.hex()
606 if extrafn:
606 if extrafn:
607 extrafn(ctx, extra)
607 extrafn(ctx, extra)
608
608
609 backup = repo.ui.backupconfig('phases', 'new-commit')
609 backup = repo.ui.backupconfig('phases', 'new-commit')
610 try:
610 try:
611 targetphase = max(ctx.phase(), phases.draft)
611 targetphase = max(ctx.phase(), phases.draft)
612 repo.ui.setconfig('phases', 'new-commit', targetphase, 'rebase')
612 repo.ui.setconfig('phases', 'new-commit', targetphase, 'rebase')
613 if keepbranch:
613 if keepbranch:
614 repo.ui.setconfig('ui', 'allowemptycommit', True)
614 repo.ui.setconfig('ui', 'allowemptycommit', True)
615 # Commit might fail if unresolved files exist
615 # Commit might fail if unresolved files exist
616 if date is None:
616 if date is None:
617 date = ctx.date()
617 date = ctx.date()
618 newnode = repo.commit(text=commitmsg, user=ctx.user(),
618 newnode = repo.commit(text=commitmsg, user=ctx.user(),
619 date=date, extra=extra, editor=editor)
619 date=date, extra=extra, editor=editor)
620 finally:
620 finally:
621 repo.ui.restoreconfig(backup)
621 repo.ui.restoreconfig(backup)
622
622
623 repo.dirstate.setbranch(repo[newnode].branch())
623 repo.dirstate.setbranch(repo[newnode].branch())
624 dsguard.close()
624 dsguard.close()
625 return newnode
625 return newnode
626 finally:
626 finally:
627 release(dsguard)
627 release(dsguard)
628
628
629 def rebasenode(repo, rev, p1, base, state, collapse, target):
629 def rebasenode(repo, rev, p1, base, state, collapse, target):
630 'Rebase a single revision rev on top of p1 using base as merge ancestor'
630 'Rebase a single revision rev on top of p1 using base as merge ancestor'
631 # Merge phase
631 # Merge phase
632 # Update to target and merge it with local
632 # Update to target and merge it with local
633 if repo['.'].rev() != p1:
633 if repo['.'].rev() != p1:
634 repo.ui.debug(" update to %d:%s\n" % (p1, repo[p1]))
634 repo.ui.debug(" update to %d:%s\n" % (p1, repo[p1]))
635 merge.update(repo, p1, False, True, False)
635 merge.update(repo, p1, False, True)
636 else:
636 else:
637 repo.ui.debug(" already in target\n")
637 repo.ui.debug(" already in target\n")
638 repo.dirstate.write(repo.currenttransaction())
638 repo.dirstate.write(repo.currenttransaction())
639 repo.ui.debug(" merge against %d:%s\n" % (rev, repo[rev]))
639 repo.ui.debug(" merge against %d:%s\n" % (rev, repo[rev]))
640 if base is not None:
640 if base is not None:
641 repo.ui.debug(" detach base %d:%s\n" % (base, repo[base]))
641 repo.ui.debug(" detach base %d:%s\n" % (base, repo[base]))
642 # When collapsing in-place, the parent is the common ancestor, so we
642 # When collapsing in-place, the parent is the common ancestor, so we
643 # have to allow merging with it.
643 # have to allow merging with it.
644 stats = merge.update(repo, rev, True, True, False, base, collapse,
644 stats = merge.update(repo, rev, True, True, base, collapse,
645 labels=['dest', 'source'])
645 labels=['dest', 'source'])
646 if collapse:
646 if collapse:
647 copies.duplicatecopies(repo, rev, target)
647 copies.duplicatecopies(repo, rev, target)
648 else:
648 else:
649 # If we're not using --collapse, we need to
649 # If we're not using --collapse, we need to
650 # duplicate copies between the revision we're
650 # duplicate copies between the revision we're
651 # rebasing and its first parent, but *not*
651 # rebasing and its first parent, but *not*
652 # duplicate any copies that have already been
652 # duplicate any copies that have already been
653 # performed in the destination.
653 # performed in the destination.
654 p1rev = repo[rev].p1().rev()
654 p1rev = repo[rev].p1().rev()
655 copies.duplicatecopies(repo, rev, p1rev, skiprev=target)
655 copies.duplicatecopies(repo, rev, p1rev, skiprev=target)
656 return stats
656 return stats
657
657
658 def nearestrebased(repo, rev, state):
658 def nearestrebased(repo, rev, state):
659 """return the nearest ancestors of rev in the rebase result"""
659 """return the nearest ancestors of rev in the rebase result"""
660 rebased = [r for r in state if state[r] > nullmerge]
660 rebased = [r for r in state if state[r] > nullmerge]
661 candidates = repo.revs('max(%ld and (::%d))', rebased, rev)
661 candidates = repo.revs('max(%ld and (::%d))', rebased, rev)
662 if candidates:
662 if candidates:
663 return state[candidates.first()]
663 return state[candidates.first()]
664 else:
664 else:
665 return None
665 return None
666
666
667 def defineparents(repo, rev, target, state, targetancestors):
667 def defineparents(repo, rev, target, state, targetancestors):
668 'Return the new parent relationship of the revision that will be rebased'
668 'Return the new parent relationship of the revision that will be rebased'
669 parents = repo[rev].parents()
669 parents = repo[rev].parents()
670 p1 = p2 = nullrev
670 p1 = p2 = nullrev
671
671
672 p1n = parents[0].rev()
672 p1n = parents[0].rev()
673 if p1n in targetancestors:
673 if p1n in targetancestors:
674 p1 = target
674 p1 = target
675 elif p1n in state:
675 elif p1n in state:
676 if state[p1n] == nullmerge:
676 if state[p1n] == nullmerge:
677 p1 = target
677 p1 = target
678 elif state[p1n] in revskipped:
678 elif state[p1n] in revskipped:
679 p1 = nearestrebased(repo, p1n, state)
679 p1 = nearestrebased(repo, p1n, state)
680 if p1 is None:
680 if p1 is None:
681 p1 = target
681 p1 = target
682 else:
682 else:
683 p1 = state[p1n]
683 p1 = state[p1n]
684 else: # p1n external
684 else: # p1n external
685 p1 = target
685 p1 = target
686 p2 = p1n
686 p2 = p1n
687
687
688 if len(parents) == 2 and parents[1].rev() not in targetancestors:
688 if len(parents) == 2 and parents[1].rev() not in targetancestors:
689 p2n = parents[1].rev()
689 p2n = parents[1].rev()
690 # interesting second parent
690 # interesting second parent
691 if p2n in state:
691 if p2n in state:
692 if p1 == target: # p1n in targetancestors or external
692 if p1 == target: # p1n in targetancestors or external
693 p1 = state[p2n]
693 p1 = state[p2n]
694 elif state[p2n] in revskipped:
694 elif state[p2n] in revskipped:
695 p2 = nearestrebased(repo, p2n, state)
695 p2 = nearestrebased(repo, p2n, state)
696 if p2 is None:
696 if p2 is None:
697 # no ancestors rebased yet, detach
697 # no ancestors rebased yet, detach
698 p2 = target
698 p2 = target
699 else:
699 else:
700 p2 = state[p2n]
700 p2 = state[p2n]
701 else: # p2n external
701 else: # p2n external
702 if p2 != nullrev: # p1n external too => rev is a merged revision
702 if p2 != nullrev: # p1n external too => rev is a merged revision
703 raise error.Abort(_('cannot use revision %d as base, result '
703 raise error.Abort(_('cannot use revision %d as base, result '
704 'would have 3 parents') % rev)
704 'would have 3 parents') % rev)
705 p2 = p2n
705 p2 = p2n
706 repo.ui.debug(" future parents are %d and %d\n" %
706 repo.ui.debug(" future parents are %d and %d\n" %
707 (repo[p1].rev(), repo[p2].rev()))
707 (repo[p1].rev(), repo[p2].rev()))
708
708
709 if rev == min(state):
709 if rev == min(state):
710 # Case (1) initial changeset of a non-detaching rebase.
710 # Case (1) initial changeset of a non-detaching rebase.
711 # Let the merge mechanism find the base itself.
711 # Let the merge mechanism find the base itself.
712 base = None
712 base = None
713 elif not repo[rev].p2():
713 elif not repo[rev].p2():
714 # Case (2) detaching the node with a single parent, use this parent
714 # Case (2) detaching the node with a single parent, use this parent
715 base = repo[rev].p1().rev()
715 base = repo[rev].p1().rev()
716 else:
716 else:
717 # Assuming there is a p1, this is the case where there also is a p2.
717 # Assuming there is a p1, this is the case where there also is a p2.
718 # We are thus rebasing a merge and need to pick the right merge base.
718 # We are thus rebasing a merge and need to pick the right merge base.
719 #
719 #
720 # Imagine we have:
720 # Imagine we have:
721 # - M: current rebase revision in this step
721 # - M: current rebase revision in this step
722 # - A: one parent of M
722 # - A: one parent of M
723 # - B: other parent of M
723 # - B: other parent of M
724 # - D: destination of this merge step (p1 var)
724 # - D: destination of this merge step (p1 var)
725 #
725 #
726 # Consider the case where D is a descendant of A or B and the other is
726 # Consider the case where D is a descendant of A or B and the other is
727 # 'outside'. In this case, the right merge base is the D ancestor.
727 # 'outside'. In this case, the right merge base is the D ancestor.
728 #
728 #
729 # An informal proof, assuming A is 'outside' and B is the D ancestor:
729 # An informal proof, assuming A is 'outside' and B is the D ancestor:
730 #
730 #
731 # If we pick B as the base, the merge involves:
731 # If we pick B as the base, the merge involves:
732 # - changes from B to M (actual changeset payload)
732 # - changes from B to M (actual changeset payload)
733 # - changes from B to D (induced by rebase, as D is a rebased
733 # - changes from B to D (induced by rebase, as D is a rebased
734 # version of B)
734 # version of B)
735 # Which exactly represents the rebase operation.
735 # Which exactly represents the rebase operation.
736 #
736 #
737 # If we pick A as the base, the merge involves:
737 # If we pick A as the base, the merge involves:
738 # - changes from A to M (actual changeset payload)
738 # - changes from A to M (actual changeset payload)
739 # - changes from A to D (which includes changes between unrelated A and B
739 # - changes from A to D (which includes changes between unrelated A and B
740 # plus changes induced by rebase)
740 # plus changes induced by rebase)
741 # Which does not represent anything sensible and creates a lot of
741 # Which does not represent anything sensible and creates a lot of
742 # conflicts. A is thus not the right choice - B is.
742 # conflicts. A is thus not the right choice - B is.
743 #
743 #
744 # Note: The base found in this 'proof' is only correct in the specified
744 # Note: The base found in this 'proof' is only correct in the specified
745 # case. This base does not make sense if D is not a descendant of A or B
745 # case. This base does not make sense if D is not a descendant of A or B
746 # or if the other parent is not 'outside' (especially not if the other
746 # or if the other parent is not 'outside' (especially not if the other
747 # parent has been rebased). The current implementation does not
747 # parent has been rebased). The current implementation does not
748 # make it feasible to consider different cases separately. In these
748 # make it feasible to consider different cases separately. In these
749 # other cases we currently just leave it to the user to correctly
749 # other cases we currently just leave it to the user to correctly
750 # resolve an impossible merge using a wrong ancestor.
750 # resolve an impossible merge using a wrong ancestor.
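# Concretely (hypothetical revs): rebasing the merge M = 5 with parents
# A = 3 ('outside') and B = 4, where 4 has already been rebased to D = 9
# (== p1), the loop below finds state.get(4) == p1 and picks base = 4.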
751 for p in repo[rev].parents():
751 for p in repo[rev].parents():
752 if state.get(p.rev()) == p1:
752 if state.get(p.rev()) == p1:
753 base = p.rev()
753 base = p.rev()
754 break
754 break
755 else: # fallback when base not found
755 else: # fallback when base not found
756 base = None
756 base = None
757
757
758 # Raise because this function is called wrong (see issue 4106)
758 # Raise because this function is called wrong (see issue 4106)
759 raise AssertionError('no base found to rebase on '
759 raise AssertionError('no base found to rebase on '
760 '(defineparents called wrong)')
760 '(defineparents called wrong)')
761 return p1, p2, base
761 return p1, p2, base
762
762
763 def isagitpatch(repo, patchname):
763 def isagitpatch(repo, patchname):
764 'Return true if the given patch is in git format'
764 'Return true if the given patch is in git format'
765 mqpatch = os.path.join(repo.mq.path, patchname)
765 mqpatch = os.path.join(repo.mq.path, patchname)
766 for line in patch.linereader(file(mqpatch, 'rb')):
766 for line in patch.linereader(file(mqpatch, 'rb')):
767 if line.startswith('diff --git'):
767 if line.startswith('diff --git'):
768 return True
768 return True
769 return False
769 return False
770
770
771 def updatemq(repo, state, skipped, **opts):
771 def updatemq(repo, state, skipped, **opts):
772 'Update rebased mq patches - finalize and then import them'
772 'Update rebased mq patches - finalize and then import them'
773 mqrebase = {}
773 mqrebase = {}
774 mq = repo.mq
774 mq = repo.mq
775 original_series = mq.fullseries[:]
775 original_series = mq.fullseries[:]
776 skippedpatches = set()
776 skippedpatches = set()
777
777
778 for p in mq.applied:
778 for p in mq.applied:
779 rev = repo[p.node].rev()
779 rev = repo[p.node].rev()
780 if rev in state:
780 if rev in state:
781 repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
781 repo.ui.debug('revision %d is an mq patch (%s), finalize it.\n' %
782 (rev, p.name))
782 (rev, p.name))
783 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
783 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
784 else:
784 else:
785 # Applied but not rebased, not sure this should happen
785 # Applied but not rebased, not sure this should happen
786 skippedpatches.add(p.name)
786 skippedpatches.add(p.name)
787
787
788 if mqrebase:
788 if mqrebase:
789 mq.finish(repo, mqrebase.keys())
789 mq.finish(repo, mqrebase.keys())
790
790
791 # We must start import from the newest revision
791 # We must start import from the newest revision
792 for rev in sorted(mqrebase, reverse=True):
792 for rev in sorted(mqrebase, reverse=True):
793 if rev not in skipped:
793 if rev not in skipped:
794 name, isgit = mqrebase[rev]
794 name, isgit = mqrebase[rev]
795 repo.ui.note(_('updating mq patch %s to %s:%s\n') %
795 repo.ui.note(_('updating mq patch %s to %s:%s\n') %
796 (name, state[rev], repo[state[rev]]))
796 (name, state[rev], repo[state[rev]]))
797 mq.qimport(repo, (), patchname=name, git=isgit,
797 mq.qimport(repo, (), patchname=name, git=isgit,
798 rev=[str(state[rev])])
798 rev=[str(state[rev])])
799 else:
799 else:
800 # Rebased and skipped
800 # Rebased and skipped
801 skippedpatches.add(mqrebase[rev][0])
801 skippedpatches.add(mqrebase[rev][0])
802
802
803 # Patches were either applied and rebased and imported in
803 # Patches were either applied and rebased and imported in
804 # order, applied and removed or unapplied. Discard the removed
804 # order, applied and removed or unapplied. Discard the removed
805 # ones while preserving the original series order and guards.
805 # ones while preserving the original series order and guards.
806 newseries = [s for s in original_series
806 newseries = [s for s in original_series
807 if mq.guard_re.split(s, 1)[0] not in skippedpatches]
807 if mq.guard_re.split(s, 1)[0] not in skippedpatches]
808 mq.fullseries[:] = newseries
808 mq.fullseries[:] = newseries
809 mq.seriesdirty = True
809 mq.seriesdirty = True
810 mq.savedirty()
810 mq.savedirty()
811
811
812 def updatebookmarks(repo, targetnode, nstate, originalbookmarks, tr):
812 def updatebookmarks(repo, targetnode, nstate, originalbookmarks, tr):
813 'Move bookmarks to their correct changesets, and delete divergent ones'
813 'Move bookmarks to their correct changesets, and delete divergent ones'
814 marks = repo._bookmarks
814 marks = repo._bookmarks
815 for k, v in originalbookmarks.iteritems():
815 for k, v in originalbookmarks.iteritems():
816 if v in nstate:
816 if v in nstate:
817 # update the bookmarks for revs that have moved
817 # update the bookmarks for revs that have moved
818 marks[k] = nstate[v]
818 marks[k] = nstate[v]
819 bookmarks.deletedivergent(repo, [targetnode], k)
819 bookmarks.deletedivergent(repo, [targetnode], k)
820 marks.recordchange(tr)
820 marks.recordchange(tr)
821
821
822 def storestatus(repo, originalwd, target, state, collapse, keep, keepbranches,
822 def storestatus(repo, originalwd, target, state, collapse, keep, keepbranches,
823 external, activebookmark):
823 external, activebookmark):
824 'Store the current status to allow recovery'
824 'Store the current status to allow recovery'
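# Layout of .hg/rebasestate, mirrored by restorestatus() below:
#   line 1: hex node of the original working directory parent
#   line 2: hex node of the rebase target
#   line 3: hex node of the external (collapse) parent, or the null node
#   lines 4-6: collapse, keep and keepbranches flags as '0'/'1'
#   line 7: name of the active bookmark, or an empty line
#   remaining lines: one 'oldhex:newhex' entry per revision in state
#   (negative special states are written as their integer value; revtodo
#   is written as the null node, see below)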
825 f = repo.vfs("rebasestate", "w")
825 f = repo.vfs("rebasestate", "w")
826 f.write(repo[originalwd].hex() + '\n')
826 f.write(repo[originalwd].hex() + '\n')
827 f.write(repo[target].hex() + '\n')
827 f.write(repo[target].hex() + '\n')
828 f.write(repo[external].hex() + '\n')
828 f.write(repo[external].hex() + '\n')
829 f.write('%d\n' % int(collapse))
829 f.write('%d\n' % int(collapse))
830 f.write('%d\n' % int(keep))
830 f.write('%d\n' % int(keep))
831 f.write('%d\n' % int(keepbranches))
831 f.write('%d\n' % int(keepbranches))
832 f.write('%s\n' % (activebookmark or ''))
832 f.write('%s\n' % (activebookmark or ''))
833 for d, v in state.iteritems():
833 for d, v in state.iteritems():
834 oldrev = repo[d].hex()
834 oldrev = repo[d].hex()
835 if v >= 0:
835 if v >= 0:
836 newrev = repo[v].hex()
836 newrev = repo[v].hex()
837 elif v == revtodo:
837 elif v == revtodo:
838 # To maintain format compatibility, we have to use nullid.
838 # To maintain format compatibility, we have to use nullid.
839 # Please do remove this special case when upgrading the format.
839 # Please do remove this special case when upgrading the format.
840 newrev = hex(nullid)
840 newrev = hex(nullid)
841 else:
841 else:
842 newrev = v
842 newrev = v
843 f.write("%s:%s\n" % (oldrev, newrev))
843 f.write("%s:%s\n" % (oldrev, newrev))
844 f.close()
844 f.close()
845 repo.ui.debug('rebase status stored\n')
845 repo.ui.debug('rebase status stored\n')
846
846
847 def clearstatus(repo):
847 def clearstatus(repo):
848 'Remove the status files'
848 'Remove the status files'
849 _clearrebasesetvisibiliy(repo)
849 _clearrebasesetvisibiliy(repo)
850 util.unlinkpath(repo.join("rebasestate"), ignoremissing=True)
850 util.unlinkpath(repo.join("rebasestate"), ignoremissing=True)
851
851
852 def restorestatus(repo):
852 def restorestatus(repo):
853 'Restore a previously stored status'
853 'Restore a previously stored status'
854 keepbranches = None
854 keepbranches = None
855 target = None
855 target = None
856 collapse = False
856 collapse = False
857 external = nullrev
857 external = nullrev
858 activebookmark = None
858 activebookmark = None
859 state = {}
859 state = {}
860
860
861 try:
861 try:
862 f = repo.vfs("rebasestate")
862 f = repo.vfs("rebasestate")
863 for i, l in enumerate(f.read().splitlines()):
863 for i, l in enumerate(f.read().splitlines()):
864 if i == 0:
864 if i == 0:
865 originalwd = repo[l].rev()
865 originalwd = repo[l].rev()
866 elif i == 1:
866 elif i == 1:
867 target = repo[l].rev()
867 target = repo[l].rev()
868 elif i == 2:
868 elif i == 2:
869 external = repo[l].rev()
869 external = repo[l].rev()
870 elif i == 3:
870 elif i == 3:
871 collapse = bool(int(l))
871 collapse = bool(int(l))
872 elif i == 4:
872 elif i == 4:
873 keep = bool(int(l))
873 keep = bool(int(l))
874 elif i == 5:
874 elif i == 5:
875 keepbranches = bool(int(l))
875 keepbranches = bool(int(l))
876 elif i == 6 and not (len(l) == 81 and ':' in l):
876 elif i == 6 and not (len(l) == 81 and ':' in l):
877 # line 6 is a recent addition, so for backwards compatibility
877 # line 6 is a recent addition, so for backwards compatibility
878 # check that the line doesn't look like the oldrev:newrev lines
878 # check that the line doesn't look like the oldrev:newrev lines
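# (an oldrev:newrev entry is exactly 81 characters long: two 40-character
# hex nodes joined by ':')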
879 activebookmark = l
879 activebookmark = l
880 else:
880 else:
881 oldrev, newrev = l.split(':')
881 oldrev, newrev = l.split(':')
882 if newrev in (str(nullmerge), str(revignored),
882 if newrev in (str(nullmerge), str(revignored),
883 str(revprecursor), str(revpruned)):
883 str(revprecursor), str(revpruned)):
884 state[repo[oldrev].rev()] = int(newrev)
884 state[repo[oldrev].rev()] = int(newrev)
885 elif newrev == nullid:
885 elif newrev == nullid:
886 state[repo[oldrev].rev()] = revtodo
886 state[repo[oldrev].rev()] = revtodo
887 # Legacy compat special case
887 # Legacy compat special case
888 else:
888 else:
889 state[repo[oldrev].rev()] = repo[newrev].rev()
889 state[repo[oldrev].rev()] = repo[newrev].rev()
890
890
891 except IOError as err:
891 except IOError as err:
892 if err.errno != errno.ENOENT:
892 if err.errno != errno.ENOENT:
893 raise
893 raise
894 raise error.Abort(_('no rebase in progress'))
894 raise error.Abort(_('no rebase in progress'))
895
895
896 if keepbranches is None:
896 if keepbranches is None:
897 raise error.Abort(_('.hg/rebasestate is incomplete'))
897 raise error.Abort(_('.hg/rebasestate is incomplete'))
898
898
899 skipped = set()
899 skipped = set()
900 # recompute the set of skipped revs
900 # recompute the set of skipped revs
901 if not collapse:
901 if not collapse:
902 seen = set([target])
902 seen = set([target])
903 for old, new in sorted(state.items()):
903 for old, new in sorted(state.items()):
904 if new != revtodo and new in seen:
904 if new != revtodo and new in seen:
905 skipped.add(old)
905 skipped.add(old)
906 seen.add(new)
906 seen.add(new)
907 repo.ui.debug('computed skipped revs: %s\n' %
907 repo.ui.debug('computed skipped revs: %s\n' %
908 (' '.join(str(r) for r in sorted(skipped)) or None))
908 (' '.join(str(r) for r in sorted(skipped)) or None))
909 repo.ui.debug('rebase status resumed\n')
909 repo.ui.debug('rebase status resumed\n')
910 _setrebasesetvisibility(repo, state.keys())
910 _setrebasesetvisibility(repo, state.keys())
911 return (originalwd, target, state, skipped,
911 return (originalwd, target, state, skipped,
912 collapse, keep, keepbranches, external, activebookmark)
912 collapse, keep, keepbranches, external, activebookmark)
913
913
914 def needupdate(repo, state):
914 def needupdate(repo, state):
915 '''check whether we should `update --clean` away from a merge, or if
915 '''check whether we should `update --clean` away from a merge, or if
916 somehow the working dir got forcibly updated, e.g. by older hg'''
916 somehow the working dir got forcibly updated, e.g. by older hg'''
917 parents = [p.rev() for p in repo[None].parents()]
917 parents = [p.rev() for p in repo[None].parents()]
918
918
919 # Are we in a merge state at all?
919 # Are we in a merge state at all?
920 if len(parents) < 2:
920 if len(parents) < 2:
921 return False
921 return False
922
922
923 # We should be standing on the first as-of-yet unrebased commit.
923 # We should be standing on the first as-of-yet unrebased commit.
924 firstunrebased = min([old for old, new in state.iteritems()
924 firstunrebased = min([old for old, new in state.iteritems()
925 if new == nullrev])
925 if new == nullrev])
926 if firstunrebased in parents:
926 if firstunrebased in parents:
927 return True
927 return True
928
928
929 return False
929 return False
930
930
931 def abort(repo, originalwd, target, state, activebookmark=None):
931 def abort(repo, originalwd, target, state, activebookmark=None):
932 '''Restore the repository to its original state. Additional args:
932 '''Restore the repository to its original state. Additional args:
933
933
934 activebookmark: the name of the bookmark that should be active after the
934 activebookmark: the name of the bookmark that should be active after the
935 restore'''
935 restore'''
936
936
937 try:
937 try:
938 # If the first commits in the rebased set get skipped during the rebase,
938 # If the first commits in the rebased set get skipped during the rebase,
939 # their values within the state mapping will be the target rev id. The
939 # their values within the state mapping will be the target rev id. The
940 # dstates list must not contain the target rev (issue4896)
940 # dstates list must not contain the target rev (issue4896)
941 dstates = [s for s in state.values() if s >= 0 and s != target]
941 dstates = [s for s in state.values() if s >= 0 and s != target]
942 immutable = [d for d in dstates if not repo[d].mutable()]
942 immutable = [d for d in dstates if not repo[d].mutable()]
943 cleanup = True
943 cleanup = True
944 if immutable:
944 if immutable:
945 repo.ui.warn(_("warning: can't clean up public changesets %s\n")
945 repo.ui.warn(_("warning: can't clean up public changesets %s\n")
946 % ', '.join(str(repo[r]) for r in immutable),
946 % ', '.join(str(repo[r]) for r in immutable),
947 hint=_('see "hg help phases" for details'))
947 hint=_('see "hg help phases" for details'))
948 cleanup = False
948 cleanup = False
949
949
950 descendants = set()
950 descendants = set()
951 if dstates:
951 if dstates:
952 descendants = set(repo.changelog.descendants(dstates))
952 descendants = set(repo.changelog.descendants(dstates))
953 if descendants - set(dstates):
953 if descendants - set(dstates):
954 repo.ui.warn(_("warning: new changesets detected on target branch, "
954 repo.ui.warn(_("warning: new changesets detected on target branch, "
955 "can't strip\n"))
955 "can't strip\n"))
956 cleanup = False
956 cleanup = False
957
957
958 if cleanup:
958 if cleanup:
959 # Update away from the rebase if necessary
959 # Update away from the rebase if necessary
960 if needupdate(repo, state):
960 if needupdate(repo, state):
961 merge.update(repo, originalwd, False, True, False)
961 merge.update(repo, originalwd, False, True)
962
962
963 # Strip from the first rebased revision
963 # Strip from the first rebased revision
964 rebased = filter(lambda x: x >= 0 and x != target, state.values())
964 rebased = filter(lambda x: x >= 0 and x != target, state.values())
965 if rebased:
965 if rebased:
966 strippoints = [
966 strippoints = [
967 c.node() for c in repo.set('roots(%ld)', rebased)]
967 c.node() for c in repo.set('roots(%ld)', rebased)]
968 # no backup of rebased cset versions needed
968 # no backup of rebased cset versions needed
969 repair.strip(repo.ui, repo, strippoints)
969 repair.strip(repo.ui, repo, strippoints)
970
970
971 if activebookmark and activebookmark in repo._bookmarks:
971 if activebookmark and activebookmark in repo._bookmarks:
972 bookmarks.activate(repo, activebookmark)
972 bookmarks.activate(repo, activebookmark)
973
973
974 finally:
974 finally:
975 clearstatus(repo)
975 clearstatus(repo)
976 repo.ui.warn(_('rebase aborted\n'))
976 repo.ui.warn(_('rebase aborted\n'))
977 return 0
977 return 0
978
978
979 def buildstate(repo, dest, rebaseset, collapse, obsoletenotrebased):
979 def buildstate(repo, dest, rebaseset, collapse, obsoletenotrebased):
980 '''Define which revisions are going to be rebased and where
980 '''Define which revisions are going to be rebased and where
981
981
982 repo: repo
982 repo: repo
983 dest: context
983 dest: context
984 rebaseset: set of rev
984 rebaseset: set of rev
985 '''
985 '''
986 _setrebasesetvisibility(repo, rebaseset)
986 _setrebasesetvisibility(repo, rebaseset)
987
987
988 # This check isn't strictly necessary, since mq detects commits over an
988 # This check isn't strictly necessary, since mq detects commits over an
989 # applied patch. But it prevents messing up the working directory when
989 # applied patch. But it prevents messing up the working directory when
990 # a partially completed rebase is blocked by mq.
990 # a partially completed rebase is blocked by mq.
991 if 'qtip' in repo.tags() and (dest.node() in
991 if 'qtip' in repo.tags() and (dest.node() in
992 [s.node for s in repo.mq.applied]):
992 [s.node for s in repo.mq.applied]):
993 raise error.Abort(_('cannot rebase onto an applied mq patch'))
993 raise error.Abort(_('cannot rebase onto an applied mq patch'))
994
994
995 roots = list(repo.set('roots(%ld)', rebaseset))
995 roots = list(repo.set('roots(%ld)', rebaseset))
996 if not roots:
996 if not roots:
997 raise error.Abort(_('no matching revisions'))
997 raise error.Abort(_('no matching revisions'))
998 roots.sort()
998 roots.sort()
999 state = {}
999 state = {}
1000 detachset = set()
1000 detachset = set()
1001 for root in roots:
1001 for root in roots:
1002 commonbase = root.ancestor(dest)
1002 commonbase = root.ancestor(dest)
1003 if commonbase == root:
1003 if commonbase == root:
1004 raise error.Abort(_('source is ancestor of destination'))
1004 raise error.Abort(_('source is ancestor of destination'))
1005 if commonbase == dest:
1005 if commonbase == dest:
1006 samebranch = root.branch() == dest.branch()
1006 samebranch = root.branch() == dest.branch()
1007 if not collapse and samebranch and root in dest.children():
1007 if not collapse and samebranch and root in dest.children():
1008 repo.ui.debug('source is a child of destination\n')
1008 repo.ui.debug('source is a child of destination\n')
1009 return None
1009 return None
1010
1010
1011 repo.ui.debug('rebase onto %d starting from %s\n' % (dest, root))
1011 repo.ui.debug('rebase onto %d starting from %s\n' % (dest, root))
1012 state.update(dict.fromkeys(rebaseset, revtodo))
1012 state.update(dict.fromkeys(rebaseset, revtodo))
1013 # Rebase tries to turn <dest> into a parent of <root> while
1013 # Rebase tries to turn <dest> into a parent of <root> while
1014 # preserving the number of parents of rebased changesets:
1014 # preserving the number of parents of rebased changesets:
1015 #
1015 #
1016 # - A changeset with a single parent will always be rebased as a
1016 # - A changeset with a single parent will always be rebased as a
1017 # changeset with a single parent.
1017 # changeset with a single parent.
1018 #
1018 #
1019 # - A merge will be rebased as merge unless its parents are both
1019 # - A merge will be rebased as merge unless its parents are both
1020 # ancestors of <dest> or are themselves in the rebased set and
1020 # ancestors of <dest> or are themselves in the rebased set and
1021 # pruned while rebased.
1021 # pruned while rebased.
1022 #
1022 #
1023 # If one parent of <root> is an ancestor of <dest>, the rebased
1023 # If one parent of <root> is an ancestor of <dest>, the rebased
1024 # version of this parent will be <dest>. This is always true with
1024 # version of this parent will be <dest>. This is always true with
1025 # --base option.
1025 # --base option.
1026 #
1026 #
1027 # Otherwise, we need to *replace* the original parents with
1027 # Otherwise, we need to *replace* the original parents with
1028 # <dest>. This "detaches" the rebased set from its former location
1028 # <dest>. This "detaches" the rebased set from its former location
1029 # and rebases it onto <dest>. Changes introduced by ancestors of
1029 # and rebases it onto <dest>. Changes introduced by ancestors of
1030 # <root> not common with <dest> (the detachset, marked as
1030 # <root> not common with <dest> (the detachset, marked as
1031 # nullmerge) are "removed" from the rebased changesets.
1031 # nullmerge) are "removed" from the rebased changesets.
1032 #
1032 #
1033 # - If <root> has a single parent, set it to <dest>.
1033 # - If <root> has a single parent, set it to <dest>.
1034 #
1034 #
1035 # - If <root> is a merge, we cannot decide which parent to
1035 # - If <root> is a merge, we cannot decide which parent to
1036 # replace, the rebase operation is not clearly defined.
1036 # replace, the rebase operation is not clearly defined.
1037 #
1037 #
1038 # The table below sums up this behavior:
1038 # The table below sums up this behavior:
1039 #
1039 #
1040 # +------------------+----------------------+-------------------------+
1040 # +------------------+----------------------+-------------------------+
1041 # | | one parent | merge |
1041 # | | one parent | merge |
1042 # +------------------+----------------------+-------------------------+
1042 # +------------------+----------------------+-------------------------+
1043 # | parent in | new parent is <dest> | parents in ::<dest> are |
1043 # | parent in | new parent is <dest> | parents in ::<dest> are |
1044 # | ::<dest> | | remapped to <dest> |
1044 # | ::<dest> | | remapped to <dest> |
1045 # +------------------+----------------------+-------------------------+
1045 # +------------------+----------------------+-------------------------+
1046 # | unrelated source | new parent is <dest> | ambiguous, abort |
1046 # | unrelated source | new parent is <dest> | ambiguous, abort |
1047 # +------------------+----------------------+-------------------------+
1047 # +------------------+----------------------+-------------------------+
1048 #
1048 #
1049 # The actual abort is handled by `defineparents`
1049 # The actual abort is handled by `defineparents`
1050 if len(root.parents()) <= 1:
1050 if len(root.parents()) <= 1:
1051 # ancestors of <root> not ancestors of <dest>
1051 # ancestors of <root> not ancestors of <dest>
1052 detachset.update(repo.changelog.findmissingrevs([commonbase.rev()],
1052 detachset.update(repo.changelog.findmissingrevs([commonbase.rev()],
1053 [root.rev()]))
1053 [root.rev()]))
1054 for r in detachset:
1054 for r in detachset:
1055 if r not in state:
1055 if r not in state:
1056 state[r] = nullmerge
1056 state[r] = nullmerge
1057 if len(roots) > 1:
1057 if len(roots) > 1:
1058 # If we have multiple roots, we may have "holes" in the rebase set.
1058 # If we have multiple roots, we may have "holes" in the rebase set.
1059 # Rebase roots that descend from those "holes" should not be detached as
1059 # Rebase roots that descend from those "holes" should not be detached as
1060 # other roots are. We use the special `revignored` to inform rebase that
1060 # other roots are. We use the special `revignored` to inform rebase that
1061 # the revision should be ignored but that `defineparents` should search
1061 # the revision should be ignored but that `defineparents` should search
1062 # for a rebase destination that makes sense regarding the rebased topology.
1062 # for a rebase destination that makes sense regarding the rebased topology.
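# For instance (hypothetical revs): with rebaseset {2, 4} and 3 lying
# between them but not selected, 3 is marked revignored; it is skipped,
# and defineparents later places 4 on top of the rebased 2 instead of
# detaching it onto <dest>.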
1063 rebasedomain = set(repo.revs('%ld::%ld', rebaseset, rebaseset))
1063 rebasedomain = set(repo.revs('%ld::%ld', rebaseset, rebaseset))
1064 for ignored in set(rebasedomain) - set(rebaseset):
1064 for ignored in set(rebasedomain) - set(rebaseset):
1065 state[ignored] = revignored
1065 state[ignored] = revignored
1066 for r in obsoletenotrebased:
1066 for r in obsoletenotrebased:
1067 if obsoletenotrebased[r] is None:
1067 if obsoletenotrebased[r] is None:
1068 state[r] = revpruned
1068 state[r] = revpruned
1069 else:
1069 else:
1070 state[r] = revprecursor
1070 state[r] = revprecursor
1071 return repo['.'].rev(), dest.rev(), state
1071 return repo['.'].rev(), dest.rev(), state
1072
1072
1073 def clearrebased(ui, repo, state, skipped, collapsedas=None):
1073 def clearrebased(ui, repo, state, skipped, collapsedas=None):
1074 """dispose of rebased revision at the end of the rebase
1074 """dispose of rebased revision at the end of the rebase
1075
1075
1076 If `collapsedas` is not None, the rebase was a collapse whose result is the
1076 If `collapsedas` is not None, the rebase was a collapse whose result is the
1077 `collapsedas` node."""
1077 `collapsedas` node."""
1078 if obsolete.isenabled(repo, obsolete.createmarkersopt):
1078 if obsolete.isenabled(repo, obsolete.createmarkersopt):
1079 markers = []
1079 markers = []
1080 for rev, newrev in sorted(state.items()):
1080 for rev, newrev in sorted(state.items()):
1081 if newrev >= 0:
1081 if newrev >= 0:
1082 if rev in skipped:
1082 if rev in skipped:
1083 succs = ()
1083 succs = ()
1084 elif collapsedas is not None:
1084 elif collapsedas is not None:
1085 succs = (repo[collapsedas],)
1085 succs = (repo[collapsedas],)
1086 else:
1086 else:
1087 succs = (repo[newrev],)
1087 succs = (repo[newrev],)
1088 markers.append((repo[rev], succs))
1088 markers.append((repo[rev], succs))
1089 if markers:
1089 if markers:
1090 obsolete.createmarkers(repo, markers)
1090 obsolete.createmarkers(repo, markers)
1091 else:
1091 else:
1092 rebased = [rev for rev in state if state[rev] > nullmerge]
1092 rebased = [rev for rev in state if state[rev] > nullmerge]
1093 if rebased:
1093 if rebased:
1094 stripped = []
1094 stripped = []
1095 for root in repo.set('roots(%ld)', rebased):
1095 for root in repo.set('roots(%ld)', rebased):
1096 if set(repo.changelog.descendants([root.rev()])) - set(state):
1096 if set(repo.changelog.descendants([root.rev()])) - set(state):
1097 ui.warn(_("warning: new changesets detected "
1097 ui.warn(_("warning: new changesets detected "
1098 "on source branch, not stripping\n"))
1098 "on source branch, not stripping\n"))
1099 else:
1099 else:
1100 stripped.append(root.node())
1100 stripped.append(root.node())
1101 if stripped:
1101 if stripped:
1102 # backup the old csets by default
1102 # backup the old csets by default
1103 repair.strip(ui, repo, stripped, "all")
1103 repair.strip(ui, repo, stripped, "all")
1104
1104
1105
1105
1106 def pullrebase(orig, ui, repo, *args, **opts):
1106 def pullrebase(orig, ui, repo, *args, **opts):
1107 'Call rebase after pull if the latter has been invoked with --rebase'
1107 'Call rebase after pull if the latter has been invoked with --rebase'
1108 ret = None
1108 ret = None
1109 if opts.get('rebase'):
1109 if opts.get('rebase'):
1110 wlock = lock = None
1110 wlock = lock = None
1111 try:
1111 try:
1112 wlock = repo.wlock()
1112 wlock = repo.wlock()
1113 lock = repo.lock()
1113 lock = repo.lock()
1114 if opts.get('update'):
1114 if opts.get('update'):
1115 del opts['update']
1115 del opts['update']
1116 ui.debug('--update and --rebase are not compatible, ignoring '
1116 ui.debug('--update and --rebase are not compatible, ignoring '
1117 'the update flag\n')
1117 'the update flag\n')
1118
1118
1119 movemarkfrom = repo['.'].node()
1119 movemarkfrom = repo['.'].node()
1120 revsprepull = len(repo)
1120 revsprepull = len(repo)
1121 origpostincoming = commands.postincoming
1121 origpostincoming = commands.postincoming
1122 def _dummy(*args, **kwargs):
1122 def _dummy(*args, **kwargs):
1123 pass
1123 pass
1124 commands.postincoming = _dummy
1124 commands.postincoming = _dummy
1125 try:
1125 try:
1126 ret = orig(ui, repo, *args, **opts)
1126 ret = orig(ui, repo, *args, **opts)
1127 finally:
1127 finally:
1128 commands.postincoming = origpostincoming
1128 commands.postincoming = origpostincoming
1129 revspostpull = len(repo)
1129 revspostpull = len(repo)
1130 if revspostpull > revsprepull:
1130 if revspostpull > revsprepull:
1131 # the --rev option from pull conflicts with rebase's own --rev,
1131 # the --rev option from pull conflicts with rebase's own --rev,
1132 # so drop it
1132 # so drop it
1133 if 'rev' in opts:
1133 if 'rev' in opts:
1134 del opts['rev']
1134 del opts['rev']
1135 # positional argument from pull conflicts with rebase's own
1135 # positional argument from pull conflicts with rebase's own
1136 # --source.
1136 # --source.
1137 if 'source' in opts:
1137 if 'source' in opts:
1138 del opts['source']
1138 del opts['source']
1139 rebase(ui, repo, **opts)
1139 rebase(ui, repo, **opts)
1140 branch = repo[None].branch()
1140 branch = repo[None].branch()
1141 dest = repo[branch].rev()
1141 dest = repo[branch].rev()
1142 if dest != repo['.'].rev():
1142 if dest != repo['.'].rev():
1143 # there was nothing to rebase, so we force an update
1143 # there was nothing to rebase, so we force an update
1144 hg.update(repo, dest)
1144 hg.update(repo, dest)
1145 if bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
1145 if bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
1146 ui.status(_("updating bookmark %s\n")
1146 ui.status(_("updating bookmark %s\n")
1147 % repo._activebookmark)
1147 % repo._activebookmark)
1148 finally:
1148 finally:
1149 release(lock, wlock)
1149 release(lock, wlock)
1150 else:
1150 else:
1151 if opts.get('tool'):
1151 if opts.get('tool'):
1152 raise error.Abort(_('--tool can only be used with --rebase'))
1152 raise error.Abort(_('--tool can only be used with --rebase'))
1153 ret = orig(ui, repo, *args, **opts)
1153 ret = orig(ui, repo, *args, **opts)
1154
1154
1155 return ret
1155 return ret
1156
1156
1157 def _setrebasesetvisibility(repo, revs):
1157 def _setrebasesetvisibility(repo, revs):
1158 """store the currently rebased set on the repo object
1158 """store the currently rebased set on the repo object
1159
1159
1160 This is used by another function to prevent rebased revisions from becoming
1160 This is used by another function to prevent rebased revisions from becoming
1161 hidden (see issue4505)"""
1161 hidden (see issue4505)"""
1162 repo = repo.unfiltered()
1162 repo = repo.unfiltered()
1163 revs = set(revs)
1163 revs = set(revs)
1164 repo._rebaseset = revs
1164 repo._rebaseset = revs
1165 # invalidate cache if visibility changes
1165 # invalidate cache if visibility changes
1166 hiddens = repo.filteredrevcache.get('visible', set())
1166 hiddens = repo.filteredrevcache.get('visible', set())
1167 if revs & hiddens:
1167 if revs & hiddens:
1168 repo.invalidatevolatilesets()
1168 repo.invalidatevolatilesets()
1169
1169
1170 def _clearrebasesetvisibiliy(repo):
1170 def _clearrebasesetvisibiliy(repo):
1171 """remove rebaseset data from the repo"""
1171 """remove rebaseset data from the repo"""
1172 repo = repo.unfiltered()
1172 repo = repo.unfiltered()
1173 if '_rebaseset' in vars(repo):
1173 if '_rebaseset' in vars(repo):
1174 del repo._rebaseset
1174 del repo._rebaseset
1175
1175
1176 def _rebasedvisible(orig, repo):
1176 def _rebasedvisible(orig, repo):
1177 """ensure rebased revs stay visible (see issue4505)"""
1177 """ensure rebased revs stay visible (see issue4505)"""
1178 blockers = orig(repo)
1178 blockers = orig(repo)
1179 blockers.update(getattr(repo, '_rebaseset', ()))
1179 blockers.update(getattr(repo, '_rebaseset', ()))
1180 return blockers
1180 return blockers
1181
1181
1182 def _computeobsoletenotrebased(repo, rebasesetrevs, dest):
1182 def _computeobsoletenotrebased(repo, rebasesetrevs, dest):
1183 """return a mapping obsolete => successor for all obsolete nodes to be
1183 """return a mapping obsolete => successor for all obsolete nodes to be
1184 rebased that have a successor in the destination
1184 rebased that have a successor in the destination
1185
1185
1186 obsolete => None entries in the mapping indicate nodes with no successor"""
1186 obsolete => None entries in the mapping indicate nodes with no successor"""
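# For example (hypothetical revs): {3: 10, 7: None} means rev 3 is
# obsolete and its successor, rev 10, is already an ancestor of dest,
# while rev 7 was pruned and has no successor; neither gets rebased.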
1187 obsoletenotrebased = {}
1187 obsoletenotrebased = {}
1188
1188
1189 # Build a mapping successor => obsolete nodes for the obsolete
1189 # Build a mapping successor => obsolete nodes for the obsolete
1190 # nodes to be rebased
1190 # nodes to be rebased
1191 allsuccessors = {}
1191 allsuccessors = {}
1192 cl = repo.changelog
1192 cl = repo.changelog
1193 for r in rebasesetrevs:
1193 for r in rebasesetrevs:
1194 n = repo[r]
1194 n = repo[r]
1195 if n.obsolete():
1195 if n.obsolete():
1196 node = cl.node(r)
1196 node = cl.node(r)
1197 for s in obsolete.allsuccessors(repo.obsstore, [node]):
1197 for s in obsolete.allsuccessors(repo.obsstore, [node]):
1198 try:
1198 try:
1199 allsuccessors[cl.rev(s)] = cl.rev(node)
1199 allsuccessors[cl.rev(s)] = cl.rev(node)
1200 except LookupError:
1200 except LookupError:
1201 pass
1201 pass
1202
1202
1203 if allsuccessors:
1203 if allsuccessors:
1204 # Look for successors of obsolete nodes to be rebased among
1204 # Look for successors of obsolete nodes to be rebased among
1205 # the ancestors of dest
1205 # the ancestors of dest
1206 ancs = cl.ancestors([repo[dest].rev()],
1206 ancs = cl.ancestors([repo[dest].rev()],
1207 stoprev=min(allsuccessors),
1207 stoprev=min(allsuccessors),
1208 inclusive=True)
1208 inclusive=True)
1209 for s in allsuccessors:
1209 for s in allsuccessors:
1210 if s in ancs:
1210 if s in ancs:
1211 obsoletenotrebased[allsuccessors[s]] = s
1211 obsoletenotrebased[allsuccessors[s]] = s
1212 elif (s == allsuccessors[s] and
1212 elif (s == allsuccessors[s] and
1213 allsuccessors.values().count(s) == 1):
1213 allsuccessors.values().count(s) == 1):
1214 # plain prune
1214 # plain prune
1215 obsoletenotrebased[s] = None
1215 obsoletenotrebased[s] = None
1216
1216
1217 return obsoletenotrebased
1217 return obsoletenotrebased
1218
1218
1219 def summaryhook(ui, repo):
1219 def summaryhook(ui, repo):
1220 if not os.path.exists(repo.join('rebasestate')):
1220 if not os.path.exists(repo.join('rebasestate')):
1221 return
1221 return
1222 try:
1222 try:
1223 state = restorestatus(repo)[2]
1223 state = restorestatus(repo)[2]
1224 except error.RepoLookupError:
1224 except error.RepoLookupError:
1225 # i18n: column positioning for "hg summary"
1225 # i18n: column positioning for "hg summary"
1226 msg = _('rebase: (use "hg rebase --abort" to clear broken state)\n')
1226 msg = _('rebase: (use "hg rebase --abort" to clear broken state)\n')
1227 ui.write(msg)
1227 ui.write(msg)
1228 return
1228 return
1229 numrebased = len([i for i in state.itervalues() if i >= 0])
1229 numrebased = len([i for i in state.itervalues() if i >= 0])
1230 # i18n: column positioning for "hg summary"
1230 # i18n: column positioning for "hg summary"
1231 ui.write(_('rebase: %s, %s (rebase --continue)\n') %
1231 ui.write(_('rebase: %s, %s (rebase --continue)\n') %
1232 (ui.label(_('%d rebased'), 'rebase.rebased') % numrebased,
1232 (ui.label(_('%d rebased'), 'rebase.rebased') % numrebased,
1233 ui.label(_('%d remaining'), 'rebase.remaining') %
1233 ui.label(_('%d remaining'), 'rebase.remaining') %
1234 (len(state) - numrebased)))
1234 (len(state) - numrebased)))
1235
1235
1236 def uisetup(ui):
1236 def uisetup(ui):
1237 # Replace pull with a decorator to provide the --rebase option
1237 # Replace pull with a decorator to provide the --rebase option
1238 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
1238 entry = extensions.wrapcommand(commands.table, 'pull', pullrebase)
1239 entry[1].append(('', 'rebase', None,
1239 entry[1].append(('', 'rebase', None,
1240 _("rebase working directory to branch head")))
1240 _("rebase working directory to branch head")))
1241 entry[1].append(('t', 'tool', '',
1241 entry[1].append(('t', 'tool', '',
1242 _("specify merge tool for rebase")))
1242 _("specify merge tool for rebase")))
1243 cmdutil.summaryhooks.add('rebase', summaryhook)
1243 cmdutil.summaryhooks.add('rebase', summaryhook)
1244 cmdutil.unfinishedstates.append(
1244 cmdutil.unfinishedstates.append(
1245 ['rebasestate', False, False, _('rebase in progress'),
1245 ['rebasestate', False, False, _('rebase in progress'),
1246 _("use 'hg rebase --continue' or 'hg rebase --abort'")])
1246 _("use 'hg rebase --continue' or 'hg rebase --abort'")])
1247 # ensure rebased rev are not hidden
1247 # ensure rebased rev are not hidden
1248 extensions.wrapfunction(repoview, '_getdynamicblockers', _rebasedvisible)
1248 extensions.wrapfunction(repoview, '_getdynamicblockers', _rebasedvisible)
1249 revset.symbols['_destrebase'] = _revsetdestrebase
1249 revset.symbols['_destrebase'] = _revsetdestrebase
@@ -1,721 +1,721 b''
1 # Patch transplanting extension for Mercurial
1 # Patch transplanting extension for Mercurial
2 #
2 #
3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''command to transplant changesets from another branch
8 '''command to transplant changesets from another branch
9
9
10 This extension allows you to transplant changes to another parent revision,
10 This extension allows you to transplant changes to another parent revision,
11 possibly in another repository. The transplant is done using 'diff' patches.
11 possibly in another repository. The transplant is done using 'diff' patches.
12
12
13 Transplanted patches are recorded in .hg/transplant/transplants, as a
13 Transplanted patches are recorded in .hg/transplant/transplants, as a
14 map from a changeset hash to its hash in the source repository.
14 map from a changeset hash to its hash in the source repository.
15 '''
15 '''
16
16
17 from mercurial.i18n import _
17 from mercurial.i18n import _
18 import os, tempfile
18 import os, tempfile
19 from mercurial.node import short
19 from mercurial.node import short
20 from mercurial import bundlerepo, hg, merge, match
20 from mercurial import bundlerepo, hg, merge, match
21 from mercurial import patch, revlog, scmutil, util, error, cmdutil
21 from mercurial import patch, revlog, scmutil, util, error, cmdutil
22 from mercurial import revset, templatekw, exchange
22 from mercurial import revset, templatekw, exchange
23 from mercurial import lock as lockmod
23 from mercurial import lock as lockmod
24
24
25 class TransplantError(error.Abort):
25 class TransplantError(error.Abort):
26 pass
26 pass
27
27
28 cmdtable = {}
28 cmdtable = {}
29 command = cmdutil.command(cmdtable)
29 command = cmdutil.command(cmdtable)
30 # Note for extension authors: ONLY specify testedwith = 'internal' for
30 # Note for extension authors: ONLY specify testedwith = 'internal' for
31 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
31 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
32 # be specifying the version(s) of Mercurial they are tested with, or
32 # be specifying the version(s) of Mercurial they are tested with, or
33 # leave the attribute unspecified.
33 # leave the attribute unspecified.
34 testedwith = 'internal'
34 testedwith = 'internal'
35
35
36 class transplantentry(object):
36 class transplantentry(object):
37 def __init__(self, lnode, rnode):
37 def __init__(self, lnode, rnode):
38 self.lnode = lnode
38 self.lnode = lnode
39 self.rnode = rnode
39 self.rnode = rnode
40
40
41 class transplants(object):
41 class transplants(object):
42 def __init__(self, path=None, transplantfile=None, opener=None):
42 def __init__(self, path=None, transplantfile=None, opener=None):
43 self.path = path
43 self.path = path
44 self.transplantfile = transplantfile
44 self.transplantfile = transplantfile
45 self.opener = opener
45 self.opener = opener
46
46
47 if not opener:
47 if not opener:
48 self.opener = scmutil.opener(self.path)
48 self.opener = scmutil.opener(self.path)
49 self.transplants = {}
49 self.transplants = {}
50 self.dirty = False
50 self.dirty = False
51 self.read()
51 self.read()
52
52
53 def read(self):
53 def read(self):
54 abspath = os.path.join(self.path, self.transplantfile)
54 abspath = os.path.join(self.path, self.transplantfile)
55 if self.transplantfile and os.path.exists(abspath):
55 if self.transplantfile and os.path.exists(abspath):
56 for line in self.opener.read(self.transplantfile).splitlines():
56 for line in self.opener.read(self.transplantfile).splitlines():
57 lnode, rnode = map(revlog.bin, line.split(':'))
57 lnode, rnode = map(revlog.bin, line.split(':'))
58 list = self.transplants.setdefault(rnode, [])
58 list = self.transplants.setdefault(rnode, [])
59 list.append(transplantentry(lnode, rnode))
59 list.append(transplantentry(lnode, rnode))
60
60
61 def write(self):
61 def write(self):
62 if self.dirty and self.transplantfile:
62 if self.dirty and self.transplantfile:
63 if not os.path.isdir(self.path):
63 if not os.path.isdir(self.path):
64 os.mkdir(self.path)
64 os.mkdir(self.path)
65 fp = self.opener(self.transplantfile, 'w')
65 fp = self.opener(self.transplantfile, 'w')
66 for list in self.transplants.itervalues():
66 for list in self.transplants.itervalues():
67 for t in list:
67 for t in list:
68 l, r = map(revlog.hex, (t.lnode, t.rnode))
68 l, r = map(revlog.hex, (t.lnode, t.rnode))
69 fp.write(l + ':' + r + '\n')
69 fp.write(l + ':' + r + '\n')
70 fp.close()
70 fp.close()
71 self.dirty = False
71 self.dirty = False
72
72
73 def get(self, rnode):
73 def get(self, rnode):
74 return self.transplants.get(rnode) or []
74 return self.transplants.get(rnode) or []
75
75
76 def set(self, lnode, rnode):
76 def set(self, lnode, rnode):
77 list = self.transplants.setdefault(rnode, [])
77 list = self.transplants.setdefault(rnode, [])
78 list.append(transplantentry(lnode, rnode))
78 list.append(transplantentry(lnode, rnode))
79 self.dirty = True
79 self.dirty = True
80
80
81 def remove(self, transplant):
81 def remove(self, transplant):
82 list = self.transplants.get(transplant.rnode)
82 list = self.transplants.get(transplant.rnode)
83 if list:
83 if list:
84 del list[list.index(transplant)]
84 del list[list.index(transplant)]
85 self.dirty = True
85 self.dirty = True
86
86
87 class transplanter(object):
87 class transplanter(object):
88 def __init__(self, ui, repo, opts):
88 def __init__(self, ui, repo, opts):
89 self.ui = ui
89 self.ui = ui
90 self.path = repo.join('transplant')
90 self.path = repo.join('transplant')
91 self.opener = scmutil.opener(self.path)
91 self.opener = scmutil.opener(self.path)
92 self.transplants = transplants(self.path, 'transplants',
92 self.transplants = transplants(self.path, 'transplants',
93 opener=self.opener)
93 opener=self.opener)
94 def getcommiteditor():
94 def getcommiteditor():
95 editform = cmdutil.mergeeditform(repo[None], 'transplant')
95 editform = cmdutil.mergeeditform(repo[None], 'transplant')
96 return cmdutil.getcommiteditor(editform=editform, **opts)
96 return cmdutil.getcommiteditor(editform=editform, **opts)
97 self.getcommiteditor = getcommiteditor
97 self.getcommiteditor = getcommiteditor
98
98
99 def applied(self, repo, node, parent):
99 def applied(self, repo, node, parent):
100 '''returns True if a node is already an ancestor of parent
100 '''returns True if a node is already an ancestor of parent
101 or is the parent itself, or has already been transplanted'''
101 or is the parent itself, or has already been transplanted'''
102 if hasnode(repo, parent):
102 if hasnode(repo, parent):
103 parentrev = repo.changelog.rev(parent)
103 parentrev = repo.changelog.rev(parent)
104 if hasnode(repo, node):
104 if hasnode(repo, node):
105 rev = repo.changelog.rev(node)
105 rev = repo.changelog.rev(node)
106 reachable = repo.changelog.ancestors([parentrev], rev,
106 reachable = repo.changelog.ancestors([parentrev], rev,
107 inclusive=True)
107 inclusive=True)
108 if rev in reachable:
108 if rev in reachable:
109 return True
109 return True
110 for t in self.transplants.get(node):
110 for t in self.transplants.get(node):
111 # it might have been stripped
111 # it might have been stripped
112 if not hasnode(repo, t.lnode):
112 if not hasnode(repo, t.lnode):
113 self.transplants.remove(t)
113 self.transplants.remove(t)
114 return False
114 return False
115 lnoderev = repo.changelog.rev(t.lnode)
115 lnoderev = repo.changelog.rev(t.lnode)
116 if lnoderev in repo.changelog.ancestors([parentrev], lnoderev,
116 if lnoderev in repo.changelog.ancestors([parentrev], lnoderev,
117 inclusive=True):
117 inclusive=True):
118 return True
118 return True
119 return False
119 return False
120
120
121 def apply(self, repo, source, revmap, merges, opts=None):
121 def apply(self, repo, source, revmap, merges, opts=None):
122 '''apply the revisions in revmap one by one in revision order'''
122 '''apply the revisions in revmap one by one in revision order'''
123 if opts is None:
123 if opts is None:
124 opts = {}
124 opts = {}
125 revs = sorted(revmap)
125 revs = sorted(revmap)
126 p1, p2 = repo.dirstate.parents()
126 p1, p2 = repo.dirstate.parents()
127 pulls = []
127 pulls = []
128 diffopts = patch.difffeatureopts(self.ui, opts)
128 diffopts = patch.difffeatureopts(self.ui, opts)
129 diffopts.git = True
129 diffopts.git = True
130
130
131 lock = tr = None
131 lock = tr = None
132 try:
132 try:
133 lock = repo.lock()
133 lock = repo.lock()
134 tr = repo.transaction('transplant')
134 tr = repo.transaction('transplant')
135 for rev in revs:
135 for rev in revs:
136 node = revmap[rev]
136 node = revmap[rev]
137 revstr = '%s:%s' % (rev, short(node))
137 revstr = '%s:%s' % (rev, short(node))
138
138
139 if self.applied(repo, node, p1):
139 if self.applied(repo, node, p1):
140 self.ui.warn(_('skipping already applied revision %s\n') %
140 self.ui.warn(_('skipping already applied revision %s\n') %
141 revstr)
141 revstr)
142 continue
142 continue
143
143
144 parents = source.changelog.parents(node)
144 parents = source.changelog.parents(node)
145 if not (opts.get('filter') or opts.get('log')):
145 if not (opts.get('filter') or opts.get('log')):
146 # If the changeset parent is the same as the
146 # If the changeset parent is the same as the
147 # wdir's parent, just pull it.
147 # wdir's parent, just pull it.
148 if parents[0] == p1:
148 if parents[0] == p1:
149 pulls.append(node)
149 pulls.append(node)
150 p1 = node
150 p1 = node
151 continue
151 continue
152 if pulls:
152 if pulls:
153 if source != repo:
153 if source != repo:
154 exchange.pull(repo, source.peer(), heads=pulls)
154 exchange.pull(repo, source.peer(), heads=pulls)
155 merge.update(repo, pulls[-1], False, False, None)
155 merge.update(repo, pulls[-1], False, False)
156 p1, p2 = repo.dirstate.parents()
156 p1, p2 = repo.dirstate.parents()
157 pulls = []
157 pulls = []
158
158
159 domerge = False
159 domerge = False
160 if node in merges:
160 if node in merges:
161 # pulling all the merge revs at once would mean we
161 # pulling all the merge revs at once would mean we
162 # couldn't transplant after the latest one even if
162 # couldn't transplant after the latest one even if
163 # transplants before it fail.
163 # transplants before it fail.
164 domerge = True
164 domerge = True
165 if not hasnode(repo, node):
165 if not hasnode(repo, node):
166 exchange.pull(repo, source.peer(), heads=[node])
166 exchange.pull(repo, source.peer(), heads=[node])
167
167
168 skipmerge = False
168 skipmerge = False
169 if parents[1] != revlog.nullid:
169 if parents[1] != revlog.nullid:
170 if not opts.get('parent'):
170 if not opts.get('parent'):
171 self.ui.note(_('skipping merge changeset %s:%s\n')
171 self.ui.note(_('skipping merge changeset %s:%s\n')
172 % (rev, short(node)))
172 % (rev, short(node)))
173 skipmerge = True
173 skipmerge = True
174 else:
174 else:
175 parent = source.lookup(opts['parent'])
175 parent = source.lookup(opts['parent'])
176 if parent not in parents:
176 if parent not in parents:
177 raise error.Abort(_('%s is not a parent of %s') %
177 raise error.Abort(_('%s is not a parent of %s') %
178 (short(parent), short(node)))
178 (short(parent), short(node)))
179 else:
179 else:
180 parent = parents[0]
180 parent = parents[0]
181
181
182 if skipmerge:
182 if skipmerge:
183 patchfile = None
183 patchfile = None
184 else:
184 else:
185 fd, patchfile = tempfile.mkstemp(prefix='hg-transplant-')
185 fd, patchfile = tempfile.mkstemp(prefix='hg-transplant-')
186 fp = os.fdopen(fd, 'w')
186 fp = os.fdopen(fd, 'w')
187 gen = patch.diff(source, parent, node, opts=diffopts)
187 gen = patch.diff(source, parent, node, opts=diffopts)
188 for chunk in gen:
188 for chunk in gen:
189 fp.write(chunk)
189 fp.write(chunk)
190 fp.close()
190 fp.close()
191
191
192 del revmap[rev]
192 del revmap[rev]
193 if patchfile or domerge:
193 if patchfile or domerge:
194 try:
194 try:
195 try:
195 try:
196 n = self.applyone(repo, node,
196 n = self.applyone(repo, node,
197 source.changelog.read(node),
197 source.changelog.read(node),
198 patchfile, merge=domerge,
198 patchfile, merge=domerge,
199 log=opts.get('log'),
199 log=opts.get('log'),
200 filter=opts.get('filter'))
200 filter=opts.get('filter'))
201 except TransplantError:
201 except TransplantError:
202 # Do not rollback, it is up to the user to
202 # Do not rollback, it is up to the user to
203 # fix the merge or cancel everything
203 # fix the merge or cancel everything
204 tr.close()
204 tr.close()
205 raise
205 raise
206 if n and domerge:
206 if n and domerge:
207 self.ui.status(_('%s merged at %s\n') % (revstr,
207 self.ui.status(_('%s merged at %s\n') % (revstr,
208 short(n)))
208 short(n)))
209 elif n:
209 elif n:
210 self.ui.status(_('%s transplanted to %s\n')
210 self.ui.status(_('%s transplanted to %s\n')
211 % (short(node),
211 % (short(node),
212 short(n)))
212 short(n)))
213 finally:
213 finally:
214 if patchfile:
214 if patchfile:
215 os.unlink(patchfile)
215 os.unlink(patchfile)
216 tr.close()
216 tr.close()
217 if pulls:
217 if pulls:
218 exchange.pull(repo, source.peer(), heads=pulls)
218 exchange.pull(repo, source.peer(), heads=pulls)
219 merge.update(repo, pulls[-1], False, False, None)
219 merge.update(repo, pulls[-1], False, False)
220 finally:
220 finally:
221 self.saveseries(revmap, merges)
221 self.saveseries(revmap, merges)
222 self.transplants.write()
222 self.transplants.write()
223 if tr:
223 if tr:
224 tr.release()
224 tr.release()
225 if lock:
225 if lock:
226 lock.release()
226 lock.release()
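The two merge.update() calls above are the transplant side of this changeset: the trailing positional argument (the partial function, always None here, meaning "update everything") is dropped, because merge.update() now selects files through a matcher instead of a partial function. Callers that really did restrict the update by file build a matcher over the same file list, as the cmdutil.py hunk further down does with scmutil.matchfiles(). A minimal sketch of that migration pattern, offered as an illustration rather than as part of this commit::

    from mercurial import scmutil

    def predicate_to_matcher(repo, files):
        # old style: partial = lambda f: f in files, passed positionally
        # new style: an explicit matcher, passed as merge.update(..., matcher=m)
        return scmutil.matchfiles(repo, sorted(files))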
227
227
228 def filter(self, filter, node, changelog, patchfile):
228 def filter(self, filter, node, changelog, patchfile):
229 '''arbitrarily rewrite changeset before applying it'''
229 '''arbitrarily rewrite changeset before applying it'''
230
230
231 self.ui.status(_('filtering %s\n') % patchfile)
231 self.ui.status(_('filtering %s\n') % patchfile)
232 user, date, msg = (changelog[1], changelog[2], changelog[4])
232 user, date, msg = (changelog[1], changelog[2], changelog[4])
233 fd, headerfile = tempfile.mkstemp(prefix='hg-transplant-')
233 fd, headerfile = tempfile.mkstemp(prefix='hg-transplant-')
234 fp = os.fdopen(fd, 'w')
234 fp = os.fdopen(fd, 'w')
235 fp.write("# HG changeset patch\n")
235 fp.write("# HG changeset patch\n")
236 fp.write("# User %s\n" % user)
236 fp.write("# User %s\n" % user)
237 fp.write("# Date %d %d\n" % date)
237 fp.write("# Date %d %d\n" % date)
238 fp.write(msg + '\n')
238 fp.write(msg + '\n')
239 fp.close()
239 fp.close()
240
240
241 try:
241 try:
242 self.ui.system('%s %s %s' % (filter, util.shellquote(headerfile),
242 self.ui.system('%s %s %s' % (filter, util.shellquote(headerfile),
243 util.shellquote(patchfile)),
243 util.shellquote(patchfile)),
244 environ={'HGUSER': changelog[1],
244 environ={'HGUSER': changelog[1],
245 'HGREVISION': revlog.hex(node),
245 'HGREVISION': revlog.hex(node),
246 },
246 },
247 onerr=error.Abort, errprefix=_('filter failed'))
247 onerr=error.Abort, errprefix=_('filter failed'))
248 user, date, msg = self.parselog(file(headerfile))[1:4]
248 user, date, msg = self.parselog(file(headerfile))[1:4]
249 finally:
249 finally:
250 os.unlink(headerfile)
250 os.unlink(headerfile)
251
251
252 return (user, date, msg)
252 return (user, date, msg)
253
253
254 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
254 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
255 filter=None):
255 filter=None):
256 '''apply the patch in patchfile to the repository as a transplant'''
256 '''apply the patch in patchfile to the repository as a transplant'''
257 (manifest, user, (time, timezone), files, message) = cl[:5]
257 (manifest, user, (time, timezone), files, message) = cl[:5]
258 date = "%d %d" % (time, timezone)
258 date = "%d %d" % (time, timezone)
259 extra = {'transplant_source': node}
259 extra = {'transplant_source': node}
260 if filter:
260 if filter:
261 (user, date, message) = self.filter(filter, node, cl, patchfile)
261 (user, date, message) = self.filter(filter, node, cl, patchfile)
262
262
263 if log:
263 if log:
264 # we don't translate messages inserted into commits
264 # we don't translate messages inserted into commits
265 message += '\n(transplanted from %s)' % revlog.hex(node)
265 message += '\n(transplanted from %s)' % revlog.hex(node)
266
266
267 self.ui.status(_('applying %s\n') % short(node))
267 self.ui.status(_('applying %s\n') % short(node))
268 self.ui.note('%s %s\n%s\n' % (user, date, message))
268 self.ui.note('%s %s\n%s\n' % (user, date, message))
269
269
270 if not patchfile and not merge:
270 if not patchfile and not merge:
271 raise error.Abort(_('can only omit patchfile if merging'))
271 raise error.Abort(_('can only omit patchfile if merging'))
272 if patchfile:
272 if patchfile:
273 try:
273 try:
274 files = set()
274 files = set()
275 patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
275 patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
276 files = list(files)
276 files = list(files)
277 except Exception as inst:
277 except Exception as inst:
278 seriespath = os.path.join(self.path, 'series')
278 seriespath = os.path.join(self.path, 'series')
279 if os.path.exists(seriespath):
279 if os.path.exists(seriespath):
280 os.unlink(seriespath)
280 os.unlink(seriespath)
281 p1 = repo.dirstate.p1()
281 p1 = repo.dirstate.p1()
282 p2 = node
282 p2 = node
283 self.log(user, date, message, p1, p2, merge=merge)
283 self.log(user, date, message, p1, p2, merge=merge)
284 self.ui.write(str(inst) + '\n')
284 self.ui.write(str(inst) + '\n')
285 raise TransplantError(_('fix up the merge and run '
285 raise TransplantError(_('fix up the merge and run '
286 'hg transplant --continue'))
286 'hg transplant --continue'))
287 else:
287 else:
288 files = None
288 files = None
289 if merge:
289 if merge:
290 p1, p2 = repo.dirstate.parents()
290 p1, p2 = repo.dirstate.parents()
291 repo.setparents(p1, node)
291 repo.setparents(p1, node)
292 m = match.always(repo.root, '')
292 m = match.always(repo.root, '')
293 else:
293 else:
294 m = match.exact(repo.root, '', files)
294 m = match.exact(repo.root, '', files)
295
295
296 n = repo.commit(message, user, date, extra=extra, match=m,
296 n = repo.commit(message, user, date, extra=extra, match=m,
297 editor=self.getcommiteditor())
297 editor=self.getcommiteditor())
298 if not n:
298 if not n:
299 self.ui.warn(_('skipping emptied changeset %s\n') % short(node))
299 self.ui.warn(_('skipping emptied changeset %s\n') % short(node))
300 return None
300 return None
301 if not merge:
301 if not merge:
302 self.transplants.set(n, node)
302 self.transplants.set(n, node)
303
303
304 return n
304 return n
305
305
306 def resume(self, repo, source, opts):
306 def resume(self, repo, source, opts):
307 '''recover last transaction and apply remaining changesets'''
307 '''recover last transaction and apply remaining changesets'''
308 if os.path.exists(os.path.join(self.path, 'journal')):
308 if os.path.exists(os.path.join(self.path, 'journal')):
309 n, node = self.recover(repo, source, opts)
309 n, node = self.recover(repo, source, opts)
310 if n:
310 if n:
311 self.ui.status(_('%s transplanted as %s\n') % (short(node),
311 self.ui.status(_('%s transplanted as %s\n') % (short(node),
312 short(n)))
312 short(n)))
313 else:
313 else:
314 self.ui.status(_('%s skipped due to empty diff\n')
314 self.ui.status(_('%s skipped due to empty diff\n')
315 % (short(node),))
315 % (short(node),))
316 seriespath = os.path.join(self.path, 'series')
316 seriespath = os.path.join(self.path, 'series')
317 if not os.path.exists(seriespath):
317 if not os.path.exists(seriespath):
318 self.transplants.write()
318 self.transplants.write()
319 return
319 return
320 nodes, merges = self.readseries()
320 nodes, merges = self.readseries()
321 revmap = {}
321 revmap = {}
322 for n in nodes:
322 for n in nodes:
323 revmap[source.changelog.rev(n)] = n
323 revmap[source.changelog.rev(n)] = n
324 os.unlink(seriespath)
324 os.unlink(seriespath)
325
325
326 self.apply(repo, source, revmap, merges, opts)
326 self.apply(repo, source, revmap, merges, opts)
327
327
328 def recover(self, repo, source, opts):
328 def recover(self, repo, source, opts):
329 '''commit working directory using journal metadata'''
329 '''commit working directory using journal metadata'''
330 node, user, date, message, parents = self.readlog()
330 node, user, date, message, parents = self.readlog()
331 merge = False
331 merge = False
332
332
333 if not user or not date or not message or not parents[0]:
333 if not user or not date or not message or not parents[0]:
334 raise error.Abort(_('transplant log file is corrupt'))
334 raise error.Abort(_('transplant log file is corrupt'))
335
335
336 parent = parents[0]
336 parent = parents[0]
337 if len(parents) > 1:
337 if len(parents) > 1:
338 if opts.get('parent'):
338 if opts.get('parent'):
339 parent = source.lookup(opts['parent'])
339 parent = source.lookup(opts['parent'])
340 if parent not in parents:
340 if parent not in parents:
341 raise error.Abort(_('%s is not a parent of %s') %
341 raise error.Abort(_('%s is not a parent of %s') %
342 (short(parent), short(node)))
342 (short(parent), short(node)))
343 else:
343 else:
344 merge = True
344 merge = True
345
345
346 extra = {'transplant_source': node}
346 extra = {'transplant_source': node}
347 try:
347 try:
348 p1, p2 = repo.dirstate.parents()
348 p1, p2 = repo.dirstate.parents()
349 if p1 != parent:
349 if p1 != parent:
350 raise error.Abort(_('working directory not at transplant '
350 raise error.Abort(_('working directory not at transplant '
351 'parent %s') % revlog.hex(parent))
351 'parent %s') % revlog.hex(parent))
352 if merge:
352 if merge:
353 repo.setparents(p1, parents[1])
353 repo.setparents(p1, parents[1])
354 modified, added, removed, deleted = repo.status()[:4]
354 modified, added, removed, deleted = repo.status()[:4]
355 if merge or modified or added or removed or deleted:
355 if merge or modified or added or removed or deleted:
356 n = repo.commit(message, user, date, extra=extra,
356 n = repo.commit(message, user, date, extra=extra,
357 editor=self.getcommiteditor())
357 editor=self.getcommiteditor())
358 if not n:
358 if not n:
359 raise error.Abort(_('commit failed'))
359 raise error.Abort(_('commit failed'))
360 if not merge:
360 if not merge:
361 self.transplants.set(n, node)
361 self.transplants.set(n, node)
362 else:
362 else:
363 n = None
363 n = None
364 self.unlog()
364 self.unlog()
365
365
366 return n, node
366 return n, node
367 finally:
367 finally:
368 # TODO: get rid of this meaningless try/finally enclosing.
368 # TODO: get rid of this meaningless try/finally enclosing.
369 # this is kept only to reduce changes in a patch.
369 # this is kept only to reduce changes in a patch.
370 pass
370 pass
371
371
372 def readseries(self):
372 def readseries(self):
373 nodes = []
373 nodes = []
374 merges = []
374 merges = []
375 cur = nodes
375 cur = nodes
376 for line in self.opener.read('series').splitlines():
376 for line in self.opener.read('series').splitlines():
377 if line.startswith('# Merges'):
377 if line.startswith('# Merges'):
378 cur = merges
378 cur = merges
379 continue
379 continue
380 cur.append(revlog.bin(line))
380 cur.append(revlog.bin(line))
381
381
382 return (nodes, merges)
382 return (nodes, merges)
383
383
384 def saveseries(self, revmap, merges):
384 def saveseries(self, revmap, merges):
385 if not revmap:
385 if not revmap:
386 return
386 return
387
387
388 if not os.path.isdir(self.path):
388 if not os.path.isdir(self.path):
389 os.mkdir(self.path)
389 os.mkdir(self.path)
390 series = self.opener('series', 'w')
390 series = self.opener('series', 'w')
391 for rev in sorted(revmap):
391 for rev in sorted(revmap):
392 series.write(revlog.hex(revmap[rev]) + '\n')
392 series.write(revlog.hex(revmap[rev]) + '\n')
393 if merges:
393 if merges:
394 series.write('# Merges\n')
394 series.write('# Merges\n')
395 for m in merges:
395 for m in merges:
396 series.write(revlog.hex(m) + '\n')
396 series.write(revlog.hex(m) + '\n')
397 series.close()
397 series.close()
398
398
399 def parselog(self, fp):
399 def parselog(self, fp):
400 parents = []
400 parents = []
401 message = []
401 message = []
402 node = revlog.nullid
402 node = revlog.nullid
403 inmsg = False
403 inmsg = False
404 user = None
404 user = None
405 date = None
405 date = None
406 for line in fp.read().splitlines():
406 for line in fp.read().splitlines():
407 if inmsg:
407 if inmsg:
408 message.append(line)
408 message.append(line)
409 elif line.startswith('# User '):
409 elif line.startswith('# User '):
410 user = line[7:]
410 user = line[7:]
411 elif line.startswith('# Date '):
411 elif line.startswith('# Date '):
412 date = line[7:]
412 date = line[7:]
413 elif line.startswith('# Node ID '):
413 elif line.startswith('# Node ID '):
414 node = revlog.bin(line[10:])
414 node = revlog.bin(line[10:])
415 elif line.startswith('# Parent '):
415 elif line.startswith('# Parent '):
416 parents.append(revlog.bin(line[9:]))
416 parents.append(revlog.bin(line[9:]))
417 elif not line.startswith('# '):
417 elif not line.startswith('# '):
418 inmsg = True
418 inmsg = True
419 message.append(line)
419 message.append(line)
420 if None in (user, date):
420 if None in (user, date):
421 raise error.Abort(_("filter corrupted changeset (no user or date)"))
421 raise error.Abort(_("filter corrupted changeset (no user or date)"))
422 return (node, user, date, '\n'.join(message), parents)
422 return (node, user, date, '\n'.join(message), parents)
423
423
424 def log(self, user, date, message, p1, p2, merge=False):
424 def log(self, user, date, message, p1, p2, merge=False):
425 '''journal changelog metadata for later recover'''
425 '''journal changelog metadata for later recover'''
426
426
427 if not os.path.isdir(self.path):
427 if not os.path.isdir(self.path):
428 os.mkdir(self.path)
428 os.mkdir(self.path)
429 fp = self.opener('journal', 'w')
429 fp = self.opener('journal', 'w')
430 fp.write('# User %s\n' % user)
430 fp.write('# User %s\n' % user)
431 fp.write('# Date %s\n' % date)
431 fp.write('# Date %s\n' % date)
432 fp.write('# Node ID %s\n' % revlog.hex(p2))
432 fp.write('# Node ID %s\n' % revlog.hex(p2))
433 fp.write('# Parent ' + revlog.hex(p1) + '\n')
433 fp.write('# Parent ' + revlog.hex(p1) + '\n')
434 if merge:
434 if merge:
435 fp.write('# Parent ' + revlog.hex(p2) + '\n')
435 fp.write('# Parent ' + revlog.hex(p2) + '\n')
436 fp.write(message.rstrip() + '\n')
436 fp.write(message.rstrip() + '\n')
437 fp.close()
437 fp.close()
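For reference, the journal written by log() above and read back by readlog()/parselog() looks roughly like this (the names and hashes are invented placeholders)::

    # User Jane Doe <jane@example.com>
    # Date 1440000000 0
    # Node ID 0123456789abcdef0123456789abcdef01234567
    # Parent 76543210fedcba9876543210fedcba9876543210
    transplanted commit message goes here

A second ``# Parent`` line is added when the transplant is being recorded as a merge.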
438
438
439 def readlog(self):
439 def readlog(self):
440 return self.parselog(self.opener('journal'))
440 return self.parselog(self.opener('journal'))
441
441
442 def unlog(self):
442 def unlog(self):
443 '''remove changelog journal'''
443 '''remove changelog journal'''
444 absdst = os.path.join(self.path, 'journal')
444 absdst = os.path.join(self.path, 'journal')
445 if os.path.exists(absdst):
445 if os.path.exists(absdst):
446 os.unlink(absdst)
446 os.unlink(absdst)
447
447
448 def transplantfilter(self, repo, source, root):
448 def transplantfilter(self, repo, source, root):
449 def matchfn(node):
449 def matchfn(node):
450 if self.applied(repo, node, root):
450 if self.applied(repo, node, root):
451 return False
451 return False
452 if source.changelog.parents(node)[1] != revlog.nullid:
452 if source.changelog.parents(node)[1] != revlog.nullid:
453 return False
453 return False
454 extra = source.changelog.read(node)[5]
454 extra = source.changelog.read(node)[5]
455 cnode = extra.get('transplant_source')
455 cnode = extra.get('transplant_source')
456 if cnode and self.applied(repo, cnode, root):
456 if cnode and self.applied(repo, cnode, root):
457 return False
457 return False
458 return True
458 return True
459
459
460 return matchfn
460 return matchfn
461
461
462 def hasnode(repo, node):
462 def hasnode(repo, node):
463 try:
463 try:
464 return repo.changelog.rev(node) is not None
464 return repo.changelog.rev(node) is not None
465 except error.RevlogError:
465 except error.RevlogError:
466 return False
466 return False
467
467
468 def browserevs(ui, repo, nodes, opts):
468 def browserevs(ui, repo, nodes, opts):
469 '''interactively transplant changesets'''
469 '''interactively transplant changesets'''
470 displayer = cmdutil.show_changeset(ui, repo, opts)
470 displayer = cmdutil.show_changeset(ui, repo, opts)
471 transplants = []
471 transplants = []
472 merges = []
472 merges = []
473 prompt = _('apply changeset? [ynmpcq?]:'
473 prompt = _('apply changeset? [ynmpcq?]:'
474 '$$ &yes, transplant this changeset'
474 '$$ &yes, transplant this changeset'
475 '$$ &no, skip this changeset'
475 '$$ &no, skip this changeset'
476 '$$ &merge at this changeset'
476 '$$ &merge at this changeset'
477 '$$ show &patch'
477 '$$ show &patch'
478 '$$ &commit selected changesets'
478 '$$ &commit selected changesets'
479 '$$ &quit and cancel transplant'
479 '$$ &quit and cancel transplant'
480 '$$ &? (show this help)')
480 '$$ &? (show this help)')
481 for node in nodes:
481 for node in nodes:
482 displayer.show(repo[node])
482 displayer.show(repo[node])
483 action = None
483 action = None
484 while not action:
484 while not action:
485 action = 'ynmpcq?'[ui.promptchoice(prompt)]
485 action = 'ynmpcq?'[ui.promptchoice(prompt)]
486 if action == '?':
486 if action == '?':
487 for c, t in ui.extractchoices(prompt)[1]:
487 for c, t in ui.extractchoices(prompt)[1]:
488 ui.write('%s: %s\n' % (c, t))
488 ui.write('%s: %s\n' % (c, t))
489 action = None
489 action = None
490 elif action == 'p':
490 elif action == 'p':
491 parent = repo.changelog.parents(node)[0]
491 parent = repo.changelog.parents(node)[0]
492 for chunk in patch.diff(repo, parent, node):
492 for chunk in patch.diff(repo, parent, node):
493 ui.write(chunk)
493 ui.write(chunk)
494 action = None
494 action = None
495 if action == 'y':
495 if action == 'y':
496 transplants.append(node)
496 transplants.append(node)
497 elif action == 'm':
497 elif action == 'm':
498 merges.append(node)
498 merges.append(node)
499 elif action == 'c':
499 elif action == 'c':
500 break
500 break
501 elif action == 'q':
501 elif action == 'q':
502 transplants = ()
502 transplants = ()
503 merges = ()
503 merges = ()
504 break
504 break
505 displayer.close()
505 displayer.close()
506 return (transplants, merges)
506 return (transplants, merges)
507
507
508 @command('transplant',
508 @command('transplant',
509 [('s', 'source', '', _('transplant changesets from REPO'), _('REPO')),
509 [('s', 'source', '', _('transplant changesets from REPO'), _('REPO')),
510 ('b', 'branch', [], _('use this source changeset as head'), _('REV')),
510 ('b', 'branch', [], _('use this source changeset as head'), _('REV')),
511 ('a', 'all', None, _('pull all changesets up to the --branch revisions')),
511 ('a', 'all', None, _('pull all changesets up to the --branch revisions')),
512 ('p', 'prune', [], _('skip over REV'), _('REV')),
512 ('p', 'prune', [], _('skip over REV'), _('REV')),
513 ('m', 'merge', [], _('merge at REV'), _('REV')),
513 ('m', 'merge', [], _('merge at REV'), _('REV')),
514 ('', 'parent', '',
514 ('', 'parent', '',
515 _('parent to choose when transplanting merge'), _('REV')),
515 _('parent to choose when transplanting merge'), _('REV')),
516 ('e', 'edit', False, _('invoke editor on commit messages')),
516 ('e', 'edit', False, _('invoke editor on commit messages')),
517 ('', 'log', None, _('append transplant info to log message')),
517 ('', 'log', None, _('append transplant info to log message')),
518 ('c', 'continue', None, _('continue last transplant session '
518 ('c', 'continue', None, _('continue last transplant session '
519 'after fixing conflicts')),
519 'after fixing conflicts')),
520 ('', 'filter', '',
520 ('', 'filter', '',
521 _('filter changesets through command'), _('CMD'))],
521 _('filter changesets through command'), _('CMD'))],
522 _('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
522 _('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
523 '[-m REV] [REV]...'))
523 '[-m REV] [REV]...'))
524 def transplant(ui, repo, *revs, **opts):
524 def transplant(ui, repo, *revs, **opts):
525 '''transplant changesets from another branch
525 '''transplant changesets from another branch
526
526
527 Selected changesets will be applied on top of the current working
527 Selected changesets will be applied on top of the current working
528 directory with the log of the original changeset. The changesets
528 directory with the log of the original changeset. The changesets
529 are copied and will thus appear twice in the history with different
529 are copied and will thus appear twice in the history with different
530 identities.
530 identities.
531
531
532 Consider using the graft command if everything is inside the same
532 Consider using the graft command if everything is inside the same
533 repository - it will use merges and will usually give a better result.
533 repository - it will use merges and will usually give a better result.
534 Use the rebase extension if the changesets are unpublished and you want
534 Use the rebase extension if the changesets are unpublished and you want
535 to move them instead of copying them.
535 to move them instead of copying them.
536
536
537 If --log is specified, log messages will have a comment appended
537 If --log is specified, log messages will have a comment appended
538 of the form::
538 of the form::
539
539
540 (transplanted from CHANGESETHASH)
540 (transplanted from CHANGESETHASH)
541
541
542 You can rewrite the changelog message with the --filter option.
542 You can rewrite the changelog message with the --filter option.
543 Its argument will be invoked with the current changelog message as
543 Its argument will be invoked with the current changelog message as
544 $1 and the patch as $2.
544 $1 and the patch as $2.
545
545
546 --source/-s specifies another repository to use for selecting changesets,
546 --source/-s specifies another repository to use for selecting changesets,
547 just as if it temporarily had been pulled.
547 just as if it temporarily had been pulled.
548 If --branch/-b is specified, these revisions will be used as
548 If --branch/-b is specified, these revisions will be used as
549 heads when deciding which changesets to transplant, just as if only
549 heads when deciding which changesets to transplant, just as if only
550 these revisions had been pulled.
550 these revisions had been pulled.
551 If --all/-a is specified, all the revisions up to the heads specified
551 If --all/-a is specified, all the revisions up to the heads specified
552 with --branch will be transplanted.
552 with --branch will be transplanted.
553
553
554 Example:
554 Example:
555
555
556 - transplant all changes up to REV on top of your current revision::
556 - transplant all changes up to REV on top of your current revision::
557
557
558 hg transplant --branch REV --all
558 hg transplant --branch REV --all
559
559
560 You can optionally mark selected transplanted changesets as merge
560 You can optionally mark selected transplanted changesets as merge
561 changesets. You will not be prompted to transplant any ancestors
561 changesets. You will not be prompted to transplant any ancestors
562 of a merged transplant, and you can merge descendants of them
562 of a merged transplant, and you can merge descendants of them
563 normally instead of transplanting them.
563 normally instead of transplanting them.
564
564
565 Merge changesets may be transplanted directly by specifying the
565 Merge changesets may be transplanted directly by specifying the
566 proper parent changeset by calling :hg:`transplant --parent`.
566 proper parent changeset by calling :hg:`transplant --parent`.
567
567
568 If no merges or revisions are provided, :hg:`transplant` will
568 If no merges or revisions are provided, :hg:`transplant` will
569 start an interactive changeset browser.
569 start an interactive changeset browser.
570
570
571 If a changeset application fails, you can fix the merge by hand
571 If a changeset application fails, you can fix the merge by hand
572 and then resume where you left off by calling :hg:`transplant
572 and then resume where you left off by calling :hg:`transplant
573 --continue/-c`.
573 --continue/-c`.
574 '''
574 '''
575 wlock = None
575 wlock = None
576 try:
576 try:
577 wlock = repo.wlock()
577 wlock = repo.wlock()
578 return _dotransplant(ui, repo, *revs, **opts)
578 return _dotransplant(ui, repo, *revs, **opts)
579 finally:
579 finally:
580 lockmod.release(wlock)
580 lockmod.release(wlock)
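A sketch of what a --filter command can look like, following the contract described in the help text above: the command is run with the changeset header/message file as $1 and the patch as $2, and may rewrite either file in place before the changeset is applied (HGUSER and HGREVISION are also set in its environment). The script below is hypothetical; only the two-argument, edit-in-place contract comes from the extension::

    #!/usr/bin/env python
    # usage: hg transplant --filter "python tagmsg.py" -s SOURCE REV
    import sys

    headerfile = sys.argv[1]   # '# User'/'# Date' lines plus the message
    # patchfile = sys.argv[2]  # the diff itself; could be rewritten too

    with open(headerfile) as fp:
        text = fp.read()
    # lines not starting with '# ' are treated as message text when the
    # header is parsed back, so appending one extends the commit message
    with open(headerfile, 'w') as fp:
        fp.write(text.rstrip('\n') + '\n(rewritten by tagmsg.py)\n')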
581
581
582 def _dotransplant(ui, repo, *revs, **opts):
582 def _dotransplant(ui, repo, *revs, **opts):
583 def incwalk(repo, csets, match=util.always):
583 def incwalk(repo, csets, match=util.always):
584 for node in csets:
584 for node in csets:
585 if match(node):
585 if match(node):
586 yield node
586 yield node
587
587
588 def transplantwalk(repo, dest, heads, match=util.always):
588 def transplantwalk(repo, dest, heads, match=util.always):
589 '''Yield all nodes that are ancestors of a head but not ancestors
589 '''Yield all nodes that are ancestors of a head but not ancestors
590 of dest.
590 of dest.
591 If no heads are specified, the heads of repo will be used.'''
591 If no heads are specified, the heads of repo will be used.'''
592 if not heads:
592 if not heads:
593 heads = repo.heads()
593 heads = repo.heads()
594 ancestors = []
594 ancestors = []
595 ctx = repo[dest]
595 ctx = repo[dest]
596 for head in heads:
596 for head in heads:
597 ancestors.append(ctx.ancestor(repo[head]).node())
597 ancestors.append(ctx.ancestor(repo[head]).node())
598 for node in repo.changelog.nodesbetween(ancestors, heads)[0]:
598 for node in repo.changelog.nodesbetween(ancestors, heads)[0]:
599 if match(node):
599 if match(node):
600 yield node
600 yield node
601
601
602 def checkopts(opts, revs):
602 def checkopts(opts, revs):
603 if opts.get('continue'):
603 if opts.get('continue'):
604 if opts.get('branch') or opts.get('all') or opts.get('merge'):
604 if opts.get('branch') or opts.get('all') or opts.get('merge'):
605 raise error.Abort(_('--continue is incompatible with '
605 raise error.Abort(_('--continue is incompatible with '
606 '--branch, --all and --merge'))
606 '--branch, --all and --merge'))
607 return
607 return
608 if not (opts.get('source') or revs or
608 if not (opts.get('source') or revs or
609 opts.get('merge') or opts.get('branch')):
609 opts.get('merge') or opts.get('branch')):
610 raise error.Abort(_('no source URL, branch revision, or revision '
610 raise error.Abort(_('no source URL, branch revision, or revision '
611 'list provided'))
611 'list provided'))
612 if opts.get('all'):
612 if opts.get('all'):
613 if not opts.get('branch'):
613 if not opts.get('branch'):
614 raise error.Abort(_('--all requires a branch revision'))
614 raise error.Abort(_('--all requires a branch revision'))
615 if revs:
615 if revs:
616 raise error.Abort(_('--all is incompatible with a '
616 raise error.Abort(_('--all is incompatible with a '
617 'revision list'))
617 'revision list'))
618
618
619 checkopts(opts, revs)
619 checkopts(opts, revs)
620
620
621 if not opts.get('log'):
621 if not opts.get('log'):
622 # deprecated config: transplant.log
622 # deprecated config: transplant.log
623 opts['log'] = ui.config('transplant', 'log')
623 opts['log'] = ui.config('transplant', 'log')
624 if not opts.get('filter'):
624 if not opts.get('filter'):
625 # deprecated config: transplant.filter
625 # deprecated config: transplant.filter
626 opts['filter'] = ui.config('transplant', 'filter')
626 opts['filter'] = ui.config('transplant', 'filter')
627
627
628 tp = transplanter(ui, repo, opts)
628 tp = transplanter(ui, repo, opts)
629
629
630 cmdutil.checkunfinished(repo)
630 cmdutil.checkunfinished(repo)
631 p1, p2 = repo.dirstate.parents()
631 p1, p2 = repo.dirstate.parents()
632 if len(repo) > 0 and p1 == revlog.nullid:
632 if len(repo) > 0 and p1 == revlog.nullid:
633 raise error.Abort(_('no revision checked out'))
633 raise error.Abort(_('no revision checked out'))
634 if not opts.get('continue'):
634 if not opts.get('continue'):
635 if p2 != revlog.nullid:
635 if p2 != revlog.nullid:
636 raise error.Abort(_('outstanding uncommitted merges'))
636 raise error.Abort(_('outstanding uncommitted merges'))
637 m, a, r, d = repo.status()[:4]
637 m, a, r, d = repo.status()[:4]
638 if m or a or r or d:
638 if m or a or r or d:
639 raise error.Abort(_('outstanding local changes'))
639 raise error.Abort(_('outstanding local changes'))
640
640
641 sourcerepo = opts.get('source')
641 sourcerepo = opts.get('source')
642 if sourcerepo:
642 if sourcerepo:
643 peer = hg.peer(repo, opts, ui.expandpath(sourcerepo))
643 peer = hg.peer(repo, opts, ui.expandpath(sourcerepo))
644 heads = map(peer.lookup, opts.get('branch', ()))
644 heads = map(peer.lookup, opts.get('branch', ()))
645 target = set(heads)
645 target = set(heads)
646 for r in revs:
646 for r in revs:
647 try:
647 try:
648 target.add(peer.lookup(r))
648 target.add(peer.lookup(r))
649 except error.RepoError:
649 except error.RepoError:
650 pass
650 pass
651 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, peer,
651 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, peer,
652 onlyheads=sorted(target), force=True)
652 onlyheads=sorted(target), force=True)
653 else:
653 else:
654 source = repo
654 source = repo
655 heads = map(source.lookup, opts.get('branch', ()))
655 heads = map(source.lookup, opts.get('branch', ()))
656 cleanupfn = None
656 cleanupfn = None
657
657
658 try:
658 try:
659 if opts.get('continue'):
659 if opts.get('continue'):
660 tp.resume(repo, source, opts)
660 tp.resume(repo, source, opts)
661 return
661 return
662
662
663 tf = tp.transplantfilter(repo, source, p1)
663 tf = tp.transplantfilter(repo, source, p1)
664 if opts.get('prune'):
664 if opts.get('prune'):
665 prune = set(source.lookup(r)
665 prune = set(source.lookup(r)
666 for r in scmutil.revrange(source, opts.get('prune')))
666 for r in scmutil.revrange(source, opts.get('prune')))
667 matchfn = lambda x: tf(x) and x not in prune
667 matchfn = lambda x: tf(x) and x not in prune
668 else:
668 else:
669 matchfn = tf
669 matchfn = tf
670 merges = map(source.lookup, opts.get('merge', ()))
670 merges = map(source.lookup, opts.get('merge', ()))
671 revmap = {}
671 revmap = {}
672 if revs:
672 if revs:
673 for r in scmutil.revrange(source, revs):
673 for r in scmutil.revrange(source, revs):
674 revmap[int(r)] = source.lookup(r)
674 revmap[int(r)] = source.lookup(r)
675 elif opts.get('all') or not merges:
675 elif opts.get('all') or not merges:
676 if source != repo:
676 if source != repo:
677 alltransplants = incwalk(source, csets, match=matchfn)
677 alltransplants = incwalk(source, csets, match=matchfn)
678 else:
678 else:
679 alltransplants = transplantwalk(source, p1, heads,
679 alltransplants = transplantwalk(source, p1, heads,
680 match=matchfn)
680 match=matchfn)
681 if opts.get('all'):
681 if opts.get('all'):
682 revs = alltransplants
682 revs = alltransplants
683 else:
683 else:
684 revs, newmerges = browserevs(ui, source, alltransplants, opts)
684 revs, newmerges = browserevs(ui, source, alltransplants, opts)
685 merges.extend(newmerges)
685 merges.extend(newmerges)
686 for r in revs:
686 for r in revs:
687 revmap[source.changelog.rev(r)] = r
687 revmap[source.changelog.rev(r)] = r
688 for r in merges:
688 for r in merges:
689 revmap[source.changelog.rev(r)] = r
689 revmap[source.changelog.rev(r)] = r
690
690
691 tp.apply(repo, source, revmap, merges, opts)
691 tp.apply(repo, source, revmap, merges, opts)
692 finally:
692 finally:
693 if cleanupfn:
693 if cleanupfn:
694 cleanupfn()
694 cleanupfn()
695
695
696 def revsettransplanted(repo, subset, x):
696 def revsettransplanted(repo, subset, x):
697 """``transplanted([set])``
697 """``transplanted([set])``
698 Transplanted changesets in set, or all transplanted changesets.
698 Transplanted changesets in set, or all transplanted changesets.
699 """
699 """
700 if x:
700 if x:
701 s = revset.getset(repo, subset, x)
701 s = revset.getset(repo, subset, x)
702 else:
702 else:
703 s = subset
703 s = subset
704 return revset.baseset([r for r in s if
704 return revset.baseset([r for r in s if
705 repo[r].extra().get('transplant_source')])
705 repo[r].extra().get('transplant_source')])
706
706
707 def kwtransplanted(repo, ctx, **args):
707 def kwtransplanted(repo, ctx, **args):
708 """:transplanted: String. The node identifier of the transplanted
708 """:transplanted: String. The node identifier of the transplanted
709 changeset if any."""
709 changeset if any."""
710 n = ctx.extra().get('transplant_source')
710 n = ctx.extra().get('transplant_source')
711 return n and revlog.hex(n) or ''
711 return n and revlog.hex(n) or ''
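Once the extension is loaded, the revset and template keyword defined above are available from the command line; a couple of illustrative invocations, in the same spirit as the examples in the command help::

    hg log -r 'transplanted()'                # every transplanted changeset
    hg log -r 'transplanted(branch(default))' # only those on the default branch
    hg log --template '{rev} {transplanted}\n'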
712
712
713 def extsetup(ui):
713 def extsetup(ui):
714 revset.symbols['transplanted'] = revsettransplanted
714 revset.symbols['transplanted'] = revsettransplanted
715 templatekw.keywords['transplanted'] = kwtransplanted
715 templatekw.keywords['transplanted'] = kwtransplanted
716 cmdutil.unfinishedstates.append(
716 cmdutil.unfinishedstates.append(
717 ['series', True, False, _('transplant in progress'),
717 ['series', True, False, _('transplant in progress'),
718 _("use 'hg transplant --continue' or 'hg update' to abort")])
718 _("use 'hg transplant --continue' or 'hg update' to abort")])
719
719
720 # tell hggettext to extract docstrings from these functions:
720 # tell hggettext to extract docstrings from these functions:
721 i18nfunctions = [revsettransplanted, kwtransplanted]
721 i18nfunctions = [revsettransplanted, kwtransplanted]
@@ -1,3422 +1,3422 b''
1 # cmdutil.py - help for command processing in mercurial
1 # cmdutil.py - help for command processing in mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from node import hex, bin, nullid, nullrev, short
8 from node import hex, bin, nullid, nullrev, short
9 from i18n import _
9 from i18n import _
10 import os, sys, errno, re, tempfile, cStringIO, shutil
10 import os, sys, errno, re, tempfile, cStringIO, shutil
11 import util, scmutil, templater, patch, error, templatekw, revlog, copies
11 import util, scmutil, templater, patch, error, templatekw, revlog, copies
12 import match as matchmod
12 import match as matchmod
13 import repair, graphmod, revset, phases, obsolete, pathutil
13 import repair, graphmod, revset, phases, obsolete, pathutil
14 import changelog
14 import changelog
15 import bookmarks
15 import bookmarks
16 import encoding
16 import encoding
17 import formatter
17 import formatter
18 import crecord as crecordmod
18 import crecord as crecordmod
19 import lock as lockmod
19 import lock as lockmod
20
20
21 def ishunk(x):
21 def ishunk(x):
22 hunkclasses = (crecordmod.uihunk, patch.recordhunk)
22 hunkclasses = (crecordmod.uihunk, patch.recordhunk)
23 return isinstance(x, hunkclasses)
23 return isinstance(x, hunkclasses)
24
24
25 def newandmodified(chunks, originalchunks):
25 def newandmodified(chunks, originalchunks):
26 newlyaddedandmodifiedfiles = set()
26 newlyaddedandmodifiedfiles = set()
27 for chunk in chunks:
27 for chunk in chunks:
28 if ishunk(chunk) and chunk.header.isnewfile() and chunk not in \
28 if ishunk(chunk) and chunk.header.isnewfile() and chunk not in \
29 originalchunks:
29 originalchunks:
30 newlyaddedandmodifiedfiles.add(chunk.header.filename())
30 newlyaddedandmodifiedfiles.add(chunk.header.filename())
31 return newlyaddedandmodifiedfiles
31 return newlyaddedandmodifiedfiles
32
32
33 def parsealiases(cmd):
33 def parsealiases(cmd):
34 return cmd.lstrip("^").split("|")
34 return cmd.lstrip("^").split("|")
35
35
36 def setupwrapcolorwrite(ui):
36 def setupwrapcolorwrite(ui):
37 # wrap ui.write so diff output can be labeled/colorized
37 # wrap ui.write so diff output can be labeled/colorized
38 def wrapwrite(orig, *args, **kw):
38 def wrapwrite(orig, *args, **kw):
39 label = kw.pop('label', '')
39 label = kw.pop('label', '')
40 for chunk, l in patch.difflabel(lambda: args):
40 for chunk, l in patch.difflabel(lambda: args):
41 orig(chunk, label=label + l)
41 orig(chunk, label=label + l)
42
42
43 oldwrite = ui.write
43 oldwrite = ui.write
44 def wrap(*args, **kwargs):
44 def wrap(*args, **kwargs):
45 return wrapwrite(oldwrite, *args, **kwargs)
45 return wrapwrite(oldwrite, *args, **kwargs)
46 setattr(ui, 'write', wrap)
46 setattr(ui, 'write', wrap)
47 return oldwrite
47 return oldwrite
48
48
49 def filterchunks(ui, originalhunks, usecurses, testfile, operation=None):
49 def filterchunks(ui, originalhunks, usecurses, testfile, operation=None):
50 if usecurses:
50 if usecurses:
51 if testfile:
51 if testfile:
52 recordfn = crecordmod.testdecorator(testfile,
52 recordfn = crecordmod.testdecorator(testfile,
53 crecordmod.testchunkselector)
53 crecordmod.testchunkselector)
54 else:
54 else:
55 recordfn = crecordmod.chunkselector
55 recordfn = crecordmod.chunkselector
56
56
57 return crecordmod.filterpatch(ui, originalhunks, recordfn, operation)
57 return crecordmod.filterpatch(ui, originalhunks, recordfn, operation)
58
58
59 else:
59 else:
60 return patch.filterpatch(ui, originalhunks, operation)
60 return patch.filterpatch(ui, originalhunks, operation)
61
61
62 def recordfilter(ui, originalhunks, operation=None):
62 def recordfilter(ui, originalhunks, operation=None):
63 """ Prompts the user to filter the originalhunks and return a list of
63 """ Prompts the user to filter the originalhunks and return a list of
64 selected hunks.
64 selected hunks.
65 *operation* is used for ui purposes to indicate to the user
65 *operation* is used for ui purposes to indicate to the user
66 what kind of filtering they are doing: reverting, committing, shelving, etc.
66 what kind of filtering they are doing: reverting, committing, shelving, etc.
67 *operation* has to be a translated string.
67 *operation* has to be a translated string.
68 """
68 """
69 usecurses = ui.configbool('experimental', 'crecord', False)
69 usecurses = ui.configbool('experimental', 'crecord', False)
70 testfile = ui.config('experimental', 'crecordtest', None)
70 testfile = ui.config('experimental', 'crecordtest', None)
71 oldwrite = setupwrapcolorwrite(ui)
71 oldwrite = setupwrapcolorwrite(ui)
72 try:
72 try:
73 newchunks, newopts = filterchunks(ui, originalhunks, usecurses,
73 newchunks, newopts = filterchunks(ui, originalhunks, usecurses,
74 testfile, operation)
74 testfile, operation)
75 finally:
75 finally:
76 ui.write = oldwrite
76 ui.write = oldwrite
77 return newchunks, newopts
77 return newchunks, newopts
78
78
79 def dorecord(ui, repo, commitfunc, cmdsuggest, backupall,
79 def dorecord(ui, repo, commitfunc, cmdsuggest, backupall,
80 filterfn, *pats, **opts):
80 filterfn, *pats, **opts):
81 import merge as mergemod
81 import merge as mergemod
82
82
83 if not ui.interactive():
83 if not ui.interactive():
84 if cmdsuggest:
84 if cmdsuggest:
85 msg = _('running non-interactively, use %s instead') % cmdsuggest
85 msg = _('running non-interactively, use %s instead') % cmdsuggest
86 else:
86 else:
87 msg = _('running non-interactively')
87 msg = _('running non-interactively')
88 raise error.Abort(msg)
88 raise error.Abort(msg)
89
89
90 # make sure username is set before going interactive
90 # make sure username is set before going interactive
91 if not opts.get('user'):
91 if not opts.get('user'):
92 ui.username() # raise exception, username not provided
92 ui.username() # raise exception, username not provided
93
93
94 def recordfunc(ui, repo, message, match, opts):
94 def recordfunc(ui, repo, message, match, opts):
95 """This is generic record driver.
95 """This is generic record driver.
96
96
97 Its job is to interactively filter local changes, and
97 Its job is to interactively filter local changes, and
98 accordingly prepare working directory into a state in which the
98 accordingly prepare working directory into a state in which the
99 job can be delegated to a non-interactive commit command such as
99 job can be delegated to a non-interactive commit command such as
100 'commit' or 'qrefresh'.
100 'commit' or 'qrefresh'.
101
101
102 After the actual job is done by non-interactive command, the
102 After the actual job is done by non-interactive command, the
103 working directory is restored to its original state.
103 working directory is restored to its original state.
104
104
105 In the end we'll record interesting changes, and everything else
105 In the end we'll record interesting changes, and everything else
106 will be left in place, so the user can continue working.
106 will be left in place, so the user can continue working.
107 """
107 """
108
108
109 checkunfinished(repo, commit=True)
109 checkunfinished(repo, commit=True)
110 merge = len(repo[None].parents()) > 1
110 merge = len(repo[None].parents()) > 1
111 if merge:
111 if merge:
112 raise error.Abort(_('cannot partially commit a merge '
112 raise error.Abort(_('cannot partially commit a merge '
113 '(use "hg commit" instead)'))
113 '(use "hg commit" instead)'))
114
114
115 status = repo.status(match=match)
115 status = repo.status(match=match)
116 diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True)
116 diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True)
117 diffopts.nodates = True
117 diffopts.nodates = True
118 diffopts.git = True
118 diffopts.git = True
119 originaldiff = patch.diff(repo, changes=status, opts=diffopts)
119 originaldiff = patch.diff(repo, changes=status, opts=diffopts)
120 originalchunks = patch.parsepatch(originaldiff)
120 originalchunks = patch.parsepatch(originaldiff)
121
121
122 # 1. filter patch, so we have the subset of it we intend to apply
122 # 1. filter patch, so we have the subset of it we intend to apply
123 try:
123 try:
124 chunks, newopts = filterfn(ui, originalchunks)
124 chunks, newopts = filterfn(ui, originalchunks)
125 except patch.PatchError as err:
125 except patch.PatchError as err:
126 raise error.Abort(_('error parsing patch: %s') % err)
126 raise error.Abort(_('error parsing patch: %s') % err)
127 opts.update(newopts)
127 opts.update(newopts)
128
128
129 # We need to keep a backup of files that have been newly added and
129 # We need to keep a backup of files that have been newly added and
130 # modified during the recording process because there is a previous
130 # modified during the recording process because there is a previous
131 # version without the edit in the workdir
131 # version without the edit in the workdir
132 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
132 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
133 contenders = set()
133 contenders = set()
134 for h in chunks:
134 for h in chunks:
135 try:
135 try:
136 contenders.update(set(h.files()))
136 contenders.update(set(h.files()))
137 except AttributeError:
137 except AttributeError:
138 pass
138 pass
139
139
140 changed = status.modified + status.added + status.removed
140 changed = status.modified + status.added + status.removed
141 newfiles = [f for f in changed if f in contenders]
141 newfiles = [f for f in changed if f in contenders]
142 if not newfiles:
142 if not newfiles:
143 ui.status(_('no changes to record\n'))
143 ui.status(_('no changes to record\n'))
144 return 0
144 return 0
145
145
146 modified = set(status.modified)
146 modified = set(status.modified)
147
147
148 # 2. backup changed files, so we can restore them in the end
148 # 2. backup changed files, so we can restore them in the end
149
149
150 if backupall:
150 if backupall:
151 tobackup = changed
151 tobackup = changed
152 else:
152 else:
153 tobackup = [f for f in newfiles if f in modified or f in \
153 tobackup = [f for f in newfiles if f in modified or f in \
154 newlyaddedandmodifiedfiles]
154 newlyaddedandmodifiedfiles]
155 backups = {}
155 backups = {}
156 if tobackup:
156 if tobackup:
157 backupdir = repo.join('record-backups')
157 backupdir = repo.join('record-backups')
158 try:
158 try:
159 os.mkdir(backupdir)
159 os.mkdir(backupdir)
160 except OSError as err:
160 except OSError as err:
161 if err.errno != errno.EEXIST:
161 if err.errno != errno.EEXIST:
162 raise
162 raise
163 try:
163 try:
164 # backup continues
164 # backup continues
165 for f in tobackup:
165 for f in tobackup:
166 fd, tmpname = tempfile.mkstemp(prefix=f.replace('/', '_')+'.',
166 fd, tmpname = tempfile.mkstemp(prefix=f.replace('/', '_')+'.',
167 dir=backupdir)
167 dir=backupdir)
168 os.close(fd)
168 os.close(fd)
169 ui.debug('backup %r as %r\n' % (f, tmpname))
169 ui.debug('backup %r as %r\n' % (f, tmpname))
170 util.copyfile(repo.wjoin(f), tmpname)
170 util.copyfile(repo.wjoin(f), tmpname)
171 shutil.copystat(repo.wjoin(f), tmpname)
171 shutil.copystat(repo.wjoin(f), tmpname)
172 backups[f] = tmpname
172 backups[f] = tmpname
173
173
174 fp = cStringIO.StringIO()
174 fp = cStringIO.StringIO()
175 for c in chunks:
175 for c in chunks:
176 fname = c.filename()
176 fname = c.filename()
177 if fname in backups:
177 if fname in backups:
178 c.write(fp)
178 c.write(fp)
179 dopatch = fp.tell()
179 dopatch = fp.tell()
180 fp.seek(0)
180 fp.seek(0)
181
181
182 [os.unlink(repo.wjoin(c)) for c in newlyaddedandmodifiedfiles]
182 [os.unlink(repo.wjoin(c)) for c in newlyaddedandmodifiedfiles]
183 # 3a. apply filtered patch to clean repo (clean)
183 # 3a. apply filtered patch to clean repo (clean)
184 if backups:
184 if backups:
185 # Equivalent to hg.revert
185 # Equivalent to hg.revert
186 choices = lambda key: key in backups
186 m = scmutil.matchfiles(repo, backups.keys())
187 mergemod.update(repo, repo.dirstate.p1(),
187 mergemod.update(repo, repo.dirstate.p1(),
188 False, True, choices)
188 False, True, matcher=m)
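# This hunk is the point of the whole changeset: the bare predicate
# `choices = lambda key: key in backups` becomes a real matcher built
# with scmutil.matchfiles(repo, backups.keys()), and it is passed to
# mergemod.update() through the new `matcher=` keyword instead of the
# old positional partial-function argument.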
189
189
190 # 3b. (apply)
190 # 3b. (apply)
191 if dopatch:
191 if dopatch:
192 try:
192 try:
193 ui.debug('applying patch\n')
193 ui.debug('applying patch\n')
194 ui.debug(fp.getvalue())
194 ui.debug(fp.getvalue())
195 patch.internalpatch(ui, repo, fp, 1, eolmode=None)
195 patch.internalpatch(ui, repo, fp, 1, eolmode=None)
196 except patch.PatchError as err:
196 except patch.PatchError as err:
197 raise error.Abort(str(err))
197 raise error.Abort(str(err))
198 del fp
198 del fp
199
199
200 # 4. We prepared the working directory according to the filtered
200 # 4. We prepared the working directory according to the filtered
201 # patch. Now is the time to delegate the job to
201 # patch. Now is the time to delegate the job to
202 # commit/qrefresh or the like!
202 # commit/qrefresh or the like!
203
203
204 # Make all of the pathnames absolute.
204 # Make all of the pathnames absolute.
205 newfiles = [repo.wjoin(nf) for nf in newfiles]
205 newfiles = [repo.wjoin(nf) for nf in newfiles]
206 return commitfunc(ui, repo, *newfiles, **opts)
206 return commitfunc(ui, repo, *newfiles, **opts)
207 finally:
207 finally:
208 # 5. finally restore backed-up files
208 # 5. finally restore backed-up files
209 try:
209 try:
210 dirstate = repo.dirstate
210 dirstate = repo.dirstate
211 for realname, tmpname in backups.iteritems():
211 for realname, tmpname in backups.iteritems():
212 ui.debug('restoring %r to %r\n' % (tmpname, realname))
212 ui.debug('restoring %r to %r\n' % (tmpname, realname))
213
213
214 if dirstate[realname] == 'n':
214 if dirstate[realname] == 'n':
215 # without normallookup, restoring timestamp
215 # without normallookup, restoring timestamp
216 # may cause partially committed files
216 # may cause partially committed files
217 # to be treated as unmodified
217 # to be treated as unmodified
218 dirstate.normallookup(realname)
218 dirstate.normallookup(realname)
219
219
220 util.copyfile(tmpname, repo.wjoin(realname))
220 util.copyfile(tmpname, repo.wjoin(realname))
221                # Our calls to copystat() here and above are a
222                # hack to trick any editors that have f open into
223                # thinking that we haven't modified it.
224                #
225                # Also note that this is racy, as an editor could
226                # notice the file's mtime before we've finished
227                # writing it.
228 shutil.copystat(tmpname, repo.wjoin(realname))
228 shutil.copystat(tmpname, repo.wjoin(realname))
229 os.unlink(tmpname)
229 os.unlink(tmpname)
230 if tobackup:
230 if tobackup:
231 os.rmdir(backupdir)
231 os.rmdir(backupdir)
232 except OSError:
232 except OSError:
233 pass
233 pass
234
234
235 def recordinwlock(ui, repo, message, match, opts):
235 def recordinwlock(ui, repo, message, match, opts):
236 wlock = repo.wlock()
236 wlock = repo.wlock()
237 try:
237 try:
238 return recordfunc(ui, repo, message, match, opts)
238 return recordfunc(ui, repo, message, match, opts)
239 finally:
239 finally:
240 wlock.release()
240 wlock.release()
241
241
242 return commit(ui, repo, recordinwlock, pats, opts)
242 return commit(ui, repo, recordinwlock, pats, opts)
243
243
244 def findpossible(cmd, table, strict=False):
244 def findpossible(cmd, table, strict=False):
245 """
245 """
246 Return cmd -> (aliases, command table entry)
246 Return cmd -> (aliases, command table entry)
247 for each matching command.
247 for each matching command.
248 Return debug commands (or their aliases) only if no normal command matches.
248 Return debug commands (or their aliases) only if no normal command matches.
249 """
249 """
250 choice = {}
250 choice = {}
251 debugchoice = {}
251 debugchoice = {}
252
252
253 if cmd in table:
253 if cmd in table:
254 # short-circuit exact matches, "log" alias beats "^log|history"
254 # short-circuit exact matches, "log" alias beats "^log|history"
255 keys = [cmd]
255 keys = [cmd]
256 else:
256 else:
257 keys = table.keys()
257 keys = table.keys()
258
258
259 allcmds = []
259 allcmds = []
260 for e in keys:
260 for e in keys:
261 aliases = parsealiases(e)
261 aliases = parsealiases(e)
262 allcmds.extend(aliases)
262 allcmds.extend(aliases)
263 found = None
263 found = None
264 if cmd in aliases:
264 if cmd in aliases:
265 found = cmd
265 found = cmd
266 elif not strict:
266 elif not strict:
267 for a in aliases:
267 for a in aliases:
268 if a.startswith(cmd):
268 if a.startswith(cmd):
269 found = a
269 found = a
270 break
270 break
271 if found is not None:
271 if found is not None:
272 if aliases[0].startswith("debug") or found.startswith("debug"):
272 if aliases[0].startswith("debug") or found.startswith("debug"):
273 debugchoice[found] = (aliases, table[e])
273 debugchoice[found] = (aliases, table[e])
274 else:
274 else:
275 choice[found] = (aliases, table[e])
275 choice[found] = (aliases, table[e])
276
276
277 if not choice and debugchoice:
277 if not choice and debugchoice:
278 choice = debugchoice
278 choice = debugchoice
279
279
280 return choice, allcmds
280 return choice, allcmds
281
281
282 def findcmd(cmd, table, strict=True):
282 def findcmd(cmd, table, strict=True):
283 """Return (aliases, command table entry) for command string."""
283 """Return (aliases, command table entry) for command string."""
284 choice, allcmds = findpossible(cmd, table, strict)
284 choice, allcmds = findpossible(cmd, table, strict)
285
285
286 if cmd in choice:
286 if cmd in choice:
287 return choice[cmd]
287 return choice[cmd]
288
288
289 if len(choice) > 1:
289 if len(choice) > 1:
290 clist = choice.keys()
290 clist = choice.keys()
291 clist.sort()
291 clist.sort()
292 raise error.AmbiguousCommand(cmd, clist)
292 raise error.AmbiguousCommand(cmd, clist)
293
293
294 if choice:
294 if choice:
295 return choice.values()[0]
295 return choice.values()[0]
296
296
297 raise error.UnknownCommand(cmd, allcmds)
297 raise error.UnknownCommand(cmd, allcmds)
298
298
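# Editor's sketch (not part of the original module): how findcmd() resolves
# command abbreviations via findpossible(). The table below is hypothetical;
# real callers pass the global command table (e.g. commands.table).
def _findcmd_sketch():
    table = {'^status|st': (None, [], ''), 'summary|sum': (None, [], '')}
    # exact alias or unambiguous prefix -> (aliases, table entry)
    aliases, entry = findcmd('st', table, strict=False)
    assert aliases == ['status', 'st']
    # 's' prefixes both commands, so the lookup is ambiguous
    try:
        findcmd('s', table, strict=False)
    except error.AmbiguousCommand:
        pass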
299 def findrepo(p):
299 def findrepo(p):
300 while not os.path.isdir(os.path.join(p, ".hg")):
300 while not os.path.isdir(os.path.join(p, ".hg")):
301 oldp, p = p, os.path.dirname(p)
301 oldp, p = p, os.path.dirname(p)
302 if p == oldp:
302 if p == oldp:
303 return None
303 return None
304
304
305 return p
305 return p
306
306
307 def bailifchanged(repo, merge=True):
307 def bailifchanged(repo, merge=True):
308 if merge and repo.dirstate.p2() != nullid:
308 if merge and repo.dirstate.p2() != nullid:
309 raise error.Abort(_('outstanding uncommitted merge'))
309 raise error.Abort(_('outstanding uncommitted merge'))
310 modified, added, removed, deleted = repo.status()[:4]
310 modified, added, removed, deleted = repo.status()[:4]
311 if modified or added or removed or deleted:
311 if modified or added or removed or deleted:
312 raise error.Abort(_('uncommitted changes'))
312 raise error.Abort(_('uncommitted changes'))
313 ctx = repo[None]
313 ctx = repo[None]
314 for s in sorted(ctx.substate):
314 for s in sorted(ctx.substate):
315 ctx.sub(s).bailifchanged()
315 ctx.sub(s).bailifchanged()
316
316
317 def logmessage(ui, opts):
317 def logmessage(ui, opts):
318     """get the log message according to the -m and -l options"""
319 message = opts.get('message')
319 message = opts.get('message')
320 logfile = opts.get('logfile')
320 logfile = opts.get('logfile')
321
321
322 if message and logfile:
322 if message and logfile:
323 raise error.Abort(_('options --message and --logfile are mutually '
323 raise error.Abort(_('options --message and --logfile are mutually '
324 'exclusive'))
324 'exclusive'))
325 if not message and logfile:
325 if not message and logfile:
326 try:
326 try:
327 if logfile == '-':
327 if logfile == '-':
328 message = ui.fin.read()
328 message = ui.fin.read()
329 else:
329 else:
330 message = '\n'.join(util.readfile(logfile).splitlines())
330 message = '\n'.join(util.readfile(logfile).splitlines())
331 except IOError as inst:
331 except IOError as inst:
332 raise error.Abort(_("can't read commit message '%s': %s") %
332 raise error.Abort(_("can't read commit message '%s': %s") %
333 (logfile, inst.strerror))
333 (logfile, inst.strerror))
334 return message
334 return message
335
335
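# Editor's sketch: logmessage() prefers --message, falls back to --logfile,
# and reads stdin when the logfile is '-'. The opts dict and filename below
# are hypothetical.
def _logmessage_sketch(ui):
    opts = {'message': '', 'logfile': 'commit-msg.txt'}
    # returns the file contents; passing both options raises error.Abort
    return logmessage(ui, opts)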
336 def mergeeditform(ctxorbool, baseformname):
336 def mergeeditform(ctxorbool, baseformname):
337     """return appropriate editform name (referencing a committemplate)
338
339     'ctxorbool' is either a ctx to be committed, or a bool indicating whether
340     a merge is being committed.
341
342     This returns baseformname with '.merge' appended if it is a merge,
343     otherwise '.normal' is appended.
344     """
345 if isinstance(ctxorbool, bool):
345 if isinstance(ctxorbool, bool):
346 if ctxorbool:
346 if ctxorbool:
347 return baseformname + ".merge"
347 return baseformname + ".merge"
348 elif 1 < len(ctxorbool.parents()):
348 elif 1 < len(ctxorbool.parents()):
349 return baseformname + ".merge"
349 return baseformname + ".merge"
350
350
351 return baseformname + ".normal"
351 return baseformname + ".normal"
352
352
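# Editor's sketch of the names produced by mergeeditform() above:
#   mergeeditform(True, 'import.normal')  -> 'import.normal.merge'
#   mergeeditform(False, 'import.normal') -> 'import.normal.normal'
#   mergeeditform(repo[None], 'commit')   -> 'commit.merge' while a merge is
#   in progress (two parents), otherwise 'commit.normal'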
353 def getcommiteditor(edit=False, finishdesc=None, extramsg=None,
353 def getcommiteditor(edit=False, finishdesc=None, extramsg=None,
354 editform='', **opts):
354 editform='', **opts):
355     """get appropriate commit message editor according to '--edit' option
356
357     'finishdesc' is a function to be called with the edited commit message
358     (= 'description' of the new changeset) just after editing, but
359     before the check for emptiness. It should return the actual text to be
360     stored in history. This allows the description to be changed before
361     it is stored.
362
363     'extramsg' is an extra message to be shown in the editor instead of
364     the 'Leave message empty to abort commit' line. The 'HG: ' prefix and
365     EOL are added automatically.
366
367     'editform' is a dot-separated list of names, to distinguish
368     the purpose of commit text editing.
369
370     'getcommiteditor' returns 'commitforceeditor' regardless of
371     'edit', if one of 'finishdesc' or 'extramsg' is specified, because
372     they are specific to MQ usage.
373     """
374 if edit or finishdesc or extramsg:
374 if edit or finishdesc or extramsg:
375 return lambda r, c, s: commitforceeditor(r, c, s,
375 return lambda r, c, s: commitforceeditor(r, c, s,
376 finishdesc=finishdesc,
376 finishdesc=finishdesc,
377 extramsg=extramsg,
377 extramsg=extramsg,
378 editform=editform)
378 editform=editform)
379 elif editform:
379 elif editform:
380 return lambda r, c, s: commiteditor(r, c, s, editform=editform)
380 return lambda r, c, s: commiteditor(r, c, s, editform=editform)
381 else:
381 else:
382 return commiteditor
382 return commiteditor
383
383
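# Editor's sketch: typical call sites obtain the editor callable like this
# (the editform values are hypothetical) and later pass it to repo.commit():
#   editor = getcommiteditor(edit=True, editform='commit.normal')
#   # -> always opens the editor (commitforceeditor)
#   editor = getcommiteditor(editform='histedit.fold', **opts)
#   # -> commiteditor, which normally opens the editor only when the
#   #    description is empty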
384 def loglimit(opts):
384 def loglimit(opts):
385 """get the log limit according to option -l/--limit"""
385 """get the log limit according to option -l/--limit"""
386 limit = opts.get('limit')
386 limit = opts.get('limit')
387 if limit:
387 if limit:
388 try:
388 try:
389 limit = int(limit)
389 limit = int(limit)
390 except ValueError:
390 except ValueError:
391 raise error.Abort(_('limit must be a positive integer'))
391 raise error.Abort(_('limit must be a positive integer'))
392 if limit <= 0:
392 if limit <= 0:
393 raise error.Abort(_('limit must be positive'))
393 raise error.Abort(_('limit must be positive'))
394 else:
394 else:
395 limit = None
395 limit = None
396 return limit
396 return limit
397
397
398 def makefilename(repo, pat, node, desc=None,
398 def makefilename(repo, pat, node, desc=None,
399 total=None, seqno=None, revwidth=None, pathname=None):
399 total=None, seqno=None, revwidth=None, pathname=None):
400 node_expander = {
400 node_expander = {
401 'H': lambda: hex(node),
401 'H': lambda: hex(node),
402 'R': lambda: str(repo.changelog.rev(node)),
402 'R': lambda: str(repo.changelog.rev(node)),
403 'h': lambda: short(node),
403 'h': lambda: short(node),
404 'm': lambda: re.sub('[^\w]', '_', str(desc))
404 'm': lambda: re.sub('[^\w]', '_', str(desc))
405 }
405 }
406 expander = {
406 expander = {
407 '%': lambda: '%',
407 '%': lambda: '%',
408 'b': lambda: os.path.basename(repo.root),
408 'b': lambda: os.path.basename(repo.root),
409 }
409 }
410
410
411 try:
411 try:
412 if node:
412 if node:
413 expander.update(node_expander)
413 expander.update(node_expander)
414 if node:
414 if node:
415 expander['r'] = (lambda:
415 expander['r'] = (lambda:
416 str(repo.changelog.rev(node)).zfill(revwidth or 0))
416 str(repo.changelog.rev(node)).zfill(revwidth or 0))
417 if total is not None:
417 if total is not None:
418 expander['N'] = lambda: str(total)
418 expander['N'] = lambda: str(total)
419 if seqno is not None:
419 if seqno is not None:
420 expander['n'] = lambda: str(seqno)
420 expander['n'] = lambda: str(seqno)
421 if total is not None and seqno is not None:
421 if total is not None and seqno is not None:
422 expander['n'] = lambda: str(seqno).zfill(len(str(total)))
422 expander['n'] = lambda: str(seqno).zfill(len(str(total)))
423 if pathname is not None:
423 if pathname is not None:
424 expander['s'] = lambda: os.path.basename(pathname)
424 expander['s'] = lambda: os.path.basename(pathname)
425 expander['d'] = lambda: os.path.dirname(pathname) or '.'
425 expander['d'] = lambda: os.path.dirname(pathname) or '.'
426 expander['p'] = lambda: pathname
426 expander['p'] = lambda: pathname
427
427
428 newname = []
428 newname = []
429 patlen = len(pat)
429 patlen = len(pat)
430 i = 0
430 i = 0
431 while i < patlen:
431 while i < patlen:
432 c = pat[i]
432 c = pat[i]
433 if c == '%':
433 if c == '%':
434 i += 1
434 i += 1
435 c = pat[i]
435 c = pat[i]
436 c = expander[c]()
436 c = expander[c]()
437 newname.append(c)
437 newname.append(c)
438 i += 1
438 i += 1
439 return ''.join(newname)
439 return ''.join(newname)
440 except KeyError as inst:
440 except KeyError as inst:
441 raise error.Abort(_("invalid format spec '%%%s' in output filename") %
441 raise error.Abort(_("invalid format spec '%%%s' in output filename") %
442 inst.args[0])
442 inst.args[0])
443
443
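# Editor's sketch of the format expansion implemented above (values are
# hypothetical):
#   makefilename(repo, '%b-r%R-%h.patch', node)
#   -> 'myrepo-r42-a1b2c3d4e5f6.patch': %b is the repository basename, %R the
#      revision number and %h the short hash; %n/%N expand to the sequence
#      number and total, and are only available when seqno/total are passed.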
444 def makefileobj(repo, pat, node=None, desc=None, total=None,
444 def makefileobj(repo, pat, node=None, desc=None, total=None,
445 seqno=None, revwidth=None, mode='wb', modemap=None,
445 seqno=None, revwidth=None, mode='wb', modemap=None,
446 pathname=None):
446 pathname=None):
447
447
448 writable = mode not in ('r', 'rb')
448 writable = mode not in ('r', 'rb')
449
449
450 if not pat or pat == '-':
450 if not pat or pat == '-':
451 if writable:
451 if writable:
452 fp = repo.ui.fout
452 fp = repo.ui.fout
453 else:
453 else:
454 fp = repo.ui.fin
454 fp = repo.ui.fin
455 if util.safehasattr(fp, 'fileno'):
455 if util.safehasattr(fp, 'fileno'):
456 return os.fdopen(os.dup(fp.fileno()), mode)
456 return os.fdopen(os.dup(fp.fileno()), mode)
457 else:
457 else:
458 # if this fp can't be duped properly, return
458 # if this fp can't be duped properly, return
459 # a dummy object that can be closed
459 # a dummy object that can be closed
460 class wrappedfileobj(object):
460 class wrappedfileobj(object):
461 noop = lambda x: None
461 noop = lambda x: None
462 def __init__(self, f):
462 def __init__(self, f):
463 self.f = f
463 self.f = f
464 def __getattr__(self, attr):
464 def __getattr__(self, attr):
465 if attr == 'close':
465 if attr == 'close':
466 return self.noop
466 return self.noop
467 else:
467 else:
468 return getattr(self.f, attr)
468 return getattr(self.f, attr)
469
469
470 return wrappedfileobj(fp)
470 return wrappedfileobj(fp)
471 if util.safehasattr(pat, 'write') and writable:
471 if util.safehasattr(pat, 'write') and writable:
472 return pat
472 return pat
473 if util.safehasattr(pat, 'read') and 'r' in mode:
473 if util.safehasattr(pat, 'read') and 'r' in mode:
474 return pat
474 return pat
475 fn = makefilename(repo, pat, node, desc, total, seqno, revwidth, pathname)
475 fn = makefilename(repo, pat, node, desc, total, seqno, revwidth, pathname)
476 if modemap is not None:
476 if modemap is not None:
477 mode = modemap.get(fn, mode)
477 mode = modemap.get(fn, mode)
478 if mode == 'wb':
478 if mode == 'wb':
479 modemap[fn] = 'ab'
479 modemap[fn] = 'ab'
480 return open(fn, mode)
480 return open(fn, mode)
481
481
482 def openrevlog(repo, cmd, file_, opts):
482 def openrevlog(repo, cmd, file_, opts):
483 """opens the changelog, manifest, a filelog or a given revlog"""
483 """opens the changelog, manifest, a filelog or a given revlog"""
484 cl = opts['changelog']
484 cl = opts['changelog']
485 mf = opts['manifest']
485 mf = opts['manifest']
486 dir = opts['dir']
486 dir = opts['dir']
487 msg = None
487 msg = None
488 if cl and mf:
488 if cl and mf:
489 msg = _('cannot specify --changelog and --manifest at the same time')
489 msg = _('cannot specify --changelog and --manifest at the same time')
490 elif cl and dir:
490 elif cl and dir:
491 msg = _('cannot specify --changelog and --dir at the same time')
491 msg = _('cannot specify --changelog and --dir at the same time')
492 elif cl or mf:
492 elif cl or mf:
493 if file_:
493 if file_:
494 msg = _('cannot specify filename with --changelog or --manifest')
494 msg = _('cannot specify filename with --changelog or --manifest')
495 elif not repo:
495 elif not repo:
496 msg = _('cannot specify --changelog or --manifest or --dir '
496 msg = _('cannot specify --changelog or --manifest or --dir '
497 'without a repository')
497 'without a repository')
498 if msg:
498 if msg:
499 raise error.Abort(msg)
499 raise error.Abort(msg)
500
500
501 r = None
501 r = None
502 if repo:
502 if repo:
503 if cl:
503 if cl:
504 r = repo.unfiltered().changelog
504 r = repo.unfiltered().changelog
505 elif dir:
505 elif dir:
506 if 'treemanifest' not in repo.requirements:
506 if 'treemanifest' not in repo.requirements:
507 raise error.Abort(_("--dir can only be used on repos with "
507 raise error.Abort(_("--dir can only be used on repos with "
508 "treemanifest enabled"))
508 "treemanifest enabled"))
509 dirlog = repo.dirlog(file_)
509 dirlog = repo.dirlog(file_)
510 if len(dirlog):
510 if len(dirlog):
511 r = dirlog
511 r = dirlog
512 elif mf:
512 elif mf:
513 r = repo.manifest
513 r = repo.manifest
514 elif file_:
514 elif file_:
515 filelog = repo.file(file_)
515 filelog = repo.file(file_)
516 if len(filelog):
516 if len(filelog):
517 r = filelog
517 r = filelog
518 if not r:
518 if not r:
519 if not file_:
519 if not file_:
520 raise error.CommandError(cmd, _('invalid arguments'))
520 raise error.CommandError(cmd, _('invalid arguments'))
521 if not os.path.isfile(file_):
521 if not os.path.isfile(file_):
522 raise error.Abort(_("revlog '%s' not found") % file_)
522 raise error.Abort(_("revlog '%s' not found") % file_)
523 r = revlog.revlog(scmutil.opener(os.getcwd(), audit=False),
523 r = revlog.revlog(scmutil.opener(os.getcwd(), audit=False),
524 file_[:-2] + ".i")
524 file_[:-2] + ".i")
525 return r
525 return r
526
526
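# Editor's sketch: the debug commands call openrevlog() roughly as
#   openrevlog(repo, 'debugdata', file_,
#              {'changelog': False, 'manifest': False, 'dir': False})
# which returns the filelog for file_; setting 'changelog' or 'manifest'
# selects those revlogs instead, and file_ may name a bare revlog '.i' file
# when no repository is available.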
527 def copy(ui, repo, pats, opts, rename=False):
527 def copy(ui, repo, pats, opts, rename=False):
528 # called with the repo lock held
528 # called with the repo lock held
529 #
529 #
530 # hgsep => pathname that uses "/" to separate directories
530 # hgsep => pathname that uses "/" to separate directories
531 # ossep => pathname that uses os.sep to separate directories
531 # ossep => pathname that uses os.sep to separate directories
532 cwd = repo.getcwd()
532 cwd = repo.getcwd()
533 targets = {}
533 targets = {}
534 after = opts.get("after")
534 after = opts.get("after")
535 dryrun = opts.get("dry_run")
535 dryrun = opts.get("dry_run")
536 wctx = repo[None]
536 wctx = repo[None]
537
537
538 def walkpat(pat):
538 def walkpat(pat):
539 srcs = []
539 srcs = []
540 if after:
540 if after:
541 badstates = '?'
541 badstates = '?'
542 else:
542 else:
543 badstates = '?r'
543 badstates = '?r'
544 m = scmutil.match(repo[None], [pat], opts, globbed=True)
544 m = scmutil.match(repo[None], [pat], opts, globbed=True)
545 for abs in repo.walk(m):
545 for abs in repo.walk(m):
546 state = repo.dirstate[abs]
546 state = repo.dirstate[abs]
547 rel = m.rel(abs)
547 rel = m.rel(abs)
548 exact = m.exact(abs)
548 exact = m.exact(abs)
549 if state in badstates:
549 if state in badstates:
550 if exact and state == '?':
550 if exact and state == '?':
551 ui.warn(_('%s: not copying - file is not managed\n') % rel)
551 ui.warn(_('%s: not copying - file is not managed\n') % rel)
552 if exact and state == 'r':
552 if exact and state == 'r':
553 ui.warn(_('%s: not copying - file has been marked for'
553 ui.warn(_('%s: not copying - file has been marked for'
554 ' remove\n') % rel)
554 ' remove\n') % rel)
555 continue
555 continue
556 # abs: hgsep
556 # abs: hgsep
557 # rel: ossep
557 # rel: ossep
558 srcs.append((abs, rel, exact))
558 srcs.append((abs, rel, exact))
559 return srcs
559 return srcs
560
560
561 # abssrc: hgsep
561 # abssrc: hgsep
562 # relsrc: ossep
562 # relsrc: ossep
563 # otarget: ossep
563 # otarget: ossep
564 def copyfile(abssrc, relsrc, otarget, exact):
564 def copyfile(abssrc, relsrc, otarget, exact):
565 abstarget = pathutil.canonpath(repo.root, cwd, otarget)
565 abstarget = pathutil.canonpath(repo.root, cwd, otarget)
566 if '/' in abstarget:
566 if '/' in abstarget:
567 # We cannot normalize abstarget itself, this would prevent
567 # We cannot normalize abstarget itself, this would prevent
568 # case only renames, like a => A.
568 # case only renames, like a => A.
569 abspath, absname = abstarget.rsplit('/', 1)
569 abspath, absname = abstarget.rsplit('/', 1)
570 abstarget = repo.dirstate.normalize(abspath) + '/' + absname
570 abstarget = repo.dirstate.normalize(abspath) + '/' + absname
571 reltarget = repo.pathto(abstarget, cwd)
571 reltarget = repo.pathto(abstarget, cwd)
572 target = repo.wjoin(abstarget)
572 target = repo.wjoin(abstarget)
573 src = repo.wjoin(abssrc)
573 src = repo.wjoin(abssrc)
574 state = repo.dirstate[abstarget]
574 state = repo.dirstate[abstarget]
575
575
576 scmutil.checkportable(ui, abstarget)
576 scmutil.checkportable(ui, abstarget)
577
577
578 # check for collisions
578 # check for collisions
579 prevsrc = targets.get(abstarget)
579 prevsrc = targets.get(abstarget)
580 if prevsrc is not None:
580 if prevsrc is not None:
581 ui.warn(_('%s: not overwriting - %s collides with %s\n') %
581 ui.warn(_('%s: not overwriting - %s collides with %s\n') %
582 (reltarget, repo.pathto(abssrc, cwd),
582 (reltarget, repo.pathto(abssrc, cwd),
583 repo.pathto(prevsrc, cwd)))
583 repo.pathto(prevsrc, cwd)))
584 return
584 return
585
585
586 # check for overwrites
586 # check for overwrites
587 exists = os.path.lexists(target)
587 exists = os.path.lexists(target)
588 samefile = False
588 samefile = False
589 if exists and abssrc != abstarget:
589 if exists and abssrc != abstarget:
590 if (repo.dirstate.normalize(abssrc) ==
590 if (repo.dirstate.normalize(abssrc) ==
591 repo.dirstate.normalize(abstarget)):
591 repo.dirstate.normalize(abstarget)):
592 if not rename:
592 if not rename:
593 ui.warn(_("%s: can't copy - same file\n") % reltarget)
593 ui.warn(_("%s: can't copy - same file\n") % reltarget)
594 return
594 return
595 exists = False
595 exists = False
596 samefile = True
596 samefile = True
597
597
598 if not after and exists or after and state in 'mn':
598 if not after and exists or after and state in 'mn':
599 if not opts['force']:
599 if not opts['force']:
600 ui.warn(_('%s: not overwriting - file exists\n') %
600 ui.warn(_('%s: not overwriting - file exists\n') %
601 reltarget)
601 reltarget)
602 return
602 return
603
603
604 if after:
604 if after:
605 if not exists:
605 if not exists:
606 if rename:
606 if rename:
607 ui.warn(_('%s: not recording move - %s does not exist\n') %
607 ui.warn(_('%s: not recording move - %s does not exist\n') %
608 (relsrc, reltarget))
608 (relsrc, reltarget))
609 else:
609 else:
610 ui.warn(_('%s: not recording copy - %s does not exist\n') %
610 ui.warn(_('%s: not recording copy - %s does not exist\n') %
611 (relsrc, reltarget))
611 (relsrc, reltarget))
612 return
612 return
613 elif not dryrun:
613 elif not dryrun:
614 try:
614 try:
615 if exists:
615 if exists:
616 os.unlink(target)
616 os.unlink(target)
617 targetdir = os.path.dirname(target) or '.'
617 targetdir = os.path.dirname(target) or '.'
618 if not os.path.isdir(targetdir):
618 if not os.path.isdir(targetdir):
619 os.makedirs(targetdir)
619 os.makedirs(targetdir)
620 if samefile:
620 if samefile:
621 tmp = target + "~hgrename"
621 tmp = target + "~hgrename"
622 os.rename(src, tmp)
622 os.rename(src, tmp)
623 os.rename(tmp, target)
623 os.rename(tmp, target)
624 else:
624 else:
625 util.copyfile(src, target)
625 util.copyfile(src, target)
626 srcexists = True
626 srcexists = True
627 except IOError as inst:
627 except IOError as inst:
628 if inst.errno == errno.ENOENT:
628 if inst.errno == errno.ENOENT:
629 ui.warn(_('%s: deleted in working directory\n') % relsrc)
629 ui.warn(_('%s: deleted in working directory\n') % relsrc)
630 srcexists = False
630 srcexists = False
631 else:
631 else:
632 ui.warn(_('%s: cannot copy - %s\n') %
632 ui.warn(_('%s: cannot copy - %s\n') %
633 (relsrc, inst.strerror))
633 (relsrc, inst.strerror))
634 return True # report a failure
634 return True # report a failure
635
635
636 if ui.verbose or not exact:
636 if ui.verbose or not exact:
637 if rename:
637 if rename:
638 ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
638 ui.status(_('moving %s to %s\n') % (relsrc, reltarget))
639 else:
639 else:
640 ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
640 ui.status(_('copying %s to %s\n') % (relsrc, reltarget))
641
641
642 targets[abstarget] = abssrc
642 targets[abstarget] = abssrc
643
643
644 # fix up dirstate
644 # fix up dirstate
645 scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
645 scmutil.dirstatecopy(ui, repo, wctx, abssrc, abstarget,
646 dryrun=dryrun, cwd=cwd)
646 dryrun=dryrun, cwd=cwd)
647 if rename and not dryrun:
647 if rename and not dryrun:
648 if not after and srcexists and not samefile:
648 if not after and srcexists and not samefile:
649 util.unlinkpath(repo.wjoin(abssrc))
649 util.unlinkpath(repo.wjoin(abssrc))
650 wctx.forget([abssrc])
650 wctx.forget([abssrc])
651
651
652 # pat: ossep
652 # pat: ossep
653    # dest: ossep
654 # srcs: list of (hgsep, hgsep, ossep, bool)
654 # srcs: list of (hgsep, hgsep, ossep, bool)
655 # return: function that takes hgsep and returns ossep
655 # return: function that takes hgsep and returns ossep
656 def targetpathfn(pat, dest, srcs):
656 def targetpathfn(pat, dest, srcs):
657 if os.path.isdir(pat):
657 if os.path.isdir(pat):
658 abspfx = pathutil.canonpath(repo.root, cwd, pat)
658 abspfx = pathutil.canonpath(repo.root, cwd, pat)
659 abspfx = util.localpath(abspfx)
659 abspfx = util.localpath(abspfx)
660 if destdirexists:
660 if destdirexists:
661 striplen = len(os.path.split(abspfx)[0])
661 striplen = len(os.path.split(abspfx)[0])
662 else:
662 else:
663 striplen = len(abspfx)
663 striplen = len(abspfx)
664 if striplen:
664 if striplen:
665 striplen += len(os.sep)
665 striplen += len(os.sep)
666 res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
666 res = lambda p: os.path.join(dest, util.localpath(p)[striplen:])
667 elif destdirexists:
667 elif destdirexists:
668 res = lambda p: os.path.join(dest,
668 res = lambda p: os.path.join(dest,
669 os.path.basename(util.localpath(p)))
669 os.path.basename(util.localpath(p)))
670 else:
670 else:
671 res = lambda p: dest
671 res = lambda p: dest
672 return res
672 return res
673
673
674 # pat: ossep
674 # pat: ossep
675    # dest: ossep
676 # srcs: list of (hgsep, hgsep, ossep, bool)
676 # srcs: list of (hgsep, hgsep, ossep, bool)
677 # return: function that takes hgsep and returns ossep
677 # return: function that takes hgsep and returns ossep
678 def targetpathafterfn(pat, dest, srcs):
678 def targetpathafterfn(pat, dest, srcs):
679 if matchmod.patkind(pat):
679 if matchmod.patkind(pat):
680 # a mercurial pattern
680 # a mercurial pattern
681 res = lambda p: os.path.join(dest,
681 res = lambda p: os.path.join(dest,
682 os.path.basename(util.localpath(p)))
682 os.path.basename(util.localpath(p)))
683 else:
683 else:
684 abspfx = pathutil.canonpath(repo.root, cwd, pat)
684 abspfx = pathutil.canonpath(repo.root, cwd, pat)
685 if len(abspfx) < len(srcs[0][0]):
685 if len(abspfx) < len(srcs[0][0]):
686 # A directory. Either the target path contains the last
686 # A directory. Either the target path contains the last
687 # component of the source path or it does not.
687 # component of the source path or it does not.
688 def evalpath(striplen):
688 def evalpath(striplen):
689 score = 0
689 score = 0
690 for s in srcs:
690 for s in srcs:
691 t = os.path.join(dest, util.localpath(s[0])[striplen:])
691 t = os.path.join(dest, util.localpath(s[0])[striplen:])
692 if os.path.lexists(t):
692 if os.path.lexists(t):
693 score += 1
693 score += 1
694 return score
694 return score
695
695
696 abspfx = util.localpath(abspfx)
696 abspfx = util.localpath(abspfx)
697 striplen = len(abspfx)
697 striplen = len(abspfx)
698 if striplen:
698 if striplen:
699 striplen += len(os.sep)
699 striplen += len(os.sep)
700 if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
700 if os.path.isdir(os.path.join(dest, os.path.split(abspfx)[1])):
701 score = evalpath(striplen)
701 score = evalpath(striplen)
702 striplen1 = len(os.path.split(abspfx)[0])
702 striplen1 = len(os.path.split(abspfx)[0])
703 if striplen1:
703 if striplen1:
704 striplen1 += len(os.sep)
704 striplen1 += len(os.sep)
705 if evalpath(striplen1) > score:
705 if evalpath(striplen1) > score:
706 striplen = striplen1
706 striplen = striplen1
707 res = lambda p: os.path.join(dest,
707 res = lambda p: os.path.join(dest,
708 util.localpath(p)[striplen:])
708 util.localpath(p)[striplen:])
709 else:
709 else:
710 # a file
710 # a file
711 if destdirexists:
711 if destdirexists:
712 res = lambda p: os.path.join(dest,
712 res = lambda p: os.path.join(dest,
713 os.path.basename(util.localpath(p)))
713 os.path.basename(util.localpath(p)))
714 else:
714 else:
715 res = lambda p: dest
715 res = lambda p: dest
716 return res
716 return res
717
717
718 pats = scmutil.expandpats(pats)
718 pats = scmutil.expandpats(pats)
719 if not pats:
719 if not pats:
720 raise error.Abort(_('no source or destination specified'))
720 raise error.Abort(_('no source or destination specified'))
721 if len(pats) == 1:
721 if len(pats) == 1:
722 raise error.Abort(_('no destination specified'))
722 raise error.Abort(_('no destination specified'))
723 dest = pats.pop()
723 dest = pats.pop()
724 destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
724 destdirexists = os.path.isdir(dest) and not os.path.islink(dest)
725 if not destdirexists:
725 if not destdirexists:
726 if len(pats) > 1 or matchmod.patkind(pats[0]):
726 if len(pats) > 1 or matchmod.patkind(pats[0]):
727 raise error.Abort(_('with multiple sources, destination must be an '
727 raise error.Abort(_('with multiple sources, destination must be an '
728 'existing directory'))
728 'existing directory'))
729 if util.endswithsep(dest):
729 if util.endswithsep(dest):
730 raise error.Abort(_('destination %s is not a directory') % dest)
730 raise error.Abort(_('destination %s is not a directory') % dest)
731
731
732 tfn = targetpathfn
732 tfn = targetpathfn
733 if after:
733 if after:
734 tfn = targetpathafterfn
734 tfn = targetpathafterfn
735 copylist = []
735 copylist = []
736 for pat in pats:
736 for pat in pats:
737 srcs = walkpat(pat)
737 srcs = walkpat(pat)
738 if not srcs:
738 if not srcs:
739 continue
739 continue
740 copylist.append((tfn(pat, dest, srcs), srcs))
740 copylist.append((tfn(pat, dest, srcs), srcs))
741 if not copylist:
741 if not copylist:
742 raise error.Abort(_('no files to copy'))
742 raise error.Abort(_('no files to copy'))
743
743
744 errors = 0
744 errors = 0
745 for targetpath, srcs in copylist:
745 for targetpath, srcs in copylist:
746 for abssrc, relsrc, exact in srcs:
746 for abssrc, relsrc, exact in srcs:
747 if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
747 if copyfile(abssrc, relsrc, targetpath(abssrc), exact):
748 errors += 1
748 errors += 1
749
749
750 if errors:
750 if errors:
751 ui.warn(_('(consider using --after)\n'))
751 ui.warn(_('(consider using --after)\n'))
752
752
753 return errors != 0
753 return errors != 0
754
754
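# Editor's sketch: commands.copy and commands.rename both funnel into the
# helper above; a rename is a copy that also forgets the source:
#   copy(ui, repo, ['old/a.py', 'lib/'], opts)                  # hg copy
#   copy(ui, repo, ['old/a.py', 'lib/a.py'], opts, rename=True) # hg rename
# The last element of 'pats' is always the destination; with several sources
# it must be an existing directory.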
755 def service(opts, parentfn=None, initfn=None, runfn=None, logfile=None,
755 def service(opts, parentfn=None, initfn=None, runfn=None, logfile=None,
756 runargs=None, appendpid=False):
756 runargs=None, appendpid=False):
757 '''Run a command as a service.'''
757 '''Run a command as a service.'''
758
758
759 def writepid(pid):
759 def writepid(pid):
760 if opts['pid_file']:
760 if opts['pid_file']:
761 if appendpid:
761 if appendpid:
762 mode = 'a'
762 mode = 'a'
763 else:
763 else:
764 mode = 'w'
764 mode = 'w'
765 fp = open(opts['pid_file'], mode)
765 fp = open(opts['pid_file'], mode)
766 fp.write(str(pid) + '\n')
766 fp.write(str(pid) + '\n')
767 fp.close()
767 fp.close()
768
768
769 if opts['daemon'] and not opts['daemon_pipefds']:
769 if opts['daemon'] and not opts['daemon_pipefds']:
770 # Signal child process startup with file removal
770 # Signal child process startup with file removal
771 lockfd, lockpath = tempfile.mkstemp(prefix='hg-service-')
771 lockfd, lockpath = tempfile.mkstemp(prefix='hg-service-')
772 os.close(lockfd)
772 os.close(lockfd)
773 try:
773 try:
774 if not runargs:
774 if not runargs:
775 runargs = util.hgcmd() + sys.argv[1:]
775 runargs = util.hgcmd() + sys.argv[1:]
776 runargs.append('--daemon-pipefds=%s' % lockpath)
776 runargs.append('--daemon-pipefds=%s' % lockpath)
777 # Don't pass --cwd to the child process, because we've already
777 # Don't pass --cwd to the child process, because we've already
778 # changed directory.
778 # changed directory.
779 for i in xrange(1, len(runargs)):
779 for i in xrange(1, len(runargs)):
780 if runargs[i].startswith('--cwd='):
780 if runargs[i].startswith('--cwd='):
781 del runargs[i]
781 del runargs[i]
782 break
782 break
783 elif runargs[i].startswith('--cwd'):
783 elif runargs[i].startswith('--cwd'):
784 del runargs[i:i + 2]
784 del runargs[i:i + 2]
785 break
785 break
786 def condfn():
786 def condfn():
787 return not os.path.exists(lockpath)
787 return not os.path.exists(lockpath)
788 pid = util.rundetached(runargs, condfn)
788 pid = util.rundetached(runargs, condfn)
789 if pid < 0:
789 if pid < 0:
790 raise error.Abort(_('child process failed to start'))
790 raise error.Abort(_('child process failed to start'))
791 writepid(pid)
791 writepid(pid)
792 finally:
792 finally:
793 try:
793 try:
794 os.unlink(lockpath)
794 os.unlink(lockpath)
795 except OSError as e:
795 except OSError as e:
796 if e.errno != errno.ENOENT:
796 if e.errno != errno.ENOENT:
797 raise
797 raise
798 if parentfn:
798 if parentfn:
799 return parentfn(pid)
799 return parentfn(pid)
800 else:
800 else:
801 return
801 return
802
802
803 if initfn:
803 if initfn:
804 initfn()
804 initfn()
805
805
806 if not opts['daemon']:
806 if not opts['daemon']:
807 writepid(os.getpid())
807 writepid(os.getpid())
808
808
809 if opts['daemon_pipefds']:
809 if opts['daemon_pipefds']:
810 lockpath = opts['daemon_pipefds']
810 lockpath = opts['daemon_pipefds']
811 try:
811 try:
812 os.setsid()
812 os.setsid()
813 except AttributeError:
813 except AttributeError:
814 pass
814 pass
815 os.unlink(lockpath)
815 os.unlink(lockpath)
816 util.hidewindow()
816 util.hidewindow()
817 sys.stdout.flush()
817 sys.stdout.flush()
818 sys.stderr.flush()
818 sys.stderr.flush()
819
819
820 nullfd = os.open(os.devnull, os.O_RDWR)
820 nullfd = os.open(os.devnull, os.O_RDWR)
821 logfilefd = nullfd
821 logfilefd = nullfd
822 if logfile:
822 if logfile:
823 logfilefd = os.open(logfile, os.O_RDWR | os.O_CREAT | os.O_APPEND)
823 logfilefd = os.open(logfile, os.O_RDWR | os.O_CREAT | os.O_APPEND)
824 os.dup2(nullfd, 0)
824 os.dup2(nullfd, 0)
825 os.dup2(logfilefd, 1)
825 os.dup2(logfilefd, 1)
826 os.dup2(logfilefd, 2)
826 os.dup2(logfilefd, 2)
827 if nullfd not in (0, 1, 2):
827 if nullfd not in (0, 1, 2):
828 os.close(nullfd)
828 os.close(nullfd)
829 if logfile and logfilefd not in (0, 1, 2):
829 if logfile and logfilefd not in (0, 1, 2):
830 os.close(logfilefd)
830 os.close(logfilefd)
831
831
832 if runfn:
832 if runfn:
833 return runfn()
833 return runfn()
834
834
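# Editor's sketch: 'hg serve'-style commands drive the helper above roughly as
#   service(opts, initfn=setup, runfn=mainloop, logfile=opts.get('errorlog'))
# where setup/mainloop are hypothetical callables; with --daemon the process
# re-executes itself detached and signals successful startup by removing the
# temporary lock file created above.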
835 ## facility to let extensions process additional data into an import patch
836 # list of identifiers to be executed in order
837 extrapreimport = [] # run before commit
837 extrapreimport = [] # run before commit
838 extrapostimport = [] # run after commit
838 extrapostimport = [] # run after commit
839 # mapping from identifier to actual import function
839 # mapping from identifier to actual import function
840 #
840 #
841 # 'preimport' are run before the commit is made and are provided the following
841 # 'preimport' are run before the commit is made and are provided the following
842 # arguments:
842 # arguments:
843 # - repo: the localrepository instance,
843 # - repo: the localrepository instance,
844 # - patchdata: data extracted from patch header (cf m.patch.patchheadermap),
844 # - patchdata: data extracted from patch header (cf m.patch.patchheadermap),
845 # - extra: the future extra dictionary of the changeset, please mutate it,
845 # - extra: the future extra dictionary of the changeset, please mutate it,
846 # - opts: the import options.
846 # - opts: the import options.
847 # XXX ideally, we would just pass a ctx ready to be computed, that would allow
848 # mutation of the in-memory commit and more. Feel free to rework the code to
849 # get there.
850 extrapreimportmap = {}
850 extrapreimportmap = {}
851 # 'postimport' functions are run after the commit is made and are passed the
852 # following argument:
853 # - ctx: the changectx created by import.
853 # - ctx: the changectx created by import.
854 extrapostimportmap = {}
854 extrapostimportmap = {}
855
855
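# Editor's sketch: how an extension could plug into the hooks above (the
# extension name 'myext' and the extra key are hypothetical):
def _example_preimport(repo, patchdata, extra, opts):
    # copy a field extracted by patch.extract() into the changeset's extra
    extra['myext_source'] = patchdata.get('nodeid') or ''
# an extension would then register it, typically from its uisetup():
#   extrapreimport.append('myext')
#   extrapreimportmap['myext'] = _example_preimport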
856 def tryimportone(ui, repo, hunk, parents, opts, msgs, updatefunc):
856 def tryimportone(ui, repo, hunk, parents, opts, msgs, updatefunc):
857 """Utility function used by commands.import to import a single patch
857 """Utility function used by commands.import to import a single patch
858
858
859     This function is explicitly defined here to help the evolve extension
860     wrap this part of the import logic.
861
862     The API is currently a bit ugly because it is a simple code translation
863     from the import command. Feel free to make it better.
864
865     :hunk: a patch (as a binary string)
866     :parents: nodes that will be parents of the created commit
867     :opts: the full dict of options passed to the import command
868     :msgs: list to save the commit message to.
869            (used in case we need to save it when failing)
870     :updatefunc: a function that updates a repo to a given node
871                  updatefunc(<repo>, <node>)
872     """
873 # avoid cycle context -> subrepo -> cmdutil
873 # avoid cycle context -> subrepo -> cmdutil
874 import context
874 import context
875 extractdata = patch.extract(ui, hunk)
875 extractdata = patch.extract(ui, hunk)
876 tmpname = extractdata.get('filename')
876 tmpname = extractdata.get('filename')
877 message = extractdata.get('message')
877 message = extractdata.get('message')
878 user = extractdata.get('user')
878 user = extractdata.get('user')
879 date = extractdata.get('date')
879 date = extractdata.get('date')
880 branch = extractdata.get('branch')
880 branch = extractdata.get('branch')
881 nodeid = extractdata.get('nodeid')
881 nodeid = extractdata.get('nodeid')
882 p1 = extractdata.get('p1')
882 p1 = extractdata.get('p1')
883 p2 = extractdata.get('p2')
883 p2 = extractdata.get('p2')
884
884
885 update = not opts.get('bypass')
885 update = not opts.get('bypass')
886 strip = opts["strip"]
886 strip = opts["strip"]
887 prefix = opts["prefix"]
887 prefix = opts["prefix"]
888 sim = float(opts.get('similarity') or 0)
888 sim = float(opts.get('similarity') or 0)
889 if not tmpname:
889 if not tmpname:
890 return (None, None, False)
890 return (None, None, False)
891 msg = _('applied to working directory')
891 msg = _('applied to working directory')
892
892
893 rejects = False
893 rejects = False
894
894
895 try:
895 try:
896 cmdline_message = logmessage(ui, opts)
896 cmdline_message = logmessage(ui, opts)
897 if cmdline_message:
897 if cmdline_message:
898 # pickup the cmdline msg
898 # pickup the cmdline msg
899 message = cmdline_message
899 message = cmdline_message
900 elif message:
900 elif message:
901 # pickup the patch msg
901 # pickup the patch msg
902 message = message.strip()
902 message = message.strip()
903 else:
903 else:
904 # launch the editor
904 # launch the editor
905 message = None
905 message = None
906 ui.debug('message:\n%s\n' % message)
906 ui.debug('message:\n%s\n' % message)
907
907
908 if len(parents) == 1:
908 if len(parents) == 1:
909 parents.append(repo[nullid])
909 parents.append(repo[nullid])
910 if opts.get('exact'):
910 if opts.get('exact'):
911 if not nodeid or not p1:
911 if not nodeid or not p1:
912 raise error.Abort(_('not a Mercurial patch'))
912 raise error.Abort(_('not a Mercurial patch'))
913 p1 = repo[p1]
913 p1 = repo[p1]
914 p2 = repo[p2 or nullid]
914 p2 = repo[p2 or nullid]
915 elif p2:
915 elif p2:
916 try:
916 try:
917 p1 = repo[p1]
917 p1 = repo[p1]
918 p2 = repo[p2]
918 p2 = repo[p2]
919 # Without any options, consider p2 only if the
919 # Without any options, consider p2 only if the
920 # patch is being applied on top of the recorded
920 # patch is being applied on top of the recorded
921 # first parent.
921 # first parent.
922 if p1 != parents[0]:
922 if p1 != parents[0]:
923 p1 = parents[0]
923 p1 = parents[0]
924 p2 = repo[nullid]
924 p2 = repo[nullid]
925 except error.RepoError:
925 except error.RepoError:
926 p1, p2 = parents
926 p1, p2 = parents
927 if p2.node() == nullid:
927 if p2.node() == nullid:
928 ui.warn(_("warning: import the patch as a normal revision\n"
928 ui.warn(_("warning: import the patch as a normal revision\n"
929 "(use --exact to import the patch as a merge)\n"))
929 "(use --exact to import the patch as a merge)\n"))
930 else:
930 else:
931 p1, p2 = parents
931 p1, p2 = parents
932
932
933 n = None
933 n = None
934 if update:
934 if update:
935 if p1 != parents[0]:
935 if p1 != parents[0]:
936 updatefunc(repo, p1.node())
936 updatefunc(repo, p1.node())
937 if p2 != parents[1]:
937 if p2 != parents[1]:
938 repo.setparents(p1.node(), p2.node())
938 repo.setparents(p1.node(), p2.node())
939
939
940 if opts.get('exact') or opts.get('import_branch'):
940 if opts.get('exact') or opts.get('import_branch'):
941 repo.dirstate.setbranch(branch or 'default')
941 repo.dirstate.setbranch(branch or 'default')
942
942
943 partial = opts.get('partial', False)
943 partial = opts.get('partial', False)
944 files = set()
944 files = set()
945 try:
945 try:
946 patch.patch(ui, repo, tmpname, strip=strip, prefix=prefix,
946 patch.patch(ui, repo, tmpname, strip=strip, prefix=prefix,
947 files=files, eolmode=None, similarity=sim / 100.0)
947 files=files, eolmode=None, similarity=sim / 100.0)
948 except patch.PatchError as e:
948 except patch.PatchError as e:
949 if not partial:
949 if not partial:
950 raise error.Abort(str(e))
950 raise error.Abort(str(e))
951 if partial:
951 if partial:
952 rejects = True
952 rejects = True
953
953
954 files = list(files)
954 files = list(files)
955 if opts.get('no_commit'):
955 if opts.get('no_commit'):
956 if message:
956 if message:
957 msgs.append(message)
957 msgs.append(message)
958 else:
958 else:
959 if opts.get('exact') or p2:
959 if opts.get('exact') or p2:
960 # If you got here, you either use --force and know what
960 # If you got here, you either use --force and know what
961 # you are doing or used --exact or a merge patch while
961 # you are doing or used --exact or a merge patch while
962 # being updated to its first parent.
962 # being updated to its first parent.
963 m = None
963 m = None
964 else:
964 else:
965 m = scmutil.matchfiles(repo, files or [])
965 m = scmutil.matchfiles(repo, files or [])
966 editform = mergeeditform(repo[None], 'import.normal')
966 editform = mergeeditform(repo[None], 'import.normal')
967 if opts.get('exact'):
967 if opts.get('exact'):
968 editor = None
968 editor = None
969 else:
969 else:
970 editor = getcommiteditor(editform=editform, **opts)
970 editor = getcommiteditor(editform=editform, **opts)
971 allowemptyback = repo.ui.backupconfig('ui', 'allowemptycommit')
971 allowemptyback = repo.ui.backupconfig('ui', 'allowemptycommit')
972 extra = {}
972 extra = {}
973 for idfunc in extrapreimport:
973 for idfunc in extrapreimport:
974 extrapreimportmap[idfunc](repo, extractdata, extra, opts)
974 extrapreimportmap[idfunc](repo, extractdata, extra, opts)
975 try:
975 try:
976 if partial:
976 if partial:
977 repo.ui.setconfig('ui', 'allowemptycommit', True)
977 repo.ui.setconfig('ui', 'allowemptycommit', True)
978 n = repo.commit(message, opts.get('user') or user,
978 n = repo.commit(message, opts.get('user') or user,
979 opts.get('date') or date, match=m,
979 opts.get('date') or date, match=m,
980 editor=editor, extra=extra)
980 editor=editor, extra=extra)
981 for idfunc in extrapostimport:
981 for idfunc in extrapostimport:
982 extrapostimportmap[idfunc](repo[n])
982 extrapostimportmap[idfunc](repo[n])
983 finally:
983 finally:
984 repo.ui.restoreconfig(allowemptyback)
984 repo.ui.restoreconfig(allowemptyback)
985 else:
985 else:
986 if opts.get('exact') or opts.get('import_branch'):
986 if opts.get('exact') or opts.get('import_branch'):
987 branch = branch or 'default'
987 branch = branch or 'default'
988 else:
988 else:
989 branch = p1.branch()
989 branch = p1.branch()
990 store = patch.filestore()
990 store = patch.filestore()
991 try:
991 try:
992 files = set()
992 files = set()
993 try:
993 try:
994 patch.patchrepo(ui, repo, p1, store, tmpname, strip, prefix,
994 patch.patchrepo(ui, repo, p1, store, tmpname, strip, prefix,
995 files, eolmode=None)
995 files, eolmode=None)
996 except patch.PatchError as e:
996 except patch.PatchError as e:
997 raise error.Abort(str(e))
997 raise error.Abort(str(e))
998 if opts.get('exact'):
998 if opts.get('exact'):
999 editor = None
999 editor = None
1000 else:
1000 else:
1001 editor = getcommiteditor(editform='import.bypass')
1001 editor = getcommiteditor(editform='import.bypass')
1002 memctx = context.makememctx(repo, (p1.node(), p2.node()),
1002 memctx = context.makememctx(repo, (p1.node(), p2.node()),
1003 message,
1003 message,
1004 opts.get('user') or user,
1004 opts.get('user') or user,
1005 opts.get('date') or date,
1005 opts.get('date') or date,
1006 branch, files, store,
1006 branch, files, store,
1007 editor=editor)
1007 editor=editor)
1008 n = memctx.commit()
1008 n = memctx.commit()
1009 finally:
1009 finally:
1010 store.close()
1010 store.close()
1011 if opts.get('exact') and opts.get('no_commit'):
1011 if opts.get('exact') and opts.get('no_commit'):
1012 # --exact with --no-commit is still useful in that it does merge
1012 # --exact with --no-commit is still useful in that it does merge
1013 # and branch bits
1013 # and branch bits
1014 ui.warn(_("warning: can't check exact import with --no-commit\n"))
1014 ui.warn(_("warning: can't check exact import with --no-commit\n"))
1015 elif opts.get('exact') and hex(n) != nodeid:
1015 elif opts.get('exact') and hex(n) != nodeid:
1016 raise error.Abort(_('patch is damaged or loses information'))
1016 raise error.Abort(_('patch is damaged or loses information'))
1017 if n:
1017 if n:
1018 # i18n: refers to a short changeset id
1018 # i18n: refers to a short changeset id
1019 msg = _('created %s') % short(n)
1019 msg = _('created %s') % short(n)
1020 return (msg, n, rejects)
1020 return (msg, n, rejects)
1021 finally:
1021 finally:
1022 os.unlink(tmpname)
1022 os.unlink(tmpname)
1023
1023
1024 # facility to let extensions include additional data in an exported patch
1024 # facility to let extensions include additional data in an exported patch
1025 # list of identifiers to be executed in order
1025 # list of identifiers to be executed in order
1026 extraexport = []
1026 extraexport = []
1027 # mapping from identifier to actual export function
1028 # the function has to return a string to be added to the header, or None
1029 # it is given two arguments (sequencenumber, changectx)
1030 extraexportmap = {}
1030 extraexportmap = {}
1031
1031
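# Editor's sketch of an export header hook matching the contract above
# (the identifier 'myext' is hypothetical):
def _example_exportheader(seqno, ctx):
    # returning None suppresses the extra header for this changeset
    return 'Exported-Seqno %d' % seqno
# registration mirrors the import hooks:
#   extraexport.append('myext')
#   extraexportmap['myext'] = _example_exportheader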
1032 def export(repo, revs, template='hg-%h.patch', fp=None, switch_parent=False,
1032 def export(repo, revs, template='hg-%h.patch', fp=None, switch_parent=False,
1033 opts=None, match=None):
1033 opts=None, match=None):
1034 '''export changesets as hg patches.'''
1034 '''export changesets as hg patches.'''
1035
1035
1036 total = len(revs)
1036 total = len(revs)
1037 revwidth = max([len(str(rev)) for rev in revs])
1037 revwidth = max([len(str(rev)) for rev in revs])
1038 filemode = {}
1038 filemode = {}
1039
1039
1040 def single(rev, seqno, fp):
1040 def single(rev, seqno, fp):
1041 ctx = repo[rev]
1041 ctx = repo[rev]
1042 node = ctx.node()
1042 node = ctx.node()
1043 parents = [p.node() for p in ctx.parents() if p]
1043 parents = [p.node() for p in ctx.parents() if p]
1044 branch = ctx.branch()
1044 branch = ctx.branch()
1045 if switch_parent:
1045 if switch_parent:
1046 parents.reverse()
1046 parents.reverse()
1047
1047
1048 if parents:
1048 if parents:
1049 prev = parents[0]
1049 prev = parents[0]
1050 else:
1050 else:
1051 prev = nullid
1051 prev = nullid
1052
1052
1053 shouldclose = False
1053 shouldclose = False
1054 if not fp and len(template) > 0:
1054 if not fp and len(template) > 0:
1055 desc_lines = ctx.description().rstrip().split('\n')
1055 desc_lines = ctx.description().rstrip().split('\n')
1056            desc = desc_lines[0]  # Commit always has a first line.
1057 fp = makefileobj(repo, template, node, desc=desc, total=total,
1057 fp = makefileobj(repo, template, node, desc=desc, total=total,
1058 seqno=seqno, revwidth=revwidth, mode='wb',
1058 seqno=seqno, revwidth=revwidth, mode='wb',
1059 modemap=filemode)
1059 modemap=filemode)
1060 if fp != template:
1060 if fp != template:
1061 shouldclose = True
1061 shouldclose = True
1062 if fp and fp != sys.stdout and util.safehasattr(fp, 'name'):
1062 if fp and fp != sys.stdout and util.safehasattr(fp, 'name'):
1063 repo.ui.note("%s\n" % fp.name)
1063 repo.ui.note("%s\n" % fp.name)
1064
1064
1065 if not fp:
1065 if not fp:
1066 write = repo.ui.write
1066 write = repo.ui.write
1067 else:
1067 else:
1068 def write(s, **kw):
1068 def write(s, **kw):
1069 fp.write(s)
1069 fp.write(s)
1070
1070
1071 write("# HG changeset patch\n")
1071 write("# HG changeset patch\n")
1072 write("# User %s\n" % ctx.user())
1072 write("# User %s\n" % ctx.user())
1073 write("# Date %d %d\n" % ctx.date())
1073 write("# Date %d %d\n" % ctx.date())
1074 write("# %s\n" % util.datestr(ctx.date()))
1074 write("# %s\n" % util.datestr(ctx.date()))
1075 if branch and branch != 'default':
1075 if branch and branch != 'default':
1076 write("# Branch %s\n" % branch)
1076 write("# Branch %s\n" % branch)
1077 write("# Node ID %s\n" % hex(node))
1077 write("# Node ID %s\n" % hex(node))
1078 write("# Parent %s\n" % hex(prev))
1078 write("# Parent %s\n" % hex(prev))
1079 if len(parents) > 1:
1079 if len(parents) > 1:
1080 write("# Parent %s\n" % hex(parents[1]))
1080 write("# Parent %s\n" % hex(parents[1]))
1081
1081
1082 for headerid in extraexport:
1082 for headerid in extraexport:
1083 header = extraexportmap[headerid](seqno, ctx)
1083 header = extraexportmap[headerid](seqno, ctx)
1084 if header is not None:
1084 if header is not None:
1085 write('# %s\n' % header)
1085 write('# %s\n' % header)
1086 write(ctx.description().rstrip())
1086 write(ctx.description().rstrip())
1087 write("\n\n")
1087 write("\n\n")
1088
1088
1089 for chunk, label in patch.diffui(repo, prev, node, match, opts=opts):
1089 for chunk, label in patch.diffui(repo, prev, node, match, opts=opts):
1090 write(chunk, label=label)
1090 write(chunk, label=label)
1091
1091
1092 if shouldclose:
1092 if shouldclose:
1093 fp.close()
1093 fp.close()
1094
1094
1095 for seqno, rev in enumerate(revs):
1095 for seqno, rev in enumerate(revs):
1096 single(rev, seqno + 1, fp)
1096 single(rev, seqno + 1, fp)
1097
1097
1098 def diffordiffstat(ui, repo, diffopts, node1, node2, match,
1098 def diffordiffstat(ui, repo, diffopts, node1, node2, match,
1099 changes=None, stat=False, fp=None, prefix='',
1099 changes=None, stat=False, fp=None, prefix='',
1100 root='', listsubrepos=False):
1100 root='', listsubrepos=False):
1101 '''show diff or diffstat.'''
1101 '''show diff or diffstat.'''
1102 if fp is None:
1102 if fp is None:
1103 write = ui.write
1103 write = ui.write
1104 else:
1104 else:
1105 def write(s, **kw):
1105 def write(s, **kw):
1106 fp.write(s)
1106 fp.write(s)
1107
1107
1108 if root:
1108 if root:
1109 relroot = pathutil.canonpath(repo.root, repo.getcwd(), root)
1109 relroot = pathutil.canonpath(repo.root, repo.getcwd(), root)
1110 else:
1110 else:
1111 relroot = ''
1111 relroot = ''
1112 if relroot != '':
1112 if relroot != '':
1113 # XXX relative roots currently don't work if the root is within a
1113 # XXX relative roots currently don't work if the root is within a
1114 # subrepo
1114 # subrepo
1115 uirelroot = match.uipath(relroot)
1115 uirelroot = match.uipath(relroot)
1116 relroot += '/'
1116 relroot += '/'
1117 for matchroot in match.files():
1117 for matchroot in match.files():
1118 if not matchroot.startswith(relroot):
1118 if not matchroot.startswith(relroot):
1119 ui.warn(_('warning: %s not inside relative root %s\n') % (
1119 ui.warn(_('warning: %s not inside relative root %s\n') % (
1120 match.uipath(matchroot), uirelroot))
1120 match.uipath(matchroot), uirelroot))
1121
1121
1122 if stat:
1122 if stat:
1123 diffopts = diffopts.copy(context=0)
1123 diffopts = diffopts.copy(context=0)
1124 width = 80
1124 width = 80
1125 if not ui.plain():
1125 if not ui.plain():
1126 width = ui.termwidth()
1126 width = ui.termwidth()
1127 chunks = patch.diff(repo, node1, node2, match, changes, diffopts,
1127 chunks = patch.diff(repo, node1, node2, match, changes, diffopts,
1128 prefix=prefix, relroot=relroot)
1128 prefix=prefix, relroot=relroot)
1129 for chunk, label in patch.diffstatui(util.iterlines(chunks),
1129 for chunk, label in patch.diffstatui(util.iterlines(chunks),
1130 width=width,
1130 width=width,
1131 git=diffopts.git):
1131 git=diffopts.git):
1132 write(chunk, label=label)
1132 write(chunk, label=label)
1133 else:
1133 else:
1134 for chunk, label in patch.diffui(repo, node1, node2, match,
1134 for chunk, label in patch.diffui(repo, node1, node2, match,
1135 changes, diffopts, prefix=prefix,
1135 changes, diffopts, prefix=prefix,
1136 relroot=relroot):
1136 relroot=relroot):
1137 write(chunk, label=label)
1137 write(chunk, label=label)
1138
1138
1139 if listsubrepos:
1139 if listsubrepos:
1140 ctx1 = repo[node1]
1140 ctx1 = repo[node1]
1141 ctx2 = repo[node2]
1141 ctx2 = repo[node2]
1142 for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
1142 for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
1143 tempnode2 = node2
1143 tempnode2 = node2
1144 try:
1144 try:
1145 if node2 is not None:
1145 if node2 is not None:
1146 tempnode2 = ctx2.substate[subpath][1]
1146 tempnode2 = ctx2.substate[subpath][1]
1147 except KeyError:
1147 except KeyError:
1148 # A subrepo that existed in node1 was deleted between node1 and
1148 # A subrepo that existed in node1 was deleted between node1 and
1149 # node2 (inclusive). Thus, ctx2's substate won't contain that
1149 # node2 (inclusive). Thus, ctx2's substate won't contain that
1150 # subpath. The best we can do is to ignore it.
1150 # subpath. The best we can do is to ignore it.
1151 tempnode2 = None
1151 tempnode2 = None
1152 submatch = matchmod.narrowmatcher(subpath, match)
1152 submatch = matchmod.narrowmatcher(subpath, match)
1153 sub.diff(ui, diffopts, tempnode2, submatch, changes=changes,
1153 sub.diff(ui, diffopts, tempnode2, submatch, changes=changes,
1154 stat=stat, fp=fp, prefix=prefix)
1154 stat=stat, fp=fp, prefix=prefix)
1155
1155
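# Illustrative sketch (not part of the original module): a caller holding a
# ``ui``/``repo`` pair -- for example inside a command function -- could render
# either form with helpers that already appear in this module:
#
#     diffopts = patch.diffallopts(ui, {})
#     m = scmutil.matchall(repo)
#     # full patch of the working directory against '.':
#     diffordiffstat(ui, repo, diffopts, repo['.'].node(), None, m, stat=False)
#     # or the condensed per-file summary instead:
#     diffordiffstat(ui, repo, diffopts, repo['.'].node(), None, m, stat=True)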
1156 class changeset_printer(object):
1156 class changeset_printer(object):
1157 '''show changeset information when templating not requested.'''
1157 '''show changeset information when templating not requested.'''
1158
1158
1159 def __init__(self, ui, repo, matchfn, diffopts, buffered):
1159 def __init__(self, ui, repo, matchfn, diffopts, buffered):
1160 self.ui = ui
1160 self.ui = ui
1161 self.repo = repo
1161 self.repo = repo
1162 self.buffered = buffered
1162 self.buffered = buffered
1163 self.matchfn = matchfn
1163 self.matchfn = matchfn
1164 self.diffopts = diffopts
1164 self.diffopts = diffopts
1165 self.header = {}
1165 self.header = {}
1166 self.hunk = {}
1166 self.hunk = {}
1167 self.lastheader = None
1167 self.lastheader = None
1168 self.footer = None
1168 self.footer = None
1169
1169
1170 def flush(self, ctx):
1170 def flush(self, ctx):
1171 rev = ctx.rev()
1171 rev = ctx.rev()
1172 if rev in self.header:
1172 if rev in self.header:
1173 h = self.header[rev]
1173 h = self.header[rev]
1174 if h != self.lastheader:
1174 if h != self.lastheader:
1175 self.lastheader = h
1175 self.lastheader = h
1176 self.ui.write(h)
1176 self.ui.write(h)
1177 del self.header[rev]
1177 del self.header[rev]
1178 if rev in self.hunk:
1178 if rev in self.hunk:
1179 self.ui.write(self.hunk[rev])
1179 self.ui.write(self.hunk[rev])
1180 del self.hunk[rev]
1180 del self.hunk[rev]
1181 return 1
1181 return 1
1182 return 0
1182 return 0
1183
1183
1184 def close(self):
1184 def close(self):
1185 if self.footer:
1185 if self.footer:
1186 self.ui.write(self.footer)
1186 self.ui.write(self.footer)
1187
1187
1188 def show(self, ctx, copies=None, matchfn=None, **props):
1188 def show(self, ctx, copies=None, matchfn=None, **props):
1189 if self.buffered:
1189 if self.buffered:
1190 self.ui.pushbuffer(labeled=True)
1190 self.ui.pushbuffer(labeled=True)
1191 self._show(ctx, copies, matchfn, props)
1191 self._show(ctx, copies, matchfn, props)
1192 self.hunk[ctx.rev()] = self.ui.popbuffer()
1192 self.hunk[ctx.rev()] = self.ui.popbuffer()
1193 else:
1193 else:
1194 self._show(ctx, copies, matchfn, props)
1194 self._show(ctx, copies, matchfn, props)
1195
1195
1196 def _show(self, ctx, copies, matchfn, props):
1196 def _show(self, ctx, copies, matchfn, props):
1197 '''show a single changeset or file revision'''
1197 '''show a single changeset or file revision'''
1198 changenode = ctx.node()
1198 changenode = ctx.node()
1199 rev = ctx.rev()
1199 rev = ctx.rev()
1200 if self.ui.debugflag:
1200 if self.ui.debugflag:
1201 hexfunc = hex
1201 hexfunc = hex
1202 else:
1202 else:
1203 hexfunc = short
1203 hexfunc = short
1204 # as of now, wctx.node() and wctx.rev() return None, but we want to
1204 # as of now, wctx.node() and wctx.rev() return None, but we want to
1205 # show the same values as {node} and {rev} templatekw
1205 # show the same values as {node} and {rev} templatekw
1206 revnode = (scmutil.intrev(rev), hexfunc(bin(ctx.hex())))
1206 revnode = (scmutil.intrev(rev), hexfunc(bin(ctx.hex())))
1207
1207
1208 if self.ui.quiet:
1208 if self.ui.quiet:
1209 self.ui.write("%d:%s\n" % revnode, label='log.node')
1209 self.ui.write("%d:%s\n" % revnode, label='log.node')
1210 return
1210 return
1211
1211
1212 date = util.datestr(ctx.date())
1212 date = util.datestr(ctx.date())
1213
1213
1214 # i18n: column positioning for "hg log"
1214 # i18n: column positioning for "hg log"
1215 self.ui.write(_("changeset: %d:%s\n") % revnode,
1215 self.ui.write(_("changeset: %d:%s\n") % revnode,
1216 label='log.changeset changeset.%s' % ctx.phasestr())
1216 label='log.changeset changeset.%s' % ctx.phasestr())
1217
1217
1218 # branches are shown before any other names for backwards
1219 # compatibility
1220 branch = ctx.branch()
1220 branch = ctx.branch()
1221 # don't show the default branch name
1221 # don't show the default branch name
1222 if branch != 'default':
1222 if branch != 'default':
1223 # i18n: column positioning for "hg log"
1223 # i18n: column positioning for "hg log"
1224 self.ui.write(_("branch: %s\n") % branch,
1224 self.ui.write(_("branch: %s\n") % branch,
1225 label='log.branch')
1225 label='log.branch')
1226
1226
1227 for name, ns in self.repo.names.iteritems():
1227 for name, ns in self.repo.names.iteritems():
1228 # branches has special logic already handled above, so here we just
1228 # branches has special logic already handled above, so here we just
1229 # skip it
1229 # skip it
1230 if name == 'branches':
1230 if name == 'branches':
1231 continue
1231 continue
1232 # we will use the templatename as the color name since those two
1232 # we will use the templatename as the color name since those two
1233 # should be the same
1233 # should be the same
1234 for name in ns.names(self.repo, changenode):
1234 for name in ns.names(self.repo, changenode):
1235 self.ui.write(ns.logfmt % name,
1235 self.ui.write(ns.logfmt % name,
1236 label='log.%s' % ns.colorname)
1236 label='log.%s' % ns.colorname)
1237 if self.ui.debugflag:
1237 if self.ui.debugflag:
1238 # i18n: column positioning for "hg log"
1238 # i18n: column positioning for "hg log"
1239 self.ui.write(_("phase: %s\n") % ctx.phasestr(),
1239 self.ui.write(_("phase: %s\n") % ctx.phasestr(),
1240 label='log.phase')
1240 label='log.phase')
1241 for pctx in scmutil.meaningfulparents(self.repo, ctx):
1241 for pctx in scmutil.meaningfulparents(self.repo, ctx):
1242 label = 'log.parent changeset.%s' % pctx.phasestr()
1242 label = 'log.parent changeset.%s' % pctx.phasestr()
1243 # i18n: column positioning for "hg log"
1243 # i18n: column positioning for "hg log"
1244 self.ui.write(_("parent: %d:%s\n")
1244 self.ui.write(_("parent: %d:%s\n")
1245 % (pctx.rev(), hexfunc(pctx.node())),
1245 % (pctx.rev(), hexfunc(pctx.node())),
1246 label=label)
1246 label=label)
1247
1247
1248 if self.ui.debugflag and rev is not None:
1248 if self.ui.debugflag and rev is not None:
1249 mnode = ctx.manifestnode()
1249 mnode = ctx.manifestnode()
1250 # i18n: column positioning for "hg log"
1250 # i18n: column positioning for "hg log"
1251 self.ui.write(_("manifest: %d:%s\n") %
1251 self.ui.write(_("manifest: %d:%s\n") %
1252 (self.repo.manifest.rev(mnode), hex(mnode)),
1252 (self.repo.manifest.rev(mnode), hex(mnode)),
1253 label='ui.debug log.manifest')
1253 label='ui.debug log.manifest')
1254 # i18n: column positioning for "hg log"
1254 # i18n: column positioning for "hg log"
1255 self.ui.write(_("user: %s\n") % ctx.user(),
1255 self.ui.write(_("user: %s\n") % ctx.user(),
1256 label='log.user')
1256 label='log.user')
1257 # i18n: column positioning for "hg log"
1257 # i18n: column positioning for "hg log"
1258 self.ui.write(_("date: %s\n") % date,
1258 self.ui.write(_("date: %s\n") % date,
1259 label='log.date')
1259 label='log.date')
1260
1260
1261 if self.ui.debugflag:
1261 if self.ui.debugflag:
1262 files = ctx.p1().status(ctx)[:3]
1262 files = ctx.p1().status(ctx)[:3]
1263 for key, value in zip([# i18n: column positioning for "hg log"
1263 for key, value in zip([# i18n: column positioning for "hg log"
1264 _("files:"),
1264 _("files:"),
1265 # i18n: column positioning for "hg log"
1265 # i18n: column positioning for "hg log"
1266 _("files+:"),
1266 _("files+:"),
1267 # i18n: column positioning for "hg log"
1267 # i18n: column positioning for "hg log"
1268 _("files-:")], files):
1268 _("files-:")], files):
1269 if value:
1269 if value:
1270 self.ui.write("%-12s %s\n" % (key, " ".join(value)),
1270 self.ui.write("%-12s %s\n" % (key, " ".join(value)),
1271 label='ui.debug log.files')
1271 label='ui.debug log.files')
1272 elif ctx.files() and self.ui.verbose:
1272 elif ctx.files() and self.ui.verbose:
1273 # i18n: column positioning for "hg log"
1273 # i18n: column positioning for "hg log"
1274 self.ui.write(_("files: %s\n") % " ".join(ctx.files()),
1274 self.ui.write(_("files: %s\n") % " ".join(ctx.files()),
1275 label='ui.note log.files')
1275 label='ui.note log.files')
1276 if copies and self.ui.verbose:
1276 if copies and self.ui.verbose:
1277 copies = ['%s (%s)' % c for c in copies]
1277 copies = ['%s (%s)' % c for c in copies]
1278 # i18n: column positioning for "hg log"
1278 # i18n: column positioning for "hg log"
1279 self.ui.write(_("copies: %s\n") % ' '.join(copies),
1279 self.ui.write(_("copies: %s\n") % ' '.join(copies),
1280 label='ui.note log.copies')
1280 label='ui.note log.copies')
1281
1281
1282 extra = ctx.extra()
1282 extra = ctx.extra()
1283 if extra and self.ui.debugflag:
1283 if extra and self.ui.debugflag:
1284 for key, value in sorted(extra.items()):
1284 for key, value in sorted(extra.items()):
1285 # i18n: column positioning for "hg log"
1285 # i18n: column positioning for "hg log"
1286 self.ui.write(_("extra: %s=%s\n")
1286 self.ui.write(_("extra: %s=%s\n")
1287 % (key, value.encode('string_escape')),
1287 % (key, value.encode('string_escape')),
1288 label='ui.debug log.extra')
1288 label='ui.debug log.extra')
1289
1289
1290 description = ctx.description().strip()
1290 description = ctx.description().strip()
1291 if description:
1291 if description:
1292 if self.ui.verbose:
1292 if self.ui.verbose:
1293 self.ui.write(_("description:\n"),
1293 self.ui.write(_("description:\n"),
1294 label='ui.note log.description')
1294 label='ui.note log.description')
1295 self.ui.write(description,
1295 self.ui.write(description,
1296 label='ui.note log.description')
1296 label='ui.note log.description')
1297 self.ui.write("\n\n")
1297 self.ui.write("\n\n")
1298 else:
1298 else:
1299 # i18n: column positioning for "hg log"
1299 # i18n: column positioning for "hg log"
1300 self.ui.write(_("summary: %s\n") %
1300 self.ui.write(_("summary: %s\n") %
1301 description.splitlines()[0],
1301 description.splitlines()[0],
1302 label='log.summary')
1302 label='log.summary')
1303 self.ui.write("\n")
1303 self.ui.write("\n")
1304
1304
1305 self.showpatch(ctx, matchfn)
1305 self.showpatch(ctx, matchfn)
1306
1306
1307 def showpatch(self, ctx, matchfn):
1307 def showpatch(self, ctx, matchfn):
1308 if not matchfn:
1308 if not matchfn:
1309 matchfn = self.matchfn
1309 matchfn = self.matchfn
1310 if matchfn:
1310 if matchfn:
1311 stat = self.diffopts.get('stat')
1311 stat = self.diffopts.get('stat')
1312 diff = self.diffopts.get('patch')
1312 diff = self.diffopts.get('patch')
1313 diffopts = patch.diffallopts(self.ui, self.diffopts)
1313 diffopts = patch.diffallopts(self.ui, self.diffopts)
1314 node = ctx.node()
1314 node = ctx.node()
1315 prev = ctx.p1()
1315 prev = ctx.p1()
1316 if stat:
1316 if stat:
1317 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
1317 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
1318 match=matchfn, stat=True)
1318 match=matchfn, stat=True)
1319 if diff:
1319 if diff:
1320 if stat:
1320 if stat:
1321 self.ui.write("\n")
1321 self.ui.write("\n")
1322 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
1322 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
1323 match=matchfn, stat=False)
1323 match=matchfn, stat=False)
1324 self.ui.write("\n")
1324 self.ui.write("\n")
1325
1325
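# Illustrative sketch (not part of the original module): the printer follows
# the usual displayer protocol -- construct it, call show() for each
# changeset, then close().  With buffered=True the output is held per
# revision and emitted later by flush(ctx):
#
#     displayer = changeset_printer(ui, repo, None, {}, False)
#     for ctx in (repo['tip'],):
#         displayer.show(ctx)
#     displayer.close()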
1326 class jsonchangeset(changeset_printer):
1326 class jsonchangeset(changeset_printer):
1327 '''format changeset information.'''
1327 '''format changeset information.'''
1328
1328
1329 def __init__(self, ui, repo, matchfn, diffopts, buffered):
1329 def __init__(self, ui, repo, matchfn, diffopts, buffered):
1330 changeset_printer.__init__(self, ui, repo, matchfn, diffopts, buffered)
1330 changeset_printer.__init__(self, ui, repo, matchfn, diffopts, buffered)
1331 self.cache = {}
1331 self.cache = {}
1332 self._first = True
1332 self._first = True
1333
1333
1334 def close(self):
1334 def close(self):
1335 if not self._first:
1335 if not self._first:
1336 self.ui.write("\n]\n")
1336 self.ui.write("\n]\n")
1337 else:
1337 else:
1338 self.ui.write("[]\n")
1338 self.ui.write("[]\n")
1339
1339
1340 def _show(self, ctx, copies, matchfn, props):
1340 def _show(self, ctx, copies, matchfn, props):
1341 '''show a single changeset or file revision'''
1341 '''show a single changeset or file revision'''
1342 rev = ctx.rev()
1342 rev = ctx.rev()
1343 if rev is None:
1343 if rev is None:
1344 jrev = jnode = 'null'
1344 jrev = jnode = 'null'
1345 else:
1345 else:
1346 jrev = str(rev)
1346 jrev = str(rev)
1347 jnode = '"%s"' % hex(ctx.node())
1347 jnode = '"%s"' % hex(ctx.node())
1348 j = encoding.jsonescape
1348 j = encoding.jsonescape
1349
1349
1350 if self._first:
1350 if self._first:
1351 self.ui.write("[\n {")
1351 self.ui.write("[\n {")
1352 self._first = False
1352 self._first = False
1353 else:
1353 else:
1354 self.ui.write(",\n {")
1354 self.ui.write(",\n {")
1355
1355
1356 if self.ui.quiet:
1356 if self.ui.quiet:
1357 self.ui.write('\n "rev": %s' % jrev)
1357 self.ui.write('\n "rev": %s' % jrev)
1358 self.ui.write(',\n "node": %s' % jnode)
1358 self.ui.write(',\n "node": %s' % jnode)
1359 self.ui.write('\n }')
1359 self.ui.write('\n }')
1360 return
1360 return
1361
1361
1362 self.ui.write('\n "rev": %s' % jrev)
1362 self.ui.write('\n "rev": %s' % jrev)
1363 self.ui.write(',\n "node": %s' % jnode)
1363 self.ui.write(',\n "node": %s' % jnode)
1364 self.ui.write(',\n "branch": "%s"' % j(ctx.branch()))
1364 self.ui.write(',\n "branch": "%s"' % j(ctx.branch()))
1365 self.ui.write(',\n "phase": "%s"' % ctx.phasestr())
1365 self.ui.write(',\n "phase": "%s"' % ctx.phasestr())
1366 self.ui.write(',\n "user": "%s"' % j(ctx.user()))
1366 self.ui.write(',\n "user": "%s"' % j(ctx.user()))
1367 self.ui.write(',\n "date": [%d, %d]' % ctx.date())
1367 self.ui.write(',\n "date": [%d, %d]' % ctx.date())
1368 self.ui.write(',\n "desc": "%s"' % j(ctx.description()))
1368 self.ui.write(',\n "desc": "%s"' % j(ctx.description()))
1369
1369
1370 self.ui.write(',\n "bookmarks": [%s]' %
1370 self.ui.write(',\n "bookmarks": [%s]' %
1371 ", ".join('"%s"' % j(b) for b in ctx.bookmarks()))
1371 ", ".join('"%s"' % j(b) for b in ctx.bookmarks()))
1372 self.ui.write(',\n "tags": [%s]' %
1372 self.ui.write(',\n "tags": [%s]' %
1373 ", ".join('"%s"' % j(t) for t in ctx.tags()))
1373 ", ".join('"%s"' % j(t) for t in ctx.tags()))
1374 self.ui.write(',\n "parents": [%s]' %
1374 self.ui.write(',\n "parents": [%s]' %
1375 ", ".join('"%s"' % c.hex() for c in ctx.parents()))
1375 ", ".join('"%s"' % c.hex() for c in ctx.parents()))
1376
1376
1377 if self.ui.debugflag:
1377 if self.ui.debugflag:
1378 if rev is None:
1378 if rev is None:
1379 jmanifestnode = 'null'
1379 jmanifestnode = 'null'
1380 else:
1380 else:
1381 jmanifestnode = '"%s"' % hex(ctx.manifestnode())
1381 jmanifestnode = '"%s"' % hex(ctx.manifestnode())
1382 self.ui.write(',\n "manifest": %s' % jmanifestnode)
1382 self.ui.write(',\n "manifest": %s' % jmanifestnode)
1383
1383
1384 self.ui.write(',\n "extra": {%s}' %
1384 self.ui.write(',\n "extra": {%s}' %
1385 ", ".join('"%s": "%s"' % (j(k), j(v))
1385 ", ".join('"%s": "%s"' % (j(k), j(v))
1386 for k, v in ctx.extra().items()))
1386 for k, v in ctx.extra().items()))
1387
1387
1388 files = ctx.p1().status(ctx)
1388 files = ctx.p1().status(ctx)
1389 self.ui.write(',\n "modified": [%s]' %
1389 self.ui.write(',\n "modified": [%s]' %
1390 ", ".join('"%s"' % j(f) for f in files[0]))
1390 ", ".join('"%s"' % j(f) for f in files[0]))
1391 self.ui.write(',\n "added": [%s]' %
1391 self.ui.write(',\n "added": [%s]' %
1392 ", ".join('"%s"' % j(f) for f in files[1]))
1392 ", ".join('"%s"' % j(f) for f in files[1]))
1393 self.ui.write(',\n "removed": [%s]' %
1393 self.ui.write(',\n "removed": [%s]' %
1394 ", ".join('"%s"' % j(f) for f in files[2]))
1394 ", ".join('"%s"' % j(f) for f in files[2]))
1395
1395
1396 elif self.ui.verbose:
1396 elif self.ui.verbose:
1397 self.ui.write(',\n "files": [%s]' %
1397 self.ui.write(',\n "files": [%s]' %
1398 ", ".join('"%s"' % j(f) for f in ctx.files()))
1398 ", ".join('"%s"' % j(f) for f in ctx.files()))
1399
1399
1400 if copies:
1400 if copies:
1401 self.ui.write(',\n "copies": {%s}' %
1401 self.ui.write(',\n "copies": {%s}' %
1402 ", ".join('"%s": "%s"' % (j(k), j(v))
1402 ", ".join('"%s": "%s"' % (j(k), j(v))
1403 for k, v in copies))
1403 for k, v in copies))
1404
1404
1405 matchfn = self.matchfn
1405 matchfn = self.matchfn
1406 if matchfn:
1406 if matchfn:
1407 stat = self.diffopts.get('stat')
1407 stat = self.diffopts.get('stat')
1408 diff = self.diffopts.get('patch')
1408 diff = self.diffopts.get('patch')
1409 diffopts = patch.difffeatureopts(self.ui, self.diffopts, git=True)
1409 diffopts = patch.difffeatureopts(self.ui, self.diffopts, git=True)
1410 node, prev = ctx.node(), ctx.p1().node()
1410 node, prev = ctx.node(), ctx.p1().node()
1411 if stat:
1411 if stat:
1412 self.ui.pushbuffer()
1412 self.ui.pushbuffer()
1413 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
1413 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
1414 match=matchfn, stat=True)
1414 match=matchfn, stat=True)
1415 self.ui.write(',\n "diffstat": "%s"' % j(self.ui.popbuffer()))
1415 self.ui.write(',\n "diffstat": "%s"' % j(self.ui.popbuffer()))
1416 if diff:
1416 if diff:
1417 self.ui.pushbuffer()
1417 self.ui.pushbuffer()
1418 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
1418 diffordiffstat(self.ui, self.repo, diffopts, prev, node,
1419 match=matchfn, stat=False)
1419 match=matchfn, stat=False)
1420 self.ui.write(',\n "diff": "%s"' % j(self.ui.popbuffer()))
1420 self.ui.write(',\n "diff": "%s"' % j(self.ui.popbuffer()))
1421
1421
1422 self.ui.write("\n }")
1422 self.ui.write("\n }")
1423
1423
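# Illustrative note (not part of the original module): the hand-built JSON
# above emits one object per revision inside a single list; a quiet run over
# one changeset produces roughly this shape (node shortened here):
#
#     [
#      {
#       "rev": 0,
#       "node": "<40-digit node hash>"
#      }
#     ]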
1424 class changeset_templater(changeset_printer):
1424 class changeset_templater(changeset_printer):
1425 '''format changeset information.'''
1425 '''format changeset information.'''
1426
1426
1427 def __init__(self, ui, repo, matchfn, diffopts, tmpl, mapfile, buffered):
1427 def __init__(self, ui, repo, matchfn, diffopts, tmpl, mapfile, buffered):
1428 changeset_printer.__init__(self, ui, repo, matchfn, diffopts, buffered)
1428 changeset_printer.__init__(self, ui, repo, matchfn, diffopts, buffered)
1429 formatnode = ui.debugflag and (lambda x: x) or (lambda x: x[:12])
1429 formatnode = ui.debugflag and (lambda x: x) or (lambda x: x[:12])
1430 defaulttempl = {
1430 defaulttempl = {
1431 'parent': '{rev}:{node|formatnode} ',
1431 'parent': '{rev}:{node|formatnode} ',
1432 'manifest': '{rev}:{node|formatnode}',
1432 'manifest': '{rev}:{node|formatnode}',
1433 'file_copy': '{name} ({source})',
1433 'file_copy': '{name} ({source})',
1434 'extra': '{key}={value|stringescape}'
1434 'extra': '{key}={value|stringescape}'
1435 }
1435 }
1436 # filecopy is preserved for compatibility reasons
1436 # filecopy is preserved for compatibility reasons
1437 defaulttempl['filecopy'] = defaulttempl['file_copy']
1437 defaulttempl['filecopy'] = defaulttempl['file_copy']
1438 self.t = templater.templater(mapfile, {'formatnode': formatnode},
1438 self.t = templater.templater(mapfile, {'formatnode': formatnode},
1439 cache=defaulttempl)
1439 cache=defaulttempl)
1440 if tmpl:
1440 if tmpl:
1441 self.t.cache['changeset'] = tmpl
1441 self.t.cache['changeset'] = tmpl
1442
1442
1443 self.cache = {}
1443 self.cache = {}
1444
1444
1445 # find correct templates for current mode
1445 # find correct templates for current mode
1446 tmplmodes = [
1446 tmplmodes = [
1447 (True, None),
1447 (True, None),
1448 (self.ui.verbose, 'verbose'),
1448 (self.ui.verbose, 'verbose'),
1449 (self.ui.quiet, 'quiet'),
1449 (self.ui.quiet, 'quiet'),
1450 (self.ui.debugflag, 'debug'),
1450 (self.ui.debugflag, 'debug'),
1451 ]
1451 ]
1452
1452
1453 self._parts = {'header': '', 'footer': '', 'changeset': 'changeset',
1453 self._parts = {'header': '', 'footer': '', 'changeset': 'changeset',
1454 'docheader': '', 'docfooter': ''}
1454 'docheader': '', 'docfooter': ''}
1455 for mode, postfix in tmplmodes:
1455 for mode, postfix in tmplmodes:
1456 for t in self._parts:
1456 for t in self._parts:
1457 cur = t
1457 cur = t
1458 if postfix:
1458 if postfix:
1459 cur += "_" + postfix
1459 cur += "_" + postfix
1460 if mode and cur in self.t:
1460 if mode and cur in self.t:
1461 self._parts[t] = cur
1461 self._parts[t] = cur
1462
1462
1463 if self._parts['docheader']:
1463 if self._parts['docheader']:
1464 self.ui.write(templater.stringify(self.t(self._parts['docheader'])))
1464 self.ui.write(templater.stringify(self.t(self._parts['docheader'])))
1465
1465
1466 def close(self):
1466 def close(self):
1467 if self._parts['docfooter']:
1467 if self._parts['docfooter']:
1468 if not self.footer:
1468 if not self.footer:
1469 self.footer = ""
1469 self.footer = ""
1470 self.footer += templater.stringify(self.t(self._parts['docfooter']))
1470 self.footer += templater.stringify(self.t(self._parts['docfooter']))
1471 return super(changeset_templater, self).close()
1471 return super(changeset_templater, self).close()
1472
1472
1473 def _show(self, ctx, copies, matchfn, props):
1473 def _show(self, ctx, copies, matchfn, props):
1474 '''show a single changeset or file revision'''
1474 '''show a single changeset or file revision'''
1475 props = props.copy()
1475 props = props.copy()
1476 props.update(templatekw.keywords)
1476 props.update(templatekw.keywords)
1477 props['templ'] = self.t
1477 props['templ'] = self.t
1478 props['ctx'] = ctx
1478 props['ctx'] = ctx
1479 props['repo'] = self.repo
1479 props['repo'] = self.repo
1480 props['revcache'] = {'copies': copies}
1480 props['revcache'] = {'copies': copies}
1481 props['cache'] = self.cache
1481 props['cache'] = self.cache
1482
1482
1483 try:
1483 try:
1484 # write header
1484 # write header
1485 if self._parts['header']:
1485 if self._parts['header']:
1486 h = templater.stringify(self.t(self._parts['header'], **props))
1486 h = templater.stringify(self.t(self._parts['header'], **props))
1487 if self.buffered:
1487 if self.buffered:
1488 self.header[ctx.rev()] = h
1488 self.header[ctx.rev()] = h
1489 else:
1489 else:
1490 if self.lastheader != h:
1490 if self.lastheader != h:
1491 self.lastheader = h
1491 self.lastheader = h
1492 self.ui.write(h)
1492 self.ui.write(h)
1493
1493
1494 # write changeset metadata, then patch if requested
1494 # write changeset metadata, then patch if requested
1495 key = self._parts['changeset']
1495 key = self._parts['changeset']
1496 self.ui.write(templater.stringify(self.t(key, **props)))
1496 self.ui.write(templater.stringify(self.t(key, **props)))
1497 self.showpatch(ctx, matchfn)
1497 self.showpatch(ctx, matchfn)
1498
1498
1499 if self._parts['footer']:
1499 if self._parts['footer']:
1500 if not self.footer:
1500 if not self.footer:
1501 self.footer = templater.stringify(
1501 self.footer = templater.stringify(
1502 self.t(self._parts['footer'], **props))
1502 self.t(self._parts['footer'], **props))
1503 except KeyError as inst:
1503 except KeyError as inst:
1504 msg = _("%s: no key named '%s'")
1504 msg = _("%s: no key named '%s'")
1505 raise error.Abort(msg % (self.t.mapfile, inst.args[0]))
1505 raise error.Abort(msg % (self.t.mapfile, inst.args[0]))
1506 except SyntaxError as inst:
1506 except SyntaxError as inst:
1507 raise error.Abort('%s: %s' % (self.t.mapfile, inst.args[0]))
1507 raise error.Abort('%s: %s' % (self.t.mapfile, inst.args[0]))
1508
1508
1509 def gettemplate(ui, tmpl, style):
1509 def gettemplate(ui, tmpl, style):
1510 """
1510 """
1511 Find the template matching the given template spec or style.
1511 Find the template matching the given template spec or style.
1512 """
1512 """
1513
1513
1514 # ui settings
1514 # ui settings
1515 if not tmpl and not style: # templates are stronger than style
1516 tmpl = ui.config('ui', 'logtemplate')
1516 tmpl = ui.config('ui', 'logtemplate')
1517 if tmpl:
1517 if tmpl:
1518 try:
1518 try:
1519 tmpl = templater.unquotestring(tmpl)
1519 tmpl = templater.unquotestring(tmpl)
1520 except SyntaxError:
1520 except SyntaxError:
1521 pass
1521 pass
1522 return tmpl, None
1522 return tmpl, None
1523 else:
1523 else:
1524 style = util.expandpath(ui.config('ui', 'style', ''))
1524 style = util.expandpath(ui.config('ui', 'style', ''))
1525
1525
1526 if not tmpl and style:
1526 if not tmpl and style:
1527 mapfile = style
1527 mapfile = style
1528 if not os.path.split(mapfile)[0]:
1528 if not os.path.split(mapfile)[0]:
1529 mapname = (templater.templatepath('map-cmdline.' + mapfile)
1529 mapname = (templater.templatepath('map-cmdline.' + mapfile)
1530 or templater.templatepath(mapfile))
1530 or templater.templatepath(mapfile))
1531 if mapname:
1531 if mapname:
1532 mapfile = mapname
1532 mapfile = mapname
1533 return None, mapfile
1533 return None, mapfile
1534
1534
1535 if not tmpl:
1535 if not tmpl:
1536 return None, None
1536 return None, None
1537
1537
1538 return formatter.lookuptemplate(ui, 'changeset', tmpl)
1538 return formatter.lookuptemplate(ui, 'changeset', tmpl)
1539
1539
1540 def show_changeset(ui, repo, opts, buffered=False):
1541 """show one changeset using template or regular display.
1542
1543 Display format will be the first non-empty hit of:
1544 1. option 'template'
1545 2. option 'style'
1546 3. [ui] setting 'logtemplate'
1547 4. [ui] setting 'style'
1548 If all of these values are either unset or the empty string,
1549 regular display via changeset_printer() is done.
1550 """
1551 # options
1551 # options
1552 matchfn = None
1552 matchfn = None
1553 if opts.get('patch') or opts.get('stat'):
1553 if opts.get('patch') or opts.get('stat'):
1554 matchfn = scmutil.matchall(repo)
1554 matchfn = scmutil.matchall(repo)
1555
1555
1556 if opts.get('template') == 'json':
1556 if opts.get('template') == 'json':
1557 return jsonchangeset(ui, repo, matchfn, opts, buffered)
1557 return jsonchangeset(ui, repo, matchfn, opts, buffered)
1558
1558
1559 tmpl, mapfile = gettemplate(ui, opts.get('template'), opts.get('style'))
1559 tmpl, mapfile = gettemplate(ui, opts.get('template'), opts.get('style'))
1560
1560
1561 if not tmpl and not mapfile:
1561 if not tmpl and not mapfile:
1562 return changeset_printer(ui, repo, matchfn, opts, buffered)
1562 return changeset_printer(ui, repo, matchfn, opts, buffered)
1563
1563
1564 try:
1564 try:
1565 t = changeset_templater(ui, repo, matchfn, opts, tmpl, mapfile,
1565 t = changeset_templater(ui, repo, matchfn, opts, tmpl, mapfile,
1566 buffered)
1566 buffered)
1567 except SyntaxError as inst:
1567 except SyntaxError as inst:
1568 raise error.Abort(inst.args[0])
1568 raise error.Abort(inst.args[0])
1569 return t
1569 return t
1570
1570
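# Illustrative sketch (not part of the original module): the precedence
# documented above means an explicit --template always wins over hgrc
# settings, so with ``[ui] logtemplate = {rev}\n`` configured,
#
#     displayer = show_changeset(ui, repo, {'template': '{node|short}\n'})
#     displayer.show(repo['tip'])
#     displayer.close()
#
# still prints the short node rather than the bare revision number.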
1571 def showmarker(ui, marker):
1571 def showmarker(ui, marker):
1572 """utility function to display an obsolescence marker in a readable way
1573
1574 To be used by debug functions."""
1575 ui.write(hex(marker.precnode()))
1575 ui.write(hex(marker.precnode()))
1576 for repl in marker.succnodes():
1576 for repl in marker.succnodes():
1577 ui.write(' ')
1577 ui.write(' ')
1578 ui.write(hex(repl))
1578 ui.write(hex(repl))
1579 ui.write(' %X ' % marker.flags())
1579 ui.write(' %X ' % marker.flags())
1580 parents = marker.parentnodes()
1580 parents = marker.parentnodes()
1581 if parents is not None:
1581 if parents is not None:
1582 ui.write('{%s} ' % ', '.join(hex(p) for p in parents))
1582 ui.write('{%s} ' % ', '.join(hex(p) for p in parents))
1583 ui.write('(%s) ' % util.datestr(marker.date()))
1583 ui.write('(%s) ' % util.datestr(marker.date()))
1584 ui.write('{%s}' % (', '.join('%r: %r' % t for t in
1584 ui.write('{%s}' % (', '.join('%r: %r' % t for t in
1585 sorted(marker.metadata().items())
1585 sorted(marker.metadata().items())
1586 if t[0] != 'date')))
1586 if t[0] != 'date')))
1587 ui.write('\n')
1587 ui.write('\n')
1588
1588
1589 def finddate(ui, repo, date):
1589 def finddate(ui, repo, date):
1590 """Find the tipmost changeset that matches the given date spec"""
1590 """Find the tipmost changeset that matches the given date spec"""
1591
1591
1592 df = util.matchdate(date)
1592 df = util.matchdate(date)
1593 m = scmutil.matchall(repo)
1593 m = scmutil.matchall(repo)
1594 results = {}
1594 results = {}
1595
1595
1596 def prep(ctx, fns):
1596 def prep(ctx, fns):
1597 d = ctx.date()
1597 d = ctx.date()
1598 if df(d[0]):
1598 if df(d[0]):
1599 results[ctx.rev()] = d
1599 results[ctx.rev()] = d
1600
1600
1601 for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
1601 for ctx in walkchangerevs(repo, m, {'rev': None}, prep):
1602 rev = ctx.rev()
1602 rev = ctx.rev()
1603 if rev in results:
1603 if rev in results:
1604 ui.status(_("found revision %s from %s\n") %
1604 ui.status(_("found revision %s from %s\n") %
1605 (rev, util.datestr(results[rev])))
1605 (rev, util.datestr(results[rev])))
1606 return str(rev)
1606 return str(rev)
1607
1607
1608 raise error.Abort(_("revision matching date not found"))
1608 raise error.Abort(_("revision matching date not found"))
1609
1609
1610 def increasingwindows(windowsize=8, sizelimit=512):
1611 while True:
1612 yield windowsize
1613 if windowsize < sizelimit:
1614 windowsize *= 2
1615
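# Illustrative note (not part of the original module): the generator doubles
# the window until it reaches sizelimit and then keeps yielding that limit
# forever:
#
#     from itertools import islice
#     list(islice(increasingwindows(), 9))
#     # -> [8, 16, 32, 64, 128, 256, 512, 512, 512]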
1616 class FileWalkError(Exception):
1616 class FileWalkError(Exception):
1617 pass
1617 pass
1618
1618
1619 def walkfilerevs(repo, match, follow, revs, fncache):
1619 def walkfilerevs(repo, match, follow, revs, fncache):
1620 '''Walks the file history for the matched files.
1620 '''Walks the file history for the matched files.
1621
1621
1622 Returns the changeset revs that are involved in the file history.
1622 Returns the changeset revs that are involved in the file history.
1623
1623
1624 Throws FileWalkError if the file history can't be walked using
1624 Throws FileWalkError if the file history can't be walked using
1625 filelogs alone.
1625 filelogs alone.
1626 '''
1626 '''
1627 wanted = set()
1627 wanted = set()
1628 copies = []
1628 copies = []
1629 minrev, maxrev = min(revs), max(revs)
1629 minrev, maxrev = min(revs), max(revs)
1630 def filerevgen(filelog, last):
1630 def filerevgen(filelog, last):
1631 """
1631 """
1632 Only files, no patterns. Check the history of each file.
1632 Only files, no patterns. Check the history of each file.
1633
1633
1634 Examines filelog entries within the minrev..maxrev linkrev range.
1635 Returns an iterator yielding (linkrev, parentlinkrevs, copied)
1636 tuples in reverse (newest-first) order.
1637 """
1637 """
1638 cl_count = len(repo)
1638 cl_count = len(repo)
1639 revs = []
1639 revs = []
1640 for j in xrange(0, last + 1):
1640 for j in xrange(0, last + 1):
1641 linkrev = filelog.linkrev(j)
1641 linkrev = filelog.linkrev(j)
1642 if linkrev < minrev:
1642 if linkrev < minrev:
1643 continue
1643 continue
1644 # only yield revs for which we have the changelog; this can
1645 # happen while doing "hg log" during a pull or commit
1646 if linkrev >= cl_count:
1646 if linkrev >= cl_count:
1647 break
1647 break
1648
1648
1649 parentlinkrevs = []
1649 parentlinkrevs = []
1650 for p in filelog.parentrevs(j):
1650 for p in filelog.parentrevs(j):
1651 if p != nullrev:
1651 if p != nullrev:
1652 parentlinkrevs.append(filelog.linkrev(p))
1652 parentlinkrevs.append(filelog.linkrev(p))
1653 n = filelog.node(j)
1653 n = filelog.node(j)
1654 revs.append((linkrev, parentlinkrevs,
1654 revs.append((linkrev, parentlinkrevs,
1655 follow and filelog.renamed(n)))
1655 follow and filelog.renamed(n)))
1656
1656
1657 return reversed(revs)
1657 return reversed(revs)
1658 def iterfiles():
1658 def iterfiles():
1659 pctx = repo['.']
1659 pctx = repo['.']
1660 for filename in match.files():
1660 for filename in match.files():
1661 if follow:
1661 if follow:
1662 if filename not in pctx:
1662 if filename not in pctx:
1663 raise error.Abort(_('cannot follow file not in parent '
1663 raise error.Abort(_('cannot follow file not in parent '
1664 'revision: "%s"') % filename)
1664 'revision: "%s"') % filename)
1665 yield filename, pctx[filename].filenode()
1665 yield filename, pctx[filename].filenode()
1666 else:
1666 else:
1667 yield filename, None
1667 yield filename, None
1668 for filename_node in copies:
1668 for filename_node in copies:
1669 yield filename_node
1669 yield filename_node
1670
1670
1671 for file_, node in iterfiles():
1671 for file_, node in iterfiles():
1672 filelog = repo.file(file_)
1672 filelog = repo.file(file_)
1673 if not len(filelog):
1673 if not len(filelog):
1674 if node is None:
1674 if node is None:
1675 # A zero count may be a directory or deleted file, so
1675 # A zero count may be a directory or deleted file, so
1676 # try to find matching entries on the slow path.
1676 # try to find matching entries on the slow path.
1677 if follow:
1677 if follow:
1678 raise error.Abort(
1678 raise error.Abort(
1679 _('cannot follow nonexistent file: "%s"') % file_)
1679 _('cannot follow nonexistent file: "%s"') % file_)
1680 raise FileWalkError("Cannot walk via filelog")
1680 raise FileWalkError("Cannot walk via filelog")
1681 else:
1681 else:
1682 continue
1682 continue
1683
1683
1684 if node is None:
1684 if node is None:
1685 last = len(filelog) - 1
1685 last = len(filelog) - 1
1686 else:
1686 else:
1687 last = filelog.rev(node)
1687 last = filelog.rev(node)
1688
1688
1689 # keep track of all ancestors of the file
1689 # keep track of all ancestors of the file
1690 ancestors = set([filelog.linkrev(last)])
1690 ancestors = set([filelog.linkrev(last)])
1691
1691
1692 # iterate from latest to oldest revision
1692 # iterate from latest to oldest revision
1693 for rev, flparentlinkrevs, copied in filerevgen(filelog, last):
1693 for rev, flparentlinkrevs, copied in filerevgen(filelog, last):
1694 if not follow:
1694 if not follow:
1695 if rev > maxrev:
1695 if rev > maxrev:
1696 continue
1696 continue
1697 else:
1697 else:
1698 # Note that last might not be the first interesting
1698 # Note that last might not be the first interesting
1699 # rev to us:
1699 # rev to us:
1700 # if the file has been changed after maxrev, we'll
1700 # if the file has been changed after maxrev, we'll
1701 # have linkrev(last) > maxrev, and we still need
1701 # have linkrev(last) > maxrev, and we still need
1702 # to explore the file graph
1702 # to explore the file graph
1703 if rev not in ancestors:
1703 if rev not in ancestors:
1704 continue
1704 continue
1705 # XXX insert 1327 fix here
1705 # XXX insert 1327 fix here
1706 if flparentlinkrevs:
1706 if flparentlinkrevs:
1707 ancestors.update(flparentlinkrevs)
1707 ancestors.update(flparentlinkrevs)
1708
1708
1709 fncache.setdefault(rev, []).append(file_)
1709 fncache.setdefault(rev, []).append(file_)
1710 wanted.add(rev)
1710 wanted.add(rev)
1711 if copied:
1711 if copied:
1712 copies.append(copied)
1712 copies.append(copied)
1713
1713
1714 return wanted
1714 return wanted
1715
1715
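# Illustrative sketch (not part of the original module): callers treat
# FileWalkError as "take the slow path"; walkchangerevs() below does
# essentially this:
#
#     fncache = {}
#     try:
#         wanted = walkfilerevs(repo, match, False, revs, fncache)
#     except FileWalkError:
#         wanted = None   # fall back to scanning changelog file lists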
1716 class _followfilter(object):
1716 class _followfilter(object):
1717 def __init__(self, repo, onlyfirst=False):
1717 def __init__(self, repo, onlyfirst=False):
1718 self.repo = repo
1718 self.repo = repo
1719 self.startrev = nullrev
1719 self.startrev = nullrev
1720 self.roots = set()
1720 self.roots = set()
1721 self.onlyfirst = onlyfirst
1721 self.onlyfirst = onlyfirst
1722
1722
1723 def match(self, rev):
1723 def match(self, rev):
1724 def realparents(rev):
1724 def realparents(rev):
1725 if self.onlyfirst:
1725 if self.onlyfirst:
1726 return self.repo.changelog.parentrevs(rev)[0:1]
1726 return self.repo.changelog.parentrevs(rev)[0:1]
1727 else:
1727 else:
1728 return filter(lambda x: x != nullrev,
1728 return filter(lambda x: x != nullrev,
1729 self.repo.changelog.parentrevs(rev))
1729 self.repo.changelog.parentrevs(rev))
1730
1730
1731 if self.startrev == nullrev:
1731 if self.startrev == nullrev:
1732 self.startrev = rev
1732 self.startrev = rev
1733 return True
1733 return True
1734
1734
1735 if rev > self.startrev:
1735 if rev > self.startrev:
1736 # forward: all descendants
1736 # forward: all descendants
1737 if not self.roots:
1737 if not self.roots:
1738 self.roots.add(self.startrev)
1738 self.roots.add(self.startrev)
1739 for parent in realparents(rev):
1739 for parent in realparents(rev):
1740 if parent in self.roots:
1740 if parent in self.roots:
1741 self.roots.add(rev)
1741 self.roots.add(rev)
1742 return True
1742 return True
1743 else:
1743 else:
1744 # backwards: all parents
1744 # backwards: all parents
1745 if not self.roots:
1745 if not self.roots:
1746 self.roots.update(realparents(self.startrev))
1746 self.roots.update(realparents(self.startrev))
1747 if rev in self.roots:
1747 if rev in self.roots:
1748 self.roots.remove(rev)
1748 self.roots.remove(rev)
1749 self.roots.update(realparents(rev))
1749 self.roots.update(realparents(rev))
1750 return True
1750 return True
1751
1751
1752 return False
1752 return False
1753
1753
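# Illustrative note (not part of the original module): the filter is stateful
# -- the first revision passed to match() becomes startrev, and later calls
# answer whether a revision is connected to it in the direction being walked.
# ``candidaterevs`` below is just a stand-in for whatever iterable the caller
# walks:
#
#     ff = _followfilter(repo)
#     related = [r for r in candidaterevs if ff.match(r)]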
1754 def walkchangerevs(repo, match, opts, prepare):
1754 def walkchangerevs(repo, match, opts, prepare):
1755 '''Iterate over files and the revs in which they changed.
1756
1757 Callers most commonly need to iterate backwards over the history
1758 in which they are interested. Doing so has awful (quadratic-looking)
1759 performance, so we use iterators in a "windowed" way.
1760
1761 We walk a window of revisions in the desired order. Within the
1762 window, we first walk forwards to gather data, then in the desired
1763 order (usually backwards) to display it.
1764
1765 This function returns an iterator yielding contexts. Before
1766 yielding each context, the iterator will first call the prepare
1767 function on each context in the window in forward order.'''
1768
1768
1769 follow = opts.get('follow') or opts.get('follow_first')
1769 follow = opts.get('follow') or opts.get('follow_first')
1770 revs = _logrevs(repo, opts)
1770 revs = _logrevs(repo, opts)
1771 if not revs:
1771 if not revs:
1772 return []
1772 return []
1773 wanted = set()
1773 wanted = set()
1774 slowpath = match.anypats() or ((match.isexact() or match.prefix()) and
1774 slowpath = match.anypats() or ((match.isexact() or match.prefix()) and
1775 opts.get('removed'))
1775 opts.get('removed'))
1776 fncache = {}
1776 fncache = {}
1777 change = repo.changectx
1777 change = repo.changectx
1778
1778
1779 # First step is to fill wanted, the set of revisions that we want to yield.
1779 # First step is to fill wanted, the set of revisions that we want to yield.
1780 # When it does not induce extra cost, we also fill fncache for revisions in
1780 # When it does not induce extra cost, we also fill fncache for revisions in
1781 # wanted: a cache of filenames that were changed (ctx.files()) and that
1781 # wanted: a cache of filenames that were changed (ctx.files()) and that
1782 # match the file filtering conditions.
1782 # match the file filtering conditions.
1783
1783
1784 if match.always():
1784 if match.always():
1785 # No files, no patterns. Display all revs.
1785 # No files, no patterns. Display all revs.
1786 wanted = revs
1786 wanted = revs
1787 elif not slowpath:
1787 elif not slowpath:
1788 # We only have to read through the filelog to find wanted revisions
1788 # We only have to read through the filelog to find wanted revisions
1789
1789
1790 try:
1790 try:
1791 wanted = walkfilerevs(repo, match, follow, revs, fncache)
1791 wanted = walkfilerevs(repo, match, follow, revs, fncache)
1792 except FileWalkError:
1792 except FileWalkError:
1793 slowpath = True
1793 slowpath = True
1794
1794
1795 # We decided to fall back to the slowpath because at least one
1795 # We decided to fall back to the slowpath because at least one
1796 # of the paths was not a file. Check to see if at least one of them
1796 # of the paths was not a file. Check to see if at least one of them
1797 # existed in history, otherwise simply return
1797 # existed in history, otherwise simply return
1798 for path in match.files():
1798 for path in match.files():
1799 if path == '.' or path in repo.store:
1799 if path == '.' or path in repo.store:
1800 break
1800 break
1801 else:
1801 else:
1802 return []
1802 return []
1803
1803
1804 if slowpath:
1804 if slowpath:
1805 # We have to read the changelog to match filenames against
1805 # We have to read the changelog to match filenames against
1806 # changed files
1806 # changed files
1807
1807
1808 if follow:
1808 if follow:
1809 raise error.Abort(_('can only follow copies/renames for explicit '
1809 raise error.Abort(_('can only follow copies/renames for explicit '
1810 'filenames'))
1810 'filenames'))
1811
1811
1812 # The slow path checks files modified in every changeset.
1812 # The slow path checks files modified in every changeset.
1813 # This is really slow on large repos, so compute the set lazily.
1813 # This is really slow on large repos, so compute the set lazily.
1814 class lazywantedset(object):
1814 class lazywantedset(object):
1815 def __init__(self):
1815 def __init__(self):
1816 self.set = set()
1816 self.set = set()
1817 self.revs = set(revs)
1817 self.revs = set(revs)
1818
1818
1819 # No need to worry about locality here because it will be accessed
1819 # No need to worry about locality here because it will be accessed
1820 # in the same order as the increasing window below.
1820 # in the same order as the increasing window below.
1821 def __contains__(self, value):
1821 def __contains__(self, value):
1822 if value in self.set:
1822 if value in self.set:
1823 return True
1823 return True
1824 elif not value in self.revs:
1824 elif not value in self.revs:
1825 return False
1825 return False
1826 else:
1826 else:
1827 self.revs.discard(value)
1827 self.revs.discard(value)
1828 ctx = change(value)
1828 ctx = change(value)
1829 matches = filter(match, ctx.files())
1829 matches = filter(match, ctx.files())
1830 if matches:
1830 if matches:
1831 fncache[value] = matches
1831 fncache[value] = matches
1832 self.set.add(value)
1832 self.set.add(value)
1833 return True
1833 return True
1834 return False
1834 return False
1835
1835
1836 def discard(self, value):
1836 def discard(self, value):
1837 self.revs.discard(value)
1837 self.revs.discard(value)
1838 self.set.discard(value)
1838 self.set.discard(value)
1839
1839
1840 wanted = lazywantedset()
1840 wanted = lazywantedset()
1841
1841
1842 # it might be worthwhile to do this in the iterator if the rev range
1842 # it might be worthwhile to do this in the iterator if the rev range
1843 # is descending and the prune args are all within that range
1843 # is descending and the prune args are all within that range
1844 for rev in opts.get('prune', ()):
1844 for rev in opts.get('prune', ()):
1845 rev = repo[rev].rev()
1845 rev = repo[rev].rev()
1846 ff = _followfilter(repo)
1846 ff = _followfilter(repo)
1847 stop = min(revs[0], revs[-1])
1847 stop = min(revs[0], revs[-1])
1848 for x in xrange(rev, stop - 1, -1):
1848 for x in xrange(rev, stop - 1, -1):
1849 if ff.match(x):
1849 if ff.match(x):
1850 wanted = wanted - [x]
1850 wanted = wanted - [x]
1851
1851
1852 # Now that wanted is correctly initialized, we can iterate over the
1852 # Now that wanted is correctly initialized, we can iterate over the
1853 # revision range, yielding only revisions in wanted.
1853 # revision range, yielding only revisions in wanted.
1854 def iterate():
1854 def iterate():
1855 if follow and match.always():
1855 if follow and match.always():
1856 ff = _followfilter(repo, onlyfirst=opts.get('follow_first'))
1856 ff = _followfilter(repo, onlyfirst=opts.get('follow_first'))
1857 def want(rev):
1857 def want(rev):
1858 return ff.match(rev) and rev in wanted
1858 return ff.match(rev) and rev in wanted
1859 else:
1859 else:
1860 def want(rev):
1860 def want(rev):
1861 return rev in wanted
1861 return rev in wanted
1862
1862
1863 it = iter(revs)
1863 it = iter(revs)
1864 stopiteration = False
1864 stopiteration = False
1865 for windowsize in increasingwindows():
1865 for windowsize in increasingwindows():
1866 nrevs = []
1866 nrevs = []
1867 for i in xrange(windowsize):
1867 for i in xrange(windowsize):
1868 rev = next(it, None)
1868 rev = next(it, None)
1869 if rev is None:
1869 if rev is None:
1870 stopiteration = True
1870 stopiteration = True
1871 break
1871 break
1872 elif want(rev):
1872 elif want(rev):
1873 nrevs.append(rev)
1873 nrevs.append(rev)
1874 for rev in sorted(nrevs):
1874 for rev in sorted(nrevs):
1875 fns = fncache.get(rev)
1875 fns = fncache.get(rev)
1876 ctx = change(rev)
1876 ctx = change(rev)
1877 if not fns:
1877 if not fns:
1878 def fns_generator():
1878 def fns_generator():
1879 for f in ctx.files():
1879 for f in ctx.files():
1880 if match(f):
1880 if match(f):
1881 yield f
1881 yield f
1882 fns = fns_generator()
1882 fns = fns_generator()
1883 prepare(ctx, fns)
1883 prepare(ctx, fns)
1884 for rev in nrevs:
1884 for rev in nrevs:
1885 yield change(rev)
1885 yield change(rev)
1886
1886
1887 if stopiteration:
1887 if stopiteration:
1888 break
1888 break
1889
1889
1890 return iterate()
1890 return iterate()
1891
1891
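# Illustrative sketch (not part of the original module): typical use mirrors
# finddate() above -- gather per-revision data in the prepare callback, then
# consume the yielded contexts in display order:
#
#     m = scmutil.matchall(repo)
#     touched = {}
#     def prepare(ctx, fns):
#         touched[ctx.rev()] = list(fns)
#     for ctx in walkchangerevs(repo, m, {'rev': None}, prepare):
#         ui.write("%d changed %d file(s)\n"
#                  % (ctx.rev(), len(touched[ctx.rev()])))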
1892 def _makefollowlogfilematcher(repo, files, followfirst):
1892 def _makefollowlogfilematcher(repo, files, followfirst):
1893 # When displaying a revision with --patch --follow FILE, we have
1893 # When displaying a revision with --patch --follow FILE, we have
1894 # to know which file of the revision must be diffed. With
1894 # to know which file of the revision must be diffed. With
1895 # --follow, we want the names of the ancestors of FILE in the
1895 # --follow, we want the names of the ancestors of FILE in the
1896 # revision, stored in "fcache". "fcache" is populated by
1896 # revision, stored in "fcache". "fcache" is populated by
1897 # reproducing the graph traversal already done by --follow revset
1897 # reproducing the graph traversal already done by --follow revset
1898 # and relating linkrevs to file names (which is not "correct" but
1898 # and relating linkrevs to file names (which is not "correct" but
1899 # good enough).
1899 # good enough).
1900 fcache = {}
1900 fcache = {}
1901 fcacheready = [False]
1901 fcacheready = [False]
1902 pctx = repo['.']
1902 pctx = repo['.']
1903
1903
1904 def populate():
1904 def populate():
1905 for fn in files:
1905 for fn in files:
1906 for i in ((pctx[fn],), pctx[fn].ancestors(followfirst=followfirst)):
1906 for i in ((pctx[fn],), pctx[fn].ancestors(followfirst=followfirst)):
1907 for c in i:
1907 for c in i:
1908 fcache.setdefault(c.linkrev(), set()).add(c.path())
1908 fcache.setdefault(c.linkrev(), set()).add(c.path())
1909
1909
1910 def filematcher(rev):
1910 def filematcher(rev):
1911 if not fcacheready[0]:
1911 if not fcacheready[0]:
1912 # Lazy initialization
1912 # Lazy initialization
1913 fcacheready[0] = True
1913 fcacheready[0] = True
1914 populate()
1914 populate()
1915 return scmutil.matchfiles(repo, fcache.get(rev, []))
1915 return scmutil.matchfiles(repo, fcache.get(rev, []))
1916
1916
1917 return filematcher
1917 return filematcher
1918
1918
1919 def _makenofollowlogfilematcher(repo, pats, opts):
1919 def _makenofollowlogfilematcher(repo, pats, opts):
1920 '''hook for extensions to override the filematcher for non-follow cases'''
1920 '''hook for extensions to override the filematcher for non-follow cases'''
1921 return None
1921 return None
1922
1922
1923 def _makelogrevset(repo, pats, opts, revs):
1923 def _makelogrevset(repo, pats, opts, revs):
1924 """Return (expr, filematcher) where expr is a revset string built
1925 from log options and file patterns, or None. If --stat or --patch
1926 are not passed, filematcher is None. Otherwise it is a callable
1927 taking a revision number and returning a match object filtering
1928 the files to be detailed when displaying the revision.
1929 """
1930 opt2revset = {
1930 opt2revset = {
1931 'no_merges': ('not merge()', None),
1931 'no_merges': ('not merge()', None),
1932 'only_merges': ('merge()', None),
1932 'only_merges': ('merge()', None),
1933 '_ancestors': ('ancestors(%(val)s)', None),
1933 '_ancestors': ('ancestors(%(val)s)', None),
1934 '_fancestors': ('_firstancestors(%(val)s)', None),
1934 '_fancestors': ('_firstancestors(%(val)s)', None),
1935 '_descendants': ('descendants(%(val)s)', None),
1935 '_descendants': ('descendants(%(val)s)', None),
1936 '_fdescendants': ('_firstdescendants(%(val)s)', None),
1936 '_fdescendants': ('_firstdescendants(%(val)s)', None),
1937 '_matchfiles': ('_matchfiles(%(val)s)', None),
1937 '_matchfiles': ('_matchfiles(%(val)s)', None),
1938 'date': ('date(%(val)r)', None),
1938 'date': ('date(%(val)r)', None),
1939 'branch': ('branch(%(val)r)', ' or '),
1939 'branch': ('branch(%(val)r)', ' or '),
1940 '_patslog': ('filelog(%(val)r)', ' or '),
1940 '_patslog': ('filelog(%(val)r)', ' or '),
1941 '_patsfollow': ('follow(%(val)r)', ' or '),
1941 '_patsfollow': ('follow(%(val)r)', ' or '),
1942 '_patsfollowfirst': ('_followfirst(%(val)r)', ' or '),
1942 '_patsfollowfirst': ('_followfirst(%(val)r)', ' or '),
1943 'keyword': ('keyword(%(val)r)', ' or '),
1943 'keyword': ('keyword(%(val)r)', ' or '),
1944 'prune': ('not (%(val)r or ancestors(%(val)r))', ' and '),
1944 'prune': ('not (%(val)r or ancestors(%(val)r))', ' and '),
1945 'user': ('user(%(val)r)', ' or '),
1945 'user': ('user(%(val)r)', ' or '),
1946 }
1946 }
1947
1947
1948 opts = dict(opts)
1948 opts = dict(opts)
1949 # follow or not follow?
1949 # follow or not follow?
1950 follow = opts.get('follow') or opts.get('follow_first')
1950 follow = opts.get('follow') or opts.get('follow_first')
1951 if opts.get('follow_first'):
1951 if opts.get('follow_first'):
1952 followfirst = 1
1952 followfirst = 1
1953 else:
1953 else:
1954 followfirst = 0
1954 followfirst = 0
1955 # --follow with FILE behavior depends on revs...
1955 # --follow with FILE behavior depends on revs...
1956 it = iter(revs)
1956 it = iter(revs)
1957 startrev = it.next()
1957 startrev = it.next()
1958 followdescendants = startrev < next(it, startrev)
1958 followdescendants = startrev < next(it, startrev)
1959
1959
1960 # branch and only_branch are really aliases and must be handled at
1960 # branch and only_branch are really aliases and must be handled at
1961 # the same time
1961 # the same time
1962 opts['branch'] = opts.get('branch', []) + opts.get('only_branch', [])
1962 opts['branch'] = opts.get('branch', []) + opts.get('only_branch', [])
1963 opts['branch'] = [repo.lookupbranch(b) for b in opts['branch']]
1963 opts['branch'] = [repo.lookupbranch(b) for b in opts['branch']]
1964 # pats/include/exclude are passed to match.match() directly in
1964 # pats/include/exclude are passed to match.match() directly in
1965 # _matchfiles() revset but walkchangerevs() builds its matcher with
1965 # _matchfiles() revset but walkchangerevs() builds its matcher with
1966 # scmutil.match(). The difference is input pats are globbed on
1966 # scmutil.match(). The difference is input pats are globbed on
1967 # platforms without shell expansion (windows).
1967 # platforms without shell expansion (windows).
1968 wctx = repo[None]
1968 wctx = repo[None]
1969 match, pats = scmutil.matchandpats(wctx, pats, opts)
1969 match, pats = scmutil.matchandpats(wctx, pats, opts)
1970 slowpath = match.anypats() or ((match.isexact() or match.prefix()) and
1970 slowpath = match.anypats() or ((match.isexact() or match.prefix()) and
1971 opts.get('removed'))
1971 opts.get('removed'))
1972 if not slowpath:
1972 if not slowpath:
1973 for f in match.files():
1973 for f in match.files():
1974 if follow and f not in wctx:
1974 if follow and f not in wctx:
1975 # If the file exists, it may be a directory, so let it
1975 # If the file exists, it may be a directory, so let it
1976 # take the slow path.
1976 # take the slow path.
1977 if os.path.exists(repo.wjoin(f)):
1977 if os.path.exists(repo.wjoin(f)):
1978 slowpath = True
1978 slowpath = True
1979 continue
1979 continue
1980 else:
1980 else:
1981 raise error.Abort(_('cannot follow file not in parent '
1981 raise error.Abort(_('cannot follow file not in parent '
1982 'revision: "%s"') % f)
1982 'revision: "%s"') % f)
1983 filelog = repo.file(f)
1983 filelog = repo.file(f)
1984 if not filelog:
1984 if not filelog:
1985 # A zero count may be a directory or deleted file, so
1985 # A zero count may be a directory or deleted file, so
1986 # try to find matching entries on the slow path.
1986 # try to find matching entries on the slow path.
1987 if follow:
1987 if follow:
1988 raise error.Abort(
1988 raise error.Abort(
1989 _('cannot follow nonexistent file: "%s"') % f)
1989 _('cannot follow nonexistent file: "%s"') % f)
1990 slowpath = True
1990 slowpath = True
1991
1991
1992 # We decided to fall back to the slowpath because at least one
1992 # We decided to fall back to the slowpath because at least one
1993 # of the paths was not a file. Check to see if at least one of them
1993 # of the paths was not a file. Check to see if at least one of them
1994 # existed in history - in that case, we'll continue down the
1994 # existed in history - in that case, we'll continue down the
1995 # slowpath; otherwise, we can turn off the slowpath
1995 # slowpath; otherwise, we can turn off the slowpath
1996 if slowpath:
1996 if slowpath:
1997 for path in match.files():
1997 for path in match.files():
1998 if path == '.' or path in repo.store:
1998 if path == '.' or path in repo.store:
1999 break
1999 break
2000 else:
2000 else:
2001 slowpath = False
2001 slowpath = False
2002
2002
2003 fpats = ('_patsfollow', '_patsfollowfirst')
2003 fpats = ('_patsfollow', '_patsfollowfirst')
2004 fnopats = (('_ancestors', '_fancestors'),
2004 fnopats = (('_ancestors', '_fancestors'),
2005 ('_descendants', '_fdescendants'))
2005 ('_descendants', '_fdescendants'))
2006 if slowpath:
2006 if slowpath:
2007 # See walkchangerevs() slow path.
2007 # See walkchangerevs() slow path.
2008 #
2008 #
2009 # pats/include/exclude cannot be represented as separate
2009 # pats/include/exclude cannot be represented as separate
2010 # revset expressions as their filtering logic applies at file
2010 # revset expressions as their filtering logic applies at file
2011 # level. For instance "-I a -X a" matches a revision touching
2011 # level. For instance "-I a -X a" matches a revision touching
2012 # "a" and "b" while "file(a) and not file(b)" does
2012 # "a" and "b" while "file(a) and not file(b)" does
2013 # not. Besides, filesets are evaluated against the working
2013 # not. Besides, filesets are evaluated against the working
2014 # directory.
2014 # directory.
2015 matchargs = ['r:', 'd:relpath']
2015 matchargs = ['r:', 'd:relpath']
2016 for p in pats:
2016 for p in pats:
2017 matchargs.append('p:' + p)
2017 matchargs.append('p:' + p)
2018 for p in opts.get('include', []):
2018 for p in opts.get('include', []):
2019 matchargs.append('i:' + p)
2019 matchargs.append('i:' + p)
2020 for p in opts.get('exclude', []):
2020 for p in opts.get('exclude', []):
2021 matchargs.append('x:' + p)
2021 matchargs.append('x:' + p)
2022 matchargs = ','.join(('%r' % p) for p in matchargs)
2022 matchargs = ','.join(('%r' % p) for p in matchargs)
2023 opts['_matchfiles'] = matchargs
2023 opts['_matchfiles'] = matchargs
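# Illustrative example: for "hg log -I a -X b" with no file patterns,
# matchargs becomes "'r:', 'd:relpath', 'i:a', 'x:b'", which the
# '_matchfiles' entry of opt2revset (defined earlier in this module)
# later expands into a _matchfiles(...) revset call.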
2024 if follow:
2024 if follow:
2025 opts[fnopats[0][followfirst]] = '.'
2025 opts[fnopats[0][followfirst]] = '.'
2026 else:
2026 else:
2027 if follow:
2027 if follow:
2028 if pats:
2028 if pats:
2029 # follow() revset interprets its file argument as a
2029 # follow() revset interprets its file argument as a
2030 # manifest entry, so use match.files(), not pats.
2030 # manifest entry, so use match.files(), not pats.
2031 opts[fpats[followfirst]] = list(match.files())
2031 opts[fpats[followfirst]] = list(match.files())
2032 else:
2032 else:
2033 op = fnopats[followdescendants][followfirst]
2033 op = fnopats[followdescendants][followfirst]
2034 opts[op] = 'rev(%d)' % startrev
2034 opts[op] = 'rev(%d)' % startrev
2035 else:
2035 else:
2036 opts['_patslog'] = list(pats)
2036 opts['_patslog'] = list(pats)
2037
2037
2038 filematcher = None
2038 filematcher = None
2039 if opts.get('patch') or opts.get('stat'):
2039 if opts.get('patch') or opts.get('stat'):
2040 # When following files, track renames via a special matcher.
2040 # When following files, track renames via a special matcher.
2041 # If we're forced to take the slowpath it means we're following
2041 # If we're forced to take the slowpath it means we're following
2042 # at least one pattern/directory, so don't bother with rename tracking.
2042 # at least one pattern/directory, so don't bother with rename tracking.
2043 if follow and not match.always() and not slowpath:
2043 if follow and not match.always() and not slowpath:
2044 # _makefollowlogfilematcher expects its files argument to be
2044 # _makefollowlogfilematcher expects its files argument to be
2045 # relative to the repo root, so use match.files(), not pats.
2045 # relative to the repo root, so use match.files(), not pats.
2046 filematcher = _makefollowlogfilematcher(repo, match.files(),
2046 filematcher = _makefollowlogfilematcher(repo, match.files(),
2047 followfirst)
2047 followfirst)
2048 else:
2048 else:
2049 filematcher = _makenofollowlogfilematcher(repo, pats, opts)
2049 filematcher = _makenofollowlogfilematcher(repo, pats, opts)
2050 if filematcher is None:
2050 if filematcher is None:
2051 filematcher = lambda rev: match
2051 filematcher = lambda rev: match
2052
2052
2053 expr = []
2053 expr = []
2054 for op, val in sorted(opts.iteritems()):
2054 for op, val in sorted(opts.iteritems()):
2055 if not val:
2055 if not val:
2056 continue
2056 continue
2057 if op not in opt2revset:
2057 if op not in opt2revset:
2058 continue
2058 continue
2059 revop, andor = opt2revset[op]
2059 revop, andor = opt2revset[op]
2060 if '%(val)' not in revop:
2060 if '%(val)' not in revop:
2061 expr.append(revop)
2061 expr.append(revop)
2062 else:
2062 else:
2063 if not isinstance(val, list):
2063 if not isinstance(val, list):
2064 e = revop % {'val': val}
2064 e = revop % {'val': val}
2065 else:
2065 else:
2066 e = '(' + andor.join((revop % {'val': v}) for v in val) + ')'
2066 e = '(' + andor.join((revop % {'val': v}) for v in val) + ')'
2067 expr.append(e)
2067 expr.append(e)
2068
2068
2069 if expr:
2069 if expr:
2070 expr = '(' + ' and '.join(expr) + ')'
2070 expr = '(' + ' and '.join(expr) + ')'
2071 else:
2071 else:
2072 expr = None
2072 expr = None
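# Example: with --keyword and --user given, expr ends up as something
# like "((keyword('...')) and (user('...')))"; the exact fragments come
# from opt2revset, defined earlier in this module.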
2073 return expr, filematcher
2073 return expr, filematcher
2074
2074
2075 def _logrevs(repo, opts):
2075 def _logrevs(repo, opts):
2076 # Default --rev value depends on --follow but --follow behavior
2076 # Default --rev value depends on --follow but --follow behavior
2077 # depends on revisions resolved from --rev...
2077 # depends on revisions resolved from --rev...
2078 follow = opts.get('follow') or opts.get('follow_first')
2078 follow = opts.get('follow') or opts.get('follow_first')
2079 if opts.get('rev'):
2079 if opts.get('rev'):
2080 revs = scmutil.revrange(repo, opts['rev'])
2080 revs = scmutil.revrange(repo, opts['rev'])
2081 elif follow and repo.dirstate.p1() == nullid:
2081 elif follow and repo.dirstate.p1() == nullid:
2082 revs = revset.baseset()
2082 revs = revset.baseset()
2083 elif follow:
2083 elif follow:
2084 revs = repo.revs('reverse(:.)')
2084 revs = repo.revs('reverse(:.)')
2085 else:
2085 else:
2086 revs = revset.spanset(repo)
2086 revs = revset.spanset(repo)
2087 revs.reverse()
2087 revs.reverse()
2088 return revs
2088 return revs
2089
2089
2090 def getgraphlogrevs(repo, pats, opts):
2090 def getgraphlogrevs(repo, pats, opts):
2091 """Return (revs, expr, filematcher) where revs is an iterable of
2091 """Return (revs, expr, filematcher) where revs is an iterable of
2092 revision numbers, expr is a revset string built from log options
2092 revision numbers, expr is a revset string built from log options
2093 and file patterns or None, and used to filter 'revs'. If --stat or
2093 and file patterns or None, and used to filter 'revs'. If --stat or
2094 --patch are not passed, filematcher is None. Otherwise it is a
2094 --patch are not passed, filematcher is None. Otherwise it is a
2095 callable taking a revision number and returning a match object
2095 callable taking a revision number and returning a match object
2096 filtering the files to be detailed when displaying the revision.
2096 filtering the files to be detailed when displaying the revision.
2097 """
2097 """
2098 limit = loglimit(opts)
2098 limit = loglimit(opts)
2099 revs = _logrevs(repo, opts)
2099 revs = _logrevs(repo, opts)
2100 if not revs:
2100 if not revs:
2101 return revset.baseset(), None, None
2101 return revset.baseset(), None, None
2102 expr, filematcher = _makelogrevset(repo, pats, opts, revs)
2102 expr, filematcher = _makelogrevset(repo, pats, opts, revs)
2103 if opts.get('rev'):
2103 if opts.get('rev'):
2104 # User-specified revs might be unsorted, but don't sort before
2104 # User-specified revs might be unsorted, but don't sort before
2105 # _makelogrevset because it might depend on the order of revs
2105 # _makelogrevset because it might depend on the order of revs
2106 revs.sort(reverse=True)
2106 revs.sort(reverse=True)
2107 if expr:
2107 if expr:
2108 # Revset matchers often operate faster on revisions in changelog
2108 # Revset matchers often operate faster on revisions in changelog
2109 # order, because most filters deal with the changelog.
2109 # order, because most filters deal with the changelog.
2110 revs.reverse()
2110 revs.reverse()
2111 matcher = revset.match(repo.ui, expr)
2111 matcher = revset.match(repo.ui, expr)
2112 # Revset matches can reorder revisions. "A or B" typically returns
2112 # Revset matches can reorder revisions. "A or B" typically returns
2113 # the revision matching A then the revision matching B. Sort
2113 # the revision matching A then the revision matching B. Sort
2114 # again to fix that.
2114 # again to fix that.
2115 revs = matcher(repo, revs)
2115 revs = matcher(repo, revs)
2116 revs.sort(reverse=True)
2116 revs.sort(reverse=True)
2117 if limit is not None:
2117 if limit is not None:
2118 limitedrevs = []
2118 limitedrevs = []
2119 for idx, rev in enumerate(revs):
2119 for idx, rev in enumerate(revs):
2120 if idx >= limit:
2120 if idx >= limit:
2121 break
2121 break
2122 limitedrevs.append(rev)
2122 limitedrevs.append(rev)
2123 revs = revset.baseset(limitedrevs)
2123 revs = revset.baseset(limitedrevs)
2124
2124
2125 return revs, expr, filematcher
2125 return revs, expr, filematcher
2126
2126
2127 def getlogrevs(repo, pats, opts):
2127 def getlogrevs(repo, pats, opts):
2128 """Return (revs, expr, filematcher) where revs is an iterable of
2128 """Return (revs, expr, filematcher) where revs is an iterable of
2129 revision numbers, expr is a revset string built from log options
2129 revision numbers, expr is a revset string built from log options
2130 and file patterns or None, and used to filter 'revs'. If --stat or
2130 and file patterns or None, and used to filter 'revs'. If --stat or
2131 --patch are not passed, filematcher is None. Otherwise it is a
2131 --patch are not passed, filematcher is None. Otherwise it is a
2132 callable taking a revision number and returning a match object
2132 callable taking a revision number and returning a match object
2133 filtering the files to be detailed when displaying the revision.
2133 filtering the files to be detailed when displaying the revision.
2134 """
2134 """
2135 limit = loglimit(opts)
2135 limit = loglimit(opts)
2136 revs = _logrevs(repo, opts)
2136 revs = _logrevs(repo, opts)
2137 if not revs:
2137 if not revs:
2138 return revset.baseset([]), None, None
2138 return revset.baseset([]), None, None
2139 expr, filematcher = _makelogrevset(repo, pats, opts, revs)
2139 expr, filematcher = _makelogrevset(repo, pats, opts, revs)
2140 if expr:
2140 if expr:
2141 # Revset matchers often operate faster on revisions in changelog
2141 # Revset matchers often operate faster on revisions in changelog
2142 # order, because most filters deal with the changelog.
2142 # order, because most filters deal with the changelog.
2143 if not opts.get('rev'):
2143 if not opts.get('rev'):
2144 revs.reverse()
2144 revs.reverse()
2145 matcher = revset.match(repo.ui, expr)
2145 matcher = revset.match(repo.ui, expr)
2146 # Revset matches can reorder revisions. "A or B" typically returns
2146 # Revset matches can reorder revisions. "A or B" typically returns
2147 # the revision matching A then the revision matching B. Sort
2147 # the revision matching A then the revision matching B. Sort
2148 # again to fix that.
2148 # again to fix that.
2149 revs = matcher(repo, revs)
2149 revs = matcher(repo, revs)
2150 if not opts.get('rev'):
2150 if not opts.get('rev'):
2151 revs.sort(reverse=True)
2151 revs.sort(reverse=True)
2152 if limit is not None:
2152 if limit is not None:
2153 limitedrevs = []
2153 limitedrevs = []
2154 for idx, r in enumerate(revs):
2154 for idx, r in enumerate(revs):
2155 if limit <= idx:
2155 if limit <= idx:
2156 break
2156 break
2157 limitedrevs.append(r)
2157 limitedrevs.append(r)
2158 revs = revset.baseset(limitedrevs)
2158 revs = revset.baseset(limitedrevs)
2159
2159
2160 return revs, expr, filematcher
2160 return revs, expr, filematcher
2161
2161
2162 def _graphnodeformatter(ui, displayer):
2162 def _graphnodeformatter(ui, displayer):
2163 spec = ui.config('ui', 'graphnodetemplate')
2163 spec = ui.config('ui', 'graphnodetemplate')
2164 if not spec:
2164 if not spec:
2165 return templatekw.showgraphnode # fast path for "{graphnode}"
2165 return templatekw.showgraphnode # fast path for "{graphnode}"
2166
2166
2167 templ = formatter.gettemplater(ui, 'graphnode', spec)
2167 templ = formatter.gettemplater(ui, 'graphnode', spec)
2168 cache = {}
2168 cache = {}
2169 if isinstance(displayer, changeset_templater):
2169 if isinstance(displayer, changeset_templater):
2170 cache = displayer.cache # reuse cache of slow templates
2170 cache = displayer.cache # reuse cache of slow templates
2171 props = templatekw.keywords.copy()
2171 props = templatekw.keywords.copy()
2172 props['templ'] = templ
2172 props['templ'] = templ
2173 props['cache'] = cache
2173 props['cache'] = cache
2174 def formatnode(repo, ctx):
2174 def formatnode(repo, ctx):
2175 props['ctx'] = ctx
2175 props['ctx'] = ctx
2176 props['repo'] = repo
2176 props['repo'] = repo
2177 props['revcache'] = {}
2177 props['revcache'] = {}
2178 return templater.stringify(templ('graphnode', **props))
2178 return templater.stringify(templ('graphnode', **props))
2179 return formatnode
2179 return formatnode
2180
2180
2181 def displaygraph(ui, repo, dag, displayer, edgefn, getrenamed=None,
2181 def displaygraph(ui, repo, dag, displayer, edgefn, getrenamed=None,
2182 filematcher=None):
2182 filematcher=None):
2183 formatnode = _graphnodeformatter(ui, displayer)
2183 formatnode = _graphnodeformatter(ui, displayer)
2184 seen, state = [], graphmod.asciistate()
2184 seen, state = [], graphmod.asciistate()
2185 for rev, type, ctx, parents in dag:
2185 for rev, type, ctx, parents in dag:
2186 char = formatnode(repo, ctx)
2186 char = formatnode(repo, ctx)
2187 copies = None
2187 copies = None
2188 if getrenamed and ctx.rev():
2188 if getrenamed and ctx.rev():
2189 copies = []
2189 copies = []
2190 for fn in ctx.files():
2190 for fn in ctx.files():
2191 rename = getrenamed(fn, ctx.rev())
2191 rename = getrenamed(fn, ctx.rev())
2192 if rename:
2192 if rename:
2193 copies.append((fn, rename[0]))
2193 copies.append((fn, rename[0]))
2194 revmatchfn = None
2194 revmatchfn = None
2195 if filematcher is not None:
2195 if filematcher is not None:
2196 revmatchfn = filematcher(ctx.rev())
2196 revmatchfn = filematcher(ctx.rev())
2197 displayer.show(ctx, copies=copies, matchfn=revmatchfn)
2197 displayer.show(ctx, copies=copies, matchfn=revmatchfn)
2198 lines = displayer.hunk.pop(rev).split('\n')
2198 lines = displayer.hunk.pop(rev).split('\n')
2199 if not lines[-1]:
2199 if not lines[-1]:
2200 del lines[-1]
2200 del lines[-1]
2201 displayer.flush(ctx)
2201 displayer.flush(ctx)
2202 edges = edgefn(type, char, lines, seen, rev, parents)
2202 edges = edgefn(type, char, lines, seen, rev, parents)
2203 for type, char, lines, coldata in edges:
2203 for type, char, lines, coldata in edges:
2204 graphmod.ascii(ui, state, type, char, lines, coldata)
2204 graphmod.ascii(ui, state, type, char, lines, coldata)
2205 displayer.close()
2205 displayer.close()
2206
2206
2207 def graphlog(ui, repo, *pats, **opts):
2207 def graphlog(ui, repo, *pats, **opts):
2208 # Parameters are identical to log command ones
2208 # Parameters are identical to log command ones
2209 revs, expr, filematcher = getgraphlogrevs(repo, pats, opts)
2209 revs, expr, filematcher = getgraphlogrevs(repo, pats, opts)
2210 revdag = graphmod.dagwalker(repo, revs)
2210 revdag = graphmod.dagwalker(repo, revs)
2211
2211
2212 getrenamed = None
2212 getrenamed = None
2213 if opts.get('copies'):
2213 if opts.get('copies'):
2214 endrev = None
2214 endrev = None
2215 if opts.get('rev'):
2215 if opts.get('rev'):
2216 endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
2216 endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
2217 getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
2217 getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
2218 displayer = show_changeset(ui, repo, opts, buffered=True)
2218 displayer = show_changeset(ui, repo, opts, buffered=True)
2219 displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges, getrenamed,
2219 displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges, getrenamed,
2220 filematcher)
2220 filematcher)
2221
2221
2222 def checkunsupportedgraphflags(pats, opts):
2222 def checkunsupportedgraphflags(pats, opts):
2223 for op in ["newest_first"]:
2223 for op in ["newest_first"]:
2224 if op in opts and opts[op]:
2224 if op in opts and opts[op]:
2225 raise error.Abort(_("-G/--graph option is incompatible with --%s")
2225 raise error.Abort(_("-G/--graph option is incompatible with --%s")
2226 % op.replace("_", "-"))
2226 % op.replace("_", "-"))
2227
2227
2228 def graphrevs(repo, nodes, opts):
2228 def graphrevs(repo, nodes, opts):
2229 limit = loglimit(opts)
2229 limit = loglimit(opts)
2230 nodes.reverse()
2230 nodes.reverse()
2231 if limit is not None:
2231 if limit is not None:
2232 nodes = nodes[:limit]
2232 nodes = nodes[:limit]
2233 return graphmod.nodes(repo, nodes)
2233 return graphmod.nodes(repo, nodes)
2234
2234
2235 def add(ui, repo, match, prefix, explicitonly, **opts):
2235 def add(ui, repo, match, prefix, explicitonly, **opts):
2236 join = lambda f: os.path.join(prefix, f)
2236 join = lambda f: os.path.join(prefix, f)
2237 bad = []
2237 bad = []
2238
2238
2239 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2239 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2240 names = []
2240 names = []
2241 wctx = repo[None]
2241 wctx = repo[None]
2242 cca = None
2242 cca = None
2243 abort, warn = scmutil.checkportabilityalert(ui)
2243 abort, warn = scmutil.checkportabilityalert(ui)
2244 if abort or warn:
2244 if abort or warn:
2245 cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
2245 cca = scmutil.casecollisionauditor(ui, abort, repo.dirstate)
2246
2246
2247 badmatch = matchmod.badmatch(match, badfn)
2247 badmatch = matchmod.badmatch(match, badfn)
2248 dirstate = repo.dirstate
2248 dirstate = repo.dirstate
2249 # We don't want to just call wctx.walk here, since it would return a lot of
2249 # We don't want to just call wctx.walk here, since it would return a lot of
2250 # clean files, which we aren't interested in, and doing so takes time.
2250 # clean files, which we aren't interested in, and doing so takes time.
2251 for f in sorted(dirstate.walk(badmatch, sorted(wctx.substate),
2251 for f in sorted(dirstate.walk(badmatch, sorted(wctx.substate),
2252 True, False, full=False)):
2252 True, False, full=False)):
2253 exact = match.exact(f)
2253 exact = match.exact(f)
2254 if exact or not explicitonly and f not in wctx and repo.wvfs.lexists(f):
2254 if exact or not explicitonly and f not in wctx and repo.wvfs.lexists(f):
2255 if cca:
2255 if cca:
2256 cca(f)
2256 cca(f)
2257 names.append(f)
2257 names.append(f)
2258 if ui.verbose or not exact:
2258 if ui.verbose or not exact:
2259 ui.status(_('adding %s\n') % match.rel(f))
2259 ui.status(_('adding %s\n') % match.rel(f))
2260
2260
2261 for subpath in sorted(wctx.substate):
2261 for subpath in sorted(wctx.substate):
2262 sub = wctx.sub(subpath)
2262 sub = wctx.sub(subpath)
2263 try:
2263 try:
2264 submatch = matchmod.narrowmatcher(subpath, match)
2264 submatch = matchmod.narrowmatcher(subpath, match)
2265 if opts.get('subrepos'):
2265 if opts.get('subrepos'):
2266 bad.extend(sub.add(ui, submatch, prefix, False, **opts))
2266 bad.extend(sub.add(ui, submatch, prefix, False, **opts))
2267 else:
2267 else:
2268 bad.extend(sub.add(ui, submatch, prefix, True, **opts))
2268 bad.extend(sub.add(ui, submatch, prefix, True, **opts))
2269 except error.LookupError:
2269 except error.LookupError:
2270 ui.status(_("skipping missing subrepository: %s\n")
2270 ui.status(_("skipping missing subrepository: %s\n")
2271 % join(subpath))
2271 % join(subpath))
2272
2272
2273 if not opts.get('dry_run'):
2273 if not opts.get('dry_run'):
2274 rejected = wctx.add(names, prefix)
2274 rejected = wctx.add(names, prefix)
2275 bad.extend(f for f in rejected if f in match.files())
2275 bad.extend(f for f in rejected if f in match.files())
2276 return bad
2276 return bad
2277
2277
2278 def forget(ui, repo, match, prefix, explicitonly):
2278 def forget(ui, repo, match, prefix, explicitonly):
2279 join = lambda f: os.path.join(prefix, f)
2279 join = lambda f: os.path.join(prefix, f)
2280 bad = []
2280 bad = []
2281 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2281 badfn = lambda x, y: bad.append(x) or match.bad(x, y)
2282 wctx = repo[None]
2282 wctx = repo[None]
2283 forgot = []
2283 forgot = []
2284
2284
2285 s = repo.status(match=matchmod.badmatch(match, badfn), clean=True)
2285 s = repo.status(match=matchmod.badmatch(match, badfn), clean=True)
2286 forget = sorted(s[0] + s[1] + s[3] + s[6])
2286 forget = sorted(s[0] + s[1] + s[3] + s[6])
2287 if explicitonly:
2287 if explicitonly:
2288 forget = [f for f in forget if match.exact(f)]
2288 forget = [f for f in forget if match.exact(f)]
2289
2289
2290 for subpath in sorted(wctx.substate):
2290 for subpath in sorted(wctx.substate):
2291 sub = wctx.sub(subpath)
2291 sub = wctx.sub(subpath)
2292 try:
2292 try:
2293 submatch = matchmod.narrowmatcher(subpath, match)
2293 submatch = matchmod.narrowmatcher(subpath, match)
2294 subbad, subforgot = sub.forget(submatch, prefix)
2294 subbad, subforgot = sub.forget(submatch, prefix)
2295 bad.extend([subpath + '/' + f for f in subbad])
2295 bad.extend([subpath + '/' + f for f in subbad])
2296 forgot.extend([subpath + '/' + f for f in subforgot])
2296 forgot.extend([subpath + '/' + f for f in subforgot])
2297 except error.LookupError:
2297 except error.LookupError:
2298 ui.status(_("skipping missing subrepository: %s\n")
2298 ui.status(_("skipping missing subrepository: %s\n")
2299 % join(subpath))
2299 % join(subpath))
2300
2300
2301 if not explicitonly:
2301 if not explicitonly:
2302 for f in match.files():
2302 for f in match.files():
2303 if f not in repo.dirstate and not repo.wvfs.isdir(f):
2303 if f not in repo.dirstate and not repo.wvfs.isdir(f):
2304 if f not in forgot:
2304 if f not in forgot:
2305 if repo.wvfs.exists(f):
2305 if repo.wvfs.exists(f):
2306 # Don't complain if the exact case match wasn't given.
2306 # Don't complain if the exact case match wasn't given.
2307 # But don't do this until after checking 'forgot', so
2307 # But don't do this until after checking 'forgot', so
2308 # that subrepo files aren't normalized, and this op is
2308 # that subrepo files aren't normalized, and this op is
2309 # purely from data cached by the status walk above.
2309 # purely from data cached by the status walk above.
2310 if repo.dirstate.normalize(f) in repo.dirstate:
2310 if repo.dirstate.normalize(f) in repo.dirstate:
2311 continue
2311 continue
2312 ui.warn(_('not removing %s: '
2312 ui.warn(_('not removing %s: '
2313 'file is already untracked\n')
2313 'file is already untracked\n')
2314 % match.rel(f))
2314 % match.rel(f))
2315 bad.append(f)
2315 bad.append(f)
2316
2316
2317 for f in forget:
2317 for f in forget:
2318 if ui.verbose or not match.exact(f):
2318 if ui.verbose or not match.exact(f):
2319 ui.status(_('removing %s\n') % match.rel(f))
2319 ui.status(_('removing %s\n') % match.rel(f))
2320
2320
2321 rejected = wctx.forget(forget, prefix)
2321 rejected = wctx.forget(forget, prefix)
2322 bad.extend(f for f in rejected if f in match.files())
2322 bad.extend(f for f in rejected if f in match.files())
2323 forgot.extend(f for f in forget if f not in rejected)
2323 forgot.extend(f for f in forget if f not in rejected)
2324 return bad, forgot
2324 return bad, forgot
2325
2325
2326 def files(ui, ctx, m, fm, fmt, subrepos):
2326 def files(ui, ctx, m, fm, fmt, subrepos):
2327 rev = ctx.rev()
2327 rev = ctx.rev()
2328 ret = 1
2328 ret = 1
2329 ds = ctx.repo().dirstate
2329 ds = ctx.repo().dirstate
2330
2330
2331 for f in ctx.matches(m):
2331 for f in ctx.matches(m):
2332 if rev is None and ds[f] == 'r':
2332 if rev is None and ds[f] == 'r':
2333 continue
2333 continue
2334 fm.startitem()
2334 fm.startitem()
2335 if ui.verbose:
2335 if ui.verbose:
2336 fc = ctx[f]
2336 fc = ctx[f]
2337 fm.write('size flags', '% 10d % 1s ', fc.size(), fc.flags())
2337 fm.write('size flags', '% 10d % 1s ', fc.size(), fc.flags())
2338 fm.data(abspath=f)
2338 fm.data(abspath=f)
2339 fm.write('path', fmt, m.rel(f))
2339 fm.write('path', fmt, m.rel(f))
2340 ret = 0
2340 ret = 0
2341
2341
2342 for subpath in sorted(ctx.substate):
2342 for subpath in sorted(ctx.substate):
2343 def matchessubrepo(subpath):
2343 def matchessubrepo(subpath):
2344 return (m.always() or m.exact(subpath)
2344 return (m.always() or m.exact(subpath)
2345 or any(f.startswith(subpath + '/') for f in m.files()))
2345 or any(f.startswith(subpath + '/') for f in m.files()))
2346
2346
2347 if subrepos or matchessubrepo(subpath):
2347 if subrepos or matchessubrepo(subpath):
2348 sub = ctx.sub(subpath)
2348 sub = ctx.sub(subpath)
2349 try:
2349 try:
2350 submatch = matchmod.narrowmatcher(subpath, m)
2350 submatch = matchmod.narrowmatcher(subpath, m)
2351 if sub.printfiles(ui, submatch, fm, fmt, subrepos) == 0:
2351 if sub.printfiles(ui, submatch, fm, fmt, subrepos) == 0:
2352 ret = 0
2352 ret = 0
2353 except error.LookupError:
2353 except error.LookupError:
2354 ui.status(_("skipping missing subrepository: %s\n")
2354 ui.status(_("skipping missing subrepository: %s\n")
2355 % m.abs(subpath))
2355 % m.abs(subpath))
2356
2356
2357 return ret
2357 return ret
2358
2358
2359 def remove(ui, repo, m, prefix, after, force, subrepos):
2359 def remove(ui, repo, m, prefix, after, force, subrepos):
2360 join = lambda f: os.path.join(prefix, f)
2360 join = lambda f: os.path.join(prefix, f)
2361 ret = 0
2361 ret = 0
2362 s = repo.status(match=m, clean=True)
2362 s = repo.status(match=m, clean=True)
2363 modified, added, deleted, clean = s[0], s[1], s[3], s[6]
2363 modified, added, deleted, clean = s[0], s[1], s[3], s[6]
2364
2364
2365 wctx = repo[None]
2365 wctx = repo[None]
2366
2366
2367 for subpath in sorted(wctx.substate):
2367 for subpath in sorted(wctx.substate):
2368 def matchessubrepo(matcher, subpath):
2368 def matchessubrepo(matcher, subpath):
2369 if matcher.exact(subpath):
2369 if matcher.exact(subpath):
2370 return True
2370 return True
2371 for f in matcher.files():
2371 for f in matcher.files():
2372 if f.startswith(subpath):
2372 if f.startswith(subpath):
2373 return True
2373 return True
2374 return False
2374 return False
2375
2375
2376 if subrepos or matchessubrepo(m, subpath):
2376 if subrepos or matchessubrepo(m, subpath):
2377 sub = wctx.sub(subpath)
2377 sub = wctx.sub(subpath)
2378 try:
2378 try:
2379 submatch = matchmod.narrowmatcher(subpath, m)
2379 submatch = matchmod.narrowmatcher(subpath, m)
2380 if sub.removefiles(submatch, prefix, after, force, subrepos):
2380 if sub.removefiles(submatch, prefix, after, force, subrepos):
2381 ret = 1
2381 ret = 1
2382 except error.LookupError:
2382 except error.LookupError:
2383 ui.status(_("skipping missing subrepository: %s\n")
2383 ui.status(_("skipping missing subrepository: %s\n")
2384 % join(subpath))
2384 % join(subpath))
2385
2385
2386 # warn about failure to delete explicit files/dirs
2386 # warn about failure to delete explicit files/dirs
2387 deleteddirs = util.dirs(deleted)
2387 deleteddirs = util.dirs(deleted)
2388 for f in m.files():
2388 for f in m.files():
2389 def insubrepo():
2389 def insubrepo():
2390 for subpath in wctx.substate:
2390 for subpath in wctx.substate:
2391 if f.startswith(subpath):
2391 if f.startswith(subpath):
2392 return True
2392 return True
2393 return False
2393 return False
2394
2394
2395 isdir = f in deleteddirs or wctx.hasdir(f)
2395 isdir = f in deleteddirs or wctx.hasdir(f)
2396 if f in repo.dirstate or isdir or f == '.' or insubrepo():
2396 if f in repo.dirstate or isdir or f == '.' or insubrepo():
2397 continue
2397 continue
2398
2398
2399 if repo.wvfs.exists(f):
2399 if repo.wvfs.exists(f):
2400 if repo.wvfs.isdir(f):
2400 if repo.wvfs.isdir(f):
2401 ui.warn(_('not removing %s: no tracked files\n')
2401 ui.warn(_('not removing %s: no tracked files\n')
2402 % m.rel(f))
2402 % m.rel(f))
2403 else:
2403 else:
2404 ui.warn(_('not removing %s: file is untracked\n')
2404 ui.warn(_('not removing %s: file is untracked\n')
2405 % m.rel(f))
2405 % m.rel(f))
2406 # missing files will generate a warning elsewhere
2406 # missing files will generate a warning elsewhere
2407 ret = 1
2407 ret = 1
2408
2408
2409 if force:
2409 if force:
2410 list = modified + deleted + clean + added
2410 list = modified + deleted + clean + added
2411 elif after:
2411 elif after:
2412 list = deleted
2412 list = deleted
2413 for f in modified + added + clean:
2413 for f in modified + added + clean:
2414 ui.warn(_('not removing %s: file still exists\n') % m.rel(f))
2414 ui.warn(_('not removing %s: file still exists\n') % m.rel(f))
2415 ret = 1
2415 ret = 1
2416 else:
2416 else:
2417 list = deleted + clean
2417 list = deleted + clean
2418 for f in modified:
2418 for f in modified:
2419 ui.warn(_('not removing %s: file is modified (use -f'
2419 ui.warn(_('not removing %s: file is modified (use -f'
2420 ' to force removal)\n') % m.rel(f))
2420 ' to force removal)\n') % m.rel(f))
2421 ret = 1
2421 ret = 1
2422 for f in added:
2422 for f in added:
2423 ui.warn(_('not removing %s: file has been marked for add'
2423 ui.warn(_('not removing %s: file has been marked for add'
2424 ' (use forget to undo)\n') % m.rel(f))
2424 ' (use forget to undo)\n') % m.rel(f))
2425 ret = 1
2425 ret = 1
2426
2426
2427 for f in sorted(list):
2427 for f in sorted(list):
2428 if ui.verbose or not m.exact(f):
2428 if ui.verbose or not m.exact(f):
2429 ui.status(_('removing %s\n') % m.rel(f))
2429 ui.status(_('removing %s\n') % m.rel(f))
2430
2430
2431 wlock = repo.wlock()
2431 wlock = repo.wlock()
2432 try:
2432 try:
2433 if not after:
2433 if not after:
2434 for f in list:
2434 for f in list:
2435 if f in added:
2435 if f in added:
2436 continue # we never unlink added files on remove
2436 continue # we never unlink added files on remove
2437 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
2437 util.unlinkpath(repo.wjoin(f), ignoremissing=True)
2438 repo[None].forget(list)
2438 repo[None].forget(list)
2439 finally:
2439 finally:
2440 wlock.release()
2440 wlock.release()
2441
2441
2442 return ret
2442 return ret
2443
2443
2444 def cat(ui, repo, ctx, matcher, prefix, **opts):
2444 def cat(ui, repo, ctx, matcher, prefix, **opts):
2445 err = 1
2445 err = 1
2446
2446
2447 def write(path):
2447 def write(path):
2448 fp = makefileobj(repo, opts.get('output'), ctx.node(),
2448 fp = makefileobj(repo, opts.get('output'), ctx.node(),
2449 pathname=os.path.join(prefix, path))
2449 pathname=os.path.join(prefix, path))
2450 data = ctx[path].data()
2450 data = ctx[path].data()
2451 if opts.get('decode'):
2451 if opts.get('decode'):
2452 data = repo.wwritedata(path, data)
2452 data = repo.wwritedata(path, data)
2453 fp.write(data)
2453 fp.write(data)
2454 fp.close()
2454 fp.close()
2455
2455
2456 # Automation often uses hg cat on single files, so special case it
2456 # Automation often uses hg cat on single files, so special case it
2457 # for performance to avoid the cost of parsing the manifest.
2457 # for performance to avoid the cost of parsing the manifest.
2458 if len(matcher.files()) == 1 and not matcher.anypats():
2458 if len(matcher.files()) == 1 and not matcher.anypats():
2459 file = matcher.files()[0]
2459 file = matcher.files()[0]
2460 mf = repo.manifest
2460 mf = repo.manifest
2461 mfnode = ctx.manifestnode()
2461 mfnode = ctx.manifestnode()
2462 if mfnode and mf.find(mfnode, file)[0]:
2462 if mfnode and mf.find(mfnode, file)[0]:
2463 write(file)
2463 write(file)
2464 return 0
2464 return 0
2465
2465
2466 # Don't warn about "missing" files that are really in subrepos
2466 # Don't warn about "missing" files that are really in subrepos
2467 def badfn(path, msg):
2467 def badfn(path, msg):
2468 for subpath in ctx.substate:
2468 for subpath in ctx.substate:
2469 if path.startswith(subpath):
2469 if path.startswith(subpath):
2470 return
2470 return
2471 matcher.bad(path, msg)
2471 matcher.bad(path, msg)
2472
2472
2473 for abs in ctx.walk(matchmod.badmatch(matcher, badfn)):
2473 for abs in ctx.walk(matchmod.badmatch(matcher, badfn)):
2474 write(abs)
2474 write(abs)
2475 err = 0
2475 err = 0
2476
2476
2477 for subpath in sorted(ctx.substate):
2477 for subpath in sorted(ctx.substate):
2478 sub = ctx.sub(subpath)
2478 sub = ctx.sub(subpath)
2479 try:
2479 try:
2480 submatch = matchmod.narrowmatcher(subpath, matcher)
2480 submatch = matchmod.narrowmatcher(subpath, matcher)
2481
2481
2482 if not sub.cat(submatch, os.path.join(prefix, sub._path),
2482 if not sub.cat(submatch, os.path.join(prefix, sub._path),
2483 **opts):
2483 **opts):
2484 err = 0
2484 err = 0
2485 except error.RepoLookupError:
2485 except error.RepoLookupError:
2486 ui.status(_("skipping missing subrepository: %s\n")
2486 ui.status(_("skipping missing subrepository: %s\n")
2487 % os.path.join(prefix, subpath))
2487 % os.path.join(prefix, subpath))
2488
2488
2489 return err
2489 return err
2490
2490
2491 def commit(ui, repo, commitfunc, pats, opts):
2491 def commit(ui, repo, commitfunc, pats, opts):
2492 '''commit the specified files or all outstanding changes'''
2492 '''commit the specified files or all outstanding changes'''
2493 date = opts.get('date')
2493 date = opts.get('date')
2494 if date:
2494 if date:
2495 opts['date'] = util.parsedate(date)
2495 opts['date'] = util.parsedate(date)
2496 message = logmessage(ui, opts)
2496 message = logmessage(ui, opts)
2497 matcher = scmutil.match(repo[None], pats, opts)
2497 matcher = scmutil.match(repo[None], pats, opts)
2498
2498
2499 # extract addremove carefully -- this function can be called from a command
2499 # extract addremove carefully -- this function can be called from a command
2500 # that doesn't support addremove
2500 # that doesn't support addremove
2501 if opts.get('addremove'):
2501 if opts.get('addremove'):
2502 if scmutil.addremove(repo, matcher, "", opts) != 0:
2502 if scmutil.addremove(repo, matcher, "", opts) != 0:
2503 raise error.Abort(
2503 raise error.Abort(
2504 _("failed to mark all new/missing files as added/removed"))
2504 _("failed to mark all new/missing files as added/removed"))
2505
2505
2506 return commitfunc(ui, repo, message, matcher, opts)
2506 return commitfunc(ui, repo, message, matcher, opts)
2507
2507
2508 def amend(ui, repo, commitfunc, old, extra, pats, opts):
2508 def amend(ui, repo, commitfunc, old, extra, pats, opts):
2509 # avoid cycle context -> subrepo -> cmdutil
2509 # avoid cycle context -> subrepo -> cmdutil
2510 import context
2510 import context
2511
2511
2512 # amend will reuse the existing user if not specified, but the obsolete
2512 # amend will reuse the existing user if not specified, but the obsolete
2513 # marker creation requires that the current user's name is specified.
2513 # marker creation requires that the current user's name is specified.
2514 if obsolete.isenabled(repo, obsolete.createmarkersopt):
2514 if obsolete.isenabled(repo, obsolete.createmarkersopt):
2515 ui.username() # raise exception if username not set
2515 ui.username() # raise exception if username not set
2516
2516
2517 ui.note(_('amending changeset %s\n') % old)
2517 ui.note(_('amending changeset %s\n') % old)
2518 base = old.p1()
2518 base = old.p1()
2519 createmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)
2519 createmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)
2520
2520
2521 wlock = lock = newid = None
2521 wlock = lock = newid = None
2522 try:
2522 try:
2523 wlock = repo.wlock()
2523 wlock = repo.wlock()
2524 lock = repo.lock()
2524 lock = repo.lock()
2525 tr = repo.transaction('amend')
2525 tr = repo.transaction('amend')
2526 try:
2526 try:
2527 # See if we got a message from -m or -l, if not, open the editor
2527 # See if we got a message from -m or -l, if not, open the editor
2528 # with the message of the changeset to amend
2528 # with the message of the changeset to amend
2529 message = logmessage(ui, opts)
2529 message = logmessage(ui, opts)
2530 # ensure logfile does not conflict with later enforcement of the
2530 # ensure logfile does not conflict with later enforcement of the
2531 # message. potential logfile content has been processed by
2531 # message. potential logfile content has been processed by
2532 # `logmessage` anyway.
2532 # `logmessage` anyway.
2533 opts.pop('logfile')
2533 opts.pop('logfile')
2534 # First, do a regular commit to record all changes in the working
2534 # First, do a regular commit to record all changes in the working
2535 # directory (if there are any)
2535 # directory (if there are any)
2536 ui.callhooks = False
2536 ui.callhooks = False
2537 activebookmark = repo._activebookmark
2537 activebookmark = repo._activebookmark
2538 try:
2538 try:
2539 repo._activebookmark = None
2539 repo._activebookmark = None
2540 opts['message'] = 'temporary amend commit for %s' % old
2540 opts['message'] = 'temporary amend commit for %s' % old
2541 node = commit(ui, repo, commitfunc, pats, opts)
2541 node = commit(ui, repo, commitfunc, pats, opts)
2542 finally:
2542 finally:
2543 repo._activebookmark = activebookmark
2543 repo._activebookmark = activebookmark
2544 ui.callhooks = True
2544 ui.callhooks = True
2545 ctx = repo[node]
2545 ctx = repo[node]
2546
2546
2547 # Participating changesets:
2547 # Participating changesets:
2548 #
2548 #
2549 # node/ctx o - new (intermediate) commit that contains changes
2549 # node/ctx o - new (intermediate) commit that contains changes
2550 # | from working dir to go into amending commit
2550 # | from working dir to go into amending commit
2551 # | (or a workingctx if there were no changes)
2551 # | (or a workingctx if there were no changes)
2552 # |
2552 # |
2553 # old o - changeset to amend
2553 # old o - changeset to amend
2554 # |
2554 # |
2555 # base o - parent of amending changeset
2555 # base o - parent of amending changeset
2556
2556
2557 # Update extra dict from amended commit (e.g. to preserve graft
2557 # Update extra dict from amended commit (e.g. to preserve graft
2558 # source)
2558 # source)
2559 extra.update(old.extra())
2559 extra.update(old.extra())
2560
2560
2561 # Also update it from the intermediate commit or from the wctx
2561 # Also update it from the intermediate commit or from the wctx
2562 extra.update(ctx.extra())
2562 extra.update(ctx.extra())
2563
2563
2564 if len(old.parents()) > 1:
2564 if len(old.parents()) > 1:
2565 # ctx.files() isn't reliable for merges, so fall back to the
2565 # ctx.files() isn't reliable for merges, so fall back to the
2566 # slower repo.status() method
2566 # slower repo.status() method
2567 files = set([fn for st in repo.status(base, old)[:3]
2567 files = set([fn for st in repo.status(base, old)[:3]
2568 for fn in st])
2568 for fn in st])
2569 else:
2569 else:
2570 files = set(old.files())
2570 files = set(old.files())
2571
2571
2572 # Second, we use either the commit we just did or, if there were no
2572 # Second, we use either the commit we just did or, if there were no
2573 # changes, the parent of the working directory as the version of the
2573 # changes, the parent of the working directory as the version of the
2574 # files in the final amend commit
2574 # files in the final amend commit
2575 if node:
2575 if node:
2576 ui.note(_('copying changeset %s to %s\n') % (ctx, base))
2576 ui.note(_('copying changeset %s to %s\n') % (ctx, base))
2577
2577
2578 user = ctx.user()
2578 user = ctx.user()
2579 date = ctx.date()
2579 date = ctx.date()
2580 # Recompute copies (avoid recording a -> b -> a)
2580 # Recompute copies (avoid recording a -> b -> a)
2581 copied = copies.pathcopies(base, ctx)
2581 copied = copies.pathcopies(base, ctx)
2582 if old.p2():
2582 if old.p2():
2583 copied.update(copies.pathcopies(old.p2(), ctx))
2583 copied.update(copies.pathcopies(old.p2(), ctx))
2584
2584
2585 # Prune files which were reverted by the updates: if old
2585 # Prune files which were reverted by the updates: if old
2586 # introduced file X and our intermediate commit, node,
2586 # introduced file X and our intermediate commit, node,
2587 # renamed that file, then those two files are the same and
2587 # renamed that file, then those two files are the same and
2588 # we can discard X from our list of files. Likewise if X
2588 # we can discard X from our list of files. Likewise if X
2589 # was deleted, it's no longer relevant
2589 # was deleted, it's no longer relevant
2590 files.update(ctx.files())
2590 files.update(ctx.files())
2591
2591
2592 def samefile(f):
2592 def samefile(f):
2593 if f in ctx.manifest():
2593 if f in ctx.manifest():
2594 a = ctx.filectx(f)
2594 a = ctx.filectx(f)
2595 if f in base.manifest():
2595 if f in base.manifest():
2596 b = base.filectx(f)
2596 b = base.filectx(f)
2597 return (not a.cmp(b)
2597 return (not a.cmp(b)
2598 and a.flags() == b.flags())
2598 and a.flags() == b.flags())
2599 else:
2599 else:
2600 return False
2600 return False
2601 else:
2601 else:
2602 return f not in base.manifest()
2602 return f not in base.manifest()
2603 files = [f for f in files if not samefile(f)]
2603 files = [f for f in files if not samefile(f)]
2604
2604
2605 def filectxfn(repo, ctx_, path):
2605 def filectxfn(repo, ctx_, path):
2606 try:
2606 try:
2607 fctx = ctx[path]
2607 fctx = ctx[path]
2608 flags = fctx.flags()
2608 flags = fctx.flags()
2609 mctx = context.memfilectx(repo,
2609 mctx = context.memfilectx(repo,
2610 fctx.path(), fctx.data(),
2610 fctx.path(), fctx.data(),
2611 islink='l' in flags,
2611 islink='l' in flags,
2612 isexec='x' in flags,
2612 isexec='x' in flags,
2613 copied=copied.get(path))
2613 copied=copied.get(path))
2614 return mctx
2614 return mctx
2615 except KeyError:
2615 except KeyError:
2616 return None
2616 return None
2617 else:
2617 else:
2618 ui.note(_('copying changeset %s to %s\n') % (old, base))
2618 ui.note(_('copying changeset %s to %s\n') % (old, base))
2619
2619
2620 # Use version of files as in the old cset
2620 # Use version of files as in the old cset
2621 def filectxfn(repo, ctx_, path):
2621 def filectxfn(repo, ctx_, path):
2622 try:
2622 try:
2623 return old.filectx(path)
2623 return old.filectx(path)
2624 except KeyError:
2624 except KeyError:
2625 return None
2625 return None
2626
2626
2627 user = opts.get('user') or old.user()
2627 user = opts.get('user') or old.user()
2628 date = opts.get('date') or old.date()
2628 date = opts.get('date') or old.date()
2629 editform = mergeeditform(old, 'commit.amend')
2629 editform = mergeeditform(old, 'commit.amend')
2630 editor = getcommiteditor(editform=editform, **opts)
2630 editor = getcommiteditor(editform=editform, **opts)
2631 if not message:
2631 if not message:
2632 editor = getcommiteditor(edit=True, editform=editform)
2632 editor = getcommiteditor(edit=True, editform=editform)
2633 message = old.description()
2633 message = old.description()
2634
2634
2635 pureextra = extra.copy()
2635 pureextra = extra.copy()
2636 if 'amend_source' in pureextra:
2636 if 'amend_source' in pureextra:
2637 del pureextra['amend_source']
2637 del pureextra['amend_source']
2638 pureoldextra = old.extra()
2638 pureoldextra = old.extra()
2639 if 'amend_source' in pureoldextra:
2639 if 'amend_source' in pureoldextra:
2640 del pureoldextra['amend_source']
2640 del pureoldextra['amend_source']
2641 extra['amend_source'] = old.hex()
2641 extra['amend_source'] = old.hex()
2642
2642
2643 new = context.memctx(repo,
2643 new = context.memctx(repo,
2644 parents=[base.node(), old.p2().node()],
2644 parents=[base.node(), old.p2().node()],
2645 text=message,
2645 text=message,
2646 files=files,
2646 files=files,
2647 filectxfn=filectxfn,
2647 filectxfn=filectxfn,
2648 user=user,
2648 user=user,
2649 date=date,
2649 date=date,
2650 extra=extra,
2650 extra=extra,
2651 editor=editor)
2651 editor=editor)
2652
2652
2653 newdesc = changelog.stripdesc(new.description())
2653 newdesc = changelog.stripdesc(new.description())
2654 if ((not node)
2654 if ((not node)
2655 and newdesc == old.description()
2655 and newdesc == old.description()
2656 and user == old.user()
2656 and user == old.user()
2657 and date == old.date()
2657 and date == old.date()
2658 and pureextra == pureoldextra):
2658 and pureextra == pureoldextra):
2659 # nothing changed. continuing here would create a new node
2659 # nothing changed. continuing here would create a new node
2660 # anyway because of the amend_source noise.
2660 # anyway because of the amend_source noise.
2661 #
2661 #
2662 # This is not what we expect from amend.
2662 # This is not what we expect from amend.
2663 return old.node()
2663 return old.node()
2664
2664
2665 ph = repo.ui.config('phases', 'new-commit', phases.draft)
2665 ph = repo.ui.config('phases', 'new-commit', phases.draft)
2666 try:
2666 try:
2667 if opts.get('secret'):
2667 if opts.get('secret'):
2668 commitphase = 'secret'
2668 commitphase = 'secret'
2669 else:
2669 else:
2670 commitphase = old.phase()
2670 commitphase = old.phase()
2671 repo.ui.setconfig('phases', 'new-commit', commitphase, 'amend')
2671 repo.ui.setconfig('phases', 'new-commit', commitphase, 'amend')
2672 newid = repo.commitctx(new)
2672 newid = repo.commitctx(new)
2673 finally:
2673 finally:
2674 repo.ui.setconfig('phases', 'new-commit', ph, 'amend')
2674 repo.ui.setconfig('phases', 'new-commit', ph, 'amend')
2675 if newid != old.node():
2675 if newid != old.node():
2676 # Reroute the working copy parent to the new changeset
2676 # Reroute the working copy parent to the new changeset
2677 repo.setparents(newid, nullid)
2677 repo.setparents(newid, nullid)
2678
2678
2679 # Move bookmarks from old parent to amend commit
2679 # Move bookmarks from old parent to amend commit
2680 bms = repo.nodebookmarks(old.node())
2680 bms = repo.nodebookmarks(old.node())
2681 if bms:
2681 if bms:
2682 marks = repo._bookmarks
2682 marks = repo._bookmarks
2683 for bm in bms:
2683 for bm in bms:
2684 ui.debug('moving bookmarks %r from %s to %s\n' %
2684 ui.debug('moving bookmarks %r from %s to %s\n' %
2685 (marks, old.hex(), hex(newid)))
2685 (marks, old.hex(), hex(newid)))
2686 marks[bm] = newid
2686 marks[bm] = newid
2687 marks.recordchange(tr)
2687 marks.recordchange(tr)
2688 # commit the whole amend process
2688 # commit the whole amend process
2689 if createmarkers:
2689 if createmarkers:
2690 # mark the new changeset as successor of the rewritten one
2690 # mark the new changeset as successor of the rewritten one
2691 new = repo[newid]
2691 new = repo[newid]
2692 obs = [(old, (new,))]
2692 obs = [(old, (new,))]
2693 if node:
2693 if node:
2694 obs.append((ctx, ()))
2694 obs.append((ctx, ()))
2695
2695
2696 obsolete.createmarkers(repo, obs)
2696 obsolete.createmarkers(repo, obs)
2697 tr.close()
2697 tr.close()
2698 finally:
2698 finally:
2699 tr.release()
2699 tr.release()
2700 if not createmarkers and newid != old.node():
2700 if not createmarkers and newid != old.node():
2701 # Strip the intermediate commit (if there was one) and the amended
2701 # Strip the intermediate commit (if there was one) and the amended
2702 # commit
2702 # commit
2703 if node:
2703 if node:
2704 ui.note(_('stripping intermediate changeset %s\n') % ctx)
2704 ui.note(_('stripping intermediate changeset %s\n') % ctx)
2705 ui.note(_('stripping amended changeset %s\n') % old)
2705 ui.note(_('stripping amended changeset %s\n') % old)
2706 repair.strip(ui, repo, old.node(), topic='amend-backup')
2706 repair.strip(ui, repo, old.node(), topic='amend-backup')
2707 finally:
2707 finally:
2708 lockmod.release(lock, wlock)
2708 lockmod.release(lock, wlock)
2709 return newid
2709 return newid
2710
2710
2711 def commiteditor(repo, ctx, subs, editform=''):
2711 def commiteditor(repo, ctx, subs, editform=''):
2712 if ctx.description():
2712 if ctx.description():
2713 return ctx.description()
2713 return ctx.description()
2714 return commitforceeditor(repo, ctx, subs, editform=editform,
2714 return commitforceeditor(repo, ctx, subs, editform=editform,
2715 unchangedmessagedetection=True)
2715 unchangedmessagedetection=True)
2716
2716
2717 def commitforceeditor(repo, ctx, subs, finishdesc=None, extramsg=None,
2717 def commitforceeditor(repo, ctx, subs, finishdesc=None, extramsg=None,
2718 editform='', unchangedmessagedetection=False):
2718 editform='', unchangedmessagedetection=False):
2719 if not extramsg:
2719 if not extramsg:
2720 extramsg = _("Leave message empty to abort commit.")
2720 extramsg = _("Leave message empty to abort commit.")
2721
2721
2722 forms = [e for e in editform.split('.') if e]
2722 forms = [e for e in editform.split('.') if e]
2723 forms.insert(0, 'changeset')
2723 forms.insert(0, 'changeset')
2724 templatetext = None
2724 templatetext = None
2725 while forms:
2725 while forms:
2726 tmpl = repo.ui.config('committemplate', '.'.join(forms))
2726 tmpl = repo.ui.config('committemplate', '.'.join(forms))
2727 if tmpl:
2727 if tmpl:
2728 templatetext = committext = buildcommittemplate(
2728 templatetext = committext = buildcommittemplate(
2729 repo, ctx, subs, extramsg, tmpl)
2729 repo, ctx, subs, extramsg, tmpl)
2730 break
2730 break
2731 forms.pop()
2731 forms.pop()
2732 else:
2732 else:
2733 committext = buildcommittext(repo, ctx, subs, extramsg)
2733 committext = buildcommittext(repo, ctx, subs, extramsg)
2734
2734
2735 # run editor in the repository root
2735 # run editor in the repository root
2736 olddir = os.getcwd()
2736 olddir = os.getcwd()
2737 os.chdir(repo.root)
2737 os.chdir(repo.root)
2738
2738
2739 # make in-memory changes visible to external process
2739 # make in-memory changes visible to external process
2740 tr = repo.currenttransaction()
2740 tr = repo.currenttransaction()
2741 repo.dirstate.write(tr)
2741 repo.dirstate.write(tr)
2742 pending = tr and tr.writepending() and repo.root
2742 pending = tr and tr.writepending() and repo.root
2743
2743
2744 editortext = repo.ui.edit(committext, ctx.user(), ctx.extra(),
2744 editortext = repo.ui.edit(committext, ctx.user(), ctx.extra(),
2745 editform=editform, pending=pending)
2745 editform=editform, pending=pending)
2746 text = re.sub("(?m)^HG:.*(\n|$)", "", editortext)
2746 text = re.sub("(?m)^HG:.*(\n|$)", "", editortext)
2747 os.chdir(olddir)
2747 os.chdir(olddir)
2748
2748
2749 if finishdesc:
2749 if finishdesc:
2750 text = finishdesc(text)
2750 text = finishdesc(text)
2751 if not text.strip():
2751 if not text.strip():
2752 raise error.Abort(_("empty commit message"))
2752 raise error.Abort(_("empty commit message"))
2753 if unchangedmessagedetection and editortext == templatetext:
2753 if unchangedmessagedetection and editortext == templatetext:
2754 raise error.Abort(_("commit message unchanged"))
2754 raise error.Abort(_("commit message unchanged"))
2755
2755
2756 return text
2756 return text
2757
2757
2758 def buildcommittemplate(repo, ctx, subs, extramsg, tmpl):
2758 def buildcommittemplate(repo, ctx, subs, extramsg, tmpl):
2759 ui = repo.ui
2759 ui = repo.ui
2760 tmpl, mapfile = gettemplate(ui, tmpl, None)
2760 tmpl, mapfile = gettemplate(ui, tmpl, None)
2761
2761
2762 try:
2762 try:
2763 t = changeset_templater(ui, repo, None, {}, tmpl, mapfile, False)
2763 t = changeset_templater(ui, repo, None, {}, tmpl, mapfile, False)
2764 except SyntaxError as inst:
2764 except SyntaxError as inst:
2765 raise error.Abort(inst.args[0])
2765 raise error.Abort(inst.args[0])
2766
2766
2767 for k, v in repo.ui.configitems('committemplate'):
2767 for k, v in repo.ui.configitems('committemplate'):
2768 if k != 'changeset':
2768 if k != 'changeset':
2769 t.t.cache[k] = v
2769 t.t.cache[k] = v
2770
2770
2771 if not extramsg:
2771 if not extramsg:
2772 extramsg = '' # ensure that extramsg is string
2772 extramsg = '' # ensure that extramsg is string
2773
2773
2774 ui.pushbuffer()
2774 ui.pushbuffer()
2775 t.show(ctx, extramsg=extramsg)
2775 t.show(ctx, extramsg=extramsg)
2776 return ui.popbuffer()
2776 return ui.popbuffer()
2777
2777
2778 def hgprefix(msg):
2778 def hgprefix(msg):
2779 return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])
2779 return "\n".join(["HG: %s" % a for a in msg.split("\n") if a])
2780
2780
2781 def buildcommittext(repo, ctx, subs, extramsg):
2781 def buildcommittext(repo, ctx, subs, extramsg):
2782 edittext = []
2782 edittext = []
2783 modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
2783 modified, added, removed = ctx.modified(), ctx.added(), ctx.removed()
2784 if ctx.description():
2784 if ctx.description():
2785 edittext.append(ctx.description())
2785 edittext.append(ctx.description())
2786 edittext.append("")
2786 edittext.append("")
2787 edittext.append("") # Empty line between message and comments.
2787 edittext.append("") # Empty line between message and comments.
2788 edittext.append(hgprefix(_("Enter commit message."
2788 edittext.append(hgprefix(_("Enter commit message."
2789 " Lines beginning with 'HG:' are removed.")))
2789 " Lines beginning with 'HG:' are removed.")))
2790 edittext.append(hgprefix(extramsg))
2790 edittext.append(hgprefix(extramsg))
2791 edittext.append("HG: --")
2791 edittext.append("HG: --")
2792 edittext.append(hgprefix(_("user: %s") % ctx.user()))
2792 edittext.append(hgprefix(_("user: %s") % ctx.user()))
2793 if ctx.p2():
2793 if ctx.p2():
2794 edittext.append(hgprefix(_("branch merge")))
2794 edittext.append(hgprefix(_("branch merge")))
2795 if ctx.branch():
2795 if ctx.branch():
2796 edittext.append(hgprefix(_("branch '%s'") % ctx.branch()))
2796 edittext.append(hgprefix(_("branch '%s'") % ctx.branch()))
2797 if bookmarks.isactivewdirparent(repo):
2797 if bookmarks.isactivewdirparent(repo):
2798 edittext.append(hgprefix(_("bookmark '%s'") % repo._activebookmark))
2798 edittext.append(hgprefix(_("bookmark '%s'") % repo._activebookmark))
2799 edittext.extend([hgprefix(_("subrepo %s") % s) for s in subs])
2799 edittext.extend([hgprefix(_("subrepo %s") % s) for s in subs])
2800 edittext.extend([hgprefix(_("added %s") % f) for f in added])
2800 edittext.extend([hgprefix(_("added %s") % f) for f in added])
2801 edittext.extend([hgprefix(_("changed %s") % f) for f in modified])
2801 edittext.extend([hgprefix(_("changed %s") % f) for f in modified])
2802 edittext.extend([hgprefix(_("removed %s") % f) for f in removed])
2802 edittext.extend([hgprefix(_("removed %s") % f) for f in removed])
2803 if not added and not modified and not removed:
2803 if not added and not modified and not removed:
2804 edittext.append(hgprefix(_("no files changed")))
2804 edittext.append(hgprefix(_("no files changed")))
2805 edittext.append("")
2805 edittext.append("")
2806
2806
2807 return "\n".join(edittext)
2807 return "\n".join(edittext)
2808
2808
2809 def commitstatus(repo, node, branch, bheads=None, opts=None):
2809 def commitstatus(repo, node, branch, bheads=None, opts=None):
2810 if opts is None:
2810 if opts is None:
2811 opts = {}
2811 opts = {}
2812 ctx = repo[node]
2812 ctx = repo[node]
2813 parents = ctx.parents()
2813 parents = ctx.parents()
2814
2814
2815 if (not opts.get('amend') and bheads and node not in bheads and not
2815 if (not opts.get('amend') and bheads and node not in bheads and not
2816 [x for x in parents if x.node() in bheads and x.branch() == branch]):
2816 [x for x in parents if x.node() in bheads and x.branch() == branch]):
2817 repo.ui.status(_('created new head\n'))
2817 repo.ui.status(_('created new head\n'))
2818 # The message is not printed for initial roots. For the other
2818 # The message is not printed for initial roots. For the other
2819 # changesets, it is printed in the following situations:
2819 # changesets, it is printed in the following situations:
2820 #
2820 #
2821 # Par column: for the 2 parents with ...
2821 # Par column: for the 2 parents with ...
2822 # N: null or no parent
2822 # N: null or no parent
2823 # B: parent is on another named branch
2823 # B: parent is on another named branch
2824 # C: parent is a regular non head changeset
2824 # C: parent is a regular non head changeset
2825 # H: parent was a branch head of the current branch
2825 # H: parent was a branch head of the current branch
2826 # Msg column: whether we print "created new head" message
2826 # Msg column: whether we print "created new head" message
2827 # In the following, it is assumed that there already exists some
2827 # In the following, it is assumed that there already exists some
2828 # initial branch heads of the current branch, otherwise nothing is
2828 # initial branch heads of the current branch, otherwise nothing is
2829 # printed anyway.
2829 # printed anyway.
2830 #
2830 #
2831 # Par Msg Comment
2831 # Par Msg Comment
2832 # N N y additional topo root
2832 # N N y additional topo root
2833 #
2833 #
2834 # B N y additional branch root
2834 # B N y additional branch root
2835 # C N y additional topo head
2835 # C N y additional topo head
2836 # H N n usual case
2836 # H N n usual case
2837 #
2837 #
2838 # B B y weird additional branch root
2838 # B B y weird additional branch root
2839 # C B y branch merge
2839 # C B y branch merge
2840 # H B n merge with named branch
2840 # H B n merge with named branch
2841 #
2841 #
2842 # C C y additional head from merge
2842 # C C y additional head from merge
2843 # C H n merge with a head
2843 # C H n merge with a head
2844 #
2844 #
2845 # H H n head merge: head count decreases
2845 # H H n head merge: head count decreases
2846
2846
2847 if not opts.get('close_branch'):
2847 if not opts.get('close_branch'):
2848 for r in parents:
2848 for r in parents:
2849 if r.closesbranch() and r.branch() == branch:
2849 if r.closesbranch() and r.branch() == branch:
2850 repo.ui.status(_('reopening closed branch head %d\n') % r)
2850 repo.ui.status(_('reopening closed branch head %d\n') % r)
2851
2851
2852 if repo.ui.debugflag:
2852 if repo.ui.debugflag:
2853 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx.hex()))
2853 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx.hex()))
2854 elif repo.ui.verbose:
2854 elif repo.ui.verbose:
2855 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx))
2855 repo.ui.write(_('committed changeset %d:%s\n') % (int(ctx), ctx))
2856
2856
2857 def revert(ui, repo, ctx, parents, *pats, **opts):
2857 def revert(ui, repo, ctx, parents, *pats, **opts):
2858 parent, p2 = parents
2858 parent, p2 = parents
2859 node = ctx.node()
2859 node = ctx.node()
2860
2860
2861 mf = ctx.manifest()
2861 mf = ctx.manifest()
2862 if node == p2:
2862 if node == p2:
2863 parent = p2
2863 parent = p2
2864 if node == parent:
2864 if node == parent:
2865 pmf = mf
2865 pmf = mf
2866 else:
2866 else:
2867 pmf = None
2867 pmf = None
2868
2868
2869 # need all matching names in dirstate and manifest of target rev,
2869 # need all matching names in dirstate and manifest of target rev,
2870 # so have to walk both. do not print errors if files exist in one
2870 # so have to walk both. do not print errors if files exist in one
2871 # but not other. in both cases, filesets should be evaluated against
2871 # but not other. in both cases, filesets should be evaluated against
2872 # workingctx to get consistent result (issue4497). this means 'set:**'
2872 # workingctx to get consistent result (issue4497). this means 'set:**'
2873 # cannot be used to select missing files from target rev.
2873 # cannot be used to select missing files from target rev.
2874
2874
2875 # `names` is a mapping for all elements in working copy and target revision
2875 # `names` is a mapping for all elements in working copy and target revision
2876 # The mapping is in the form:
2876 # The mapping is in the form:
2877 # <abs path in repo> -> (<path from CWD>, <exactly specified by matcher?>)
2877 # <abs path in repo> -> (<path from CWD>, <exactly specified by matcher?>)
2878 names = {}
2878 names = {}
2879
2879
2880 wlock = repo.wlock()
2880 wlock = repo.wlock()
2881 try:
2881 try:
2882 ## filling of the `names` mapping
2882 ## filling of the `names` mapping
2883 # walk dirstate to fill `names`
2883 # walk dirstate to fill `names`
2884
2884
2885 interactive = opts.get('interactive', False)
2885 interactive = opts.get('interactive', False)
2886 wctx = repo[None]
2886 wctx = repo[None]
2887 m = scmutil.match(wctx, pats, opts)
2887 m = scmutil.match(wctx, pats, opts)
2888
2888
2889 # we'll need this later
2889 # we'll need this later
2890 targetsubs = sorted(s for s in wctx.substate if m(s))
2890 targetsubs = sorted(s for s in wctx.substate if m(s))
2891
2891
2892 if not m.always():
2892 if not m.always():
2893 for abs in repo.walk(matchmod.badmatch(m, lambda x, y: False)):
2893 for abs in repo.walk(matchmod.badmatch(m, lambda x, y: False)):
2894 names[abs] = m.rel(abs), m.exact(abs)
2894 names[abs] = m.rel(abs), m.exact(abs)
2895
2895
2896 # walk target manifest to fill `names`
2896 # walk target manifest to fill `names`
2897
2897
2898 def badfn(path, msg):
2898 def badfn(path, msg):
2899 if path in names:
2899 if path in names:
2900 return
2900 return
2901 if path in ctx.substate:
2901 if path in ctx.substate:
2902 return
2902 return
2903 path_ = path + '/'
2903 path_ = path + '/'
2904 for f in names:
2904 for f in names:
2905 if f.startswith(path_):
2905 if f.startswith(path_):
2906 return
2906 return
2907 ui.warn("%s: %s\n" % (m.rel(path), msg))
2907 ui.warn("%s: %s\n" % (m.rel(path), msg))
2908
2908
2909 for abs in ctx.walk(matchmod.badmatch(m, badfn)):
2909 for abs in ctx.walk(matchmod.badmatch(m, badfn)):
2910 if abs not in names:
2910 if abs not in names:
2911 names[abs] = m.rel(abs), m.exact(abs)
2911 names[abs] = m.rel(abs), m.exact(abs)
2912
2912
2913 # Find the status of all files in `names`.
2913 # Find the status of all files in `names`.
2914 m = scmutil.matchfiles(repo, names)
2914 m = scmutil.matchfiles(repo, names)
2915
2915
2916 changes = repo.status(node1=node, match=m,
2916 changes = repo.status(node1=node, match=m,
2917 unknown=True, ignored=True, clean=True)
2917 unknown=True, ignored=True, clean=True)
2918 else:
2918 else:
2919 changes = repo.status(node1=node, match=m)
2919 changes = repo.status(node1=node, match=m)
2920 for kind in changes:
2920 for kind in changes:
2921 for abs in kind:
2921 for abs in kind:
2922 names[abs] = m.rel(abs), m.exact(abs)
2922 names[abs] = m.rel(abs), m.exact(abs)
2923
2923
2924 m = scmutil.matchfiles(repo, names)
2924 m = scmutil.matchfiles(repo, names)
2925
2925
2926 modified = set(changes.modified)
2926 modified = set(changes.modified)
2927 added = set(changes.added)
2927 added = set(changes.added)
2928 removed = set(changes.removed)
2928 removed = set(changes.removed)
2929 _deleted = set(changes.deleted)
2929 _deleted = set(changes.deleted)
2930 unknown = set(changes.unknown)
2930 unknown = set(changes.unknown)
2931 unknown.update(changes.ignored)
2931 unknown.update(changes.ignored)
2932 clean = set(changes.clean)
2932 clean = set(changes.clean)
2933 modadded = set()
2933 modadded = set()
2934
2934
2935 # split between files known in target manifest and the others
2935 # split between files known in target manifest and the others
2936 smf = set(mf)
2936 smf = set(mf)
2937
2937
2938 # determine the exact nature of the deleted files
2938 # determine the exact nature of the deleted files
2939 deladded = _deleted - smf
2939 deladded = _deleted - smf
2940 deleted = _deleted - deladded
2940 deleted = _deleted - deladded
2941
2941
2942 # We need to account for the state of the file in the dirstate,
2942 # We need to account for the state of the file in the dirstate,
2943 # even when we revert against something other than the parent. This will
2943 # even when we revert against something other than the parent. This will
2944 # slightly alter the behavior of revert (doing a backup or not, deleting
2944 # slightly alter the behavior of revert (doing a backup or not, deleting
2945 # or just forgetting, etc.).
2945 # or just forgetting, etc.).
2946 if parent == node:
2946 if parent == node:
2947 dsmodified = modified
2947 dsmodified = modified
2948 dsadded = added
2948 dsadded = added
2949 dsremoved = removed
2949 dsremoved = removed
2950 # store all local modifications, useful later for rename detection
2950 # store all local modifications, useful later for rename detection
2951 localchanges = dsmodified | dsadded
2951 localchanges = dsmodified | dsadded
2952 modified, added, removed = set(), set(), set()
2952 modified, added, removed = set(), set(), set()
2953 else:
2953 else:
2954 changes = repo.status(node1=parent, match=m)
2954 changes = repo.status(node1=parent, match=m)
2955 dsmodified = set(changes.modified)
2955 dsmodified = set(changes.modified)
2956 dsadded = set(changes.added)
2956 dsadded = set(changes.added)
2957 dsremoved = set(changes.removed)
2957 dsremoved = set(changes.removed)
2958 # store all local modifications, useful later for rename detection
2958 # store all local modifications, useful later for rename detection
2959 localchanges = dsmodified | dsadded
2959 localchanges = dsmodified | dsadded
2960
2960
2961 # only take removes between the wc and the target into account
2961 # only take removes between the wc and the target into account
2962 clean |= dsremoved - removed
2962 clean |= dsremoved - removed
2963 dsremoved &= removed
2963 dsremoved &= removed
2964 # distinguish between dirstate removes and the others
2964 # distinguish between dirstate removes and the others
2965 removed -= dsremoved
2965 removed -= dsremoved
2966
2966
2967 modadded = added & dsmodified
2967 modadded = added & dsmodified
2968 added -= modadded
2968 added -= modadded
2969
2969
2970 # tell newly modified files apart.
2970 # tell newly modified files apart.
2971 dsmodified &= modified
2971 dsmodified &= modified
2972 dsmodified |= modified & dsadded # dirstate added may need backup
2972 dsmodified |= modified & dsadded # dirstate added may need backup
2973 modified -= dsmodified
2973 modified -= dsmodified
2974
2974
2975 # We need to wait for some post-processing to update this set
2975 # We need to wait for some post-processing to update this set
2976 # before making the distinction. The dirstate will be used for
2976 # before making the distinction. The dirstate will be used for
2977 # that purpose.
2977 # that purpose.
2978 dsadded = added
2978 dsadded = added
2979
2979
2980 # in case of merge, files that are actually added can be reported as
2980 # in case of merge, files that are actually added can be reported as
2981 # modified, we need to post process the result
2981 # modified, we need to post process the result
2982 if p2 != nullid:
2982 if p2 != nullid:
2983 if pmf is None:
2983 if pmf is None:
2984 # only need parent manifest in the merge case,
2984 # only need parent manifest in the merge case,
2985 # so do not read by default
2985 # so do not read by default
2986 pmf = repo[parent].manifest()
2986 pmf = repo[parent].manifest()
2987 mergeadd = dsmodified - set(pmf)
2987 mergeadd = dsmodified - set(pmf)
2988 dsadded |= mergeadd
2988 dsadded |= mergeadd
2989 dsmodified -= mergeadd
2989 dsmodified -= mergeadd
2990
2990
2991 # if f is a rename, update `names` to also revert the source
2991 # if f is a rename, update `names` to also revert the source
2992 cwd = repo.getcwd()
2992 cwd = repo.getcwd()
2993 for f in localchanges:
2993 for f in localchanges:
2994 src = repo.dirstate.copied(f)
2994 src = repo.dirstate.copied(f)
2995 # XXX should we check for rename down to target node?
2995 # XXX should we check for rename down to target node?
2996 if src and src not in names and repo.dirstate[src] == 'r':
2996 if src and src not in names and repo.dirstate[src] == 'r':
2997 dsremoved.add(src)
2997 dsremoved.add(src)
2998 names[src] = (repo.pathto(src, cwd), True)
2998 names[src] = (repo.pathto(src, cwd), True)
2999
2999
3000 # distinguish between file to forget and the other
3000 # distinguish between file to forget and the other
3001 added = set()
3001 added = set()
3002 for abs in dsadded:
3002 for abs in dsadded:
3003 if repo.dirstate[abs] != 'a':
3003 if repo.dirstate[abs] != 'a':
3004 added.add(abs)
3004 added.add(abs)
3005 dsadded -= added
3005 dsadded -= added
3006
3006
3007 for abs in deladded:
3007 for abs in deladded:
3008 if repo.dirstate[abs] == 'a':
3008 if repo.dirstate[abs] == 'a':
3009 dsadded.add(abs)
3009 dsadded.add(abs)
3010 deladded -= dsadded
3010 deladded -= dsadded
3011
3011
3012 # For files marked as removed, we check if an unknown file is present at
3012 # For files marked as removed, we check if an unknown file is present at
3013 # the same path. If such a file exists, it may need to be backed up.
3013 # the same path. If such a file exists, it may need to be backed up.
3014 # Making the distinction at this stage helps keep the backup
3014 # Making the distinction at this stage helps keep the backup
3015 # logic simpler.
3015 # logic simpler.
3016 removunk = set()
3016 removunk = set()
3017 for abs in removed:
3017 for abs in removed:
3018 target = repo.wjoin(abs)
3018 target = repo.wjoin(abs)
3019 if os.path.lexists(target):
3019 if os.path.lexists(target):
3020 removunk.add(abs)
3020 removunk.add(abs)
3021 removed -= removunk
3021 removed -= removunk
3022
3022
3023 dsremovunk = set()
3023 dsremovunk = set()
3024 for abs in dsremoved:
3024 for abs in dsremoved:
3025 target = repo.wjoin(abs)
3025 target = repo.wjoin(abs)
3026 if os.path.lexists(target):
3026 if os.path.lexists(target):
3027 dsremovunk.add(abs)
3027 dsremovunk.add(abs)
3028 dsremoved -= dsremovunk
3028 dsremoved -= dsremovunk
3029
3029
3030 # actions to be actually performed by revert
3030 # actions to be actually performed by revert
3031 # (<list of files>, <message>) tuple
3031 # (<list of files>, <message>) tuple
3032 actions = {'revert': ([], _('reverting %s\n')),
3032 actions = {'revert': ([], _('reverting %s\n')),
3033 'add': ([], _('adding %s\n')),
3033 'add': ([], _('adding %s\n')),
3034 'remove': ([], _('removing %s\n')),
3034 'remove': ([], _('removing %s\n')),
3035 'drop': ([], _('removing %s\n')),
3035 'drop': ([], _('removing %s\n')),
3036 'forget': ([], _('forgetting %s\n')),
3036 'forget': ([], _('forgetting %s\n')),
3037 'undelete': ([], _('undeleting %s\n')),
3037 'undelete': ([], _('undeleting %s\n')),
3038 'noop': (None, _('no changes needed to %s\n')),
3038 'noop': (None, _('no changes needed to %s\n')),
3039 'unknown': (None, _('file not managed: %s\n')),
3039 'unknown': (None, _('file not managed: %s\n')),
3040 }
3040 }
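# For illustration (the file name below is hypothetical, not part of this
# function): appending 'foo.txt' to actions['revert'][0] queues it for checkout
# from the target revision, and the paired template prints "reverting foo.txt".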
3041
3041
3042 # "constants" that convey the backup strategy.
3042 # "constants" that convey the backup strategy.
3043 # All are set to `discard` if `no-backup` is set, to avoid checking
3043 # All are set to `discard` if `no-backup` is set, to avoid checking
3044 # no_backup lower in the code.
3044 # no_backup lower in the code.
3045 # These values are ordered for comparison purposes
3045 # These values are ordered for comparison purposes
3046 backup = 2 # unconditionally do backup
3046 backup = 2 # unconditionally do backup
3047 check = 1 # check if the existing file differs from target
3047 check = 1 # check if the existing file differs from target
3048 discard = 0 # never do backup
3048 discard = 0 # never do backup
3049 if opts.get('no_backup'):
3049 if opts.get('no_backup'):
3050 backup = check = discard
3050 backup = check = discard
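# Reading the dispatch loop further below: an entry tagged `backup` always gets
# a .orig copy, an entry tagged `check` gets one only when the working-copy file
# actually differs from the target revision, and `discard` entries are never
# backed up.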
3051
3051
3052 backupanddel = actions['remove']
3052 backupanddel = actions['remove']
3053 if not opts.get('no_backup'):
3053 if not opts.get('no_backup'):
3054 backupanddel = actions['drop']
3054 backupanddel = actions['drop']
3055
3055
3056 disptable = (
3056 disptable = (
3057 # dispatch table:
3057 # dispatch table:
3058 # file state
3058 # file state
3059 # action
3059 # action
3060 # make backup
3060 # make backup
3061
3061
3062 ## Sets that result in changes to files on disk
3062 ## Sets that result in changes to files on disk
3063 # Modified compared to target, no local change
3063 # Modified compared to target, no local change
3064 (modified, actions['revert'], discard),
3064 (modified, actions['revert'], discard),
3065 # Modified compared to target, but local file is deleted
3065 # Modified compared to target, but local file is deleted
3066 (deleted, actions['revert'], discard),
3066 (deleted, actions['revert'], discard),
3067 # Modified compared to target, local change
3067 # Modified compared to target, local change
3068 (dsmodified, actions['revert'], backup),
3068 (dsmodified, actions['revert'], backup),
3069 # Added since target
3069 # Added since target
3070 (added, actions['remove'], discard),
3070 (added, actions['remove'], discard),
3071 # Added in working directory
3071 # Added in working directory
3072 (dsadded, actions['forget'], discard),
3072 (dsadded, actions['forget'], discard),
3073 # Added since target, have local modification
3073 # Added since target, have local modification
3074 (modadded, backupanddel, backup),
3074 (modadded, backupanddel, backup),
3075 # Added since target but file is missing in working directory
3075 # Added since target but file is missing in working directory
3076 (deladded, actions['drop'], discard),
3076 (deladded, actions['drop'], discard),
3077 # Removed since target, before working copy parent
3077 # Removed since target, before working copy parent
3078 (removed, actions['add'], discard),
3078 (removed, actions['add'], discard),
3079 # Same as `removed` but an unknown file exists at the same path
3079 # Same as `removed` but an unknown file exists at the same path
3080 (removunk, actions['add'], check),
3080 (removunk, actions['add'], check),
3081 # Removed since target, marked as such in working copy parent
3081 # Removed since target, marked as such in working copy parent
3082 (dsremoved, actions['undelete'], discard),
3082 (dsremoved, actions['undelete'], discard),
3083 # Same as `dsremoved` but an unknown file exists at the same path
3083 # Same as `dsremoved` but an unknown file exists at the same path
3084 (dsremovunk, actions['undelete'], check),
3084 (dsremovunk, actions['undelete'], check),
3085 ## the following sets do not result in any file changes
3085 ## the following sets do not result in any file changes
3086 # File with no modification
3086 # File with no modification
3087 (clean, actions['noop'], discard),
3087 (clean, actions['noop'], discard),
3088 # Existing file, not tracked anywhere
3088 # Existing file, not tracked anywhere
3089 (unknown, actions['unknown'], discard),
3089 (unknown, actions['unknown'], discard),
3090 )
3090 )
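# The loop below walks `names` and stops at the first `disptable` set that
# contains the file: the file is appended to that action's list, the status
# message is printed, and `dobackup` (combined with the constants above) decides
# whether a .orig copy is written first.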
3091
3091
3092 for abs, (rel, exact) in sorted(names.items()):
3092 for abs, (rel, exact) in sorted(names.items()):
3093 # target file to be touched on disk (relative to cwd)
3093 # target file to be touched on disk (relative to cwd)
3094 target = repo.wjoin(abs)
3094 target = repo.wjoin(abs)
3095 # search the entry in the dispatch table.
3095 # search the entry in the dispatch table.
3096 # if the file is in any of these sets, it was touched in the working
3096 # if the file is in any of these sets, it was touched in the working
3097 # directory parent and we are sure it needs to be reverted.
3097 # directory parent and we are sure it needs to be reverted.
3098 for table, (xlist, msg), dobackup in disptable:
3098 for table, (xlist, msg), dobackup in disptable:
3099 if abs not in table:
3099 if abs not in table:
3100 continue
3100 continue
3101 if xlist is not None:
3101 if xlist is not None:
3102 xlist.append(abs)
3102 xlist.append(abs)
3103 if dobackup and (backup <= dobackup
3103 if dobackup and (backup <= dobackup
3104 or wctx[abs].cmp(ctx[abs])):
3104 or wctx[abs].cmp(ctx[abs])):
3105 bakname = origpath(ui, repo, rel)
3105 bakname = origpath(ui, repo, rel)
3106 ui.note(_('saving current version of %s as %s\n') %
3106 ui.note(_('saving current version of %s as %s\n') %
3107 (rel, bakname))
3107 (rel, bakname))
3108 if not opts.get('dry_run'):
3108 if not opts.get('dry_run'):
3109 if interactive:
3109 if interactive:
3110 util.copyfile(target, bakname)
3110 util.copyfile(target, bakname)
3111 else:
3111 else:
3112 util.rename(target, bakname)
3112 util.rename(target, bakname)
3113 if ui.verbose or not exact:
3113 if ui.verbose or not exact:
3114 if not isinstance(msg, basestring):
3114 if not isinstance(msg, basestring):
3115 msg = msg(abs)
3115 msg = msg(abs)
3116 ui.status(msg % rel)
3116 ui.status(msg % rel)
3117 elif exact:
3117 elif exact:
3118 ui.warn(msg % rel)
3118 ui.warn(msg % rel)
3119 break
3119 break
3120
3120
3121 if not opts.get('dry_run'):
3121 if not opts.get('dry_run'):
3122 needdata = ('revert', 'add', 'undelete')
3122 needdata = ('revert', 'add', 'undelete')
3123 _revertprefetch(repo, ctx, *[actions[name][0] for name in needdata])
3123 _revertprefetch(repo, ctx, *[actions[name][0] for name in needdata])
3124 _performrevert(repo, parents, ctx, actions, interactive)
3124 _performrevert(repo, parents, ctx, actions, interactive)
3125
3125
3126 if targetsubs:
3126 if targetsubs:
3127 # Revert the subrepos on the revert list
3127 # Revert the subrepos on the revert list
3128 for sub in targetsubs:
3128 for sub in targetsubs:
3129 try:
3129 try:
3130 wctx.sub(sub).revert(ctx.substate[sub], *pats, **opts)
3130 wctx.sub(sub).revert(ctx.substate[sub], *pats, **opts)
3131 except KeyError:
3131 except KeyError:
3132 raise error.Abort("subrepository '%s' does not exist in %s!"
3132 raise error.Abort("subrepository '%s' does not exist in %s!"
3133 % (sub, short(ctx.node())))
3133 % (sub, short(ctx.node())))
3134 finally:
3134 finally:
3135 wlock.release()
3135 wlock.release()
3136
3136
3137 def origpath(ui, repo, filepath):
3137 def origpath(ui, repo, filepath):
3138 '''customize where .orig files are created
3138 '''customize where .orig files are created
3139
3139
3140 Fetch the user-defined path from the config file: [ui] origbackuppath = <path>
3140 Fetch the user-defined path from the config file: [ui] origbackuppath = <path>
3141 Fall back to default (filepath) if not specified
3141 Fall back to default (filepath) if not specified
3142 '''
3142 '''
3143 origbackuppath = ui.config('ui', 'origbackuppath', None)
3143 origbackuppath = ui.config('ui', 'origbackuppath', None)
3144 if origbackuppath is None:
3144 if origbackuppath is None:
3145 return filepath + ".orig"
3145 return filepath + ".orig"
3146
3146
3147 filepathfromroot = os.path.relpath(filepath, start=repo.root)
3147 filepathfromroot = os.path.relpath(filepath, start=repo.root)
3148 fullorigpath = repo.wjoin(origbackuppath, filepathfromroot)
3148 fullorigpath = repo.wjoin(origbackuppath, filepathfromroot)
3149
3149
3150 origbackupdir = repo.vfs.dirname(fullorigpath)
3150 origbackupdir = repo.vfs.dirname(fullorigpath)
3151 if not repo.vfs.exists(origbackupdir):
3151 if not repo.vfs.exists(origbackupdir):
3152 ui.note(_('creating directory: %s\n') % origbackupdir)
3152 ui.note(_('creating directory: %s\n') % origbackupdir)
3153 util.makedirs(origbackupdir)
3153 util.makedirs(origbackupdir)
3154
3154
3155 return fullorigpath + ".orig"
3155 return fullorigpath + ".orig"
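# Illustrative configuration (a user hgrc snippet, not part of this file): with
# the setting below, origpath() keeps .orig backups under .hg/origbackups
# instead of next to the files being reverted.
#
#   [ui]
#   origbackuppath = .hg/origbackups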
3156
3156
3157 def _revertprefetch(repo, ctx, *files):
3157 def _revertprefetch(repo, ctx, *files):
3158 """Let extensions that change the storage layer prefetch content"""
3158 """Let extensions that change the storage layer prefetch content"""
3159 pass
3159 pass
3160
3160
3161 def _performrevert(repo, parents, ctx, actions, interactive=False):
3161 def _performrevert(repo, parents, ctx, actions, interactive=False):
3162 """function that actually performs all the actions computed for revert
3162 """function that actually performs all the actions computed for revert
3163
3163
3164 This is an independent function so that extensions can plug in and react to
3164 This is an independent function so that extensions can plug in and react to
3165 the imminent revert.
3165 the imminent revert.
3166
3166
3167 Make sure you have the working directory locked when calling this function.
3167 Make sure you have the working directory locked when calling this function.
3168 """
3168 """
3169 parent, p2 = parents
3169 parent, p2 = parents
3170 node = ctx.node()
3170 node = ctx.node()
3171 def checkout(f):
3171 def checkout(f):
3172 fc = ctx[f]
3172 fc = ctx[f]
3173 repo.wwrite(f, fc.data(), fc.flags())
3173 repo.wwrite(f, fc.data(), fc.flags())
3174
3174
3175 audit_path = pathutil.pathauditor(repo.root)
3175 audit_path = pathutil.pathauditor(repo.root)
3176 for f in actions['forget'][0]:
3176 for f in actions['forget'][0]:
3177 repo.dirstate.drop(f)
3177 repo.dirstate.drop(f)
3178 for f in actions['remove'][0]:
3178 for f in actions['remove'][0]:
3179 audit_path(f)
3179 audit_path(f)
3180 try:
3180 try:
3181 util.unlinkpath(repo.wjoin(f))
3181 util.unlinkpath(repo.wjoin(f))
3182 except OSError:
3182 except OSError:
3183 pass
3183 pass
3184 repo.dirstate.remove(f)
3184 repo.dirstate.remove(f)
3185 for f in actions['drop'][0]:
3185 for f in actions['drop'][0]:
3186 audit_path(f)
3186 audit_path(f)
3187 repo.dirstate.remove(f)
3187 repo.dirstate.remove(f)
3188
3188
3189 normal = None
3189 normal = None
3190 if node == parent:
3190 if node == parent:
3191 # We're reverting to our parent. If possible, we'd like status
3191 # We're reverting to our parent. If possible, we'd like status
3192 # to report the file as clean. We have to use normallookup for
3192 # to report the file as clean. We have to use normallookup for
3193 # merges to avoid losing information about merged/dirty files.
3193 # merges to avoid losing information about merged/dirty files.
3194 if p2 != nullid:
3194 if p2 != nullid:
3195 normal = repo.dirstate.normallookup
3195 normal = repo.dirstate.normallookup
3196 else:
3196 else:
3197 normal = repo.dirstate.normal
3197 normal = repo.dirstate.normal
3198
3198
3199 newlyaddedandmodifiedfiles = set()
3199 newlyaddedandmodifiedfiles = set()
3200 if interactive:
3200 if interactive:
3201 # Prompt the user for changes to revert
3201 # Prompt the user for changes to revert
3202 torevert = [repo.wjoin(f) for f in actions['revert'][0]]
3202 torevert = [repo.wjoin(f) for f in actions['revert'][0]]
3203 m = scmutil.match(ctx, torevert, {})
3203 m = scmutil.match(ctx, torevert, {})
3204 diffopts = patch.difffeatureopts(repo.ui, whitespace=True)
3204 diffopts = patch.difffeatureopts(repo.ui, whitespace=True)
3205 diffopts.nodates = True
3205 diffopts.nodates = True
3206 diffopts.git = True
3206 diffopts.git = True
3207 reversehunks = repo.ui.configbool('experimental',
3207 reversehunks = repo.ui.configbool('experimental',
3208 'revertalternateinteractivemode',
3208 'revertalternateinteractivemode',
3209 True)
3209 True)
3210 if reversehunks:
3210 if reversehunks:
3211 diff = patch.diff(repo, ctx.node(), None, m, opts=diffopts)
3211 diff = patch.diff(repo, ctx.node(), None, m, opts=diffopts)
3212 else:
3212 else:
3213 diff = patch.diff(repo, None, ctx.node(), m, opts=diffopts)
3213 diff = patch.diff(repo, None, ctx.node(), m, opts=diffopts)
3214 originalchunks = patch.parsepatch(diff)
3214 originalchunks = patch.parsepatch(diff)
3215
3215
3216 try:
3216 try:
3217
3217
3218 chunks, opts = recordfilter(repo.ui, originalchunks)
3218 chunks, opts = recordfilter(repo.ui, originalchunks)
3219 if reversehunks:
3219 if reversehunks:
3220 chunks = patch.reversehunks(chunks)
3220 chunks = patch.reversehunks(chunks)
3221
3221
3222 except patch.PatchError as err:
3222 except patch.PatchError as err:
3223 raise error.Abort(_('error parsing patch: %s') % err)
3223 raise error.Abort(_('error parsing patch: %s') % err)
3224
3224
3225 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
3225 newlyaddedandmodifiedfiles = newandmodified(chunks, originalchunks)
3226 # Apply changes
3226 # Apply changes
3227 fp = cStringIO.StringIO()
3227 fp = cStringIO.StringIO()
3228 for c in chunks:
3228 for c in chunks:
3229 c.write(fp)
3229 c.write(fp)
3230 dopatch = fp.tell()
3230 dopatch = fp.tell()
3231 fp.seek(0)
3231 fp.seek(0)
3232 if dopatch:
3232 if dopatch:
3233 try:
3233 try:
3234 patch.internalpatch(repo.ui, repo, fp, 1, eolmode=None)
3234 patch.internalpatch(repo.ui, repo, fp, 1, eolmode=None)
3235 except patch.PatchError as err:
3235 except patch.PatchError as err:
3236 raise error.Abort(str(err))
3236 raise error.Abort(str(err))
3237 del fp
3237 del fp
3238 else:
3238 else:
3239 for f in actions['revert'][0]:
3239 for f in actions['revert'][0]:
3240 checkout(f)
3240 checkout(f)
3241 if normal:
3241 if normal:
3242 normal(f)
3242 normal(f)
3243
3243
3244 for f in actions['add'][0]:
3244 for f in actions['add'][0]:
3245 # Don't checkout modified files, they are already created by the diff
3245 # Don't checkout modified files, they are already created by the diff
3246 if f not in newlyaddedandmodifiedfiles:
3246 if f not in newlyaddedandmodifiedfiles:
3247 checkout(f)
3247 checkout(f)
3248 repo.dirstate.add(f)
3248 repo.dirstate.add(f)
3249
3249
3250 normal = repo.dirstate.normallookup
3250 normal = repo.dirstate.normallookup
3251 if node == parent and p2 == nullid:
3251 if node == parent and p2 == nullid:
3252 normal = repo.dirstate.normal
3252 normal = repo.dirstate.normal
3253 for f in actions['undelete'][0]:
3253 for f in actions['undelete'][0]:
3254 checkout(f)
3254 checkout(f)
3255 normal(f)
3255 normal(f)
3256
3256
3257 copied = copies.pathcopies(repo[parent], ctx)
3257 copied = copies.pathcopies(repo[parent], ctx)
3258
3258
3259 for f in actions['add'][0] + actions['undelete'][0] + actions['revert'][0]:
3259 for f in actions['add'][0] + actions['undelete'][0] + actions['revert'][0]:
3260 if f in copied:
3260 if f in copied:
3261 repo.dirstate.copy(copied[f], f)
3261 repo.dirstate.copy(copied[f], f)
3262
3262
3263 def command(table):
3263 def command(table):
3264 """Returns a function object to be used as a decorator for making commands.
3264 """Returns a function object to be used as a decorator for making commands.
3265
3265
3266 This function receives a command table as its argument. The table should
3266 This function receives a command table as its argument. The table should
3267 be a dict.
3267 be a dict.
3268
3268
3269 The returned function can be used as a decorator for adding commands
3269 The returned function can be used as a decorator for adding commands
3270 to that command table. This function accepts multiple arguments to define
3270 to that command table. This function accepts multiple arguments to define
3271 a command.
3271 a command.
3272
3272
3273 The first argument is the command name.
3273 The first argument is the command name.
3274
3274
3275 The options argument is an iterable of tuples defining command arguments.
3275 The options argument is an iterable of tuples defining command arguments.
3276 See ``mercurial.fancyopts.fancyopts()`` for the format of each tuple.
3276 See ``mercurial.fancyopts.fancyopts()`` for the format of each tuple.
3277
3277
3278 The synopsis argument defines a short, one line summary of how to use the
3278 The synopsis argument defines a short, one line summary of how to use the
3279 command. This shows up in the help output.
3279 command. This shows up in the help output.
3280
3280
3281 The norepo argument defines whether the command does not require a
3281 The norepo argument defines whether the command does not require a
3282 local repository. Most commands operate against a repository, thus the
3282 local repository. Most commands operate against a repository, thus the
3283 default is False.
3283 default is False.
3284
3284
3285 The optionalrepo argument defines whether the command optionally requires
3285 The optionalrepo argument defines whether the command optionally requires
3286 a local repository.
3286 a local repository.
3287
3287
3288 The inferrepo argument defines whether to try to find a repository from the
3288 The inferrepo argument defines whether to try to find a repository from the
3289 command line arguments. If True, arguments will be examined for potential
3289 command line arguments. If True, arguments will be examined for potential
3290 repository locations. See ``findrepo()``. If a repository is found, it
3290 repository locations. See ``findrepo()``. If a repository is found, it
3291 will be used.
3291 will be used.
3292 """
3292 """
3293 def cmd(name, options=(), synopsis=None, norepo=False, optionalrepo=False,
3293 def cmd(name, options=(), synopsis=None, norepo=False, optionalrepo=False,
3294 inferrepo=False):
3294 inferrepo=False):
3295 def decorator(func):
3295 def decorator(func):
3296 if synopsis:
3296 if synopsis:
3297 table[name] = func, list(options), synopsis
3297 table[name] = func, list(options), synopsis
3298 else:
3298 else:
3299 table[name] = func, list(options)
3299 table[name] = func, list(options)
3300
3300
3301 if norepo:
3301 if norepo:
3302 # Avoid import cycle.
3302 # Avoid import cycle.
3303 import commands
3303 import commands
3304 commands.norepo += ' %s' % ' '.join(parsealiases(name))
3304 commands.norepo += ' %s' % ' '.join(parsealiases(name))
3305
3305
3306 if optionalrepo:
3306 if optionalrepo:
3307 import commands
3307 import commands
3308 commands.optionalrepo += ' %s' % ' '.join(parsealiases(name))
3308 commands.optionalrepo += ' %s' % ' '.join(parsealiases(name))
3309
3309
3310 if inferrepo:
3310 if inferrepo:
3311 import commands
3311 import commands
3312 commands.inferrepo += ' %s' % ' '.join(parsealiases(name))
3312 commands.inferrepo += ' %s' % ' '.join(parsealiases(name))
3313
3313
3314 return func
3314 return func
3315 return decorator
3315 return decorator
3316
3316
3317 return cmd
3317 return cmd
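# A minimal sketch of how a third-party extension might use this factory (the
# 'hello' command and its option are hypothetical, not part of Mercurial):
#
#   from mercurial import cmdutil
#   from mercurial.i18n import _
#
#   cmdtable = {}
#   command = cmdutil.command(cmdtable)
#
#   @command('hello', [('g', 'greeting', 'Hello', _('text to print'), _('TEXT'))],
#            _('hg hello [-g TEXT]'), norepo=True)
#   def hello(ui, *args, **opts):
#       ui.write('%s, world!\n' % opts['greeting'])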
3318
3318
3319 # a list of (ui, repo, otherpeer, opts, missing) functions called by
3319 # a list of (ui, repo, otherpeer, opts, missing) functions called by
3320 # commands.outgoing. "missing" is the "missing" attribute of the result of
3320 # commands.outgoing. "missing" is the "missing" attribute of the result of
3321 # "findcommonoutgoing()"
3321 # "findcommonoutgoing()"
3322 outgoinghooks = util.hooks()
3322 outgoinghooks = util.hooks()
3323
3323
3324 # a list of (ui, repo) functions called by commands.summary
3324 # a list of (ui, repo) functions called by commands.summary
3325 summaryhooks = util.hooks()
3325 summaryhooks = util.hooks()
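# Extensions register callbacks through util.hooks.add(); a sketch (the
# extension name and function below are hypothetical):
#
#   def summaryhook(ui, repo):
#       ui.note('myext: nothing in progress\n')
#   cmdutil.summaryhooks.add('myext', summaryhook)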
3326
3326
3327 # a list of (ui, repo, opts, changes) functions called by commands.summary.
3327 # a list of (ui, repo, opts, changes) functions called by commands.summary.
3328 #
3328 #
3329 # functions should return tuple of booleans below, if 'changes' is None:
3329 # functions should return tuple of booleans below, if 'changes' is None:
3330 # (whether-incomings-are-needed, whether-outgoings-are-needed)
3330 # (whether-incomings-are-needed, whether-outgoings-are-needed)
3331 #
3331 #
3332 # otherwise, 'changes' is a tuple of tuples below:
3332 # otherwise, 'changes' is a tuple of tuples below:
3333 # - (sourceurl, sourcebranch, sourcepeer, incoming)
3333 # - (sourceurl, sourcebranch, sourcepeer, incoming)
3334 # - (desturl, destbranch, destpeer, outgoing)
3334 # - (desturl, destbranch, destpeer, outgoing)
3335 summaryremotehooks = util.hooks()
3335 summaryremotehooks = util.hooks()
3336
3336
3337 # A list of state files kept by multistep operations like graft.
3337 # A list of state files kept by multistep operations like graft.
3338 # Since graft cannot be aborted, it is considered 'clearable' by update.
3338 # Since graft cannot be aborted, it is considered 'clearable' by update.
3339 # note: bisect is intentionally excluded
3339 # note: bisect is intentionally excluded
3340 # (state file, clearable, allowcommit, error, hint)
3340 # (state file, clearable, allowcommit, error, hint)
3341 unfinishedstates = [
3341 unfinishedstates = [
3342 ('graftstate', True, False, _('graft in progress'),
3342 ('graftstate', True, False, _('graft in progress'),
3343 _("use 'hg graft --continue' or 'hg update' to abort")),
3343 _("use 'hg graft --continue' or 'hg update' to abort")),
3344 ('updatestate', True, False, _('last update was interrupted'),
3344 ('updatestate', True, False, _('last update was interrupted'),
3345 _("use 'hg update' to get a consistent checkout"))
3345 _("use 'hg update' to get a consistent checkout"))
3346 ]
3346 ]
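# Extensions that implement their own multistep commands can append entries of
# the same shape; a sketch (the state file and hint below are illustrative):
#
#   cmdutil.unfinishedstates.append(
#       ('rebasestate', False, False, _('rebase in progress'),
#        _("use 'hg rebase --continue' or 'hg rebase --abort'")))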
3347
3347
3348 def checkunfinished(repo, commit=False):
3348 def checkunfinished(repo, commit=False):
3349 '''Look for an unfinished multistep operation, like graft, and abort
3349 '''Look for an unfinished multistep operation, like graft, and abort
3350 if found. It's probably good to check this right before
3350 if found. It's probably good to check this right before
3351 bailifchanged().
3351 bailifchanged().
3352 '''
3352 '''
3353 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3353 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3354 if commit and allowcommit:
3354 if commit and allowcommit:
3355 continue
3355 continue
3356 if repo.vfs.exists(f):
3356 if repo.vfs.exists(f):
3357 raise error.Abort(msg, hint=hint)
3357 raise error.Abort(msg, hint=hint)
3358
3358
3359 def clearunfinished(repo):
3359 def clearunfinished(repo):
3360 '''Check for unfinished operations (as above), and clear the ones
3360 '''Check for unfinished operations (as above), and clear the ones
3361 that are clearable.
3361 that are clearable.
3362 '''
3362 '''
3363 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3363 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3364 if not clearable and repo.vfs.exists(f):
3364 if not clearable and repo.vfs.exists(f):
3365 raise error.Abort(msg, hint=hint)
3365 raise error.Abort(msg, hint=hint)
3366 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3366 for f, clearable, allowcommit, msg, hint in unfinishedstates:
3367 if clearable and repo.vfs.exists(f):
3367 if clearable and repo.vfs.exists(f):
3368 util.unlink(repo.join(f))
3368 util.unlink(repo.join(f))
3369
3369
3370 class dirstateguard(object):
3370 class dirstateguard(object):
3371 '''Restore dirstate at unexpected failure.
3371 '''Restore dirstate at unexpected failure.
3372
3372
3373 At construction, this class does:
3373 At construction, this class does:
3374
3374
3375 - write current ``repo.dirstate`` out, and
3375 - write current ``repo.dirstate`` out, and
3376 - save ``.hg/dirstate`` into the backup file
3376 - save ``.hg/dirstate`` into the backup file
3377
3377
3378 This restores ``.hg/dirstate`` from the backup file, if ``release()``
3378 This restores ``.hg/dirstate`` from the backup file, if ``release()``
3379 is invoked before ``close()``.
3379 is invoked before ``close()``.
3380
3380
3381 If ``close()`` is invoked before ``release()``, this just removes the backup file.
3381 If ``close()`` is invoked before ``release()``, this just removes the backup file.
3382 '''
3382 '''
3383
3383
3384 def __init__(self, repo, name):
3384 def __init__(self, repo, name):
3385 self._repo = repo
3385 self._repo = repo
3386 self._suffix = '.backup.%s.%d' % (name, id(self))
3386 self._suffix = '.backup.%s.%d' % (name, id(self))
3387 repo.dirstate._savebackup(repo.currenttransaction(), self._suffix)
3387 repo.dirstate._savebackup(repo.currenttransaction(), self._suffix)
3388 self._active = True
3388 self._active = True
3389 self._closed = False
3389 self._closed = False
3390
3390
3391 def __del__(self):
3391 def __del__(self):
3392 if self._active: # still active
3392 if self._active: # still active
3393 # this may occur, even if this class is used correctly:
3393 # this may occur, even if this class is used correctly:
3394 # for example, releasing other resources like transaction
3394 # for example, releasing other resources like transaction
3395 # may raise exception before ``dirstateguard.release`` in
3395 # may raise exception before ``dirstateguard.release`` in
3396 # ``release(tr, ....)``.
3396 # ``release(tr, ....)``.
3397 self._abort()
3397 self._abort()
3398
3398
3399 def close(self):
3399 def close(self):
3400 if not self._active: # already inactivated
3400 if not self._active: # already inactivated
3401 msg = (_("can't close already inactivated backup: dirstate%s")
3401 msg = (_("can't close already inactivated backup: dirstate%s")
3402 % self._suffix)
3402 % self._suffix)
3403 raise error.Abort(msg)
3403 raise error.Abort(msg)
3404
3404
3405 self._repo.dirstate._clearbackup(self._repo.currenttransaction(),
3405 self._repo.dirstate._clearbackup(self._repo.currenttransaction(),
3406 self._suffix)
3406 self._suffix)
3407 self._active = False
3407 self._active = False
3408 self._closed = True
3408 self._closed = True
3409
3409
3410 def _abort(self):
3410 def _abort(self):
3411 self._repo.dirstate._restorebackup(self._repo.currenttransaction(),
3411 self._repo.dirstate._restorebackup(self._repo.currenttransaction(),
3412 self._suffix)
3412 self._suffix)
3413 self._active = False
3413 self._active = False
3414
3414
3415 def release(self):
3415 def release(self):
3416 if not self._closed:
3416 if not self._closed:
3417 if not self._active: # already inactivated
3417 if not self._active: # already inactivated
3418 msg = (_("can't release already inactivated backup:"
3418 msg = (_("can't release already inactivated backup:"
3419 " dirstate%s")
3419 " dirstate%s")
3420 % self._suffix)
3420 % self._suffix)
3421 raise error.Abort(msg)
3421 raise error.Abort(msg)
3422 self._abort()
3422 self._abort()
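# A minimal usage sketch (the surrounding caller code is assumed, not part of
# this file):
#
#   dsguard = dirstateguard(repo, 'myoperation')
#   try:
#       ...                # mutate the dirstate / working copy
#       dsguard.close()    # success: keep the new dirstate, drop the backup
#   finally:
#       dsguard.release()  # if close() never ran, restore the saved dirstate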
@@ -1,6974 +1,6973 b''
1 # commands.py - command processing for mercurial
1 # commands.py - command processing for mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from node import hex, bin, nullhex, nullid, nullrev, short
8 from node import hex, bin, nullhex, nullid, nullrev, short
9 from lock import release
9 from lock import release
10 from i18n import _
10 from i18n import _
11 import os, re, difflib, time, tempfile, errno, shlex
11 import os, re, difflib, time, tempfile, errno, shlex
12 import sys, socket
12 import sys, socket
13 import hg, scmutil, util, revlog, copies, error, bookmarks
13 import hg, scmutil, util, revlog, copies, error, bookmarks
14 import patch, help, encoding, templatekw, discovery
14 import patch, help, encoding, templatekw, discovery
15 import archival, changegroup, cmdutil, hbisect
15 import archival, changegroup, cmdutil, hbisect
16 import sshserver, hgweb
16 import sshserver, hgweb
17 import extensions
17 import extensions
18 import merge as mergemod
18 import merge as mergemod
19 import minirst, revset, fileset
19 import minirst, revset, fileset
20 import dagparser, context, simplemerge, graphmod, copies
20 import dagparser, context, simplemerge, graphmod, copies
21 import random, operator
21 import random, operator
22 import setdiscovery, treediscovery, dagutil, pvec, localrepo, destutil
22 import setdiscovery, treediscovery, dagutil, pvec, localrepo, destutil
23 import phases, obsolete, exchange, bundle2, repair, lock as lockmod
23 import phases, obsolete, exchange, bundle2, repair, lock as lockmod
24 import ui as uimod
24 import ui as uimod
25 import streamclone
25 import streamclone
26
26
27 table = {}
27 table = {}
28
28
29 command = cmdutil.command(table)
29 command = cmdutil.command(table)
30
30
31 # Space delimited list of commands that don't require local repositories.
31 # Space delimited list of commands that don't require local repositories.
32 # This should be populated by passing norepo=True into the @command decorator.
32 # This should be populated by passing norepo=True into the @command decorator.
33 norepo = ''
33 norepo = ''
34 # Space delimited list of commands that optionally require local repositories.
34 # Space delimited list of commands that optionally require local repositories.
35 # This should be populated by passing optionalrepo=True into the @command
35 # This should be populated by passing optionalrepo=True into the @command
36 # decorator.
36 # decorator.
37 optionalrepo = ''
37 optionalrepo = ''
38 # Space delimited list of commands that will examine arguments looking for
38 # Space delimited list of commands that will examine arguments looking for
39 # a repository. This should be populated by passing inferrepo=True into the
39 # a repository. This should be populated by passing inferrepo=True into the
40 # @command decorator.
40 # @command decorator.
41 inferrepo = ''
41 inferrepo = ''
42
42
43 # label constants
43 # label constants
44 # until 3.5, bookmarks.current was the advertised name, not
44 # until 3.5, bookmarks.current was the advertised name, not
45 # bookmarks.active, so we must use both to avoid breaking old
45 # bookmarks.active, so we must use both to avoid breaking old
46 # custom styles
46 # custom styles
47 activebookmarklabel = 'bookmarks.active bookmarks.current'
47 activebookmarklabel = 'bookmarks.active bookmarks.current'
48
48
49 # common command options
49 # common command options
50
50
51 globalopts = [
51 globalopts = [
52 ('R', 'repository', '',
52 ('R', 'repository', '',
53 _('repository root directory or name of overlay bundle file'),
53 _('repository root directory or name of overlay bundle file'),
54 _('REPO')),
54 _('REPO')),
55 ('', 'cwd', '',
55 ('', 'cwd', '',
56 _('change working directory'), _('DIR')),
56 _('change working directory'), _('DIR')),
57 ('y', 'noninteractive', None,
57 ('y', 'noninteractive', None,
58 _('do not prompt, automatically pick the first choice for all prompts')),
58 _('do not prompt, automatically pick the first choice for all prompts')),
59 ('q', 'quiet', None, _('suppress output')),
59 ('q', 'quiet', None, _('suppress output')),
60 ('v', 'verbose', None, _('enable additional output')),
60 ('v', 'verbose', None, _('enable additional output')),
61 ('', 'config', [],
61 ('', 'config', [],
62 _('set/override config option (use \'section.name=value\')'),
62 _('set/override config option (use \'section.name=value\')'),
63 _('CONFIG')),
63 _('CONFIG')),
64 ('', 'debug', None, _('enable debugging output')),
64 ('', 'debug', None, _('enable debugging output')),
65 ('', 'debugger', None, _('start debugger')),
65 ('', 'debugger', None, _('start debugger')),
66 ('', 'encoding', encoding.encoding, _('set the charset encoding'),
66 ('', 'encoding', encoding.encoding, _('set the charset encoding'),
67 _('ENCODE')),
67 _('ENCODE')),
68 ('', 'encodingmode', encoding.encodingmode,
68 ('', 'encodingmode', encoding.encodingmode,
69 _('set the charset encoding mode'), _('MODE')),
69 _('set the charset encoding mode'), _('MODE')),
70 ('', 'traceback', None, _('always print a traceback on exception')),
70 ('', 'traceback', None, _('always print a traceback on exception')),
71 ('', 'time', None, _('time how long the command takes')),
71 ('', 'time', None, _('time how long the command takes')),
72 ('', 'profile', None, _('print command execution profile')),
72 ('', 'profile', None, _('print command execution profile')),
73 ('', 'version', None, _('output version information and exit')),
73 ('', 'version', None, _('output version information and exit')),
74 ('h', 'help', None, _('display help and exit')),
74 ('h', 'help', None, _('display help and exit')),
75 ('', 'hidden', False, _('consider hidden changesets')),
75 ('', 'hidden', False, _('consider hidden changesets')),
76 ]
76 ]
77
77
78 dryrunopts = [('n', 'dry-run', None,
78 dryrunopts = [('n', 'dry-run', None,
79 _('do not perform actions, just print output'))]
79 _('do not perform actions, just print output'))]
80
80
81 remoteopts = [
81 remoteopts = [
82 ('e', 'ssh', '',
82 ('e', 'ssh', '',
83 _('specify ssh command to use'), _('CMD')),
83 _('specify ssh command to use'), _('CMD')),
84 ('', 'remotecmd', '',
84 ('', 'remotecmd', '',
85 _('specify hg command to run on the remote side'), _('CMD')),
85 _('specify hg command to run on the remote side'), _('CMD')),
86 ('', 'insecure', None,
86 ('', 'insecure', None,
87 _('do not verify server certificate (ignoring web.cacerts config)')),
87 _('do not verify server certificate (ignoring web.cacerts config)')),
88 ]
88 ]
89
89
90 walkopts = [
90 walkopts = [
91 ('I', 'include', [],
91 ('I', 'include', [],
92 _('include names matching the given patterns'), _('PATTERN')),
92 _('include names matching the given patterns'), _('PATTERN')),
93 ('X', 'exclude', [],
93 ('X', 'exclude', [],
94 _('exclude names matching the given patterns'), _('PATTERN')),
94 _('exclude names matching the given patterns'), _('PATTERN')),
95 ]
95 ]
96
96
97 commitopts = [
97 commitopts = [
98 ('m', 'message', '',
98 ('m', 'message', '',
99 _('use text as commit message'), _('TEXT')),
99 _('use text as commit message'), _('TEXT')),
100 ('l', 'logfile', '',
100 ('l', 'logfile', '',
101 _('read commit message from file'), _('FILE')),
101 _('read commit message from file'), _('FILE')),
102 ]
102 ]
103
103
104 commitopts2 = [
104 commitopts2 = [
105 ('d', 'date', '',
105 ('d', 'date', '',
106 _('record the specified date as commit date'), _('DATE')),
106 _('record the specified date as commit date'), _('DATE')),
107 ('u', 'user', '',
107 ('u', 'user', '',
108 _('record the specified user as committer'), _('USER')),
108 _('record the specified user as committer'), _('USER')),
109 ]
109 ]
110
110
111 # hidden for now
111 # hidden for now
112 formatteropts = [
112 formatteropts = [
113 ('T', 'template', '',
113 ('T', 'template', '',
114 _('display with template (EXPERIMENTAL)'), _('TEMPLATE')),
114 _('display with template (EXPERIMENTAL)'), _('TEMPLATE')),
115 ]
115 ]
116
116
117 templateopts = [
117 templateopts = [
118 ('', 'style', '',
118 ('', 'style', '',
119 _('display using template map file (DEPRECATED)'), _('STYLE')),
119 _('display using template map file (DEPRECATED)'), _('STYLE')),
120 ('T', 'template', '',
120 ('T', 'template', '',
121 _('display with template'), _('TEMPLATE')),
121 _('display with template'), _('TEMPLATE')),
122 ]
122 ]
123
123
124 logopts = [
124 logopts = [
125 ('p', 'patch', None, _('show patch')),
125 ('p', 'patch', None, _('show patch')),
126 ('g', 'git', None, _('use git extended diff format')),
126 ('g', 'git', None, _('use git extended diff format')),
127 ('l', 'limit', '',
127 ('l', 'limit', '',
128 _('limit number of changes displayed'), _('NUM')),
128 _('limit number of changes displayed'), _('NUM')),
129 ('M', 'no-merges', None, _('do not show merges')),
129 ('M', 'no-merges', None, _('do not show merges')),
130 ('', 'stat', None, _('output diffstat-style summary of changes')),
130 ('', 'stat', None, _('output diffstat-style summary of changes')),
131 ('G', 'graph', None, _("show the revision DAG")),
131 ('G', 'graph', None, _("show the revision DAG")),
132 ] + templateopts
132 ] + templateopts
133
133
134 diffopts = [
134 diffopts = [
135 ('a', 'text', None, _('treat all files as text')),
135 ('a', 'text', None, _('treat all files as text')),
136 ('g', 'git', None, _('use git extended diff format')),
136 ('g', 'git', None, _('use git extended diff format')),
137 ('', 'nodates', None, _('omit dates from diff headers'))
137 ('', 'nodates', None, _('omit dates from diff headers'))
138 ]
138 ]
139
139
140 diffwsopts = [
140 diffwsopts = [
141 ('w', 'ignore-all-space', None,
141 ('w', 'ignore-all-space', None,
142 _('ignore white space when comparing lines')),
142 _('ignore white space when comparing lines')),
143 ('b', 'ignore-space-change', None,
143 ('b', 'ignore-space-change', None,
144 _('ignore changes in the amount of white space')),
144 _('ignore changes in the amount of white space')),
145 ('B', 'ignore-blank-lines', None,
145 ('B', 'ignore-blank-lines', None,
146 _('ignore changes whose lines are all blank')),
146 _('ignore changes whose lines are all blank')),
147 ]
147 ]
148
148
149 diffopts2 = [
149 diffopts2 = [
150 ('', 'noprefix', None, _('omit a/ and b/ prefixes from filenames')),
150 ('', 'noprefix', None, _('omit a/ and b/ prefixes from filenames')),
151 ('p', 'show-function', None, _('show which function each change is in')),
151 ('p', 'show-function', None, _('show which function each change is in')),
152 ('', 'reverse', None, _('produce a diff that undoes the changes')),
152 ('', 'reverse', None, _('produce a diff that undoes the changes')),
153 ] + diffwsopts + [
153 ] + diffwsopts + [
154 ('U', 'unified', '',
154 ('U', 'unified', '',
155 _('number of lines of context to show'), _('NUM')),
155 _('number of lines of context to show'), _('NUM')),
156 ('', 'stat', None, _('output diffstat-style summary of changes')),
156 ('', 'stat', None, _('output diffstat-style summary of changes')),
157 ('', 'root', '', _('produce diffs relative to subdirectory'), _('DIR')),
157 ('', 'root', '', _('produce diffs relative to subdirectory'), _('DIR')),
158 ]
158 ]
159
159
160 mergetoolopts = [
160 mergetoolopts = [
161 ('t', 'tool', '', _('specify merge tool')),
161 ('t', 'tool', '', _('specify merge tool')),
162 ]
162 ]
163
163
164 similarityopts = [
164 similarityopts = [
165 ('s', 'similarity', '',
165 ('s', 'similarity', '',
166 _('guess renamed files by similarity (0<=s<=100)'), _('SIMILARITY'))
166 _('guess renamed files by similarity (0<=s<=100)'), _('SIMILARITY'))
167 ]
167 ]
168
168
169 subrepoopts = [
169 subrepoopts = [
170 ('S', 'subrepos', None,
170 ('S', 'subrepos', None,
171 _('recurse into subrepositories'))
171 _('recurse into subrepositories'))
172 ]
172 ]
173
173
174 debugrevlogopts = [
174 debugrevlogopts = [
175 ('c', 'changelog', False, _('open changelog')),
175 ('c', 'changelog', False, _('open changelog')),
176 ('m', 'manifest', False, _('open manifest')),
176 ('m', 'manifest', False, _('open manifest')),
177 ('', 'dir', False, _('open directory manifest')),
177 ('', 'dir', False, _('open directory manifest')),
178 ]
178 ]
179
179
180 # Commands start here, listed alphabetically
180 # Commands start here, listed alphabetically
181
181
182 @command('^add',
182 @command('^add',
183 walkopts + subrepoopts + dryrunopts,
183 walkopts + subrepoopts + dryrunopts,
184 _('[OPTION]... [FILE]...'),
184 _('[OPTION]... [FILE]...'),
185 inferrepo=True)
185 inferrepo=True)
186 def add(ui, repo, *pats, **opts):
186 def add(ui, repo, *pats, **opts):
187 """add the specified files on the next commit
187 """add the specified files on the next commit
188
188
189 Schedule files to be version controlled and added to the
189 Schedule files to be version controlled and added to the
190 repository.
190 repository.
191
191
192 The files will be added to the repository at the next commit. To
192 The files will be added to the repository at the next commit. To
193 undo an add before that, see :hg:`forget`.
193 undo an add before that, see :hg:`forget`.
194
194
195 If no names are given, add all files to the repository.
195 If no names are given, add all files to the repository.
196
196
197 .. container:: verbose
197 .. container:: verbose
198
198
199 Examples:
199 Examples:
200
200
201 - New (unknown) files are added
201 - New (unknown) files are added
202 automatically by :hg:`add`::
202 automatically by :hg:`add`::
203
203
204 $ ls
204 $ ls
205 foo.c
205 foo.c
206 $ hg status
206 $ hg status
207 ? foo.c
207 ? foo.c
208 $ hg add
208 $ hg add
209 adding foo.c
209 adding foo.c
210 $ hg status
210 $ hg status
211 A foo.c
211 A foo.c
212
212
213 - Specific files to be added can be specified::
213 - Specific files to be added can be specified::
214
214
215 $ ls
215 $ ls
216 bar.c foo.c
216 bar.c foo.c
217 $ hg status
217 $ hg status
218 ? bar.c
218 ? bar.c
219 ? foo.c
219 ? foo.c
220 $ hg add bar.c
220 $ hg add bar.c
221 $ hg status
221 $ hg status
222 A bar.c
222 A bar.c
223 ? foo.c
223 ? foo.c
224
224
225 Returns 0 if all files are successfully added.
225 Returns 0 if all files are successfully added.
226 """
226 """
227
227
m = scmutil.match(repo[None], pats, opts)
rejected = cmdutil.add(ui, repo, m, "", False, **opts)
return rejected and 1 or 0

@command('addremove',
similarityopts + subrepoopts + walkopts + dryrunopts,
_('[OPTION]... [FILE]...'),
inferrepo=True)
def addremove(ui, repo, *pats, **opts):
"""add all new files, delete all missing files

Add all new files and remove all missing files from the
repository.

New files are ignored if they match any of the patterns in
``.hgignore``. As with add, these changes take effect at the next
commit.

Use the -s/--similarity option to detect renamed files. This
option takes a percentage between 0 (disabled) and 100 (files must
be identical) as its parameter. With a parameter greater than 0,
this compares every removed file with every added file and records
those similar enough as renames. Detecting renamed files this way
can be expensive. After using this option, :hg:`status -C` can be
used to check which files were identified as moved or renamed. If
not specified, -s/--similarity defaults to 100 and only renames of
identical files are detected.

.. container:: verbose

Examples:

- A number of files (bar.c and foo.c) are new,
while foobar.c has been removed (without using :hg:`remove`)
from the repository::

$ ls
bar.c foo.c
$ hg status
! foobar.c
? bar.c
? foo.c
$ hg addremove
adding bar.c
adding foo.c
removing foobar.c
$ hg status
A bar.c
A foo.c
R foobar.c

- A file foobar.c was moved to foo.c without using :hg:`rename`.
Afterwards, it was edited slightly::

$ ls
foo.c
$ hg status
! foobar.c
? foo.c
$ hg addremove --similarity 90
removing foobar.c
adding foo.c
recording removal of foobar.c as rename to foo.c (94% similar)
$ hg status -C
A foo.c
foobar.c
R foobar.c

Returns 0 if all files are successfully added.
"""
try:
sim = float(opts.get('similarity') or 100)
except ValueError:
raise error.Abort(_('similarity must be a number'))
if sim < 0 or sim > 100:
raise error.Abort(_('similarity must be between 0 and 100'))
matcher = scmutil.match(repo[None], pats, opts)
return scmutil.addremove(repo, matcher, "", opts, similarity=sim / 100.0)
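# note: --similarity is validated above as a percentage in [0, 100] and then
# handed to scmutil.addremove() as a 0.0-1.0 fraction (sim / 100.0), so for
# example `hg addremove -s 90` requests rename detection at a 0.9 threshold.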

@command('^annotate|blame',
[('r', 'rev', '', _('annotate the specified revision'), _('REV')),
('', 'follow', None,
_('follow copies/renames and list the filename (DEPRECATED)')),
('', 'no-follow', None, _("don't follow copies and renames")),
('a', 'text', None, _('treat all files as text')),
('u', 'user', None, _('list the author (long with -v)')),
('f', 'file', None, _('list the filename')),
('d', 'date', None, _('list the date (short with -q)')),
('n', 'number', None, _('list the revision number (default)')),
('c', 'changeset', None, _('list the changeset')),
('l', 'line-number', None, _('show line number at the first appearance'))
] + diffwsopts + walkopts + formatteropts,
_('[-r REV] [-f] [-a] [-u] [-d] [-n] [-c] [-l] FILE...'),
inferrepo=True)
def annotate(ui, repo, *pats, **opts):
"""show changeset information by line for each file

List changes in files, showing the revision id responsible for
each line

This command is useful for discovering when a change was made and
by whom.

Without the -a/--text option, annotate will avoid processing files
it detects as binary. With -a, annotate will annotate the file
anyway, although the results will probably be neither useful
nor desirable.

Returns 0 on success.
"""
if not pats:
raise error.Abort(_('at least one filename or pattern is required'))

if opts.get('follow'):
# --follow is deprecated and now just an alias for -f/--file
# to mimic the behavior of Mercurial before version 1.5
opts['file'] = True

ctx = scmutil.revsingle(repo, opts.get('rev'))

fm = ui.formatter('annotate', opts)
if ui.quiet:
datefunc = util.shortdate
else:
datefunc = util.datestr
if ctx.rev() is None:
def hexfn(node):
if node is None:
return None
else:
return fm.hexfunc(node)
if opts.get('changeset'):
# omit "+" suffix which is appended to node hex
def formatrev(rev):
if rev is None:
return '%d' % ctx.p1().rev()
else:
return '%d' % rev
else:
def formatrev(rev):
if rev is None:
return '%d+' % ctx.p1().rev()
else:
return '%d ' % rev
def formathex(hex):
if hex is None:
return '%s+' % fm.hexfunc(ctx.p1().node())
else:
return '%s ' % hex
else:
hexfn = fm.hexfunc
formatrev = formathex = str

opmap = [('user', ' ', lambda x: x[0].user(), ui.shortuser),
('number', ' ', lambda x: x[0].rev(), formatrev),
('changeset', ' ', lambda x: hexfn(x[0].node()), formathex),
('date', ' ', lambda x: x[0].date(), util.cachefunc(datefunc)),
('file', ' ', lambda x: x[0].path(), str),
('line_number', ':', lambda x: x[1], str),
]
fieldnamemap = {'number': 'rev', 'changeset': 'node'}
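# note: each opmap entry is (option name, column separator, value getter,
# plain-text formatter); the getters are applied to one annotated-line record
# from fctx.annotate() below. fieldnamemap only renames the formatter/template
# field names ('number' -> 'rev', 'changeset' -> 'node').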

if (not opts.get('user') and not opts.get('changeset')
and not opts.get('date') and not opts.get('file')):
opts['number'] = True

linenumber = opts.get('line_number') is not None
if linenumber and (not opts.get('changeset')) and (not opts.get('number')):
raise error.Abort(_('at least one of -n/-c is required for -l'))

if fm:
def makefunc(get, fmt):
return get
else:
def makefunc(get, fmt):
return lambda x: fmt(get(x))
funcmap = [(makefunc(get, fmt), sep) for op, sep, get, fmt in opmap
if opts.get(op)]
funcmap[0] = (funcmap[0][0], '') # no separator in front of first column
fields = ' '.join(fieldnamemap.get(op, op) for op, sep, get, fmt in opmap
if opts.get(op))

def bad(x, y):
raise error.Abort("%s: %s" % (x, y))

m = scmutil.match(ctx, pats, opts, badfn=bad)

follow = not opts.get('no_follow')
diffopts = patch.difffeatureopts(ui, opts, section='annotate',
whitespace=True)
for abs in ctx.walk(m):
fctx = ctx[abs]
if not opts.get('text') and util.binary(fctx.data()):
fm.plain(_("%s: binary file\n") % ((pats and m.rel(abs)) or abs))
continue

lines = fctx.annotate(follow=follow, linenumber=linenumber,
diffopts=diffopts)
formats = []
pieces = []

for f, sep in funcmap:
l = [f(n) for n, dummy in lines]
if l:
if fm:
formats.append(['%s' for x in l])
else:
sizes = [encoding.colwidth(x) for x in l]
ml = max(sizes)
formats.append([sep + ' ' * (ml - w) + '%s' for w in sizes])
pieces.append(l)
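# note: when a formatter/template is in use, plain '%s' placeholders are
# emitted and no padding is done; otherwise each column is right-aligned by
# padding every value to the width of its widest entry (encoding.colwidth, so
# double-width characters are accounted for).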

for f, p, l in zip(zip(*formats), zip(*pieces), lines):
fm.startitem()
fm.write(fields, "".join(f), *p)
fm.write('line', ": %s", l[1])

if lines and not lines[-1][1].endswith('\n'):
fm.plain('\n')

fm.end()

@command('archive',
[('', 'no-decode', None, _('do not pass files through decoders')),
('p', 'prefix', '', _('directory prefix for files in archive'),
_('PREFIX')),
('r', 'rev', '', _('revision to distribute'), _('REV')),
('t', 'type', '', _('type of distribution to create'), _('TYPE')),
] + subrepoopts + walkopts,
_('[OPTION]... DEST'))
def archive(ui, repo, dest, **opts):
'''create an unversioned archive of a repository revision

By default, the revision used is the parent of the working
directory; use -r/--rev to specify a different revision.

The archive type is automatically detected based on file
extension (or override using -t/--type).

.. container:: verbose

Examples:

- create a zip file containing the 1.0 release::

hg archive -r 1.0 project-1.0.zip

- create a tarball excluding .hg files::

hg archive project.tar.gz -X ".hg*"

Valid types are:

:``files``: a directory full of files (default)
:``tar``: tar archive, uncompressed
:``tbz2``: tar archive, compressed using bzip2
:``tgz``: tar archive, compressed using gzip
:``uzip``: zip archive, uncompressed
:``zip``: zip archive, compressed using deflate

The exact name of the destination archive or directory is given
using a format string; see :hg:`help export` for details.

Each member added to an archive file has a directory prefix
prepended. Use -p/--prefix to specify a format string for the
prefix. The default is the basename of the archive, with suffixes
removed.

Returns 0 on success.
'''

ctx = scmutil.revsingle(repo, opts.get('rev'))
if not ctx:
raise error.Abort(_('no working directory: please specify a revision'))
node = ctx.node()
dest = cmdutil.makefilename(repo, dest, node)
if os.path.realpath(dest) == repo.root:
raise error.Abort(_('repository root cannot be destination'))

kind = opts.get('type') or archival.guesskind(dest) or 'files'
prefix = opts.get('prefix')

if dest == '-':
if kind == 'files':
raise error.Abort(_('cannot archive plain files to stdout'))
dest = cmdutil.makefileobj(repo, dest)
if not prefix:
prefix = os.path.basename(repo.root) + '-%h'

prefix = cmdutil.makefilename(repo, prefix, node)
matchfn = scmutil.match(ctx, [], opts)
archival.archive(repo, dest, node, kind, not opts.get('no_decode'),
matchfn, prefix, subrepos=opts.get('subrepos'))

@command('backout',
[('', 'merge', None, _('merge with old dirstate parent after backout')),
('', 'commit', None, _('commit if no conflicts were encountered')),
('', 'parent', '',
_('parent to choose when backing out merge (DEPRECATED)'), _('REV')),
('r', 'rev', '', _('revision to backout'), _('REV')),
('e', 'edit', False, _('invoke editor on commit messages')),
] + mergetoolopts + walkopts + commitopts + commitopts2,
_('[OPTION]... [-r] REV'))
def backout(ui, repo, node=None, rev=None, commit=False, **opts):
'''reverse effect of earlier changeset

Prepare a new changeset with the effect of REV undone in the
current working directory.

If REV is the parent of the working directory, then this new changeset
is committed automatically. Otherwise, hg needs to merge the
changes and the merged result is left uncommitted.

.. note::

backout cannot be used to fix either an unwanted or
incorrect merge.

.. container:: verbose

Examples:

- Reverse the effect of the parent of the working directory.
This backout will be committed immediately::

hg backout -r .

- Reverse the effect of previous bad revision 23::

hg backout -r 23
hg commit -m "Backout revision 23"

- Reverse the effect of previous bad revision 23 and
commit the backout immediately::

hg backout -r 23 --commit

By default, the pending changeset will have one parent,
maintaining a linear history. With --merge, the pending
changeset will instead have two parents: the old parent of the
working directory and a new child of REV that simply undoes REV.

Before version 1.7, the behavior without --merge was equivalent
to specifying --merge followed by :hg:`update --clean .` to
cancel the merge and leave the child of REV as a head to be
merged separately.

See :hg:`help dates` for a list of formats valid for -d/--date.

See :hg:`help revert` for a way to restore files to the state
of another revision.

Returns 0 on success, 1 if nothing to backout or there are unresolved
files.
'''
wlock = lock = None
try:
wlock = repo.wlock()
lock = repo.lock()
return _dobackout(ui, repo, node, rev, commit, **opts)
finally:
release(lock, wlock)

def _dobackout(ui, repo, node=None, rev=None, commit=False, **opts):
if rev and node:
raise error.Abort(_("please specify just one revision"))

if not rev:
rev = node

if not rev:
raise error.Abort(_("please specify a revision to backout"))

date = opts.get('date')
if date:
opts['date'] = util.parsedate(date)

cmdutil.checkunfinished(repo)
cmdutil.bailifchanged(repo)
node = scmutil.revsingle(repo, rev).node()

op1, op2 = repo.dirstate.parents()
if not repo.changelog.isancestor(node, op1):
raise error.Abort(_('cannot backout change that is not an ancestor'))

p1, p2 = repo.changelog.parents(node)
if p1 == nullid:
raise error.Abort(_('cannot backout a change with no parents'))
if p2 != nullid:
if not opts.get('parent'):
raise error.Abort(_('cannot backout a merge changeset'))
p = repo.lookup(opts['parent'])
if p not in (p1, p2):
raise error.Abort(_('%s is not a parent of %s') %
(short(p), short(node)))
parent = p
else:
if opts.get('parent'):
raise error.Abort(_('cannot use --parent on non-merge changeset'))
parent = p1
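# note: when REV is a merge changeset, --parent selects which of its two
# parents the backout is computed against; for a non-merge changeset --parent
# is rejected and the single parent p1 is used implicitly.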

# the backout should appear on the same branch
try:
branch = repo.dirstate.branch()
bheads = repo.branchheads(branch)
rctx = scmutil.revsingle(repo, hex(parent))
if not opts.get('merge') and op1 != node:
dsguard = cmdutil.dirstateguard(repo, 'backout')
try:
ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
'backout')
stats = mergemod.update(repo, parent, True, True, node, False)
repo.setparents(op1, op2)
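# note: this is the call touched by this changeset ("merge: have merge.update
# use a matcher instead of partial fn"): the positional 'partial' argument that
# the previous code passed here (as False, before 'node') has been dropped from
# mergemod.update(); per the commit message, callers that need a partial update
# are expected to supply a matcher instead.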
dsguard.close()
hg._showstats(repo, stats)
if stats[3]:
repo.ui.status(_("use 'hg resolve' to retry unresolved "
"file merges\n"))
return 1
elif not commit:
msg = _("changeset %s backed out, "
"don't forget to commit.\n")
ui.status(msg % short(node))
return 0
finally:
ui.setconfig('ui', 'forcemerge', '', '')
lockmod.release(dsguard)
else:
hg.clean(repo, node, show_stats=False)
repo.dirstate.setbranch(branch)
cmdutil.revert(ui, repo, rctx, repo.dirstate.parents())


def commitfunc(ui, repo, message, match, opts):
editform = 'backout'
e = cmdutil.getcommiteditor(editform=editform, **opts)
if not message:
# we don't translate commit messages
message = "Backed out changeset %s" % short(node)
e = cmdutil.getcommiteditor(edit=True, editform=editform)
return repo.commit(message, opts.get('user'), opts.get('date'),
match, editor=e)
newnode = cmdutil.commit(ui, repo, commitfunc, [], opts)
if not newnode:
ui.status(_("nothing changed\n"))
return 1
cmdutil.commitstatus(repo, newnode, branch, bheads)

def nice(node):
return '%d:%s' % (repo.changelog.rev(node), short(node))
ui.status(_('changeset %s backs out changeset %s\n') %
(nice(repo.changelog.tip()), nice(node)))
if opts.get('merge') and op1 != node:
hg.clean(repo, op1, show_stats=False)
ui.status(_('merging with changeset %s\n')
% nice(repo.changelog.tip()))
try:
ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
'backout')
return hg.merge(repo, hex(repo.changelog.tip()))
finally:
ui.setconfig('ui', 'forcemerge', '', '')
finally:
# TODO: get rid of this meaningless try/finally enclosing.
# this is kept only to reduce changes in a patch.
pass
return 0

@command('bisect',
[('r', 'reset', False, _('reset bisect state')),
('g', 'good', False, _('mark changeset good')),
('b', 'bad', False, _('mark changeset bad')),
('s', 'skip', False, _('skip testing changeset')),
('e', 'extend', False, _('extend the bisect range')),
('c', 'command', '', _('use command to check changeset state'), _('CMD')),
('U', 'noupdate', False, _('do not update to target'))],
_("[-gbsr] [-U] [-c CMD] [REV]"))
def bisect(ui, repo, rev=None, extra=None, command=None,
reset=None, good=None, bad=None, skip=None, extend=None,
noupdate=None):
"""subdivision search of changesets

This command helps to find changesets which introduce problems. To
use, mark the earliest changeset you know exhibits the problem as
bad, then mark the latest changeset which is free from the problem
as good. Bisect will update your working directory to a revision
for testing (unless the -U/--noupdate option is specified). Once
you have performed tests, mark the working directory as good or
bad, and bisect will either update to another candidate changeset
or announce that it has found the bad revision.

As a shortcut, you can also use the revision argument to mark a
revision as good or bad without checking it out first.

If you supply a command, it will be used for automatic bisection.
The environment variable HG_NODE will contain the ID of the
changeset being tested. The exit status of the command will be
used to mark revisions as good or bad: status 0 means good, 125
means to skip the revision, 127 (command not found) will abort the
bisection, and any other non-zero exit status means the revision
is bad.

.. container:: verbose

Some examples:

- start a bisection with known bad revision 34, and good revision 12::

hg bisect --bad 34
hg bisect --good 12

- advance the current bisection by marking current revision as good or
bad::

hg bisect --good
hg bisect --bad

- mark the current revision, or a known revision, to be skipped (e.g. if
that revision is not usable because of another issue)::

hg bisect --skip
hg bisect --skip 23

- skip all revisions that do not touch directories ``foo`` or ``bar``::

hg bisect --skip "!( file('path:foo') & file('path:bar') )"

- forget the current bisection::

hg bisect --reset

- use 'make && make tests' to automatically find the first broken
revision::

hg bisect --reset
hg bisect --bad 34
hg bisect --good 12
hg bisect --command "make && make tests"

- see all changesets whose states are already known in the current
bisection::

hg log -r "bisect(pruned)"

- see the changeset currently being bisected (especially useful
if running with -U/--noupdate)::

hg log -r "bisect(current)"

- see all changesets that took part in the current bisection::

hg log -r "bisect(range)"

- you can even get a nice graph::

hg log --graph -r "bisect(range)"

See :hg:`help revsets` for more about the `bisect()` keyword.

Returns 0 on success.
"""
def extendbisectrange(nodes, good):
# bisect is incomplete when it ends on a merge node and
# one of the parent was not checked.
parents = repo[nodes[0]].parents()
if len(parents) > 1:
if good:
side = state['bad']
else:
side = state['good']
num = len(set(i.node() for i in parents) & set(side))
if num == 1:
return parents[0].ancestor(parents[1])
return None

def print_result(nodes, good):
displayer = cmdutil.show_changeset(ui, repo, {})
if len(nodes) == 1:
# narrowed it down to a single revision
if good:
ui.write(_("The first good revision is:\n"))
else:
ui.write(_("The first bad revision is:\n"))
displayer.show(repo[nodes[0]])
extendnode = extendbisectrange(nodes, good)
if extendnode is not None:
ui.write(_('Not all ancestors of this changeset have been'
' checked.\nUse bisect --extend to continue the '
'bisection from\nthe common ancestor, %s.\n')
% extendnode)
else:
# multiple possible revisions
if good:
ui.write(_("Due to skipped revisions, the first "
"good revision could be any of:\n"))
else:
ui.write(_("Due to skipped revisions, the first "
"bad revision could be any of:\n"))
for n in nodes:
displayer.show(repo[n])
displayer.close()

def check_state(state, interactive=True):
if not state['good'] or not state['bad']:
if (good or bad or skip or reset) and interactive:
return
if not state['good']:
raise error.Abort(_('cannot bisect (no known good revisions)'))
else:
raise error.Abort(_('cannot bisect (no known bad revisions)'))
return True

# backward compatibility
if rev in "good bad reset init".split():
ui.warn(_("(use of 'hg bisect <cmd>' is deprecated)\n"))
cmd, rev, extra = rev, extra, None
if cmd == "good":
good = True
elif cmd == "bad":
bad = True
else:
reset = True
elif extra or good + bad + skip + reset + extend + bool(command) > 1:
raise error.Abort(_('incompatible arguments'))

cmdutil.checkunfinished(repo)

if reset:
p = repo.join("bisect.state")
if os.path.exists(p):
os.unlink(p)
return

state = hbisect.load_state(repo)

if command:
changesets = 1
if noupdate:
try:
node = state['current'][0]
except LookupError:
raise error.Abort(_('current bisect revision is unknown - '
'start a new bisect to fix'))
else:
node, p2 = repo.dirstate.parents()
if p2 != nullid:
raise error.Abort(_('current bisect revision is a merge'))
try:
while changesets:
# update state
state['current'] = [node]
hbisect.save_state(repo, state)
status = ui.system(command, environ={'HG_NODE': hex(node)})
if status == 125:
transition = "skip"
elif status == 0:
transition = "good"
# status < 0 means process was killed
elif status == 127:
raise error.Abort(_("failed to execute %s") % command)
elif status < 0:
raise error.Abort(_("%s killed") % command)
else:
transition = "bad"
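# note: this mirrors the exit-status contract described in the docstring:
# 0 -> good, 125 -> skip, 127 (command not found) and negative statuses
# (process killed) abort the bisection, anything else -> bad. A --command
# script can therefore exit with 125 to skip a revision it cannot test.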
ctx = scmutil.revsingle(repo, rev, node)
rev = None # clear for future iterations
state[transition].append(ctx.node())
ui.status(_('changeset %d:%s: %s\n') % (ctx, ctx, transition))
check_state(state, interactive=False)
# bisect
nodes, changesets, bgood = hbisect.bisect(repo.changelog, state)
# update to next check
node = nodes[0]
if not noupdate:
cmdutil.bailifchanged(repo)
hg.clean(repo, node, show_stats=False)
finally:
state['current'] = [node]
hbisect.save_state(repo, state)
print_result(nodes, bgood)
return

# update state

if rev:
nodes = [repo.lookup(i) for i in scmutil.revrange(repo, [rev])]
else:
nodes = [repo.lookup('.')]

if good or bad or skip:
if good:
state['good'] += nodes
elif bad:
state['bad'] += nodes
elif skip:
state['skip'] += nodes
hbisect.save_state(repo, state)

if not check_state(state):
return

# actually bisect
nodes, changesets, good = hbisect.bisect(repo.changelog, state)
if extend:
if not changesets:
extendnode = extendbisectrange(nodes, good)
if extendnode is not None:
ui.write(_("Extending search to changeset %d:%s\n")
% (extendnode.rev(), extendnode))
state['current'] = [extendnode.node()]
hbisect.save_state(repo, state)
if noupdate:
return
cmdutil.bailifchanged(repo)
return hg.clean(repo, extendnode.node())
raise error.Abort(_("nothing to extend"))

if changesets == 0:
print_result(nodes, good)
else:
assert len(nodes) == 1 # only a single node can be tested next
node = nodes[0]
# compute the approximate number of remaining tests
tests, size = 0, 2
while size <= changesets:
tests, size = tests + 1, size * 2
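# note: the loop above doubles 'size' until it exceeds the number of remaining
# candidate changesets, so 'tests' ends up as roughly floor(log2(changesets)),
# the expected number of bisection steps still to run; it is reported as
# "~%d tests" in the status message below.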
rev = repo.changelog.rev(node)
ui.write(_("Testing changeset %d:%s "
"(%d changesets remaining, ~%d tests)\n")
% (rev, short(node), changesets, tests))
state['current'] = [node]
hbisect.save_state(repo, state)
if not noupdate:
cmdutil.bailifchanged(repo)
return hg.clean(repo, node)

@command('bookmarks|bookmark',
[('f', 'force', False, _('force')),
('r', 'rev', '', _('revision'), _('REV')),
('d', 'delete', False, _('delete a given bookmark')),
('m', 'rename', '', _('rename a given bookmark'), _('OLD')),
('i', 'inactive', False, _('mark a bookmark inactive')),
] + formatteropts,
_('hg bookmarks [OPTIONS]... [NAME]...'))
def bookmark(ui, repo, *names, **opts):
'''create a new bookmark or list existing bookmarks

Bookmarks are labels on changesets to help track lines of development.
Bookmarks are unversioned and can be moved, renamed and deleted.
Deleting or moving a bookmark has no effect on the associated changesets.

Creating or updating to a bookmark causes it to be marked as 'active'.
The active bookmark is indicated with a '*'.
When a commit is made, the active bookmark will advance to the new commit.
A plain :hg:`update` will also advance an active bookmark, if possible.
Updating away from a bookmark will cause it to be deactivated.

Bookmarks can be pushed and pulled between repositories (see
:hg:`help push` and :hg:`help pull`). If a shared bookmark has
diverged, a new 'divergent bookmark' of the form 'name@path' will
be created. Using :hg:`merge` will resolve the divergence.

A bookmark named '@' has the special property that :hg:`clone` will
check it out by default if it exists.

.. container:: verbose

Examples:

- create an active bookmark for a new line of development::

hg book new-feature

- create an inactive bookmark as a place marker::

hg book -i reviewed

- create an inactive bookmark on another changeset::

hg book -r .^ tested

- rename bookmark turkey to dinner::

hg book -m turkey dinner

- move the '@' bookmark from another branch::

hg book -f @
'''
force = opts.get('force')
rev = opts.get('rev')
delete = opts.get('delete')
rename = opts.get('rename')
inactive = opts.get('inactive')

def checkformat(mark):
mark = mark.strip()
if not mark:
raise error.Abort(_("bookmark names cannot consist entirely of "
"whitespace"))
scmutil.checknewlabel(repo, mark, 'bookmark')
return mark

def checkconflict(repo, mark, cur, force=False, target=None):
if mark in marks and not force:
if target:
if marks[mark] == target and target == cur:
# re-activating a bookmark
return
anc = repo.changelog.ancestors([repo[target].rev()])
bmctx = repo[marks[mark]]
divs = [repo[b].node() for b in marks
if b.split('@', 1)[0] == mark.split('@', 1)[0]]

# allow resolving a single divergent bookmark even if moving
# the bookmark across branches when a revision is specified
# that contains a divergent bookmark
if bmctx.rev() not in anc and target in divs:
bookmarks.deletedivergent(repo, [target], mark)
return

deletefrom = [b for b in divs
if repo[b].rev() in anc or b == target]
bookmarks.deletedivergent(repo, deletefrom, mark)
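# note: 'divs' collects every bookmark sharing the part of the name before '@'
# (e.g. 'name' and 'name@remote'); divergent copies that point at the target
# or at one of its ancestors are considered superseded and are removed here
# via bookmarks.deletedivergent() before the conflict check continues.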
1053 if bookmarks.validdest(repo, bmctx, repo[target]):
1052 if bookmarks.validdest(repo, bmctx, repo[target]):
1054 ui.status(_("moving bookmark '%s' forward from %s\n") %
1053 ui.status(_("moving bookmark '%s' forward from %s\n") %
1055 (mark, short(bmctx.node())))
1054 (mark, short(bmctx.node())))
1056 return
1055 return
1057 raise error.Abort(_("bookmark '%s' already exists "
1056 raise error.Abort(_("bookmark '%s' already exists "
1058 "(use -f to force)") % mark)
1057 "(use -f to force)") % mark)
1059 if ((mark in repo.branchmap() or mark == repo.dirstate.branch())
1058 if ((mark in repo.branchmap() or mark == repo.dirstate.branch())
1060 and not force):
1059 and not force):
1061 raise error.Abort(
1060 raise error.Abort(
1062 _("a bookmark cannot have the name of an existing branch"))
1061 _("a bookmark cannot have the name of an existing branch"))
1063
1062
1064 if delete and rename:
1063 if delete and rename:
1065 raise error.Abort(_("--delete and --rename are incompatible"))
1064 raise error.Abort(_("--delete and --rename are incompatible"))
1066 if delete and rev:
1065 if delete and rev:
1067 raise error.Abort(_("--rev is incompatible with --delete"))
1066 raise error.Abort(_("--rev is incompatible with --delete"))
1068 if rename and rev:
1067 if rename and rev:
1069 raise error.Abort(_("--rev is incompatible with --rename"))
1068 raise error.Abort(_("--rev is incompatible with --rename"))
1070 if not names and (delete or rev):
1069 if not names and (delete or rev):
1071 raise error.Abort(_("bookmark name required"))
1070 raise error.Abort(_("bookmark name required"))
1072
1071
1073 if delete or rename or names or inactive:
1072 if delete or rename or names or inactive:
1074 wlock = lock = tr = None
1073 wlock = lock = tr = None
1075 try:
1074 try:
1076 wlock = repo.wlock()
1075 wlock = repo.wlock()
1077 lock = repo.lock()
1076 lock = repo.lock()
1078 cur = repo.changectx('.').node()
1077 cur = repo.changectx('.').node()
1079 marks = repo._bookmarks
1078 marks = repo._bookmarks
1080 if delete:
1079 if delete:
1081 tr = repo.transaction('bookmark')
1080 tr = repo.transaction('bookmark')
1082 for mark in names:
1081 for mark in names:
1083 if mark not in marks:
1082 if mark not in marks:
1084 raise error.Abort(_("bookmark '%s' does not exist") %
1083 raise error.Abort(_("bookmark '%s' does not exist") %
1085 mark)
1084 mark)
1086 if mark == repo._activebookmark:
1085 if mark == repo._activebookmark:
1087 bookmarks.deactivate(repo)
1086 bookmarks.deactivate(repo)
1088 del marks[mark]
1087 del marks[mark]
1089
1088
1090 elif rename:
1089 elif rename:
                tr = repo.transaction('bookmark')
                if not names:
                    raise error.Abort(_("new bookmark name required"))
                elif len(names) > 1:
                    raise error.Abort(_("only one new bookmark name allowed"))
                mark = checkformat(names[0])
                if rename not in marks:
                    raise error.Abort(_("bookmark '%s' does not exist")
                                      % rename)
                checkconflict(repo, mark, cur, force)
                marks[mark] = marks[rename]
                if repo._activebookmark == rename and not inactive:
                    bookmarks.activate(repo, mark)
                del marks[rename]
            elif names:
                tr = repo.transaction('bookmark')
                newact = None
                for mark in names:
                    mark = checkformat(mark)
                    if newact is None:
                        newact = mark
                    if inactive and mark == repo._activebookmark:
                        bookmarks.deactivate(repo)
                        return
                    tgt = cur
                    if rev:
                        tgt = scmutil.revsingle(repo, rev).node()
                    checkconflict(repo, mark, cur, force, tgt)
                    marks[mark] = tgt
                if not inactive and cur == marks[newact] and not rev:
                    bookmarks.activate(repo, newact)
                elif cur != tgt and newact == repo._activebookmark:
                    bookmarks.deactivate(repo)
            elif inactive:
                if len(marks) == 0:
                    ui.status(_("no bookmarks set\n"))
                elif not repo._activebookmark:
                    ui.status(_("no active bookmark\n"))
                else:
                    bookmarks.deactivate(repo)
            if tr is not None:
                marks.recordchange(tr)
                tr.close()
        finally:
            lockmod.release(tr, lock, wlock)
    else: # show bookmarks
        fm = ui.formatter('bookmarks', opts)
        hexfn = fm.hexfunc
        marks = repo._bookmarks
        if len(marks) == 0 and not fm:
            ui.status(_("no bookmarks set\n"))
        for bmark, n in sorted(marks.iteritems()):
            active = repo._activebookmark
            if bmark == active:
                prefix, label = '*', activebookmarklabel
            else:
                prefix, label = ' ', ''

            fm.startitem()
            if not ui.quiet:
                fm.plain(' %s ' % prefix, label=label)
            fm.write('bookmark', '%s', bmark, label=label)
            pad = " " * (25 - encoding.colwidth(bmark))
            fm.condwrite(not ui.quiet, 'rev node', pad + ' %d:%s',
                         repo.changelog.rev(n), hexfn(n), label=label)
            fm.data(active=(bmark == active))
            fm.plain('\n')
        fm.end()

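# Illustrative sketch (not part of Mercurial): the listing branch above pads
# every bookmark name to a 25-character column before the "rev:node" suffix.
# A minimal standalone approximation, assuming plain ASCII names instead of
# encoding.colwidth(); the helper name is hypothetical.
def _sketch_format_bookmark_line(prefix, name, rev, shortnode):
    pad = " " * max(25 - len(name), 0)
    return " %s %s%s %d:%s" % (prefix, name, pad, rev, shortnode)

# Example: _sketch_format_bookmark_line('*', 'feature-x', 12, 'abcdef123456')
# returns ' * feature-x' padded to a fixed column, then '12:abcdef123456'.
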
@command('branch',
    [('f', 'force', None,
      _('set branch name even if it shadows an existing branch')),
    ('C', 'clean', None, _('reset branch name to parent branch name'))],
    _('[-fC] [NAME]'))
def branch(ui, repo, label=None, **opts):
    """set or show the current branch name

    .. note::

       Branch names are permanent and global. Use :hg:`bookmark` to create a
       light-weight bookmark instead. See :hg:`help glossary` for more
       information about named branches and bookmarks.

    With no argument, show the current branch name. With one argument,
    set the working directory branch name (the branch will not exist
    in the repository until the next commit). Standard practice
    recommends that primary development take place on the 'default'
    branch.

    Unless -f/--force is specified, branch will not let you set a
    branch name that already exists.

    Use -C/--clean to reset the working directory branch to that of
    the parent of the working directory, negating a previous branch
    change.

    Use the command :hg:`update` to switch to an existing branch. Use
    :hg:`commit --close-branch` to mark this branch head as closed.
    When all heads of the branch are closed, the branch will be
    considered closed.

    Returns 0 on success.
    """
    if label:
        label = label.strip()

    if not opts.get('clean') and not label:
        ui.write("%s\n" % repo.dirstate.branch())
        return

    wlock = repo.wlock()
    try:
        if opts.get('clean'):
            label = repo[None].p1().branch()
            repo.dirstate.setbranch(label)
            ui.status(_('reset working directory to branch %s\n') % label)
        elif label:
            if not opts.get('force') and label in repo.branchmap():
                if label not in [p.branch() for p in repo[None].parents()]:
                    raise error.Abort(_('a branch of the same name already'
                                        ' exists'),
                                      # i18n: "it" refers to an existing branch
                                      hint=_("use 'hg update' to switch to it"))
            scmutil.checknewlabel(repo, label, 'branch')
            repo.dirstate.setbranch(label)
            ui.status(_('marked working directory as branch %s\n') % label)

            # find any open named branches aside from default
            others = [n for n, h, t, c in repo.branchmap().iterbranches()
                      if n != "default" and not c]
            if not others:
                ui.status(_('(branches are permanent and global, '
                            'did you want a bookmark?)\n'))
    finally:
        wlock.release()

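# Illustrative sketch (not part of Mercurial): the --force check above lets a
# name that already exists in the branchmap be reused only when forced, or
# when a parent of the working directory is already on that branch. A
# hypothetical, standalone restatement of that rule:
def _sketch_branch_name_allowed(label, existing, parent_branches, force=False):
    if force or label not in existing:
        return True
    # re-declaring the branch a working-directory parent is on is harmless
    return label in parent_branches

# Example: _sketch_branch_name_allowed('stable', {'default', 'stable'},
#                                      {'default'}) -> False
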
@command('branches',
    [('a', 'active', False,
      _('show only branches that have unmerged heads (DEPRECATED)')),
    ('c', 'closed', False, _('show normal and closed branches')),
    ] + formatteropts,
    _('[-ac]'))
def branches(ui, repo, active=False, closed=False, **opts):
    """list repository named branches

    List the repository's named branches, indicating which ones are
    inactive. If -c/--closed is specified, also list branches which have
    been marked closed (see :hg:`commit --close-branch`).

    Use the command :hg:`update` to switch to an existing branch.

    Returns 0.
    """

    fm = ui.formatter('branches', opts)
    hexfunc = fm.hexfunc

    allheads = set(repo.heads())
    branches = []
    for tag, heads, tip, isclosed in repo.branchmap().iterbranches():
        isactive = not isclosed and bool(set(heads) & allheads)
        branches.append((tag, repo[tip], isactive, not isclosed))
    branches.sort(key=lambda i: (i[2], i[1].rev(), i[0], i[3]),
                  reverse=True)

    for tag, ctx, isactive, isopen in branches:
        if active and not isactive:
            continue
        if isactive:
            label = 'branches.active'
            notice = ''
        elif not isopen:
            if not closed:
                continue
            label = 'branches.closed'
            notice = _(' (closed)')
        else:
            label = 'branches.inactive'
            notice = _(' (inactive)')
        current = (tag == repo.dirstate.branch())
        if current:
            label = 'branches.current'

        fm.startitem()
        fm.write('branch', '%s', tag, label=label)
        rev = ctx.rev()
        padsize = max(31 - len(str(rev)) - encoding.colwidth(tag), 0)
        fmt = ' ' * padsize + ' %d:%s'
        fm.condwrite(not ui.quiet, 'rev node', fmt, rev, hexfunc(ctx.node()),
                     label='log.changeset changeset.%s' % ctx.phasestr())
        fm.data(active=isactive, closed=not isopen, current=current)
        if not ui.quiet:
            fm.plain(notice)
        fm.plain('\n')
    fm.end()

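# Illustrative sketch (not part of Mercurial): the sort above keys each entry
# on (isactive, tip revision, name, isopen) and reverses, so active branches
# with the newest tips are listed first. Standalone version over plain tuples
# of (name, tiprev, isactive, isopen):
def _sketch_sort_branches(rows):
    return sorted(rows, key=lambda r: (r[2], r[1], r[0], r[3]), reverse=True)

# Example:
# _sketch_sort_branches([('old', 3, False, True), ('default', 40, True, True)])
# -> [('default', 40, True, True), ('old', 3, False, True)]
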
@command('bundle',
    [('f', 'force', None, _('run even when the destination is unrelated')),
    ('r', 'rev', [], _('a changeset intended to be added to the destination'),
     _('REV')),
    ('b', 'branch', [], _('a specific branch you would like to bundle'),
     _('BRANCH')),
    ('', 'base', [],
     _('a base changeset assumed to be available at the destination'),
     _('REV')),
    ('a', 'all', None, _('bundle all changesets in the repository')),
    ('t', 'type', 'bzip2', _('bundle compression type to use'), _('TYPE')),
    ] + remoteopts,
    _('[-f] [-t TYPE] [-a] [-r REV]... [--base REV]... FILE [DEST]'))
def bundle(ui, repo, fname, dest=None, **opts):
    """create a changegroup file

    Generate a compressed changegroup file collecting changesets not
    known to be in another repository.

    If you omit the destination repository, then hg assumes the
    destination will have all the nodes you specify with --base
    parameters. To create a bundle containing all changesets, use
    -a/--all (or --base null).

    You can change the bundle format with the -t/--type option. You can
    specify a compression, a bundle version or both using a dash
    (comp-version). The available compression methods are: none, bzip2,
    and gzip (by default, bundles are compressed using bzip2). The
    available formats are: v1, v2 (defaulting to the most suitable).

    The bundle file can then be transferred using conventional means
    and applied to another repository with the unbundle or pull
    command. This is useful when direct push and pull are not
    available or when exporting an entire repository is undesirable.

    Applying bundles preserves all changeset contents including
    permissions, copy/rename information, and revision history.

    Returns 0 on success, 1 if no changes found.
    """
    revs = None
    if 'rev' in opts:
        revs = scmutil.revrange(repo, opts['rev'])

    bundletype = opts.get('type', 'bzip2').lower()
    try:
        bcompression, cgversion, params = exchange.parsebundlespec(
            repo, bundletype, strict=False)
    except error.UnsupportedBundleSpecification as e:
        raise error.Abort(str(e),
                          hint=_('see "hg help bundle" for supported '
                                 'values for --type'))

    # Packed bundles are a pseudo bundle format for now.
    if cgversion == 's1':
        raise error.Abort(_('packed bundles cannot be produced by "hg bundle"'),
                          hint=_('use "hg debugcreatestreamclonebundle"'))

    if opts.get('all'):
        base = ['null']
    else:
        base = scmutil.revrange(repo, opts.get('base'))
    # TODO: get desired bundlecaps from command line.
    bundlecaps = None
    if base:
        if dest:
            raise error.Abort(_("--base is incompatible with specifying "
                                "a destination"))
        common = [repo.lookup(rev) for rev in base]
        heads = revs and map(repo.lookup, revs) or revs
        cg = changegroup.getchangegroup(repo, 'bundle', heads=heads,
                                        common=common, bundlecaps=bundlecaps,
                                        version=cgversion)
        outgoing = None
    else:
        dest = ui.expandpath(dest or 'default-push', dest or 'default')
        dest, branches = hg.parseurl(dest, opts.get('branch'))
        other = hg.peer(repo, opts, dest)
        revs, checkout = hg.addbranchrevs(repo, repo, branches, revs)
        heads = revs and map(repo.lookup, revs) or revs
        outgoing = discovery.findcommonoutgoing(repo, other,
                                                onlyheads=heads,
                                                force=opts.get('force'),
                                                portable=True)
        cg = changegroup.getlocalchangegroup(repo, 'bundle', outgoing,
                                             bundlecaps, version=cgversion)
    if not cg:
        scmutil.nochangesfound(ui, repo, outgoing and outgoing.excluded)
        return 1

    if cgversion == '01': #bundle1
        if bcompression is None:
            bcompression = 'UN'
        bversion = 'HG10' + bcompression
        bcompression = None
    else:
        assert cgversion == '02'
        bversion = 'HG20'

    changegroup.writebundle(ui, cg, fname, bversion, compression=bcompression)

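# Illustrative sketch (not part of Mercurial): -t/--type accepts a compression
# name, a version, or "comp-version" such as "gzip-v2". A hypothetical,
# simplified parser; the real handling is exchange.parsebundlespec(), which
# knows about more specs (e.g. the packed "s1" format) than this sketch does.
def _sketch_parse_bundlespec(spec):
    compressions = {'none', 'bzip2', 'gzip'}
    versions = {'v1', 'v2'}
    if '-' in spec:
        comp, version = spec.split('-', 1)
    elif spec in compressions:
        comp, version = spec, None      # version chosen later
    else:
        comp, version = None, spec
    if (comp is not None and comp not in compressions) or \
       (version is not None and version not in versions):
        raise ValueError('unsupported bundle type: %s' % spec)
    return comp, version

# Example: _sketch_parse_bundlespec('gzip-v2') -> ('gzip', 'v2')
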
@command('cat',
    [('o', 'output', '',
     _('print output to file with formatted name'), _('FORMAT')),
    ('r', 'rev', '', _('print the given revision'), _('REV')),
    ('', 'decode', None, _('apply any matching decode filter')),
    ] + walkopts,
    _('[OPTION]... FILE...'),
    inferrepo=True)
def cat(ui, repo, file1, *pats, **opts):
    """output the current or given revision of files

    Print the specified files as they were at the given revision. If
    no revision is given, the parent of the working directory is used.

    Output may be to a file, in which case the name of the file is
    given using a format string. The formatting rules are as follows:

    :``%%``: literal "%" character
    :``%s``: basename of file being printed
    :``%d``: dirname of file being printed, or '.' if in repository root
    :``%p``: root-relative path name of file being printed
    :``%H``: changeset hash (40 hexadecimal digits)
    :``%R``: changeset revision number
    :``%h``: short-form changeset hash (12 hexadecimal digits)
    :``%r``: zero-padded changeset revision number
    :``%b``: basename of the exporting repository

    Returns 0 on success.
    """
    ctx = scmutil.revsingle(repo, opts.get('rev'))
    m = scmutil.match(ctx, (file1,) + pats, opts)

    return cmdutil.cat(ui, repo, ctx, m, '', **opts)

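# Illustrative sketch (not part of Mercurial): how an -o/--output template such
# as "%d/%s.%R" could be expanded for one file. Hypothetical helper covering
# only a subset of the keys documented above (no %r/%b); the real expansion is
# done inside cmdutil.
def _sketch_expand_output(template, path, rev, node):
    import posixpath
    import re
    repl = {
        '%': '%',
        's': posixpath.basename(path),
        'd': posixpath.dirname(path) or '.',
        'p': path,
        'H': node,
        'h': node[:12],
        'R': str(rev),
    }
    return re.sub(r'%([%sdpHhR])', lambda m: repl[m.group(1)], template)

# Example: _sketch_expand_output('%d/%s.%R', 'src/util.py', 42, '9f3a' * 10)
# -> 'src/util.py.42'
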
@command('^clone',
    [('U', 'noupdate', None, _('the clone will include an empty working '
                               'directory (only a repository)')),
    ('u', 'updaterev', '', _('revision, tag, or branch to check out'),
     _('REV')),
    ('r', 'rev', [], _('include the specified changeset'), _('REV')),
    ('b', 'branch', [], _('clone only the specified branch'), _('BRANCH')),
    ('', 'pull', None, _('use pull protocol to copy metadata')),
    ('', 'uncompressed', None, _('use uncompressed transfer (fast over LAN)')),
    ] + remoteopts,
    _('[OPTION]... SOURCE [DEST]'),
    norepo=True)
def clone(ui, source, dest=None, **opts):
    """make a copy of an existing repository

    Create a copy of an existing repository in a new directory.

    If no destination directory name is specified, it defaults to the
    basename of the source.

    The location of the source is added to the new repository's
    ``.hg/hgrc`` file, as the default to be used for future pulls.

    Only local paths and ``ssh://`` URLs are supported as
    destinations. For ``ssh://`` destinations, no working directory or
    ``.hg/hgrc`` will be created on the remote side.

    To pull only a subset of changesets, specify one or more revision
    identifiers with -r/--rev or branches with -b/--branch. The
    resulting clone will contain only the specified changesets and
    their ancestors. These options (or 'clone src#rev dest') imply
    --pull, even for local source repositories. Note that specifying a
    tag will include the tagged changeset but not the changeset
    containing the tag.

    If the source repository has a bookmark called '@' set, that
    revision will be checked out in the new repository by default.

    To check out a particular version, use -u/--update, or
    -U/--noupdate to create a clone with no working directory.

    .. container:: verbose

      For efficiency, hardlinks are used for cloning whenever the
      source and destination are on the same filesystem (note this
      applies only to the repository data, not to the working
      directory). Some filesystems, such as AFS, implement hardlinking
      incorrectly, but do not report errors. In these cases, use the
      --pull option to avoid hardlinking.

      In some cases, you can clone repositories and the working
      directory using full hardlinks with ::

        $ cp -al REPO REPOCLONE

      This is the fastest way to clone, but it is not always safe. The
      operation is not atomic (making sure REPO is not modified during
      the operation is up to you) and you have to make sure your
      editor breaks hardlinks (Emacs and most Linux Kernel tools do
      so). Also, this is not compatible with certain extensions that
      place their metadata under the .hg directory, such as mq.

      Mercurial will update the working directory to the first applicable
      revision from this list:

      a) null if -U or the source repository has no changesets
      b) if -u . and the source repository is local, the first parent of
         the source repository's working directory
      c) the changeset specified with -u (if a branch name, this means the
         latest head of that branch)
      d) the changeset specified with -r
      e) the tipmost head specified with -b
      f) the tipmost head specified with the url#branch source syntax
      g) the revision marked with the '@' bookmark, if present
      h) the tipmost head of the default branch
      i) tip

      Examples:

      - clone a remote repository to a new directory named hg/::

          hg clone http://selenic.com/hg

      - create a lightweight local clone::

          hg clone project/ project-feature/

      - clone from an absolute path on an ssh server (note double-slash)::

          hg clone ssh://user@server//home/projects/alpha/

      - do a high-speed clone over a LAN while checking out a
        specified version::

          hg clone --uncompressed http://server/repo -u 1.5

      - create a repository without changesets after a particular revision::

          hg clone -r 04e544 experimental/ good/

      - clone (and track) a particular named branch::

          hg clone http://selenic.com/hg#stable

    See :hg:`help urls` for details on specifying URLs.

    Returns 0 on success.
    """
    if opts.get('noupdate') and opts.get('updaterev'):
        raise error.Abort(_("cannot specify both --noupdate and --updaterev"))

    r = hg.clone(ui, opts, source, dest,
                 pull=opts.get('pull'),
                 stream=opts.get('uncompressed'),
                 rev=opts.get('rev'),
                 update=opts.get('updaterev') or not opts.get('noupdate'),
                 branch=opts.get('branch'),
                 shareopts=opts.get('shareopts'))

    return r is None

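# Illustrative sketch (not part of Mercurial): the docstring above gives a
# strict priority order (a through i) for the revision checked out after a
# clone. A hypothetical reduction of that rule to "first applicable candidate
# wins", with candidates supplied in that order:
def _sketch_pick_checkout(candidates):
    # candidates might be, in order: rev from -u, rev from -r, head from -b,
    # url#branch head, the '@' bookmark, the default branch head, tip
    for candidate in candidates:
        if candidate is not None:
            return candidate
    return 'null'  # nothing applicable: empty working directory

# Example: _sketch_pick_checkout([None, None, 'stable-head']) -> 'stable-head'
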
@command('^commit|ci',
    [('A', 'addremove', None,
      _('mark new/missing files as added/removed before committing')),
    ('', 'close-branch', None,
     _('mark a branch head as closed')),
    ('', 'amend', None, _('amend the parent of the working directory')),
    ('s', 'secret', None, _('use the secret phase for committing')),
    ('e', 'edit', None, _('invoke editor on commit messages')),
    ('i', 'interactive', None, _('use interactive mode')),
    ] + walkopts + commitopts + commitopts2 + subrepoopts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def commit(ui, repo, *pats, **opts):
    """commit the specified files or all outstanding changes

    Commit changes to the given files into the repository. Unlike a
    centralized SCM, this operation is a local operation. See
    :hg:`push` for a way to actively distribute your changes.

    If a list of files is omitted, all changes reported by :hg:`status`
    will be committed.

    If you are committing the result of a merge, do not provide any
    filenames or -I/-X filters.

    If no commit message is specified, Mercurial starts your
    configured editor where you can enter a message. In case your
    commit fails, you will find a backup of your message in
    ``.hg/last-message.txt``.

    The --close-branch flag can be used to mark the current branch
    head closed. When all heads of a branch are closed, the branch
    will be considered closed and no longer listed.

    The --amend flag can be used to amend the parent of the
    working directory with a new commit that contains the changes
    in the parent in addition to those currently reported by :hg:`status`,
    if there are any. The old commit is stored in a backup bundle in
    ``.hg/strip-backup`` (see :hg:`help bundle` and :hg:`help unbundle`
    on how to restore it).

    Message, user and date are taken from the amended commit unless
    specified. When a message isn't specified on the command line,
    the editor will open with the message of the amended commit.

    It is not possible to amend public changesets (see :hg:`help phases`)
    or changesets that have children.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    Returns 0 on success, 1 if nothing changed.

    .. container:: verbose

      Examples:

      - commit all files ending in .py::

          hg commit --include "set:**.py"

      - commit all non-binary files::

          hg commit --exclude "set:binary()"

      - amend the current commit and set the date to now::

          hg commit --amend --date now
    """
    wlock = lock = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        return _docommit(ui, repo, *pats, **opts)
    finally:
        release(lock, wlock)

def _docommit(ui, repo, *pats, **opts):
    if opts.get('interactive'):
        opts.pop('interactive')
        cmdutil.dorecord(ui, repo, commit, None, False,
                         cmdutil.recordfilter, *pats, **opts)
        return

    if opts.get('subrepos'):
        if opts.get('amend'):
            raise error.Abort(_('cannot amend with --subrepos'))
        # Let --subrepos on the command line override config setting.
        ui.setconfig('ui', 'commitsubrepos', True, 'commit')

    cmdutil.checkunfinished(repo, commit=True)

    branch = repo[None].branch()
    bheads = repo.branchheads(branch)

    extra = {}
    if opts.get('close_branch'):
        extra['close'] = 1

        if not bheads:
            raise error.Abort(_('can only close branch heads'))
        elif opts.get('amend'):
            if repo[None].parents()[0].p1().branch() != branch and \
                    repo[None].parents()[0].p2().branch() != branch:
                raise error.Abort(_('can only close branch heads'))

    if opts.get('amend'):
        if ui.configbool('ui', 'commitsubrepos'):
            raise error.Abort(_('cannot amend with ui.commitsubrepos enabled'))

        old = repo['.']
        if not old.mutable():
            raise error.Abort(_('cannot amend public changesets'))
        if len(repo[None].parents()) > 1:
            raise error.Abort(_('cannot amend while merging'))
        allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
        if not allowunstable and old.children():
            raise error.Abort(_('cannot amend changeset with children'))

        newextra = extra.copy()
        newextra['branch'] = branch
        extra = newextra
        # commitfunc is used only for temporary amend commit by cmdutil.amend
        def commitfunc(ui, repo, message, match, opts):
            return repo.commit(message,
                               opts.get('user') or old.user(),
                               opts.get('date') or old.date(),
                               match,
                               extra=extra)

        node = cmdutil.amend(ui, repo, commitfunc, old, extra, pats, opts)
        if node == old.node():
            ui.status(_("nothing changed\n"))
            return 1
    else:
        def commitfunc(ui, repo, message, match, opts):
            backup = ui.backupconfig('phases', 'new-commit')
            baseui = repo.baseui
            basebackup = baseui.backupconfig('phases', 'new-commit')
            try:
                if opts.get('secret'):
                    ui.setconfig('phases', 'new-commit', 'secret', 'commit')
                    # Propagate to subrepos
                    baseui.setconfig('phases', 'new-commit', 'secret', 'commit')

                editform = cmdutil.mergeeditform(repo[None], 'commit.normal')
                editor = cmdutil.getcommiteditor(editform=editform, **opts)
                return repo.commit(message, opts.get('user'), opts.get('date'),
                                   match,
                                   editor=editor,
                                   extra=extra)
            finally:
                ui.restoreconfig(backup)
                repo.baseui.restoreconfig(basebackup)

        node = cmdutil.commit(ui, repo, commitfunc, pats, opts)

    if not node:
        stat = repo.status(match=scmutil.match(repo[None], pats, opts))
        if stat[3]:
            ui.status(_("nothing changed (%d missing files, see "
                        "'hg status')\n") % len(stat[3]))
        else:
            ui.status(_("nothing changed\n"))
        return 1

    cmdutil.commitstatus(repo, node, branch, bheads, opts)

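# Illustrative sketch (not part of Mercurial): the non-amend commitfunc above
# temporarily forces phases.new-commit to "secret" for -s/--secret and restores
# the previous value afterwards. The same backup/override/restore pattern with
# a plain dict standing in for the ui configuration (hypothetical helper):
def _sketch_with_secret_phase(config, commit):
    backup = config.get('phases.new-commit')
    config['phases.new-commit'] = 'secret'
    try:
        return commit()
    finally:
        if backup is None:
            config.pop('phases.new-commit', None)
        else:
            config['phases.new-commit'] = backup

# Example: _sketch_with_secret_phase({}, lambda: 'new node') -> 'new node'
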
@command('config|showconfig|debugconfig',
    [('u', 'untrusted', None, _('show untrusted configuration options')),
     ('e', 'edit', None, _('edit user config')),
     ('l', 'local', None, _('edit repository config')),
     ('g', 'global', None, _('edit global config'))],
    _('[-u] [NAME]...'),
    optionalrepo=True)
def config(ui, repo, *values, **opts):
    """show combined config settings from all hgrc files

    With no arguments, print names and values of all config items.

    With one argument of the form section.name, print just the value
    of that config item.

    With multiple arguments, print names and values of all config
    items with matching section names.

    With --edit, start an editor on the user-level config file. With
    --global, edit the system-wide config file. With --local, edit the
    repository-level config file.

    With --debug, the source (filename and line number) is printed
    for each config item.

    See :hg:`help config` for more information about config files.

    Returns 0 on success, 1 if NAME does not exist.

    """

    if opts.get('edit') or opts.get('local') or opts.get('global'):
        if opts.get('local') and opts.get('global'):
            raise error.Abort(_("can't use --local and --global together"))

        if opts.get('local'):
            if not repo:
                raise error.Abort(_("can't use --local outside a repository"))
            paths = [repo.join('hgrc')]
        elif opts.get('global'):
            paths = scmutil.systemrcpath()
        else:
            paths = scmutil.userrcpath()

        for f in paths:
            if os.path.exists(f):
                break
        else:
            if opts.get('global'):
                samplehgrc = uimod.samplehgrcs['global']
            elif opts.get('local'):
                samplehgrc = uimod.samplehgrcs['local']
            else:
                samplehgrc = uimod.samplehgrcs['user']

            f = paths[0]
            fp = open(f, "w")
            fp.write(samplehgrc)
            fp.close()

        editor = ui.geteditor()
        ui.system("%s \"%s\"" % (editor, f),
                  onerr=error.Abort, errprefix=_("edit failed"))
        return

    for f in scmutil.rcpath():
        ui.debug('read config from: %s\n' % f)
    untrusted = bool(opts.get('untrusted'))
    if values:
        sections = [v for v in values if '.' not in v]
        items = [v for v in values if '.' in v]
        if len(items) > 1 or items and sections:
            raise error.Abort(_('only one config item permitted'))
    matched = False
    for section, name, value in ui.walkconfig(untrusted=untrusted):
        value = str(value).replace('\n', '\\n')
        sectname = section + '.' + name
        if values:
            for v in values:
                if v == section:
                    ui.debug('%s: ' %
                             ui.configsource(section, name, untrusted))
                    ui.write('%s=%s\n' % (sectname, value))
                    matched = True
                elif v == sectname:
                    ui.debug('%s: ' %
                             ui.configsource(section, name, untrusted))
                    ui.write(value, '\n')
                    matched = True
        else:
            ui.debug('%s: ' %
                     ui.configsource(section, name, untrusted))
            ui.write('%s=%s\n' % (sectname, value))
            matched = True
    if matched:
        return 0
    return 1

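# Illustrative sketch (not part of Mercurial): the loop above prints an item
# when a NAME argument equals its section ("ui") or its full dotted name
# ("ui.username"), and prints everything when no NAME is given. Standalone
# approximation over a {(section, name): value} dict (hypothetical helper):
def _sketch_filter_config(items, queries):
    matched = []
    for (section, name), value in sorted(items.items()):
        fullname = '%s.%s' % (section, name)
        if not queries or section in queries or fullname in queries:
            matched.append((fullname, value))
    return matched

# Example: _sketch_filter_config({('ui', 'username'): 'alice'}, ['ui'])
# -> [('ui.username', 'alice')]
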
@command('copy|cp',
    [('A', 'after', None, _('record a copy that has already occurred')),
    ('f', 'force', None, _('forcibly copy over an existing managed file')),
    ] + walkopts + dryrunopts,
    _('[OPTION]... [SOURCE]... DEST'))
def copy(ui, repo, *pats, **opts):
    """mark files as copied for the next commit

    Mark dest as having copies of source files. If dest is a
    directory, copies are put in that directory. If dest is a file,
    the source must be a single file.

    By default, this command copies the contents of files as they
    exist in the working directory. If invoked with -A/--after, the
    operation is recorded, but no copying is performed.

    This command takes effect with the next commit. To undo a copy
    before that, see :hg:`revert`.

    Returns 0 on success, 1 if errors are encountered.
    """
    wlock = repo.wlock(False)
    try:
        return cmdutil.copy(ui, repo, pats, opts)
    finally:
        wlock.release()

@command('debugancestor', [], _('[INDEX] REV1 REV2'), optionalrepo=True)
def debugancestor(ui, repo, *args):
    """find the ancestor revision of two revisions in a given index"""
    if len(args) == 3:
        index, rev1, rev2 = args
        r = revlog.revlog(scmutil.opener(os.getcwd(), audit=False), index)
        lookup = r.lookup
    elif len(args) == 2:
        if not repo:
            raise error.Abort(_("there is no Mercurial repository here "
                                "(.hg not found)"))
        rev1, rev2 = args
        r = repo.changelog
        lookup = repo.lookup
    else:
        raise error.Abort(_('either two or three arguments required'))
    a = r.ancestor(lookup(rev1), lookup(rev2))
    ui.write("%d:%s\n" % (r.rev(a), hex(a)))

@command('debugbuilddag',
    [('m', 'mergeable-file', None, _('add single file mergeable changes')),
    ('o', 'overwritten-file', None, _('add single file all revs overwrite')),
    ('n', 'new-file', None, _('add new file at each rev'))],
    _('[OPTION]... [TEXT]'))
def debugbuilddag(ui, repo, text=None,
                  mergeable_file=False,
                  overwritten_file=False,
                  new_file=False):
    """builds a repo with a given DAG from scratch in the current empty repo

    The description of the DAG is read from stdin if not given on the
    command line.

    Elements:

     - "+n" is a linear run of n nodes based on the current default parent
     - "." is a single node based on the current default parent
     - "$" resets the default parent to null (implied at the start);
       otherwise the default parent is always the last node created
     - "<p" sets the default parent to the backref p
     - "*p" is a fork at parent p, which is a backref
     - "*p1/p2" is a merge of parents p1 and p2, which are backrefs
     - "/p2" is a merge of the preceding node and p2
     - ":tag" defines a local tag for the preceding node
     - "@branch" sets the named branch for subsequent nodes
     - "#...\\n" is a comment up to the end of the line

    Whitespace between the above elements is ignored.

    A backref is either

     - a number n, which references the node curr-n, where curr is the current
       node, or
     - the name of a local tag you placed earlier using ":tag", or
     - empty to denote the default parent.

    All string-valued elements are either strictly alphanumeric, or must
    be enclosed in double quotes ("..."), with "\\" as escape character.
    """

    if text is None:
        ui.status(_("reading DAG from stdin\n"))
        text = ui.fin.read()

    cl = repo.changelog
    if len(cl) > 0:
        raise error.Abort(_('repository is not empty'))

    # determine number of revs in DAG
    total = 0
    for type, data in dagparser.parsedag(text):
        if type == 'n':
            total += 1

    if mergeable_file:
        linesperrev = 2
        # make a file with k lines per rev
        initialmergedlines = [str(i) for i in xrange(0, total * linesperrev)]
        initialmergedlines.append("")

    tags = []

    lock = tr = None
    try:
        lock = repo.lock()
        tr = repo.transaction("builddag")

        at = -1
        atbranch = 'default'
        nodeids = []
        id = 0
        ui.progress(_('building'), id, unit=_('revisions'), total=total)
        for type, data in dagparser.parsedag(text):
            if type == 'n':
                ui.note(('node %s\n' % str(data)))
                id, ps = data

                files = []
                fctxs = {}

                p2 = None
                if mergeable_file:
                    fn = "mf"
                    p1 = repo[ps[0]]
                    if len(ps) > 1:
                        p2 = repo[ps[1]]
                        pa = p1.ancestor(p2)
                        base, local, other = [x[fn].data() for x in (pa, p1,
                                                                     p2)]
                        m3 = simplemerge.Merge3Text(base, local, other)
                        ml = [l.strip() for l in m3.merge_lines()]
                        ml.append("")
                    elif at > 0:
                        ml = p1[fn].data().split("\n")
                    else:
                        ml = initialmergedlines
                    ml[id * linesperrev] += " r%i" % id
                    mergedtext = "\n".join(ml)
                    files.append(fn)
                    fctxs[fn] = context.memfilectx(repo, fn, mergedtext)

                if overwritten_file:
                    fn = "of"
                    files.append(fn)
                    fctxs[fn] = context.memfilectx(repo, fn, "r%i\n" % id)

                if new_file:
                    fn = "nf%i" % id
                    files.append(fn)
                    fctxs[fn] = context.memfilectx(repo, fn, "r%i\n" % id)
                    if len(ps) > 1:
                        if not p2:
                            p2 = repo[ps[1]]
                        for fn in p2:
                            if fn.startswith("nf"):
                                files.append(fn)
                                fctxs[fn] = p2[fn]

                def fctxfn(repo, cx, path):
                    return fctxs.get(path)

                if len(ps) == 0 or ps[0] < 0:
                    pars = [None, None]
                elif len(ps) == 1:
                    pars = [nodeids[ps[0]], None]
                else:
                    pars = [nodeids[p] for p in ps]
                cx = context.memctx(repo, pars, "r%i" % id, files, fctxfn,
                                    date=(id, 0),
                                    user="debugbuilddag",
                                    extra={'branch': atbranch})
1988 nodeid = repo.commitctx(cx)
1987 nodeid = repo.commitctx(cx)
1989 nodeids.append(nodeid)
1988 nodeids.append(nodeid)
1990 at = id
1989 at = id
1991 elif type == 'l':
1990 elif type == 'l':
1992 id, name = data
1991 id, name = data
1993 ui.note(('tag %s\n' % name))
1992 ui.note(('tag %s\n' % name))
1994 tags.append("%s %s\n" % (hex(repo.changelog.node(id)), name))
1993 tags.append("%s %s\n" % (hex(repo.changelog.node(id)), name))
1995 elif type == 'a':
1994 elif type == 'a':
1996 ui.note(('branch %s\n' % data))
1995 ui.note(('branch %s\n' % data))
1997 atbranch = data
1996 atbranch = data
1998 ui.progress(_('building'), id, unit=_('revisions'), total=total)
1997 ui.progress(_('building'), id, unit=_('revisions'), total=total)
1999 tr.close()
1998 tr.close()
2000
1999
2001 if tags:
2000 if tags:
2002 repo.vfs.write("localtags", "".join(tags))
2001 repo.vfs.write("localtags", "".join(tags))
2003 finally:
2002 finally:
2004 ui.progress(_('building'), None)
2003 ui.progress(_('building'), None)
2005 release(tr, lock)
2004 release(tr, lock)
2006
2005
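# Illustrative sketch (not part of the original command set): how the DAG
# text format documented in debugbuilddag's docstring is consumed.  It
# mirrors the event dispatch above; the helper name and the ui.note
# messages are ours, the dagparser API and event shapes are the ones
# debugbuilddag already relies on.  Any text accepted by 'hg debugbuilddag'
# can be passed straight through.
def _exampledagevents(ui, text):
    for type, data in dagparser.parsedag(text):
        if type == 'n':       # a node: (id, [parent ids])
            id, ps = data
            ui.note("node r%i, parents %r\n" % (id, ps))
        elif type == 'l':     # ":tag" attached to the preceding node
            id, name = data
            ui.note("tag %s -> r%i\n" % (name, id))
        elif type == 'a':     # "@branch" annotation for subsequent nodes
            ui.note("branch %s\n" % data)
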
@command('debugbundle',
    [('a', 'all', None, _('show all details'))],
    _('FILE'),
    norepo=True)
def debugbundle(ui, bundlepath, all=None, **opts):
    """lists the contents of a bundle"""
    f = hg.openpath(ui, bundlepath)
    try:
        gen = exchange.readbundle(ui, f, bundlepath)
        if isinstance(gen, bundle2.unbundle20):
            return _debugbundle2(ui, gen, all=all, **opts)
        if all:
            ui.write(("format: id, p1, p2, cset, delta base, len(delta)\n"))

            def showchunks(named):
                ui.write("\n%s\n" % named)
                chain = None
                while True:
                    chunkdata = gen.deltachunk(chain)
                    if not chunkdata:
                        break
                    node = chunkdata['node']
                    p1 = chunkdata['p1']
                    p2 = chunkdata['p2']
                    cs = chunkdata['cs']
                    deltabase = chunkdata['deltabase']
                    delta = chunkdata['delta']
                    ui.write("%s %s %s %s %s %s\n" %
                             (hex(node), hex(p1), hex(p2),
                              hex(cs), hex(deltabase), len(delta)))
                    chain = node

            chunkdata = gen.changelogheader()
            showchunks("changelog")
            chunkdata = gen.manifestheader()
            showchunks("manifest")
            while True:
                chunkdata = gen.filelogheader()
                if not chunkdata:
                    break
                fname = chunkdata['filename']
                showchunks(fname)
        else:
            if isinstance(gen, bundle2.unbundle20):
                raise error.Abort(_('use debugbundle2 for this file'))
            chunkdata = gen.changelogheader()
            chain = None
            while True:
                chunkdata = gen.deltachunk(chain)
                if not chunkdata:
                    break
                node = chunkdata['node']
                ui.write("%s\n" % hex(node))
                chain = node
    finally:
        f.close()

def _debugbundle2(ui, gen, **opts):
    """lists the contents of a bundle2"""
    if not isinstance(gen, bundle2.unbundle20):
        raise error.Abort(_('not a bundle2 file'))
    ui.write(('Stream params: %s\n' % repr(gen.params)))
    for part in gen.iterparts():
        ui.write('%s -- %r\n' % (part.type, repr(part.params)))
        if part.type == 'changegroup':
            version = part.params.get('version', '01')
            cg = changegroup.packermap[version][1](part, 'UN')
            chunkdata = cg.changelogheader()
            chain = None
            while True:
                chunkdata = cg.deltachunk(chain)
                if not chunkdata:
                    break
                node = chunkdata['node']
                ui.write(" %s\n" % hex(node))
                chain = node

@command('debugcreatestreamclonebundle', [], 'FILE')
def debugcreatestreamclonebundle(ui, repo, fname):
    """create a stream clone bundle file

    Stream bundles are special bundles that are essentially archives of
    revlog files. They are commonly used for cloning very quickly.
    """
    requirements, gen = streamclone.generatebundlev1(repo)
    changegroup.writechunks(ui, gen, fname)

    ui.write(_('bundle requirements: %s\n') % ', '.join(sorted(requirements)))

@command('debugapplystreamclonebundle', [], 'FILE')
def debugapplystreamclonebundle(ui, repo, fname):
    """apply a stream clone bundle file"""
    f = hg.openpath(ui, fname)
    gen = exchange.readbundle(ui, f, fname)
    gen.apply(repo)

@command('debugcheckstate', [], '')
def debugcheckstate(ui, repo):
    """validate the correctness of the current dirstate"""
    parent1, parent2 = repo.dirstate.parents()
    m1 = repo[parent1].manifest()
    m2 = repo[parent2].manifest()
    errors = 0
    for f in repo.dirstate:
        state = repo.dirstate[f]
        if state in "nr" and f not in m1:
            ui.warn(_("%s in state %s, but not in manifest1\n") % (f, state))
            errors += 1
        if state in "a" and f in m1:
            ui.warn(_("%s in state %s, but also in manifest1\n") % (f, state))
            errors += 1
        if state in "m" and f not in m1 and f not in m2:
            ui.warn(_("%s in state %s, but not in either manifest\n") %
                    (f, state))
            errors += 1
    for f in m1:
        state = repo.dirstate[f]
        if state not in "nrm":
            ui.warn(_("%s in manifest1, but listed as state %s") % (f, state))
            errors += 1
    if errors:
        errstr = _(".hg/dirstate inconsistent with current parent's manifest")
        raise error.Abort(errstr)

@command('debugcommands', [], _('[COMMAND]'), norepo=True)
def debugcommands(ui, cmd='', *args):
    """list all available commands and options"""
    for cmd, vals in sorted(table.iteritems()):
        cmd = cmd.split('|')[0].strip('^')
        opts = ', '.join([i[1] for i in vals[1]])
        ui.write('%s: %s\n' % (cmd, opts))

@command('debugcomplete',
    [('o', 'options', None, _('show the command options'))],
    _('[-o] CMD'),
    norepo=True)
def debugcomplete(ui, cmd='', **opts):
    """returns the completion list associated with the given command"""

    if opts.get('options'):
        options = []
        otables = [globalopts]
        if cmd:
            aliases, entry = cmdutil.findcmd(cmd, table, False)
            otables.append(entry[1])
        for t in otables:
            for o in t:
                if "(DEPRECATED)" in o[3]:
                    continue
                if o[0]:
                    options.append('-%s' % o[0])
                options.append('--%s' % o[1])
        ui.write("%s\n" % "\n".join(options))
        return

    cmdlist, unused_allcmds = cmdutil.findpossible(cmd, table)
    if ui.verbose:
        cmdlist = [' '.join(c[0]) for c in cmdlist.values()]
    ui.write("%s\n" % "\n".join(sorted(cmdlist)))

@command('debugdag',
    [('t', 'tags', None, _('use tags as labels')),
    ('b', 'branches', None, _('annotate with branch names')),
    ('', 'dots', None, _('use dots for runs')),
    ('s', 'spaces', None, _('separate elements by spaces'))],
    _('[OPTION]... [FILE [REV]...]'),
    optionalrepo=True)
def debugdag(ui, repo, file_=None, *revs, **opts):
    """format the changelog or an index DAG as a concise textual description

    If you pass a revlog index, the revlog's DAG is emitted. If you list
    revision numbers, they get labeled in the output as rN.

    Otherwise, the changelog DAG of the current repo is emitted.
    """
    spaces = opts.get('spaces')
    dots = opts.get('dots')
    if file_:
        rlog = revlog.revlog(scmutil.opener(os.getcwd(), audit=False), file_)
        revs = set((int(r) for r in revs))
        def events():
            for r in rlog:
                yield 'n', (r, list(p for p in rlog.parentrevs(r)
                                    if p != -1))
                if r in revs:
                    yield 'l', (r, "r%i" % r)
    elif repo:
        cl = repo.changelog
        tags = opts.get('tags')
        branches = opts.get('branches')
        if tags:
            labels = {}
            for l, n in repo.tags().items():
                labels.setdefault(cl.rev(n), []).append(l)
        def events():
            b = "default"
            for r in cl:
                if branches:
                    newb = cl.read(cl.node(r))[5]['branch']
                    if newb != b:
                        yield 'a', newb
                        b = newb
                yield 'n', (r, list(p for p in cl.parentrevs(r)
                                    if p != -1))
                if tags:
                    ls = labels.get(r)
                    if ls:
                        for l in ls:
                            yield 'l', (r, l)
    else:
        raise error.Abort(_('need repo for changelog dag'))

    for line in dagparser.dagtextlines(events(),
                                       addspaces=spaces,
                                       wraplabels=True,
                                       wrapannotations=True,
                                       wrapnonlinear=dots,
                                       usedots=dots,
                                       maxlinewidth=70):
        ui.write(line)
        ui.write("\n")

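# Illustrative sketch (not part of the original command set): dagtextlines
# is the inverse of parsedag - it renders ('n', ...), ('l', ...) and
# ('a', ...) events back into the concise text form debugdag emits.  The
# event list below is a hypothetical three-node graph with one tag; the
# keyword arguments are the same ones debugdag passes above.
def _exampledagtext(ui):
    events = [('n', (0, [])),        # root revision
              ('n', (1, [0])),       # child of r0
              ('l', (1, "mytag")),   # local tag on r1
              ('n', (2, [0]))]       # a second head forking from r0
    for line in dagparser.dagtextlines(events,
                                       addspaces=True,
                                       wraplabels=True,
                                       wrapannotations=True,
                                       wrapnonlinear=False,
                                       usedots=False,
                                       maxlinewidth=70):
        ui.write(line)
        ui.write("\n")
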
@command('debugdata', debugrevlogopts, _('-c|-m|FILE REV'))
def debugdata(ui, repo, file_, rev=None, **opts):
    """dump the contents of a data file revision"""
    if opts.get('changelog') or opts.get('manifest'):
        file_, rev = None, file_
    elif rev is None:
        raise error.CommandError('debugdata', _('invalid arguments'))
    r = cmdutil.openrevlog(repo, 'debugdata', file_, opts)
    try:
        ui.write(r.revision(r.lookup(rev)))
    except KeyError:
        raise error.Abort(_('invalid revision identifier %s') % rev)

@command('debugdate',
    [('e', 'extended', None, _('try extended date formats'))],
    _('[-e] DATE [RANGE]'),
    norepo=True, optionalrepo=True)
def debugdate(ui, date, range=None, **opts):
    """parse and display a date"""
    if opts["extended"]:
        d = util.parsedate(date, util.extendeddateformats)
    else:
        d = util.parsedate(date)
    ui.write(("internal: %s %s\n") % d)
    ui.write(("standard: %s\n") % util.datestr(d))
    if range:
        m = util.matchdate(range)
        ui.write(("match: %s\n") % m(d[0]))

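# Illustrative sketch (ours): util.parsedate returns the (unixtime, offset)
# pair that debugdate prints as the "internal" form, util.datestr renders
# the "standard" form, and util.matchdate turns a range such as
# ">2015-01-01" into a predicate over unix timestamps.  The sample strings
# are hypothetical.
def _exampledates():
    d = util.parsedate("2015-11-01")      # -> (unixtime, offset)
    standard = util.datestr(d)            # human-readable form
    m = util.matchdate(">2015-01-01")     # predicate over timestamps
    return standard, m(d[0])
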
@command('debugdiscovery',
    [('', 'old', None, _('use old-style discovery')),
    ('', 'nonheads', None,
     _('use old-style discovery with non-heads included')),
    ] + remoteopts,
    _('[-l REV] [-r REV] [-b BRANCH]... [OTHER]'))
def debugdiscovery(ui, repo, remoteurl="default", **opts):
    """runs the changeset discovery protocol in isolation"""
    remoteurl, branches = hg.parseurl(ui.expandpath(remoteurl),
                                      opts.get('branch'))
    remote = hg.peer(repo, opts, remoteurl)
    ui.status(_('comparing with %s\n') % util.hidepassword(remoteurl))

    # make sure tests are repeatable
    random.seed(12323)

    def doit(localheads, remoteheads, remote=remote):
        if opts.get('old'):
            if localheads:
                raise error.Abort('cannot use localheads with old style '
                                  'discovery')
            if not util.safehasattr(remote, 'branches'):
                # enable in-client legacy support
                remote = localrepo.locallegacypeer(remote.local())
            common, _in, hds = treediscovery.findcommonincoming(repo, remote,
                                                                force=True)
            common = set(common)
            if not opts.get('nonheads'):
                ui.write(("unpruned common: %s\n") %
                         " ".join(sorted(short(n) for n in common)))
                dag = dagutil.revlogdag(repo.changelog)
                all = dag.ancestorset(dag.internalizeall(common))
                common = dag.externalizeall(dag.headsetofconnecteds(all))
        else:
            common, any, hds = setdiscovery.findcommonheads(ui, repo, remote)
        common = set(common)
        rheads = set(hds)
        lheads = set(repo.heads())
        ui.write(("common heads: %s\n") %
                 " ".join(sorted(short(n) for n in common)))
        if lheads <= common:
            ui.write(("local is subset\n"))
        elif rheads <= common:
            ui.write(("remote is subset\n"))

    serverlogs = opts.get('serverlog')
    if serverlogs:
        for filename in serverlogs:
            logfile = open(filename, 'r')
            try:
                line = logfile.readline()
                while line:
                    parts = line.strip().split(';')
                    op = parts[1]
                    if op == 'cg':
                        pass
                    elif op == 'cgss':
                        doit(parts[2].split(' '), parts[3].split(' '))
                    elif op == 'unb':
                        doit(parts[3].split(' '), parts[2].split(' '))
                    line = logfile.readline()
            finally:
                logfile.close()

    else:
        remoterevs, _checkout = hg.addbranchrevs(repo, remote, branches,
                                                 opts.get('remote_head'))
        localrevs = opts.get('local_head')
        doit(localrevs, remoterevs)

@command('debugextensions', formatteropts, [], norepo=True)
def debugextensions(ui, **opts):
    '''show information about active extensions'''
    exts = extensions.extensions(ui)
    fm = ui.formatter('debugextensions', opts)
    for extname, extmod in sorted(exts, key=operator.itemgetter(0)):
        extsource = extmod.__file__
        exttestedwith = getattr(extmod, 'testedwith', None)
        if exttestedwith is not None:
            exttestedwith = exttestedwith.split()
        extbuglink = getattr(extmod, 'buglink', None)

        fm.startitem()

        if ui.quiet or ui.verbose:
            fm.write('name', '%s\n', extname)
        else:
            fm.write('name', '%s', extname)
            if not exttestedwith:
                fm.plain(_(' (untested!)\n'))
            else:
                if exttestedwith == ['internal'] or \
                   util.version() in exttestedwith:
                    fm.plain('\n')
                else:
                    lasttestedversion = exttestedwith[-1]
                    fm.plain(' (%s!)\n' % lasttestedversion)

        fm.condwrite(ui.verbose and extsource, 'source',
                     _(' location: %s\n'), extsource or "")

        fm.condwrite(ui.verbose and exttestedwith, 'testedwith',
                     _(' tested with: %s\n'), ' '.join(exttestedwith or []))

        fm.condwrite(ui.verbose and extbuglink, 'buglink',
                     _(' bug reporting: %s\n'), extbuglink or "")

    fm.end()

@command('debugfileset',
    [('r', 'rev', '', _('apply the filespec on this revision'), _('REV'))],
    _('[-r REV] FILESPEC'))
def debugfileset(ui, repo, expr, **opts):
    '''parse and apply a fileset specification'''
    ctx = scmutil.revsingle(repo, opts.get('rev'), None)
    if ui.verbose:
        tree = fileset.parse(expr)
        ui.note(fileset.prettyformat(tree), "\n")

    for f in ctx.getfileset(expr):
        ui.write("%s\n" % f)

@command('debugfsinfo', [], _('[PATH]'), norepo=True)
def debugfsinfo(ui, path="."):
    """show information detected about current filesystem"""
    util.writefile('.debugfsinfo', '')
    ui.write(('exec: %s\n') % (util.checkexec(path) and 'yes' or 'no'))
    ui.write(('symlink: %s\n') % (util.checklink(path) and 'yes' or 'no'))
    ui.write(('hardlink: %s\n') % (util.checknlink(path) and 'yes' or 'no'))
    ui.write(('case-sensitive: %s\n') % (util.checkcase('.debugfsinfo')
                                         and 'yes' or 'no'))
    os.unlink('.debugfsinfo')

@command('debuggetbundle',
    [('H', 'head', [], _('id of head node'), _('ID')),
    ('C', 'common', [], _('id of common node'), _('ID')),
    ('t', 'type', 'bzip2', _('bundle compression type to use'), _('TYPE'))],
    _('REPO FILE [-H|-C ID]...'),
    norepo=True)
def debuggetbundle(ui, repopath, bundlepath, head=None, common=None, **opts):
    """retrieves a bundle from a repo

    Every ID must be a full-length hex node id string. Saves the bundle to the
    given file.
    """
    repo = hg.peer(ui, opts, repopath)
    if not repo.capable('getbundle'):
        raise error.Abort("getbundle() not supported by target repository")
    args = {}
    if common:
        args['common'] = [bin(s) for s in common]
    if head:
        args['heads'] = [bin(s) for s in head]
    # TODO: get desired bundlecaps from command line.
    args['bundlecaps'] = None
    bundle = repo.getbundle('debug', **args)

    bundletype = opts.get('type', 'bzip2').lower()
    btypes = {'none': 'HG10UN',
              'bzip2': 'HG10BZ',
              'gzip': 'HG10GZ',
              'bundle2': 'HG20'}
    bundletype = btypes.get(bundletype)
    if bundletype not in changegroup.bundletypes:
        raise error.Abort(_('unknown bundle type specified with --type'))
    changegroup.writebundle(ui, bundle, bundlepath, bundletype)

@command('debugignore', [], '')
def debugignore(ui, repo, *values, **opts):
    """display the combined ignore pattern"""
    ignore = repo.dirstate._ignore
    includepat = getattr(ignore, 'includepat', None)
    if includepat is not None:
        ui.write("%s\n" % includepat)
    else:
        raise error.Abort(_("no ignore patterns found"))

@command('debugindex', debugrevlogopts +
    [('f', 'format', 0, _('revlog format'), _('FORMAT'))],
    _('[-f FORMAT] -c|-m|FILE'),
    optionalrepo=True)
def debugindex(ui, repo, file_=None, **opts):
    """dump the contents of an index file"""
    r = cmdutil.openrevlog(repo, 'debugindex', file_, opts)
    format = opts.get('format', 0)
    if format not in (0, 1):
        raise error.Abort(_("unknown format %d") % format)

    generaldelta = r.version & revlog.REVLOGGENERALDELTA
    if generaldelta:
        basehdr = ' delta'
    else:
        basehdr = ' base'

    if ui.debugflag:
        shortfn = hex
    else:
        shortfn = short

    # There might not be anything in r, so have a sane default
    idlen = 12
    for i in r:
        idlen = len(shortfn(r.node(i)))
        break

    if format == 0:
        ui.write(" rev offset length " + basehdr + " linkrev"
                 " %s %s p2\n" % ("nodeid".ljust(idlen), "p1".ljust(idlen)))
    elif format == 1:
        ui.write(" rev flag offset length"
                 " size " + basehdr + " link p1 p2"
                 " %s\n" % "nodeid".rjust(idlen))

    for i in r:
        node = r.node(i)
        if generaldelta:
            base = r.deltaparent(i)
        else:
            base = r.chainbase(i)
        if format == 0:
            try:
                pp = r.parents(node)
            except Exception:
                pp = [nullid, nullid]
            ui.write("% 6d % 9d % 7d % 6d % 7d %s %s %s\n" % (
                i, r.start(i), r.length(i), base, r.linkrev(i),
                shortfn(node), shortfn(pp[0]), shortfn(pp[1])))
        elif format == 1:
            pr = r.parentrevs(i)
            ui.write("% 6d %04x % 8d % 8d % 8d % 6d % 6d % 6d % 6d %s\n" % (
                i, r.flags(i), r.start(i), r.length(i), r.rawsize(i),
                base, r.linkrev(i), pr[0], pr[1], shortfn(node)))

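# Illustrative sketch (ours): the base/delta column printed by debugindex is
# the delta parent when the revlog uses generaldelta and the chain base
# otherwise, i.e. the same r.deltaparent(i) / r.chainbase(i) split used
# above.  Helper name and output format are ours.
def _examplebasecolumn(ui, r):
    generaldelta = r.version & revlog.REVLOGGENERALDELTA
    for i in r:
        if generaldelta:
            base = r.deltaparent(i)   # revision this one is a delta against
        else:
            base = r.chainbase(i)     # first revision of the delta chain
        ui.write("rev %d: base %d\n" % (i, base))
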
@command('debugindexdot', debugrevlogopts,
    _('-c|-m|FILE'), optionalrepo=True)
def debugindexdot(ui, repo, file_=None, **opts):
    """dump an index DAG as a graphviz dot file"""
    r = cmdutil.openrevlog(repo, 'debugindexdot', file_, opts)
    ui.write(("digraph G {\n"))
    for i in r:
        node = r.node(i)
        pp = r.parents(node)
        ui.write("\t%d -> %d\n" % (r.rev(pp[0]), i))
        if pp[1] != nullid:
            ui.write("\t%d -> %d\n" % (r.rev(pp[1]), i))
    ui.write("}\n")

@command('debugdeltachain',
    debugrevlogopts + formatteropts,
    _('-c|-m|FILE'),
    optionalrepo=True)
def debugdeltachain(ui, repo, file_=None, **opts):
    """dump information about delta chains in a revlog

    Output can be templatized. Available template keywords are:

       rev          revision number
       chainid      delta chain identifier (numbered by unique base)
       chainlen     delta chain length to this revision
       prevrev      previous revision in delta chain
       deltatype    role of delta / how it was computed
       compsize     compressed size of revision
       uncompsize   uncompressed size of revision
       chainsize    total size of compressed revisions in chain
       chainratio   total chain size divided by uncompressed revision size
                    (new delta chains typically start at ratio 2.00)
       lindist      linear distance from base revision in delta chain to end
                    of this revision
       extradist    total size of revisions not part of this delta chain from
                    base of delta chain to end of this revision; a measurement
                    of how much extra data we need to read/seek across to read
                    the delta chain for this revision
       extraratio   extradist divided by chainsize; another representation of
                    how much unrelated data is needed to load this delta chain
    """
    r = cmdutil.openrevlog(repo, 'debugdeltachain', file_, opts)
    index = r.index
    generaldelta = r.version & revlog.REVLOGGENERALDELTA

    def revinfo(rev):
        iterrev = rev
        e = index[iterrev]
        chain = []
        compsize = e[1]
        uncompsize = e[2]
        chainsize = 0

        if generaldelta:
            if e[3] == e[5]:
                deltatype = 'p1'
            elif e[3] == e[6]:
                deltatype = 'p2'
            elif e[3] == rev - 1:
                deltatype = 'prev'
            elif e[3] == rev:
                deltatype = 'base'
            else:
                deltatype = 'other'
        else:
            if e[3] == rev:
                deltatype = 'base'
            else:
                deltatype = 'prev'

        while iterrev != e[3]:
            chain.append(iterrev)
            chainsize += e[1]
            if generaldelta:
                iterrev = e[3]
            else:
                iterrev -= 1
            e = index[iterrev]
        else:
            chainsize += e[1]
            chain.append(iterrev)

        chain.reverse()
        return compsize, uncompsize, deltatype, chain, chainsize

    fm = ui.formatter('debugdeltachain', opts)

    fm.plain(' rev chain# chainlen prev delta '
             'size rawsize chainsize ratio lindist extradist '
             'extraratio\n')

    chainbases = {}
    for rev in r:
        comp, uncomp, deltatype, chain, chainsize = revinfo(rev)
        chainbase = chain[0]
        chainid = chainbases.setdefault(chainbase, len(chainbases) + 1)
        basestart = r.start(chainbase)
        revstart = r.start(rev)
        lineardist = revstart + comp - basestart
        extradist = lineardist - chainsize
        try:
            prevrev = chain[-2]
        except IndexError:
            prevrev = -1

        chainratio = float(chainsize) / float(uncomp)
        extraratio = float(extradist) / float(chainsize)

        fm.startitem()
        fm.write('rev chainid chainlen prevrev deltatype compsize '
                 'uncompsize chainsize chainratio lindist extradist '
                 'extraratio',
                 '%7d %7d %8d %8d %7s %10d %10d %10d %9.5f %9d %9d %10.5f\n',
                 rev, chainid, len(chain), prevrev, deltatype, comp,
                 uncomp, chainsize, chainratio, lineardist, extradist,
                 extraratio,
                 rev=rev, chainid=chainid, chainlen=len(chain),
                 prevrev=prevrev, deltatype=deltatype, compsize=comp,
                 uncompsize=uncomp, chainsize=chainsize,
                 chainratio=chainratio, lindist=lineardist,
                 extradist=extradist, extraratio=extraratio)

    fm.end()

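# Worked example (ours, with made-up sizes) of the two ratios described in
# debugdeltachain's docstring, using the same arithmetic as the loop above:
# lindist spans from the start of the chain base to the end of this revision
# on disk, and extradist is the part of that span that does not belong to
# the chain itself.
def _exampledeltachainratios():
    basestart, revstart = 0, 900      # hypothetical on-disk offsets
    comp, uncomp = 100, 2000          # compressed / uncompressed rev sizes
    chainsize = 700                   # sum of compressed sizes in the chain
    lineardist = revstart + comp - basestart           # 1000
    extradist = lineardist - chainsize                 # 300 unrelated bytes
    chainratio = float(chainsize) / float(uncomp)      # 0.35
    extraratio = float(extradist) / float(chainsize)   # ~0.43
    return chainratio, extraratio
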
2616 @command('debuginstall', [], '', norepo=True)
2615 @command('debuginstall', [], '', norepo=True)
2617 def debuginstall(ui):
2616 def debuginstall(ui):
2618 '''test Mercurial installation
2617 '''test Mercurial installation
2619
2618
2620 Returns 0 on success.
2619 Returns 0 on success.
2621 '''
2620 '''
2622
2621
2623 def writetemp(contents):
2622 def writetemp(contents):
2624 (fd, name) = tempfile.mkstemp(prefix="hg-debuginstall-")
2623 (fd, name) = tempfile.mkstemp(prefix="hg-debuginstall-")
2625 f = os.fdopen(fd, "wb")
2624 f = os.fdopen(fd, "wb")
2626 f.write(contents)
2625 f.write(contents)
2627 f.close()
2626 f.close()
2628 return name
2627 return name
2629
2628
2630 problems = 0
2629 problems = 0
2631
2630
2632 # encoding
2631 # encoding
2633 ui.status(_("checking encoding (%s)...\n") % encoding.encoding)
2632 ui.status(_("checking encoding (%s)...\n") % encoding.encoding)
2634 try:
2633 try:
2635 encoding.fromlocal("test")
2634 encoding.fromlocal("test")
2636 except error.Abort as inst:
2635 except error.Abort as inst:
2637 ui.write(" %s\n" % inst)
2636 ui.write(" %s\n" % inst)
2638 ui.write(_(" (check that your locale is properly set)\n"))
2637 ui.write(_(" (check that your locale is properly set)\n"))
2639 problems += 1
2638 problems += 1
2640
2639
2641 # Python
2640 # Python
2642 ui.status(_("checking Python executable (%s)\n") % sys.executable)
2641 ui.status(_("checking Python executable (%s)\n") % sys.executable)
2643 ui.status(_("checking Python version (%s)\n")
2642 ui.status(_("checking Python version (%s)\n")
2644 % ("%s.%s.%s" % sys.version_info[:3]))
2643 % ("%s.%s.%s" % sys.version_info[:3]))
2645 ui.status(_("checking Python lib (%s)...\n")
2644 ui.status(_("checking Python lib (%s)...\n")
2646 % os.path.dirname(os.__file__))
2645 % os.path.dirname(os.__file__))
2647
2646
2648 # compiled modules
2647 # compiled modules
2649 ui.status(_("checking installed modules (%s)...\n")
2648 ui.status(_("checking installed modules (%s)...\n")
2650 % os.path.dirname(__file__))
2649 % os.path.dirname(__file__))
2651 try:
2650 try:
2652 import bdiff, mpatch, base85, osutil
2651 import bdiff, mpatch, base85, osutil
2653 dir(bdiff), dir(mpatch), dir(base85), dir(osutil) # quiet pyflakes
2652 dir(bdiff), dir(mpatch), dir(base85), dir(osutil) # quiet pyflakes
2654 except Exception as inst:
2653 except Exception as inst:
2655 ui.write(" %s\n" % inst)
2654 ui.write(" %s\n" % inst)
2656 ui.write(_(" One or more extensions could not be found"))
2655 ui.write(_(" One or more extensions could not be found"))
2657 ui.write(_(" (check that you compiled the extensions)\n"))
2656 ui.write(_(" (check that you compiled the extensions)\n"))
2658 problems += 1
2657 problems += 1
2659
2658
2660 # templates
2659 # templates
2661 import templater
2660 import templater
2662 p = templater.templatepaths()
2661 p = templater.templatepaths()
2663 ui.status(_("checking templates (%s)...\n") % ' '.join(p))
2662 ui.status(_("checking templates (%s)...\n") % ' '.join(p))
2664 if p:
2663 if p:
2665 m = templater.templatepath("map-cmdline.default")
2664 m = templater.templatepath("map-cmdline.default")
2666 if m:
2665 if m:
2667 # template found, check if it is working
2666 # template found, check if it is working
2668 try:
2667 try:
2669 templater.templater(m)
2668 templater.templater(m)
2670 except Exception as inst:
2669 except Exception as inst:
2671 ui.write(" %s\n" % inst)
2670 ui.write(" %s\n" % inst)
2672 p = None
2671 p = None
2673 else:
2672 else:
2674 ui.write(_(" template 'default' not found\n"))
2673 ui.write(_(" template 'default' not found\n"))
2675 p = None
2674 p = None
2676 else:
2675 else:
2677 ui.write(_(" no template directories found\n"))
2676 ui.write(_(" no template directories found\n"))
2678 if not p:
2677 if not p:
2679 ui.write(_(" (templates seem to have been installed incorrectly)\n"))
2678 ui.write(_(" (templates seem to have been installed incorrectly)\n"))
2680 problems += 1
2679 problems += 1
2681
2680
2682 # editor
2681 # editor
2683 ui.status(_("checking commit editor...\n"))
2682 ui.status(_("checking commit editor...\n"))
2684 editor = ui.geteditor()
2683 editor = ui.geteditor()
2685 editor = util.expandpath(editor)
2684 editor = util.expandpath(editor)
2686 cmdpath = util.findexe(shlex.split(editor)[0])
2685 cmdpath = util.findexe(shlex.split(editor)[0])
2687 if not cmdpath:
2686 if not cmdpath:
2688 if editor == 'vi':
2687 if editor == 'vi':
2689 ui.write(_(" No commit editor set and can't find vi in PATH\n"))
2688 ui.write(_(" No commit editor set and can't find vi in PATH\n"))
2690 ui.write(_(" (specify a commit editor in your configuration"
2689 ui.write(_(" (specify a commit editor in your configuration"
2691 " file)\n"))
2690 " file)\n"))
2692 else:
2691 else:
2693 ui.write(_(" Can't find editor '%s' in PATH\n") % editor)
2692 ui.write(_(" Can't find editor '%s' in PATH\n") % editor)
2694 ui.write(_(" (specify a commit editor in your configuration"
2693 ui.write(_(" (specify a commit editor in your configuration"
2695 " file)\n"))
2694 " file)\n"))
2696 problems += 1
2695 problems += 1
2697
2696
2698 # check username
2697 # check username
2699 ui.status(_("checking username...\n"))
2698 ui.status(_("checking username...\n"))
2700 try:
2699 try:
2701 ui.username()
2700 ui.username()
2702 except error.Abort as e:
2701 except error.Abort as e:
2703 ui.write(" %s\n" % e)
2702 ui.write(" %s\n" % e)
2704 ui.write(_(" (specify a username in your configuration file)\n"))
2703 ui.write(_(" (specify a username in your configuration file)\n"))
2705 problems += 1
2704 problems += 1
2706
2705
2707 if not problems:
2706 if not problems:
2708 ui.status(_("no problems detected\n"))
2707 ui.status(_("no problems detected\n"))
2709 else:
2708 else:
2710 ui.write(_("%s problems detected,"
2709 ui.write(_("%s problems detected,"
2711 " please check your install!\n") % problems)
2710 " please check your install!\n") % problems)
2712
2711
2713 return problems
2712 return problems
2714
2713
2715 @command('debugknown', [], _('REPO ID...'), norepo=True)
2714 @command('debugknown', [], _('REPO ID...'), norepo=True)
2716 def debugknown(ui, repopath, *ids, **opts):
2715 def debugknown(ui, repopath, *ids, **opts):
2717 """test whether node ids are known to a repo
2716 """test whether node ids are known to a repo
2718
2717
2719 Every ID must be a full-length hex node id string. Returns a list of 0s
2718 Every ID must be a full-length hex node id string. Returns a list of 0s
2720 and 1s indicating unknown/known.
2719 and 1s indicating unknown/known.
2721 """
2720 """
2722 repo = hg.peer(ui, opts, repopath)
2721 repo = hg.peer(ui, opts, repopath)
2723 if not repo.capable('known'):
2722 if not repo.capable('known'):
2724 raise error.Abort("known() not supported by target repository")
2723 raise error.Abort("known() not supported by target repository")
2725 flags = repo.known([bin(s) for s in ids])
2724 flags = repo.known([bin(s) for s in ids])
2726 ui.write("%s\n" % ("".join([f and "1" or "0" for f in flags])))
2725 ui.write("%s\n" % ("".join([f and "1" or "0" for f in flags])))
2727
2726
2728 @command('debuglabelcomplete', [], _('LABEL...'))
2727 @command('debuglabelcomplete', [], _('LABEL...'))
2729 def debuglabelcomplete(ui, repo, *args):
2728 def debuglabelcomplete(ui, repo, *args):
2730 '''backwards compatibility with old bash completion scripts (DEPRECATED)'''
2729 '''backwards compatibility with old bash completion scripts (DEPRECATED)'''
2731 debugnamecomplete(ui, repo, *args)
2730 debugnamecomplete(ui, repo, *args)
2732
2731
@command('debugmergestate', [], '')
def debugmergestate(ui, repo, *args):
    """print merge state

    Use --verbose to print out information about whether v1 or v2 merge state
    was chosen."""
    def _hashornull(h):
        if h == nullhex:
            return 'null'
        else:
            return h

    def printrecords(version):
        ui.write(('* version %s records\n') % version)
        if version == 1:
            records = v1records
        else:
            records = v2records

        for rtype, record in records:
            # pretty print some record types
            if rtype == 'L':
                ui.write(('local: %s\n') % record)
            elif rtype == 'O':
                ui.write(('other: %s\n') % record)
            elif rtype == 'm':
                driver, mdstate = record.split('\0', 1)
                ui.write(('merge driver: %s (state "%s")\n')
                         % (driver, mdstate))
            elif rtype in 'FDC':
                r = record.split('\0')
                f, state, hash, lfile, afile, anode, ofile = r[0:7]
                if version == 1:
                    onode = 'not stored in v1 format'
                    flags = r[7]
                else:
                    onode, flags = r[7:9]
                ui.write(('file: %s (record type "%s", state "%s", hash %s)\n')
                         % (f, rtype, state, _hashornull(hash)))
                ui.write((' local path: %s (flags "%s")\n') % (lfile, flags))
                ui.write((' ancestor path: %s (node %s)\n')
                         % (afile, _hashornull(anode)))
                ui.write((' other path: %s (node %s)\n')
                         % (ofile, _hashornull(onode)))
            else:
                ui.write(('unrecognized entry: %s\t%s\n')
                         % (rtype, record.replace('\0', '\t')))

    # Avoid mergestate.read() since it may raise an exception for unsupported
    # merge state records. We shouldn't be doing this, but this is OK since this
    # command is pretty low-level.
    ms = mergemod.mergestate(repo)

    # sort so that reasonable information is on top
    v1records = ms._readrecordsv1()
    v2records = ms._readrecordsv2()
    order = 'LOm'
    def key(r):
        idx = order.find(r[0])
        if idx == -1:
            return (1, r[1])
        else:
            return (0, idx)
    v1records.sort(key=key)
    v2records.sort(key=key)

    if not v1records and not v2records:
        ui.write(('no merge state found\n'))
    elif not v2records:
        ui.note(('no version 2 merge state\n'))
        printrecords(1)
    elif ms._v1v2match(v1records, v2records):
        ui.note(('v1 and v2 states match: using v2\n'))
        printrecords(2)
    else:
        ui.note(('v1 and v2 states mismatch: using v1\n'))
        printrecords(1)
        if ui.verbose:
            printrecords(2)

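# A quick sketch of how the record ordering above behaves (illustrative only,
# not part of the original change): 'L' (local), 'O' (other) and 'm' (merge
# driver) records sort to the top, in that order, and any other record type
# falls back to sorting by its payload.
#
#   >>> order = 'LOm'
#   >>> def key(r):
#   ...     idx = order.find(r[0])
#   ...     return (1, r[1]) if idx == -1 else (0, idx)
#   >>> sorted([('F', 'file.txt'), ('m', 'driver'), ('L', 'abc')], key=key)
#   [('L', 'abc'), ('m', 'driver'), ('F', 'file.txt')]
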
@command('debugnamecomplete', [], _('NAME...'))
def debugnamecomplete(ui, repo, *args):
    '''complete "names" - tags, open branch names, bookmark names'''

    names = set()
    # since we previously only listed open branches, we will handle that
    # specially (after this for loop)
    for name, ns in repo.names.iteritems():
        if name != 'branches':
            names.update(ns.listnames(repo))
    names.update(tag for (tag, heads, tip, closed)
                 in repo.branchmap().iterbranches() if not closed)
    completions = set()
    if not args:
        args = ['']
    for a in args:
        completions.update(n for n in names if n.startswith(a))
    ui.write('\n'.join(sorted(completions)))
    ui.write('\n')

@command('debuglocks',
         [('L', 'force-lock', None, _('free the store lock (DANGEROUS)')),
          ('W', 'force-wlock', None,
           _('free the working state lock (DANGEROUS)'))],
         _('[OPTION]...'))
def debuglocks(ui, repo, **opts):
    """show or modify state of locks

    By default, this command will show which locks are held. This
    includes the user and process holding the lock, the amount of time
    the lock has been held, and the machine name where the process is
    running if it's not local.

    Locks protect the integrity of Mercurial's data, so should be
    treated with care. System crashes or other interruptions may cause
    locks to not be properly released, though Mercurial will usually
    detect and remove such stale locks automatically.

    However, detecting stale locks may not always be possible (for
    instance, on a shared filesystem). Removing locks may also be
    blocked by filesystem permissions.

    Returns 0 if no locks are held.

    """

    if opts.get('force_lock'):
        repo.svfs.unlink('lock')
    if opts.get('force_wlock'):
        repo.vfs.unlink('wlock')
    if opts.get('force_lock') or opts.get('force_wlock'):
        return 0

    now = time.time()
    held = 0

    def report(vfs, name, method):
        # this causes stale locks to get reaped for more accurate reporting
        try:
            l = method(False)
        except error.LockHeld:
            l = None

        if l:
            l.release()
        else:
            try:
                stat = vfs.lstat(name)
                age = now - stat.st_mtime
                user = util.username(stat.st_uid)
                locker = vfs.readlock(name)
                if ":" in locker:
                    host, pid = locker.split(':')
                    if host == socket.gethostname():
                        locker = 'user %s, process %s' % (user, pid)
                    else:
                        locker = 'user %s, process %s, host %s' \
                                 % (user, pid, host)
                ui.write("%-6s %s (%ds)\n" % (name + ":", locker, age))
                return 1
            except OSError as e:
                if e.errno != errno.ENOENT:
                    raise

        ui.write("%-6s free\n" % (name + ":"))
        return 0

    held += report(repo.svfs, "lock", repo.lock)
    held += report(repo.vfs, "wlock", repo.wlock)

    return held

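# Illustrative sample of what report() above prints (user, pid, host and age
# are hypothetical; the "%-6s %s (%ds)" and "%-6s free" formats come straight
# from the code):
#
#   lock:  user alice, process 12345, host buildhost (163s)
#   wlock: free
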
@command('debugobsolete',
         [('', 'flags', 0, _('markers flag')),
          ('', 'record-parents', False,
           _('record parent information for the precursor')),
          ('r', 'rev', [], _('display markers relevant to REV')),
         ] + commitopts2,
         _('[OBSOLETED [REPLACEMENT ...]]'))
def debugobsolete(ui, repo, precursor=None, *successors, **opts):
    """create arbitrary obsolete marker

    With no arguments, displays the list of obsolescence markers."""

    def parsenodeid(s):
        try:
            # We do not use revsingle/revrange functions here to accept
            # arbitrary node identifiers, possibly not present in the
            # local repository.
            n = bin(s)
            if len(n) != len(nullid):
                raise TypeError()
            return n
        except TypeError:
            raise error.Abort('changeset references must be full hexadecimal '
                              'node identifiers')

    if precursor is not None:
        if opts['rev']:
            raise error.Abort('cannot select revision when creating marker')
        metadata = {}
        metadata['user'] = opts['user'] or ui.username()
        succs = tuple(parsenodeid(succ) for succ in successors)
        l = repo.lock()
        try:
            tr = repo.transaction('debugobsolete')
            try:
                date = opts.get('date')
                if date:
                    date = util.parsedate(date)
                else:
                    date = None
                prec = parsenodeid(precursor)
                parents = None
                if opts['record_parents']:
                    if prec not in repo.unfiltered():
                        raise error.Abort('cannot use --record-parents on '
                                          'unknown changesets')
                    parents = repo.unfiltered()[prec].parents()
                    parents = tuple(p.node() for p in parents)
                repo.obsstore.create(tr, prec, succs, opts['flags'],
                                     parents=parents, date=date,
                                     metadata=metadata)
                tr.close()
            except ValueError as exc:
                raise error.Abort(_('bad obsmarker input: %s') % exc)
            finally:
                tr.release()
        finally:
            l.release()
    else:
        if opts['rev']:
            revs = scmutil.revrange(repo, opts['rev'])
            nodes = [repo[r].node() for r in revs]
            markers = list(obsolete.getmarkers(repo, nodes=nodes))
            markers.sort(key=lambda x: x._data)
        else:
            markers = obsolete.getmarkers(repo)

        for m in markers:
            cmdutil.showmarker(ui, m)

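# Hedged usage sketch (node identifiers are made up): parsenodeid() above
# insists on full 40-digit hexadecimal ids, so creating a marker looks like
#
#   hg debugobsolete 1111111111111111111111111111111111111111 \
#                    2222222222222222222222222222222222222222
#
# while a plain `hg debugobsolete` just lists the existing markers.
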
@command('debugpathcomplete',
         [('f', 'full', None, _('complete an entire path')),
          ('n', 'normal', None, _('show only normal files')),
          ('a', 'added', None, _('show only added files')),
          ('r', 'removed', None, _('show only removed files'))],
         _('FILESPEC...'))
def debugpathcomplete(ui, repo, *specs, **opts):
    '''complete part or all of a tracked path

    This command supports shells that offer path name completion. It
    currently completes only files already known to the dirstate.

    Completion extends only to the next path segment unless
    --full is specified, in which case entire paths are used.'''

    def complete(path, acceptable):
        dirstate = repo.dirstate
        spec = os.path.normpath(os.path.join(os.getcwd(), path))
        rootdir = repo.root + os.sep
        if spec != repo.root and not spec.startswith(rootdir):
            return [], []
        if os.path.isdir(spec):
            spec += '/'
        spec = spec[len(rootdir):]
        fixpaths = os.sep != '/'
        if fixpaths:
            spec = spec.replace(os.sep, '/')
        speclen = len(spec)
        fullpaths = opts['full']
        files, dirs = set(), set()
        adddir, addfile = dirs.add, files.add
        for f, st in dirstate.iteritems():
            if f.startswith(spec) and st[0] in acceptable:
                if fixpaths:
                    f = f.replace('/', os.sep)
                if fullpaths:
                    addfile(f)
                    continue
                s = f.find(os.sep, speclen)
                if s >= 0:
                    adddir(f[:s])
                else:
                    addfile(f)
        return files, dirs

    acceptable = ''
    if opts['normal']:
        acceptable += 'nm'
    if opts['added']:
        acceptable += 'a'
    if opts['removed']:
        acceptable += 'r'
    cwd = repo.getcwd()
    if not specs:
        specs = ['.']

    files, dirs = set(), set()
    for spec in specs:
        f, d = complete(spec, acceptable or 'nmar')
        files.update(f)
        dirs.update(d)
    files.update(dirs)
    ui.write('\n'.join(repo.pathto(p, cwd) for p in sorted(files)))
    ui.write('\n')

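# Worked example for complete() above (hypothetical dirstate contents): with
# tracked files 'lib/util.py' and 'lib/ui/color.py' and the spec 'lib/',
# completion without --full yields the file 'lib/util.py' plus the directory
# 'lib/ui' (only the next path segment), whereas --full would yield both full
# file paths instead.
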
@command('debugpushkey', [], _('REPO NAMESPACE [KEY OLD NEW]'), norepo=True)
def debugpushkey(ui, repopath, namespace, *keyinfo, **opts):
    '''access the pushkey key/value protocol

    With two args, list the keys in the given namespace.

    With five args, set a key to new if it currently is set to old.
    Reports success or failure.
    '''

    target = hg.peer(ui, {}, repopath)
    if keyinfo:
        key, old, new = keyinfo
        r = target.pushkey(namespace, key, old, new)
        ui.status(str(r) + '\n')
        return not r
    else:
        for k, v in sorted(target.listkeys(namespace).iteritems()):
            ui.write("%s\t%s\n" % (k.encode('string-escape'),
                                   v.encode('string-escape')))

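# Hedged usage sketch: the two-argument form, e.g.
#
#   hg debugpushkey /path/to/repo bookmarks
#
# prints the keys of that namespace as tab-separated key/value pairs, while
# the five-argument form (... KEY OLD NEW) attempts the conditional update
# and reports the result via ui.status() above.
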
@command('debugpvec', [], _('A B'))
def debugpvec(ui, repo, a, b=None):
    ca = scmutil.revsingle(repo, a)
    cb = scmutil.revsingle(repo, b)
    pa = pvec.ctxpvec(ca)
    pb = pvec.ctxpvec(cb)
    if pa == pb:
        rel = "="
    elif pa > pb:
        rel = ">"
    elif pa < pb:
        rel = "<"
    elif pa | pb:
        rel = "|"
    ui.write(_("a: %s\n") % pa)
    ui.write(_("b: %s\n") % pb)
    ui.write(_("depth(a): %d depth(b): %d\n") % (pa._depth, pb._depth))
    ui.write(_("delta: %d hdist: %d distance: %d relation: %s\n") %
             (abs(pa._depth - pb._depth), pvec._hamming(pa._vec, pb._vec),
              pa.distance(pb), rel))

@command('debugrebuilddirstate|debugrebuildstate',
    [('r', 'rev', '', _('revision to rebuild to'), _('REV')),
     ('', 'minimal', None, _('only rebuild files that are inconsistent with '
                             'the working copy parent')),
    ],
    _('[-r REV]'))
def debugrebuilddirstate(ui, repo, rev, **opts):
    """rebuild the dirstate as it would look for the given revision

    If no revision is specified, the first parent of the working directory
    is used.

    The dirstate will be set to the files of the given revision.
    The actual working directory content or existing dirstate
    information such as adds or removes is not considered.

    ``minimal`` will only rebuild the dirstate status for files that claim to be
    tracked but are not in the parent manifest, or that exist in the parent
    manifest but are not in the dirstate. It will not change adds, removes, or
    modified files that are in the working copy parent.

    One use of this command is to make the next :hg:`status` invocation
    check the actual file content.
    """
    ctx = scmutil.revsingle(repo, rev)
    wlock = repo.wlock()
    try:
        dirstate = repo.dirstate
        changedfiles = None
        # See command doc for what minimal does.
        if opts.get('minimal'):
            manifestfiles = set(ctx.manifest().keys())
            dirstatefiles = set(dirstate)
            manifestonly = manifestfiles - dirstatefiles
            dsonly = dirstatefiles - manifestfiles
            dsnotadded = set(f for f in dsonly if dirstate[f] != 'a')
            changedfiles = manifestonly | dsnotadded

        dirstate.rebuild(ctx.node(), ctx.manifest(), changedfiles)
    finally:
        wlock.release()

@command('debugrebuildfncache', [], '')
def debugrebuildfncache(ui, repo):
    """rebuild the fncache file"""
    repair.rebuildfncache(ui, repo)

@command('debugrename',
    [('r', 'rev', '', _('revision to debug'), _('REV'))],
    _('[-r REV] FILE'))
def debugrename(ui, repo, file1, *pats, **opts):
    """dump rename information"""

    ctx = scmutil.revsingle(repo, opts.get('rev'))
    m = scmutil.match(ctx, (file1,) + pats, opts)
    for abs in ctx.walk(m):
        fctx = ctx[abs]
        o = fctx.filelog().renamed(fctx.filenode())
        rel = m.rel(abs)
        if o:
            ui.write(_("%s renamed from %s:%s\n") % (rel, o[0], hex(o[1])))
        else:
            ui.write(_("%s not renamed\n") % rel)

@command('debugrevlog', debugrevlogopts +
    [('d', 'dump', False, _('dump index data'))],
    _('-c|-m|FILE'),
    optionalrepo=True)
def debugrevlog(ui, repo, file_=None, **opts):
    """show data and statistics about a revlog"""
    r = cmdutil.openrevlog(repo, 'debugrevlog', file_, opts)

    if opts.get("dump"):
        numrevs = len(r)
        ui.write("# rev p1rev p2rev start end deltastart base p1 p2"
                 " rawsize totalsize compression heads chainlen\n")
        ts = 0
        heads = set()

        for rev in xrange(numrevs):
            dbase = r.deltaparent(rev)
            if dbase == -1:
                dbase = rev
            cbase = r.chainbase(rev)
            clen = r.chainlen(rev)
            p1, p2 = r.parentrevs(rev)
            rs = r.rawsize(rev)
            ts = ts + rs
            heads -= set(r.parentrevs(rev))
            heads.add(rev)
            ui.write("%5d %5d %5d %5d %5d %10d %4d %4d %4d %7d %9d "
                     "%11d %5d %8d\n" %
                     (rev, p1, p2, r.start(rev), r.end(rev),
                      r.start(dbase), r.start(cbase),
                      r.start(p1), r.start(p2),
                      rs, ts, ts / r.end(rev), len(heads), clen))
        return 0

    v = r.version
    format = v & 0xFFFF
    flags = []
    gdelta = False
    if v & revlog.REVLOGNGINLINEDATA:
        flags.append('inline')
    if v & revlog.REVLOGGENERALDELTA:
        gdelta = True
        flags.append('generaldelta')
    if not flags:
        flags = ['(none)']

    nummerges = 0
    numfull = 0
    numprev = 0
    nump1 = 0
    nump2 = 0
    numother = 0
    nump1prev = 0
    nump2prev = 0
    chainlengths = []

    datasize = [None, 0, 0L]
    fullsize = [None, 0, 0L]
    deltasize = [None, 0, 0L]

    def addsize(size, l):
        if l[0] is None or size < l[0]:
            l[0] = size
        if size > l[1]:
            l[1] = size
        l[2] += size

    numrevs = len(r)
    for rev in xrange(numrevs):
        p1, p2 = r.parentrevs(rev)
        delta = r.deltaparent(rev)
        if format > 0:
            addsize(r.rawsize(rev), datasize)
        if p2 != nullrev:
            nummerges += 1
        size = r.length(rev)
        if delta == nullrev:
            chainlengths.append(0)
            numfull += 1
            addsize(size, fullsize)
        else:
            chainlengths.append(chainlengths[delta] + 1)
            addsize(size, deltasize)
            if delta == rev - 1:
                numprev += 1
                if delta == p1:
                    nump1prev += 1
                elif delta == p2:
                    nump2prev += 1
            elif delta == p1:
                nump1 += 1
            elif delta == p2:
                nump2 += 1
            elif delta != nullrev:
                numother += 1

    # Adjust size min value for empty cases
    for size in (datasize, fullsize, deltasize):
        if size[0] is None:
            size[0] = 0

    numdeltas = numrevs - numfull
    numoprev = numprev - nump1prev - nump2prev
    totalrawsize = datasize[2]
    datasize[2] /= numrevs
    fulltotal = fullsize[2]
    fullsize[2] /= numfull
    deltatotal = deltasize[2]
    if numrevs - numfull > 0:
        deltasize[2] /= numrevs - numfull
    totalsize = fulltotal + deltatotal
    avgchainlen = sum(chainlengths) / numrevs
    maxchainlen = max(chainlengths)
    compratio = 1
    if totalsize:
        compratio = totalrawsize / totalsize

    basedfmtstr = '%%%dd\n'
    basepcfmtstr = '%%%dd %s(%%5.2f%%%%)\n'

    def dfmtstr(max):
        return basedfmtstr % len(str(max))
    def pcfmtstr(max, padding=0):
        return basepcfmtstr % (len(str(max)), ' ' * padding)

    def pcfmt(value, total):
        if total:
            return (value, 100 * float(value) / total)
        else:
            return value, 100.0

    ui.write(('format : %d\n') % format)
    ui.write(('flags : %s\n') % ', '.join(flags))

    ui.write('\n')
    fmt = pcfmtstr(totalsize)
    fmt2 = dfmtstr(totalsize)
    ui.write(('revisions : ') + fmt2 % numrevs)
    ui.write((' merges : ') + fmt % pcfmt(nummerges, numrevs))
    ui.write((' normal : ') + fmt % pcfmt(numrevs - nummerges, numrevs))
    ui.write(('revisions : ') + fmt2 % numrevs)
    ui.write((' full : ') + fmt % pcfmt(numfull, numrevs))
    ui.write((' deltas : ') + fmt % pcfmt(numdeltas, numrevs))
    ui.write(('revision size : ') + fmt2 % totalsize)
    ui.write((' full : ') + fmt % pcfmt(fulltotal, totalsize))
    ui.write((' deltas : ') + fmt % pcfmt(deltatotal, totalsize))

    ui.write('\n')
    fmt = dfmtstr(max(avgchainlen, compratio))
    ui.write(('avg chain length : ') + fmt % avgchainlen)
    ui.write(('max chain length : ') + fmt % maxchainlen)
    ui.write(('compression ratio : ') + fmt % compratio)

    if format > 0:
        ui.write('\n')
        ui.write(('uncompressed data size (min/max/avg) : %d / %d / %d\n')
                 % tuple(datasize))
        ui.write(('full revision size (min/max/avg) : %d / %d / %d\n')
                 % tuple(fullsize))
        ui.write(('delta size (min/max/avg) : %d / %d / %d\n')
                 % tuple(deltasize))

    if numdeltas > 0:
        ui.write('\n')
        fmt = pcfmtstr(numdeltas)
        fmt2 = pcfmtstr(numdeltas, 4)
        ui.write(('deltas against prev : ') + fmt % pcfmt(numprev, numdeltas))
        if numprev > 0:
            ui.write((' where prev = p1 : ') + fmt2 % pcfmt(nump1prev,
                                                            numprev))
            ui.write((' where prev = p2 : ') + fmt2 % pcfmt(nump2prev,
                                                            numprev))
            ui.write((' other : ') + fmt2 % pcfmt(numoprev,
                                                  numprev))
        if gdelta:
            ui.write(('deltas against p1 : ')
                     + fmt % pcfmt(nump1, numdeltas))
            ui.write(('deltas against p2 : ')
                     + fmt % pcfmt(nump2, numdeltas))
            ui.write(('deltas against other : ') + fmt % pcfmt(numother,
                                                               numdeltas))

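# Worked example for the formatting helpers above (values are arbitrary):
# with a maximum of 1000 and no padding, pcfmtstr(1000) builds the template
# '%4d (%5.2f%%)\n', and pcfmt(250, 1000) returns (250, 25.0), so the
# rendered statistic reads ' 250 (25.00%)'.
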
@command('debugrevspec',
    [('', 'optimize', None, _('print parsed tree after optimizing'))],
    ('REVSPEC'))
def debugrevspec(ui, repo, expr, **opts):
    """parse and apply a revision specification

    Use --verbose to print the parsed tree before and after alias
    expansion.
    """
    if ui.verbose:
        tree = revset.parse(expr, lookup=repo.__contains__)
        ui.note(revset.prettyformat(tree), "\n")
        newtree = revset.findaliases(ui, tree)
        if newtree != tree:
            ui.note(revset.prettyformat(newtree), "\n")
        tree = newtree
        newtree = revset.foldconcat(tree)
        if newtree != tree:
            ui.note(revset.prettyformat(newtree), "\n")
        if opts["optimize"]:
            weight, optimizedtree = revset.optimize(newtree, True)
            ui.note("* optimized:\n", revset.prettyformat(optimizedtree), "\n")
    func = revset.match(ui, expr, repo)
    revs = func(repo)
    if ui.verbose:
        ui.note("* set:\n", revset.prettyformatset(revs), "\n")
    for c in revs:
        ui.write("%s\n" % c)

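# Hedged usage sketch: `hg debugrevspec "heads(default)"` prints one matching
# revision number per line via the ui.write() loop above; with --verbose it
# also dumps the parse tree before and after alias expansion, and --verbose
# combined with --optimize prints the optimized tree as well.
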
@command('debugsetparents', [], _('REV1 [REV2]'))
def debugsetparents(ui, repo, rev1, rev2=None):
    """manually set the parents of the current working directory

    This is useful for writing repository conversion tools, but should
    be used with care. For example, neither the working directory nor the
    dirstate is updated, so file status may be incorrect after running this
    command.

    Returns 0 on success.
    """

    r1 = scmutil.revsingle(repo, rev1).node()
    r2 = scmutil.revsingle(repo, rev2, 'null').node()

    wlock = repo.wlock()
    try:
        repo.dirstate.beginparentchange()
        repo.setparents(r1, r2)
        repo.dirstate.endparentchange()
    finally:
        wlock.release()

@command('debugdirstate|debugstate',
    [('', 'nodates', None, _('do not display the saved mtime')),
     ('', 'datesort', None, _('sort by saved mtime'))],
    _('[OPTION]...'))
def debugstate(ui, repo, **opts):
    """show the contents of the current dirstate"""

    nodates = opts.get('nodates')
    datesort = opts.get('datesort')

    timestr = ""
    if datesort:
        keyfunc = lambda x: (x[1][3], x[0]) # sort by mtime, then by filename
    else:
        keyfunc = None # sort by filename
    for file_, ent in sorted(repo.dirstate._map.iteritems(), key=keyfunc):
        if ent[3] == -1:
            timestr = 'unset '
        elif nodates:
            timestr = 'set '
        else:
            timestr = time.strftime("%Y-%m-%d %H:%M:%S ",
                                    time.localtime(ent[3]))
        if ent[1] & 0o20000:
            mode = 'lnk'
        else:
            mode = '%3o' % (ent[1] & 0o777 & ~util.umask)
        ui.write("%c %s %10d %s%s\n" % (ent[0], mode, ent[2], timestr, file_))
    for f in repo.dirstate.copies():
        ui.write(_("copy: %s -> %s\n") % (repo.dirstate.copied(f), f))

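# Illustrative output line from the "%c %s %10d %s%s\n" format above, assuming
# a hypothetical entry with state 'n', mode 0o644, size 1024 and a stored
# mtime:
#
#   n 644       1024 2015-11-29 10:42:17 mercurial/commands.py
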
@command('debugsub',
    [('r', 'rev', '',
      _('revision to check'), _('REV'))],
    _('[-r REV] [REV]'))
def debugsub(ui, repo, rev=None):
    ctx = scmutil.revsingle(repo, rev, None)
    for k, v in sorted(ctx.substate.items()):
        ui.write(('path %s\n') % k)
        ui.write((' source %s\n') % v[0])
        ui.write((' revision %s\n') % v[1])

@command('debugsuccessorssets',
    [],
    _('[REV]'))
def debugsuccessorssets(ui, repo, *revs):
    """show set of successors for revision

    A successors set of changeset A is a consistent group of revisions that
    succeed A. It contains non-obsolete changesets only.

    In most cases a changeset A has a single successors set containing a single
    successor (changeset A replaced by A').

    A changeset that is made obsolete with no successors is called "pruned".
    Such changesets have no successors sets at all.

    A changeset that has been "split" will have a successors set containing
    more than one successor.

    A changeset that has been rewritten in multiple different ways is called
    "divergent". Such changesets have multiple successor sets (each of which
    may also be split, i.e. have multiple successors).

    Results are displayed as follows::

        <rev1>
            <successors-1A>
        <rev2>
            <successors-2A>
            <successors-2B1> <successors-2B2> <successors-2B3>

    Here rev2 has two possible (i.e. divergent) successors sets. The first
    holds one element, whereas the second holds three (i.e. the changeset has
    been split).
    """
    # passed to successorssets caching computation from one call to another
    cache = {}
    ctx2str = str
    node2str = short
    if ui.debug():
        def ctx2str(ctx):
            return ctx.hex()
        node2str = hex
    for rev in scmutil.revrange(repo, revs):
        ctx = repo[rev]
        ui.write('%s\n' % ctx2str(ctx))
        for succsset in obsolete.successorssets(repo, ctx.node(), cache):
            if succsset:
                ui.write('    ')
                ui.write(node2str(succsset[0]))
                for node in succsset[1:]:
                    ui.write(' ')
                    ui.write(node2str(node))
            ui.write('\n')

@command('debugwalk', walkopts, _('[OPTION]... [FILE]...'), inferrepo=True)
def debugwalk(ui, repo, *pats, **opts):
    """show how files match on given patterns"""
    m = scmutil.match(repo[None], pats, opts)
    items = list(repo.walk(m))
    if not items:
        return
    f = lambda fn: fn
    if ui.configbool('ui', 'slash') and os.sep != '/':
        f = lambda fn: util.normpath(fn)
    fmt = 'f %%-%ds %%-%ds %%s' % (
        max([len(abs) for abs in items]),
        max([len(m.rel(abs)) for abs in items]))
    for abs in items:
        line = fmt % (abs, f(m.rel(abs)), m.exact(abs) and 'exact' or '')
        ui.write("%s\n" % line.rstrip())

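# Illustrative debugwalk output, assuming a hypothetical tracked file 'beta'
# named exactly on the command line (columns are padded to the longest
# absolute and relative names and trailing blanks are stripped):
#
#   f beta beta exact
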
@command('debugwireargs',
    [('', 'three', '', 'three'),
     ('', 'four', '', 'four'),
     ('', 'five', '', 'five'),
    ] + remoteopts,
    _('REPO [OPTIONS]... [ONE [TWO]]'),
    norepo=True)
def debugwireargs(ui, repopath, *vals, **opts):
    repo = hg.peer(ui, opts, repopath)
    for opt in remoteopts:
        del opts[opt[1]]
    args = {}
    for k, v in opts.iteritems():
        if v:
            args[k] = v
    # run twice to check that we don't mess up the stream for the next command
    res1 = repo.debugwireargs(*vals, **args)
    res2 = repo.debugwireargs(*vals, **args)
    ui.write("%s\n" % res1)
    if res1 != res2:
        ui.warn("%s\n" % res2)

@command('^diff',
    [('r', 'rev', [], _('revision'), _('REV')),
     ('c', 'change', '', _('change made by revision'), _('REV'))
    ] + diffopts + diffopts2 + walkopts + subrepoopts,
    _('[OPTION]... ([-c REV] | [-r REV1 [-r REV2]]) [FILE]...'),
    inferrepo=True)
def diff(ui, repo, *pats, **opts):
    """diff repository (or selected files)

    Show differences between revisions for the specified files.

    Differences between files are shown using the unified diff format.

    .. note::

       diff may generate unexpected results for merges, as it will
       default to comparing against the working directory's first
       parent changeset if no revisions are specified.

    When two revision arguments are given, then changes are shown
    between those revisions. If only one revision is specified then
    that revision is compared to the working directory, and, when no
    revisions are specified, the working directory files are compared
    to its parent.

    Alternatively you can specify -c/--change with a revision to see
    the changes in that changeset relative to its first parent.

    Without the -a/--text option, diff will avoid generating diffs of
    files it detects as binary. With -a, diff will generate a diff
    anyway, probably with undesirable results.

    Use the -g/--git option to generate diffs in the git extended diff
    format. For more information, read :hg:`help diffs`.

    .. container:: verbose

      Examples:

      - compare a file in the current working directory to its parent::

          hg diff foo.c

      - compare two historical versions of a directory, with rename info::

          hg diff --git -r 1.0:1.2 lib/

      - get change stats relative to the last change on some date::

          hg diff --stat -r "date('may 2')"

      - diff all newly-added files that contain a keyword::

          hg diff "set:added() and grep(GNU)"

      - compare a revision and its parents::

          hg diff -c 9353         # compare against first parent
          hg diff -r 9353^:9353   # same using revset syntax
          hg diff -r 9353^2:9353  # compare against the second parent

    Returns 0 on success.
    """

    revs = opts.get('rev')
    change = opts.get('change')
    stat = opts.get('stat')
    reverse = opts.get('reverse')

    if revs and change:
        msg = _('cannot specify --rev and --change at the same time')
        raise error.Abort(msg)
    elif change:
        node2 = scmutil.revsingle(repo, change, None).node()
        node1 = repo[node2].p1().node()
    else:
        node1, node2 = scmutil.revpair(repo, revs)

    if reverse:
        node1, node2 = node2, node1

    diffopts = patch.diffallopts(ui, opts)
    m = scmutil.match(repo[node2], pats, opts)
    cmdutil.diffordiffstat(ui, repo, diffopts, node1, node2, m, stat=stat,
                           listsubrepos=opts.get('subrepos'),
                           root=opts.get('root'))

3601 @command('^export',
3600 @command('^export',
3602 [('o', 'output', '',
3601 [('o', 'output', '',
3603 _('print output to file with formatted name'), _('FORMAT')),
3602 _('print output to file with formatted name'), _('FORMAT')),
3604 ('', 'switch-parent', None, _('diff against the second parent')),
3603 ('', 'switch-parent', None, _('diff against the second parent')),
3605 ('r', 'rev', [], _('revisions to export'), _('REV')),
3604 ('r', 'rev', [], _('revisions to export'), _('REV')),
3606 ] + diffopts,
3605 ] + diffopts,
3607 _('[OPTION]... [-o OUTFILESPEC] [-r] [REV]...'))
3606 _('[OPTION]... [-o OUTFILESPEC] [-r] [REV]...'))
3608 def export(ui, repo, *changesets, **opts):
3607 def export(ui, repo, *changesets, **opts):
3609 """dump the header and diffs for one or more changesets
3608 """dump the header and diffs for one or more changesets
3610
3609
3611 Print the changeset header and diffs for one or more revisions.
3610 Print the changeset header and diffs for one or more revisions.
3612 If no revision is given, the parent of the working directory is used.
3611 If no revision is given, the parent of the working directory is used.
3613
3612
3614 The information shown in the changeset header is: author, date,
3613 The information shown in the changeset header is: author, date,
3615 branch name (if non-default), changeset hash, parent(s) and commit
3614 branch name (if non-default), changeset hash, parent(s) and commit
3616 comment.
3615 comment.
3617
3616
3618 .. note::
3617 .. note::
3619
3618
3620 export may generate unexpected diff output for merge
3619 export may generate unexpected diff output for merge
3621 changesets, as it will compare the merge changeset against its
3620 changesets, as it will compare the merge changeset against its
3622 first parent only.
3621 first parent only.
3623
3622
3624 Output may be to a file, in which case the name of the file is
3623 Output may be to a file, in which case the name of the file is
3625 given using a format string. The formatting rules are as follows:
3624 given using a format string. The formatting rules are as follows:
3626
3625
3627 :``%%``: literal "%" character
3626 :``%%``: literal "%" character
3628 :``%H``: changeset hash (40 hexadecimal digits)
3627 :``%H``: changeset hash (40 hexadecimal digits)
3629 :``%N``: number of patches being generated
3628 :``%N``: number of patches being generated
3630 :``%R``: changeset revision number
3629 :``%R``: changeset revision number
3631 :``%b``: basename of the exporting repository
3630 :``%b``: basename of the exporting repository
3632 :``%h``: short-form changeset hash (12 hexadecimal digits)
3631 :``%h``: short-form changeset hash (12 hexadecimal digits)
3633 :``%m``: first line of the commit message (only alphanumeric characters)
3632 :``%m``: first line of the commit message (only alphanumeric characters)
3634 :``%n``: zero-padded sequence number, starting at 1
3633 :``%n``: zero-padded sequence number, starting at 1
3635 :``%r``: zero-padded changeset revision number
3634 :``%r``: zero-padded changeset revision number
3636
3635
3637 Without the -a/--text option, export will avoid generating diffs
3636 Without the -a/--text option, export will avoid generating diffs
3638 of files it detects as binary. With -a, export will generate a
3637 of files it detects as binary. With -a, export will generate a
3639 diff anyway, probably with undesirable results.
3638 diff anyway, probably with undesirable results.
3640
3639
3641 Use the -g/--git option to generate diffs in the git extended diff
3640 Use the -g/--git option to generate diffs in the git extended diff
3642 format. See :hg:`help diffs` for more information.
3641 format. See :hg:`help diffs` for more information.
3643
3642
3644 With the --switch-parent option, the diff will be against the
3643 With the --switch-parent option, the diff will be against the
3645 second parent. It can be useful to review a merge.
3644 second parent. It can be useful to review a merge.
3646
3645
3647 .. container:: verbose
3646 .. container:: verbose
3648
3647
3649 Examples:
3648 Examples:
3650
3649
3651 - use export and import to transplant a bugfix to the current
3650 - use export and import to transplant a bugfix to the current
3652 branch::
3651 branch::
3653
3652
3654 hg export -r 9353 | hg import -
3653 hg export -r 9353 | hg import -
3655
3654
3656 - export all the changesets between two revisions to a file with
3655 - export all the changesets between two revisions to a file with
3657 rename information::
3656 rename information::
3658
3657
3659 hg export --git -r 123:150 > changes.txt
3658 hg export --git -r 123:150 > changes.txt
3660
3659
3661 - split outgoing changes into a series of patches with
3660 - split outgoing changes into a series of patches with
3662 descriptive names::
3661 descriptive names::
3663
3662
3664 hg export -r "outgoing()" -o "%n-%m.patch"
3663 hg export -r "outgoing()" -o "%n-%m.patch"
3665
3664
3666 Returns 0 on success.
3665 Returns 0 on success.
3667 """
3666 """
3668 changesets += tuple(opts.get('rev', []))
3667 changesets += tuple(opts.get('rev', []))
3669 if not changesets:
3668 if not changesets:
3670 changesets = ['.']
3669 changesets = ['.']
3671 revs = scmutil.revrange(repo, changesets)
3670 revs = scmutil.revrange(repo, changesets)
3672 if not revs:
3671 if not revs:
3673 raise error.Abort(_("export requires at least one changeset"))
3672 raise error.Abort(_("export requires at least one changeset"))
3674 if len(revs) > 1:
3673 if len(revs) > 1:
3675 ui.note(_('exporting patches:\n'))
3674 ui.note(_('exporting patches:\n'))
3676 else:
3675 else:
3677 ui.note(_('exporting patch:\n'))
3676 ui.note(_('exporting patch:\n'))
3678 cmdutil.export(repo, revs, template=opts.get('output'),
3677 cmdutil.export(repo, revs, template=opts.get('output'),
3679 switch_parent=opts.get('switch_parent'),
3678 switch_parent=opts.get('switch_parent'),
3680 opts=patch.diffallopts(ui, opts))
3679 opts=patch.diffallopts(ui, opts))
3681
3680
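# Editor's note -- illustrative sketch, not part of the original module: the
# -o/--output FORMAT string documented above is expanded once per exported
# changeset before a patch file is written. Very roughly (the keyword values
# below are hypothetical), the expansion behaves like:
#
#     subs = {'%n': '01', '%m': 'Add_delta', '%h': '7c2fd3b9020c'}
#     name = '%n-%m.patch'
#     for key, value in subs.items():
#         name = name.replace(key, value)
#     # name is now '01-Add_delta.patch'
#
# The real substitution happens inside cmdutil.export(), which receives the
# format string via template=opts.get('output') in the call above.
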
@command('files',
    [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
     ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
    ] + walkopts + formatteropts + subrepoopts,
    _('[OPTION]... [PATTERN]...'))
def files(ui, repo, *pats, **opts):
    """list tracked files

    Print files under Mercurial control in the working directory or
    specified revision whose names match the given patterns (excluding
    removed files).

    If no patterns are given to match, this command prints the names
    of all files under Mercurial control in the working directory.

    .. container:: verbose

      Examples:

      - list all files under the current directory::

          hg files .

      - shows sizes and flags for current revision::

          hg files -vr .

      - list all files named README::

          hg files -I "**/README"

      - list all binary files::

          hg files "set:binary()"

      - find files containing a regular expression::

          hg files "set:grep('bob')"

      - search tracked file contents with xargs and grep::

          hg files -0 | xargs -0 grep foo

    See :hg:`help patterns` and :hg:`help filesets` for more information
    on specifying file patterns.

    Returns 0 if a match is found, 1 otherwise.

    """
    ctx = scmutil.revsingle(repo, opts.get('rev'), None)

    end = '\n'
    if opts.get('print0'):
        end = '\0'
    fm = ui.formatter('files', opts)
    fmt = '%s' + end

    m = scmutil.match(ctx, pats, opts)
    ret = cmdutil.files(ui, ctx, m, fm, fmt, opts.get('subrepos'))

    fm.end()

    return ret

@command('^forget', walkopts, _('[OPTION]... FILE...'), inferrepo=True)
def forget(ui, repo, *pats, **opts):
    """forget the specified files on the next commit

    Mark the specified files so they will no longer be tracked
    after the next commit.

    This only removes files from the current branch, not from the
    entire project history, and it does not delete them from the
    working directory.

    To delete the file from the working directory, see :hg:`remove`.

    To undo a forget before the next commit, see :hg:`add`.

    .. container:: verbose

      Examples:

      - forget newly-added binary files::

          hg forget "set:added() and binary()"

      - forget files that would be excluded by .hgignore::

          hg forget "set:hgignore()"

    Returns 0 on success.
    """

    if not pats:
        raise error.Abort(_('no files specified'))

    m = scmutil.match(repo[None], pats, opts)
    rejected = cmdutil.forget(ui, repo, m, prefix="", explicitonly=False)[0]
    return rejected and 1 or 0

@command(
    'graft',
    [('r', 'rev', [], _('revisions to graft'), _('REV')),
     ('c', 'continue', False, _('resume interrupted graft')),
     ('e', 'edit', False, _('invoke editor on commit messages')),
     ('', 'log', None, _('append graft info to log message')),
     ('f', 'force', False, _('force graft')),
     ('D', 'currentdate', False,
      _('record the current date as commit date')),
     ('U', 'currentuser', False,
      _('record the current user as committer'), _('DATE'))]
    + commitopts2 + mergetoolopts + dryrunopts,
    _('[OPTION]... [-r] REV...'))
def graft(ui, repo, *revs, **opts):
    '''copy changes from other branches onto the current branch

    This command uses Mercurial's merge logic to copy individual
    changes from other branches without merging branches in the
    history graph. This is sometimes known as 'backporting' or
    'cherry-picking'. By default, graft will copy user, date, and
    description from the source changesets.

    Changesets that are ancestors of the current revision, that have
    already been grafted, or that are merges will be skipped.

    If --log is specified, log messages will have a comment appended
    of the form::

      (grafted from CHANGESETHASH)

    If --force is specified, revisions will be grafted even if they
    are already ancestors of or have been grafted to the destination.
    This is useful when the revisions have since been backed out.

    If a graft merge results in conflicts, the graft process is
    interrupted so that the current merge can be manually resolved.
    Once all conflicts are addressed, the graft process can be
    continued with the -c/--continue option.

    .. note::

       The -c/--continue option does not reapply earlier options, except
       for --force.

    .. container:: verbose

      Examples:

      - copy a single change to the stable branch and edit its description::

          hg update stable
          hg graft --edit 9393

      - graft a range of changesets with one exception, updating dates::

          hg graft -D "2085::2093 and not 2091"

      - continue a graft after resolving conflicts::

          hg graft -c

      - show the source of a grafted changeset::

          hg log --debug -r .

    See :hg:`help revisions` and :hg:`help revsets` for more about
    specifying revisions.

    Returns 0 on successful completion.
    '''
    wlock = None
    try:
        wlock = repo.wlock()
        return _dograft(ui, repo, *revs, **opts)
    finally:
        release(wlock)

def _dograft(ui, repo, *revs, **opts):
    revs = list(revs)
    revs.extend(opts['rev'])

    if not opts.get('user') and opts.get('currentuser'):
        opts['user'] = ui.username()
    if not opts.get('date') and opts.get('currentdate'):
        opts['date'] = "%d %d" % util.makedate()

    editor = cmdutil.getcommiteditor(editform='graft', **opts)

    cont = False
    if opts['continue']:
        cont = True
        if revs:
            raise error.Abort(_("can't specify --continue and revisions"))
        # read in unfinished revisions
        try:
            nodes = repo.vfs.read('graftstate').splitlines()
            revs = [repo[node].rev() for node in nodes]
        except IOError as inst:
            if inst.errno != errno.ENOENT:
                raise
            raise error.Abort(_("no graft state found, can't continue"))
    else:
        cmdutil.checkunfinished(repo)
        cmdutil.bailifchanged(repo)
        if not revs:
            raise error.Abort(_('no revisions specified'))
        revs = scmutil.revrange(repo, revs)

    skipped = set()
    # check for merges
    for rev in repo.revs('%ld and merge()', revs):
        ui.warn(_('skipping ungraftable merge revision %s\n') % rev)
        skipped.add(rev)
    revs = [r for r in revs if r not in skipped]
    if not revs:
        return -1

    # Don't check in the --continue case, in effect retaining --force across
    # --continues. That's because without --force, any revisions we decided to
    # skip would have been filtered out here, so they wouldn't have made their
    # way to the graftstate. With --force, any revisions we would have otherwise
    # skipped would not have been filtered out, and if they hadn't been applied
    # already, they'd have been in the graftstate.
    if not (cont or opts.get('force')):
        # check for ancestors of dest branch
        crev = repo['.'].rev()
        ancestors = repo.changelog.ancestors([crev], inclusive=True)
        # Cannot use x.remove(y) on smart set, this has to be a list.
        # XXX make this lazy in the future
        revs = list(revs)
        # don't mutate while iterating, create a copy
        for rev in list(revs):
            if rev in ancestors:
                ui.warn(_('skipping ancestor revision %d:%s\n') %
                        (rev, repo[rev]))
                # XXX remove on list is slow
                revs.remove(rev)
        if not revs:
            return -1

        # analyze revs for earlier grafts
        ids = {}
        for ctx in repo.set("%ld", revs):
            ids[ctx.hex()] = ctx.rev()
            n = ctx.extra().get('source')
            if n:
                ids[n] = ctx.rev()

        # check ancestors for earlier grafts
        ui.debug('scanning for duplicate grafts\n')

        for rev in repo.changelog.findmissingrevs(revs, [crev]):
            ctx = repo[rev]
            n = ctx.extra().get('source')
            if n in ids:
                try:
                    r = repo[n].rev()
                except error.RepoLookupError:
                    r = None
                if r in revs:
                    ui.warn(_('skipping revision %d:%s '
                              '(already grafted to %d:%s)\n')
                            % (r, repo[r], rev, ctx))
                    revs.remove(r)
                elif ids[n] in revs:
                    if r is None:
                        ui.warn(_('skipping already grafted revision %d:%s '
                                  '(%d:%s also has unknown origin %s)\n')
                                % (ids[n], repo[ids[n]], rev, ctx, n[:12]))
                    else:
                        ui.warn(_('skipping already grafted revision %d:%s '
                                  '(%d:%s also has origin %d:%s)\n')
                                % (ids[n], repo[ids[n]], rev, ctx, r, n[:12]))
                    revs.remove(ids[n])
            elif ctx.hex() in ids:
                r = ids[ctx.hex()]
                ui.warn(_('skipping already grafted revision %d:%s '
                          '(was grafted from %d:%s)\n') %
                        (r, repo[r], rev, ctx))
                revs.remove(r)
        if not revs:
            return -1

    try:
        for pos, ctx in enumerate(repo.set("%ld", revs)):
            desc = '%d:%s "%s"' % (ctx.rev(), ctx,
                                   ctx.description().split('\n', 1)[0])
            names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
            if names:
                desc += ' (%s)' % ' '.join(names)
            ui.status(_('grafting %s\n') % desc)
            if opts.get('dry_run'):
                continue

            extra = ctx.extra().copy()
            del extra['branch']
            source = extra.get('source')
            if source:
                extra['intermediate-source'] = ctx.hex()
            else:
                extra['source'] = ctx.hex()
            user = ctx.user()
            if opts.get('user'):
                user = opts['user']
            date = ctx.date()
            if opts.get('date'):
                date = opts['date']
            message = ctx.description()
            if opts.get('log'):
                message += '\n(grafted from %s)' % ctx.hex()

            # we don't merge the first commit when continuing
            if not cont:
                # perform the graft merge with p1(rev) as 'ancestor'
                try:
                    # ui.forcemerge is an internal variable, do not document
                    repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                                      'graft')
                    stats = mergemod.graft(repo, ctx, ctx.p1(),
                                           ['local', 'graft'])
                finally:
                    repo.ui.setconfig('ui', 'forcemerge', '', 'graft')
                # report any conflicts
                if stats and stats[3] > 0:
                    # write out state for --continue
                    nodelines = [repo[rev].hex() + "\n" for rev in revs[pos:]]
                    repo.vfs.write('graftstate', ''.join(nodelines))
                    extra = ''
                    if opts.get('user'):
                        extra += ' --user %s' % opts['user']
                    if opts.get('date'):
                        extra += ' --date %s' % opts['date']
                    if opts.get('log'):
                        extra += ' --log'
                    hint=_('use hg resolve and hg graft --continue%s') % extra
                    raise error.Abort(
                        _("unresolved conflicts, can't continue"),
                        hint=hint)
            else:
                cont = False

            # commit
            node = repo.commit(text=message, user=user,
                               date=date, extra=extra, editor=editor)
            if node is None:
                ui.warn(
                    _('note: graft of %d:%s created no changes to commit\n') %
                    (ctx.rev(), ctx))
    finally:
        # TODO: get rid of this meaningless try/finally enclosing.
        # this is kept only to reduce changes in a patch.
        pass

    # remove state when we complete successfully
    if not opts.get('dry_run'):
        util.unlinkpath(repo.join('graftstate'), ignoremissing=True)

    return 0

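# Editor's note -- illustrative sketch, not part of the original module: when a
# graft merge hits conflicts, _dograft() above writes the still-pending nodes
# to .hg/graftstate (one full hex node id per line) before aborting, and the
# --continue path reads the same file back. With two revisions left, the file
# would look roughly like this (node ids made up for illustration):
#
#     0123456789abcdef0123456789abcdef01234567
#     fedcba9876543210fedcba9876543210fedcba98
#
# That is why 'hg graft -c' takes no revision arguments: the pending set is
# recovered via repo.vfs.read('graftstate').splitlines().
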
@command('grep',
    [('0', 'print0', None, _('end fields with NUL')),
     ('', 'all', None, _('print all revisions that match')),
     ('a', 'text', None, _('treat all files as text')),
     ('f', 'follow', None,
      _('follow changeset history,'
        ' or file history across copies and renames')),
     ('i', 'ignore-case', None, _('ignore case when matching')),
     ('l', 'files-with-matches', None,
      _('print only filenames and revisions that match')),
     ('n', 'line-number', None, _('print matching line numbers')),
     ('r', 'rev', [],
      _('only search files changed within revision range'), _('REV')),
     ('u', 'user', None, _('list the author (long with -v)')),
     ('d', 'date', None, _('list the date (short with -q)')),
    ] + walkopts,
    _('[OPTION]... PATTERN [FILE]...'),
    inferrepo=True)
def grep(ui, repo, pattern, *pats, **opts):
    """search for a pattern in specified files and revisions

    Search revisions of files for a regular expression.

    This command behaves differently than Unix grep. It only accepts
    Python/Perl regexps. It searches repository history, not the
    working directory. It always prints the revision number in which a
    match appears.

    By default, grep only prints output for the first revision of a
    file in which it finds a match. To get it to print every revision
    that contains a change in match status ("-" for a match that
    becomes a non-match, or "+" for a non-match that becomes a match),
    use the --all flag.

    Returns 0 if a match is found, 1 otherwise.
    """
    reflags = re.M
    if opts.get('ignore_case'):
        reflags |= re.I
    try:
        regexp = util.re.compile(pattern, reflags)
    except re.error as inst:
        ui.warn(_("grep: invalid match pattern: %s\n") % inst)
        return 1
    sep, eol = ':', '\n'
    if opts.get('print0'):
        sep = eol = '\0'

    getfile = util.lrucachefunc(repo.file)

    def matchlines(body):
        begin = 0
        linenum = 0
        while begin < len(body):
            match = regexp.search(body, begin)
            if not match:
                break
            mstart, mend = match.span()
            linenum += body.count('\n', begin, mstart) + 1
            lstart = body.rfind('\n', begin, mstart) + 1 or begin
            begin = body.find('\n', mend) + 1 or len(body) + 1
            lend = begin - 1
            yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]

    class linestate(object):
        def __init__(self, line, linenum, colstart, colend):
            self.line = line
            self.linenum = linenum
            self.colstart = colstart
            self.colend = colend

        def __hash__(self):
            return hash((self.linenum, self.line))

        def __eq__(self, other):
            return self.line == other.line

        def __iter__(self):
            yield (self.line[:self.colstart], '')
            yield (self.line[self.colstart:self.colend], 'grep.match')
            rest = self.line[self.colend:]
            while rest != '':
                match = regexp.search(rest)
                if not match:
                    yield (rest, '')
                    break
                mstart, mend = match.span()
                yield (rest[:mstart], '')
                yield (rest[mstart:mend], 'grep.match')
                rest = rest[mend:]

    matches = {}
    copies = {}
    def grepbody(fn, rev, body):
        matches[rev].setdefault(fn, [])
        m = matches[rev][fn]
        for lnum, cstart, cend, line in matchlines(body):
            s = linestate(line, lnum, cstart, cend)
            m.append(s)

    def difflinestates(a, b):
        sm = difflib.SequenceMatcher(None, a, b)
        for tag, alo, ahi, blo, bhi in sm.get_opcodes():
            if tag == 'insert':
                for i in xrange(blo, bhi):
                    yield ('+', b[i])
            elif tag == 'delete':
                for i in xrange(alo, ahi):
                    yield ('-', a[i])
            elif tag == 'replace':
                for i in xrange(alo, ahi):
                    yield ('-', a[i])
                for i in xrange(blo, bhi):
                    yield ('+', b[i])

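    # Editor's note -- illustrative sketch, not part of the original module:
    # difflinestates() above drives the --all output. It feeds the parent's and
    # the child's linestate lists to difflib.SequenceMatcher and converts the
    # opcodes into '+'/'-' markers, e.g. (values hypothetical):
    #
    #     sm = difflib.SequenceMatcher(None, ['old match'], ['new match'])
    #     sm.get_opcodes()  # -> [('replace', 0, 1, 0, 1)]
    #     # which display() below renders as one '-' line and one '+' line
    #
    # Plain (non --all) runs skip this entirely and print every current match
    # with an empty change marker.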
    def display(fn, ctx, pstates, states):
        rev = ctx.rev()
        if ui.quiet:
            datefunc = util.shortdate
        else:
            datefunc = util.datestr
        found = False
        @util.cachefunc
        def binary():
            flog = getfile(fn)
            return util.binary(flog.read(ctx.filenode(fn)))

        if opts.get('all'):
            iter = difflinestates(pstates, states)
        else:
            iter = [('', l) for l in states]
        for change, l in iter:
            cols = [(fn, 'grep.filename'), (str(rev), 'grep.rev')]

            if opts.get('line_number'):
                cols.append((str(l.linenum), 'grep.linenumber'))
            if opts.get('all'):
                cols.append((change, 'grep.change'))
            if opts.get('user'):
                cols.append((ui.shortuser(ctx.user()), 'grep.user'))
            if opts.get('date'):
                cols.append((datefunc(ctx.date()), 'grep.date'))
            for col, label in cols[:-1]:
                ui.write(col, label=label)
                ui.write(sep, label='grep.sep')
            ui.write(cols[-1][0], label=cols[-1][1])
            if not opts.get('files_with_matches'):
                ui.write(sep, label='grep.sep')
                if not opts.get('text') and binary():
                    ui.write(" Binary file matches")
                else:
                    for s, label in l:
                        ui.write(s, label=label)
            ui.write(eol)
            found = True
            if opts.get('files_with_matches'):
                break
        return found

    skip = {}
    revfiles = {}
    matchfn = scmutil.match(repo[None], pats, opts)
    found = False
    follow = opts.get('follow')

    def prep(ctx, fns):
        rev = ctx.rev()
        pctx = ctx.p1()
        parent = pctx.rev()
        matches.setdefault(rev, {})
        matches.setdefault(parent, {})
        files = revfiles.setdefault(rev, [])
        for fn in fns:
            flog = getfile(fn)
            try:
                fnode = ctx.filenode(fn)
            except error.LookupError:
                continue

            copied = flog.renamed(fnode)
            copy = follow and copied and copied[0]
            if copy:
                copies.setdefault(rev, {})[fn] = copy
            if fn in skip:
                if copy:
                    skip[copy] = True
                continue
            files.append(fn)

            if fn not in matches[rev]:
                grepbody(fn, rev, flog.read(fnode))

            pfn = copy or fn
            if pfn not in matches[parent]:
                try:
                    fnode = pctx.filenode(pfn)
                    grepbody(pfn, parent, flog.read(fnode))
                except error.LookupError:
                    pass

    for ctx in cmdutil.walkchangerevs(repo, matchfn, opts, prep):
        rev = ctx.rev()
        parent = ctx.p1().rev()
        for fn in sorted(revfiles.get(rev, [])):
            states = matches[rev][fn]
            copy = copies.get(rev, {}).get(fn)
            if fn in skip:
                if copy:
                    skip[copy] = True
                continue
            pstates = matches.get(parent, {}).get(copy or fn, [])
            if pstates or states:
                r = display(fn, ctx, pstates, states)
                found = found or r
                if r and not opts.get('all'):
                    skip[fn] = True
                    if copy:
                        skip[copy] = True
        del matches[rev]
        del revfiles[rev]

    return not found

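# Editor's note -- illustrative sketch, not part of the original module: grep
# works in two passes. The prep() callback fills, per walked revision,
#
#     matches[rev][filename] = [linestate, ...]   # hits found in that file
#     revfiles[rev]          = [filename, ...]    # files touched at that rev
#
# and the loop over cmdutil.walkchangerevs() then calls display() with the
# current and parent states, deleting each revision's entries once printed so
# memory use stays bounded across long histories.
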
@command('heads',
    [('r', 'rev', '',
      _('show only heads which are descendants of STARTREV'), _('STARTREV')),
     ('t', 'topo', False, _('show topological heads only')),
     ('a', 'active', False, _('show active branchheads only (DEPRECATED)')),
     ('c', 'closed', False, _('show normal and closed branch heads')),
    ] + templateopts,
    _('[-ct] [-r STARTREV] [REV]...'))
def heads(ui, repo, *branchrevs, **opts):
    """show branch heads

    With no arguments, show all open branch heads in the repository.
    Branch heads are changesets that have no descendants on the
    same branch. They are where development generally takes place and
    are the usual targets for update and merge operations.

    If one or more REVs are given, only open branch heads on the
    branches associated with the specified changesets are shown. This
    means that you can use :hg:`heads .` to see the heads on the
    currently checked-out branch.

    If -c/--closed is specified, also show branch heads marked closed
    (see :hg:`commit --close-branch`).

    If STARTREV is specified, only those heads that are descendants of
    STARTREV will be displayed.

    If -t/--topo is specified, named branch mechanics will be ignored and only
    topological heads (changesets with no children) will be shown.

    Returns 0 if matching heads are found, 1 if not.
    """

    start = None
    if 'rev' in opts:
        start = scmutil.revsingle(repo, opts['rev'], None).node()

    if opts.get('topo'):
        heads = [repo[h] for h in repo.heads(start)]
    else:
        heads = []
        for branch in repo.branchmap():
            heads += repo.branchheads(branch, start, opts.get('closed'))
        heads = [repo[h] for h in heads]

    if branchrevs:
        branches = set(repo[br].branch() for br in branchrevs)
        heads = [h for h in heads if h.branch() in branches]

    if opts.get('active') and branchrevs:
        dagheads = repo.heads(start)
        heads = [h for h in heads if h.node() in dagheads]

    if branchrevs:
        haveheads = set(h.branch() for h in heads)
        if branches - haveheads:
            headless = ', '.join(b for b in branches - haveheads)
            msg = _('no open branch heads found on branches %s')
            if opts.get('rev'):
                msg += _(' (started at %s)') % opts['rev']
            ui.warn((msg + '\n') % headless)

    if not heads:
        return 1

    heads = sorted(heads, key=lambda x: -x.rev())
    displayer = cmdutil.show_changeset(ui, repo, opts)
    for ctx in heads:
        displayer.show(ctx)
    displayer.close()

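# Editor's note -- descriptive comment, not part of the original module: the
# two code paths above differ as follows. With --topo, heads() asks the
# changelog for changesets that simply have no children via repo.heads(start);
# without it, it unions repo.branchheads() over every entry in
# repo.branchmap(), so a named branch can contribute several heads even when
# only one of them is childless.
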
@command('help',
    [('e', 'extension', None, _('show only help for extensions')),
     ('c', 'command', None, _('show only help for commands')),
     ('k', 'keyword', None, _('show topics matching keyword')),
     ],
    _('[-eck] [TOPIC]'),
    norepo=True)
def help_(ui, name=None, **opts):
    """show help for a given topic or a help overview

    With no arguments, print a list of commands with short help messages.

    Given a topic, extension, or command name, print help for that
    topic.

    Returns 0 if successful.
    """

    textwidth = min(ui.termwidth(), 80) - 2

    keep = []
    if ui.verbose:
        keep.append('verbose')
    if sys.platform.startswith('win'):
        keep.append('windows')
    elif sys.platform == 'OpenVMS':
        keep.append('vms')
    elif sys.platform == 'plan9':
        keep.append('plan9')
    else:
        keep.append('unix')
        keep.append(sys.platform.lower())

    section = None
    if name and '.' in name:
        name, section = name.split('.', 1)
        section = section.lower()

    text = help.help_(ui, name, **opts)

    formatted, pruned = minirst.format(text, textwidth, keep=keep,
                                       section=section)

    # We could have been given a weird ".foo" section without a name
    # to look for, or we could have simply failed to found "foo.bar"
    # because bar isn't a section of foo
    if section and not (formatted and name):
        raise error.Abort(_("help section not found"))

    if 'verbose' in pruned:
        keep.append('omitted')
    else:
        keep.append('notomitted')
    formatted, pruned = minirst.format(text, textwidth, keep=keep,
                                       section=section)
    ui.write(formatted)


@command('identify|id',
    [('r', 'rev', '',
      _('identify the specified revision'), _('REV')),
     ('n', 'num', None, _('show local revision number')),
     ('i', 'id', None, _('show global revision id')),
     ('b', 'branch', None, _('show branch')),
     ('t', 'tags', None, _('show tags')),
     ('B', 'bookmarks', None, _('show bookmarks')),
    ] + remoteopts,
    _('[-nibtB] [-r REV] [SOURCE]'),
    optionalrepo=True)
def identify(ui, repo, source=None, rev=None,
             num=None, id=None, branch=None, tags=None, bookmarks=None, **opts):
    """identify the working directory or specified revision

    Print a summary identifying the repository state at REV using one or
    two parent hash identifiers, followed by a "+" if the working
    directory has uncommitted changes, the branch name (if not default),
    a list of tags, and a list of bookmarks.

    When REV is not given, print a summary of the current state of the
    repository.

    Specifying a path to a repository root or Mercurial bundle will
    cause lookup to operate on that repository/bundle.

    .. container:: verbose

      Examples:

      - generate a build identifier for the working directory::

          hg id --id > build-id.dat

      - find the revision corresponding to a tag::

          hg id -n -r 1.3

      - check the most recent revision of a remote repository::

          hg id -r tip http://selenic.com/hg/

    See :hg:`log` for generating more information about specific revisions,
    including full hash identifiers.

    Returns 0 if successful.
    """

    if not repo and not source:
        raise error.Abort(_("there is no Mercurial repository here "
                            "(.hg not found)"))

    if ui.debugflag:
        hexfunc = hex
    else:
        hexfunc = short
    default = not (num or id or branch or tags or bookmarks)
    output = []
    revs = []

    if source:
        source, branches = hg.parseurl(ui.expandpath(source))
        peer = hg.peer(repo or ui, opts, source) # only pass ui when no repo
        repo = peer.local()
        revs, checkout = hg.addbranchrevs(repo, peer, branches, None)

    if not repo:
        if num or branch or tags:
            raise error.Abort(
                _("can't query remote revision number, branch, or tags"))
        if not rev and revs:
            rev = revs[0]
        if not rev:
            rev = "tip"

        remoterev = peer.lookup(rev)
        if default or id:
            output = [hexfunc(remoterev)]

        def getbms():
            bms = []

            if 'bookmarks' in peer.listkeys('namespaces'):
                hexremoterev = hex(remoterev)
                bms = [bm for bm, bmr in peer.listkeys('bookmarks').iteritems()
                       if bmr == hexremoterev]

            return sorted(bms)

        if bookmarks:
            output.extend(getbms())
        elif default and not ui.quiet:
            # multiple bookmarks for a single parent separated by '/'
            bm = '/'.join(getbms())
            if bm:
                output.append(bm)
    else:
        ctx = scmutil.revsingle(repo, rev, None)

        if ctx.rev() is None:
            ctx = repo[None]
            parents = ctx.parents()
            taglist = []
            for p in parents:
                taglist.extend(p.tags())

            changed = ""
            if default or id or num:
                if (any(repo.status())
                    or any(ctx.sub(s).dirty() for s in ctx.substate)):
                    changed = '+'
            if default or id:
                output = ["%s%s" %
                  ('+'.join([hexfunc(p.node()) for p in parents]), changed)]
            if num:
                output.append("%s%s" %
                  ('+'.join([str(p.rev()) for p in parents]), changed))
        else:
            if default or id:
                output = [hexfunc(ctx.node())]
            if num:
                output.append(str(ctx.rev()))
            taglist = ctx.tags()

        if default and not ui.quiet:
            b = ctx.branch()
            if b != 'default':
                output.append("(%s)" % b)

            # multiple tags for a single parent separated by '/'
            t = '/'.join(taglist)
            if t:
                output.append(t)

            # multiple bookmarks for a single parent separated by '/'
            bm = '/'.join(ctx.bookmarks())
            if bm:
                output.append(bm)
        else:
            if branch:
                output.append(ctx.branch())

            if tags:
                output.extend(taglist)

            if bookmarks:
                output.extend(ctx.bookmarks())

    ui.write("%s\n" % ' '.join(output))

@command('import|patch',
    [('p', 'strip', 1,
     _('directory strip option for patch. This has the same '
       'meaning as the corresponding patch option'), _('NUM')),
    ('b', 'base', '', _('base path (DEPRECATED)'), _('PATH')),
    ('e', 'edit', False, _('invoke editor on commit messages')),
    ('f', 'force', None,
     _('skip check for outstanding uncommitted changes (DEPRECATED)')),
    ('', 'no-commit', None,
     _("don't commit, just update the working directory")),
    ('', 'bypass', None,
     _("apply patch without touching the working directory")),
    ('', 'partial', None,
     _('commit even if some hunks fail')),
    ('', 'exact', None,
     _('apply patch to the nodes from which it was generated')),
    ('', 'prefix', '',
     _('apply patch to subdirectory'), _('DIR')),
    ('', 'import-branch', None,
     _('use any branch information in patch (implied by --exact)'))] +
    commitopts + commitopts2 + similarityopts,
    _('[OPTION]... PATCH...'))
def import_(ui, repo, patch1=None, *patches, **opts):
    """import an ordered set of patches

    Import a list of patches and commit them individually (unless
    --no-commit is specified).

    Because import first applies changes to the working directory,
    import will abort if there are outstanding changes.

    You can import a patch straight from a mail message. Even patches
    as attachments work (to use the body part, it must have type
    text/plain or text/x-patch). From and Subject headers of the email
    message are used as the default committer and commit message. All
    text/plain body parts before the first diff are added to the commit
    message.

    If the imported patch was generated by :hg:`export`, user and
    description from patch override values from message headers and
    body. Values given on command line with -m/--message and -u/--user
    override these.

    If --exact is specified, import will set the working directory to
    the parent of each patch before applying it, and will abort if the
    resulting changeset has a different ID than the one recorded in
    the patch. This may happen due to character set problems or other
    deficiencies in the text patch format.

    Use --bypass to apply and commit patches directly to the
    repository, not touching the working directory. Without --exact,
    patches will be applied on top of the working directory parent
    revision.

    With -s/--similarity, hg will attempt to discover renames and
    copies in the patch in the same way as :hg:`addremove`.

    Use --partial to ensure a changeset will be created from the patch
    even if some hunks fail to apply. Hunks that fail to apply will be
    written to a <target-file>.rej file. Conflicts can then be resolved
    by hand before :hg:`commit --amend` is run to update the created
    changeset. This flag exists to let people import patches that
    partially apply without losing the associated metadata (author,
    date, description, ...). Note that when none of the hunks apply
    cleanly, :hg:`import --partial` will create an empty changeset,
    importing only the patch metadata.

    It is possible to use external patch programs to perform the patch
    by setting the ``ui.patch`` configuration option. For the default
    internal tool, the fuzz can also be configured via ``patch.fuzz``.
    See :hg:`help config` for more information about configuration
    files and how to use these options.

    To read a patch from standard input, use "-" as the patch name. If
    a URL is specified, the patch will be downloaded from it.
    See :hg:`help dates` for a list of formats valid for -d/--date.

    .. container:: verbose

      Examples:

      - import a traditional patch from a website and detect renames::

          hg import -s 80 http://example.com/bugfix.patch

      - import a changeset from an hgweb server::

          hg import http://www.selenic.com/hg/rev/5ca8c111e9aa

      - import all the patches in a Unix-style mbox::

          hg import incoming-patches.mbox

      - attempt to exactly restore an exported changeset (not always
        possible)::

          hg import --exact proposed-fix.patch

      - use an external tool to apply a patch which is too fuzzy for
        the default internal tool::

          hg import --config ui.patch="patch --merge" fuzzy.patch

      - change the default fuzzing from 2 to a less strict 7::

          hg import --config ui.fuzz=7 fuzz.patch

    Returns 0 on success, 1 on partial success (see --partial).
    """

    if not patch1:
        raise error.Abort(_('need at least one patch to import'))

    patches = (patch1,) + patches

    date = opts.get('date')
    if date:
        opts['date'] = util.parsedate(date)

    update = not opts.get('bypass')
    if not update and opts.get('no_commit'):
        raise error.Abort(_('cannot use --no-commit with --bypass'))
    try:
        sim = float(opts.get('similarity') or 0)
    except ValueError:
        raise error.Abort(_('similarity must be a number'))
    if sim < 0 or sim > 100:
        raise error.Abort(_('similarity must be between 0 and 100'))
    if sim and not update:
        raise error.Abort(_('cannot use --similarity with --bypass'))
    if opts.get('exact') and opts.get('edit'):
        raise error.Abort(_('cannot use --exact with --edit'))
    if opts.get('exact') and opts.get('prefix'):
        raise error.Abort(_('cannot use --exact with --prefix'))

    base = opts["base"]
    wlock = dsguard = lock = tr = None
    msgs = []
    ret = 0


    try:
        try:
            wlock = repo.wlock()

            if update:
                cmdutil.checkunfinished(repo)
            if (opts.get('exact') or not opts.get('force')) and update:
                cmdutil.bailifchanged(repo)

            if not opts.get('no_commit'):
                lock = repo.lock()
                tr = repo.transaction('import')
            else:
                dsguard = cmdutil.dirstateguard(repo, 'import')
            parents = repo[None].parents()
            for patchurl in patches:
                if patchurl == '-':
                    ui.status(_('applying patch from stdin\n'))
                    patchfile = ui.fin
                    patchurl = 'stdin'  # for error message
                else:
                    patchurl = os.path.join(base, patchurl)
                    ui.status(_('applying %s\n') % patchurl)
                    patchfile = hg.openpath(ui, patchurl)

                haspatch = False
                for hunk in patch.split(patchfile):
                    (msg, node, rej) = cmdutil.tryimportone(ui, repo, hunk,
                                                            parents, opts,
                                                            msgs, hg.clean)
                    if msg:
                        haspatch = True
                        ui.note(msg + '\n')
                    if update or opts.get('exact'):
                        parents = repo[None].parents()
                    else:
                        parents = [repo[node]]
                    if rej:
                        ui.write_err(_("patch applied partially\n"))
                        ui.write_err(_("(fix the .rej files and run "
                                       "`hg commit --amend`)\n"))
                        ret = 1
                        break

                if not haspatch:
                    raise error.Abort(_('%s: no diffs found') % patchurl)

            if tr:
                tr.close()
            if msgs:
                repo.savecommitmessage('\n* * *\n'.join(msgs))
            if dsguard:
                dsguard.close()
            return ret
        finally:
            # TODO: get rid of this meaningless try/finally enclosing.
            # this is kept only to reduce changes in a patch.
            pass
    finally:
        if tr:
            tr.release()
        release(lock, dsguard, wlock)

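# Illustrative sketch, not part of the original commands.py: driving the
# `hg import` flags documented above from a script via the CLI. The patch
# file name is hypothetical; assumes an `hg` executable on PATH.
def _example_import_partial():
    import subprocess
    # --partial still creates a changeset when some hunks fail; rejected
    # hunks land in <file>.rej for manual fix-up before `hg commit --amend`.
    return subprocess.call(['hg', 'import', '--partial', 'bugfix.patch'])
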
@command('incoming|in',
    [('f', 'force', None,
      _('run even if remote repository is unrelated')),
    ('n', 'newest-first', None, _('show newest record first')),
    ('', 'bundle', '',
     _('file to store the bundles into'), _('FILE')),
    ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
    ('B', 'bookmarks', False, _("compare bookmarks")),
    ('b', 'branch', [],
     _('a specific branch you would like to pull'), _('BRANCH')),
    ] + logopts + remoteopts + subrepoopts,
    _('[-p] [-n] [-M] [-f] [-r REV]... [--bundle FILENAME] [SOURCE]'))
def incoming(ui, repo, source="default", **opts):
    """show new changesets found in source

    Show new changesets found in the specified path/URL or the default
    pull location. These are the changesets that would have been pulled
    if a pull was requested at the time you issued this command.

    See pull for valid source format details.

    .. container:: verbose

      With -B/--bookmarks, the result of bookmark comparison between
      local and remote repositories is displayed. With -v/--verbose,
      status is also displayed for each bookmark like below::

        BM1               01234567890a added
        BM2               1234567890ab advanced
        BM3               234567890abc diverged
        BM4               34567890abcd changed

      The action taken locally when pulling depends on the
      status of each bookmark:

      :``added``: pull will create it
      :``advanced``: pull will update it
      :``diverged``: pull will create a divergent bookmark
      :``changed``: result depends on remote changesets

      From the point of view of pulling behavior, bookmarks
      existing only in the remote repository are treated as ``added``,
      even if they are in fact locally deleted.

    .. container:: verbose

      For a remote repository, using --bundle avoids downloading the
      changesets twice if the incoming command is followed by a pull.

      Examples:

      - show incoming changes with patches and full description::

          hg incoming -vp

      - show incoming changes excluding merges, store a bundle::

          hg in -vpM --bundle incoming.hg
          hg pull incoming.hg

      - briefly list changes inside a bundle::

          hg in changes.hg -T "{desc|firstline}\\n"

    Returns 0 if there are incoming changes, 1 otherwise.
    """
    if opts.get('graph'):
        cmdutil.checkunsupportedgraphflags([], opts)
        def display(other, chlist, displayer):
            revdag = cmdutil.graphrevs(other, chlist, opts)
            cmdutil.displaygraph(ui, repo, revdag, displayer,
                                 graphmod.asciiedges)

        hg._incoming(display, lambda: 1, ui, repo, source, opts, buffered=True)
        return 0

    if opts.get('bundle') and opts.get('subrepos'):
        raise error.Abort(_('cannot combine --bundle and --subrepos'))

    if opts.get('bookmarks'):
        source, branches = hg.parseurl(ui.expandpath(source),
                                       opts.get('branch'))
        other = hg.peer(repo, opts, source)
        if 'bookmarks' not in other.listkeys('namespaces'):
            ui.warn(_("remote doesn't support bookmarks\n"))
            return 0
        ui.status(_('comparing with %s\n') % util.hidepassword(source))
        return bookmarks.incoming(ui, repo, other)

    repo._subtoppath = ui.expandpath(source)
    try:
        return hg.incoming(ui, repo, source, opts)
    finally:
        del repo._subtoppath


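# Illustrative sketch, not part of the original commands.py: the docstring
# above notes that `hg incoming` exits 0 when there are incoming changesets
# and 1 otherwise, so the exit code can drive a simple poll. Assumes an `hg`
# executable on PATH and a configured default pull path.
def _example_has_incoming():
    import subprocess
    rc = subprocess.call(['hg', 'incoming', '-q'])
    return rc == 0  # True when the default pull source has new changesets
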
@command('^init', remoteopts, _('[-e CMD] [--remotecmd CMD] [DEST]'),
         norepo=True)
def init(ui, dest=".", **opts):
    """create a new repository in the given directory

    Initialize a new repository in the given directory. If the given
    directory does not exist, it will be created.

    If no directory is given, the current directory is used.

    It is possible to specify an ``ssh://`` URL as the destination.
    See :hg:`help urls` for more information.

    Returns 0 on success.
    """
    hg.peer(ui, opts, ui.expandpath(dest), create=True)

@command('locate',
    [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
    ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
    ('f', 'fullpath', None, _('print complete paths from the filesystem root')),
    ] + walkopts,
    _('[OPTION]... [PATTERN]...'))
def locate(ui, repo, *pats, **opts):
    """locate files matching specific patterns (DEPRECATED)

    Print files under Mercurial control in the working directory whose
    names match the given patterns.

    By default, this command searches all directories in the working
    directory. To search just the current directory and its
    subdirectories, use "--include .".

    If no patterns are given to match, this command prints the names
    of all files under Mercurial control in the working directory.

    If you want to feed the output of this command into the "xargs"
    command, use the -0 option to both this command and "xargs". This
    will avoid the problem of "xargs" treating single filenames that
    contain whitespace as multiple filenames.

    See :hg:`help files` for a more versatile command.

    Returns 0 if a match is found, 1 otherwise.
    """
    if opts.get('print0'):
        end = '\0'
    else:
        end = '\n'
    rev = scmutil.revsingle(repo, opts.get('rev'), None).node()

    ret = 1
    ctx = repo[rev]
    m = scmutil.match(ctx, pats, opts, default='relglob',
                      badfn=lambda x, y: False)

    for abs in ctx.matches(m):
        if opts.get('fullpath'):
            ui.write(repo.wjoin(abs), end)
        else:
            ui.write(((pats and m.rel(abs)) or abs), end)
        ret = 0

    return ret

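# Illustrative sketch, not part of the original commands.py: consuming the
# NUL-separated output of `hg locate -0`, the whitespace-safe pattern the
# docstring above recommends for use with xargs. Assumes `hg` on PATH; note
# that check_output raises CalledProcessError when nothing matches (exit 1).
def _example_locate_null_separated(pattern='relglob:**.py'):
    import subprocess
    out = subprocess.check_output(['hg', 'locate', '-0', pattern])
    return [f for f in out.split('\0') if f]
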
@command('^log|history',
    [('f', 'follow', None,
     _('follow changeset history, or file history across copies and renames')),
    ('', 'follow-first', None,
     _('only follow the first parent of merge changesets (DEPRECATED)')),
    ('d', 'date', '', _('show revisions matching date spec'), _('DATE')),
    ('C', 'copies', None, _('show copied files')),
    ('k', 'keyword', [],
     _('do case-insensitive search for a given text'), _('TEXT')),
    ('r', 'rev', [], _('show the specified revision or revset'), _('REV')),
    ('', 'removed', None, _('include revisions where files were removed')),
    ('m', 'only-merges', None, _('show only merges (DEPRECATED)')),
    ('u', 'user', [], _('revisions committed by user'), _('USER')),
    ('', 'only-branch', [],
     _('show only changesets within the given named branch (DEPRECATED)'),
     _('BRANCH')),
    ('b', 'branch', [],
     _('show changesets within the given named branch'), _('BRANCH')),
    ('P', 'prune', [],
     _('do not display revision or any of its ancestors'), _('REV')),
    ] + logopts + walkopts,
    _('[OPTION]... [FILE]'),
    inferrepo=True)
def log(ui, repo, *pats, **opts):
    """show revision history of entire repository or files

    Print the revision history of the specified files or the entire
    project.

    If no revision range is specified, the default is ``tip:0`` unless
    --follow is set, in which case the working directory parent is
    used as the starting revision.

    File history is shown without following rename or copy history of
    files. Use -f/--follow with a filename to follow history across
    renames and copies. --follow without a filename will only show
    ancestors or descendants of the starting revision.

    By default this command prints revision number and changeset id,
    tags, non-trivial parents, user, date and time, and a summary for
    each commit. When the -v/--verbose switch is used, the list of
    changed files and full commit message are shown.

    With --graph the revisions are shown as an ASCII art DAG with the most
    recent changeset at the top.
    'o' is a changeset, '@' is a working directory parent, 'x' is obsolete,
    and '+' represents a fork where the changeset from the lines below is a
    parent of the 'o' merge on the same line.

    .. note::

       log -p/--patch may generate unexpected diff output for merge
       changesets, as it will only compare the merge changeset against
       its first parent. Also, only files different from BOTH parents
       will appear in files:.

    .. note::

       for performance reasons, log FILE may omit duplicate changes
       made on branches and will not show removals or mode changes. To
       see all such changes, use the --removed switch.

    .. container:: verbose

      Some examples:

      - changesets with full descriptions and file lists::

          hg log -v

      - changesets ancestral to the working directory::

          hg log -f

      - last 10 commits on the current branch::

          hg log -l 10 -b .

      - changesets showing all modifications of a file, including removals::

          hg log --removed file.c

      - all changesets that touch a directory, with diffs, excluding merges::

          hg log -Mp lib/

      - all revision numbers that match a keyword::

          hg log -k bug --template "{rev}\\n"

      - the full hash identifier of the working directory parent::

          hg log -r . --template "{node}\\n"

      - list available log templates::

          hg log -T list

      - check if a given changeset is included in a tagged release::

          hg log -r "a21ccf and ancestor(1.9)"

      - find all changesets by some user in a date range::

          hg log -k alice -d "may 2008 to jul 2008"

      - summary of all changesets after the last tag::

          hg log -r "last(tagged())::" --template "{desc|firstline}\\n"

    See :hg:`help dates` for a list of formats valid for -d/--date.

    See :hg:`help revisions` and :hg:`help revsets` for more about
    specifying revisions.

    See :hg:`help templates` for more about pre-packaged styles and
    specifying custom templates.

    Returns 0 on success.

    """
    if opts.get('follow') and opts.get('rev'):
        opts['rev'] = [revset.formatspec('reverse(::%lr)', opts.get('rev'))]
        del opts['follow']

    if opts.get('graph'):
        return cmdutil.graphlog(ui, repo, *pats, **opts)

    revs, expr, filematcher = cmdutil.getlogrevs(repo, pats, opts)
    limit = cmdutil.loglimit(opts)
    count = 0

    getrenamed = None
    if opts.get('copies'):
        endrev = None
        if opts.get('rev'):
            endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
        getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)

    displayer = cmdutil.show_changeset(ui, repo, opts, buffered=True)
    for rev in revs:
        if count == limit:
            break
        ctx = repo[rev]
        copies = None
        if getrenamed is not None and rev:
            copies = []
            for fn in ctx.files():
                rename = getrenamed(fn, rev)
                if rename:
                    copies.append((fn, rename[0]))
        if filematcher:
            revmatchfn = filematcher(ctx.rev())
        else:
            revmatchfn = None
        displayer.show(ctx, copies=copies, matchfn=revmatchfn)
        if displayer.flush(ctx):
            count += 1

    displayer.close()

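# Illustrative sketch, not part of the original commands.py: the docstring
# above shows `hg log -r . --template "{node}\n"` as the way to get the full
# hash of the working directory parent; wrapped here for use from a script.
# Assumes `hg` on PATH and that it is run inside a repository.
def _example_wdir_parent_node():
    import subprocess
    out = subprocess.check_output(['hg', 'log', '-r', '.',
                                   '--template', '{node}\n'])
    return out.strip()
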
@command('manifest',
    [('r', 'rev', '', _('revision to display'), _('REV')),
     ('', 'all', False, _("list files from all revisions"))]
    + formatteropts,
    _('[-r REV]'))
def manifest(ui, repo, node=None, rev=None, **opts):
    """output the current or given revision of the project manifest

    Print a list of version controlled files for the given revision.
    If no revision is given, the first parent of the working directory
    is used, or the null revision if no revision is checked out.

    With -v, print file permissions, symlink and executable bits.
    With --debug, print file revision hashes.

    If option --all is specified, the list of all files from all revisions
    is printed. This includes deleted and renamed files.

    Returns 0 on success.
    """

    fm = ui.formatter('manifest', opts)

    if opts.get('all'):
        if rev or node:
            raise error.Abort(_("can't specify a revision with --all"))

        res = []
        prefix = "data/"
        suffix = ".i"
        plen = len(prefix)
        slen = len(suffix)
        lock = repo.lock()
        try:
            for fn, b, size in repo.store.datafiles():
                if size != 0 and fn[-slen:] == suffix and fn[:plen] == prefix:
                    res.append(fn[plen:-slen])
        finally:
            lock.release()
        for f in res:
            fm.startitem()
            fm.write("path", '%s\n', f)
        fm.end()
        return

    if rev and node:
        raise error.Abort(_("please specify just one revision"))

    if not node:
        node = rev

    char = {'l': '@', 'x': '*', '': ''}
    mode = {'l': '644', 'x': '755', '': '644'}
    ctx = scmutil.revsingle(repo, node)
    mf = ctx.manifest()
    for f in ctx:
        fm.startitem()
        fl = ctx[f].flags()
        fm.condwrite(ui.debugflag, 'hash', '%s ', hex(mf[f]))
        fm.condwrite(ui.verbose, 'mode type', '%s %1s ', mode[fl], char[fl])
        fm.write('path', '%s\n', f)
    fm.end()

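# Illustrative sketch, not part of the original commands.py: listing the
# files tracked in a given revision with `hg manifest -r REV`, as documented
# above. The revision name is only an example; assumes `hg` on PATH.
def _example_manifest(rev='tip'):
    import subprocess
    out = subprocess.check_output(['hg', 'manifest', '-r', rev])
    return out.splitlines()
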
@command('^merge',
    [('f', 'force', None,
      _('force a merge including outstanding changes (DEPRECATED)')),
    ('r', 'rev', '', _('revision to merge'), _('REV')),
    ('P', 'preview', None,
     _('review revisions to merge (no merge is performed)'))
     ] + mergetoolopts,
    _('[-P] [-f] [[-r] REV]'))
def merge(ui, repo, node=None, **opts):
    """merge another revision into working directory

    The current working directory is updated with all changes made in
    the requested revision since the last common predecessor revision.

    Files that changed between either parent are marked as changed for
    the next commit and a commit must be performed before any further
    updates to the repository are allowed. The next commit will have
    two parents.

    ``--tool`` can be used to specify the merge tool used for file
    merges. It overrides the HGMERGE environment variable and your
    configuration files. See :hg:`help merge-tools` for options.

    If no revision is specified, the working directory's parent is a
    head revision, and the current branch contains exactly one other
    head, the other head is merged by default. Otherwise, an
    explicit revision with which to merge must be provided.

    :hg:`resolve` must be used to resolve unresolved files.

    To undo an uncommitted merge, use :hg:`update --clean .` which
    will check out a clean copy of the original merge parent, losing
    all changes.

    Returns 0 on success, 1 if there are unresolved files.
    """

    if opts.get('rev') and node:
        raise error.Abort(_("please specify just one revision"))
    if not node:
        node = opts.get('rev')

    if node:
        node = scmutil.revsingle(repo, node).node()

    if not node:
        node = repo[destutil.destmerge(repo)].node()

    if opts.get('preview'):
        # find nodes that are ancestors of p2 but not of p1
        p1 = repo.lookup('.')
        p2 = repo.lookup(node)
        nodes = repo.changelog.findmissing(common=[p1], heads=[p2])

        displayer = cmdutil.show_changeset(ui, repo, opts)
        for node in nodes:
            displayer.show(repo[node])
        displayer.close()
        return 0

    try:
        # ui.forcemerge is an internal variable, do not document
        repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''), 'merge')
        return hg.merge(repo, node, force=opts.get('force'))
    finally:
        ui.setconfig('ui', 'forcemerge', '', 'merge')

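# Illustrative sketch, not part of the original commands.py: the -P/--preview
# flag documented above lists the changesets a merge would bring in without
# touching the working directory, which makes a safe dry run before the real
# merge. The revision argument is hypothetical; assumes `hg` on PATH.
def _example_preview_then_merge(rev):
    import subprocess
    subprocess.call(['hg', 'merge', '-P', rev])   # review the revisions first
    return subprocess.call(['hg', 'merge', rev])  # 0 ok, 1 unresolved files
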
@command('outgoing|out',
    [('f', 'force', None, _('run even when the destination is unrelated')),
    ('r', 'rev', [],
     _('a changeset intended to be included in the destination'), _('REV')),
    ('n', 'newest-first', None, _('show newest record first')),
    ('B', 'bookmarks', False, _('compare bookmarks')),
    ('b', 'branch', [], _('a specific branch you would like to push'),
     _('BRANCH')),
    ] + logopts + remoteopts + subrepoopts,
    _('[-M] [-p] [-n] [-f] [-r REV]... [DEST]'))
def outgoing(ui, repo, dest=None, **opts):
    """show changesets not found in the destination

    Show changesets not found in the specified destination repository
    or the default push location. These are the changesets that would
    be pushed if a push was requested.

    See pull for details of valid destination formats.

    .. container:: verbose

      With -B/--bookmarks, the result of bookmark comparison between
      local and remote repositories is displayed. With -v/--verbose,
      status is also displayed for each bookmark like below::

        BM1               01234567890a added
        BM2                            deleted
        BM3               234567890abc advanced
        BM4               34567890abcd diverged
        BM5               4567890abcde changed

      The action taken when pushing depends on the
      status of each bookmark:

      :``added``: push with ``-B`` will create it
      :``deleted``: push with ``-B`` will delete it
      :``advanced``: push will update it
      :``diverged``: push with ``-B`` will update it
      :``changed``: push with ``-B`` will update it

      From the point of view of pushing behavior, bookmarks
      existing only in the remote repository are treated as
      ``deleted``, even if they are in fact added remotely.

    Returns 0 if there are outgoing changes, 1 otherwise.
    """
    if opts.get('graph'):
        cmdutil.checkunsupportedgraphflags([], opts)
        o, other = hg._outgoing(ui, repo, dest, opts)
        if not o:
            cmdutil.outgoinghooks(ui, repo, other, opts, o)
            return

        revdag = cmdutil.graphrevs(repo, o, opts)
        displayer = cmdutil.show_changeset(ui, repo, opts, buffered=True)
        cmdutil.displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges)
        cmdutil.outgoinghooks(ui, repo, other, opts, o)
        return 0

    if opts.get('bookmarks'):
        dest = ui.expandpath(dest or 'default-push', dest or 'default')
        dest, branches = hg.parseurl(dest, opts.get('branch'))
        other = hg.peer(repo, opts, dest)
        if 'bookmarks' not in other.listkeys('namespaces'):
            ui.warn(_("remote doesn't support bookmarks\n"))
            return 0
        ui.status(_('comparing with %s\n') % util.hidepassword(dest))
        return bookmarks.outgoing(ui, repo, other)

    repo._subtoppath = ui.expandpath(dest or 'default-push', dest or 'default')
    try:
        return hg.outgoing(ui, repo, dest, opts)
    finally:
        del repo._subtoppath

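# Illustrative sketch, not part of the original commands.py: like incoming,
# `hg outgoing` exits 0 when there are changesets to push and 1 otherwise
# (see the docstring above), so the exit code answers "is there anything to
# push?". Assumes `hg` on PATH and a configured default push path.
def _example_has_outgoing():
    import subprocess
    return subprocess.call(['hg', 'outgoing', '-q']) == 0
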
@command('parents',
    [('r', 'rev', '', _('show parents of the specified revision'), _('REV')),
    ] + templateopts,
    _('[-r REV] [FILE]'),
    inferrepo=True)
def parents(ui, repo, file_=None, **opts):
    """show the parents of the working directory or revision (DEPRECATED)

    Print the working directory's parent revisions. If a revision is
    given via -r/--rev, the parent of that revision will be printed.
    If a file argument is given, the revision in which the file was
    last changed (before the working directory revision or the
    argument to --rev if given) is printed.

    This command is equivalent to::

        hg log -r "parents()" or
        hg log -r "parents(REV)" or
        hg log -r "max(file(FILE))" or
        hg log -r "max(::REV and file(FILE))"

    See :hg:`summary` and :hg:`help revsets` for related information.

    Returns 0 on success.
    """

    ctx = scmutil.revsingle(repo, opts.get('rev'), None)

    if file_:
        m = scmutil.match(ctx, (file_,), opts)
        if m.anypats() or len(m.files()) != 1:
            raise error.Abort(_('can only specify an explicit filename'))
        file_ = m.files()[0]
        filenodes = []
        for cp in ctx.parents():
            if not cp:
                continue
            try:
                filenodes.append(cp.filenode(file_))
            except error.LookupError:
                pass
        if not filenodes:
            raise error.Abort(_("'%s' not found in manifest!") % file_)
        p = []
        for fn in filenodes:
            fctx = repo.filectx(file_, fileid=fn)
            p.append(fctx.node())
    else:
        p = [cp.node() for cp in ctx.parents()]

    displayer = cmdutil.show_changeset(ui, repo, opts)
    for n in p:
        if n != nullid:
            displayer.show(repo[n])
    displayer.close()

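# Illustrative sketch, not part of the original commands.py: `hg parents` is
# deprecated, and its docstring above gives the revset equivalents; this uses
# the plain `parents()` form through `hg log`. Assumes `hg` on PATH.
def _example_parents():
    import subprocess
    out = subprocess.check_output(['hg', 'log', '-r', 'parents()',
                                   '--template', '{rev}:{node|short}\n'])
    return out.splitlines()
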
5331 @command('paths', [], _('[NAME]'), optionalrepo=True)
5330 @command('paths', [], _('[NAME]'), optionalrepo=True)
5332 def paths(ui, repo, search=None):
5331 def paths(ui, repo, search=None):
5333 """show aliases for remote repositories
5332 """show aliases for remote repositories
5334
5333
5335 Show definition of symbolic path name NAME. If no name is given,
5334 Show definition of symbolic path name NAME. If no name is given,
5336 show definition of all available names.
5335 show definition of all available names.
5337
5336
5338 Option -q/--quiet suppresses all output when searching for NAME
5337 Option -q/--quiet suppresses all output when searching for NAME
5339 and shows only the path names when listing all definitions.
5338 and shows only the path names when listing all definitions.
5340
5339
5341 Path names are defined in the [paths] section of your
5340 Path names are defined in the [paths] section of your
5342 configuration file and in ``/etc/mercurial/hgrc``. If run inside a
5341 configuration file and in ``/etc/mercurial/hgrc``. If run inside a
5343 repository, ``.hg/hgrc`` is used, too.
5342 repository, ``.hg/hgrc`` is used, too.
5344
5343
5345 The path names ``default`` and ``default-push`` have a special
5344 The path names ``default`` and ``default-push`` have a special
5346 meaning. When performing a push or pull operation, they are used
5345 meaning. When performing a push or pull operation, they are used
5347 as fallbacks if no location is specified on the command-line.
5346 as fallbacks if no location is specified on the command-line.
5348 When ``default-push`` is set, it will be used for push and
5347 When ``default-push`` is set, it will be used for push and
5349 ``default`` will be used for pull; otherwise ``default`` is used
5348 ``default`` will be used for pull; otherwise ``default`` is used
5350 as the fallback for both. When cloning a repository, the clone
5349 as the fallback for both. When cloning a repository, the clone
5351 source is written as ``default`` in ``.hg/hgrc``. Note that
5350 source is written as ``default`` in ``.hg/hgrc``. Note that
5352 ``default`` and ``default-push`` apply to all inbound (e.g.
5351 ``default`` and ``default-push`` apply to all inbound (e.g.
5353 :hg:`incoming`) and outbound (e.g. :hg:`outgoing`, :hg:`email` and
5352 :hg:`incoming`) and outbound (e.g. :hg:`outgoing`, :hg:`email` and
5354 :hg:`bundle`) operations.
5353 :hg:`bundle`) operations.
5355
5354
5356 See :hg:`help urls` for more information.
5355 See :hg:`help urls` for more information.
5357
5356
5358 Returns 0 on success.
5357 Returns 0 on success.
5359 """
5358 """
5360 if search:
5359 if search:
5361 for name, path in sorted(ui.paths.iteritems()):
5360 for name, path in sorted(ui.paths.iteritems()):
5362 if name == search:
5361 if name == search:
5363 ui.status("%s\n" % util.hidepassword(path.rawloc))
5362 ui.status("%s\n" % util.hidepassword(path.rawloc))
5364 return
5363 return
5365 if not ui.quiet:
5364 if not ui.quiet:
5366 ui.warn(_("not found!\n"))
5365 ui.warn(_("not found!\n"))
5367 return 1
5366 return 1
5368 else:
5367 else:
5369 for name, path in sorted(ui.paths.iteritems()):
5368 for name, path in sorted(ui.paths.iteritems()):
5370 if ui.quiet:
5369 if ui.quiet:
5371 ui.write("%s\n" % name)
5370 ui.write("%s\n" % name)
5372 else:
5371 else:
5373 ui.write("%s = %s\n" % (name,
5372 ui.write("%s = %s\n" % (name,
5374 util.hidepassword(path.rawloc)))
5373 util.hidepassword(path.rawloc)))
5375 for subopt, value in sorted(path.suboptions.items()):
5374 for subopt, value in sorted(path.suboptions.items()):
5376 ui.write('%s:%s = %s\n' % (name, subopt, value))
5375 ui.write('%s:%s = %s\n' % (name, subopt, value))
5377
5376
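# A minimal stand-alone sketch of the ``default``/``default-push`` fallback
# rule described in the help text above, assuming paths is a plain dict of
# name -> location strings (hypothetical helper, not part of this module):
def pick_fallback_path(paths, for_push=False):
    """Return the location to use when none is given on the command line."""
    if for_push and 'default-push' in paths:
        return paths['default-push']
    return paths.get('default')

# e.g. with {'default': 'https://hg.example.org/repo',
#            'default-push': 'ssh://hg.example.org//repo'},
# pull falls back to 'default' while push prefers 'default-push'.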
5378 @command('phase',
5377 @command('phase',
5379 [('p', 'public', False, _('set changeset phase to public')),
5378 [('p', 'public', False, _('set changeset phase to public')),
5380 ('d', 'draft', False, _('set changeset phase to draft')),
5379 ('d', 'draft', False, _('set changeset phase to draft')),
5381 ('s', 'secret', False, _('set changeset phase to secret')),
5380 ('s', 'secret', False, _('set changeset phase to secret')),
5382 ('f', 'force', False, _('allow moving the boundary backward')),
5381 ('f', 'force', False, _('allow moving the boundary backward')),
5383 ('r', 'rev', [], _('target revision'), _('REV')),
5382 ('r', 'rev', [], _('target revision'), _('REV')),
5384 ],
5383 ],
5385 _('[-p|-d|-s] [-f] [-r] [REV...]'))
5384 _('[-p|-d|-s] [-f] [-r] [REV...]'))
5386 def phase(ui, repo, *revs, **opts):
5385 def phase(ui, repo, *revs, **opts):
5387 """set or show the current phase name
5386 """set or show the current phase name
5388
5387
5389 With no argument, show the phase name of the current revision(s).
5388 With no argument, show the phase name of the current revision(s).
5390
5389
5391 With one of -p/--public, -d/--draft or -s/--secret, change the
5390 With one of -p/--public, -d/--draft or -s/--secret, change the
5392 phase value of the specified revisions.
5391 phase value of the specified revisions.
5393
5392
5394 Unless -f/--force is specified, :hg:`phase` won't move changesets from a
5393 Unless -f/--force is specified, :hg:`phase` won't move changesets from a
5395 lower phase to a higher phase. Phases are ordered as follows::
5394 lower phase to a higher phase. Phases are ordered as follows::
5396
5395
5397 public < draft < secret
5396 public < draft < secret
5398
5397
5399 Returns 0 on success, 1 if some phases could not be changed.
5398 Returns 0 on success, 1 if some phases could not be changed.
5400
5399
5401 (For more information about the phases concept, see :hg:`help phases`.)
5400 (For more information about the phases concept, see :hg:`help phases`.)
5402 """
5401 """
5403 # search for a unique phase argument
5402 # search for a unique phase argument
5404 targetphase = None
5403 targetphase = None
5405 for idx, name in enumerate(phases.phasenames):
5404 for idx, name in enumerate(phases.phasenames):
5406 if opts[name]:
5405 if opts[name]:
5407 if targetphase is not None:
5406 if targetphase is not None:
5408 raise error.Abort(_('only one phase can be specified'))
5407 raise error.Abort(_('only one phase can be specified'))
5409 targetphase = idx
5408 targetphase = idx
5410
5409
5411 # look for specified revision
5410 # look for specified revision
5412 revs = list(revs)
5411 revs = list(revs)
5413 revs.extend(opts['rev'])
5412 revs.extend(opts['rev'])
5414 if not revs:
5413 if not revs:
5415 # display both parents as the second parent phase can influence
5414 # display both parents as the second parent phase can influence
5416 # the phase of a merge commit
5415 # the phase of a merge commit
5417 revs = [c.rev() for c in repo[None].parents()]
5416 revs = [c.rev() for c in repo[None].parents()]
5418
5417
5419 revs = scmutil.revrange(repo, revs)
5418 revs = scmutil.revrange(repo, revs)
5420
5419
5421 lock = None
5420 lock = None
5422 ret = 0
5421 ret = 0
5423 if targetphase is None:
5422 if targetphase is None:
5424 # display
5423 # display
5425 for r in revs:
5424 for r in revs:
5426 ctx = repo[r]
5425 ctx = repo[r]
5427 ui.write('%i: %s\n' % (ctx.rev(), ctx.phasestr()))
5426 ui.write('%i: %s\n' % (ctx.rev(), ctx.phasestr()))
5428 else:
5427 else:
5429 tr = None
5428 tr = None
5430 lock = repo.lock()
5429 lock = repo.lock()
5431 try:
5430 try:
5432 tr = repo.transaction("phase")
5431 tr = repo.transaction("phase")
5433 # set phase
5432 # set phase
5434 if not revs:
5433 if not revs:
5435 raise error.Abort(_('empty revision set'))
5434 raise error.Abort(_('empty revision set'))
5436 nodes = [repo[r].node() for r in revs]
5435 nodes = [repo[r].node() for r in revs]
5437 # moving revisions from public to draft may hide them
5436 # moving revisions from public to draft may hide them
5438 # We have to check the result on an unfiltered repository
5437 # We have to check the result on an unfiltered repository
5439 unfi = repo.unfiltered()
5438 unfi = repo.unfiltered()
5440 getphase = unfi._phasecache.phase
5439 getphase = unfi._phasecache.phase
5441 olddata = [getphase(unfi, r) for r in unfi]
5440 olddata = [getphase(unfi, r) for r in unfi]
5442 phases.advanceboundary(repo, tr, targetphase, nodes)
5441 phases.advanceboundary(repo, tr, targetphase, nodes)
5443 if opts['force']:
5442 if opts['force']:
5444 phases.retractboundary(repo, tr, targetphase, nodes)
5443 phases.retractboundary(repo, tr, targetphase, nodes)
5445 tr.close()
5444 tr.close()
5446 finally:
5445 finally:
5447 if tr is not None:
5446 if tr is not None:
5448 tr.release()
5447 tr.release()
5449 lock.release()
5448 lock.release()
5450 getphase = unfi._phasecache.phase
5449 getphase = unfi._phasecache.phase
5451 newdata = [getphase(unfi, r) for r in unfi]
5450 newdata = [getphase(unfi, r) for r in unfi]
5452 changes = sum(newdata[r] != olddata[r] for r in unfi)
5451 changes = sum(newdata[r] != olddata[r] for r in unfi)
5453 cl = unfi.changelog
5452 cl = unfi.changelog
5454 rejected = [n for n in nodes
5453 rejected = [n for n in nodes
5455 if newdata[cl.rev(n)] < targetphase]
5454 if newdata[cl.rev(n)] < targetphase]
5456 if rejected:
5455 if rejected:
5457 ui.warn(_('cannot move %i changesets to a higher '
5456 ui.warn(_('cannot move %i changesets to a higher '
5458 'phase, use --force\n') % len(rejected))
5457 'phase, use --force\n') % len(rejected))
5459 ret = 1
5458 ret = 1
5460 if changes:
5459 if changes:
5461 msg = _('phase changed for %i changesets\n') % changes
5460 msg = _('phase changed for %i changesets\n') % changes
5462 if ret:
5461 if ret:
5463 ui.status(msg)
5462 ui.status(msg)
5464 else:
5463 else:
5465 ui.note(msg)
5464 ui.note(msg)
5466 else:
5465 else:
5467 ui.warn(_('no phases changed\n'))
5466 ui.warn(_('no phases changed\n'))
5468 return ret
5467 return ret
5469
5468
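# A minimal sketch of the rejection check performed above, assuming phases
# are encoded as public=0 < draft=1 < secret=2 (matching the documented
# ordering) and newphase_by_node maps each requested node to its phase after
# phases.advanceboundary ran (hypothetical helper for illustration only):
PUBLIC, DRAFT, SECRET = 0, 1, 2

def rejected_nodes(newphase_by_node, targetphase):
    """Nodes whose phase is still lower than targetphase (needs --force)."""
    return [n for n, phase in newphase_by_node.items() if phase < targetphase]

# e.g. rejected_nodes({'abc': PUBLIC}, SECRET) -> ['abc'], so the command
# warns and returns 1 unless --force retracts the boundary as well.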
5470 def postincoming(ui, repo, modheads, optupdate, checkout):
5469 def postincoming(ui, repo, modheads, optupdate, checkout):
5471 if modheads == 0:
5470 if modheads == 0:
5472 return
5471 return
5473 if optupdate:
5472 if optupdate:
5474 try:
5473 try:
5475 brev = checkout
5474 brev = checkout
5476 movemarkfrom = None
5475 movemarkfrom = None
5477 if not checkout:
5476 if not checkout:
5478 updata = destutil.destupdate(repo)
5477 updata = destutil.destupdate(repo)
5479 checkout, movemarkfrom, brev = updata
5478 checkout, movemarkfrom, brev = updata
5480 ret = hg.update(repo, checkout)
5479 ret = hg.update(repo, checkout)
5481 except error.UpdateAbort as inst:
5480 except error.UpdateAbort as inst:
5482 msg = _("not updating: %s") % str(inst)
5481 msg = _("not updating: %s") % str(inst)
5483 hint = inst.hint
5482 hint = inst.hint
5484 raise error.UpdateAbort(msg, hint=hint)
5483 raise error.UpdateAbort(msg, hint=hint)
5485 if not ret and not checkout:
5484 if not ret and not checkout:
5486 if bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
5485 if bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
5487 ui.status(_("updating bookmark %s\n") % repo._activebookmark)
5486 ui.status(_("updating bookmark %s\n") % repo._activebookmark)
5488 return ret
5487 return ret
5489 if modheads > 1:
5488 if modheads > 1:
5490 currentbranchheads = len(repo.branchheads())
5489 currentbranchheads = len(repo.branchheads())
5491 if currentbranchheads == modheads:
5490 if currentbranchheads == modheads:
5492 ui.status(_("(run 'hg heads' to see heads, 'hg merge' to merge)\n"))
5491 ui.status(_("(run 'hg heads' to see heads, 'hg merge' to merge)\n"))
5493 elif currentbranchheads > 1:
5492 elif currentbranchheads > 1:
5494 ui.status(_("(run 'hg heads .' to see heads, 'hg merge' to "
5493 ui.status(_("(run 'hg heads .' to see heads, 'hg merge' to "
5495 "merge)\n"))
5494 "merge)\n"))
5496 else:
5495 else:
5497 ui.status(_("(run 'hg heads' to see heads)\n"))
5496 ui.status(_("(run 'hg heads' to see heads)\n"))
5498 else:
5497 else:
5499 ui.status(_("(run 'hg update' to get a working copy)\n"))
5498 ui.status(_("(run 'hg update' to get a working copy)\n"))
5500
5499
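# A condensed sketch of how the post-pull hint above is chosen, assuming
# modheads is the number of heads added by the incoming changesets and
# branchheads is the current head count on the active branch (hypothetical
# helper, shown only to illustrate the branching):
def postpull_hint(modheads, branchheads):
    if modheads == 0:
        return None                       # nothing pulled, print nothing
    if modheads == 1:
        return "(run 'hg update' to get a working copy)"
    if branchheads == modheads:
        return "(run 'hg heads' to see heads, 'hg merge' to merge)"
    if branchheads > 1:
        return "(run 'hg heads .' to see heads, 'hg merge' to merge)"
    return "(run 'hg heads' to see heads)"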
5501 @command('^pull',
5500 @command('^pull',
5502 [('u', 'update', None,
5501 [('u', 'update', None,
5503 _('update to new branch head if changesets were pulled')),
5502 _('update to new branch head if changesets were pulled')),
5504 ('f', 'force', None, _('run even when remote repository is unrelated')),
5503 ('f', 'force', None, _('run even when remote repository is unrelated')),
5505 ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
5504 ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
5506 ('B', 'bookmark', [], _("bookmark to pull"), _('BOOKMARK')),
5505 ('B', 'bookmark', [], _("bookmark to pull"), _('BOOKMARK')),
5507 ('b', 'branch', [], _('a specific branch you would like to pull'),
5506 ('b', 'branch', [], _('a specific branch you would like to pull'),
5508 _('BRANCH')),
5507 _('BRANCH')),
5509 ] + remoteopts,
5508 ] + remoteopts,
5510 _('[-u] [-f] [-r REV]... [-e CMD] [--remotecmd CMD] [SOURCE]'))
5509 _('[-u] [-f] [-r REV]... [-e CMD] [--remotecmd CMD] [SOURCE]'))
5511 def pull(ui, repo, source="default", **opts):
5510 def pull(ui, repo, source="default", **opts):
5512 """pull changes from the specified source
5511 """pull changes from the specified source
5513
5512
5514 Pull changes from a remote repository to a local one.
5513 Pull changes from a remote repository to a local one.
5515
5514
5516 This finds all changes from the repository at the specified path
5515 This finds all changes from the repository at the specified path
5517 or URL and adds them to a local repository (the current one unless
5516 or URL and adds them to a local repository (the current one unless
5518 -R is specified). By default, this does not update the copy of the
5517 -R is specified). By default, this does not update the copy of the
5519 project in the working directory.
5518 project in the working directory.
5520
5519
5521 Use :hg:`incoming` if you want to see what would have been added
5520 Use :hg:`incoming` if you want to see what would have been added
5522 by a pull at the time you issued this command. If you then decide
5521 by a pull at the time you issued this command. If you then decide
5523 to add those changes to the repository, you should use :hg:`pull
5522 to add those changes to the repository, you should use :hg:`pull
5524 -r X` where ``X`` is the last changeset listed by :hg:`incoming`.
5523 -r X` where ``X`` is the last changeset listed by :hg:`incoming`.
5525
5524
5526 If SOURCE is omitted, the 'default' path will be used.
5525 If SOURCE is omitted, the 'default' path will be used.
5527 See :hg:`help urls` for more information.
5526 See :hg:`help urls` for more information.
5528
5527
5529 Returns 0 on success, 1 if an update had unresolved files.
5528 Returns 0 on success, 1 if an update had unresolved files.
5530 """
5529 """
5531 source, branches = hg.parseurl(ui.expandpath(source), opts.get('branch'))
5530 source, branches = hg.parseurl(ui.expandpath(source), opts.get('branch'))
5532 ui.status(_('pulling from %s\n') % util.hidepassword(source))
5531 ui.status(_('pulling from %s\n') % util.hidepassword(source))
5533 other = hg.peer(repo, opts, source)
5532 other = hg.peer(repo, opts, source)
5534 try:
5533 try:
5535 revs, checkout = hg.addbranchrevs(repo, other, branches,
5534 revs, checkout = hg.addbranchrevs(repo, other, branches,
5536 opts.get('rev'))
5535 opts.get('rev'))
5537
5536
5538
5537
5539 pullopargs = {}
5538 pullopargs = {}
5540 if opts.get('bookmark'):
5539 if opts.get('bookmark'):
5541 if not revs:
5540 if not revs:
5542 revs = []
5541 revs = []
5543 # The list of bookmarks used here is not the one used to actually
5542 # The list of bookmarks used here is not the one used to actually
5544 # update the bookmark name. This can result in the revision pulled
5543 # update the bookmark name. This can result in the revision pulled
5545 # not ending up with the name of the bookmark because of a race
5544 # not ending up with the name of the bookmark because of a race
5546 # condition on the server. (See issue 4689 for details)
5545 # condition on the server. (See issue 4689 for details)
5547 remotebookmarks = other.listkeys('bookmarks')
5546 remotebookmarks = other.listkeys('bookmarks')
5548 pullopargs['remotebookmarks'] = remotebookmarks
5547 pullopargs['remotebookmarks'] = remotebookmarks
5549 for b in opts['bookmark']:
5548 for b in opts['bookmark']:
5550 if b not in remotebookmarks:
5549 if b not in remotebookmarks:
5551 raise error.Abort(_('remote bookmark %s not found!') % b)
5550 raise error.Abort(_('remote bookmark %s not found!') % b)
5552 revs.append(remotebookmarks[b])
5551 revs.append(remotebookmarks[b])
5553
5552
5554 if revs:
5553 if revs:
5555 try:
5554 try:
5556 # When 'rev' is a bookmark name, we cannot guarantee that it
5555 # When 'rev' is a bookmark name, we cannot guarantee that it
5557 # will be updated with that name because of a race condition
5556 # will be updated with that name because of a race condition
5558 # server side. (See issue 4689 for details)
5557 # server side. (See issue 4689 for details)
5559 oldrevs = revs
5558 oldrevs = revs
5560 revs = [] # actually, nodes
5559 revs = [] # actually, nodes
5561 for r in oldrevs:
5560 for r in oldrevs:
5562 node = other.lookup(r)
5561 node = other.lookup(r)
5563 revs.append(node)
5562 revs.append(node)
5564 if r == checkout:
5563 if r == checkout:
5565 checkout = node
5564 checkout = node
5566 except error.CapabilityError:
5565 except error.CapabilityError:
5567 err = _("other repository doesn't support revision lookup, "
5566 err = _("other repository doesn't support revision lookup, "
5568 "so a rev cannot be specified.")
5567 "so a rev cannot be specified.")
5569 raise error.Abort(err)
5568 raise error.Abort(err)
5570
5569
5571 pullopargs.update(opts.get('opargs', {}))
5570 pullopargs.update(opts.get('opargs', {}))
5572 modheads = exchange.pull(repo, other, heads=revs,
5571 modheads = exchange.pull(repo, other, heads=revs,
5573 force=opts.get('force'),
5572 force=opts.get('force'),
5574 bookmarks=opts.get('bookmark', ()),
5573 bookmarks=opts.get('bookmark', ()),
5575 opargs=pullopargs).cgresult
5574 opargs=pullopargs).cgresult
5576 if checkout:
5575 if checkout:
5577 checkout = str(repo.changelog.rev(checkout))
5576 checkout = str(repo.changelog.rev(checkout))
5578 repo._subtoppath = source
5577 repo._subtoppath = source
5579 try:
5578 try:
5580 ret = postincoming(ui, repo, modheads, opts.get('update'), checkout)
5579 ret = postincoming(ui, repo, modheads, opts.get('update'), checkout)
5581
5580
5582 finally:
5581 finally:
5583 del repo._subtoppath
5582 del repo._subtoppath
5584
5583
5585 finally:
5584 finally:
5586 other.close()
5585 other.close()
5587 return ret
5586 return ret
5588
5587
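# A minimal sketch of the -B/--bookmark resolution above, assuming
# remotebookmarks is the name -> node mapping returned by
# other.listkeys('bookmarks'); unknown names abort, known ones become
# explicit heads to pull (hypothetical helper, plain exceptions used here):
def bookmark_heads(remotebookmarks, requested):
    heads = []
    for name in requested:
        if name not in remotebookmarks:
            raise LookupError('remote bookmark %s not found!' % name)
        heads.append(remotebookmarks[name])
    return heads

# e.g. bookmark_heads({'stable': 'a1b2c3'}, ['stable']) -> ['a1b2c3']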
5589 @command('^push',
5588 @command('^push',
5590 [('f', 'force', None, _('force push')),
5589 [('f', 'force', None, _('force push')),
5591 ('r', 'rev', [],
5590 ('r', 'rev', [],
5592 _('a changeset intended to be included in the destination'),
5591 _('a changeset intended to be included in the destination'),
5593 _('REV')),
5592 _('REV')),
5594 ('B', 'bookmark', [], _("bookmark to push"), _('BOOKMARK')),
5593 ('B', 'bookmark', [], _("bookmark to push"), _('BOOKMARK')),
5595 ('b', 'branch', [],
5594 ('b', 'branch', [],
5596 _('a specific branch you would like to push'), _('BRANCH')),
5595 _('a specific branch you would like to push'), _('BRANCH')),
5597 ('', 'new-branch', False, _('allow pushing a new branch')),
5596 ('', 'new-branch', False, _('allow pushing a new branch')),
5598 ] + remoteopts,
5597 ] + remoteopts,
5599 _('[-f] [-r REV]... [-e CMD] [--remotecmd CMD] [DEST]'))
5598 _('[-f] [-r REV]... [-e CMD] [--remotecmd CMD] [DEST]'))
5600 def push(ui, repo, dest=None, **opts):
5599 def push(ui, repo, dest=None, **opts):
5601 """push changes to the specified destination
5600 """push changes to the specified destination
5602
5601
5603 Push changesets from the local repository to the specified
5602 Push changesets from the local repository to the specified
5604 destination.
5603 destination.
5605
5604
5606 This operation is symmetrical to pull: it is identical to a pull
5605 This operation is symmetrical to pull: it is identical to a pull
5607 in the destination repository from the current one.
5606 in the destination repository from the current one.
5608
5607
5609 By default, push will not allow creation of new heads at the
5608 By default, push will not allow creation of new heads at the
5610 destination, since multiple heads would make it unclear which head
5609 destination, since multiple heads would make it unclear which head
5611 to use. In this situation, it is recommended to pull and merge
5610 to use. In this situation, it is recommended to pull and merge
5612 before pushing.
5611 before pushing.
5613
5612
5614 Use --new-branch if you want to allow push to create a new named
5613 Use --new-branch if you want to allow push to create a new named
5615 branch that is not present at the destination. This allows you to
5614 branch that is not present at the destination. This allows you to
5616 only create a new branch without forcing other changes.
5615 only create a new branch without forcing other changes.
5617
5616
5618 .. note::
5617 .. note::
5619
5618
5620 Extra care should be taken with the -f/--force option,
5619 Extra care should be taken with the -f/--force option,
5621 which will push all new heads on all branches, an action which will
5620 which will push all new heads on all branches, an action which will
5622 almost always cause confusion for collaborators.
5621 almost always cause confusion for collaborators.
5623
5622
5624 If -r/--rev is used, the specified revision and all its ancestors
5623 If -r/--rev is used, the specified revision and all its ancestors
5625 will be pushed to the remote repository.
5624 will be pushed to the remote repository.
5626
5625
5627 If -B/--bookmark is used, the specified bookmarked revision, its
5626 If -B/--bookmark is used, the specified bookmarked revision, its
5628 ancestors, and the bookmark will be pushed to the remote
5627 ancestors, and the bookmark will be pushed to the remote
5629 repository.
5628 repository.
5630
5629
5631 Please see :hg:`help urls` for important details about ``ssh://``
5630 Please see :hg:`help urls` for important details about ``ssh://``
5632 URLs. If DESTINATION is omitted, a default path will be used.
5631 URLs. If DESTINATION is omitted, a default path will be used.
5633
5632
5634 Returns 0 if push was successful, 1 if nothing to push.
5633 Returns 0 if push was successful, 1 if nothing to push.
5635 """
5634 """
5636
5635
5637 if opts.get('bookmark'):
5636 if opts.get('bookmark'):
5638 ui.setconfig('bookmarks', 'pushing', opts['bookmark'], 'push')
5637 ui.setconfig('bookmarks', 'pushing', opts['bookmark'], 'push')
5639 for b in opts['bookmark']:
5638 for b in opts['bookmark']:
5640 # translate -B options to -r so changesets get pushed
5639 # translate -B options to -r so changesets get pushed
5641 if b in repo._bookmarks:
5640 if b in repo._bookmarks:
5642 opts.setdefault('rev', []).append(b)
5641 opts.setdefault('rev', []).append(b)
5643 else:
5642 else:
5644 # if we try to push a deleted bookmark, translate it to null
5643 # if we try to push a deleted bookmark, translate it to null
5645 # this lets simultaneous -r, -b options continue working
5644 # this lets simultaneous -r, -b options continue working
5646 opts.setdefault('rev', []).append("null")
5645 opts.setdefault('rev', []).append("null")
5647
5646
5648 path = ui.paths.getpath(dest, default='default')
5647 path = ui.paths.getpath(dest, default='default')
5649 if not path:
5648 if not path:
5650 raise error.Abort(_('default repository not configured!'),
5649 raise error.Abort(_('default repository not configured!'),
5651 hint=_('see the "path" section in "hg help config"'))
5650 hint=_('see the "path" section in "hg help config"'))
5652 dest = path.pushloc or path.loc
5651 dest = path.pushloc or path.loc
5653 branches = (path.branch, opts.get('branch') or [])
5652 branches = (path.branch, opts.get('branch') or [])
5654 ui.status(_('pushing to %s\n') % util.hidepassword(dest))
5653 ui.status(_('pushing to %s\n') % util.hidepassword(dest))
5655 revs, checkout = hg.addbranchrevs(repo, repo, branches, opts.get('rev'))
5654 revs, checkout = hg.addbranchrevs(repo, repo, branches, opts.get('rev'))
5656 other = hg.peer(repo, opts, dest)
5655 other = hg.peer(repo, opts, dest)
5657
5656
5658 if revs:
5657 if revs:
5659 revs = [repo.lookup(r) for r in scmutil.revrange(repo, revs)]
5658 revs = [repo.lookup(r) for r in scmutil.revrange(repo, revs)]
5660 if not revs:
5659 if not revs:
5661 raise error.Abort(_("specified revisions evaluate to an empty set"),
5660 raise error.Abort(_("specified revisions evaluate to an empty set"),
5662 hint=_("use different revision arguments"))
5661 hint=_("use different revision arguments"))
5663
5662
5664 repo._subtoppath = dest
5663 repo._subtoppath = dest
5665 try:
5664 try:
5666 # push subrepos depth-first for coherent ordering
5665 # push subrepos depth-first for coherent ordering
5667 c = repo['']
5666 c = repo['']
5668 subs = c.substate # only repos that are committed
5667 subs = c.substate # only repos that are committed
5669 for s in sorted(subs):
5668 for s in sorted(subs):
5670 result = c.sub(s).push(opts)
5669 result = c.sub(s).push(opts)
5671 if result == 0:
5670 if result == 0:
5672 return not result
5671 return not result
5673 finally:
5672 finally:
5674 del repo._subtoppath
5673 del repo._subtoppath
5675 pushop = exchange.push(repo, other, opts.get('force'), revs=revs,
5674 pushop = exchange.push(repo, other, opts.get('force'), revs=revs,
5676 newbranch=opts.get('new_branch'),
5675 newbranch=opts.get('new_branch'),
5677 bookmarks=opts.get('bookmark', ()),
5676 bookmarks=opts.get('bookmark', ()),
5678 opargs=opts.get('opargs'))
5677 opargs=opts.get('opargs'))
5679
5678
5680 result = not pushop.cgresult
5679 result = not pushop.cgresult
5681
5680
5682 if pushop.bkresult is not None:
5681 if pushop.bkresult is not None:
5683 if pushop.bkresult == 2:
5682 if pushop.bkresult == 2:
5684 result = 2
5683 result = 2
5685 elif not result and pushop.bkresult:
5684 elif not result and pushop.bkresult:
5686 result = 2
5685 result = 2
5687
5686
5688 return result
5687 return result
5689
5688
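# A small sketch of how the push return code above is combined, assuming
# cgresult is the changegroup push outcome (truthy on success) and bkresult
# is the bookmark push outcome (None when no bookmarks were pushed):
def push_returncode(cgresult, bkresult=None):
    result = 0 if cgresult else 1         # 1: nothing pushed / push failed
    if bkresult is not None:
        if bkresult == 2 or (result == 0 and bkresult):
            result = 2                    # bookmark push went wrong
    return result

# e.g. push_returncode(cgresult=1, bkresult=2) -> 2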
5690 @command('recover', [])
5689 @command('recover', [])
5691 def recover(ui, repo):
5690 def recover(ui, repo):
5692 """roll back an interrupted transaction
5691 """roll back an interrupted transaction
5693
5692
5694 Recover from an interrupted commit or pull.
5693 Recover from an interrupted commit or pull.
5695
5694
5696 This command tries to fix the repository status after an
5695 This command tries to fix the repository status after an
5697 interrupted operation. It should only be necessary when Mercurial
5696 interrupted operation. It should only be necessary when Mercurial
5698 suggests it.
5697 suggests it.
5699
5698
5700 Returns 0 if successful, 1 if nothing to recover or verify fails.
5699 Returns 0 if successful, 1 if nothing to recover or verify fails.
5701 """
5700 """
5702 if repo.recover():
5701 if repo.recover():
5703 return hg.verify(repo)
5702 return hg.verify(repo)
5704 return 1
5703 return 1
5705
5704
5706 @command('^remove|rm',
5705 @command('^remove|rm',
5707 [('A', 'after', None, _('record delete for missing files')),
5706 [('A', 'after', None, _('record delete for missing files')),
5708 ('f', 'force', None,
5707 ('f', 'force', None,
5709 _('remove (and delete) file even if added or modified')),
5708 _('remove (and delete) file even if added or modified')),
5710 ] + subrepoopts + walkopts,
5709 ] + subrepoopts + walkopts,
5711 _('[OPTION]... FILE...'),
5710 _('[OPTION]... FILE...'),
5712 inferrepo=True)
5711 inferrepo=True)
5713 def remove(ui, repo, *pats, **opts):
5712 def remove(ui, repo, *pats, **opts):
5714 """remove the specified files on the next commit
5713 """remove the specified files on the next commit
5715
5714
5716 Schedule the indicated files for removal from the current branch.
5715 Schedule the indicated files for removal from the current branch.
5717
5716
5718 This command schedules the files to be removed at the next commit.
5717 This command schedules the files to be removed at the next commit.
5719 To undo a remove before that, see :hg:`revert`. To undo added
5718 To undo a remove before that, see :hg:`revert`. To undo added
5720 files, see :hg:`forget`.
5719 files, see :hg:`forget`.
5721
5720
5722 .. container:: verbose
5721 .. container:: verbose
5723
5722
5724 -A/--after can be used to remove only files that have already
5723 -A/--after can be used to remove only files that have already
5725 been deleted, -f/--force can be used to force deletion, and -Af
5724 been deleted, -f/--force can be used to force deletion, and -Af
5726 can be used to remove files from the next revision without
5725 can be used to remove files from the next revision without
5727 deleting them from the working directory.
5726 deleting them from the working directory.
5728
5727
5729 The following table details the behavior of remove for different
5728 The following table details the behavior of remove for different
5730 file states (columns) and option combinations (rows). The file
5729 file states (columns) and option combinations (rows). The file
5731 states are Added [A], Clean [C], Modified [M] and Missing [!]
5730 states are Added [A], Clean [C], Modified [M] and Missing [!]
5732 (as reported by :hg:`status`). The actions are Warn, Remove
5731 (as reported by :hg:`status`). The actions are Warn, Remove
5733 (from branch) and Delete (from disk):
5732 (from branch) and Delete (from disk):
5734
5733
5735 ========= == == == ==
5734 ========= == == == ==
5736 opt/state A C M !
5735 opt/state A C M !
5737 ========= == == == ==
5736 ========= == == == ==
5738 none W RD W R
5737 none W RD W R
5739 -f R RD RD R
5738 -f R RD RD R
5740 -A W W W R
5739 -A W W W R
5741 -Af R R R R
5740 -Af R R R R
5742 ========= == == == ==
5741 ========= == == == ==
5743
5742
5744 Note that remove never deletes files in Added [A] state from the
5743 Note that remove never deletes files in Added [A] state from the
5745 working directory, not even if option --force is specified.
5744 working directory, not even if option --force is specified.
5746
5745
5747 Returns 0 on success, 1 if any warnings encountered.
5746 Returns 0 on success, 1 if any warnings encountered.
5748 """
5747 """
5749
5748
5750 after, force = opts.get('after'), opts.get('force')
5749 after, force = opts.get('after'), opts.get('force')
5751 if not pats and not after:
5750 if not pats and not after:
5752 raise error.Abort(_('no files specified'))
5751 raise error.Abort(_('no files specified'))
5753
5752
5754 m = scmutil.match(repo[None], pats, opts)
5753 m = scmutil.match(repo[None], pats, opts)
5755 subrepos = opts.get('subrepos')
5754 subrepos = opts.get('subrepos')
5756 return cmdutil.remove(ui, repo, m, "", after, force, subrepos)
5755 return cmdutil.remove(ui, repo, m, "", after, force, subrepos)
5757
5756
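# The option/state table from the help text above, re-expressed as data so
# the mapping is easy to check programmatically. A minimal sketch, assuming
# 'W' = warn, 'R' = remove from branch, 'D' = delete from disk, keyed by the
# (-A/--after, -f/--force) flags and the hg status code of the file:
REMOVE_ACTIONS = {
    # (after, force): {Added, Clean, Modified, Missing}
    (False, False): {'A': 'W', 'C': 'RD', 'M': 'W', '!': 'R'},
    (False, True):  {'A': 'R', 'C': 'RD', 'M': 'RD', '!': 'R'},
    (True, False):  {'A': 'W', 'C': 'W', 'M': 'W', '!': 'R'},
    (True, True):   {'A': 'R', 'C': 'R', 'M': 'R', '!': 'R'},
}

def remove_action(after, force, state):
    """e.g. remove_action(False, True, 'M') -> 'RD' (remove and delete)."""
    return REMOVE_ACTIONS[(bool(after), bool(force))][state]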
5758 @command('rename|move|mv',
5757 @command('rename|move|mv',
5759 [('A', 'after', None, _('record a rename that has already occurred')),
5758 [('A', 'after', None, _('record a rename that has already occurred')),
5760 ('f', 'force', None, _('forcibly copy over an existing managed file')),
5759 ('f', 'force', None, _('forcibly copy over an existing managed file')),
5761 ] + walkopts + dryrunopts,
5760 ] + walkopts + dryrunopts,
5762 _('[OPTION]... SOURCE... DEST'))
5761 _('[OPTION]... SOURCE... DEST'))
5763 def rename(ui, repo, *pats, **opts):
5762 def rename(ui, repo, *pats, **opts):
5764 """rename files; equivalent of copy + remove
5763 """rename files; equivalent of copy + remove
5765
5764
5766 Mark dest as copies of sources; mark sources for deletion. If dest
5765 Mark dest as copies of sources; mark sources for deletion. If dest
5767 is a directory, copies are put in that directory. If dest is a
5766 is a directory, copies are put in that directory. If dest is a
5768 file, there can only be one source.
5767 file, there can only be one source.
5769
5768
5770 By default, this command copies the contents of files as they
5769 By default, this command copies the contents of files as they
5771 exist in the working directory. If invoked with -A/--after, the
5770 exist in the working directory. If invoked with -A/--after, the
5772 operation is recorded, but no copying is performed.
5771 operation is recorded, but no copying is performed.
5773
5772
5774 This command takes effect at the next commit. To undo a rename
5773 This command takes effect at the next commit. To undo a rename
5775 before that, see :hg:`revert`.
5774 before that, see :hg:`revert`.
5776
5775
5777 Returns 0 on success, 1 if errors are encountered.
5776 Returns 0 on success, 1 if errors are encountered.
5778 """
5777 """
5779 wlock = repo.wlock(False)
5778 wlock = repo.wlock(False)
5780 try:
5779 try:
5781 return cmdutil.copy(ui, repo, pats, opts, rename=True)
5780 return cmdutil.copy(ui, repo, pats, opts, rename=True)
5782 finally:
5781 finally:
5783 wlock.release()
5782 wlock.release()
5784
5783
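# The wlock / try / finally pattern above recurs throughout this module; it
# is equivalent to a tiny context manager. A sketch, assuming the lock object
# only needs a release() method (hypothetical wrapper, not a Mercurial API):
import contextlib

@contextlib.contextmanager
def released(lock):
    try:
        yield lock
    finally:
        lock.release()

# usage sketch: with released(repo.wlock(False)): ... do the rename ...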
5785 @command('resolve',
5784 @command('resolve',
5786 [('a', 'all', None, _('select all unresolved files')),
5785 [('a', 'all', None, _('select all unresolved files')),
5787 ('l', 'list', None, _('list state of files needing merge')),
5786 ('l', 'list', None, _('list state of files needing merge')),
5788 ('m', 'mark', None, _('mark files as resolved')),
5787 ('m', 'mark', None, _('mark files as resolved')),
5789 ('u', 'unmark', None, _('mark files as unresolved')),
5788 ('u', 'unmark', None, _('mark files as unresolved')),
5790 ('n', 'no-status', None, _('hide status prefix'))]
5789 ('n', 'no-status', None, _('hide status prefix'))]
5791 + mergetoolopts + walkopts + formatteropts,
5790 + mergetoolopts + walkopts + formatteropts,
5792 _('[OPTION]... [FILE]...'),
5791 _('[OPTION]... [FILE]...'),
5793 inferrepo=True)
5792 inferrepo=True)
5794 def resolve(ui, repo, *pats, **opts):
5793 def resolve(ui, repo, *pats, **opts):
5795 """redo merges or set/view the merge status of files
5794 """redo merges or set/view the merge status of files
5796
5795
5797 Merges with unresolved conflicts are often the result of
5796 Merges with unresolved conflicts are often the result of
5798 non-interactive merging using the ``internal:merge`` configuration
5797 non-interactive merging using the ``internal:merge`` configuration
5799 setting, or a command-line merge tool like ``diff3``. The resolve
5798 setting, or a command-line merge tool like ``diff3``. The resolve
5800 command is used to manage the files involved in a merge, after
5799 command is used to manage the files involved in a merge, after
5801 :hg:`merge` has been run, and before :hg:`commit` is run (i.e. the
5800 :hg:`merge` has been run, and before :hg:`commit` is run (i.e. the
5802 working directory must have two parents). See :hg:`help
5801 working directory must have two parents). See :hg:`help
5803 merge-tools` for information on configuring merge tools.
5802 merge-tools` for information on configuring merge tools.
5804
5803
5805 The resolve command can be used in the following ways:
5804 The resolve command can be used in the following ways:
5806
5805
5807 - :hg:`resolve [--tool TOOL] FILE...`: attempt to re-merge the specified
5806 - :hg:`resolve [--tool TOOL] FILE...`: attempt to re-merge the specified
5808 files, discarding any previous merge attempts. Re-merging is not
5807 files, discarding any previous merge attempts. Re-merging is not
5809 performed for files already marked as resolved. Use ``--all/-a``
5808 performed for files already marked as resolved. Use ``--all/-a``
5810 to select all unresolved files. ``--tool`` can be used to specify
5809 to select all unresolved files. ``--tool`` can be used to specify
5811 the merge tool used for the given files. It overrides the HGMERGE
5810 the merge tool used for the given files. It overrides the HGMERGE
5812 environment variable and your configuration files. Previous file
5811 environment variable and your configuration files. Previous file
5813 contents are saved with a ``.orig`` suffix.
5812 contents are saved with a ``.orig`` suffix.
5814
5813
5815 - :hg:`resolve -m [FILE]`: mark a file as having been resolved
5814 - :hg:`resolve -m [FILE]`: mark a file as having been resolved
5816 (e.g. after having manually fixed-up the files). The default is
5815 (e.g. after having manually fixed-up the files). The default is
5817 to mark all unresolved files.
5816 to mark all unresolved files.
5818
5817
5819 - :hg:`resolve -u [FILE]...`: mark a file as unresolved. The
5818 - :hg:`resolve -u [FILE]...`: mark a file as unresolved. The
5820 default is to mark all resolved files.
5819 default is to mark all resolved files.
5821
5820
5822 - :hg:`resolve -l`: list files which had or still have conflicts.
5821 - :hg:`resolve -l`: list files which had or still have conflicts.
5823 In the printed list, ``U`` = unresolved and ``R`` = resolved.
5822 In the printed list, ``U`` = unresolved and ``R`` = resolved.
5824
5823
5825 Note that Mercurial will not let you commit files with unresolved
5824 Note that Mercurial will not let you commit files with unresolved
5826 merge conflicts. You must use :hg:`resolve -m ...` before you can
5825 merge conflicts. You must use :hg:`resolve -m ...` before you can
5827 commit after a conflicting merge.
5826 commit after a conflicting merge.
5828
5827
5829 Returns 0 on success, 1 if any files fail a resolve attempt.
5828 Returns 0 on success, 1 if any files fail a resolve attempt.
5830 """
5829 """
5831
5830
5832 all, mark, unmark, show, nostatus = \
5831 all, mark, unmark, show, nostatus = \
5833 [opts.get(o) for o in 'all mark unmark list no_status'.split()]
5832 [opts.get(o) for o in 'all mark unmark list no_status'.split()]
5834
5833
5835 if (show and (mark or unmark)) or (mark and unmark):
5834 if (show and (mark or unmark)) or (mark and unmark):
5836 raise error.Abort(_("too many options specified"))
5835 raise error.Abort(_("too many options specified"))
5837 if pats and all:
5836 if pats and all:
5838 raise error.Abort(_("can't specify --all and patterns"))
5837 raise error.Abort(_("can't specify --all and patterns"))
5839 if not (all or pats or show or mark or unmark):
5838 if not (all or pats or show or mark or unmark):
5840 raise error.Abort(_('no files or directories specified'),
5839 raise error.Abort(_('no files or directories specified'),
5841 hint=('use --all to re-merge all unresolved files'))
5840 hint=('use --all to re-merge all unresolved files'))
5842
5841
5843 if show:
5842 if show:
5844 fm = ui.formatter('resolve', opts)
5843 fm = ui.formatter('resolve', opts)
5845 ms = mergemod.mergestate.read(repo)
5844 ms = mergemod.mergestate.read(repo)
5846 m = scmutil.match(repo[None], pats, opts)
5845 m = scmutil.match(repo[None], pats, opts)
5847 for f in ms:
5846 for f in ms:
5848 if not m(f):
5847 if not m(f):
5849 continue
5848 continue
5850 l = 'resolve.' + {'u': 'unresolved', 'r': 'resolved',
5849 l = 'resolve.' + {'u': 'unresolved', 'r': 'resolved',
5851 'd': 'driverresolved'}[ms[f]]
5850 'd': 'driverresolved'}[ms[f]]
5852 fm.startitem()
5851 fm.startitem()
5853 fm.condwrite(not nostatus, 'status', '%s ', ms[f].upper(), label=l)
5852 fm.condwrite(not nostatus, 'status', '%s ', ms[f].upper(), label=l)
5854 fm.write('path', '%s\n', f, label=l)
5853 fm.write('path', '%s\n', f, label=l)
5855 fm.end()
5854 fm.end()
5856 return 0
5855 return 0
5857
5856
5858 wlock = repo.wlock()
5857 wlock = repo.wlock()
5859 try:
5858 try:
5860 ms = mergemod.mergestate.read(repo)
5859 ms = mergemod.mergestate.read(repo)
5861
5860
5862 if not (ms.active() or repo.dirstate.p2() != nullid):
5861 if not (ms.active() or repo.dirstate.p2() != nullid):
5863 raise error.Abort(
5862 raise error.Abort(
5864 _('resolve command not applicable when not merging'))
5863 _('resolve command not applicable when not merging'))
5865
5864
5866 wctx = repo[None]
5865 wctx = repo[None]
5867
5866
5868 if ms.mergedriver and ms.mdstate() == 'u':
5867 if ms.mergedriver and ms.mdstate() == 'u':
5869 proceed = mergemod.driverpreprocess(repo, ms, wctx)
5868 proceed = mergemod.driverpreprocess(repo, ms, wctx)
5870 ms.commit()
5869 ms.commit()
5871 # allow mark and unmark to go through
5870 # allow mark and unmark to go through
5872 if not mark and not unmark and not proceed:
5871 if not mark and not unmark and not proceed:
5873 return 1
5872 return 1
5874
5873
5875 m = scmutil.match(wctx, pats, opts)
5874 m = scmutil.match(wctx, pats, opts)
5876 ret = 0
5875 ret = 0
5877 didwork = False
5876 didwork = False
5878 runconclude = False
5877 runconclude = False
5879
5878
5880 tocomplete = []
5879 tocomplete = []
5881 for f in ms:
5880 for f in ms:
5882 if not m(f):
5881 if not m(f):
5883 continue
5882 continue
5884
5883
5885 didwork = True
5884 didwork = True
5886
5885
5887 # don't let driver-resolved files be marked, and run the conclude
5886 # don't let driver-resolved files be marked, and run the conclude
5888 # step if asked to resolve
5887 # step if asked to resolve
5889 if ms[f] == "d":
5888 if ms[f] == "d":
5890 exact = m.exact(f)
5889 exact = m.exact(f)
5891 if mark:
5890 if mark:
5892 if exact:
5891 if exact:
5893 ui.warn(_('not marking %s as it is driver-resolved\n')
5892 ui.warn(_('not marking %s as it is driver-resolved\n')
5894 % f)
5893 % f)
5895 elif unmark:
5894 elif unmark:
5896 if exact:
5895 if exact:
5897 ui.warn(_('not unmarking %s as it is driver-resolved\n')
5896 ui.warn(_('not unmarking %s as it is driver-resolved\n')
5898 % f)
5897 % f)
5899 else:
5898 else:
5900 runconclude = True
5899 runconclude = True
5901 continue
5900 continue
5902
5901
5903 if mark:
5902 if mark:
5904 ms.mark(f, "r")
5903 ms.mark(f, "r")
5905 elif unmark:
5904 elif unmark:
5906 ms.mark(f, "u")
5905 ms.mark(f, "u")
5907 else:
5906 else:
5908 # backup pre-resolve (merge uses .orig for its own purposes)
5907 # backup pre-resolve (merge uses .orig for its own purposes)
5909 a = repo.wjoin(f)
5908 a = repo.wjoin(f)
5910 try:
5909 try:
5911 util.copyfile(a, a + ".resolve")
5910 util.copyfile(a, a + ".resolve")
5912 except (IOError, OSError) as inst:
5911 except (IOError, OSError) as inst:
5913 if inst.errno != errno.ENOENT:
5912 if inst.errno != errno.ENOENT:
5914 raise
5913 raise
5915
5914
5916 try:
5915 try:
5917 # preresolve file
5916 # preresolve file
5918 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
5917 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
5919 'resolve')
5918 'resolve')
5920 complete, r = ms.preresolve(f, wctx)
5919 complete, r = ms.preresolve(f, wctx)
5921 if not complete:
5920 if not complete:
5922 tocomplete.append(f)
5921 tocomplete.append(f)
5923 elif r:
5922 elif r:
5924 ret = 1
5923 ret = 1
5925 finally:
5924 finally:
5926 ui.setconfig('ui', 'forcemerge', '', 'resolve')
5925 ui.setconfig('ui', 'forcemerge', '', 'resolve')
5927 ms.commit()
5926 ms.commit()
5928
5927
5929 # replace filemerge's .orig file with our resolve file, but only
5928 # replace filemerge's .orig file with our resolve file, but only
5930 # for merges that are complete
5929 # for merges that are complete
5931 if complete:
5930 if complete:
5932 try:
5931 try:
5933 util.rename(a + ".resolve",
5932 util.rename(a + ".resolve",
5934 cmdutil.origpath(ui, repo, a))
5933 cmdutil.origpath(ui, repo, a))
5935 except OSError as inst:
5934 except OSError as inst:
5936 if inst.errno != errno.ENOENT:
5935 if inst.errno != errno.ENOENT:
5937 raise
5936 raise
5938
5937
5939 for f in tocomplete:
5938 for f in tocomplete:
5940 try:
5939 try:
5941 # resolve file
5940 # resolve file
5942 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
5941 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
5943 'resolve')
5942 'resolve')
5944 r = ms.resolve(f, wctx)
5943 r = ms.resolve(f, wctx)
5945 if r:
5944 if r:
5946 ret = 1
5945 ret = 1
5947 finally:
5946 finally:
5948 ui.setconfig('ui', 'forcemerge', '', 'resolve')
5947 ui.setconfig('ui', 'forcemerge', '', 'resolve')
5949 ms.commit()
5948 ms.commit()
5950
5949
5951 # replace filemerge's .orig file with our resolve file
5950 # replace filemerge's .orig file with our resolve file
5952 a = repo.wjoin(f)
5951 a = repo.wjoin(f)
5953 try:
5952 try:
5954 util.rename(a + ".resolve", cmdutil.origpath(ui, repo, a))
5953 util.rename(a + ".resolve", cmdutil.origpath(ui, repo, a))
5955 except OSError as inst:
5954 except OSError as inst:
5956 if inst.errno != errno.ENOENT:
5955 if inst.errno != errno.ENOENT:
5957 raise
5956 raise
5958
5957
5959 ms.commit()
5958 ms.commit()
5960 ms.recordactions()
5959 ms.recordactions()
5961
5960
5962 if not didwork and pats:
5961 if not didwork and pats:
5963 ui.warn(_("arguments do not match paths that need resolving\n"))
5962 ui.warn(_("arguments do not match paths that need resolving\n"))
5964 elif ms.mergedriver and ms.mdstate() != 's':
5963 elif ms.mergedriver and ms.mdstate() != 's':
5965 # run conclude step when either a driver-resolved file is requested
5964 # run conclude step when either a driver-resolved file is requested
5966 # or there are no driver-resolved files
5965 # or there are no driver-resolved files
5967 # we can't use 'ret' to determine whether any files are unresolved
5966 # we can't use 'ret' to determine whether any files are unresolved
5968 # because we might not have tried to resolve some
5967 # because we might not have tried to resolve some
5969 if ((runconclude or not list(ms.driverresolved()))
5968 if ((runconclude or not list(ms.driverresolved()))
5970 and not list(ms.unresolved())):
5969 and not list(ms.unresolved())):
5971 proceed = mergemod.driverconclude(repo, ms, wctx)
5970 proceed = mergemod.driverconclude(repo, ms, wctx)
5972 ms.commit()
5971 ms.commit()
5973 if not proceed:
5972 if not proceed:
5974 return 1
5973 return 1
5975
5974
5976 finally:
5975 finally:
5977 wlock.release()
5976 wlock.release()
5978
5977
5979 # Nudge users into finishing an unfinished operation
5978 # Nudge users into finishing an unfinished operation
5980 unresolvedf = list(ms.unresolved())
5979 unresolvedf = list(ms.unresolved())
5981 driverresolvedf = list(ms.driverresolved())
5980 driverresolvedf = list(ms.driverresolved())
5982 if not unresolvedf and not driverresolvedf:
5981 if not unresolvedf and not driverresolvedf:
5983 ui.status(_('(no more unresolved files)\n'))
5982 ui.status(_('(no more unresolved files)\n'))
5984 elif not unresolvedf:
5983 elif not unresolvedf:
5985 ui.status(_('(no more unresolved files -- '
5984 ui.status(_('(no more unresolved files -- '
5986 'run "hg resolve --all" to conclude)\n'))
5985 'run "hg resolve --all" to conclude)\n'))
5987
5986
5988 return ret
5987 return ret
5989
5988
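# A minimal sketch of the status labelling used by `hg resolve -l` above,
# assuming the merge state stores one of the single-letter codes handled in
# the loop ('u' unresolved, 'r' resolved, 'd' driver-resolved):
RESOLVE_LABELS = {
    'u': 'resolve.unresolved',
    'r': 'resolve.resolved',
    'd': 'resolve.driverresolved',
}

def resolve_listing_line(path, code, no_status=False):
    """Render one listing line, e.g. ('U path/to/file', 'resolve.unresolved')."""
    prefix = '' if no_status else code.upper() + ' '
    return prefix + path, RESOLVE_LABELS[code]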
5990 @command('revert',
5989 @command('revert',
5991 [('a', 'all', None, _('revert all changes when no arguments given')),
5990 [('a', 'all', None, _('revert all changes when no arguments given')),
5992 ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
5991 ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
5993 ('r', 'rev', '', _('revert to the specified revision'), _('REV')),
5992 ('r', 'rev', '', _('revert to the specified revision'), _('REV')),
5994 ('C', 'no-backup', None, _('do not save backup copies of files')),
5993 ('C', 'no-backup', None, _('do not save backup copies of files')),
5995 ('i', 'interactive', None,
5994 ('i', 'interactive', None,
5996 _('interactively select the changes (EXPERIMENTAL)')),
5995 _('interactively select the changes (EXPERIMENTAL)')),
5997 ] + walkopts + dryrunopts,
5996 ] + walkopts + dryrunopts,
5998 _('[OPTION]... [-r REV] [NAME]...'))
5997 _('[OPTION]... [-r REV] [NAME]...'))
5999 def revert(ui, repo, *pats, **opts):
5998 def revert(ui, repo, *pats, **opts):
6000 """restore files to their checkout state
5999 """restore files to their checkout state
6001
6000
6002 .. note::
6001 .. note::
6003
6002
6004 To check out earlier revisions, you should use :hg:`update REV`.
6003 To check out earlier revisions, you should use :hg:`update REV`.
6005 To cancel an uncommitted merge (and lose your changes),
6004 To cancel an uncommitted merge (and lose your changes),
6006 use :hg:`update --clean .`.
6005 use :hg:`update --clean .`.
6007
6006
6008 With no revision specified, revert the specified files or directories
6007 With no revision specified, revert the specified files or directories
6009 to the contents they had in the parent of the working directory.
6008 to the contents they had in the parent of the working directory.
6010 This restores the contents of files to an unmodified
6009 This restores the contents of files to an unmodified
6011 state and unschedules adds, removes, copies, and renames. If the
6010 state and unschedules adds, removes, copies, and renames. If the
6012 working directory has two parents, you must explicitly specify a
6011 working directory has two parents, you must explicitly specify a
6013 revision.
6012 revision.
6014
6013
6015 Using the -r/--rev or -d/--date options, revert the given files or
6014 Using the -r/--rev or -d/--date options, revert the given files or
6016 directories to their states as of a specific revision. Because
6015 directories to their states as of a specific revision. Because
6017 revert does not change the working directory parents, this will
6016 revert does not change the working directory parents, this will
6018 cause these files to appear modified. This can be helpful to "back
6017 cause these files to appear modified. This can be helpful to "back
6019 out" some or all of an earlier change. See :hg:`backout` for a
6018 out" some or all of an earlier change. See :hg:`backout` for a
6020 related method.
6019 related method.
6021
6020
6022 Modified files are saved with a .orig suffix before reverting.
6021 Modified files are saved with a .orig suffix before reverting.
6023 To disable these backups, use --no-backup.
6022 To disable these backups, use --no-backup.
6024
6023
6025 See :hg:`help dates` for a list of formats valid for -d/--date.
6024 See :hg:`help dates` for a list of formats valid for -d/--date.
6026
6025
6027 See :hg:`help backout` for a way to reverse the effect of an
6026 See :hg:`help backout` for a way to reverse the effect of an
6028 earlier changeset.
6027 earlier changeset.
6029
6028
6030 Returns 0 on success.
6029 Returns 0 on success.
6031 """
6030 """
6032
6031
6033 if opts.get("date"):
6032 if opts.get("date"):
6034 if opts.get("rev"):
6033 if opts.get("rev"):
6035 raise error.Abort(_("you can't specify a revision and a date"))
6034 raise error.Abort(_("you can't specify a revision and a date"))
6036 opts["rev"] = cmdutil.finddate(ui, repo, opts["date"])
6035 opts["rev"] = cmdutil.finddate(ui, repo, opts["date"])
6037
6036
6038 parent, p2 = repo.dirstate.parents()
6037 parent, p2 = repo.dirstate.parents()
6039 if not opts.get('rev') and p2 != nullid:
6038 if not opts.get('rev') and p2 != nullid:
6040 # revert after merge is a trap for new users (issue2915)
6039 # revert after merge is a trap for new users (issue2915)
6041 raise error.Abort(_('uncommitted merge with no revision specified'),
6040 raise error.Abort(_('uncommitted merge with no revision specified'),
6042 hint=_('use "hg update" or see "hg help revert"'))
6041 hint=_('use "hg update" or see "hg help revert"'))
6043
6042
6044 ctx = scmutil.revsingle(repo, opts.get('rev'))
6043 ctx = scmutil.revsingle(repo, opts.get('rev'))
6045
6044
6046 if (not (pats or opts.get('include') or opts.get('exclude') or
6045 if (not (pats or opts.get('include') or opts.get('exclude') or
6047 opts.get('all') or opts.get('interactive'))):
6046 opts.get('all') or opts.get('interactive'))):
6048 msg = _("no files or directories specified")
6047 msg = _("no files or directories specified")
6049 if p2 != nullid:
6048 if p2 != nullid:
6050 hint = _("uncommitted merge, use --all to discard all changes,"
6049 hint = _("uncommitted merge, use --all to discard all changes,"
6051 " or 'hg update -C .' to abort the merge")
6050 " or 'hg update -C .' to abort the merge")
6052 raise error.Abort(msg, hint=hint)
6051 raise error.Abort(msg, hint=hint)
6053 dirty = any(repo.status())
6052 dirty = any(repo.status())
6054 node = ctx.node()
6053 node = ctx.node()
6055 if node != parent:
6054 if node != parent:
6056 if dirty:
6055 if dirty:
6057 hint = _("uncommitted changes, use --all to discard all"
6056 hint = _("uncommitted changes, use --all to discard all"
6058 " changes, or 'hg update %s' to update") % ctx.rev()
6057 " changes, or 'hg update %s' to update") % ctx.rev()
6059 else:
6058 else:
6060 hint = _("use --all to revert all files,"
6059 hint = _("use --all to revert all files,"
6061 " or 'hg update %s' to update") % ctx.rev()
6060 " or 'hg update %s' to update") % ctx.rev()
6062 elif dirty:
6061 elif dirty:
6063 hint = _("uncommitted changes, use --all to discard all changes")
6062 hint = _("uncommitted changes, use --all to discard all changes")
6064 else:
6063 else:
6065 hint = _("use --all to revert all files")
6064 hint = _("use --all to revert all files")
6066 raise error.Abort(msg, hint=hint)
6065 raise error.Abort(msg, hint=hint)
6067
6066
6068 return cmdutil.revert(ui, repo, ctx, (parent, p2), *pats, **opts)
6067 return cmdutil.revert(ui, repo, ctx, (parent, p2), *pats, **opts)
6069
6068
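# A condensed sketch of how the abort hint above is picked when revert is
# invoked without file arguments, assuming `merging` means the working
# directory has two parents, `other_rev` means -r/--rev points away from the
# parent, and `dirty` means there are uncommitted changes:
def revert_hint(merging, other_rev, dirty, rev='REV'):
    if merging:
        return ("uncommitted merge, use --all to discard all changes,"
                " or 'hg update -C .' to abort the merge")
    if other_rev:
        if dirty:
            return ("uncommitted changes, use --all to discard all changes,"
                    " or 'hg update %s' to update" % rev)
        return "use --all to revert all files, or 'hg update %s' to update" % rev
    if dirty:
        return "uncommitted changes, use --all to discard all changes"
    return "use --all to revert all files"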
6070 @command('rollback', dryrunopts +
6069 @command('rollback', dryrunopts +
6071 [('f', 'force', False, _('ignore safety measures'))])
6070 [('f', 'force', False, _('ignore safety measures'))])
6072 def rollback(ui, repo, **opts):
6071 def rollback(ui, repo, **opts):
6073 """roll back the last transaction (DANGEROUS) (DEPRECATED)
6072 """roll back the last transaction (DANGEROUS) (DEPRECATED)
6074
6073
6075 Please use :hg:`commit --amend` instead of rollback to correct
6074 Please use :hg:`commit --amend` instead of rollback to correct
6076 mistakes in the last commit.
6075 mistakes in the last commit.
6077
6076
6078 This command should be used with care. There is only one level of
6077 This command should be used with care. There is only one level of
6079 rollback, and there is no way to undo a rollback. It will also
6078 rollback, and there is no way to undo a rollback. It will also
6080 restore the dirstate at the time of the last transaction, losing
6079 restore the dirstate at the time of the last transaction, losing
6081 any dirstate changes since that time. This command does not alter
6080 any dirstate changes since that time. This command does not alter
6082 the working directory.
6081 the working directory.
6083
6082
6084 Transactions are used to encapsulate the effects of all commands
6083 Transactions are used to encapsulate the effects of all commands
6085 that create new changesets or propagate existing changesets into a
6084 that create new changesets or propagate existing changesets into a
6086 repository.
6085 repository.
6087
6086
6088 .. container:: verbose
6087 .. container:: verbose
6089
6088
6090 For example, the following commands are transactional, and their
6089 For example, the following commands are transactional, and their
6091 effects can be rolled back:
6090 effects can be rolled back:
6092
6091
6093 - commit
6092 - commit
6094 - import
6093 - import
6095 - pull
6094 - pull
6096 - push (with this repository as the destination)
6095 - push (with this repository as the destination)
6097 - unbundle
6096 - unbundle
6098
6097
6099 To avoid permanent data loss, rollback will refuse to roll back a
6098 To avoid permanent data loss, rollback will refuse to roll back a
6100 commit transaction if it isn't checked out. Use --force to
6099 commit transaction if it isn't checked out. Use --force to
6101 override this protection.
6100 override this protection.
6102
6101
6103 This command is not intended for use on public repositories. Once
6102 This command is not intended for use on public repositories. Once
6104 changes are visible for pull by other users, rolling a transaction
6103 changes are visible for pull by other users, rolling a transaction
6105 back locally is ineffective (someone else may already have pulled
6104 back locally is ineffective (someone else may already have pulled
6106 the changes). Furthermore, a race is possible with readers of the
6105 the changes). Furthermore, a race is possible with readers of the
6107 repository; for example an in-progress pull from the repository
6106 repository; for example an in-progress pull from the repository
6108 may fail if a rollback is performed.
6107 may fail if a rollback is performed.
6109
6108
6110 Returns 0 on success, 1 if no rollback data is available.
6109 Returns 0 on success, 1 if no rollback data is available.
6111 """
6110 """
6112 return repo.rollback(dryrun=opts.get('dry_run'),
6111 return repo.rollback(dryrun=opts.get('dry_run'),
6113 force=opts.get('force'))
6112 force=opts.get('force'))
6114
6113
6115 @command('root', [])
6114 @command('root', [])
6116 def root(ui, repo):
6115 def root(ui, repo):
6117 """print the root (top) of the current working directory
6116 """print the root (top) of the current working directory
6118
6117
6119 Print the root directory of the current repository.
6118 Print the root directory of the current repository.
6120
6119
6121 Returns 0 on success.
6120 Returns 0 on success.
6122 """
6121 """
6123 ui.write(repo.root + "\n")
6122 ui.write(repo.root + "\n")
6124
6123
@command('^serve',
    [('A', 'accesslog', '', _('name of access log file to write to'),
     _('FILE')),
    ('d', 'daemon', None, _('run server in background')),
    ('', 'daemon-pipefds', '', _('used internally by daemon mode'), _('FILE')),
    ('E', 'errorlog', '', _('name of error log file to write to'), _('FILE')),
    # use string type, then we can check if something was passed
    ('p', 'port', '', _('port to listen on (default: 8000)'), _('PORT')),
    ('a', 'address', '', _('address to listen on (default: all interfaces)'),
     _('ADDR')),
    ('', 'prefix', '', _('prefix path to serve from (default: server root)'),
     _('PREFIX')),
    ('n', 'name', '',
     _('name to show in web pages (default: working directory)'), _('NAME')),
    ('', 'web-conf', '',
     _('name of the hgweb config file (see "hg help hgweb")'), _('FILE')),
    ('', 'webdir-conf', '', _('name of the hgweb config file (DEPRECATED)'),
     _('FILE')),
    ('', 'pid-file', '', _('name of file to write process ID to'), _('FILE')),
    ('', 'stdio', None, _('for remote clients')),
    ('', 'cmdserver', '', _('for remote clients'), _('MODE')),
    ('t', 'templates', '', _('web templates to use'), _('TEMPLATE')),
    ('', 'style', '', _('template style to use'), _('STYLE')),
    ('6', 'ipv6', None, _('use IPv6 in addition to IPv4')),
    ('', 'certificate', '', _('SSL certificate file'), _('FILE'))],
    _('[OPTION]...'),
    optionalrepo=True)
def serve(ui, repo, **opts):
    """start stand-alone webserver

    Start a local HTTP repository browser and pull server. You can use
    this for ad-hoc sharing and browsing of repositories. It is
    recommended to use a real web server to serve a repository for
    longer periods of time.

    Please note that the server does not implement access control.
    This means that, by default, anybody can read from the server and
    nobody can write to it. Set the ``web.allow_push`` option to ``*``
    to allow everybody to push to the server. You should use a real
    web server if you need to authenticate users.

    By default, the server logs accesses to stdout and errors to
    stderr. Use the -A/--accesslog and -E/--errorlog options to log to
    files.

    To have the server choose a free port number to listen on, specify
    a port number of 0; in this case, the server will print the port
    number it uses.

    Returns 0 on success.
    """

    if opts["stdio"] and opts["cmdserver"]:
        raise error.Abort(_("cannot use --stdio with --cmdserver"))

    if opts["stdio"]:
        if repo is None:
            raise error.RepoError(_("there is no Mercurial repository here"
                                    " (.hg not found)"))
        s = sshserver.sshserver(ui, repo)
        s.serve_forever()

    if opts["cmdserver"]:
        import commandserver
        service = commandserver.createservice(ui, repo, opts)
    else:
        service = hgweb.createservice(ui, repo, opts)
    return cmdutil.service(opts, initfn=service.init, runfn=service.run)

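# Example (illustrative): share the current repository ad hoc on port 8080,
# daemonized, with the logs written to files instead of stdout/stderr, using
# only the options defined above:
#
#   hg serve -p 8080 -d --pid-file hg.pid -A access.log -E errors.log
#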
@command('^status|st',
    [('A', 'all', None, _('show status of all files')),
    ('m', 'modified', None, _('show only modified files')),
    ('a', 'added', None, _('show only added files')),
    ('r', 'removed', None, _('show only removed files')),
    ('d', 'deleted', None, _('show only deleted (but tracked) files')),
    ('c', 'clean', None, _('show only files without changes')),
    ('u', 'unknown', None, _('show only unknown (not tracked) files')),
    ('i', 'ignored', None, _('show only ignored files')),
    ('n', 'no-status', None, _('hide status prefix')),
    ('C', 'copies', None, _('show source of copied files')),
    ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
    ('', 'rev', [], _('show difference from revision'), _('REV')),
    ('', 'change', '', _('list the changed files of a revision'), _('REV')),
    ] + walkopts + subrepoopts + formatteropts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def status(ui, repo, *pats, **opts):
    """show changed files in the working directory

    Show status of files in the repository. If names are given, only
    files that match are shown. Files that are clean or ignored or
    the source of a copy/move operation are not listed unless
    -c/--clean, -i/--ignored, -C/--copies or -A/--all are given.
    Unless options described with "show only ..." are given, the
    options -mardu are used.

    Option -q/--quiet hides untracked (unknown and ignored) files
    unless explicitly requested with -u/--unknown or -i/--ignored.

    .. note::

       status may appear to disagree with diff if permissions have
       changed or a merge has occurred. The standard diff format does
       not report permission changes and diff only reports changes
       relative to one merge parent.

    If one revision is given, it is used as the base revision.
    If two revisions are given, the differences between them are
    shown. The --change option can also be used as a shortcut to list
    the changed files of a revision from its first parent.

    The codes used to show the status of files are::

      M = modified
      A = added
      R = removed
      C = clean
      ! = missing (deleted by non-hg command, but still tracked)
      ? = not tracked
      I = ignored
        = origin of the previous file (with --copies)

    .. container:: verbose

      Examples:

      - show changes in the working directory relative to a
        changeset::

          hg status --rev 9353

      - show changes in the working directory relative to the
        current directory (see :hg:`help patterns` for more information)::

          hg status re:

      - show all changes including copies in an existing changeset::

          hg status --copies --change 9353

      - get a NUL separated list of added files, suitable for xargs::

          hg status -an0

    Returns 0 on success.
    """

    revs = opts.get('rev')
    change = opts.get('change')

    if revs and change:
        msg = _('cannot specify --rev and --change at the same time')
        raise error.Abort(msg)
    elif change:
        node2 = scmutil.revsingle(repo, change, None).node()
        node1 = repo[node2].p1().node()
    else:
        node1, node2 = scmutil.revpair(repo, revs)

    if pats:
        cwd = repo.getcwd()
    else:
        cwd = ''

    if opts.get('print0'):
        end = '\0'
    else:
        end = '\n'
    copy = {}
    states = 'modified added removed deleted unknown ignored clean'.split()
    show = [k for k in states if opts.get(k)]
    if opts.get('all'):
        show += ui.quiet and (states[:4] + ['clean']) or states
    if not show:
        if ui.quiet:
            show = states[:4]
        else:
            show = states[:5]

    m = scmutil.match(repo[node2], pats, opts)
    stat = repo.status(node1, node2, m,
                       'ignored' in show, 'clean' in show, 'unknown' in show,
                       opts.get('subrepos'))
    changestates = zip(states, 'MAR!?IC', stat)
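    # ``states`` and the code letters 'MAR!?IC' are zipped positionally, so
    # the two must stay in the same order: modified->M, added->A, removed->R,
    # deleted->!, unknown->?, ignored->I, clean->C.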

    if (opts.get('all') or opts.get('copies')
        or ui.configbool('ui', 'statuscopies')) and not opts.get('no_status'):
        copy = copies.pathcopies(repo[node1], repo[node2], m)

    fm = ui.formatter('status', opts)
    fmt = '%s' + end
    showchar = not opts.get('no_status')

    for state, char, files in changestates:
        if state in show:
            label = 'status.' + state
            for f in files:
                fm.startitem()
                fm.condwrite(showchar, 'status', '%s ', char, label=label)
                fm.write('path', fmt, repo.pathto(f, cwd), label=label)
                if f in copy:
                    fm.write("copy", ' %s' + end, repo.pathto(copy[f], cwd),
                             label='status.copied')
    fm.end()

@command('^summary|sum',
    [('', 'remote', None, _('check for push and pull'))], '[--remote]')
def summary(ui, repo, **opts):
    """summarize working directory state

    This generates a brief summary of the working directory state,
    including parents, branch, commit status, phase and available updates.

    With the --remote option, this will check the default paths for
    incoming and outgoing changes. This can be time-consuming.

    Returns 0 on success.
    """

    ctx = repo[None]
    parents = ctx.parents()
    pnode = parents[0].node()
    marks = []

    for p in parents:
        # label with log.changeset (instead of log.parent) since this
        # shows a working directory parent *changeset*:
        # i18n: column positioning for "hg summary"
        ui.write(_('parent: %d:%s ') % (p.rev(), str(p)),
                 label='log.changeset changeset.%s' % p.phasestr())
        ui.write(' '.join(p.tags()), label='log.tag')
        if p.bookmarks():
            marks.extend(p.bookmarks())
        if p.rev() == -1:
            if not len(repo):
                ui.write(_(' (empty repository)'))
            else:
                ui.write(_(' (no revision checked out)'))
        ui.write('\n')
        if p.description():
            ui.status(' ' + p.description().splitlines()[0].strip() + '\n',
                      label='log.summary')

    branch = ctx.branch()
    bheads = repo.branchheads(branch)
    # i18n: column positioning for "hg summary"
    m = _('branch: %s\n') % branch
    if branch != 'default':
        ui.write(m, label='log.branch')
    else:
        ui.status(m, label='log.branch')

    if marks:
        active = repo._activebookmark
        # i18n: column positioning for "hg summary"
        ui.write(_('bookmarks:'), label='log.bookmark')
        if active is not None:
            if active in marks:
                ui.write(' *' + active, label=activebookmarklabel)
                marks.remove(active)
            else:
                ui.write(' [%s]' % active, label=activebookmarklabel)
        for m in marks:
            ui.write(' ' + m, label='log.bookmark')
        ui.write('\n', label='log.bookmark')

    status = repo.status(unknown=True)

    c = repo.dirstate.copies()
    copied, renamed = [], []
    for d, s in c.iteritems():
        if s in status.removed:
            status.removed.remove(s)
            renamed.append(d)
        else:
            copied.append(d)
        if d in status.added:
            status.added.remove(d)

    try:
        ms = mergemod.mergestate.read(repo)
    except error.UnsupportedMergeRecords as e:
        s = ' '.join(e.recordtypes)
        ui.warn(
            _('warning: merge state has unsupported record types: %s\n') % s)
        unresolved = 0
    else:
        unresolved = [f for f in ms if ms[f] == 'u']

    subs = [s for s in ctx.substate if ctx.sub(s).dirty()]

    labels = [(ui.label(_('%d modified'), 'status.modified'), status.modified),
              (ui.label(_('%d added'), 'status.added'), status.added),
              (ui.label(_('%d removed'), 'status.removed'), status.removed),
              (ui.label(_('%d renamed'), 'status.copied'), renamed),
              (ui.label(_('%d copied'), 'status.copied'), copied),
              (ui.label(_('%d deleted'), 'status.deleted'), status.deleted),
              (ui.label(_('%d unknown'), 'status.unknown'), status.unknown),
              (ui.label(_('%d unresolved'), 'resolve.unresolved'), unresolved),
              (ui.label(_('%d subrepos'), 'status.modified'), subs)]
    t = []
    for l, s in labels:
        if s:
            t.append(l % len(s))

    t = ', '.join(t)
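    # At this point ``t`` is the comma-separated commit line, e.g. (purely
    # illustrative) "1 modified, 2 unknown"; the checks below append state
    # annotations such as ' (merge)' or ' (new branch head)'.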
    cleanworkdir = False

    if repo.vfs.exists('graftstate'):
        t += _(' (graft in progress)')
    if repo.vfs.exists('updatestate'):
        t += _(' (interrupted update)')
    elif len(parents) > 1:
        t += _(' (merge)')
    elif branch != parents[0].branch():
        t += _(' (new branch)')
    elif (parents[0].closesbranch() and
          pnode in repo.branchheads(branch, closed=True)):
        t += _(' (head closed)')
    elif not (status.modified or status.added or status.removed or renamed or
              copied or subs):
        t += _(' (clean)')
        cleanworkdir = True
    elif pnode not in bheads:
        t += _(' (new branch head)')

    if parents:
        pendingphase = max(p.phase() for p in parents)
    else:
        pendingphase = phases.public

    if pendingphase > phases.newcommitphase(ui):
        t += ' (%s)' % phases.phasenames[pendingphase]

    if cleanworkdir:
        # i18n: column positioning for "hg summary"
        ui.status(_('commit: %s\n') % t.strip())
    else:
        # i18n: column positioning for "hg summary"
        ui.write(_('commit: %s\n') % t.strip())

    # all ancestors of branch heads - all ancestors of parent = new csets
    new = len(repo.changelog.findmissing([pctx.node() for pctx in parents],
                                         bheads))

    if new == 0:
        # i18n: column positioning for "hg summary"
        ui.status(_('update: (current)\n'))
    elif pnode not in bheads:
        # i18n: column positioning for "hg summary"
        ui.write(_('update: %d new changesets (update)\n') % new)
    else:
        # i18n: column positioning for "hg summary"
        ui.write(_('update: %d new changesets, %d branch heads (merge)\n') %
                 (new, len(bheads)))

    t = []
    draft = len(repo.revs('draft()'))
    if draft:
        t.append(_('%d draft') % draft)
    secret = len(repo.revs('secret()'))
    if secret:
        t.append(_('%d secret') % secret)

    if draft or secret:
        ui.status(_('phases: %s\n') % ', '.join(t))

    cmdutil.summaryhooks(ui, repo)

    if opts.get('remote'):
        needsincoming, needsoutgoing = True, True
    else:
        needsincoming, needsoutgoing = False, False
        for i, o in cmdutil.summaryremotehooks(ui, repo, opts, None):
            if i:
                needsincoming = True
            if o:
                needsoutgoing = True
        if not needsincoming and not needsoutgoing:
            return
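    # Without --remote, the (potentially slow) incoming/outgoing checks below
    # only run when a summary hook (e.g. from an extension) asks for them;
    # otherwise the command is done at this point.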

    def getincoming():
        source, branches = hg.parseurl(ui.expandpath('default'))
        sbranch = branches[0]
        try:
            other = hg.peer(repo, {}, source)
        except error.RepoError:
            if opts.get('remote'):
                raise
            return source, sbranch, None, None, None
        revs, checkout = hg.addbranchrevs(repo, other, branches, None)
        if revs:
            revs = [other.lookup(rev) for rev in revs]
        ui.debug('comparing with %s\n' % util.hidepassword(source))
        repo.ui.pushbuffer()
        commoninc = discovery.findcommonincoming(repo, other, heads=revs)
        repo.ui.popbuffer()
        return source, sbranch, other, commoninc, commoninc[1]

    if needsincoming:
        source, sbranch, sother, commoninc, incoming = getincoming()
    else:
        source = sbranch = sother = commoninc = incoming = None

    def getoutgoing():
        dest, branches = hg.parseurl(ui.expandpath('default-push', 'default'))
        dbranch = branches[0]
        revs, checkout = hg.addbranchrevs(repo, repo, branches, None)
        if source != dest:
            try:
                dother = hg.peer(repo, {}, dest)
            except error.RepoError:
                if opts.get('remote'):
                    raise
                return dest, dbranch, None, None
            ui.debug('comparing with %s\n' % util.hidepassword(dest))
        elif sother is None:
            # there is no explicit destination peer, and the source peer is
            # invalid
            return dest, dbranch, None, None
        else:
            dother = sother
        if (source != dest or (sbranch is not None and sbranch != dbranch)):
            common = None
        else:
            common = commoninc
        if revs:
            revs = [repo.lookup(rev) for rev in revs]
        repo.ui.pushbuffer()
        outgoing = discovery.findcommonoutgoing(repo, dother, onlyheads=revs,
                                                commoninc=common)
        repo.ui.popbuffer()
        return dest, dbranch, dother, outgoing

    if needsoutgoing:
        dest, dbranch, dother, outgoing = getoutgoing()
    else:
        dest = dbranch = dother = outgoing = None

    if opts.get('remote'):
        t = []
        if incoming:
            t.append(_('1 or more incoming'))
        o = outgoing.missing
        if o:
            t.append(_('%d outgoing') % len(o))
        other = dother or sother
        if 'bookmarks' in other.listkeys('namespaces'):
            counts = bookmarks.summary(repo, other)
            if counts[0] > 0:
                t.append(_('%d incoming bookmarks') % counts[0])
            if counts[1] > 0:
                t.append(_('%d outgoing bookmarks') % counts[1])

        if t:
            # i18n: column positioning for "hg summary"
            ui.write(_('remote: %s\n') % (', '.join(t)))
        else:
            # i18n: column positioning for "hg summary"
            ui.status(_('remote: (synced)\n'))

    cmdutil.summaryremotehooks(ui, repo, opts,
                               ((source, sbranch, sother, commoninc),
                                (dest, dbranch, dother, outgoing)))

@command('tag',
    [('f', 'force', None, _('force tag')),
    ('l', 'local', None, _('make the tag local')),
    ('r', 'rev', '', _('revision to tag'), _('REV')),
    ('', 'remove', None, _('remove a tag')),
    # -l/--local is already there, commitopts cannot be used
    ('e', 'edit', None, _('invoke editor on commit messages')),
    ('m', 'message', '', _('use text as commit message'), _('TEXT')),
    ] + commitopts2,
    _('[-f] [-l] [-m TEXT] [-d DATE] [-u USER] [-r REV] NAME...'))
def tag(ui, repo, name1, *names, **opts):
    """add one or more tags for the current or given revision

    Name a particular revision using <name>.

    Tags are used to name particular revisions of the repository and are
    very useful to compare different revisions, to go back to significant
    earlier versions or to mark branch points as releases, etc. Changing
    an existing tag is normally disallowed; use -f/--force to override.

    If no revision is given, the parent of the working directory is
    used.

    To facilitate version control, distribution, and merging of tags,
    they are stored as a file named ".hgtags" which is managed similarly
    to other project files and can be hand-edited if necessary. This
    also means that tagging creates a new commit. The file
    ".hg/localtags" is used for local tags (not shared among
    repositories).

    Tag commits are usually made at the head of a branch. If the parent
    of the working directory is not a branch head, :hg:`tag` aborts; use
    -f/--force to force the tag commit to be based on a non-head
    changeset.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    Since tag names have priority over branch names during revision
    lookup, using an existing branch name as a tag name is discouraged.

    Returns 0 on success.
    """
    wlock = lock = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        rev_ = "."
        names = [t.strip() for t in (name1,) + names]
        if len(names) != len(set(names)):
            raise error.Abort(_('tag names must be unique'))
        for n in names:
            scmutil.checknewlabel(repo, n, 'tag')
            if not n:
                raise error.Abort(_('tag names cannot consist entirely of '
                                    'whitespace'))
        if opts.get('rev') and opts.get('remove'):
            raise error.Abort(_("--rev and --remove are incompatible"))
        if opts.get('rev'):
            rev_ = opts['rev']
        message = opts.get('message')
        if opts.get('remove'):
            if opts.get('local'):
                expectedtype = 'local'
            else:
                expectedtype = 'global'

            for n in names:
                if not repo.tagtype(n):
                    raise error.Abort(_("tag '%s' does not exist") % n)
                if repo.tagtype(n) != expectedtype:
                    if expectedtype == 'global':
                        raise error.Abort(_("tag '%s' is not a global tag") % n)
                    else:
                        raise error.Abort(_("tag '%s' is not a local tag") % n)
            rev_ = 'null'
            if not message:
                # we don't translate commit messages
                message = 'Removed tag %s' % ', '.join(names)
        elif not opts.get('force'):
            for n in names:
                if n in repo.tags():
                    raise error.Abort(_("tag '%s' already exists "
                                        "(use -f to force)") % n)
        if not opts.get('local'):
            p1, p2 = repo.dirstate.parents()
            if p2 != nullid:
                raise error.Abort(_('uncommitted merge'))
            bheads = repo.branchheads()
            if not opts.get('force') and bheads and p1 not in bheads:
                raise error.Abort(_('not at a branch head (use -f to force)'))
        r = scmutil.revsingle(repo, rev_).node()

        if not message:
            # we don't translate commit messages
            message = ('Added tag %s for changeset %s' %
                       (', '.join(names), short(r)))

        date = opts.get('date')
        if date:
            date = util.parsedate(date)

        if opts.get('remove'):
            editform = 'tag.remove'
        else:
            editform = 'tag.add'
        editor = cmdutil.getcommiteditor(editform=editform, **opts)

        # don't allow tagging the null rev
        if (not opts.get('remove') and
            scmutil.revsingle(repo, rev_).rev() == nullrev):
            raise error.Abort(_("cannot tag null revision"))

        repo.tag(names, r, message, opts.get('local'), opts.get('user'), date,
                 editor=editor)
    finally:
        release(lock, wlock)

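# Example (illustrative): tag an existing revision, then remove the tag again,
# using only the flags defined above:
#
#   hg tag -r 3 -m "Tagging 1.0 release" v1.0
#   hg tag --remove v1.0
#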
@command('tags', formatteropts, '')
def tags(ui, repo, **opts):
    """list repository tags

    This lists both regular and local tags. When the -v/--verbose
    switch is used, a third column "local" is printed for local tags.

    Returns 0 on success.
    """

    fm = ui.formatter('tags', opts)
    hexfunc = fm.hexfunc
    tagtype = ""

    for t, n in reversed(repo.tagslist()):
        hn = hexfunc(n)
        label = 'tags.normal'
        tagtype = ''
        if repo.tagtype(t) == 'local':
            label = 'tags.local'
            tagtype = 'local'

        fm.startitem()
        fm.write('tag', '%s', t, label=label)
        fmt = " " * (30 - encoding.colwidth(t)) + ' %5d:%s'
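        # the format above pads the tag name out to a 30-character column
        # (colwidth accounts for double-width characters) before "rev:node"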
        fm.condwrite(not ui.quiet, 'rev node', fmt,
                     repo.changelog.rev(n), hn, label=label)
        fm.condwrite(ui.verbose and tagtype, 'type', ' %s',
                     tagtype, label=label)
        fm.plain('\n')
    fm.end()

@command('tip',
    [('p', 'patch', None, _('show patch')),
    ('g', 'git', None, _('use git extended diff format')),
    ] + templateopts,
    _('[-p] [-g]'))
def tip(ui, repo, **opts):
    """show the tip revision (DEPRECATED)

    The tip revision (usually just called the tip) is the changeset
    most recently added to the repository (and therefore the most
    recently changed head).

    If you have just made a commit, that commit will be the tip. If
    you have just pulled changes from another repository, the tip of
    that repository becomes the current tip. The "tip" tag is special
    and cannot be renamed or assigned to a different changeset.

    This command is deprecated; please use :hg:`heads` instead.

    Returns 0 on success.
    """
    displayer = cmdutil.show_changeset(ui, repo, opts)
    displayer.show(repo['tip'])
    displayer.close()

@command('unbundle',
    [('u', 'update', None,
     _('update to new branch head if changesets were unbundled'))],
    _('[-u] FILE...'))
def unbundle(ui, repo, fname1, *fnames, **opts):
    """apply one or more changegroup files

    Apply one or more compressed changegroup files generated by the
    bundle command.

    Returns 0 on success, 1 if an update has unresolved files.
    """
    fnames = (fname1,) + fnames

    lock = repo.lock()
    try:
        for fname in fnames:
            f = hg.openpath(ui, fname)
            gen = exchange.readbundle(ui, f, fname)
            if isinstance(gen, bundle2.unbundle20):
                tr = repo.transaction('unbundle')
                try:
                    op = bundle2.applybundle(repo, gen, tr, source='unbundle',
                                             url='bundle:' + fname)
                    tr.close()
                except error.BundleUnknownFeatureError as exc:
                    raise error.Abort(_('%s: unknown bundle feature, %s')
                                      % (fname, exc),
                                      hint=_("see https://mercurial-scm.org/"
                                             "wiki/BundleFeature for more "
                                             "information"))
                finally:
                    if tr:
                        tr.release()
                changes = [r.get('return', 0)
                           for r in op.records['changegroup']]
                modheads = changegroup.combineresults(changes)
            elif isinstance(gen, streamclone.streamcloneapplier):
                raise error.Abort(
                        _('packed bundles cannot be applied with '
                          '"hg unbundle"'),
                        hint=_('use "hg debugapplystreamclonebundle"'))
            else:
                modheads = gen.apply(repo, 'unbundle', 'bundle:' + fname)
    finally:
        lock.release()

    return postincoming(ui, repo, modheads, opts.get('update'), None)

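# Example (illustrative): apply a changegroup file produced elsewhere with
# "hg bundle", updating to the new branch head in the same step:
#
#   hg unbundle -u changes.hg
#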
@command('^update|up|checkout|co',
    [('C', 'clean', None, _('discard uncommitted changes (no backup)')),
    ('c', 'check', None,
     _('update across branches if no uncommitted changes')),
    ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
    ('r', 'rev', '', _('revision'), _('REV'))
    ] + mergetoolopts,
    _('[-c] [-C] [-d DATE] [[-r] REV]'))
def update(ui, repo, node=None, rev=None, clean=False, date=None, check=False,
           tool=None):
    """update working directory (or switch revisions)

    Update the repository's working directory to the specified
    changeset. If no changeset is specified, update to the tip of the
    current named branch and move the active bookmark (see :hg:`help
    bookmarks`).

    Update sets the working directory's parent revision to the specified
    changeset (see :hg:`help parents`).

    If the changeset is not a descendant or ancestor of the working
    directory's parent, the update is aborted. With the -c/--check
    option, the working directory is checked for uncommitted changes; if
    none are found, the working directory is updated to the specified
    changeset.

    .. container:: verbose

      The following rules apply when the working directory contains
      uncommitted changes:

      1. If neither -c/--check nor -C/--clean is specified, and if
         the requested changeset is an ancestor or descendant of
         the working directory's parent, the uncommitted changes
         are merged into the requested changeset and the merged
         result is left uncommitted. If the requested changeset is
         not an ancestor or descendant (that is, it is on another
         branch), the update is aborted and the uncommitted changes
         are preserved.

      2. With the -c/--check option, the update is aborted and the
         uncommitted changes are preserved.

      3. With the -C/--clean option, uncommitted changes are discarded and
         the working directory is updated to the requested changeset.

    To cancel an uncommitted merge (and lose your changes), use
    :hg:`update --clean .`.

    Use null as the changeset to remove the working directory (like
    :hg:`clone -U`).

    If you want to revert just one file to an older revision, use
    :hg:`revert [-r REV] NAME`.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    Returns 0 on success, 1 if there are unresolved files.
    """
    movemarkfrom = None
    if rev and node:
        raise error.Abort(_("please specify just one revision"))

    if rev is None or rev == '':
        rev = node

    wlock = repo.wlock()
    try:
        cmdutil.clearunfinished(repo)

        if date:
            if rev is not None:
                raise error.Abort(_("you can't specify a revision and a date"))
            rev = cmdutil.finddate(ui, repo, date)

        # if we defined a bookmark, we have to remember the original name
        brev = rev
        rev = scmutil.revsingle(repo, rev, rev).rev()
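        # ``brev`` keeps whatever was typed on the command line (possibly a
        # bookmark name), while ``rev`` is resolved to a revision number; the
        # bookmark activation logic further down needs the original spelling.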

        if check and clean:
            raise error.Abort(_("cannot specify both -c/--check and -C/--clean")
                             )

        if check:
            cmdutil.bailifchanged(repo, merge=False)
        if rev is None:
            updata = destutil.destupdate(repo, clean=clean, check=check)
            rev, movemarkfrom, brev = updata

        repo.ui.setconfig('ui', 'forcemerge', tool, 'update')

        if clean:
            ret = hg.clean(repo, rev)
        else:
            ret = hg.update(repo, rev)

        if not ret and movemarkfrom:
            if movemarkfrom == repo['.'].node():
                pass # no-op update
            elif bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
                ui.status(_("updating bookmark %s\n") % repo._activebookmark)
            else:
                # this can happen with a non-linear update
                ui.status(_("(leaving bookmark %s)\n") %
                          repo._activebookmark)
                bookmarks.deactivate(repo)
        elif brev in repo._bookmarks:
            bookmarks.activate(repo, brev)
            ui.status(_("(activating bookmark %s)\n") % brev)
        elif brev:
            if repo._activebookmark:
                ui.status(_("(leaving bookmark %s)\n") %
                          repo._activebookmark)
            bookmarks.deactivate(repo)
    finally:
        wlock.release()

    return ret

@command('verify', [])
def verify(ui, repo):
    """verify the integrity of the repository

    Verify the integrity of the current repository.

    This will perform an extensive check of the repository's
    integrity, validating the hashes and checksums of each entry in
    the changelog, manifest, and tracked files, as well as the
    integrity of their crosslinks and indices.

    Please see https://mercurial-scm.org/wiki/RepositoryCorruption
    for more information about recovery from corruption of the
    repository.

    Returns 0 on success, 1 if errors are encountered.
    """
    return hg.verify(repo)

@command('version', [], norepo=True)
def version_(ui):
    """output version and copyright information"""
    ui.write(_("Mercurial Distributed SCM (version %s)\n")
             % util.version())
    ui.status(_(
        "(see https://mercurial-scm.org for more information)\n"
        "\nCopyright (C) 2005-2015 Matt Mackall and others\n"
        "This is free software; see the source for copying conditions. "
        "There is NO\nwarranty; "
        "not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n"
    ))

    ui.note(_("\nEnabled extensions:\n\n"))
    if ui.verbose:
        # format names and versions into columns
        names = []
        vers = []
        for name, module in extensions.extensions():
            names.append(name)
            vers.append(extensions.moduleversion(module))
        if names:
            maxnamelen = max(len(n) for n in names)
            for i, name in enumerate(names):
                ui.write(" %-*s %s\n" % (maxnamelen, name, vers[i]))
@@ -1,906 +1,906 b''
1 # hg.py - repository classes for mercurial
1 # hg.py - repository classes for mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
4 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 from __future__ import absolute_import
9 from __future__ import absolute_import
10
10
11 import errno
11 import errno
12 import os
12 import os
13 import shutil
13 import shutil
14
14
15 from .i18n import _
15 from .i18n import _
16 from .node import nullid
16 from .node import nullid
17
17
18 from . import (
18 from . import (
19 bookmarks,
19 bookmarks,
20 bundlerepo,
20 bundlerepo,
21 cmdutil,
21 cmdutil,
22 discovery,
22 discovery,
23 error,
23 error,
24 exchange,
24 exchange,
25 extensions,
25 extensions,
26 httppeer,
26 httppeer,
27 localrepo,
27 localrepo,
28 lock,
28 lock,
29 merge as mergemod,
29 merge as mergemod,
30 node,
30 node,
31 phases,
31 phases,
32 repoview,
32 repoview,
33 scmutil,
33 scmutil,
34 sshpeer,
34 sshpeer,
35 statichttprepo,
35 statichttprepo,
36 ui as uimod,
36 ui as uimod,
37 unionrepo,
37 unionrepo,
38 url,
38 url,
39 util,
39 util,
40 verify as verifymod,
40 verify as verifymod,
41 )
41 )
42
42
43 release = lock.release
43 release = lock.release
44
44
45 def _local(path):
45 def _local(path):
46 path = util.expandpath(util.urllocalpath(path))
46 path = util.expandpath(util.urllocalpath(path))
47 return (os.path.isfile(path) and bundlerepo or localrepo)
47 return (os.path.isfile(path) and bundlerepo or localrepo)
48
48
49 def addbranchrevs(lrepo, other, branches, revs):
49 def addbranchrevs(lrepo, other, branches, revs):
50 peer = other.peer() # a courtesy to callers using a localrepo for other
50 peer = other.peer() # a courtesy to callers using a localrepo for other
51 hashbranch, branches = branches
51 hashbranch, branches = branches
52 if not hashbranch and not branches:
52 if not hashbranch and not branches:
53 x = revs or None
53 x = revs or None
54 if util.safehasattr(revs, 'first'):
54 if util.safehasattr(revs, 'first'):
55 y = revs.first()
55 y = revs.first()
56 elif revs:
56 elif revs:
57 y = revs[0]
57 y = revs[0]
58 else:
58 else:
59 y = None
59 y = None
60 return x, y
60 return x, y
61 if revs:
61 if revs:
62 revs = list(revs)
62 revs = list(revs)
63 else:
63 else:
64 revs = []
64 revs = []
65
65
66 if not peer.capable('branchmap'):
66 if not peer.capable('branchmap'):
67 if branches:
67 if branches:
68 raise error.Abort(_("remote branch lookup not supported"))
68 raise error.Abort(_("remote branch lookup not supported"))
69 revs.append(hashbranch)
69 revs.append(hashbranch)
70 return revs, revs[0]
70 return revs, revs[0]
71 branchmap = peer.branchmap()
71 branchmap = peer.branchmap()
72
72
73 def primary(branch):
73 def primary(branch):
74 if branch == '.':
74 if branch == '.':
75 if not lrepo:
75 if not lrepo:
76 raise error.Abort(_("dirstate branch not accessible"))
76 raise error.Abort(_("dirstate branch not accessible"))
77 branch = lrepo.dirstate.branch()
77 branch = lrepo.dirstate.branch()
78 if branch in branchmap:
78 if branch in branchmap:
79 revs.extend(node.hex(r) for r in reversed(branchmap[branch]))
79 revs.extend(node.hex(r) for r in reversed(branchmap[branch]))
80 return True
80 return True
81 else:
81 else:
82 return False
82 return False
83
83
84 for branch in branches:
84 for branch in branches:
85 if not primary(branch):
85 if not primary(branch):
86 raise error.RepoLookupError(_("unknown branch '%s'") % branch)
86 raise error.RepoLookupError(_("unknown branch '%s'") % branch)
87 if hashbranch:
87 if hashbranch:
88 if not primary(hashbranch):
88 if not primary(hashbranch):
89 revs.append(hashbranch)
89 revs.append(hashbranch)
90 return revs, revs[0]
90 return revs, revs[0]
91
91
92 def parseurl(path, branches=None):
92 def parseurl(path, branches=None):
93 '''parse url#branch, returning (url, (branch, branches))'''
93 '''parse url#branch, returning (url, (branch, branches))'''
94
94
95 u = util.url(path)
95 u = util.url(path)
96 branch = None
96 branch = None
97 if u.fragment:
97 if u.fragment:
98 branch = u.fragment
98 branch = u.fragment
99 u.fragment = None
99 u.fragment = None
100 return str(u), (branch, branches or [])
100 return str(u), (branch, branches or [])
101
101
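The docstring above fully specifies parseurl's contract; here is a small usage sketch under an assumed, made-up URL (nothing below is taken from this changeset):

    # Hypothetical input; only the '#stable' fragment matters here.
    url, (branch, branches) = parseurl('https://example.org/repo#stable')
    # url      == 'https://example.org/repo'  (fragment stripped from the URL)
    # branch   == 'stable'                    (taken from the fragment)
    # branches == []                          (no extra branch list was passed)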
102 schemes = {
102 schemes = {
103 'bundle': bundlerepo,
103 'bundle': bundlerepo,
104 'union': unionrepo,
104 'union': unionrepo,
105 'file': _local,
105 'file': _local,
106 'http': httppeer,
106 'http': httppeer,
107 'https': httppeer,
107 'https': httppeer,
108 'ssh': sshpeer,
108 'ssh': sshpeer,
109 'static-http': statichttprepo,
109 'static-http': statichttprepo,
110 }
110 }
111
111
112 def _peerlookup(path):
112 def _peerlookup(path):
113 u = util.url(path)
113 u = util.url(path)
114 scheme = u.scheme or 'file'
114 scheme = u.scheme or 'file'
115 thing = schemes.get(scheme) or schemes['file']
115 thing = schemes.get(scheme) or schemes['file']
116 try:
116 try:
117 return thing(path)
117 return thing(path)
118 except TypeError:
118 except TypeError:
119 # we can't test callable(thing) because 'thing' can be an unloaded
119 # we can't test callable(thing) because 'thing' can be an unloaded
120 # module that implements __call__
120 # module that implements __call__
121 if not util.safehasattr(thing, 'instance'):
121 if not util.safehasattr(thing, 'instance'):
122 raise
122 raise
123 return thing
123 return thing
124
124
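For orientation, a sketch of how the schemes table and _peerlookup above resolve a few path shapes; the example paths are invented:

    # Illustration only; the return values are modules, not repo objects.
    _peerlookup('ssh://hg@example.org/repo')   # -> sshpeer (module with .instance)
    _peerlookup('https://example.org/repo')    # -> httppeer
    _peerlookup('/srv/repos/foo')              # -> localrepo, or bundlerepo if
                                               #    the path is a bundle file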
125 def islocal(repo):
125 def islocal(repo):
126 '''return true if repo (or path pointing to repo) is local'''
126 '''return true if repo (or path pointing to repo) is local'''
127 if isinstance(repo, str):
127 if isinstance(repo, str):
128 try:
128 try:
129 return _peerlookup(repo).islocal(repo)
129 return _peerlookup(repo).islocal(repo)
130 except AttributeError:
130 except AttributeError:
131 return False
131 return False
132 return repo.local()
132 return repo.local()
133
133
134 def openpath(ui, path):
134 def openpath(ui, path):
135 '''open path with open if local, url.open if remote'''
135 '''open path with open if local, url.open if remote'''
136 pathurl = util.url(path, parsequery=False, parsefragment=False)
136 pathurl = util.url(path, parsequery=False, parsefragment=False)
137 if pathurl.islocal():
137 if pathurl.islocal():
138 return util.posixfile(pathurl.localpath(), 'rb')
138 return util.posixfile(pathurl.localpath(), 'rb')
139 else:
139 else:
140 return url.open(ui, path)
140 return url.open(ui, path)
141
141
142 # a list of (ui, repo) functions called for wire peer initialization
142 # a list of (ui, repo) functions called for wire peer initialization
143 wirepeersetupfuncs = []
143 wirepeersetupfuncs = []
144
144
145 def _peerorrepo(ui, path, create=False):
145 def _peerorrepo(ui, path, create=False):
146 """return a repository object for the specified path"""
146 """return a repository object for the specified path"""
147 obj = _peerlookup(path).instance(ui, path, create)
147 obj = _peerlookup(path).instance(ui, path, create)
148 ui = getattr(obj, "ui", ui)
148 ui = getattr(obj, "ui", ui)
149 for name, module in extensions.extensions(ui):
149 for name, module in extensions.extensions(ui):
150 hook = getattr(module, 'reposetup', None)
150 hook = getattr(module, 'reposetup', None)
151 if hook:
151 if hook:
152 hook(ui, obj)
152 hook(ui, obj)
153 if not obj.local():
153 if not obj.local():
154 for f in wirepeersetupfuncs:
154 for f in wirepeersetupfuncs:
155 f(ui, obj)
155 f(ui, obj)
156 return obj
156 return obj
157
157
158 def repository(ui, path='', create=False):
158 def repository(ui, path='', create=False):
159 """return a repository object for the specified path"""
159 """return a repository object for the specified path"""
160 peer = _peerorrepo(ui, path, create)
160 peer = _peerorrepo(ui, path, create)
161 repo = peer.local()
161 repo = peer.local()
162 if not repo:
162 if not repo:
163 raise error.Abort(_("repository '%s' is not local") %
163 raise error.Abort(_("repository '%s' is not local") %
164 (path or peer.url()))
164 (path or peer.url()))
165 return repo.filtered('visible')
165 return repo.filtered('visible')
166
166
167 def peer(uiorrepo, opts, path, create=False):
167 def peer(uiorrepo, opts, path, create=False):
168 '''return a repository peer for the specified path'''
168 '''return a repository peer for the specified path'''
169 rui = remoteui(uiorrepo, opts)
169 rui = remoteui(uiorrepo, opts)
170 return _peerorrepo(rui, path, create).peer()
170 return _peerorrepo(rui, path, create).peer()
171
171
172 def defaultdest(source):
172 def defaultdest(source):
173 '''return default destination of clone if none is given
173 '''return default destination of clone if none is given
174
174
175 >>> defaultdest('foo')
175 >>> defaultdest('foo')
176 'foo'
176 'foo'
177 >>> defaultdest('/foo/bar')
177 >>> defaultdest('/foo/bar')
178 'bar'
178 'bar'
179 >>> defaultdest('/')
179 >>> defaultdest('/')
180 ''
180 ''
181 >>> defaultdest('')
181 >>> defaultdest('')
182 ''
182 ''
183 >>> defaultdest('http://example.org/')
183 >>> defaultdest('http://example.org/')
184 ''
184 ''
185 >>> defaultdest('http://example.org/foo/')
185 >>> defaultdest('http://example.org/foo/')
186 'foo'
186 'foo'
187 '''
187 '''
188 path = util.url(source).path
188 path = util.url(source).path
189 if not path:
189 if not path:
190 return ''
190 return ''
191 return os.path.basename(os.path.normpath(path))
191 return os.path.basename(os.path.normpath(path))
192
192
193 def share(ui, source, dest=None, update=True, bookmarks=True):
193 def share(ui, source, dest=None, update=True, bookmarks=True):
194 '''create a shared repository'''
194 '''create a shared repository'''
195
195
196 if not islocal(source):
196 if not islocal(source):
197 raise error.Abort(_('can only share local repositories'))
197 raise error.Abort(_('can only share local repositories'))
198
198
199 if not dest:
199 if not dest:
200 dest = defaultdest(source)
200 dest = defaultdest(source)
201 else:
201 else:
202 dest = ui.expandpath(dest)
202 dest = ui.expandpath(dest)
203
203
204 if isinstance(source, str):
204 if isinstance(source, str):
205 origsource = ui.expandpath(source)
205 origsource = ui.expandpath(source)
206 source, branches = parseurl(origsource)
206 source, branches = parseurl(origsource)
207 srcrepo = repository(ui, source)
207 srcrepo = repository(ui, source)
208 rev, checkout = addbranchrevs(srcrepo, srcrepo, branches, None)
208 rev, checkout = addbranchrevs(srcrepo, srcrepo, branches, None)
209 else:
209 else:
210 srcrepo = source.local()
210 srcrepo = source.local()
211 origsource = source = srcrepo.url()
211 origsource = source = srcrepo.url()
212 checkout = None
212 checkout = None
213
213
214 sharedpath = srcrepo.sharedpath # if our source is already sharing
214 sharedpath = srcrepo.sharedpath # if our source is already sharing
215
215
216 destwvfs = scmutil.vfs(dest, realpath=True)
216 destwvfs = scmutil.vfs(dest, realpath=True)
217 destvfs = scmutil.vfs(os.path.join(destwvfs.base, '.hg'), realpath=True)
217 destvfs = scmutil.vfs(os.path.join(destwvfs.base, '.hg'), realpath=True)
218
218
219 if destvfs.lexists():
219 if destvfs.lexists():
220 raise error.Abort(_('destination already exists'))
220 raise error.Abort(_('destination already exists'))
221
221
222 if not destwvfs.isdir():
222 if not destwvfs.isdir():
223 destwvfs.mkdir()
223 destwvfs.mkdir()
224 destvfs.makedir()
224 destvfs.makedir()
225
225
226 requirements = ''
226 requirements = ''
227 try:
227 try:
228 requirements = srcrepo.vfs.read('requires')
228 requirements = srcrepo.vfs.read('requires')
229 except IOError as inst:
229 except IOError as inst:
230 if inst.errno != errno.ENOENT:
230 if inst.errno != errno.ENOENT:
231 raise
231 raise
232
232
233 requirements += 'shared\n'
233 requirements += 'shared\n'
234 destvfs.write('requires', requirements)
234 destvfs.write('requires', requirements)
235 destvfs.write('sharedpath', sharedpath)
235 destvfs.write('sharedpath', sharedpath)
236
236
237 r = repository(ui, destwvfs.base)
237 r = repository(ui, destwvfs.base)
238
238
239 default = srcrepo.ui.config('paths', 'default')
239 default = srcrepo.ui.config('paths', 'default')
240 if default:
240 if default:
241 fp = r.vfs("hgrc", "w", text=True)
241 fp = r.vfs("hgrc", "w", text=True)
242 fp.write("[paths]\n")
242 fp.write("[paths]\n")
243 fp.write("default = %s\n" % default)
243 fp.write("default = %s\n" % default)
244 fp.close()
244 fp.close()
245
245
246 if update:
246 if update:
247 r.ui.status(_("updating working directory\n"))
247 r.ui.status(_("updating working directory\n"))
248 if update is not True:
248 if update is not True:
249 checkout = update
249 checkout = update
250 for test in (checkout, 'default', 'tip'):
250 for test in (checkout, 'default', 'tip'):
251 if test is None:
251 if test is None:
252 continue
252 continue
253 try:
253 try:
254 uprev = r.lookup(test)
254 uprev = r.lookup(test)
255 break
255 break
256 except error.RepoLookupError:
256 except error.RepoLookupError:
257 continue
257 continue
258 _update(r, uprev)
258 _update(r, uprev)
259
259
260 if bookmarks:
260 if bookmarks:
261 fp = r.vfs('shared', 'w')
261 fp = r.vfs('shared', 'w')
262 fp.write('bookmarks\n')
262 fp.write('bookmarks\n')
263 fp.close()
263 fp.close()
264
264
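share() above is also usable as an API; a minimal sketch, assuming a plain ui object and placeholder paths:

    # Hypothetical paths; 'u' is assumed to be a mercurial.ui.ui instance.
    from mercurial import hg, ui as uimod

    u = uimod.ui()
    # Creates /tmp/worktree/.hg whose 'sharedpath' points at /tmp/mainrepo's
    # store, checks out a working copy, and opts in to shared bookmarks.
    hg.share(u, '/tmp/mainrepo', dest='/tmp/worktree',
             update=True, bookmarks=True)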
265 def copystore(ui, srcrepo, destpath):
265 def copystore(ui, srcrepo, destpath):
266 '''copy files from store of srcrepo in destpath
266 '''copy files from store of srcrepo in destpath
267
267
268 returns destlock
268 returns destlock
269 '''
269 '''
270 destlock = None
270 destlock = None
271 try:
271 try:
272 hardlink = None
272 hardlink = None
273 num = 0
273 num = 0
274 closetopic = [None]
274 closetopic = [None]
275 def prog(topic, pos):
275 def prog(topic, pos):
276 if pos is None:
276 if pos is None:
277 closetopic[0] = topic
277 closetopic[0] = topic
278 else:
278 else:
279 ui.progress(topic, pos + num)
279 ui.progress(topic, pos + num)
280 srcpublishing = srcrepo.publishing()
280 srcpublishing = srcrepo.publishing()
281 srcvfs = scmutil.vfs(srcrepo.sharedpath)
281 srcvfs = scmutil.vfs(srcrepo.sharedpath)
282 dstvfs = scmutil.vfs(destpath)
282 dstvfs = scmutil.vfs(destpath)
283 for f in srcrepo.store.copylist():
283 for f in srcrepo.store.copylist():
284 if srcpublishing and f.endswith('phaseroots'):
284 if srcpublishing and f.endswith('phaseroots'):
285 continue
285 continue
286 dstbase = os.path.dirname(f)
286 dstbase = os.path.dirname(f)
287 if dstbase and not dstvfs.exists(dstbase):
287 if dstbase and not dstvfs.exists(dstbase):
288 dstvfs.mkdir(dstbase)
288 dstvfs.mkdir(dstbase)
289 if srcvfs.exists(f):
289 if srcvfs.exists(f):
290 if f.endswith('data'):
290 if f.endswith('data'):
291 # 'dstbase' may be empty (e.g. revlog format 0)
291 # 'dstbase' may be empty (e.g. revlog format 0)
292 lockfile = os.path.join(dstbase, "lock")
292 lockfile = os.path.join(dstbase, "lock")
293 # lock to avoid premature writing to the target
293 # lock to avoid premature writing to the target
294 destlock = lock.lock(dstvfs, lockfile)
294 destlock = lock.lock(dstvfs, lockfile)
295 hardlink, n = util.copyfiles(srcvfs.join(f), dstvfs.join(f),
295 hardlink, n = util.copyfiles(srcvfs.join(f), dstvfs.join(f),
296 hardlink, progress=prog)
296 hardlink, progress=prog)
297 num += n
297 num += n
298 if hardlink:
298 if hardlink:
299 ui.debug("linked %d files\n" % num)
299 ui.debug("linked %d files\n" % num)
300 if closetopic[0]:
300 if closetopic[0]:
301 ui.progress(closetopic[0], None)
301 ui.progress(closetopic[0], None)
302 else:
302 else:
303 ui.debug("copied %d files\n" % num)
303 ui.debug("copied %d files\n" % num)
304 if closetopic[0]:
304 if closetopic[0]:
305 ui.progress(closetopic[0], None)
305 ui.progress(closetopic[0], None)
306 return destlock
306 return destlock
307 except: # re-raises
307 except: # re-raises
308 release(destlock)
308 release(destlock)
309 raise
309 raise
310
310
311 def clonewithshare(ui, peeropts, sharepath, source, srcpeer, dest, pull=False,
311 def clonewithshare(ui, peeropts, sharepath, source, srcpeer, dest, pull=False,
312 rev=None, update=True, stream=False):
312 rev=None, update=True, stream=False):
313 """Perform a clone using a shared repo.
313 """Perform a clone using a shared repo.
314
314
315 The store for the repository will be located at <sharepath>/.hg. The
315 The store for the repository will be located at <sharepath>/.hg. The
316 specified revisions will be cloned or pulled from "source". A shared repo
316 specified revisions will be cloned or pulled from "source". A shared repo
317 will be created at "dest" and a working copy will be created if "update" is
317 will be created at "dest" and a working copy will be created if "update" is
318 True.
318 True.
319 """
319 """
320 revs = None
320 revs = None
321 if rev:
321 if rev:
322 if not srcpeer.capable('lookup'):
322 if not srcpeer.capable('lookup'):
323 raise error.Abort(_("src repository does not support "
323 raise error.Abort(_("src repository does not support "
324 "revision lookup and so doesn't "
324 "revision lookup and so doesn't "
325 "support clone by revision"))
325 "support clone by revision"))
326 revs = [srcpeer.lookup(r) for r in rev]
326 revs = [srcpeer.lookup(r) for r in rev]
327
327
328 basename = os.path.basename(sharepath)
328 basename = os.path.basename(sharepath)
329
329
330 if os.path.exists(sharepath):
330 if os.path.exists(sharepath):
331 ui.status(_('(sharing from existing pooled repository %s)\n') %
331 ui.status(_('(sharing from existing pooled repository %s)\n') %
332 basename)
332 basename)
333 else:
333 else:
334 ui.status(_('(sharing from new pooled repository %s)\n') % basename)
334 ui.status(_('(sharing from new pooled repository %s)\n') % basename)
335 # Always use pull mode because hardlinks in share mode don't work well.
335 # Always use pull mode because hardlinks in share mode don't work well.
336 # Never update because working copies aren't necessary in share mode.
336 # Never update because working copies aren't necessary in share mode.
337 clone(ui, peeropts, source, dest=sharepath, pull=True,
337 clone(ui, peeropts, source, dest=sharepath, pull=True,
338 rev=rev, update=False, stream=stream)
338 rev=rev, update=False, stream=stream)
339
339
340 sharerepo = repository(ui, path=sharepath)
340 sharerepo = repository(ui, path=sharepath)
341 share(ui, sharerepo, dest=dest, update=update, bookmarks=False)
341 share(ui, sharerepo, dest=dest, update=update, bookmarks=False)
342
342
343 # We need to perform a pull against the dest repo to fetch bookmarks
343 # We need to perform a pull against the dest repo to fetch bookmarks
344 # and other non-store data that isn't shared by default. In the case of
344 # and other non-store data that isn't shared by default. In the case of
345 # non-existing shared repo, this means we pull from the remote twice. This
345 # non-existing shared repo, this means we pull from the remote twice. This
346 # is a bit weird. But at the time it was implemented, there wasn't an easy
346 # is a bit weird. But at the time it was implemented, there wasn't an easy
347 # way to pull just non-changegroup data.
347 # way to pull just non-changegroup data.
348 destrepo = repository(ui, path=dest)
348 destrepo = repository(ui, path=dest)
349 exchange.pull(destrepo, srcpeer, heads=revs)
349 exchange.pull(destrepo, srcpeer, heads=revs)
350
350
351 return srcpeer, peer(ui, peeropts, dest)
351 return srcpeer, peer(ui, peeropts, dest)
352
352
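clonewithshare() is normally reached through clone() below when a share pool is configured; a rough sketch of the shareopts dict that selects this path, with placeholder paths and URL:

    # 'identity' names the pooled store after the remote's root changeset;
    # 'remote' would name it after a hash of the source URL (see clone()).
    from mercurial import hg, ui as uimod

    u = uimod.ui()
    shareopts = {'pool': '/var/cache/hg-pool', 'mode': 'identity'}
    srcpeer, destpeer = hg.clone(u, {}, 'https://example.org/repo',
                                 dest='repo', shareopts=shareopts)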
353 def clone(ui, peeropts, source, dest=None, pull=False, rev=None,
353 def clone(ui, peeropts, source, dest=None, pull=False, rev=None,
354 update=True, stream=False, branch=None, shareopts=None):
354 update=True, stream=False, branch=None, shareopts=None):
355 """Make a copy of an existing repository.
355 """Make a copy of an existing repository.
356
356
357 Create a copy of an existing repository in a new directory. The
357 Create a copy of an existing repository in a new directory. The
358 source and destination are URLs, as passed to the repository
358 source and destination are URLs, as passed to the repository
359 function. Returns a pair of repository peers, the source and
359 function. Returns a pair of repository peers, the source and
360 newly created destination.
360 newly created destination.
361
361
362 The location of the source is added to the new repository's
362 The location of the source is added to the new repository's
363 .hg/hgrc file, as the default to be used for future pulls and
363 .hg/hgrc file, as the default to be used for future pulls and
364 pushes.
364 pushes.
365
365
366 If an exception is raised, the partly cloned/updated destination
366 If an exception is raised, the partly cloned/updated destination
367 repository will be deleted.
367 repository will be deleted.
368
368
369 Arguments:
369 Arguments:
370
370
371 source: repository object or URL
371 source: repository object or URL
372
372
373 dest: URL of destination repository to create (defaults to base
373 dest: URL of destination repository to create (defaults to base
374 name of source repository)
374 name of source repository)
375
375
376 pull: always pull from source repository, even in local case or if the
376 pull: always pull from source repository, even in local case or if the
377 server prefers streaming
377 server prefers streaming
378
378
379 stream: stream raw data uncompressed from repository (fast over
379 stream: stream raw data uncompressed from repository (fast over
380 LAN, slow over WAN)
380 LAN, slow over WAN)
381
381
382 rev: revision to clone up to (implies pull=True)
382 rev: revision to clone up to (implies pull=True)
383
383
384 update: update working directory after clone completes, if
384 update: update working directory after clone completes, if
385 destination is local repository (True means update to default rev,
385 destination is local repository (True means update to default rev,
386 anything else is treated as a revision)
386 anything else is treated as a revision)
387
387
388 branch: branches to clone
388 branch: branches to clone
389
389
390 shareopts: dict of options to control auto sharing behavior. The "pool" key
390 shareopts: dict of options to control auto sharing behavior. The "pool" key
391 activates auto sharing mode and defines the directory for stores. The
391 activates auto sharing mode and defines the directory for stores. The
392 "mode" key determines how to construct the directory name of the shared
392 "mode" key determines how to construct the directory name of the shared
393 repository. "identity" means the name is derived from the node of the first
393 repository. "identity" means the name is derived from the node of the first
394 changeset in the repository. "remote" means the name is derived from the
394 changeset in the repository. "remote" means the name is derived from the
395 remote's path/URL. Defaults to "identity."
395 remote's path/URL. Defaults to "identity."
396 """
396 """
397
397
398 if isinstance(source, str):
398 if isinstance(source, str):
399 origsource = ui.expandpath(source)
399 origsource = ui.expandpath(source)
400 source, branch = parseurl(origsource, branch)
400 source, branch = parseurl(origsource, branch)
401 srcpeer = peer(ui, peeropts, source)
401 srcpeer = peer(ui, peeropts, source)
402 else:
402 else:
403 srcpeer = source.peer() # in case we were called with a localrepo
403 srcpeer = source.peer() # in case we were called with a localrepo
404 branch = (None, branch or [])
404 branch = (None, branch or [])
405 origsource = source = srcpeer.url()
405 origsource = source = srcpeer.url()
406 rev, checkout = addbranchrevs(srcpeer, srcpeer, branch, rev)
406 rev, checkout = addbranchrevs(srcpeer, srcpeer, branch, rev)
407
407
408 if dest is None:
408 if dest is None:
409 dest = defaultdest(source)
409 dest = defaultdest(source)
410 if dest:
410 if dest:
411 ui.status(_("destination directory: %s\n") % dest)
411 ui.status(_("destination directory: %s\n") % dest)
412 else:
412 else:
413 dest = ui.expandpath(dest)
413 dest = ui.expandpath(dest)
414
414
415 dest = util.urllocalpath(dest)
415 dest = util.urllocalpath(dest)
416 source = util.urllocalpath(source)
416 source = util.urllocalpath(source)
417
417
418 if not dest:
418 if not dest:
419 raise error.Abort(_("empty destination path is not valid"))
419 raise error.Abort(_("empty destination path is not valid"))
420
420
421 destvfs = scmutil.vfs(dest, expandpath=True)
421 destvfs = scmutil.vfs(dest, expandpath=True)
422 if destvfs.lexists():
422 if destvfs.lexists():
423 if not destvfs.isdir():
423 if not destvfs.isdir():
424 raise error.Abort(_("destination '%s' already exists") % dest)
424 raise error.Abort(_("destination '%s' already exists") % dest)
425 elif destvfs.listdir():
425 elif destvfs.listdir():
426 raise error.Abort(_("destination '%s' is not empty") % dest)
426 raise error.Abort(_("destination '%s' is not empty") % dest)
427
427
428 shareopts = shareopts or {}
428 shareopts = shareopts or {}
429 sharepool = shareopts.get('pool')
429 sharepool = shareopts.get('pool')
430 sharenamemode = shareopts.get('mode')
430 sharenamemode = shareopts.get('mode')
431 if sharepool and islocal(dest):
431 if sharepool and islocal(dest):
432 sharepath = None
432 sharepath = None
433 if sharenamemode == 'identity':
433 if sharenamemode == 'identity':
434 # Resolve the name from the initial changeset in the remote
434 # Resolve the name from the initial changeset in the remote
435 # repository. This returns nullid when the remote is empty. It
435 # repository. This returns nullid when the remote is empty. It
436 # raises RepoLookupError if revision 0 is filtered or otherwise
436 # raises RepoLookupError if revision 0 is filtered or otherwise
437 # not available. If we fail to resolve, sharing is not enabled.
437 # not available. If we fail to resolve, sharing is not enabled.
438 try:
438 try:
439 rootnode = srcpeer.lookup('0')
439 rootnode = srcpeer.lookup('0')
440 if rootnode != node.nullid:
440 if rootnode != node.nullid:
441 sharepath = os.path.join(sharepool, node.hex(rootnode))
441 sharepath = os.path.join(sharepool, node.hex(rootnode))
442 else:
442 else:
443 ui.status(_('(not using pooled storage: '
443 ui.status(_('(not using pooled storage: '
444 'remote appears to be empty)\n'))
444 'remote appears to be empty)\n'))
445 except error.RepoLookupError:
445 except error.RepoLookupError:
446 ui.status(_('(not using pooled storage: '
446 ui.status(_('(not using pooled storage: '
447 'unable to resolve identity of remote)\n'))
447 'unable to resolve identity of remote)\n'))
448 elif sharenamemode == 'remote':
448 elif sharenamemode == 'remote':
449 sharepath = os.path.join(sharepool, util.sha1(source).hexdigest())
449 sharepath = os.path.join(sharepool, util.sha1(source).hexdigest())
450 else:
450 else:
451 raise error.Abort('unknown share naming mode: %s' % sharenamemode)
451 raise error.Abort('unknown share naming mode: %s' % sharenamemode)
452
452
453 if sharepath:
453 if sharepath:
454 return clonewithshare(ui, peeropts, sharepath, source, srcpeer,
454 return clonewithshare(ui, peeropts, sharepath, source, srcpeer,
455 dest, pull=pull, rev=rev, update=update,
455 dest, pull=pull, rev=rev, update=update,
456 stream=stream)
456 stream=stream)
457
457
458 srclock = destlock = cleandir = None
458 srclock = destlock = cleandir = None
459 srcrepo = srcpeer.local()
459 srcrepo = srcpeer.local()
460 try:
460 try:
461 abspath = origsource
461 abspath = origsource
462 if islocal(origsource):
462 if islocal(origsource):
463 abspath = os.path.abspath(util.urllocalpath(origsource))
463 abspath = os.path.abspath(util.urllocalpath(origsource))
464
464
465 if islocal(dest):
465 if islocal(dest):
466 cleandir = dest
466 cleandir = dest
467
467
468 copy = False
468 copy = False
469 if (srcrepo and srcrepo.cancopy() and islocal(dest)
469 if (srcrepo and srcrepo.cancopy() and islocal(dest)
470 and not phases.hassecret(srcrepo)):
470 and not phases.hassecret(srcrepo)):
471 copy = not pull and not rev
471 copy = not pull and not rev
472
472
473 if copy:
473 if copy:
474 try:
474 try:
475 # we use a lock here because if we race with commit, we
475 # we use a lock here because if we race with commit, we
476 # can end up with extra data in the cloned revlogs that's
476 # can end up with extra data in the cloned revlogs that's
477 # not pointed to by changesets, thus causing verify to
477 # not pointed to by changesets, thus causing verify to
478 # fail
478 # fail
479 srclock = srcrepo.lock(wait=False)
479 srclock = srcrepo.lock(wait=False)
480 except error.LockError:
480 except error.LockError:
481 copy = False
481 copy = False
482
482
483 if copy:
483 if copy:
484 srcrepo.hook('preoutgoing', throw=True, source='clone')
484 srcrepo.hook('preoutgoing', throw=True, source='clone')
485 hgdir = os.path.realpath(os.path.join(dest, ".hg"))
485 hgdir = os.path.realpath(os.path.join(dest, ".hg"))
486 if not os.path.exists(dest):
486 if not os.path.exists(dest):
487 os.mkdir(dest)
487 os.mkdir(dest)
488 else:
488 else:
489 # only clean up directories we create ourselves
489 # only clean up directories we create ourselves
490 cleandir = hgdir
490 cleandir = hgdir
491 try:
491 try:
492 destpath = hgdir
492 destpath = hgdir
493 util.makedir(destpath, notindexed=True)
493 util.makedir(destpath, notindexed=True)
494 except OSError as inst:
494 except OSError as inst:
495 if inst.errno == errno.EEXIST:
495 if inst.errno == errno.EEXIST:
496 cleandir = None
496 cleandir = None
497 raise error.Abort(_("destination '%s' already exists")
497 raise error.Abort(_("destination '%s' already exists")
498 % dest)
498 % dest)
499 raise
499 raise
500
500
501 destlock = copystore(ui, srcrepo, destpath)
501 destlock = copystore(ui, srcrepo, destpath)
502 # copy bookmarks over
502 # copy bookmarks over
503 srcbookmarks = srcrepo.join('bookmarks')
503 srcbookmarks = srcrepo.join('bookmarks')
504 dstbookmarks = os.path.join(destpath, 'bookmarks')
504 dstbookmarks = os.path.join(destpath, 'bookmarks')
505 if os.path.exists(srcbookmarks):
505 if os.path.exists(srcbookmarks):
506 util.copyfile(srcbookmarks, dstbookmarks)
506 util.copyfile(srcbookmarks, dstbookmarks)
507
507
508 # Recomputing branch cache might be slow on big repos,
508 # Recomputing branch cache might be slow on big repos,
509 # so just copy it
509 # so just copy it
510 def copybranchcache(fname):
510 def copybranchcache(fname):
511 srcbranchcache = srcrepo.join('cache/%s' % fname)
511 srcbranchcache = srcrepo.join('cache/%s' % fname)
512 dstbranchcache = os.path.join(dstcachedir, fname)
512 dstbranchcache = os.path.join(dstcachedir, fname)
513 if os.path.exists(srcbranchcache):
513 if os.path.exists(srcbranchcache):
514 if not os.path.exists(dstcachedir):
514 if not os.path.exists(dstcachedir):
515 os.mkdir(dstcachedir)
515 os.mkdir(dstcachedir)
516 util.copyfile(srcbranchcache, dstbranchcache)
516 util.copyfile(srcbranchcache, dstbranchcache)
517
517
518 dstcachedir = os.path.join(destpath, 'cache')
518 dstcachedir = os.path.join(destpath, 'cache')
519 # In local clones we're copying all nodes, not just served
519 # In local clones we're copying all nodes, not just served
520 # ones. Therefore copy all branch caches over.
520 # ones. Therefore copy all branch caches over.
521 copybranchcache('branch2')
521 copybranchcache('branch2')
522 for cachename in repoview.filtertable:
522 for cachename in repoview.filtertable:
523 copybranchcache('branch2-%s' % cachename)
523 copybranchcache('branch2-%s' % cachename)
524
524
525 # we need to re-init the repo after manually copying the data
525 # we need to re-init the repo after manually copying the data
526 # into it
526 # into it
527 destpeer = peer(srcrepo, peeropts, dest)
527 destpeer = peer(srcrepo, peeropts, dest)
528 srcrepo.hook('outgoing', source='clone',
528 srcrepo.hook('outgoing', source='clone',
529 node=node.hex(node.nullid))
529 node=node.hex(node.nullid))
530 else:
530 else:
531 try:
531 try:
532 destpeer = peer(srcrepo or ui, peeropts, dest, create=True)
532 destpeer = peer(srcrepo or ui, peeropts, dest, create=True)
533 # only pass ui when no srcrepo
533 # only pass ui when no srcrepo
534 except OSError as inst:
534 except OSError as inst:
535 if inst.errno == errno.EEXIST:
535 if inst.errno == errno.EEXIST:
536 cleandir = None
536 cleandir = None
537 raise error.Abort(_("destination '%s' already exists")
537 raise error.Abort(_("destination '%s' already exists")
538 % dest)
538 % dest)
539 raise
539 raise
540
540
541 revs = None
541 revs = None
542 if rev:
542 if rev:
543 if not srcpeer.capable('lookup'):
543 if not srcpeer.capable('lookup'):
544 raise error.Abort(_("src repository does not support "
544 raise error.Abort(_("src repository does not support "
545 "revision lookup and so doesn't "
545 "revision lookup and so doesn't "
546 "support clone by revision"))
546 "support clone by revision"))
547 revs = [srcpeer.lookup(r) for r in rev]
547 revs = [srcpeer.lookup(r) for r in rev]
548 checkout = revs[0]
548 checkout = revs[0]
549 local = destpeer.local()
549 local = destpeer.local()
550 if local:
550 if local:
551 if not stream:
551 if not stream:
552 if pull:
552 if pull:
553 stream = False
553 stream = False
554 else:
554 else:
555 stream = None
555 stream = None
556 # internal config: ui.quietbookmarkmove
556 # internal config: ui.quietbookmarkmove
557 quiet = local.ui.backupconfig('ui', 'quietbookmarkmove')
557 quiet = local.ui.backupconfig('ui', 'quietbookmarkmove')
558 try:
558 try:
559 local.ui.setconfig(
559 local.ui.setconfig(
560 'ui', 'quietbookmarkmove', True, 'clone')
560 'ui', 'quietbookmarkmove', True, 'clone')
561 exchange.pull(local, srcpeer, revs,
561 exchange.pull(local, srcpeer, revs,
562 streamclonerequested=stream)
562 streamclonerequested=stream)
563 finally:
563 finally:
564 local.ui.restoreconfig(quiet)
564 local.ui.restoreconfig(quiet)
565 elif srcrepo:
565 elif srcrepo:
566 exchange.push(srcrepo, destpeer, revs=revs,
566 exchange.push(srcrepo, destpeer, revs=revs,
567 bookmarks=srcrepo._bookmarks.keys())
567 bookmarks=srcrepo._bookmarks.keys())
568 else:
568 else:
569 raise error.Abort(_("clone from remote to remote not supported")
569 raise error.Abort(_("clone from remote to remote not supported")
570 )
570 )
571
571
572 cleandir = None
572 cleandir = None
573
573
574 destrepo = destpeer.local()
574 destrepo = destpeer.local()
575 if destrepo:
575 if destrepo:
576 template = uimod.samplehgrcs['cloned']
576 template = uimod.samplehgrcs['cloned']
577 fp = destrepo.vfs("hgrc", "w", text=True)
577 fp = destrepo.vfs("hgrc", "w", text=True)
578 u = util.url(abspath)
578 u = util.url(abspath)
579 u.passwd = None
579 u.passwd = None
580 defaulturl = str(u)
580 defaulturl = str(u)
581 fp.write(template % defaulturl)
581 fp.write(template % defaulturl)
582 fp.close()
582 fp.close()
583
583
584 destrepo.ui.setconfig('paths', 'default', defaulturl, 'clone')
584 destrepo.ui.setconfig('paths', 'default', defaulturl, 'clone')
585
585
586 if update:
586 if update:
587 if update is not True:
587 if update is not True:
588 checkout = srcpeer.lookup(update)
588 checkout = srcpeer.lookup(update)
589 uprev = None
589 uprev = None
590 status = None
590 status = None
591 if checkout is not None:
591 if checkout is not None:
592 try:
592 try:
593 uprev = destrepo.lookup(checkout)
593 uprev = destrepo.lookup(checkout)
594 except error.RepoLookupError:
594 except error.RepoLookupError:
595 if update is not True:
595 if update is not True:
596 try:
596 try:
597 uprev = destrepo.lookup(update)
597 uprev = destrepo.lookup(update)
598 except error.RepoLookupError:
598 except error.RepoLookupError:
599 pass
599 pass
600 if uprev is None:
600 if uprev is None:
601 try:
601 try:
602 uprev = destrepo._bookmarks['@']
602 uprev = destrepo._bookmarks['@']
603 update = '@'
603 update = '@'
604 bn = destrepo[uprev].branch()
604 bn = destrepo[uprev].branch()
605 if bn == 'default':
605 if bn == 'default':
606 status = _("updating to bookmark @\n")
606 status = _("updating to bookmark @\n")
607 else:
607 else:
608 status = (_("updating to bookmark @ on branch %s\n")
608 status = (_("updating to bookmark @ on branch %s\n")
609 % bn)
609 % bn)
610 except KeyError:
610 except KeyError:
611 try:
611 try:
612 uprev = destrepo.branchtip('default')
612 uprev = destrepo.branchtip('default')
613 except error.RepoLookupError:
613 except error.RepoLookupError:
614 uprev = destrepo.lookup('tip')
614 uprev = destrepo.lookup('tip')
615 if not status:
615 if not status:
616 bn = destrepo[uprev].branch()
616 bn = destrepo[uprev].branch()
617 status = _("updating to branch %s\n") % bn
617 status = _("updating to branch %s\n") % bn
618 destrepo.ui.status(status)
618 destrepo.ui.status(status)
619 _update(destrepo, uprev)
619 _update(destrepo, uprev)
620 if update in destrepo._bookmarks:
620 if update in destrepo._bookmarks:
621 bookmarks.activate(destrepo, update)
621 bookmarks.activate(destrepo, update)
622 finally:
622 finally:
623 release(srclock, destlock)
623 release(srclock, destlock)
624 if cleandir is not None:
624 if cleandir is not None:
625 shutil.rmtree(cleandir, True)
625 shutil.rmtree(cleandir, True)
626 if srcpeer is not None:
626 if srcpeer is not None:
627 srcpeer.close()
627 srcpeer.close()
628 return srcpeer, destpeer
628 return srcpeer, destpeer
629
629
630 def _showstats(repo, stats):
630 def _showstats(repo, stats):
631 repo.ui.status(_("%d files updated, %d files merged, "
631 repo.ui.status(_("%d files updated, %d files merged, "
632 "%d files removed, %d files unresolved\n") % stats)
632 "%d files removed, %d files unresolved\n") % stats)
633
633
634 def updaterepo(repo, node, overwrite):
634 def updaterepo(repo, node, overwrite):
635 """Update the working directory to node.
635 """Update the working directory to node.
636
636
637 When overwrite is set, changes are clobbered, merged else
637 When overwrite is set, changes are clobbered, merged else
638
638
639 returns stats (see pydoc mercurial.merge.applyupdates)"""
639 returns stats (see pydoc mercurial.merge.applyupdates)"""
640 return mergemod.update(repo, node, False, overwrite, None,
640 return mergemod.update(repo, node, False, overwrite,
641 labels=['working copy', 'destination'])
641 labels=['working copy', 'destination'])
642
642
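The two call sites in this hunk are the point of this changeset: mergemod.update() no longer takes a partial filter function as its fifth positional argument, so updaterepo() here and merge() below simply drop the None/False they used to pass. Per the commit message, restricting an update to a subset of files is now expressed through a matcher instead; a hedged before/after sketch, where the keyword name is assumed from the commit summary rather than from code shown in this hunk:

    # Before this change (partial filter as the 5th positional argument):
    #   mergemod.update(repo, node, False, overwrite, None, labels=...)
    # After (no partial argument; a matcher would be passed by keyword):
    #   mergemod.update(repo, node, False, overwrite, labels=...,
    #                   matcher=None)  # assumed keyword, per the commit message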
643 def update(repo, node):
643 def update(repo, node):
644 """update the working directory to node, merging linear changes"""
644 """update the working directory to node, merging linear changes"""
645 stats = updaterepo(repo, node, False)
645 stats = updaterepo(repo, node, False)
646 _showstats(repo, stats)
646 _showstats(repo, stats)
647 if stats[3]:
647 if stats[3]:
648 repo.ui.status(_("use 'hg resolve' to retry unresolved file merges\n"))
648 repo.ui.status(_("use 'hg resolve' to retry unresolved file merges\n"))
649 return stats[3] > 0
649 return stats[3] > 0
650
650
651 # naming conflict in clone()
651 # naming conflict in clone()
652 _update = update
652 _update = update
653
653
654 def clean(repo, node, show_stats=True):
654 def clean(repo, node, show_stats=True):
655 """forcibly switch the working directory to node, clobbering changes"""
655 """forcibly switch the working directory to node, clobbering changes"""
656 stats = updaterepo(repo, node, True)
656 stats = updaterepo(repo, node, True)
657 util.unlinkpath(repo.join('graftstate'), ignoremissing=True)
657 util.unlinkpath(repo.join('graftstate'), ignoremissing=True)
658 if show_stats:
658 if show_stats:
659 _showstats(repo, stats)
659 _showstats(repo, stats)
660 return stats[3] > 0
660 return stats[3] > 0
661
661
662 def merge(repo, node, force=None, remind=True):
662 def merge(repo, node, force=None, remind=True):
663 """Branch merge with node, resolving changes. Return true if any
663 """Branch merge with node, resolving changes. Return true if any
664 unresolved conflicts."""
664 unresolved conflicts."""
665 stats = mergemod.update(repo, node, True, force, False)
665 stats = mergemod.update(repo, node, True, force)
666 _showstats(repo, stats)
666 _showstats(repo, stats)
667 if stats[3]:
667 if stats[3]:
668 repo.ui.status(_("use 'hg resolve' to retry unresolved file merges "
668 repo.ui.status(_("use 'hg resolve' to retry unresolved file merges "
669 "or 'hg update -C .' to abandon\n"))
669 "or 'hg update -C .' to abandon\n"))
670 elif remind:
670 elif remind:
671 repo.ui.status(_("(branch merge, don't forget to commit)\n"))
671 repo.ui.status(_("(branch merge, don't forget to commit)\n"))
672 return stats[3] > 0
672 return stats[3] > 0
673
673
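Both update() and merge() above reduce the stats tuple from mergemod.update() to a boolean; a short sketch of consuming that convention, assuming repo and node already exist:

    # stats is (updated, merged, removed, unresolved); see _showstats() above.
    if hg.merge(repo, node, force=False):
        # True means stats[3] > 0: at least one file was left unresolved and
        # 'hg resolve' is needed before the merge can be committed.
        repo.ui.warn('merge left unresolved files\n')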
674 def _incoming(displaychlist, subreporecurse, ui, repo, source,
674 def _incoming(displaychlist, subreporecurse, ui, repo, source,
675 opts, buffered=False):
675 opts, buffered=False):
676 """
676 """
677 Helper for incoming / gincoming.
677 Helper for incoming / gincoming.
678 displaychlist gets called with
678 displaychlist gets called with
679 (remoterepo, incomingchangesetlist, displayer) parameters,
679 (remoterepo, incomingchangesetlist, displayer) parameters,
680 and is supposed to contain only code that can't be unified.
680 and is supposed to contain only code that can't be unified.
681 """
681 """
682 source, branches = parseurl(ui.expandpath(source), opts.get('branch'))
682 source, branches = parseurl(ui.expandpath(source), opts.get('branch'))
683 other = peer(repo, opts, source)
683 other = peer(repo, opts, source)
684 ui.status(_('comparing with %s\n') % util.hidepassword(source))
684 ui.status(_('comparing with %s\n') % util.hidepassword(source))
685 revs, checkout = addbranchrevs(repo, other, branches, opts.get('rev'))
685 revs, checkout = addbranchrevs(repo, other, branches, opts.get('rev'))
686
686
687 if revs:
687 if revs:
688 revs = [other.lookup(rev) for rev in revs]
688 revs = [other.lookup(rev) for rev in revs]
689 other, chlist, cleanupfn = bundlerepo.getremotechanges(ui, repo, other,
689 other, chlist, cleanupfn = bundlerepo.getremotechanges(ui, repo, other,
690 revs, opts["bundle"], opts["force"])
690 revs, opts["bundle"], opts["force"])
691 try:
691 try:
692 if not chlist:
692 if not chlist:
693 ui.status(_("no changes found\n"))
693 ui.status(_("no changes found\n"))
694 return subreporecurse()
694 return subreporecurse()
695
695
696 displayer = cmdutil.show_changeset(ui, other, opts, buffered)
696 displayer = cmdutil.show_changeset(ui, other, opts, buffered)
697 displaychlist(other, chlist, displayer)
697 displaychlist(other, chlist, displayer)
698 displayer.close()
698 displayer.close()
699 finally:
699 finally:
700 cleanupfn()
700 cleanupfn()
701 subreporecurse()
701 subreporecurse()
702 return 0 # exit code is zero since we found incoming changes
702 return 0 # exit code is zero since we found incoming changes
703
703
704 def incoming(ui, repo, source, opts):
704 def incoming(ui, repo, source, opts):
705 def subreporecurse():
705 def subreporecurse():
706 ret = 1
706 ret = 1
707 if opts.get('subrepos'):
707 if opts.get('subrepos'):
708 ctx = repo[None]
708 ctx = repo[None]
709 for subpath in sorted(ctx.substate):
709 for subpath in sorted(ctx.substate):
710 sub = ctx.sub(subpath)
710 sub = ctx.sub(subpath)
711 ret = min(ret, sub.incoming(ui, source, opts))
711 ret = min(ret, sub.incoming(ui, source, opts))
712 return ret
712 return ret
713
713
714 def display(other, chlist, displayer):
714 def display(other, chlist, displayer):
715 limit = cmdutil.loglimit(opts)
715 limit = cmdutil.loglimit(opts)
716 if opts.get('newest_first'):
716 if opts.get('newest_first'):
717 chlist.reverse()
717 chlist.reverse()
718 count = 0
718 count = 0
719 for n in chlist:
719 for n in chlist:
720 if limit is not None and count >= limit:
720 if limit is not None and count >= limit:
721 break
721 break
722 parents = [p for p in other.changelog.parents(n) if p != nullid]
722 parents = [p for p in other.changelog.parents(n) if p != nullid]
723 if opts.get('no_merges') and len(parents) == 2:
723 if opts.get('no_merges') and len(parents) == 2:
724 continue
724 continue
725 count += 1
725 count += 1
726 displayer.show(other[n])
726 displayer.show(other[n])
727 return _incoming(display, subreporecurse, ui, repo, source, opts)
727 return _incoming(display, subreporecurse, ui, repo, source, opts)
728
728
729 def _outgoing(ui, repo, dest, opts):
729 def _outgoing(ui, repo, dest, opts):
730 dest = ui.expandpath(dest or 'default-push', dest or 'default')
730 dest = ui.expandpath(dest or 'default-push', dest or 'default')
731 dest, branches = parseurl(dest, opts.get('branch'))
731 dest, branches = parseurl(dest, opts.get('branch'))
732 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
732 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
733 revs, checkout = addbranchrevs(repo, repo, branches, opts.get('rev'))
733 revs, checkout = addbranchrevs(repo, repo, branches, opts.get('rev'))
734 if revs:
734 if revs:
735 revs = [repo.lookup(rev) for rev in scmutil.revrange(repo, revs)]
735 revs = [repo.lookup(rev) for rev in scmutil.revrange(repo, revs)]
736
736
737 other = peer(repo, opts, dest)
737 other = peer(repo, opts, dest)
738 outgoing = discovery.findcommonoutgoing(repo.unfiltered(), other, revs,
738 outgoing = discovery.findcommonoutgoing(repo.unfiltered(), other, revs,
739 force=opts.get('force'))
739 force=opts.get('force'))
740 o = outgoing.missing
740 o = outgoing.missing
741 if not o:
741 if not o:
742 scmutil.nochangesfound(repo.ui, repo, outgoing.excluded)
742 scmutil.nochangesfound(repo.ui, repo, outgoing.excluded)
743 return o, other
743 return o, other
744
744
745 def outgoing(ui, repo, dest, opts):
745 def outgoing(ui, repo, dest, opts):
746 def recurse():
746 def recurse():
747 ret = 1
747 ret = 1
748 if opts.get('subrepos'):
748 if opts.get('subrepos'):
749 ctx = repo[None]
749 ctx = repo[None]
750 for subpath in sorted(ctx.substate):
750 for subpath in sorted(ctx.substate):
751 sub = ctx.sub(subpath)
751 sub = ctx.sub(subpath)
752 ret = min(ret, sub.outgoing(ui, dest, opts))
752 ret = min(ret, sub.outgoing(ui, dest, opts))
753 return ret
753 return ret
754
754
755 limit = cmdutil.loglimit(opts)
755 limit = cmdutil.loglimit(opts)
756 o, other = _outgoing(ui, repo, dest, opts)
756 o, other = _outgoing(ui, repo, dest, opts)
757 if not o:
757 if not o:
758 cmdutil.outgoinghooks(ui, repo, other, opts, o)
758 cmdutil.outgoinghooks(ui, repo, other, opts, o)
759 return recurse()
759 return recurse()
760
760
761 if opts.get('newest_first'):
761 if opts.get('newest_first'):
762 o.reverse()
762 o.reverse()
763 displayer = cmdutil.show_changeset(ui, repo, opts)
763 displayer = cmdutil.show_changeset(ui, repo, opts)
764 count = 0
764 count = 0
765 for n in o:
765 for n in o:
766 if limit is not None and count >= limit:
766 if limit is not None and count >= limit:
767 break
767 break
768 parents = [p for p in repo.changelog.parents(n) if p != nullid]
768 parents = [p for p in repo.changelog.parents(n) if p != nullid]
769 if opts.get('no_merges') and len(parents) == 2:
769 if opts.get('no_merges') and len(parents) == 2:
770 continue
770 continue
771 count += 1
771 count += 1
772 displayer.show(repo[n])
772 displayer.show(repo[n])
773 displayer.close()
773 displayer.close()
774 cmdutil.outgoinghooks(ui, repo, other, opts, o)
774 cmdutil.outgoinghooks(ui, repo, other, opts, o)
775 recurse()
775 recurse()
776 return 0 # exit code is zero since we found outgoing changes
776 return 0 # exit code is zero since we found outgoing changes
777
777
778 def verify(repo):
778 def verify(repo):
779 """verify the consistency of a repository"""
779 """verify the consistency of a repository"""
780 ret = verifymod.verify(repo)
780 ret = verifymod.verify(repo)
781
781
782 # Broken subrepo references in hidden csets don't seem worth worrying about,
782 # Broken subrepo references in hidden csets don't seem worth worrying about,
783 # since they can't be pushed/pulled, and --hidden can be used if they are a
783 # since they can't be pushed/pulled, and --hidden can be used if they are a
784 # concern.
784 # concern.
785
785
786 # pathto() is needed for -R case
786 # pathto() is needed for -R case
787 revs = repo.revs("filelog(%s)",
787 revs = repo.revs("filelog(%s)",
788 util.pathto(repo.root, repo.getcwd(), '.hgsubstate'))
788 util.pathto(repo.root, repo.getcwd(), '.hgsubstate'))
789
789
790 if revs:
790 if revs:
791 repo.ui.status(_('checking subrepo links\n'))
791 repo.ui.status(_('checking subrepo links\n'))
792 for rev in revs:
792 for rev in revs:
793 ctx = repo[rev]
793 ctx = repo[rev]
794 try:
794 try:
795 for subpath in ctx.substate:
795 for subpath in ctx.substate:
796 ret = ctx.sub(subpath).verify() or ret
796 ret = ctx.sub(subpath).verify() or ret
797 except Exception:
797 except Exception:
798 repo.ui.warn(_('.hgsubstate is corrupt in revision %s\n') %
798 repo.ui.warn(_('.hgsubstate is corrupt in revision %s\n') %
799 node.short(ctx.node()))
799 node.short(ctx.node()))
800
800
801 return ret
801 return ret
802
802
803 def remoteui(src, opts):
803 def remoteui(src, opts):
804 'build a remote ui from ui or repo and opts'
804 'build a remote ui from ui or repo and opts'
805 if util.safehasattr(src, 'baseui'): # looks like a repository
805 if util.safehasattr(src, 'baseui'): # looks like a repository
806 dst = src.baseui.copy() # drop repo-specific config
806 dst = src.baseui.copy() # drop repo-specific config
807 src = src.ui # copy target options from repo
807 src = src.ui # copy target options from repo
808 else: # assume it's a global ui object
808 else: # assume it's a global ui object
809 dst = src.copy() # keep all global options
809 dst = src.copy() # keep all global options
810
810
811 # copy ssh-specific options
811 # copy ssh-specific options
812 for o in 'ssh', 'remotecmd':
812 for o in 'ssh', 'remotecmd':
813 v = opts.get(o) or src.config('ui', o)
813 v = opts.get(o) or src.config('ui', o)
814 if v:
814 if v:
815 dst.setconfig("ui", o, v, 'copied')
815 dst.setconfig("ui", o, v, 'copied')
816
816
817 # copy bundle-specific options
817 # copy bundle-specific options
818 r = src.config('bundle', 'mainreporoot')
818 r = src.config('bundle', 'mainreporoot')
819 if r:
819 if r:
820 dst.setconfig('bundle', 'mainreporoot', r, 'copied')
820 dst.setconfig('bundle', 'mainreporoot', r, 'copied')
821
821
822 # copy selected local settings to the remote ui
822 # copy selected local settings to the remote ui
823 for sect in ('auth', 'hostfingerprints', 'http_proxy'):
823 for sect in ('auth', 'hostfingerprints', 'http_proxy'):
824 for key, val in src.configitems(sect):
824 for key, val in src.configitems(sect):
825 dst.setconfig(sect, key, val, 'copied')
825 dst.setconfig(sect, key, val, 'copied')
826 v = src.config('web', 'cacerts')
826 v = src.config('web', 'cacerts')
827 if v == '!':
827 if v == '!':
828 dst.setconfig('web', 'cacerts', v, 'copied')
828 dst.setconfig('web', 'cacerts', v, 'copied')
829 elif v:
829 elif v:
830 dst.setconfig('web', 'cacerts', util.expandpath(v), 'copied')
830 dst.setconfig('web', 'cacerts', util.expandpath(v), 'copied')
831
831
832 return dst
832 return dst
833
833
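remoteui() is what peer() earlier in this file uses to build the ui handed to wire peers; a small sketch of its effect, with the ssh command string invented for illustration:

    # 'repo' is assumed to be an existing localrepository instance.
    rui = hg.remoteui(repo, {'ssh': 'ssh -C'})
    # rui starts from repo.baseui (repo-local config dropped), then copies in
    # ui.ssh/ui.remotecmd, bundle.mainreporoot, the auth/hostfingerprints/
    # http_proxy sections, and web.cacerts from the local configuration.
    other = hg.peer(repo, {'ssh': 'ssh -C'}, 'ssh://hg@example.org/repo')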
834 # Files of interest
834 # Files of interest
835 # Used to check if the repository has changed looking at mtime and size of
835 # Used to check if the repository has changed looking at mtime and size of
836 # these files.
836 # these files.
837 foi = [('spath', '00changelog.i'),
837 foi = [('spath', '00changelog.i'),
838 ('spath', 'phaseroots'), # ! phase can change content at the same size
838 ('spath', 'phaseroots'), # ! phase can change content at the same size
839 ('spath', 'obsstore'),
839 ('spath', 'obsstore'),
840 ('path', 'bookmarks'), # ! bookmark can change content at the same size
840 ('path', 'bookmarks'), # ! bookmark can change content at the same size
841 ]
841 ]
842
842
843 class cachedlocalrepo(object):
843 class cachedlocalrepo(object):
844 """Holds a localrepository that can be cached and reused."""
844 """Holds a localrepository that can be cached and reused."""
845
845
846 def __init__(self, repo):
846 def __init__(self, repo):
847 """Create a new cached repo from an existing repo.
847 """Create a new cached repo from an existing repo.
848
848
849 We assume the passed in repo was recently created. If the
849 We assume the passed in repo was recently created. If the
850 repo has changed between when it was created and when it was
850 repo has changed between when it was created and when it was
851 turned into a cache, it may not refresh properly.
851 turned into a cache, it may not refresh properly.
852 """
852 """
853 assert isinstance(repo, localrepo.localrepository)
853 assert isinstance(repo, localrepo.localrepository)
854 self._repo = repo
854 self._repo = repo
855 self._state, self.mtime = self._repostate()
855 self._state, self.mtime = self._repostate()
856
856
857 def fetch(self):
857 def fetch(self):
858 """Refresh (if necessary) and return a repository.
858 """Refresh (if necessary) and return a repository.
859
859
860 If the cached instance is out of date, it will be recreated
860 If the cached instance is out of date, it will be recreated
861 automatically and returned.
861 automatically and returned.
862
862
863 Returns a tuple of the repo and a boolean indicating whether a new
863 Returns a tuple of the repo and a boolean indicating whether a new
864 repo instance was created.
864 repo instance was created.
865 """
865 """
866 # We compare the mtimes and sizes of some well-known files to
866 # We compare the mtimes and sizes of some well-known files to
867 # determine if the repo changed. This is not precise, as mtimes
867 # determine if the repo changed. This is not precise, as mtimes
868 # are susceptible to clock skew and imprecise filesystems and
868 # are susceptible to clock skew and imprecise filesystems and
869 # file content can change while maintaining the same size.
869 # file content can change while maintaining the same size.
870
870
871 state, mtime = self._repostate()
871 state, mtime = self._repostate()
872 if state == self._state:
872 if state == self._state:
873 return self._repo, False
873 return self._repo, False
874
874
875 self._repo = repository(self._repo.baseui, self._repo.url())
875 self._repo = repository(self._repo.baseui, self._repo.url())
876 self._state = state
876 self._state = state
877 self.mtime = mtime
877 self.mtime = mtime
878
878
879 return self._repo, True
879 return self._repo, True
880
880
881 def _repostate(self):
881 def _repostate(self):
882 state = []
882 state = []
883 maxmtime = -1
883 maxmtime = -1
884 for attr, fname in foi:
884 for attr, fname in foi:
885 prefix = getattr(self._repo, attr)
885 prefix = getattr(self._repo, attr)
886 p = os.path.join(prefix, fname)
886 p = os.path.join(prefix, fname)
887 try:
887 try:
888 st = os.stat(p)
888 st = os.stat(p)
889 except OSError:
889 except OSError:
890 st = os.stat(prefix)
890 st = os.stat(prefix)
891 state.append((st.st_mtime, st.st_size))
891 state.append((st.st_mtime, st.st_size))
892 maxmtime = max(maxmtime, st.st_mtime)
892 maxmtime = max(maxmtime, st.st_mtime)
893
893
894 return tuple(state), maxmtime
894 return tuple(state), maxmtime
895
895
896 def copy(self):
896 def copy(self):
897 """Obtain a copy of this class instance.
897 """Obtain a copy of this class instance.
898
898
899 A new localrepository instance is obtained. The new instance should be
899 A new localrepository instance is obtained. The new instance should be
900 completely independent of the original.
900 completely independent of the original.
901 """
901 """
902 repo = repository(self._repo.baseui, self._repo.origroot)
902 repo = repository(self._repo.baseui, self._repo.origroot)
903 c = cachedlocalrepo(repo)
903 c = cachedlocalrepo(repo)
904 c._state = self._state
904 c._state = self._state
905 c.mtime = self.mtime
905 c.mtime = self.mtime
906 return c
906 return c
@@ -1,1534 +1,1545 b''
1 # merge.py - directory-level update/merge handling for Mercurial
1 # merge.py - directory-level update/merge handling for Mercurial
2 #
2 #
3 # Copyright 2006, 2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2006, 2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import errno
10 import errno
11 import os
11 import os
12 import shutil
12 import shutil
13 import struct
13 import struct
14
14
15 from .i18n import _
15 from .i18n import _
16 from .node import (
16 from .node import (
17 bin,
17 bin,
18 hex,
18 hex,
19 nullhex,
19 nullhex,
20 nullid,
20 nullid,
21 nullrev,
21 nullrev,
22 )
22 )
23 from . import (
23 from . import (
24 copies,
24 copies,
25 destutil,
25 destutil,
26 error,
26 error,
27 filemerge,
27 filemerge,
28 obsolete,
28 obsolete,
29 subrepo,
29 subrepo,
30 util,
30 util,
31 worker,
31 worker,
32 )
32 )
33
33
34 _pack = struct.pack
34 _pack = struct.pack
35 _unpack = struct.unpack
35 _unpack = struct.unpack
36
36
37 def _droponode(data):
37 def _droponode(data):
38 # used for compatibility for v1
38 # used for compatibility for v1
39 bits = data.split('\0')
39 bits = data.split('\0')
40 bits = bits[:-2] + bits[-1:]
40 bits = bits[:-2] + bits[-1:]
41 return '\0'.join(bits)
41 return '\0'.join(bits)
42
42
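# Illustrative sketch (not part of the upstream module): _droponode above
# drops the second-to-last '\0'-separated field -- the "other file node"
# entry that v1 merge-state records never carried -- so v2-style 'F' data
# can be compared against v1 data. The helper and values below are made up
# for demonstration only.
def _droponode_example():
    data = '\0'.join(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
    assert _droponode(data) == '\0'.join(['a', 'b', 'c', 'd', 'e', 'f', 'h'])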
43 class mergestate(object):
43 class mergestate(object):
44 '''track 3-way merge state of individual files
44 '''track 3-way merge state of individual files
45
45
46 The merge state is stored on disk when needed. Two files are used: one with
46 The merge state is stored on disk when needed. Two files are used: one with
47 an old format (version 1), and one with a new format (version 2). Version 2
47 an old format (version 1), and one with a new format (version 2). Version 2
48 stores a superset of the data in version 1, including new kinds of records
48 stores a superset of the data in version 1, including new kinds of records
49 in the future. For more about the new format, see the documentation for
49 in the future. For more about the new format, see the documentation for
50 `_readrecordsv2`.
50 `_readrecordsv2`.
51
51
52 Each record can contain arbitrary content, and has an associated type. This
52 Each record can contain arbitrary content, and has an associated type. This
53 `type` should be a letter. If `type` is uppercase, the record is mandatory:
53 `type` should be a letter. If `type` is uppercase, the record is mandatory:
54 versions of Mercurial that don't support it should abort. If `type` is
54 versions of Mercurial that don't support it should abort. If `type` is
55 lowercase, the record can be safely ignored.
55 lowercase, the record can be safely ignored.
56
56
57 Currently known records:
57 Currently known records:
58
58
59 L: the node of the "local" part of the merge (hexified version)
59 L: the node of the "local" part of the merge (hexified version)
60 O: the node of the "other" part of the merge (hexified version)
60 O: the node of the "other" part of the merge (hexified version)
61 F: an entry for a file to be merged
61 F: an entry for a file to be merged
62 C: a change/delete or delete/change conflict
62 C: a change/delete or delete/change conflict
63 D: a file that the external merge driver will merge internally
63 D: a file that the external merge driver will merge internally
64 (experimental)
64 (experimental)
65 m: the external merge driver defined for this merge plus its run state
65 m: the external merge driver defined for this merge plus its run state
66 (experimental)
66 (experimental)
67 X: unsupported mandatory record type (used in tests)
67 X: unsupported mandatory record type (used in tests)
68 x: unsupported advisory record type (used in tests)
68 x: unsupported advisory record type (used in tests)
69
69
70 Merge driver run states (experimental):
70 Merge driver run states (experimental):
71 u: driver-resolved files unmarked -- needs to be run next time we're about
71 u: driver-resolved files unmarked -- needs to be run next time we're about
72 to resolve or commit
72 to resolve or commit
73 m: driver-resolved files marked -- only needs to be run before commit
73 m: driver-resolved files marked -- only needs to be run before commit
74 s: success/skipped -- does not need to be run any more
74 s: success/skipped -- does not need to be run any more
75
75
76 '''
76 '''
77 statepathv1 = 'merge/state'
77 statepathv1 = 'merge/state'
78 statepathv2 = 'merge/state2'
78 statepathv2 = 'merge/state2'
79
79
80 @staticmethod
80 @staticmethod
81 def clean(repo, node=None, other=None):
81 def clean(repo, node=None, other=None):
82 """Initialize a brand new merge state, removing any existing state on
82 """Initialize a brand new merge state, removing any existing state on
83 disk."""
83 disk."""
84 ms = mergestate(repo)
84 ms = mergestate(repo)
85 ms.reset(node, other)
85 ms.reset(node, other)
86 return ms
86 return ms
87
87
88 @staticmethod
88 @staticmethod
89 def read(repo):
89 def read(repo):
90 """Initialize the merge state, reading it from disk."""
90 """Initialize the merge state, reading it from disk."""
91 ms = mergestate(repo)
91 ms = mergestate(repo)
92 ms._read()
92 ms._read()
93 return ms
93 return ms
94
94
95 def __init__(self, repo):
95 def __init__(self, repo):
96 """Initialize the merge state.
96 """Initialize the merge state.
97
97
98 Do not use this directly! Instead call read() or clean()."""
98 Do not use this directly! Instead call read() or clean()."""
99 self._repo = repo
99 self._repo = repo
100 self._dirty = False
100 self._dirty = False
101
101
102 def reset(self, node=None, other=None):
102 def reset(self, node=None, other=None):
103 self._state = {}
103 self._state = {}
104 self._local = None
104 self._local = None
105 self._other = None
105 self._other = None
106 for var in ('localctx', 'otherctx'):
106 for var in ('localctx', 'otherctx'):
107 if var in vars(self):
107 if var in vars(self):
108 delattr(self, var)
108 delattr(self, var)
109 if node:
109 if node:
110 self._local = node
110 self._local = node
111 self._other = other
111 self._other = other
112 self._readmergedriver = None
112 self._readmergedriver = None
113 if self.mergedriver:
113 if self.mergedriver:
114 self._mdstate = 's'
114 self._mdstate = 's'
115 else:
115 else:
116 self._mdstate = 'u'
116 self._mdstate = 'u'
117 shutil.rmtree(self._repo.join('merge'), True)
117 shutil.rmtree(self._repo.join('merge'), True)
118 self._results = {}
118 self._results = {}
119 self._dirty = False
119 self._dirty = False
120
120
121 def _read(self):
121 def _read(self):
122 """Analyse each record content to restore a serialized state from disk
122 """Analyse each record content to restore a serialized state from disk
123
123
124 This function processes the "record" entries produced by the
124 This function processes the "record" entries produced by the
125 de-serialization of the on-disk file.
125 de-serialization of the on-disk file.
126 """
126 """
127 self._state = {}
127 self._state = {}
128 self._local = None
128 self._local = None
129 self._other = None
129 self._other = None
130 for var in ('localctx', 'otherctx'):
130 for var in ('localctx', 'otherctx'):
131 if var in vars(self):
131 if var in vars(self):
132 delattr(self, var)
132 delattr(self, var)
133 self._readmergedriver = None
133 self._readmergedriver = None
134 self._mdstate = 's'
134 self._mdstate = 's'
135 unsupported = set()
135 unsupported = set()
136 records = self._readrecords()
136 records = self._readrecords()
137 for rtype, record in records:
137 for rtype, record in records:
138 if rtype == 'L':
138 if rtype == 'L':
139 self._local = bin(record)
139 self._local = bin(record)
140 elif rtype == 'O':
140 elif rtype == 'O':
141 self._other = bin(record)
141 self._other = bin(record)
142 elif rtype == 'm':
142 elif rtype == 'm':
143 bits = record.split('\0', 1)
143 bits = record.split('\0', 1)
144 mdstate = bits[1]
144 mdstate = bits[1]
145 if len(mdstate) != 1 or mdstate not in 'ums':
145 if len(mdstate) != 1 or mdstate not in 'ums':
146 # the merge driver should be idempotent, so just rerun it
146 # the merge driver should be idempotent, so just rerun it
147 mdstate = 'u'
147 mdstate = 'u'
148
148
149 self._readmergedriver = bits[0]
149 self._readmergedriver = bits[0]
150 self._mdstate = mdstate
150 self._mdstate = mdstate
151 elif rtype in 'FDC':
151 elif rtype in 'FDC':
152 bits = record.split('\0')
152 bits = record.split('\0')
153 self._state[bits[0]] = bits[1:]
153 self._state[bits[0]] = bits[1:]
154 elif not rtype.islower():
154 elif not rtype.islower():
155 unsupported.add(rtype)
155 unsupported.add(rtype)
156 self._results = {}
156 self._results = {}
157 self._dirty = False
157 self._dirty = False
158
158
159 if unsupported:
159 if unsupported:
160 raise error.UnsupportedMergeRecords(unsupported)
160 raise error.UnsupportedMergeRecords(unsupported)
161
161
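# A concrete illustration of the mandatory/advisory rule described in the
# class docstring (sketch, not upstream code): an unknown uppercase record
# such as ('X', data) ends up in `unsupported` above and makes _read raise
# UnsupportedMergeRecords, while an unknown lowercase record such as
# ('x', data) falls through the elif chain and is silently ignored.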
162 def _readrecords(self):
162 def _readrecords(self):
163 """Read merge state from disk and return a list of record (TYPE, data)
163 """Read merge state from disk and return a list of record (TYPE, data)
164
164
165 We read data from both v1 and v2 files and decide which one to use.
165 We read data from both v1 and v2 files and decide which one to use.
166
166
167 V1 was used by versions prior to 2.9.1 and contains less data than
167 V1 was used by versions prior to 2.9.1 and contains less data than
168 v2. We read both versions and check whether any data in v2
168 v2. We read both versions and check whether any data in v2
169 contradicts v1. If there is no contradiction we can safely assume
169 contradicts v1. If there is no contradiction we can safely assume
170 that both v1 and v2 were written at the same time and use the extra
170 that both v1 and v2 were written at the same time and use the extra
171 data in v2. If there is a contradiction we ignore the v2 content, as
171 data in v2. If there is a contradiction we ignore the v2 content, as
172 we assume an old version of Mercurial has overwritten the mergestate
172 we assume an old version of Mercurial has overwritten the mergestate
173 file and left an old v2 file around.
173 file and left an old v2 file around.
174
174
175 returns a list of records [(TYPE, data), ...]"""
175 returns a list of records [(TYPE, data), ...]"""
176 v1records = self._readrecordsv1()
176 v1records = self._readrecordsv1()
177 v2records = self._readrecordsv2()
177 v2records = self._readrecordsv2()
178 if self._v1v2match(v1records, v2records):
178 if self._v1v2match(v1records, v2records):
179 return v2records
179 return v2records
180 else:
180 else:
181 # v1 file is newer than v2 file, use it
181 # v1 file is newer than v2 file, use it
182 # we have to infer the "other" changeset of the merge
182 # we have to infer the "other" changeset of the merge
183 # we cannot do better than that with v1 of the format
183 # we cannot do better than that with v1 of the format
184 mctx = self._repo[None].parents()[-1]
184 mctx = self._repo[None].parents()[-1]
185 v1records.append(('O', mctx.hex()))
185 v1records.append(('O', mctx.hex()))
186 # add placeholder "other" file node information
186 # add placeholder "other" file node information
187 # nobody is using it yet so we don't need to fetch the data
187 # nobody is using it yet so we don't need to fetch the data
188 # if mctx were wrong, `mctx[bits[-2]]` could fail.
188 # if mctx were wrong, `mctx[bits[-2]]` could fail.
189 for idx, r in enumerate(v1records):
189 for idx, r in enumerate(v1records):
190 if r[0] == 'F':
190 if r[0] == 'F':
191 bits = r[1].split('\0')
191 bits = r[1].split('\0')
192 bits.insert(-2, '')
192 bits.insert(-2, '')
193 v1records[idx] = (r[0], '\0'.join(bits))
193 v1records[idx] = (r[0], '\0'.join(bits))
194 return v1records
194 return v1records
195
195
196 def _v1v2match(self, v1records, v2records):
196 def _v1v2match(self, v1records, v2records):
197 oldv2 = set() # old format version of v2 record
197 oldv2 = set() # old format version of v2 record
198 for rec in v2records:
198 for rec in v2records:
199 if rec[0] == 'L':
199 if rec[0] == 'L':
200 oldv2.add(rec)
200 oldv2.add(rec)
201 elif rec[0] == 'F':
201 elif rec[0] == 'F':
202 # drop the onode data (not contained in v1)
202 # drop the onode data (not contained in v1)
203 oldv2.add(('F', _droponode(rec[1])))
203 oldv2.add(('F', _droponode(rec[1])))
204 for rec in v1records:
204 for rec in v1records:
205 if rec not in oldv2:
205 if rec not in oldv2:
206 return False
206 return False
207 else:
207 else:
208 return True
208 return True
209
209
210 def _readrecordsv1(self):
210 def _readrecordsv1(self):
211 """read on disk merge state for version 1 file
211 """read on disk merge state for version 1 file
212
212
213 returns a list of records [(TYPE, data), ...]
213 returns a list of records [(TYPE, data), ...]
214
214
215 Note: the "F" data from this file are one entry short
215 Note: the "F" data from this file are one entry short
216 (no "other file node" entry)
216 (no "other file node" entry)
217 """
217 """
218 records = []
218 records = []
219 try:
219 try:
220 f = self._repo.vfs(self.statepathv1)
220 f = self._repo.vfs(self.statepathv1)
221 for i, l in enumerate(f):
221 for i, l in enumerate(f):
222 if i == 0:
222 if i == 0:
223 records.append(('L', l[:-1]))
223 records.append(('L', l[:-1]))
224 else:
224 else:
225 records.append(('F', l[:-1]))
225 records.append(('F', l[:-1]))
226 f.close()
226 f.close()
227 except IOError as err:
227 except IOError as err:
228 if err.errno != errno.ENOENT:
228 if err.errno != errno.ENOENT:
229 raise
229 raise
230 return records
230 return records
231
231
232 def _readrecordsv2(self):
232 def _readrecordsv2(self):
233 """read on disk merge state for version 2 file
233 """read on disk merge state for version 2 file
234
234
235 This format is a list of arbitrary records of the form:
235 This format is a list of arbitrary records of the form:
236
236
237 [type][length][content]
237 [type][length][content]
238
238
239 `type` is a single character, `length` is a 4 byte integer, and
239 `type` is a single character, `length` is a 4 byte integer, and
240 `content` is an arbitrary byte sequence of length `length`.
240 `content` is an arbitrary byte sequence of length `length`.
241
241
242 Mercurial versions prior to 3.7 have a bug where if there are
242 Mercurial versions prior to 3.7 have a bug where if there are
243 unsupported mandatory merge records, attempting to clear out the merge
243 unsupported mandatory merge records, attempting to clear out the merge
244 state with hg update --clean or similar aborts. The 't' record type
244 state with hg update --clean or similar aborts. The 't' record type
245 works around that by writing out what those versions treat as an
245 works around that by writing out what those versions treat as an
246 advisory record, but later versions interpret as special: the first
246 advisory record, but later versions interpret as special: the first
247 character is the 'real' record type and everything onwards is the data.
247 character is the 'real' record type and everything onwards is the data.
248
248
249 Returns list of records [(TYPE, data), ...]."""
249 Returns list of records [(TYPE, data), ...]."""
250 records = []
250 records = []
251 try:
251 try:
252 f = self._repo.vfs(self.statepathv2)
252 f = self._repo.vfs(self.statepathv2)
253 data = f.read()
253 data = f.read()
254 off = 0
254 off = 0
255 end = len(data)
255 end = len(data)
256 while off < end:
256 while off < end:
257 rtype = data[off]
257 rtype = data[off]
258 off += 1
258 off += 1
259 length = _unpack('>I', data[off:(off + 4)])[0]
259 length = _unpack('>I', data[off:(off + 4)])[0]
260 off += 4
260 off += 4
261 record = data[off:(off + length)]
261 record = data[off:(off + length)]
262 off += length
262 off += length
263 if rtype == 't':
263 if rtype == 't':
264 rtype, record = record[0], record[1:]
264 rtype, record = record[0], record[1:]
265 records.append((rtype, record))
265 records.append((rtype, record))
266 f.close()
266 f.close()
267 except IOError as err:
267 except IOError as err:
268 if err.errno != errno.ENOENT:
268 if err.errno != errno.ENOENT:
269 raise
269 raise
270 return records
270 return records
271
271
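# Sketch of the on-disk v2 record layout described in _readrecordsv2 (not
# part of the class): a single advisory record ('x', 'data') is encoded as
# the type byte, a big-endian 4-byte length, and the payload, e.g.
#
#     >>> import struct
#     >>> struct.pack('>sI4s', 'x', 4, 'data')
#     'x\x00\x00\x00\x04data'
#
# which matches what _writerecordsv2 below emits via _pack.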
272 @util.propertycache
272 @util.propertycache
273 def mergedriver(self):
273 def mergedriver(self):
274 # protect against the following:
274 # protect against the following:
275 # - A configures a malicious merge driver in their hgrc, then
275 # - A configures a malicious merge driver in their hgrc, then
276 # pauses the merge
276 # pauses the merge
277 # - A edits their hgrc to remove references to the merge driver
277 # - A edits their hgrc to remove references to the merge driver
278 # - A gives a copy of their entire repo, including .hg, to B
278 # - A gives a copy of their entire repo, including .hg, to B
279 # - B inspects .hgrc and finds it to be clean
279 # - B inspects .hgrc and finds it to be clean
280 # - B then continues the merge and the malicious merge driver
280 # - B then continues the merge and the malicious merge driver
281 # gets invoked
281 # gets invoked
282 configmergedriver = self._repo.ui.config('experimental', 'mergedriver')
282 configmergedriver = self._repo.ui.config('experimental', 'mergedriver')
283 if (self._readmergedriver is not None
283 if (self._readmergedriver is not None
284 and self._readmergedriver != configmergedriver):
284 and self._readmergedriver != configmergedriver):
285 raise error.ConfigError(
285 raise error.ConfigError(
286 _("merge driver changed since merge started"),
286 _("merge driver changed since merge started"),
287 hint=_("revert merge driver change or abort merge"))
287 hint=_("revert merge driver change or abort merge"))
288
288
289 return configmergedriver
289 return configmergedriver
290
290
291 @util.propertycache
291 @util.propertycache
292 def localctx(self):
292 def localctx(self):
293 if self._local is None:
293 if self._local is None:
294 raise RuntimeError("localctx accessed but self._local isn't set")
294 raise RuntimeError("localctx accessed but self._local isn't set")
295 return self._repo[self._local]
295 return self._repo[self._local]
296
296
297 @util.propertycache
297 @util.propertycache
298 def otherctx(self):
298 def otherctx(self):
299 if self._other is None:
299 if self._other is None:
300 raise RuntimeError("otherctx accessed but self._other isn't set")
300 raise RuntimeError("otherctx accessed but self._other isn't set")
301 return self._repo[self._other]
301 return self._repo[self._other]
302
302
303 def active(self):
303 def active(self):
304 """Whether mergestate is active.
304 """Whether mergestate is active.
305
305
306 Returns True if there appears to be mergestate. This is a rough proxy
306 Returns True if there appears to be mergestate. This is a rough proxy
307 for "is a merge in progress."
307 for "is a merge in progress."
308 """
308 """
309 # Check local variables before looking at filesystem for performance
309 # Check local variables before looking at filesystem for performance
310 # reasons.
310 # reasons.
311 return bool(self._local) or bool(self._state) or \
311 return bool(self._local) or bool(self._state) or \
312 self._repo.vfs.exists(self.statepathv1) or \
312 self._repo.vfs.exists(self.statepathv1) or \
313 self._repo.vfs.exists(self.statepathv2)
313 self._repo.vfs.exists(self.statepathv2)
314
314
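# Typical caller-side use of the reader API above (illustrative only; the
# variable names are made up):
#
#     ms = mergestate.read(repo)
#     if ms.active():
#         unresolved = list(ms.unresolved())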
315 def commit(self):
315 def commit(self):
316 """Write current state on disk (if necessary)"""
316 """Write current state on disk (if necessary)"""
317 if self._dirty:
317 if self._dirty:
318 records = self._makerecords()
318 records = self._makerecords()
319 self._writerecords(records)
319 self._writerecords(records)
320 self._dirty = False
320 self._dirty = False
321
321
322 def _makerecords(self):
322 def _makerecords(self):
323 records = []
323 records = []
324 records.append(('L', hex(self._local)))
324 records.append(('L', hex(self._local)))
325 records.append(('O', hex(self._other)))
325 records.append(('O', hex(self._other)))
326 if self.mergedriver:
326 if self.mergedriver:
327 records.append(('m', '\0'.join([
327 records.append(('m', '\0'.join([
328 self.mergedriver, self._mdstate])))
328 self.mergedriver, self._mdstate])))
329 for d, v in self._state.iteritems():
329 for d, v in self._state.iteritems():
330 if v[0] == 'd':
330 if v[0] == 'd':
331 records.append(('D', '\0'.join([d] + v)))
331 records.append(('D', '\0'.join([d] + v)))
332 # v[1] == local ('cd'), v[6] == other ('dc') -- not supported by
332 # v[1] == local ('cd'), v[6] == other ('dc') -- not supported by
333 # older versions of Mercurial
333 # older versions of Mercurial
334 elif v[1] == nullhex or v[6] == nullhex:
334 elif v[1] == nullhex or v[6] == nullhex:
335 records.append(('C', '\0'.join([d] + v)))
335 records.append(('C', '\0'.join([d] + v)))
336 else:
336 else:
337 records.append(('F', '\0'.join([d] + v)))
337 records.append(('F', '\0'.join([d] + v)))
338 return records
338 return records
339
339
340 def _writerecords(self, records):
340 def _writerecords(self, records):
341 """Write current state on disk (both v1 and v2)"""
341 """Write current state on disk (both v1 and v2)"""
342 self._writerecordsv1(records)
342 self._writerecordsv1(records)
343 self._writerecordsv2(records)
343 self._writerecordsv2(records)
344
344
345 def _writerecordsv1(self, records):
345 def _writerecordsv1(self, records):
346 """Write current state on disk in a version 1 file"""
346 """Write current state on disk in a version 1 file"""
347 f = self._repo.vfs(self.statepathv1, 'w')
347 f = self._repo.vfs(self.statepathv1, 'w')
348 irecords = iter(records)
348 irecords = iter(records)
349 lrecords = irecords.next()
349 lrecords = irecords.next()
350 assert lrecords[0] == 'L'
350 assert lrecords[0] == 'L'
351 f.write(hex(self._local) + '\n')
351 f.write(hex(self._local) + '\n')
352 for rtype, data in irecords:
352 for rtype, data in irecords:
353 if rtype == 'F':
353 if rtype == 'F':
354 f.write('%s\n' % _droponode(data))
354 f.write('%s\n' % _droponode(data))
355 f.close()
355 f.close()
356
356
357 def _writerecordsv2(self, records):
357 def _writerecordsv2(self, records):
358 """Write current state on disk in a version 2 file
358 """Write current state on disk in a version 2 file
359
359
360 See the docstring for _readrecordsv2 for why we use 't'."""
360 See the docstring for _readrecordsv2 for why we use 't'."""
361 # these are the records that all version 2 clients can read
361 # these are the records that all version 2 clients can read
362 whitelist = 'LOF'
362 whitelist = 'LOF'
363 f = self._repo.vfs(self.statepathv2, 'w')
363 f = self._repo.vfs(self.statepathv2, 'w')
364 for key, data in records:
364 for key, data in records:
365 assert len(key) == 1
365 assert len(key) == 1
366 if key not in whitelist:
366 if key not in whitelist:
367 key, data = 't', '%s%s' % (key, data)
367 key, data = 't', '%s%s' % (key, data)
368 format = '>sI%is' % len(data)
368 format = '>sI%is' % len(data)
369 f.write(_pack(format, key, len(data), data))
369 f.write(_pack(format, key, len(data), data))
370 f.close()
370 f.close()
371
371
372 def add(self, fcl, fco, fca, fd):
372 def add(self, fcl, fco, fca, fd):
373 """add a new (potentially?) conflicting file the merge state
373 """add a new (potentially?) conflicting file the merge state
374 fcl: file context for local,
374 fcl: file context for local,
375 fco: file context for remote,
375 fco: file context for remote,
376 fca: file context for ancestors,
376 fca: file context for ancestors,
377 fd: file path of the resulting merge.
377 fd: file path of the resulting merge.
378
378
379 note: also write the local version to the `.hg/merge` directory.
379 note: also write the local version to the `.hg/merge` directory.
380 """
380 """
381 if fcl.isabsent():
381 if fcl.isabsent():
382 hash = nullhex
382 hash = nullhex
383 else:
383 else:
384 hash = util.sha1(fcl.path()).hexdigest()
384 hash = util.sha1(fcl.path()).hexdigest()
385 self._repo.vfs.write('merge/' + hash, fcl.data())
385 self._repo.vfs.write('merge/' + hash, fcl.data())
386 self._state[fd] = ['u', hash, fcl.path(),
386 self._state[fd] = ['u', hash, fcl.path(),
387 fca.path(), hex(fca.filenode()),
387 fca.path(), hex(fca.filenode()),
388 fco.path(), hex(fco.filenode()),
388 fco.path(), hex(fco.filenode()),
389 fcl.flags()]
389 fcl.flags()]
390 self._dirty = True
390 self._dirty = True
391
391
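# Layout of a self._state entry as built by add() above (summary inferred
# from the list literal; not upstream documentation):
#   [0] state ('u' unresolved, 'r' resolved, 'd' driver-resolved)
#   [1] hash of the local version stored under .hg/merge (nullhex if absent)
#   [2] local path          [3] ancestor path
#   [4] ancestor file node  [5] other path
#   [6] other file node     [7] flags
# _resolve() below unpacks exactly these eight fields.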
392 def __contains__(self, dfile):
392 def __contains__(self, dfile):
393 return dfile in self._state
393 return dfile in self._state
394
394
395 def __getitem__(self, dfile):
395 def __getitem__(self, dfile):
396 return self._state[dfile][0]
396 return self._state[dfile][0]
397
397
398 def __iter__(self):
398 def __iter__(self):
399 return iter(sorted(self._state))
399 return iter(sorted(self._state))
400
400
401 def files(self):
401 def files(self):
402 return self._state.keys()
402 return self._state.keys()
403
403
404 def mark(self, dfile, state):
404 def mark(self, dfile, state):
405 self._state[dfile][0] = state
405 self._state[dfile][0] = state
406 self._dirty = True
406 self._dirty = True
407
407
408 def mdstate(self):
408 def mdstate(self):
409 return self._mdstate
409 return self._mdstate
410
410
411 def unresolved(self):
411 def unresolved(self):
412 """Obtain the paths of unresolved files."""
412 """Obtain the paths of unresolved files."""
413
413
414 for f, entry in self._state.items():
414 for f, entry in self._state.items():
415 if entry[0] == 'u':
415 if entry[0] == 'u':
416 yield f
416 yield f
417
417
418 def driverresolved(self):
418 def driverresolved(self):
419 """Obtain the paths of driver-resolved files."""
419 """Obtain the paths of driver-resolved files."""
420
420
421 for f, entry in self._state.items():
421 for f, entry in self._state.items():
422 if entry[0] == 'd':
422 if entry[0] == 'd':
423 yield f
423 yield f
424
424
425 def _resolve(self, preresolve, dfile, wctx, labels=None):
425 def _resolve(self, preresolve, dfile, wctx, labels=None):
426 """rerun merge process for file path `dfile`"""
426 """rerun merge process for file path `dfile`"""
427 if self[dfile] in 'rd':
427 if self[dfile] in 'rd':
428 return True, 0
428 return True, 0
429 stateentry = self._state[dfile]
429 stateentry = self._state[dfile]
430 state, hash, lfile, afile, anode, ofile, onode, flags = stateentry
430 state, hash, lfile, afile, anode, ofile, onode, flags = stateentry
431 octx = self._repo[self._other]
431 octx = self._repo[self._other]
432 fcd = self._filectxorabsent(hash, wctx, dfile)
432 fcd = self._filectxorabsent(hash, wctx, dfile)
433 fco = self._filectxorabsent(onode, octx, ofile)
433 fco = self._filectxorabsent(onode, octx, ofile)
434 # TODO: move this to filectxorabsent
434 # TODO: move this to filectxorabsent
435 fca = self._repo.filectx(afile, fileid=anode)
435 fca = self._repo.filectx(afile, fileid=anode)
436 # "premerge" x flags
436 # "premerge" x flags
437 flo = fco.flags()
437 flo = fco.flags()
438 fla = fca.flags()
438 fla = fca.flags()
439 if 'x' in flags + flo + fla and 'l' not in flags + flo + fla:
439 if 'x' in flags + flo + fla and 'l' not in flags + flo + fla:
440 if fca.node() == nullid:
440 if fca.node() == nullid:
441 if preresolve:
441 if preresolve:
442 self._repo.ui.warn(
442 self._repo.ui.warn(
443 _('warning: cannot merge flags for %s\n') % afile)
443 _('warning: cannot merge flags for %s\n') % afile)
444 elif flags == fla:
444 elif flags == fla:
445 flags = flo
445 flags = flo
446 if preresolve:
446 if preresolve:
447 # restore local
447 # restore local
448 if hash != nullhex:
448 if hash != nullhex:
449 f = self._repo.vfs('merge/' + hash)
449 f = self._repo.vfs('merge/' + hash)
450 self._repo.wwrite(dfile, f.read(), flags)
450 self._repo.wwrite(dfile, f.read(), flags)
451 f.close()
451 f.close()
452 else:
452 else:
453 self._repo.wvfs.unlinkpath(dfile, ignoremissing=True)
453 self._repo.wvfs.unlinkpath(dfile, ignoremissing=True)
454 complete, r, deleted = filemerge.premerge(self._repo, self._local,
454 complete, r, deleted = filemerge.premerge(self._repo, self._local,
455 lfile, fcd, fco, fca,
455 lfile, fcd, fco, fca,
456 labels=labels)
456 labels=labels)
457 else:
457 else:
458 complete, r, deleted = filemerge.filemerge(self._repo, self._local,
458 complete, r, deleted = filemerge.filemerge(self._repo, self._local,
459 lfile, fcd, fco, fca,
459 lfile, fcd, fco, fca,
460 labels=labels)
460 labels=labels)
461 if r is None:
461 if r is None:
462 # no real conflict
462 # no real conflict
463 del self._state[dfile]
463 del self._state[dfile]
464 self._dirty = True
464 self._dirty = True
465 elif not r:
465 elif not r:
466 self.mark(dfile, 'r')
466 self.mark(dfile, 'r')
467
467
468 if complete:
468 if complete:
469 action = None
469 action = None
470 if deleted:
470 if deleted:
471 if fcd.isabsent():
471 if fcd.isabsent():
472 # dc: local picked. Need to drop if present, which may
472 # dc: local picked. Need to drop if present, which may
473 # happen on re-resolves.
473 # happen on re-resolves.
474 action = 'f'
474 action = 'f'
475 else:
475 else:
476 # cd: remote picked (or otherwise deleted)
476 # cd: remote picked (or otherwise deleted)
477 action = 'r'
477 action = 'r'
478 else:
478 else:
479 if fcd.isabsent(): # dc: remote picked
479 if fcd.isabsent(): # dc: remote picked
480 action = 'g'
480 action = 'g'
481 elif fco.isabsent(): # cd: local picked
481 elif fco.isabsent(): # cd: local picked
482 if dfile in self.localctx:
482 if dfile in self.localctx:
483 action = 'am'
483 action = 'am'
484 else:
484 else:
485 action = 'a'
485 action = 'a'
486 # else: regular merges (no action necessary)
486 # else: regular merges (no action necessary)
487 self._results[dfile] = r, action
487 self._results[dfile] = r, action
488
488
489 return complete, r
489 return complete, r
490
490
491 def _filectxorabsent(self, hexnode, ctx, f):
491 def _filectxorabsent(self, hexnode, ctx, f):
492 if hexnode == nullhex:
492 if hexnode == nullhex:
493 return filemerge.absentfilectx(ctx, f)
493 return filemerge.absentfilectx(ctx, f)
494 else:
494 else:
495 return ctx[f]
495 return ctx[f]
496
496
497 def preresolve(self, dfile, wctx, labels=None):
497 def preresolve(self, dfile, wctx, labels=None):
498 """run premerge process for dfile
498 """run premerge process for dfile
499
499
500 Returns whether the merge is complete, and the exit code."""
500 Returns whether the merge is complete, and the exit code."""
501 return self._resolve(True, dfile, wctx, labels=labels)
501 return self._resolve(True, dfile, wctx, labels=labels)
502
502
503 def resolve(self, dfile, wctx, labels=None):
503 def resolve(self, dfile, wctx, labels=None):
504 """run merge process (assuming premerge was run) for dfile
504 """run merge process (assuming premerge was run) for dfile
505
505
506 Returns the exit code of the merge."""
506 Returns the exit code of the merge."""
507 return self._resolve(False, dfile, wctx, labels=labels)[1]
507 return self._resolve(False, dfile, wctx, labels=labels)[1]
508
508
509 def counts(self):
509 def counts(self):
510 """return counts for updated, merged and removed files in this
510 """return counts for updated, merged and removed files in this
511 session"""
511 session"""
512 updated, merged, removed = 0, 0, 0
512 updated, merged, removed = 0, 0, 0
513 for r, action in self._results.itervalues():
513 for r, action in self._results.itervalues():
514 if r is None:
514 if r is None:
515 updated += 1
515 updated += 1
516 elif r == 0:
516 elif r == 0:
517 if action == 'r':
517 if action == 'r':
518 removed += 1
518 removed += 1
519 else:
519 else:
520 merged += 1
520 merged += 1
521 return updated, merged, removed
521 return updated, merged, removed
522
522
523 def unresolvedcount(self):
523 def unresolvedcount(self):
524 """get unresolved count for this merge (persistent)"""
524 """get unresolved count for this merge (persistent)"""
525 return len([True for f, entry in self._state.iteritems()
525 return len([True for f, entry in self._state.iteritems()
526 if entry[0] == 'u'])
526 if entry[0] == 'u'])
527
527
528 def actions(self):
528 def actions(self):
529 """return lists of actions to perform on the dirstate"""
529 """return lists of actions to perform on the dirstate"""
530 actions = {'r': [], 'f': [], 'a': [], 'am': [], 'g': []}
530 actions = {'r': [], 'f': [], 'a': [], 'am': [], 'g': []}
531 for f, (r, action) in self._results.iteritems():
531 for f, (r, action) in self._results.iteritems():
532 if action is not None:
532 if action is not None:
533 actions[action].append((f, None, "merge result"))
533 actions[action].append((f, None, "merge result"))
534 return actions
534 return actions
535
535
536 def recordactions(self):
536 def recordactions(self):
537 """record remove/add/get actions in the dirstate"""
537 """record remove/add/get actions in the dirstate"""
538 branchmerge = self._repo.dirstate.p2() != nullid
538 branchmerge = self._repo.dirstate.p2() != nullid
539 recordupdates(self._repo, self.actions(), branchmerge)
539 recordupdates(self._repo, self.actions(), branchmerge)
540
540
541 def queueremove(self, f):
541 def queueremove(self, f):
542 """queues a file to be removed from the dirstate
542 """queues a file to be removed from the dirstate
543
543
544 Meant for use by custom merge drivers."""
544 Meant for use by custom merge drivers."""
545 self._results[f] = 0, 'r'
545 self._results[f] = 0, 'r'
546
546
547 def queueadd(self, f):
547 def queueadd(self, f):
548 """queues a file to be added to the dirstate
548 """queues a file to be added to the dirstate
549
549
550 Meant for use by custom merge drivers."""
550 Meant for use by custom merge drivers."""
551 self._results[f] = 0, 'a'
551 self._results[f] = 0, 'a'
552
552
553 def queueget(self, f):
553 def queueget(self, f):
554 """queues a file to be marked modified in the dirstate
554 """queues a file to be marked modified in the dirstate
555
555
556 Meant for use by custom merge drivers."""
556 Meant for use by custom merge drivers."""
557 self._results[f] = 0, 'g'
557 self._results[f] = 0, 'g'
558
558
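# Hedged sketch of how a custom merge driver might use the queue* helpers
# above (the driver itself is hypothetical; the merge driver machinery is
# an extension point, see driverpreprocess/driverconclude below):
#
#     def exampledriver(repo, ms, wctx, labels=None):
#         for f in ms.driverresolved():
#             # pretend the driver rebuilt f and wants it recorded as
#             # modified in the dirstate
#             ms.queueget(f)
#         return True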
559 def _checkunknownfile(repo, wctx, mctx, f, f2=None):
559 def _checkunknownfile(repo, wctx, mctx, f, f2=None):
560 if f2 is None:
560 if f2 is None:
561 f2 = f
561 f2 = f
562 return (os.path.isfile(repo.wjoin(f))
562 return (os.path.isfile(repo.wjoin(f))
563 and repo.wvfs.audit.check(f)
563 and repo.wvfs.audit.check(f)
564 and repo.dirstate.normalize(f) not in repo.dirstate
564 and repo.dirstate.normalize(f) not in repo.dirstate
565 and mctx[f2].cmp(wctx[f]))
565 and mctx[f2].cmp(wctx[f]))
566
566
567 def _checkunknownfiles(repo, wctx, mctx, force, actions):
567 def _checkunknownfiles(repo, wctx, mctx, force, actions):
568 """
568 """
569 Considers any actions that care about the presence of conflicting unknown
569 Considers any actions that care about the presence of conflicting unknown
570 files. For some actions, the result is to abort; for others, it is to
570 files. For some actions, the result is to abort; for others, it is to
571 choose a different action.
571 choose a different action.
572 """
572 """
573 aborts = []
573 aborts = []
574 if not force:
574 if not force:
575 for f, (m, args, msg) in actions.iteritems():
575 for f, (m, args, msg) in actions.iteritems():
576 if m in ('c', 'dc'):
576 if m in ('c', 'dc'):
577 if _checkunknownfile(repo, wctx, mctx, f):
577 if _checkunknownfile(repo, wctx, mctx, f):
578 aborts.append(f)
578 aborts.append(f)
579 elif m == 'dg':
579 elif m == 'dg':
580 if _checkunknownfile(repo, wctx, mctx, f, args[0]):
580 if _checkunknownfile(repo, wctx, mctx, f, args[0]):
581 aborts.append(f)
581 aborts.append(f)
582
582
583 for f in sorted(aborts):
583 for f in sorted(aborts):
584 repo.ui.warn(_("%s: untracked file differs\n") % f)
584 repo.ui.warn(_("%s: untracked file differs\n") % f)
585 if aborts:
585 if aborts:
586 raise error.Abort(_("untracked files in working directory differ "
586 raise error.Abort(_("untracked files in working directory differ "
587 "from files in requested revision"))
587 "from files in requested revision"))
588
588
589 for f, (m, args, msg) in actions.iteritems():
589 for f, (m, args, msg) in actions.iteritems():
590 if m == 'c':
590 if m == 'c':
591 actions[f] = ('g', args, msg)
591 actions[f] = ('g', args, msg)
592 elif m == 'cm':
592 elif m == 'cm':
593 fl2, anc = args
593 fl2, anc = args
594 different = _checkunknownfile(repo, wctx, mctx, f)
594 different = _checkunknownfile(repo, wctx, mctx, f)
595 if different:
595 if different:
596 actions[f] = ('m', (f, f, None, False, anc),
596 actions[f] = ('m', (f, f, None, False, anc),
597 "remote differs from untracked local")
597 "remote differs from untracked local")
598 else:
598 else:
599 actions[f] = ('g', (fl2,), "remote created")
599 actions[f] = ('g', (fl2,), "remote created")
600
600
601 def _forgetremoved(wctx, mctx, branchmerge):
601 def _forgetremoved(wctx, mctx, branchmerge):
602 """
602 """
603 Forget removed files
603 Forget removed files
604
604
605 If we're jumping between revisions (as opposed to merging), and if
605 If we're jumping between revisions (as opposed to merging), and if
606 neither the working directory nor the target rev has the file,
606 neither the working directory nor the target rev has the file,
607 then we need to remove it from the dirstate, to prevent the
607 then we need to remove it from the dirstate, to prevent the
608 dirstate from listing the file when it is no longer in the
608 dirstate from listing the file when it is no longer in the
609 manifest.
609 manifest.
610
610
611 If we're merging, and the other revision has removed a file
611 If we're merging, and the other revision has removed a file
612 that is not present in the working directory, we need to mark it
612 that is not present in the working directory, we need to mark it
613 as removed.
613 as removed.
614 """
614 """
615
615
616 actions = {}
616 actions = {}
617 m = 'f'
617 m = 'f'
618 if branchmerge:
618 if branchmerge:
619 m = 'r'
619 m = 'r'
620 for f in wctx.deleted():
620 for f in wctx.deleted():
621 if f not in mctx:
621 if f not in mctx:
622 actions[f] = m, None, "forget deleted"
622 actions[f] = m, None, "forget deleted"
623
623
624 if not branchmerge:
624 if not branchmerge:
625 for f in wctx.removed():
625 for f in wctx.removed():
626 if f not in mctx:
626 if f not in mctx:
627 actions[f] = 'f', None, "forget removed"
627 actions[f] = 'f', None, "forget removed"
628
628
629 return actions
629 return actions
630
630
631 def _checkcollision(repo, wmf, actions):
631 def _checkcollision(repo, wmf, actions):
632 # build provisional merged manifest up
632 # build provisional merged manifest up
633 pmmf = set(wmf)
633 pmmf = set(wmf)
634
634
635 if actions:
635 if actions:
636 # k, dr, e and rd are no-ops
636 # k, dr, e and rd are no-ops
637 for m in 'a', 'am', 'f', 'g', 'cd', 'dc':
637 for m in 'a', 'am', 'f', 'g', 'cd', 'dc':
638 for f, args, msg in actions[m]:
638 for f, args, msg in actions[m]:
639 pmmf.add(f)
639 pmmf.add(f)
640 for f, args, msg in actions['r']:
640 for f, args, msg in actions['r']:
641 pmmf.discard(f)
641 pmmf.discard(f)
642 for f, args, msg in actions['dm']:
642 for f, args, msg in actions['dm']:
643 f2, flags = args
643 f2, flags = args
644 pmmf.discard(f2)
644 pmmf.discard(f2)
645 pmmf.add(f)
645 pmmf.add(f)
646 for f, args, msg in actions['dg']:
646 for f, args, msg in actions['dg']:
647 pmmf.add(f)
647 pmmf.add(f)
648 for f, args, msg in actions['m']:
648 for f, args, msg in actions['m']:
649 f1, f2, fa, move, anc = args
649 f1, f2, fa, move, anc = args
650 if move:
650 if move:
651 pmmf.discard(f1)
651 pmmf.discard(f1)
652 pmmf.add(f)
652 pmmf.add(f)
653
653
654 # check case-folding collision in provisional merged manifest
654 # check case-folding collision in provisional merged manifest
655 foldmap = {}
655 foldmap = {}
656 for f in sorted(pmmf):
656 for f in sorted(pmmf):
657 fold = util.normcase(f)
657 fold = util.normcase(f)
658 if fold in foldmap:
658 if fold in foldmap:
659 raise error.Abort(_("case-folding collision between %s and %s")
659 raise error.Abort(_("case-folding collision between %s and %s")
660 % (f, foldmap[fold]))
660 % (f, foldmap[fold]))
661 foldmap[fold] = f
661 foldmap[fold] = f
662
662
663 # check case-folding of directories
663 # check case-folding of directories
664 foldprefix = unfoldprefix = lastfull = ''
664 foldprefix = unfoldprefix = lastfull = ''
665 for fold, f in sorted(foldmap.items()):
665 for fold, f in sorted(foldmap.items()):
666 if fold.startswith(foldprefix) and not f.startswith(unfoldprefix):
666 if fold.startswith(foldprefix) and not f.startswith(unfoldprefix):
667 # the folded prefix matches but actual casing is different
667 # the folded prefix matches but actual casing is different
668 raise error.Abort(_("case-folding collision between "
668 raise error.Abort(_("case-folding collision between "
669 "%s and directory of %s") % (lastfull, f))
669 "%s and directory of %s") % (lastfull, f))
670 foldprefix = fold + '/'
670 foldprefix = fold + '/'
671 unfoldprefix = f + '/'
671 unfoldprefix = f + '/'
672 lastfull = f
672 lastfull = f
673
673
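# Illustrative sketch (not part of the module): on platforms where
# util.normcase folds case, two manifest entries differing only in case
# collide in the provisional manifest and _checkcollision aborts.
def _checkcollision_example():
    try:
        _checkcollision(None, ['README', 'readme'], None)
    except error.Abort:
        return True   # the case-folding collision was reported
    return False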
674 def driverpreprocess(repo, ms, wctx, labels=None):
674 def driverpreprocess(repo, ms, wctx, labels=None):
675 """run the preprocess step of the merge driver, if any
675 """run the preprocess step of the merge driver, if any
676
676
677 This is currently not implemented -- it's an extension point."""
677 This is currently not implemented -- it's an extension point."""
678 return True
678 return True
679
679
680 def driverconclude(repo, ms, wctx, labels=None):
680 def driverconclude(repo, ms, wctx, labels=None):
681 """run the conclude step of the merge driver, if any
681 """run the conclude step of the merge driver, if any
682
682
683 This is currently not implemented -- it's an extension point."""
683 This is currently not implemented -- it's an extension point."""
684 return True
684 return True
685
685
686 def manifestmerge(repo, wctx, p2, pa, branchmerge, force, partial,
686 def manifestmerge(repo, wctx, p2, pa, branchmerge, force, partial,
687 acceptremote, followcopies):
687 acceptremote, followcopies):
688 """
688 """
689 Merge p1 and p2 with ancestor pa and generate merge action list
689 Merge p1 and p2 with ancestor pa and generate merge action list
690
690
691 branchmerge and force are as passed in to update
691 branchmerge and force are as passed in to update
692 partial = function to filter file lists
692 partial = function to filter file lists
693 acceptremote = accept the incoming changes without prompting
693 acceptremote = accept the incoming changes without prompting
694 """
694 """
695
695
696 copy, movewithdir, diverge, renamedelete = {}, {}, {}, {}
696 copy, movewithdir, diverge, renamedelete = {}, {}, {}, {}
697
697
698 # manifests fetched in order are going to be faster, so prime the caches
698 # manifests fetched in order are going to be faster, so prime the caches
699 [x.manifest() for x in
699 [x.manifest() for x in
700 sorted(wctx.parents() + [p2, pa], key=lambda x: x.rev())]
700 sorted(wctx.parents() + [p2, pa], key=lambda x: x.rev())]
701
701
702 if followcopies:
702 if followcopies:
703 ret = copies.mergecopies(repo, wctx, p2, pa)
703 ret = copies.mergecopies(repo, wctx, p2, pa)
704 copy, movewithdir, diverge, renamedelete = ret
704 copy, movewithdir, diverge, renamedelete = ret
705
705
706 repo.ui.note(_("resolving manifests\n"))
706 repo.ui.note(_("resolving manifests\n"))
707 repo.ui.debug(" branchmerge: %s, force: %s, partial: %s\n"
707 repo.ui.debug(" branchmerge: %s, force: %s, partial: %s\n"
708 % (bool(branchmerge), bool(force), bool(partial)))
708 % (bool(branchmerge), bool(force), bool(partial)))
709 repo.ui.debug(" ancestor: %s, local: %s, remote: %s\n" % (pa, wctx, p2))
709 repo.ui.debug(" ancestor: %s, local: %s, remote: %s\n" % (pa, wctx, p2))
710
710
711 m1, m2, ma = wctx.manifest(), p2.manifest(), pa.manifest()
711 m1, m2, ma = wctx.manifest(), p2.manifest(), pa.manifest()
712 copied = set(copy.values())
712 copied = set(copy.values())
713 copied.update(movewithdir.values())
713 copied.update(movewithdir.values())
714
714
715 if '.hgsubstate' in m1:
715 if '.hgsubstate' in m1:
716 # check whether sub state is modified
716 # check whether sub state is modified
717 for s in sorted(wctx.substate):
717 for s in sorted(wctx.substate):
718 if wctx.sub(s).dirty():
718 if wctx.sub(s).dirty():
719 m1['.hgsubstate'] += '+'
719 m1['.hgsubstate'] += '+'
720 break
720 break
721
721
722 # Compare manifests
722 # Compare manifests
723 diff = m1.diff(m2)
723 diff = m1.diff(m2)
724
724
725 actions = {}
725 actions = {}
726 for f, ((n1, fl1), (n2, fl2)) in diff.iteritems():
726 for f, ((n1, fl1), (n2, fl2)) in diff.iteritems():
727 if partial and not partial(f):
727 if partial and not partial(f):
728 continue
728 continue
729 if n1 and n2: # file exists on both local and remote side
729 if n1 and n2: # file exists on both local and remote side
730 if f not in ma:
730 if f not in ma:
731 fa = copy.get(f, None)
731 fa = copy.get(f, None)
732 if fa is not None:
732 if fa is not None:
733 actions[f] = ('m', (f, f, fa, False, pa.node()),
733 actions[f] = ('m', (f, f, fa, False, pa.node()),
734 "both renamed from " + fa)
734 "both renamed from " + fa)
735 else:
735 else:
736 actions[f] = ('m', (f, f, None, False, pa.node()),
736 actions[f] = ('m', (f, f, None, False, pa.node()),
737 "both created")
737 "both created")
738 else:
738 else:
739 a = ma[f]
739 a = ma[f]
740 fla = ma.flags(f)
740 fla = ma.flags(f)
741 nol = 'l' not in fl1 + fl2 + fla
741 nol = 'l' not in fl1 + fl2 + fla
742 if n2 == a and fl2 == fla:
742 if n2 == a and fl2 == fla:
743 actions[f] = ('k' , (), "remote unchanged")
743 actions[f] = ('k' , (), "remote unchanged")
744 elif n1 == a and fl1 == fla: # local unchanged - use remote
744 elif n1 == a and fl1 == fla: # local unchanged - use remote
745 if n1 == n2: # optimization: keep local content
745 if n1 == n2: # optimization: keep local content
746 actions[f] = ('e', (fl2,), "update permissions")
746 actions[f] = ('e', (fl2,), "update permissions")
747 else:
747 else:
748 actions[f] = ('g', (fl2,), "remote is newer")
748 actions[f] = ('g', (fl2,), "remote is newer")
749 elif nol and n2 == a: # remote only changed 'x'
749 elif nol and n2 == a: # remote only changed 'x'
750 actions[f] = ('e', (fl2,), "update permissions")
750 actions[f] = ('e', (fl2,), "update permissions")
751 elif nol and n1 == a: # local only changed 'x'
751 elif nol and n1 == a: # local only changed 'x'
752 actions[f] = ('g', (fl1,), "remote is newer")
752 actions[f] = ('g', (fl1,), "remote is newer")
753 else: # both changed something
753 else: # both changed something
754 actions[f] = ('m', (f, f, f, False, pa.node()),
754 actions[f] = ('m', (f, f, f, False, pa.node()),
755 "versions differ")
755 "versions differ")
756 elif n1: # file exists only on local side
756 elif n1: # file exists only on local side
757 if f in copied:
757 if f in copied:
758 pass # we'll deal with it on m2 side
758 pass # we'll deal with it on m2 side
759 elif f in movewithdir: # directory rename, move local
759 elif f in movewithdir: # directory rename, move local
760 f2 = movewithdir[f]
760 f2 = movewithdir[f]
761 if f2 in m2:
761 if f2 in m2:
762 actions[f2] = ('m', (f, f2, None, True, pa.node()),
762 actions[f2] = ('m', (f, f2, None, True, pa.node()),
763 "remote directory rename, both created")
763 "remote directory rename, both created")
764 else:
764 else:
765 actions[f2] = ('dm', (f, fl1),
765 actions[f2] = ('dm', (f, fl1),
766 "remote directory rename - move from " + f)
766 "remote directory rename - move from " + f)
767 elif f in copy:
767 elif f in copy:
768 f2 = copy[f]
768 f2 = copy[f]
769 actions[f] = ('m', (f, f2, f2, False, pa.node()),
769 actions[f] = ('m', (f, f2, f2, False, pa.node()),
770 "local copied/moved from " + f2)
770 "local copied/moved from " + f2)
771 elif f in ma: # clean, a different, no remote
771 elif f in ma: # clean, a different, no remote
772 if n1 != ma[f]:
772 if n1 != ma[f]:
773 if acceptremote:
773 if acceptremote:
774 actions[f] = ('r', None, "remote delete")
774 actions[f] = ('r', None, "remote delete")
775 else:
775 else:
776 actions[f] = ('cd', (f, None, f, False, pa.node()),
776 actions[f] = ('cd', (f, None, f, False, pa.node()),
777 "prompt changed/deleted")
777 "prompt changed/deleted")
778 elif n1[20:] == 'a':
778 elif n1[20:] == 'a':
779 # This extra 'a' is added by the working copy manifest to mark
779 # This extra 'a' is added by the working copy manifest to mark
780 # the file as locally added. We should forget it instead of
780 # the file as locally added. We should forget it instead of
781 # deleting it.
781 # deleting it.
782 actions[f] = ('f', None, "remote deleted")
782 actions[f] = ('f', None, "remote deleted")
783 else:
783 else:
784 actions[f] = ('r', None, "other deleted")
784 actions[f] = ('r', None, "other deleted")
785 elif n2: # file exists only on remote side
785 elif n2: # file exists only on remote side
786 if f in copied:
786 if f in copied:
787 pass # we'll deal with it on m1 side
787 pass # we'll deal with it on m1 side
788 elif f in movewithdir:
788 elif f in movewithdir:
789 f2 = movewithdir[f]
789 f2 = movewithdir[f]
790 if f2 in m1:
790 if f2 in m1:
791 actions[f2] = ('m', (f2, f, None, False, pa.node()),
791 actions[f2] = ('m', (f2, f, None, False, pa.node()),
792 "local directory rename, both created")
792 "local directory rename, both created")
793 else:
793 else:
794 actions[f2] = ('dg', (f, fl2),
794 actions[f2] = ('dg', (f, fl2),
795 "local directory rename - get from " + f)
795 "local directory rename - get from " + f)
796 elif f in copy:
796 elif f in copy:
797 f2 = copy[f]
797 f2 = copy[f]
798 if f2 in m2:
798 if f2 in m2:
799 actions[f] = ('m', (f2, f, f2, False, pa.node()),
799 actions[f] = ('m', (f2, f, f2, False, pa.node()),
800 "remote copied from " + f2)
800 "remote copied from " + f2)
801 else:
801 else:
802 actions[f] = ('m', (f2, f, f2, True, pa.node()),
802 actions[f] = ('m', (f2, f, f2, True, pa.node()),
803 "remote moved from " + f2)
803 "remote moved from " + f2)
804 elif f not in ma:
804 elif f not in ma:
805 # local unknown, remote created: the logic is described by the
805 # local unknown, remote created: the logic is described by the
806 # following table:
806 # following table:
807 #
807 #
808 # force branchmerge different | action
808 # force branchmerge different | action
809 # n * * | create
809 # n * * | create
810 # y n * | create
810 # y n * | create
811 # y y n | create
811 # y y n | create
812 # y y y | merge
812 # y y y | merge
813 #
813 #
814 # Checking whether the files are different is expensive, so we
814 # Checking whether the files are different is expensive, so we
815 # don't do that when we can avoid it.
815 # don't do that when we can avoid it.
816 if not force:
816 if not force:
817 actions[f] = ('c', (fl2,), "remote created")
817 actions[f] = ('c', (fl2,), "remote created")
818 elif not branchmerge:
818 elif not branchmerge:
819 actions[f] = ('c', (fl2,), "remote created")
819 actions[f] = ('c', (fl2,), "remote created")
820 else:
820 else:
821 actions[f] = ('cm', (fl2, pa.node()),
821 actions[f] = ('cm', (fl2, pa.node()),
822 "remote created, get or merge")
822 "remote created, get or merge")
823 elif n2 != ma[f]:
823 elif n2 != ma[f]:
824 if acceptremote:
824 if acceptremote:
825 actions[f] = ('c', (fl2,), "remote recreating")
825 actions[f] = ('c', (fl2,), "remote recreating")
826 else:
826 else:
827 actions[f] = ('dc', (None, f, f, False, pa.node()),
827 actions[f] = ('dc', (None, f, f, False, pa.node()),
828 "prompt deleted/changed")
828 "prompt deleted/changed")
829
829
830 return actions, diverge, renamedelete
830 return actions, diverge, renamedelete
831
831
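# Summary of the action codes emitted by manifestmerge above (inferred from
# the calls in the function body; the one-word glosses are informal):
#   'm'  merge                    'k'  keep local (remote unchanged)
#   'g'  get remote version       'e'  update permissions only
#   'r'  remove                   'f'  forget from the dirstate
#   'c'  remote created, get      'cm' remote created, get or merge
#   'cd' prompt changed/deleted   'dc' prompt deleted/changed
#   'dm' directory rename, move   'dg' directory rename, get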
832 def _resolvetrivial(repo, wctx, mctx, ancestor, actions):
832 def _resolvetrivial(repo, wctx, mctx, ancestor, actions):
833 """Resolves false conflicts where the nodeid changed but the content
833 """Resolves false conflicts where the nodeid changed but the content
834 remained the same."""
834 remained the same."""
835
835
836 for f, (m, args, msg) in actions.items():
836 for f, (m, args, msg) in actions.items():
837 if m == 'cd' and f in ancestor and not wctx[f].cmp(ancestor[f]):
837 if m == 'cd' and f in ancestor and not wctx[f].cmp(ancestor[f]):
838 # local did change but ended up with same content
838 # local did change but ended up with same content
839 actions[f] = 'r', None, "prompt same"
839 actions[f] = 'r', None, "prompt same"
840 elif m == 'dc' and f in ancestor and not mctx[f].cmp(ancestor[f]):
840 elif m == 'dc' and f in ancestor and not mctx[f].cmp(ancestor[f]):
841 # remote did change but ended up with same content
841 # remote did change but ended up with same content
842 del actions[f] # don't get = keep local deleted
842 del actions[f] # don't get = keep local deleted
843
843
844 def calculateupdates(repo, wctx, mctx, ancestors, branchmerge, force, partial,
844 def calculateupdates(repo, wctx, mctx, ancestors, branchmerge, force, partial,
845 acceptremote, followcopies):
845 acceptremote, followcopies):
846 "Calculate the actions needed to merge mctx into wctx using ancestors"
846 "Calculate the actions needed to merge mctx into wctx using ancestors"
847
847
848 if len(ancestors) == 1: # default
848 if len(ancestors) == 1: # default
849 actions, diverge, renamedelete = manifestmerge(
849 actions, diverge, renamedelete = manifestmerge(
850 repo, wctx, mctx, ancestors[0], branchmerge, force, partial,
850 repo, wctx, mctx, ancestors[0], branchmerge, force, partial,
851 acceptremote, followcopies)
851 acceptremote, followcopies)
852 _checkunknownfiles(repo, wctx, mctx, force, actions)
852 _checkunknownfiles(repo, wctx, mctx, force, actions)
853
853
854 else: # only when merge.preferancestor=* - the default
854 else: # only when merge.preferancestor=* - the default
855 repo.ui.note(
855 repo.ui.note(
856 _("note: merging %s and %s using bids from ancestors %s\n") %
856 _("note: merging %s and %s using bids from ancestors %s\n") %
857 (wctx, mctx, _(' and ').join(str(anc) for anc in ancestors)))
857 (wctx, mctx, _(' and ').join(str(anc) for anc in ancestors)))
858
858
859 # Call for bids
859 # Call for bids
860 fbids = {} # mapping filename to bids (action method to list of actions)
860 fbids = {} # mapping filename to bids (action method to list of actions)
861 diverge, renamedelete = None, None
861 diverge, renamedelete = None, None
862 for ancestor in ancestors:
862 for ancestor in ancestors:
863 repo.ui.note(_('\ncalculating bids for ancestor %s\n') % ancestor)
863 repo.ui.note(_('\ncalculating bids for ancestor %s\n') % ancestor)
864 actions, diverge1, renamedelete1 = manifestmerge(
864 actions, diverge1, renamedelete1 = manifestmerge(
865 repo, wctx, mctx, ancestor, branchmerge, force, partial,
865 repo, wctx, mctx, ancestor, branchmerge, force, partial,
866 acceptremote, followcopies)
866 acceptremote, followcopies)
867 _checkunknownfiles(repo, wctx, mctx, force, actions)
867 _checkunknownfiles(repo, wctx, mctx, force, actions)
868
868
869 # Track the shortest set of warnings on the theory that bid
869 # Track the shortest set of warnings on the theory that bid
870 # merge will correctly incorporate more information
870 # merge will correctly incorporate more information
871 if diverge is None or len(diverge1) < len(diverge):
871 if diverge is None or len(diverge1) < len(diverge):
872 diverge = diverge1
872 diverge = diverge1
873 if renamedelete is None or len(renamedelete1) < len(renamedelete):
873 if renamedelete is None or len(renamedelete1) < len(renamedelete):
874 renamedelete = renamedelete1
874 renamedelete = renamedelete1
875
875
876 for f, a in sorted(actions.iteritems()):
876 for f, a in sorted(actions.iteritems()):
877 m, args, msg = a
877 m, args, msg = a
878 repo.ui.debug(' %s: %s -> %s\n' % (f, msg, m))
878 repo.ui.debug(' %s: %s -> %s\n' % (f, msg, m))
879 if f in fbids:
879 if f in fbids:
880 d = fbids[f]
880 d = fbids[f]
881 if m in d:
881 if m in d:
882 d[m].append(a)
882 d[m].append(a)
883 else:
883 else:
884 d[m] = [a]
884 d[m] = [a]
885 else:
885 else:
886 fbids[f] = {m: [a]}
886 fbids[f] = {m: [a]}
887
887
888 # Pick the best bid for each file
888 # Pick the best bid for each file
889 repo.ui.note(_('\nauction for merging merge bids\n'))
889 repo.ui.note(_('\nauction for merging merge bids\n'))
890 actions = {}
890 actions = {}
891 for f, bids in sorted(fbids.items()):
891 for f, bids in sorted(fbids.items()):
892 # bids is a mapping from action method to list of actions
892 # bids is a mapping from action method to list of actions
893 # Consensus?
893 # Consensus?
894 if len(bids) == 1: # all bids are the same kind of method
894 if len(bids) == 1: # all bids are the same kind of method
895 m, l = bids.items()[0]
895 m, l = bids.items()[0]
896 if all(a == l[0] for a in l[1:]): # len(bids) is > 1
896 if all(a == l[0] for a in l[1:]): # len(bids) is > 1
897 repo.ui.note(" %s: consensus for %s\n" % (f, m))
897 repo.ui.note(" %s: consensus for %s\n" % (f, m))
898 actions[f] = l[0]
898 actions[f] = l[0]
899 continue
899 continue
900 # If keep is an option, just do it.
900 # If keep is an option, just do it.
901 if 'k' in bids:
901 if 'k' in bids:
902 repo.ui.note(" %s: picking 'keep' action\n" % f)
902 repo.ui.note(" %s: picking 'keep' action\n" % f)
903 actions[f] = bids['k'][0]
903 actions[f] = bids['k'][0]
904 continue
904 continue
905 # If there are gets and they all agree [how could they not?], do it.
905 # If there are gets and they all agree [how could they not?], do it.
906 if 'g' in bids:
906 if 'g' in bids:
907 ga0 = bids['g'][0]
907 ga0 = bids['g'][0]
908 if all(a == ga0 for a in bids['g'][1:]):
908 if all(a == ga0 for a in bids['g'][1:]):
909 repo.ui.note(" %s: picking 'get' action\n" % f)
909 repo.ui.note(" %s: picking 'get' action\n" % f)
910 actions[f] = ga0
910 actions[f] = ga0
911 continue
911 continue
912 # TODO: Consider other simple actions such as mode changes
912 # TODO: Consider other simple actions such as mode changes
913 # Handle inefficient democrazy.
913 # Handle inefficient democrazy.
914 repo.ui.note(_(' %s: multiple bids for merge action:\n') % f)
914 repo.ui.note(_(' %s: multiple bids for merge action:\n') % f)
915 for m, l in sorted(bids.items()):
915 for m, l in sorted(bids.items()):
916 for _f, args, msg in l:
916 for _f, args, msg in l:
917 repo.ui.note(' %s -> %s\n' % (msg, m))
917 repo.ui.note(' %s -> %s\n' % (msg, m))
918 # Pick random action. TODO: Instead, prompt user when resolving
918 # Pick random action. TODO: Instead, prompt user when resolving
919 m, l = bids.items()[0]
919 m, l = bids.items()[0]
920 repo.ui.warn(_(' %s: ambiguous merge - picked %s action\n') %
920 repo.ui.warn(_(' %s: ambiguous merge - picked %s action\n') %
921 (f, m))
921 (f, m))
922 actions[f] = l[0]
922 actions[f] = l[0]
923 continue
923 continue
924 repo.ui.note(_('end of auction\n\n'))
924 repo.ui.note(_('end of auction\n\n'))
925
925
926 _resolvetrivial(repo, wctx, mctx, ancestors[0], actions)
926 _resolvetrivial(repo, wctx, mctx, ancestors[0], actions)
927
927
928 if wctx.rev() is None:
928 if wctx.rev() is None:
929 fractions = _forgetremoved(wctx, mctx, branchmerge)
929 fractions = _forgetremoved(wctx, mctx, branchmerge)
930 actions.update(fractions)
930 actions.update(fractions)
931
931
932 return actions, diverge, renamedelete
932 return actions, diverge, renamedelete
933
933
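With several ancestors in play, calculateupdates runs the bid auction above: every ancestor files one bid per file, unanimous bids win outright, a 'keep' bid wins otherwise, and anything still ambiguous falls back to an arbitrary bid with a warning. A stripped-down sketch of just the consensus step, in plain Python and without the 'keep'/'get' shortcuts::

    def pickbids(fbids):
        # fbids: filename -> {action code: [(code, args, msg), ...]}
        chosen = {}
        for f, bids in sorted(fbids.items()):
            if len(bids) == 1:                    # only one kind of action
                m, proposals = list(bids.items())[0]
                if all(p == proposals[0] for p in proposals[1:]):
                    chosen[f] = proposals[0]      # consensus
                    continue
            # ambiguous: take the first bid, as the real code warns it does
            m, proposals = sorted(bids.items())[0]
            chosen[f] = proposals[0]
        return chosen

    fbids = {'a.txt': {'g': [('g', ('',), 'remote created'),
                             ('g', ('',), 'remote created')]},
             'b.txt': {'g': [('g', ('',), 'remote is newer')],
                       'r': [('r', None, 'other deleted')]}}
    pickbids(fbids)
    # a.txt gets its unanimous 'g' bid; b.txt falls back to its first bid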
934 def batchremove(repo, actions):
934 def batchremove(repo, actions):
935 """apply removes to the working directory
935 """apply removes to the working directory
936
936
937 yields tuples for progress updates
937 yields tuples for progress updates
938 """
938 """
939 verbose = repo.ui.verbose
939 verbose = repo.ui.verbose
940 unlink = util.unlinkpath
940 unlink = util.unlinkpath
941 wjoin = repo.wjoin
941 wjoin = repo.wjoin
942 audit = repo.wvfs.audit
942 audit = repo.wvfs.audit
943 i = 0
943 i = 0
944 for f, args, msg in actions:
944 for f, args, msg in actions:
945 repo.ui.debug(" %s: %s -> r\n" % (f, msg))
945 repo.ui.debug(" %s: %s -> r\n" % (f, msg))
946 if verbose:
946 if verbose:
947 repo.ui.note(_("removing %s\n") % f)
947 repo.ui.note(_("removing %s\n") % f)
948 audit(f)
948 audit(f)
949 try:
949 try:
950 unlink(wjoin(f), ignoremissing=True)
950 unlink(wjoin(f), ignoremissing=True)
951 except OSError as inst:
951 except OSError as inst:
952 repo.ui.warn(_("update failed to remove %s: %s!\n") %
952 repo.ui.warn(_("update failed to remove %s: %s!\n") %
953 (f, inst.strerror))
953 (f, inst.strerror))
954 if i == 100:
954 if i == 100:
955 yield i, f
955 yield i, f
956 i = 0
956 i = 0
957 i += 1
957 i += 1
958 if i > 0:
958 if i > 0:
959 yield i, f
959 yield i, f
960
960
961 def batchget(repo, mctx, actions):
961 def batchget(repo, mctx, actions):
962 """apply gets to the working directory
962 """apply gets to the working directory
963
963
964 mctx is the context to get from
964 mctx is the context to get from
965
965
966 yields tuples for progress updates
966 yields tuples for progress updates
967 """
967 """
968 verbose = repo.ui.verbose
968 verbose = repo.ui.verbose
969 fctx = mctx.filectx
969 fctx = mctx.filectx
970 wwrite = repo.wwrite
970 wwrite = repo.wwrite
971 i = 0
971 i = 0
972 for f, args, msg in actions:
972 for f, args, msg in actions:
973 repo.ui.debug(" %s: %s -> g\n" % (f, msg))
973 repo.ui.debug(" %s: %s -> g\n" % (f, msg))
974 if verbose:
974 if verbose:
975 repo.ui.note(_("getting %s\n") % f)
975 repo.ui.note(_("getting %s\n") % f)
976 wwrite(f, fctx(f).data(), args[0])
976 wwrite(f, fctx(f).data(), args[0])
977 if i == 100:
977 if i == 100:
978 yield i, f
978 yield i, f
979 i = 0
979 i = 0
980 i += 1
980 i += 1
981 if i > 0:
981 if i > 0:
982 yield i, f
982 yield i, f
983
983
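batchremove and batchget are written as generators so worker.worker can parallelise the file operations while the caller keeps the progress bar moving: each yields a (count, filename) tuple roughly every hundred files plus one final tuple for the remainder. The same pattern in isolation, with a no-op work function standing in for the unlink or write::

    def batchwork(items, work):
        i = 0
        item = None
        for item in items:
            work(item)
            i += 1
            if i == 100:
                yield i, item            # progress checkpoint
                i = 0
        if i > 0:
            yield i, item                # whatever is left over

    done = 0
    for count, last in batchwork(['f%d' % n for n in range(250)], lambda f: None):
        done += count                    # 100, 200, 250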
984 def applyupdates(repo, actions, wctx, mctx, overwrite, labels=None):
984 def applyupdates(repo, actions, wctx, mctx, overwrite, labels=None):
985 """apply the merge action list to the working directory
985 """apply the merge action list to the working directory
986
986
987 wctx is the working copy context
987 wctx is the working copy context
988 mctx is the context to be merged into the working copy
988 mctx is the context to be merged into the working copy
989
989
990 Return a tuple of counts (updated, merged, removed, unresolved) that
990 Return a tuple of counts (updated, merged, removed, unresolved) that
991 describes how many files were affected by the update.
991 describes how many files were affected by the update.
992 """
992 """
993
993
994 updated, merged, removed = 0, 0, 0
994 updated, merged, removed = 0, 0, 0
995 ms = mergestate.clean(repo, wctx.p1().node(), mctx.node())
995 ms = mergestate.clean(repo, wctx.p1().node(), mctx.node())
996 moves = []
996 moves = []
997 for m, l in actions.items():
997 for m, l in actions.items():
998 l.sort()
998 l.sort()
999
999
1000 # 'cd' and 'dc' actions are treated like other merge conflicts
1000 # 'cd' and 'dc' actions are treated like other merge conflicts
1001 mergeactions = sorted(actions['cd'])
1001 mergeactions = sorted(actions['cd'])
1002 mergeactions.extend(sorted(actions['dc']))
1002 mergeactions.extend(sorted(actions['dc']))
1003 mergeactions.extend(actions['m'])
1003 mergeactions.extend(actions['m'])
1004 for f, args, msg in mergeactions:
1004 for f, args, msg in mergeactions:
1005 f1, f2, fa, move, anc = args
1005 f1, f2, fa, move, anc = args
1006 if f == '.hgsubstate': # merged internally
1006 if f == '.hgsubstate': # merged internally
1007 continue
1007 continue
1008 if f1 is None:
1008 if f1 is None:
1009 fcl = filemerge.absentfilectx(wctx, fa)
1009 fcl = filemerge.absentfilectx(wctx, fa)
1010 else:
1010 else:
1011 repo.ui.debug(" preserving %s for resolve of %s\n" % (f1, f))
1011 repo.ui.debug(" preserving %s for resolve of %s\n" % (f1, f))
1012 fcl = wctx[f1]
1012 fcl = wctx[f1]
1013 if f2 is None:
1013 if f2 is None:
1014 fco = filemerge.absentfilectx(mctx, fa)
1014 fco = filemerge.absentfilectx(mctx, fa)
1015 else:
1015 else:
1016 fco = mctx[f2]
1016 fco = mctx[f2]
1017 actx = repo[anc]
1017 actx = repo[anc]
1018 if fa in actx:
1018 if fa in actx:
1019 fca = actx[fa]
1019 fca = actx[fa]
1020 else:
1020 else:
1021 # TODO: move to absentfilectx
1021 # TODO: move to absentfilectx
1022 fca = repo.filectx(f1, fileid=nullrev)
1022 fca = repo.filectx(f1, fileid=nullrev)
1023 ms.add(fcl, fco, fca, f)
1023 ms.add(fcl, fco, fca, f)
1024 if f1 != f and move:
1024 if f1 != f and move:
1025 moves.append(f1)
1025 moves.append(f1)
1026
1026
1027 audit = repo.wvfs.audit
1027 audit = repo.wvfs.audit
1028 _updating = _('updating')
1028 _updating = _('updating')
1029 _files = _('files')
1029 _files = _('files')
1030 progress = repo.ui.progress
1030 progress = repo.ui.progress
1031
1031
1032 # remove renamed files after safely stored
1032 # remove renamed files after safely stored
1033 for f in moves:
1033 for f in moves:
1034 if os.path.lexists(repo.wjoin(f)):
1034 if os.path.lexists(repo.wjoin(f)):
1035 repo.ui.debug("removing %s\n" % f)
1035 repo.ui.debug("removing %s\n" % f)
1036 audit(f)
1036 audit(f)
1037 util.unlinkpath(repo.wjoin(f))
1037 util.unlinkpath(repo.wjoin(f))
1038
1038
1039 numupdates = sum(len(l) for m, l in actions.items() if m != 'k')
1039 numupdates = sum(len(l) for m, l in actions.items() if m != 'k')
1040
1040
1041 if [a for a in actions['r'] if a[0] == '.hgsubstate']:
1041 if [a for a in actions['r'] if a[0] == '.hgsubstate']:
1042 subrepo.submerge(repo, wctx, mctx, wctx, overwrite)
1042 subrepo.submerge(repo, wctx, mctx, wctx, overwrite)
1043
1043
1044 # remove in parallel (must come first)
1044 # remove in parallel (must come first)
1045 z = 0
1045 z = 0
1046 prog = worker.worker(repo.ui, 0.001, batchremove, (repo,), actions['r'])
1046 prog = worker.worker(repo.ui, 0.001, batchremove, (repo,), actions['r'])
1047 for i, item in prog:
1047 for i, item in prog:
1048 z += i
1048 z += i
1049 progress(_updating, z, item=item, total=numupdates, unit=_files)
1049 progress(_updating, z, item=item, total=numupdates, unit=_files)
1050 removed = len(actions['r'])
1050 removed = len(actions['r'])
1051
1051
1052 # get in parallel
1052 # get in parallel
1053 prog = worker.worker(repo.ui, 0.001, batchget, (repo, mctx), actions['g'])
1053 prog = worker.worker(repo.ui, 0.001, batchget, (repo, mctx), actions['g'])
1054 for i, item in prog:
1054 for i, item in prog:
1055 z += i
1055 z += i
1056 progress(_updating, z, item=item, total=numupdates, unit=_files)
1056 progress(_updating, z, item=item, total=numupdates, unit=_files)
1057 updated = len(actions['g'])
1057 updated = len(actions['g'])
1058
1058
1059 if [a for a in actions['g'] if a[0] == '.hgsubstate']:
1059 if [a for a in actions['g'] if a[0] == '.hgsubstate']:
1060 subrepo.submerge(repo, wctx, mctx, wctx, overwrite)
1060 subrepo.submerge(repo, wctx, mctx, wctx, overwrite)
1061
1061
1062 # forget (manifest only, just log it) (must come first)
1062 # forget (manifest only, just log it) (must come first)
1063 for f, args, msg in actions['f']:
1063 for f, args, msg in actions['f']:
1064 repo.ui.debug(" %s: %s -> f\n" % (f, msg))
1064 repo.ui.debug(" %s: %s -> f\n" % (f, msg))
1065 z += 1
1065 z += 1
1066 progress(_updating, z, item=f, total=numupdates, unit=_files)
1066 progress(_updating, z, item=f, total=numupdates, unit=_files)
1067
1067
1068 # re-add (manifest only, just log it)
1068 # re-add (manifest only, just log it)
1069 for f, args, msg in actions['a']:
1069 for f, args, msg in actions['a']:
1070 repo.ui.debug(" %s: %s -> a\n" % (f, msg))
1070 repo.ui.debug(" %s: %s -> a\n" % (f, msg))
1071 z += 1
1071 z += 1
1072 progress(_updating, z, item=f, total=numupdates, unit=_files)
1072 progress(_updating, z, item=f, total=numupdates, unit=_files)
1073
1073
1074 # re-add/mark as modified (manifest only, just log it)
1074 # re-add/mark as modified (manifest only, just log it)
1075 for f, args, msg in actions['am']:
1075 for f, args, msg in actions['am']:
1076 repo.ui.debug(" %s: %s -> am\n" % (f, msg))
1076 repo.ui.debug(" %s: %s -> am\n" % (f, msg))
1077 z += 1
1077 z += 1
1078 progress(_updating, z, item=f, total=numupdates, unit=_files)
1078 progress(_updating, z, item=f, total=numupdates, unit=_files)
1079
1079
1080 # keep (noop, just log it)
1080 # keep (noop, just log it)
1081 for f, args, msg in actions['k']:
1081 for f, args, msg in actions['k']:
1082 repo.ui.debug(" %s: %s -> k\n" % (f, msg))
1082 repo.ui.debug(" %s: %s -> k\n" % (f, msg))
1083 # no progress
1083 # no progress
1084
1084
1085 # directory rename, move local
1085 # directory rename, move local
1086 for f, args, msg in actions['dm']:
1086 for f, args, msg in actions['dm']:
1087 repo.ui.debug(" %s: %s -> dm\n" % (f, msg))
1087 repo.ui.debug(" %s: %s -> dm\n" % (f, msg))
1088 z += 1
1088 z += 1
1089 progress(_updating, z, item=f, total=numupdates, unit=_files)
1089 progress(_updating, z, item=f, total=numupdates, unit=_files)
1090 f0, flags = args
1090 f0, flags = args
1091 repo.ui.note(_("moving %s to %s\n") % (f0, f))
1091 repo.ui.note(_("moving %s to %s\n") % (f0, f))
1092 audit(f)
1092 audit(f)
1093 repo.wwrite(f, wctx.filectx(f0).data(), flags)
1093 repo.wwrite(f, wctx.filectx(f0).data(), flags)
1094 util.unlinkpath(repo.wjoin(f0))
1094 util.unlinkpath(repo.wjoin(f0))
1095 updated += 1
1095 updated += 1
1096
1096
1097 # local directory rename, get
1097 # local directory rename, get
1098 for f, args, msg in actions['dg']:
1098 for f, args, msg in actions['dg']:
1099 repo.ui.debug(" %s: %s -> dg\n" % (f, msg))
1099 repo.ui.debug(" %s: %s -> dg\n" % (f, msg))
1100 z += 1
1100 z += 1
1101 progress(_updating, z, item=f, total=numupdates, unit=_files)
1101 progress(_updating, z, item=f, total=numupdates, unit=_files)
1102 f0, flags = args
1102 f0, flags = args
1103 repo.ui.note(_("getting %s to %s\n") % (f0, f))
1103 repo.ui.note(_("getting %s to %s\n") % (f0, f))
1104 repo.wwrite(f, mctx.filectx(f0).data(), flags)
1104 repo.wwrite(f, mctx.filectx(f0).data(), flags)
1105 updated += 1
1105 updated += 1
1106
1106
1107 # exec
1107 # exec
1108 for f, args, msg in actions['e']:
1108 for f, args, msg in actions['e']:
1109 repo.ui.debug(" %s: %s -> e\n" % (f, msg))
1109 repo.ui.debug(" %s: %s -> e\n" % (f, msg))
1110 z += 1
1110 z += 1
1111 progress(_updating, z, item=f, total=numupdates, unit=_files)
1111 progress(_updating, z, item=f, total=numupdates, unit=_files)
1112 flags, = args
1112 flags, = args
1113 audit(f)
1113 audit(f)
1114 util.setflags(repo.wjoin(f), 'l' in flags, 'x' in flags)
1114 util.setflags(repo.wjoin(f), 'l' in flags, 'x' in flags)
1115 updated += 1
1115 updated += 1
1116
1116
1117 # the ordering is important here -- ms.mergedriver will raise if the merge
1117 # the ordering is important here -- ms.mergedriver will raise if the merge
1118 # driver has changed, and we want to be able to bypass it when overwrite is
1118 # driver has changed, and we want to be able to bypass it when overwrite is
1119 # True
1119 # True
1120 usemergedriver = not overwrite and mergeactions and ms.mergedriver
1120 usemergedriver = not overwrite and mergeactions and ms.mergedriver
1121
1121
1122 if usemergedriver:
1122 if usemergedriver:
1123 ms.commit()
1123 ms.commit()
1124 proceed = driverpreprocess(repo, ms, wctx, labels=labels)
1124 proceed = driverpreprocess(repo, ms, wctx, labels=labels)
1125 # the driver might leave some files unresolved
1125 # the driver might leave some files unresolved
1126 unresolvedf = set(ms.unresolved())
1126 unresolvedf = set(ms.unresolved())
1127 if not proceed:
1127 if not proceed:
1128 # XXX setting unresolved to at least 1 is a hack to make sure we
1128 # XXX setting unresolved to at least 1 is a hack to make sure we
1129 # error out
1129 # error out
1130 return updated, merged, removed, max(len(unresolvedf), 1)
1130 return updated, merged, removed, max(len(unresolvedf), 1)
1131 newactions = []
1131 newactions = []
1132 for f, args, msg in mergeactions:
1132 for f, args, msg in mergeactions:
1133 if f in unresolvedf:
1133 if f in unresolvedf:
1134 newactions.append((f, args, msg))
1134 newactions.append((f, args, msg))
1135 mergeactions = newactions
1135 mergeactions = newactions
1136
1136
1137 # premerge
1137 # premerge
1138 tocomplete = []
1138 tocomplete = []
1139 for f, args, msg in mergeactions:
1139 for f, args, msg in mergeactions:
1140 repo.ui.debug(" %s: %s -> m (premerge)\n" % (f, msg))
1140 repo.ui.debug(" %s: %s -> m (premerge)\n" % (f, msg))
1141 z += 1
1141 z += 1
1142 progress(_updating, z, item=f, total=numupdates, unit=_files)
1142 progress(_updating, z, item=f, total=numupdates, unit=_files)
1143 if f == '.hgsubstate': # subrepo states need updating
1143 if f == '.hgsubstate': # subrepo states need updating
1144 subrepo.submerge(repo, wctx, mctx, wctx.ancestor(mctx),
1144 subrepo.submerge(repo, wctx, mctx, wctx.ancestor(mctx),
1145 overwrite)
1145 overwrite)
1146 continue
1146 continue
1147 audit(f)
1147 audit(f)
1148 complete, r = ms.preresolve(f, wctx, labels=labels)
1148 complete, r = ms.preresolve(f, wctx, labels=labels)
1149 if not complete:
1149 if not complete:
1150 numupdates += 1
1150 numupdates += 1
1151 tocomplete.append((f, args, msg))
1151 tocomplete.append((f, args, msg))
1152
1152
1153 # merge
1153 # merge
1154 for f, args, msg in tocomplete:
1154 for f, args, msg in tocomplete:
1155 repo.ui.debug(" %s: %s -> m (merge)\n" % (f, msg))
1155 repo.ui.debug(" %s: %s -> m (merge)\n" % (f, msg))
1156 z += 1
1156 z += 1
1157 progress(_updating, z, item=f, total=numupdates, unit=_files)
1157 progress(_updating, z, item=f, total=numupdates, unit=_files)
1158 ms.resolve(f, wctx, labels=labels)
1158 ms.resolve(f, wctx, labels=labels)
1159
1159
1160 ms.commit()
1160 ms.commit()
1161
1161
1162 unresolved = ms.unresolvedcount()
1162 unresolved = ms.unresolvedcount()
1163
1163
1164 if usemergedriver and not unresolved and ms.mdstate() != 's':
1164 if usemergedriver and not unresolved and ms.mdstate() != 's':
1165 if not driverconclude(repo, ms, wctx, labels=labels):
1165 if not driverconclude(repo, ms, wctx, labels=labels):
1166 # XXX setting unresolved to at least 1 is a hack to make sure we
1166 # XXX setting unresolved to at least 1 is a hack to make sure we
1167 # error out
1167 # error out
1168 unresolved = max(unresolved, 1)
1168 unresolved = max(unresolved, 1)
1169
1169
1170 ms.commit()
1170 ms.commit()
1171
1171
1172 msupdated, msmerged, msremoved = ms.counts()
1172 msupdated, msmerged, msremoved = ms.counts()
1173 updated += msupdated
1173 updated += msupdated
1174 merged += msmerged
1174 merged += msmerged
1175 removed += msremoved
1175 removed += msremoved
1176
1176
1177 extraactions = ms.actions()
1177 extraactions = ms.actions()
1178 for k, acts in extraactions.iteritems():
1178 for k, acts in extraactions.iteritems():
1179 actions[k].extend(acts)
1179 actions[k].extend(acts)
1180
1180
1181 progress(_updating, None, total=numupdates, unit=_files)
1181 progress(_updating, None, total=numupdates, unit=_files)
1182
1182
1183 return updated, merged, removed, unresolved
1183 return updated, merged, removed, unresolved
1184
1184
1185 def recordupdates(repo, actions, branchmerge):
1185 def recordupdates(repo, actions, branchmerge):
1186 "record merge actions to the dirstate"
1186 "record merge actions to the dirstate"
1187 # remove (must come first)
1187 # remove (must come first)
1188 for f, args, msg in actions.get('r', []):
1188 for f, args, msg in actions.get('r', []):
1189 if branchmerge:
1189 if branchmerge:
1190 repo.dirstate.remove(f)
1190 repo.dirstate.remove(f)
1191 else:
1191 else:
1192 repo.dirstate.drop(f)
1192 repo.dirstate.drop(f)
1193
1193
1194 # forget (must come first)
1194 # forget (must come first)
1195 for f, args, msg in actions.get('f', []):
1195 for f, args, msg in actions.get('f', []):
1196 repo.dirstate.drop(f)
1196 repo.dirstate.drop(f)
1197
1197
1198 # re-add
1198 # re-add
1199 for f, args, msg in actions.get('a', []):
1199 for f, args, msg in actions.get('a', []):
1200 repo.dirstate.add(f)
1200 repo.dirstate.add(f)
1201
1201
1202 # re-add/mark as modified
1202 # re-add/mark as modified
1203 for f, args, msg in actions.get('am', []):
1203 for f, args, msg in actions.get('am', []):
1204 if branchmerge:
1204 if branchmerge:
1205 repo.dirstate.normallookup(f)
1205 repo.dirstate.normallookup(f)
1206 else:
1206 else:
1207 repo.dirstate.add(f)
1207 repo.dirstate.add(f)
1208
1208
1209 # exec change
1209 # exec change
1210 for f, args, msg in actions.get('e', []):
1210 for f, args, msg in actions.get('e', []):
1211 repo.dirstate.normallookup(f)
1211 repo.dirstate.normallookup(f)
1212
1212
1213 # keep
1213 # keep
1214 for f, args, msg in actions.get('k', []):
1214 for f, args, msg in actions.get('k', []):
1215 pass
1215 pass
1216
1216
1217 # get
1217 # get
1218 for f, args, msg in actions.get('g', []):
1218 for f, args, msg in actions.get('g', []):
1219 if branchmerge:
1219 if branchmerge:
1220 repo.dirstate.otherparent(f)
1220 repo.dirstate.otherparent(f)
1221 else:
1221 else:
1222 repo.dirstate.normal(f)
1222 repo.dirstate.normal(f)
1223
1223
1224 # merge
1224 # merge
1225 for f, args, msg in actions.get('m', []):
1225 for f, args, msg in actions.get('m', []):
1226 f1, f2, fa, move, anc = args
1226 f1, f2, fa, move, anc = args
1227 if branchmerge:
1227 if branchmerge:
1228 # We've done a branch merge, mark this file as merged
1228 # We've done a branch merge, mark this file as merged
1229 # so that we properly record the merger later
1229 # so that we properly record the merger later
1230 repo.dirstate.merge(f)
1230 repo.dirstate.merge(f)
1231 if f1 != f2: # copy/rename
1231 if f1 != f2: # copy/rename
1232 if move:
1232 if move:
1233 repo.dirstate.remove(f1)
1233 repo.dirstate.remove(f1)
1234 if f1 != f:
1234 if f1 != f:
1235 repo.dirstate.copy(f1, f)
1235 repo.dirstate.copy(f1, f)
1236 else:
1236 else:
1237 repo.dirstate.copy(f2, f)
1237 repo.dirstate.copy(f2, f)
1238 else:
1238 else:
1239 # We've update-merged a locally modified file, so
1239 # We've update-merged a locally modified file, so
1240 # we set the dirstate to emulate a normal checkout
1240 # we set the dirstate to emulate a normal checkout
1241 # of that file some time in the past. Thus our
1241 # of that file some time in the past. Thus our
1242 # merge will appear as a normal local file
1242 # merge will appear as a normal local file
1243 # modification.
1243 # modification.
1244 if f2 == f: # file not locally copied/moved
1244 if f2 == f: # file not locally copied/moved
1245 repo.dirstate.normallookup(f)
1245 repo.dirstate.normallookup(f)
1246 if move:
1246 if move:
1247 repo.dirstate.drop(f1)
1247 repo.dirstate.drop(f1)
1248
1248
1249 # directory rename, move local
1249 # directory rename, move local
1250 for f, args, msg in actions.get('dm', []):
1250 for f, args, msg in actions.get('dm', []):
1251 f0, flag = args
1251 f0, flag = args
1252 if branchmerge:
1252 if branchmerge:
1253 repo.dirstate.add(f)
1253 repo.dirstate.add(f)
1254 repo.dirstate.remove(f0)
1254 repo.dirstate.remove(f0)
1255 repo.dirstate.copy(f0, f)
1255 repo.dirstate.copy(f0, f)
1256 else:
1256 else:
1257 repo.dirstate.normal(f)
1257 repo.dirstate.normal(f)
1258 repo.dirstate.drop(f0)
1258 repo.dirstate.drop(f0)
1259
1259
1260 # directory rename, get
1260 # directory rename, get
1261 for f, args, msg in actions.get('dg', []):
1261 for f, args, msg in actions.get('dg', []):
1262 f0, flag = args
1262 f0, flag = args
1263 if branchmerge:
1263 if branchmerge:
1264 repo.dirstate.add(f)
1264 repo.dirstate.add(f)
1265 repo.dirstate.copy(f0, f)
1265 repo.dirstate.copy(f0, f)
1266 else:
1266 else:
1267 repo.dirstate.normal(f)
1267 repo.dirstate.normal(f)
1268
1268
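applyupdates and recordupdates both walk the same dictionary-of-lists structure that update() assembles below: a mapping from action code ('r', 'g', 'm', 'k' and friends) to lists of (filename, args, message) tuples. A tiny sketch of building and reading that structure, with made-up filenames::

    actions = dict((m, []) for m in 'a am f g cd dc r dm dg m e k'.split())
    actions['g'].append(('newfile.txt', ('',), 'remote created'))
    actions['r'].append(('oldfile.txt', None, 'other deleted'))

    for f, args, msg in actions['r']:
        print(' %s: %s -> r' % (f, msg))   # would be removed from the wd
    for f, args, msg in actions['g']:
        flags, = args                      # 'g' args carry the file flags
        print(' %s: %s -> g' % (f, msg))   # would be fetched from mctx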
1269 def update(repo, node, branchmerge, force, partial, ancestor=None,
1269 def update(repo, node, branchmerge, force, ancestor=None,
1270 mergeancestor=False, labels=None):
1270 mergeancestor=False, labels=None, matcher=None):
1271 """
1271 """
1272 Perform a merge between the working directory and the given node
1272 Perform a merge between the working directory and the given node
1273
1273
1274 node = the node to update to, or None if unspecified
1274 node = the node to update to, or None if unspecified
1275 branchmerge = whether to merge between branches
1275 branchmerge = whether to merge between branches
1276 force = whether to force branch merging or file overwriting
1276 force = whether to force branch merging or file overwriting
1277 partial = a function to filter file lists (dirstate not updated)
1277 matcher = a matcher to filter file lists (dirstate not updated)
1278 mergeancestor = whether it is merging with an ancestor. If true,
1278 mergeancestor = whether it is merging with an ancestor. If true,
1279 we should accept the incoming changes for any prompts that occur.
1279 we should accept the incoming changes for any prompts that occur.
1280 If false, merging with an ancestor (fast-forward) is only allowed
1280 If false, merging with an ancestor (fast-forward) is only allowed
1281 between different named branches. This flag is used by rebase extension
1281 between different named branches. This flag is used by rebase extension
1282 as a temporary fix and should be avoided in general.
1282 as a temporary fix and should be avoided in general.
1283
1283
1284 The table below shows all the behaviors of the update command
1284 The table below shows all the behaviors of the update command
1285 given the -c and -C or no options, whether the working directory
1285 given the -c and -C or no options, whether the working directory
1286 is dirty, whether a revision is specified, and the relationship of
1286 is dirty, whether a revision is specified, and the relationship of
1287 the parent rev to the target rev (linear, on the same named
1287 the parent rev to the target rev (linear, on the same named
1288 branch, or on another named branch).
1288 branch, or on another named branch).
1289
1289
1290 This logic is tested by test-update-branches.t.
1290 This logic is tested by test-update-branches.t.
1291
1291
1292 -c -C dirty rev | linear same cross
1292 -c -C dirty rev | linear same cross
1293 n n n n | ok (1) x
1293 n n n n | ok (1) x
1294 n n n y | ok ok ok
1294 n n n y | ok ok ok
1295 n n y n | merge (2) (2)
1295 n n y n | merge (2) (2)
1296 n n y y | merge (3) (3)
1296 n n y y | merge (3) (3)
1297 n y * * | discard discard discard
1297 n y * * | discard discard discard
1298 y n y * | (4) (4) (4)
1298 y n y * | (4) (4) (4)
1299 y n n * | ok ok ok
1299 y n n * | ok ok ok
1300 y y * * | (5) (5) (5)
1300 y y * * | (5) (5) (5)
1301
1301
1302 x = can't happen
1302 x = can't happen
1303 * = don't-care
1303 * = don't-care
1304 1 = abort: not a linear update (merge or update --check to force update)
1304 1 = abort: not a linear update (merge or update --check to force update)
1305 2 = abort: uncommitted changes (commit and merge, or update --clean to
1305 2 = abort: uncommitted changes (commit and merge, or update --clean to
1306 discard changes)
1306 discard changes)
1307 3 = abort: uncommitted changes (commit or update --clean to discard changes)
1307 3 = abort: uncommitted changes (commit or update --clean to discard changes)
1308 4 = abort: uncommitted changes (checked in commands.py)
1308 4 = abort: uncommitted changes (checked in commands.py)
1309 5 = incompatible options (checked in commands.py)
1309 5 = incompatible options (checked in commands.py)
1310
1310
1311 Return the same tuple as applyupdates().
1311 Return the same tuple as applyupdates().
1312 """
1312 """
1313
1313
1314 onode = node
1314 onode = node
1315 wlock = repo.wlock()
1315 wlock = repo.wlock()
1316 # If we're doing a partial update, we need to skip updating
1317 # the dirstate, so make a note of any partial-ness to the
1318 # update here.
1319 if matcher is None or matcher.always():
1320 partial = False
1321 else:
1322 partial = True
1316 try:
1323 try:
1317 wc = repo[None]
1324 wc = repo[None]
1318 pl = wc.parents()
1325 pl = wc.parents()
1319 p1 = pl[0]
1326 p1 = pl[0]
1320 pas = [None]
1327 pas = [None]
1321 if ancestor is not None:
1328 if ancestor is not None:
1322 pas = [repo[ancestor]]
1329 pas = [repo[ancestor]]
1323
1330
1324 if node is None:
1331 if node is None:
1325 if (repo.ui.configbool('devel', 'all-warnings')
1332 if (repo.ui.configbool('devel', 'all-warnings')
1326 or repo.ui.configbool('devel', 'oldapi')):
1333 or repo.ui.configbool('devel', 'oldapi')):
1327 repo.ui.develwarn('update with no target')
1334 repo.ui.develwarn('update with no target')
1328 rev, _mark, _act = destutil.destupdate(repo)
1335 rev, _mark, _act = destutil.destupdate(repo)
1329 node = repo[rev].node()
1336 node = repo[rev].node()
1330
1337
1331 overwrite = force and not branchmerge
1338 overwrite = force and not branchmerge
1332
1339
1333 p2 = repo[node]
1340 p2 = repo[node]
1334 if pas[0] is None:
1341 if pas[0] is None:
1335 if repo.ui.configlist('merge', 'preferancestor', ['*']) == ['*']:
1342 if repo.ui.configlist('merge', 'preferancestor', ['*']) == ['*']:
1336 cahs = repo.changelog.commonancestorsheads(p1.node(), p2.node())
1343 cahs = repo.changelog.commonancestorsheads(p1.node(), p2.node())
1337 pas = [repo[anc] for anc in (sorted(cahs) or [nullid])]
1344 pas = [repo[anc] for anc in (sorted(cahs) or [nullid])]
1338 else:
1345 else:
1339 pas = [p1.ancestor(p2, warn=branchmerge)]
1346 pas = [p1.ancestor(p2, warn=branchmerge)]
1340
1347
1341 fp1, fp2, xp1, xp2 = p1.node(), p2.node(), str(p1), str(p2)
1348 fp1, fp2, xp1, xp2 = p1.node(), p2.node(), str(p1), str(p2)
1342
1349
1343 ### check phase
1350 ### check phase
1344 if not overwrite:
1351 if not overwrite:
1345 if len(pl) > 1:
1352 if len(pl) > 1:
1346 raise error.Abort(_("outstanding uncommitted merge"))
1353 raise error.Abort(_("outstanding uncommitted merge"))
1347 ms = mergestate.read(repo)
1354 ms = mergestate.read(repo)
1348 if list(ms.unresolved()):
1355 if list(ms.unresolved()):
1349 raise error.Abort(_("outstanding merge conflicts"))
1356 raise error.Abort(_("outstanding merge conflicts"))
1350 if branchmerge:
1357 if branchmerge:
1351 if pas == [p2]:
1358 if pas == [p2]:
1352 raise error.Abort(_("merging with a working directory ancestor"
1359 raise error.Abort(_("merging with a working directory ancestor"
1353 " has no effect"))
1360 " has no effect"))
1354 elif pas == [p1]:
1361 elif pas == [p1]:
1355 if not mergeancestor and p1.branch() == p2.branch():
1362 if not mergeancestor and p1.branch() == p2.branch():
1356 raise error.Abort(_("nothing to merge"),
1363 raise error.Abort(_("nothing to merge"),
1357 hint=_("use 'hg update' "
1364 hint=_("use 'hg update' "
1358 "or check 'hg heads'"))
1365 "or check 'hg heads'"))
1359 if not force and (wc.files() or wc.deleted()):
1366 if not force and (wc.files() or wc.deleted()):
1360 raise error.Abort(_("uncommitted changes"),
1367 raise error.Abort(_("uncommitted changes"),
1361 hint=_("use 'hg status' to list changes"))
1368 hint=_("use 'hg status' to list changes"))
1362 for s in sorted(wc.substate):
1369 for s in sorted(wc.substate):
1363 wc.sub(s).bailifchanged()
1370 wc.sub(s).bailifchanged()
1364
1371
1365 elif not overwrite:
1372 elif not overwrite:
1366 if p1 == p2: # no-op update
1373 if p1 == p2: # no-op update
1367 # call the hooks and exit early
1374 # call the hooks and exit early
1368 repo.hook('preupdate', throw=True, parent1=xp2, parent2='')
1375 repo.hook('preupdate', throw=True, parent1=xp2, parent2='')
1369 repo.hook('update', parent1=xp2, parent2='', error=0)
1376 repo.hook('update', parent1=xp2, parent2='', error=0)
1370 return 0, 0, 0, 0
1377 return 0, 0, 0, 0
1371
1378
1372 if pas not in ([p1], [p2]): # nonlinear
1379 if pas not in ([p1], [p2]): # nonlinear
1373 dirty = wc.dirty(missing=True)
1380 dirty = wc.dirty(missing=True)
1374 if dirty or onode is None:
1381 if dirty or onode is None:
1375 # Branching is a bit strange to ensure we do the minimal
1382 # Branching is a bit strange to ensure we do the minimal
1376 # amount of call to obsolete.background.
1383 # amount of call to obsolete.background.
1377 foreground = obsolete.foreground(repo, [p1.node()])
1384 foreground = obsolete.foreground(repo, [p1.node()])
1378 # note: the <node> variable contains a random identifier
1385 # note: the <node> variable contains a random identifier
1379 if repo[node].node() in foreground:
1386 if repo[node].node() in foreground:
1380 pas = [p1] # allow updating to successors
1387 pas = [p1] # allow updating to successors
1381 elif dirty:
1388 elif dirty:
1382 msg = _("uncommitted changes")
1389 msg = _("uncommitted changes")
1383 if onode is None:
1390 if onode is None:
1384 hint = _("commit and merge, or update --clean to"
1391 hint = _("commit and merge, or update --clean to"
1385 " discard changes")
1392 " discard changes")
1386 else:
1393 else:
1387 hint = _("commit or update --clean to discard"
1394 hint = _("commit or update --clean to discard"
1388 " changes")
1395 " changes")
1389 raise error.Abort(msg, hint=hint)
1396 raise error.Abort(msg, hint=hint)
1390 else: # node is none
1397 else: # node is none
1391 msg = _("not a linear update")
1398 msg = _("not a linear update")
1392 hint = _("merge or update --check to force update")
1399 hint = _("merge or update --check to force update")
1393 raise error.Abort(msg, hint=hint)
1400 raise error.Abort(msg, hint=hint)
1394 else:
1401 else:
1395 # Allow jumping branches if clean and specific rev given
1402 # Allow jumping branches if clean and specific rev given
1396 pas = [p1]
1403 pas = [p1]
1397
1404
1398 # deprecated config: merge.followcopies
1405 # deprecated config: merge.followcopies
1399 followcopies = False
1406 followcopies = False
1400 if overwrite:
1407 if overwrite:
1401 pas = [wc]
1408 pas = [wc]
1402 elif pas == [p2]: # backwards
1409 elif pas == [p2]: # backwards
1403 pas = [wc.p1()]
1410 pas = [wc.p1()]
1404 elif not branchmerge and not wc.dirty(missing=True):
1411 elif not branchmerge and not wc.dirty(missing=True):
1405 pass
1412 pass
1406 elif pas[0] and repo.ui.configbool('merge', 'followcopies', True):
1413 elif pas[0] and repo.ui.configbool('merge', 'followcopies', True):
1407 followcopies = True
1414 followcopies = True
1408
1415
1409 ### calculate phase
1416 ### calculate phase
1417 if matcher is None or matcher.always():
1418 partial = False
1419 else:
1420 partial = matcher.matchfn
1410 actionbyfile, diverge, renamedelete = calculateupdates(
1421 actionbyfile, diverge, renamedelete = calculateupdates(
1411 repo, wc, p2, pas, branchmerge, force, partial, mergeancestor,
1422 repo, wc, p2, pas, branchmerge, force, partial, mergeancestor,
1412 followcopies)
1423 followcopies)
1413 # Convert to dictionary-of-lists format
1424 # Convert to dictionary-of-lists format
1414 actions = dict((m, []) for m in 'a am f g cd dc r dm dg m e k'.split())
1425 actions = dict((m, []) for m in 'a am f g cd dc r dm dg m e k'.split())
1415 for f, (m, args, msg) in actionbyfile.iteritems():
1426 for f, (m, args, msg) in actionbyfile.iteritems():
1416 if m not in actions:
1427 if m not in actions:
1417 actions[m] = []
1428 actions[m] = []
1418 actions[m].append((f, args, msg))
1429 actions[m].append((f, args, msg))
1419
1430
1420 if not util.checkcase(repo.path):
1431 if not util.checkcase(repo.path):
1421 # check collision between files only in p2 for clean update
1432 # check collision between files only in p2 for clean update
1422 if (not branchmerge and
1433 if (not branchmerge and
1423 (force or not wc.dirty(missing=True, branch=False))):
1434 (force or not wc.dirty(missing=True, branch=False))):
1424 _checkcollision(repo, p2.manifest(), None)
1435 _checkcollision(repo, p2.manifest(), None)
1425 else:
1436 else:
1426 _checkcollision(repo, wc.manifest(), actions)
1437 _checkcollision(repo, wc.manifest(), actions)
1427
1438
1428 # Prompt and create actions. Most of this is in the resolve phase
1439 # Prompt and create actions. Most of this is in the resolve phase
1429 # already, but we can't handle .hgsubstate in filemerge or
1440 # already, but we can't handle .hgsubstate in filemerge or
1430 # subrepo.submerge yet so we have to keep prompting for it.
1441 # subrepo.submerge yet so we have to keep prompting for it.
1431 for f, args, msg in sorted(actions['cd']):
1442 for f, args, msg in sorted(actions['cd']):
1432 if f != '.hgsubstate':
1443 if f != '.hgsubstate':
1433 continue
1444 continue
1434 if repo.ui.promptchoice(
1445 if repo.ui.promptchoice(
1435 _("local changed %s which remote deleted\n"
1446 _("local changed %s which remote deleted\n"
1436 "use (c)hanged version or (d)elete?"
1447 "use (c)hanged version or (d)elete?"
1437 "$$ &Changed $$ &Delete") % f, 0):
1448 "$$ &Changed $$ &Delete") % f, 0):
1438 actions['r'].append((f, None, "prompt delete"))
1449 actions['r'].append((f, None, "prompt delete"))
1439 elif f in p1:
1450 elif f in p1:
1440 actions['am'].append((f, None, "prompt keep"))
1451 actions['am'].append((f, None, "prompt keep"))
1441 else:
1452 else:
1442 actions['a'].append((f, None, "prompt keep"))
1453 actions['a'].append((f, None, "prompt keep"))
1443
1454
1444 for f, args, msg in sorted(actions['dc']):
1455 for f, args, msg in sorted(actions['dc']):
1445 if f != '.hgsubstate':
1456 if f != '.hgsubstate':
1446 continue
1457 continue
1447 f1, f2, fa, move, anc = args
1458 f1, f2, fa, move, anc = args
1448 flags = p2[f2].flags()
1459 flags = p2[f2].flags()
1449 if repo.ui.promptchoice(
1460 if repo.ui.promptchoice(
1450 _("remote changed %s which local deleted\n"
1461 _("remote changed %s which local deleted\n"
1451 "use (c)hanged version or leave (d)eleted?"
1462 "use (c)hanged version or leave (d)eleted?"
1452 "$$ &Changed $$ &Deleted") % f, 0) == 0:
1463 "$$ &Changed $$ &Deleted") % f, 0) == 0:
1453 actions['g'].append((f, (flags,), "prompt recreating"))
1464 actions['g'].append((f, (flags,), "prompt recreating"))
1454
1465
1455 # divergent renames
1466 # divergent renames
1456 for f, fl in sorted(diverge.iteritems()):
1467 for f, fl in sorted(diverge.iteritems()):
1457 repo.ui.warn(_("note: possible conflict - %s was renamed "
1468 repo.ui.warn(_("note: possible conflict - %s was renamed "
1458 "multiple times to:\n") % f)
1469 "multiple times to:\n") % f)
1459 for nf in fl:
1470 for nf in fl:
1460 repo.ui.warn(" %s\n" % nf)
1471 repo.ui.warn(" %s\n" % nf)
1461
1472
1462 # rename and delete
1473 # rename and delete
1463 for f, fl in sorted(renamedelete.iteritems()):
1474 for f, fl in sorted(renamedelete.iteritems()):
1464 repo.ui.warn(_("note: possible conflict - %s was deleted "
1475 repo.ui.warn(_("note: possible conflict - %s was deleted "
1465 "and renamed to:\n") % f)
1476 "and renamed to:\n") % f)
1466 for nf in fl:
1477 for nf in fl:
1467 repo.ui.warn(" %s\n" % nf)
1478 repo.ui.warn(" %s\n" % nf)
1468
1479
1469 ### apply phase
1480 ### apply phase
1470 if not branchmerge: # just jump to the new rev
1481 if not branchmerge: # just jump to the new rev
1471 fp1, fp2, xp1, xp2 = fp2, nullid, xp2, ''
1482 fp1, fp2, xp1, xp2 = fp2, nullid, xp2, ''
1472 if not partial:
1483 if not partial:
1473 repo.hook('preupdate', throw=True, parent1=xp1, parent2=xp2)
1484 repo.hook('preupdate', throw=True, parent1=xp1, parent2=xp2)
1474 # note that we're in the middle of an update
1485 # note that we're in the middle of an update
1475 repo.vfs.write('updatestate', p2.hex())
1486 repo.vfs.write('updatestate', p2.hex())
1476
1487
1477 stats = applyupdates(repo, actions, wc, p2, overwrite, labels=labels)
1488 stats = applyupdates(repo, actions, wc, p2, overwrite, labels=labels)
1478
1489
1479 if not partial:
1490 if not partial:
1480 repo.dirstate.beginparentchange()
1491 repo.dirstate.beginparentchange()
1481 repo.setparents(fp1, fp2)
1492 repo.setparents(fp1, fp2)
1482 recordupdates(repo, actions, branchmerge)
1493 recordupdates(repo, actions, branchmerge)
1483 # update completed, clear state
1494 # update completed, clear state
1484 util.unlink(repo.join('updatestate'))
1495 util.unlink(repo.join('updatestate'))
1485
1496
1486 if not branchmerge:
1497 if not branchmerge:
1487 repo.dirstate.setbranch(p2.branch())
1498 repo.dirstate.setbranch(p2.branch())
1488 repo.dirstate.endparentchange()
1499 repo.dirstate.endparentchange()
1489 finally:
1500 finally:
1490 wlock.release()
1501 wlock.release()
1491
1502
1492 if not partial:
1503 if not partial:
1493 repo.hook('update', parent1=xp1, parent2=xp2, error=stats[3])
1504 repo.hook('update', parent1=xp1, parent2=xp2, error=stats[3])
1494 return stats
1505 return stats
1495
1506
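With this change a caller of merge.update passes a matcher (or None) rather than a partial function; any matcher whose always() is false puts the update into partial mode, skipping the dirstate and hook bookkeeping just as the old partial flag did. A hedged sketch of the calling convention, modelled on the test script further down; how the matcher itself gets built from patterns is left as a placeholder, since that helper is not part of this file::

    from mercurial import hg, ui, merge

    u = ui.ui()
    repo = hg.repository(u, 'test1')       # existing repository, as in the test

    # plain update to a revision: no matcher, dirstate parents are moved
    stats = merge.update(repo, 1, False, False)
    updated, merged, removed, unresolved = stats

    # a branch merge would pass branchmerge=True instead:
    #   merge.update(repo, 2, True, False)
    # and a partial update would pass a matcher built elsewhere:
    #   merge.update(repo, 2, False, False, matcher=somematcher)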
1496 def graft(repo, ctx, pctx, labels, keepparent=False):
1507 def graft(repo, ctx, pctx, labels, keepparent=False):
1497 """Do a graft-like merge.
1508 """Do a graft-like merge.
1498
1509
1499 This is a merge where the merge ancestor is chosen such that one
1510 This is a merge where the merge ancestor is chosen such that one
1500 or more changesets are grafted onto the current changeset. In
1511 or more changesets are grafted onto the current changeset. In
1501 addition to the merge, this fixes up the dirstate to include only
1512 addition to the merge, this fixes up the dirstate to include only
1502 a single parent (if keepparent is False) and tries to duplicate any
1513 a single parent (if keepparent is False) and tries to duplicate any
1503 renames/copies appropriately.
1514 renames/copies appropriately.
1504
1515
1505 ctx - changeset to rebase
1516 ctx - changeset to rebase
1506 pctx - merge base, usually ctx.p1()
1517 pctx - merge base, usually ctx.p1()
1507 labels - merge labels eg ['local', 'graft']
1518 labels - merge labels eg ['local', 'graft']
1508 keepparent - keep second parent if any
1519 keepparent - keep second parent if any
1509
1520
1510 """
1521 """
1511 # If we're grafting a descendant onto an ancestor, be sure to pass
1522 # If we're grafting a descendant onto an ancestor, be sure to pass
1512 # mergeancestor=True to update. This does two things: 1) allows the merge if
1523 # mergeancestor=True to update. This does two things: 1) allows the merge if
1513 # the destination is the same as the parent of the ctx (so we can use graft
1524 # the destination is the same as the parent of the ctx (so we can use graft
1514 # to copy commits), and 2) informs update that the incoming changes are
1525 # to copy commits), and 2) informs update that the incoming changes are
1515 # newer than the destination so it doesn't prompt about "remote changed foo
1526 # newer than the destination so it doesn't prompt about "remote changed foo
1516 # which local deleted".
1527 # which local deleted".
1517 mergeancestor = repo.changelog.isancestor(repo['.'].node(), ctx.node())
1528 mergeancestor = repo.changelog.isancestor(repo['.'].node(), ctx.node())
1518
1529
1519 stats = update(repo, ctx.node(), True, True, False, pctx.node(),
1530 stats = update(repo, ctx.node(), True, True, pctx.node(),
1520 mergeancestor=mergeancestor, labels=labels)
1531 mergeancestor=mergeancestor, labels=labels)
1521
1532
1522 pother = nullid
1533 pother = nullid
1523 parents = ctx.parents()
1534 parents = ctx.parents()
1524 if keepparent and len(parents) == 2 and pctx in parents:
1535 if keepparent and len(parents) == 2 and pctx in parents:
1525 parents.remove(pctx)
1536 parents.remove(pctx)
1526 pother = parents[0].node()
1537 pother = parents[0].node()
1527
1538
1528 repo.dirstate.beginparentchange()
1539 repo.dirstate.beginparentchange()
1529 repo.setparents(repo['.'].node(), pother)
1540 repo.setparents(repo['.'].node(), pother)
1530 repo.dirstate.write(repo.currenttransaction())
1541 repo.dirstate.write(repo.currenttransaction())
1531 # fix up dirstate for copies and renames
1542 # fix up dirstate for copies and renames
1532 copies.duplicatecopies(repo, ctx.rev(), pctx.rev())
1543 copies.duplicatecopies(repo, ctx.rev(), pctx.rev())
1533 repo.dirstate.endparentchange()
1544 repo.dirstate.endparentchange()
1534 return stats
1545 return stats
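graft() above is essentially update() with mergeancestor=True plus dirstate surgery so the result keeps a single parent and any copies; committing the result is left to the caller. A hedged sketch of a call, reusing repo and the merge module from the previous sketch and following the docstring's own conventions (pctx is usually ctx.p1(), labels like ['local', 'graft']); the changeset hash is made up::

    ctx = repo['1234abcd5678']             # changeset to graft (made-up hash)
    stats = merge.graft(repo, ctx, ctx.p1(), ['local', 'graft'])
    if not stats[3]:                       # no unresolved files left
        repo.commit(text=ctx.description(), user=ctx.user(), date=ctx.date())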
@@ -1,85 +1,84 b''
1 import os
1 import os
2 from mercurial import hg, ui, merge
2 from mercurial import hg, ui, merge
3
3
4 u = ui.ui()
4 u = ui.ui()
5
5
6 repo = hg.repository(u, 'test1', create=1)
6 repo = hg.repository(u, 'test1', create=1)
7 os.chdir('test1')
7 os.chdir('test1')
8
8
9 def commit(text, time):
9 def commit(text, time):
10 repo.commit(text=text, date="%d 0" % time)
10 repo.commit(text=text, date="%d 0" % time)
11
11
12 def addcommit(name, time):
12 def addcommit(name, time):
13 f = open(name, 'w')
13 f = open(name, 'w')
14 f.write('%s\n' % name)
14 f.write('%s\n' % name)
15 f.close()
15 f.close()
16 repo[None].add([name])
16 repo[None].add([name])
17 commit(name, time)
17 commit(name, time)
18
18
19 def update(rev):
19 def update(rev):
20 merge.update(repo, rev, False, True, False)
20 merge.update(repo, rev, False, True)
21
21
22 def merge_(rev):
22 def merge_(rev):
23 merge.update(repo, rev, True, False, False)
23 merge.update(repo, rev, True, False)
24
24
25 if __name__ == '__main__':
25 if __name__ == '__main__':
26 addcommit("A", 0)
26 addcommit("A", 0)
27 addcommit("B", 1)
27 addcommit("B", 1)
28
28
29 update(0)
29 update(0)
30 addcommit("C", 2)
30 addcommit("C", 2)
31
31
32 merge_(1)
32 merge_(1)
33 commit("D", 3)
33 commit("D", 3)
34
34
35 update(2)
35 update(2)
36 addcommit("E", 4)
36 addcommit("E", 4)
37 addcommit("F", 5)
37 addcommit("F", 5)
38
38
39 update(3)
39 update(3)
40 addcommit("G", 6)
40 addcommit("G", 6)
41
41
42 merge_(5)
42 merge_(5)
43 commit("H", 7)
43 commit("H", 7)
44
44
45 update(5)
45 update(5)
46 addcommit("I", 8)
46 addcommit("I", 8)
47
47
48 # Ancestors
48 # Ancestors
49 print 'Ancestors of 5'
49 print 'Ancestors of 5'
50 for r in repo.changelog.ancestors([5]):
50 for r in repo.changelog.ancestors([5]):
51 print r,
51 print r,
52
52
53 print '\nAncestors of 6 and 5'
53 print '\nAncestors of 6 and 5'
54 for r in repo.changelog.ancestors([6, 5]):
54 for r in repo.changelog.ancestors([6, 5]):
55 print r,
55 print r,
56
56
57 print '\nAncestors of 5 and 4'
57 print '\nAncestors of 5 and 4'
58 for r in repo.changelog.ancestors([5, 4]):
58 for r in repo.changelog.ancestors([5, 4]):
59 print r,
59 print r,
60
60
61 print '\nAncestors of 7, stop at 6'
61 print '\nAncestors of 7, stop at 6'
62 for r in repo.changelog.ancestors([7], 6):
62 for r in repo.changelog.ancestors([7], 6):
63 print r,
63 print r,
64
64
65 print '\nAncestors of 7, including revs'
65 print '\nAncestors of 7, including revs'
66 for r in repo.changelog.ancestors([7], inclusive=True):
66 for r in repo.changelog.ancestors([7], inclusive=True):
67 print r,
67 print r,
68
68
69 print '\nAncestors of 7, 5 and 3, including revs'
69 print '\nAncestors of 7, 5 and 3, including revs'
70 for r in repo.changelog.ancestors([7, 5, 3], inclusive=True):
70 for r in repo.changelog.ancestors([7, 5, 3], inclusive=True):
71 print r,
71 print r,
72
72
73 # Descendants
73 # Descendants
74 print '\n\nDescendants of 5'
74 print '\n\nDescendants of 5'
75 for r in repo.changelog.descendants([5]):
75 for r in repo.changelog.descendants([5]):
76 print r,
76 print r,
77
77
78 print '\nDescendants of 5 and 3'
78 print '\nDescendants of 5 and 3'
79 for r in repo.changelog.descendants([5, 3]):
79 for r in repo.changelog.descendants([5, 3]):
80 print r,
80 print r,
81
81
82 print '\nDescendants of 5 and 4'
82 print '\nDescendants of 5 and 4'
83 for r in repo.changelog.descendants([5, 4]):
83 for r in repo.changelog.descendants([5, 4]):
84 print r,
84 print r,
85