bundle: add a applybundle1() method...
Martin von Zweigbergk
r33039:b82615af default

@@ -1,1676 +1,1678 @@
# histedit.py - interactive history editing for mercurial
#
# Copyright 2009 Augie Fackler <raf@durin42.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""interactive history editing

With this extension installed, Mercurial gains one new command: histedit. Usage
is as follows, assuming the following history::

 @ 3[tip] 7c2fd3b9020c 2009-04-27 18:04 -0500 durin42
 | Add delta
 |
 o 2 030b686bedc4 2009-04-27 18:04 -0500 durin42
 | Add gamma
 |
 o 1 c561b4e977df 2009-04-27 18:04 -0500 durin42
 | Add beta
 |
 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
   Add alpha

If you were to run ``hg histedit c561b4e977df``, you would see the following
file open in your editor::

 pick c561b4e977df Add beta
 pick 030b686bedc4 Add gamma
 pick 7c2fd3b9020c Add delta

 # Edit history between c561b4e977df and 7c2fd3b9020c
 #
 # Commits are listed from least to most recent
 #
 # Commands:
 #  p, pick = use commit
 #  e, edit = use commit, but stop for amending
 #  f, fold = use commit, but combine it with the one above
 #  r, roll = like fold, but discard this commit's description and date
 #  d, drop = remove commit from history
 #  m, mess = edit commit message without changing commit content
 #

In this file, lines beginning with ``#`` are ignored. You must specify a rule
for each revision in your history. For example, if you had meant to add gamma
before beta, and then wanted to add delta in the same revision as beta, you
would reorganize the file to look like this::

 pick 030b686bedc4 Add gamma
 pick c561b4e977df Add beta
 fold 7c2fd3b9020c Add delta

 # Edit history between c561b4e977df and 7c2fd3b9020c
 #
 # Commits are listed from least to most recent
 #
 # Commands:
 #  p, pick = use commit
 #  e, edit = use commit, but stop for amending
 #  f, fold = use commit, but combine it with the one above
 #  r, roll = like fold, but discard this commit's description and date
 #  d, drop = remove commit from history
 #  m, mess = edit commit message without changing commit content
 #

At which point you close the editor and ``histedit`` starts working. When you
specify a ``fold`` operation, ``histedit`` will open an editor when it folds
those revisions together, offering you a chance to clean up the commit message::

 Add beta
 ***
 Add delta

Edit the commit message to your liking, then close the editor. The date used
for the commit will be the later of the two commits' dates. For this example,
let's assume that the commit message was changed to ``Add beta and delta.``
After histedit has run and had a chance to remove any old or temporary
revisions it needed, the history looks like this::

 @ 2[tip] 989b4d060121 2009-04-27 18:04 -0500 durin42
 | Add beta and delta.
 |
 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
 | Add gamma
 |
 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
   Add alpha

Note that ``histedit`` does *not* remove any revisions (even its own temporary
ones) until after it has completed all the editing operations, so it will
probably perform several strip operations when it's done. For the above example,
it had to run strip twice. Strip can be slow depending on a variety of factors,
so you might need to be a little patient. You can choose to keep the original
revisions by passing the ``--keep`` flag.

The ``edit`` operation will drop you back to a command prompt,
allowing you to edit files freely, or even use ``hg record`` to commit
some changes as a separate commit. When you're done, any remaining
uncommitted changes will be committed as well. When done, run ``hg
histedit --continue`` to finish this step. If there are uncommitted
changes, you'll be prompted for a new commit message, but the default
commit message will be the original message for the ``edit`` ed
revision, and the date of the original commit will be preserved.

The ``message`` operation will give you a chance to revise a commit
message without changing the contents. It's a shortcut for doing
``edit`` immediately followed by ``hg histedit --continue``.

If ``histedit`` encounters a conflict when moving a revision (while
handling ``pick`` or ``fold``), it'll stop in a similar manner to
``edit`` with the difference that it won't prompt you for a commit
message when done. If you decide at this point that you don't like how
much work it will be to rearrange history, or that you made a mistake,
you can use ``hg histedit --abort`` to abandon the new changes you
have made and return to the state before you attempted to edit your
history.

If we clone the histedit-ed example repository above and add four more
changes, such that we have the following history::

 @ 6[tip] 038383181893 2009-04-27 18:04 -0500 stefan
 | Add theta
 |
 o 5 140988835471 2009-04-27 18:04 -0500 stefan
 | Add eta
 |
 o 4 122930637314 2009-04-27 18:04 -0500 stefan
 | Add zeta
 |
 o 3 836302820282 2009-04-27 18:04 -0500 stefan
 | Add epsilon
 |
 o 2 989b4d060121 2009-04-27 18:04 -0500 durin42
 | Add beta and delta.
 |
 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
 | Add gamma
 |
 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
   Add alpha

If you run ``hg histedit --outgoing`` on the clone then it is the same
as running ``hg histedit 836302820282``. If you plan to push to a
repository that Mercurial does not detect to be related to the source
repo, you can add a ``--force`` option.
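
A minimal sketch of the equivalent invocations described above (the clone
path is hypothetical)::

 $ hg histedit --outgoing ../clone    # edit changesets not present upstream
 $ hg histedit 836302820282           # same result, with an explicit base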

Config
------

Histedit rule lines are truncated to 80 characters by default. You
can customize this behavior by setting a different length in your
configuration file::

  [histedit]
  linelen = 120      # truncate rule lines at 120 characters

``hg histedit`` attempts to automatically choose an appropriate base
revision to use. To change which base revision is used, define a
revset in your configuration file::

  [histedit]
  defaultrev = only(.) & draft()

By default, each edited revision needs to be present in the histedit
commands. To remove a revision, use the ``drop`` operation. You can
configure the drop to be implicit for missing commits by adding::

  [histedit]
  dropmissing = True

By default, histedit will close the transaction after each action. For
performance purposes, you can configure histedit to use a single transaction
across the entire histedit. WARNING: This setting introduces a significant risk
of losing the work you've done in a histedit if the histedit aborts
unexpectedly::

  [histedit]
  singletransaction = True

"""

from __future__ import absolute_import

import errno
import os

from mercurial.i18n import _
from mercurial import (
    bundle2,
    cmdutil,
    context,
    copies,
    destutil,
    discovery,
    error,
    exchange,
    extensions,
    hg,
    lock,
    merge as mergemod,
    mergeutil,
    node,
    obsolete,
    registrar,
    repair,
    scmutil,
    util,
)

pickle = util.pickle
release = lock.release
cmdtable = {}
command = registrar.command(cmdtable)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

actiontable = {}
primaryactions = set()
secondaryactions = set()
tertiaryactions = set()
internalactions = set()

def geteditcomment(ui, first, last):
    """construct the editor comment
    The comment includes::
     - an intro
     - sorted primary commands
     - sorted short commands
     - sorted long commands
     - additional hints

    Commands are only included once.
    """
    intro = _("""Edit history between %s and %s

Commits are listed from least to most recent

You can reorder changesets by reordering the lines

Commands:
""")
    actions = []
    def addverb(v):
        a = actiontable[v]
        lines = a.message.split("\n")
        if len(a.verbs):
            v = ', '.join(sorted(a.verbs, key=lambda v: len(v)))
        actions.append(" %s = %s" % (v, lines[0]))
        actions.extend(['  %s' % l for l in lines[1:]])

    for v in (
            sorted(primaryactions) +
            sorted(secondaryactions) +
            sorted(tertiaryactions)
    ):
        addverb(v)
    actions.append('')

    hints = []
    if ui.configbool('histedit', 'dropmissing'):
        hints.append("Deleting a changeset from the list "
                     "will DISCARD it from the edited history!")

    lines = (intro % (first, last)).split('\n') + actions + hints

    return ''.join(['# %s\n' % l if l else '#\n' for l in lines])

class histeditstate(object):
    def __init__(self, repo, parentctxnode=None, actions=None, keep=None,
                 topmost=None, replacements=None, lock=None, wlock=None):
        self.repo = repo
        self.actions = actions
        self.keep = keep
        self.topmost = topmost
        self.parentctxnode = parentctxnode
        self.lock = lock
        self.wlock = wlock
        self.backupfile = None
        self.tr = None
        if replacements is None:
            self.replacements = []
        else:
            self.replacements = replacements

    def read(self):
        """Load histedit state from disk and set fields appropriately."""
        try:
            state = self.repo.vfs.read('histedit-state')
        except IOError as err:
            if err.errno != errno.ENOENT:
                raise
            cmdutil.wrongtooltocontinue(self.repo, _('histedit'))

        if state.startswith('v1\n'):
            data = self._load()
            parentctxnode, rules, keep, topmost, replacements, backupfile = data
        else:
            data = pickle.loads(state)
            parentctxnode, rules, keep, topmost, replacements = data
            backupfile = None

        self.parentctxnode = parentctxnode
        rules = "\n".join(["%s %s" % (verb, rest) for [verb, rest] in rules])
        actions = parserules(rules, self)
        self.actions = actions
        self.keep = keep
        self.topmost = topmost
        self.replacements = replacements
        self.backupfile = backupfile

    def write(self, tr=None):
        if tr:
            tr.addfilegenerator('histedit-state', ('histedit-state',),
                                self._write, location='plain')
        else:
            with self.repo.vfs("histedit-state", "w") as f:
                self._write(f)

    def _write(self, fp):
        fp.write('v1\n')
        fp.write('%s\n' % node.hex(self.parentctxnode))
        fp.write('%s\n' % node.hex(self.topmost))
        fp.write('%s\n' % self.keep)
        fp.write('%d\n' % len(self.actions))
        for action in self.actions:
            fp.write('%s\n' % action.tostate())
        fp.write('%d\n' % len(self.replacements))
        for replacement in self.replacements:
            fp.write('%s%s\n' % (node.hex(replacement[0]), ''.join(node.hex(r)
                for r in replacement[1])))
        backupfile = self.backupfile
        if not backupfile:
            backupfile = ''
        fp.write('%s\n' % backupfile)
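
    # Sketch of the resulting v1 state file layout, following the writes in
    # _write() above (the 40-hex hashes are hypothetical placeholders):
    #
    #   v1
    #   <40-hex parentctxnode>
    #   <40-hex topmost>
    #   True                     (self.keep)
    #   2                        (number of actions)
    #   pick                     (each tostate() emits two lines: verb, node)
    #   <40-hex node>
    #   fold
    #   <40-hex node>
    #   1                        (number of replacements)
    #   <40-hex original><40-hex successors, concatenated>
    #   <backup file name, or an empty line>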

    def _load(self):
        fp = self.repo.vfs('histedit-state', 'r')
        lines = [l[:-1] for l in fp.readlines()]

        index = 0
        lines[index]  # version number
        index += 1

        parentctxnode = node.bin(lines[index])
        index += 1

        topmost = node.bin(lines[index])
        index += 1

        keep = lines[index] == 'True'
        index += 1

        # Rules
        rules = []
        rulelen = int(lines[index])
        index += 1
        for i in xrange(rulelen):
            ruleaction = lines[index]
            index += 1
            rule = lines[index]
            index += 1
            rules.append((ruleaction, rule))

        # Replacements
        replacements = []
        replacementlen = int(lines[index])
        index += 1
        for i in xrange(replacementlen):
            replacement = lines[index]
            original = node.bin(replacement[:40])
            succ = [node.bin(replacement[i:i + 40]) for i in
                    range(40, len(replacement), 40)]
            replacements.append((original, succ))
            index += 1

        backupfile = lines[index]
        index += 1

        fp.close()

        return parentctxnode, rules, keep, topmost, replacements, backupfile

    def clear(self):
        if self.inprogress():
            self.repo.vfs.unlink('histedit-state')

    def inprogress(self):
        return self.repo.vfs.exists('histedit-state')


class histeditaction(object):
    def __init__(self, state, node):
        self.state = state
        self.repo = state.repo
        self.node = node

    @classmethod
    def fromrule(cls, state, rule):
        """Parses the given rule, returning an instance of the histeditaction.
        """
        rulehash = rule.strip().split(' ', 1)[0]
        try:
            rev = node.bin(rulehash)
        except TypeError:
            raise error.ParseError("invalid changeset %s" % rulehash)
        return cls(state, rev)

    def verify(self, prev, expected, seen):
        """Verifies semantic correctness of the rule"""
        repo = self.repo
        ha = node.hex(self.node)
        try:
            self.node = repo[ha].node()
        except error.RepoError:
            raise error.ParseError(_('unknown changeset %s listed')
                                   % ha[:12])
        if self.node is not None:
            self._verifynodeconstraints(prev, expected, seen)

    def _verifynodeconstraints(self, prev, expected, seen):
        # by default, commands need a node in the edited list
        if self.node not in expected:
            raise error.ParseError(_('%s "%s" changeset was not a candidate')
                                   % (self.verb, node.short(self.node)),
                                   hint=_('only use listed changesets'))
        # and only one command per node
        if self.node in seen:
            raise error.ParseError(_('duplicated command for changeset %s') %
                                   node.short(self.node))

    def torule(self):
        """build a histedit rule line for an action

        by default lines are in the form:
          <hash> <rev> <summary>
        """
        ctx = self.repo[self.node]
        summary = _getsummary(ctx)
        line = '%s %s %d %s' % (self.verb, ctx, ctx.rev(), summary)
        # trim to 75 columns by default so it's not stupidly wide in my editor
        # (the 5 more are left for verb)
        maxlen = self.repo.ui.configint('histedit', 'linelen', default=80)
        maxlen = max(maxlen, 22) # avoid truncating hash
        return util.ellipsis(line, maxlen)
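
    # For illustration (verb, hash, rev and summary taken from the module
    # docstring's example; any real values depend on the repository), torule()
    # yields a line such as
    #
    #   pick c561b4e977df 1 Add beta
    #
    # truncated to at most 'histedit.linelen' (but never fewer than 22) columns.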
449
449
450 def tostate(self):
450 def tostate(self):
451 """Print an action in format used by histedit state files
451 """Print an action in format used by histedit state files
452 (the first line is a verb, the remainder is the second)
452 (the first line is a verb, the remainder is the second)
453 """
453 """
454 return "%s\n%s" % (self.verb, node.hex(self.node))
454 return "%s\n%s" % (self.verb, node.hex(self.node))
455
455
456 def run(self):
456 def run(self):
457 """Runs the action. The default behavior is simply apply the action's
457 """Runs the action. The default behavior is simply apply the action's
458 rulectx onto the current parentctx."""
458 rulectx onto the current parentctx."""
459 self.applychange()
459 self.applychange()
460 self.continuedirty()
460 self.continuedirty()
461 return self.continueclean()
461 return self.continueclean()
462
462
463 def applychange(self):
463 def applychange(self):
464 """Applies the changes from this action's rulectx onto the current
        """Applies the changes from this action's rulectx onto the current
        parentctx, but does not commit them."""
        repo = self.repo
        rulectx = repo[self.node]
        repo.ui.pushbuffer(error=True, labeled=True)
        hg.update(repo, self.state.parentctxnode, quietempty=True)
        stats = applychanges(repo.ui, repo, rulectx, {})
        if stats and stats[3] > 0:
            buf = repo.ui.popbuffer()
            repo.ui.write(*buf)
            raise error.InterventionRequired(
                _('Fix up the change (%s %s)') %
                (self.verb, node.short(self.node)),
                hint=_('hg histedit --continue to resume'))
        else:
            repo.ui.popbuffer()

    def continuedirty(self):
        """Continues the action when changes have been applied to the working
        copy. The default behavior is to commit the dirty changes."""
        repo = self.repo
        rulectx = repo[self.node]

        editor = self.commiteditor()
        commit = commitfuncfor(repo, rulectx)

        commit(text=rulectx.description(), user=rulectx.user(),
               date=rulectx.date(), extra=rulectx.extra(), editor=editor)

    def commiteditor(self):
        """The editor to be used to edit the commit message."""
        return False

    def continueclean(self):
        """Continues the action when the working copy is clean. The default
        behavior is to accept the current commit as the new version of the
        rulectx."""
        ctx = self.repo['.']
        if ctx.node() == self.state.parentctxnode:
            self.repo.ui.warn(_('%s: skipping changeset (no changes)\n') %
                              node.short(self.node))
            return ctx, [(self.node, tuple())]
        if ctx.node() == self.node:
            # Nothing changed
            return ctx, []
        return ctx, [(self.node, (ctx.node(),))]

def commitfuncfor(repo, src):
    """Build a commit function for the replacement of <src>

    This function ensures we apply the same treatment to all changesets.

    - Add a 'histedit_source' entry in extra.

    Note that fold has its own separate logic because its handling is a bit
    different and not easily factored out of the fold method.
    """
    phasemin = src.phase()
    def commitfunc(**kwargs):
        overrides = {('phases', 'new-commit'): phasemin}
        with repo.ui.configoverride(overrides, 'histedit'):
            extra = kwargs.get('extra', {}).copy()
            extra['histedit_source'] = src.hex()
            kwargs['extra'] = extra
            return repo.commit(**kwargs)
    return commitfunc

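# Illustrative note (not part of upstream histedit): the closure returned by
# commitfuncfor() is used like repo.commit(), but pins the new commit's phase
# to match <src> (so e.g. a secret source changeset stays secret) and stamps
# extra['histedit_source'] with src.hex(). A sketch of the intended usage:
#
#     commit = commitfuncfor(repo, rulectx)
#     commit(text=rulectx.description(), user=rulectx.user(),
#            date=rulectx.date(), extra=rulectx.extra())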
def applychanges(ui, repo, ctx, opts):
    """Merge changeset from ctx (only) in the current working directory"""
    wcpar = repo.dirstate.parents()[0]
    if ctx.p1().node() == wcpar:
        # edits are "in place": no merge is needed, just apply the
        # changes on the parent for editing
        cmdutil.revert(ui, repo, ctx, (wcpar, node.nullid), all=True)
        stats = None
    else:
        try:
            # ui.forcemerge is an internal variable, do not document
            repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                              'histedit')
            stats = mergemod.graft(repo, ctx, ctx.p1(), ['local', 'histedit'])
        finally:
            repo.ui.setconfig('ui', 'forcemerge', '', 'histedit')
    return stats

def collapse(repo, first, last, commitopts, skipprompt=False):
    """collapse the set of revisions from first to last as a new one.

    Expected commit options are:
        - message
        - date
        - username
    Commit message is edited in all cases.

    This function works in memory."""
    ctxs = list(repo.set('%d::%d', first, last))
    if not ctxs:
        return None
    for c in ctxs:
        if not c.mutable():
            raise error.ParseError(
                _("cannot fold into public change %s") % node.short(c.node()))
    base = first.parents()[0]

    # commit a new version of the old changeset, including the update
    # collect all files which might be affected
    files = set()
    for ctx in ctxs:
        files.update(ctx.files())

    # Recompute copies (avoid recording a -> b -> a)
    copied = copies.pathcopies(base, last)

    # prune files which were reverted by the updates
    files = [f for f in files if not cmdutil.samefile(f, last, base)]
    # commit version of these files as defined by head
    headmf = last.manifest()
    def filectxfn(repo, ctx, path):
        if path in headmf:
            fctx = last[path]
            flags = fctx.flags()
            mctx = context.memfilectx(repo,
                                      fctx.path(), fctx.data(),
                                      islink='l' in flags,
                                      isexec='x' in flags,
                                      copied=copied.get(path))
            return mctx
        return None

    if commitopts.get('message'):
        message = commitopts['message']
    else:
        message = first.description()
    user = commitopts.get('user')
    date = commitopts.get('date')
    extra = commitopts.get('extra')

    parents = (first.p1().node(), first.p2().node())
    editor = None
    if not skipprompt:
        editor = cmdutil.getcommiteditor(edit=True, editform='histedit.fold')
    new = context.memctx(repo,
                         parents=parents,
                         text=message,
                         files=files,
                         filectxfn=filectxfn,
                         user=user,
                         date=date,
                         extra=extra,
                         editor=editor)
    return repo.commitctx(new)

def _isdirtywc(repo):
    return repo[None].dirty(missing=True)

def abortdirty():
    raise error.Abort(_('working copy has pending changes'),
                      hint=_('amend, commit, or revert them and run histedit '
                             '--continue, or abort with histedit --abort'))

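# Illustrative note (not part of upstream histedit): collapse() squashes an
# inclusive linear range into a single in-memory commit via memctx, without
# touching the working copy. A hypothetical caller folding revisions 3::5
# while keeping the first description might look like:
#
#     first, last = repo[3], repo[5]
#     newnode = collapse(repo, first, last,
#                        {'message': first.description()}, skipprompt=True)
#
# It returns the new node, or None if the range was empty; callers such as
# fold.finishfold() are responsible for updating to the result afterwards.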
def action(verbs, message, priority=False, internal=False):
    def wrap(cls):
        assert not priority or not internal
        verb = verbs[0]
        if priority:
            primaryactions.add(verb)
        elif internal:
            internalactions.add(verb)
        elif len(verbs) > 1:
            secondaryactions.add(verb)
        else:
            tertiaryactions.add(verb)

        cls.verb = verb
        cls.verbs = verbs
        cls.message = message
        for verb in verbs:
            actiontable[verb] = cls
        return cls
    return wrap

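# Illustrative note (not part of upstream histedit): a hypothetical rule
# registered with the @action decorator above. With a single verb and no
# flags it would land in tertiaryactions; passing priority=True would make
# it a primary action instead:
#
#     @action(['noop'], _('use commit as-is'))
#     class noop(histeditaction):
#         def run(self):
#             return self.repo[self.node], []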
@action(['pick', 'p'],
        _('use commit'),
        priority=True)
class pick(histeditaction):
    def run(self):
        rulectx = self.repo[self.node]
        if rulectx.parents()[0].node() == self.state.parentctxnode:
            self.repo.ui.debug('node %s unchanged\n' % node.short(self.node))
            return rulectx, []

        return super(pick, self).run()

@action(['edit', 'e'],
        _('use commit, but stop for amending'),
        priority=True)
class edit(histeditaction):
    def run(self):
        repo = self.repo
        rulectx = repo[self.node]
        hg.update(repo, self.state.parentctxnode, quietempty=True)
        applychanges(repo.ui, repo, rulectx, {})
        raise error.InterventionRequired(
            _('Editing (%s), you may commit or record as needed now.')
            % node.short(self.node),
            hint=_('hg histedit --continue to resume'))

    def commiteditor(self):
        return cmdutil.getcommiteditor(edit=True, editform='histedit.edit')

@action(['fold', 'f'],
        _('use commit, but combine it with the one above'))
class fold(histeditaction):
    def verify(self, prev, expected, seen):
        """Verifies semantic correctness of the fold rule"""
        super(fold, self).verify(prev, expected, seen)
        repo = self.repo
        if not prev:
            c = repo[self.node].parents()[0]
        elif prev.verb not in ('pick', 'base'):
            return
        else:
            c = repo[prev.node]
        if not c.mutable():
            raise error.ParseError(
                _("cannot fold into public change %s") % node.short(c.node()))

    def continuedirty(self):
        repo = self.repo
        rulectx = repo[self.node]

        commit = commitfuncfor(repo, rulectx)
        commit(text='fold-temp-revision %s' % node.short(self.node),
               user=rulectx.user(), date=rulectx.date(),
               extra=rulectx.extra())

    def continueclean(self):
        repo = self.repo
        ctx = repo['.']
        rulectx = repo[self.node]
        parentctxnode = self.state.parentctxnode
        if ctx.node() == parentctxnode:
            repo.ui.warn(_('%s: empty changeset\n') %
                         node.short(self.node))
            return ctx, [(self.node, (parentctxnode,))]

        parentctx = repo[parentctxnode]
        newcommits = set(c.node() for c in repo.set('(%d::. - %d)', parentctx,
                                                    parentctx))
        if not newcommits:
            repo.ui.warn(_('%s: cannot fold - working copy is not a '
                           'descendant of previous commit %s\n') %
                         (node.short(self.node), node.short(parentctxnode)))
            return ctx, [(self.node, (ctx.node(),))]

        middlecommits = newcommits.copy()
        middlecommits.discard(ctx.node())

        return self.finishfold(repo.ui, repo, parentctx, rulectx, ctx.node(),
                               middlecommits)

    def skipprompt(self):
        """Returns true if the rule should skip the message editor.

        For example, 'fold' wants to show an editor, but 'rollup'
        doesn't want to.
        """
        return False

    def mergedescs(self):
        """Returns true if the rule should merge messages of multiple changes.

        This exists mainly so that 'rollup' rules can be a subclass of
        'fold'.
        """
        return True

    def firstdate(self):
        """Returns true if the rule should preserve the date of the first
        change.

        This exists mainly so that 'rollup' rules can be a subclass of
        'fold'.
        """
        return False

    def finishfold(self, ui, repo, ctx, oldctx, newnode, internalchanges):
        parent = ctx.parents()[0].node()
        repo.ui.pushbuffer()
        hg.update(repo, parent)
        repo.ui.popbuffer()
        ### prepare new commit data
        commitopts = {}
        commitopts['user'] = ctx.user()
        # commit message
        if not self.mergedescs():
            newmessage = ctx.description()
        else:
            newmessage = '\n***\n'.join(
                [ctx.description()] +
                [repo[r].description() for r in internalchanges] +
                [oldctx.description()]) + '\n'
        commitopts['message'] = newmessage
        # date
        if self.firstdate():
            commitopts['date'] = ctx.date()
        else:
            commitopts['date'] = max(ctx.date(), oldctx.date())
        extra = ctx.extra().copy()
        # histedit_source
        # note: ctx is likely a temporary commit but that's the best we can
        # do here. This is sufficient to solve issue3681 anyway.
        extra['histedit_source'] = '%s,%s' % (ctx.hex(), oldctx.hex())
        commitopts['extra'] = extra
        phasemin = max(ctx.phase(), oldctx.phase())
        overrides = {('phases', 'new-commit'): phasemin}
        with repo.ui.configoverride(overrides, 'histedit'):
            n = collapse(repo, ctx, repo[newnode], commitopts,
                         skipprompt=self.skipprompt())
        if n is None:
            return ctx, []
        repo.ui.pushbuffer()
        hg.update(repo, n)
        repo.ui.popbuffer()
        replacements = [(oldctx.node(), (newnode,)),
                        (ctx.node(), (n,)),
                        (newnode, (n,)),
                       ]
        for ich in internalchanges:
            replacements.append((ich, (n,)))
        return repo[n], replacements

class base(histeditaction):

    def run(self):
        if self.repo['.'].node() != self.node:
            mergemod.update(self.repo, self.node, False, True)
            # (branchmerge, force)
        return self.continueclean()

    def continuedirty(self):
        abortdirty()

    def continueclean(self):
        basectx = self.repo['.']
        return basectx, []

    def _verifynodeconstraints(self, prev, expected, seen):
        # base can only be used with a node not in the edited set
        if self.node in expected:
            msg = _('%s "%s" changeset was an edited list candidate')
            raise error.ParseError(
                msg % (self.verb, node.short(self.node)),
                hint=_('base must only use unlisted changesets'))

@action(['_multifold'],
        _(
    """fold subclass used for when multiple folds happen in a row

    We only want to fire the editor for the folded message once when
    (say) four changes are folded down into a single change. This is
    similar to rollup, but we should preserve both messages so that
    when the last fold operation runs we can show the user all the
    commit messages in their editor.
    """),
        internal=True)
class _multifold(fold):
    def skipprompt(self):
        return True

@action(["roll", "r"],
        _("like fold, but discard this commit's description and date"))
class rollup(fold):
    def mergedescs(self):
        return False

    def skipprompt(self):
        return True

    def firstdate(self):
        return True

@action(["drop", "d"],
        _('remove commit from history'))
class drop(histeditaction):
    def run(self):
        parentctx = self.repo[self.state.parentctxnode]
        return parentctx, [(self.node, tuple())]

@action(["mess", "m"],
        _('edit commit message without changing commit content'),
        priority=True)
class message(histeditaction):
    def commiteditor(self):
        return cmdutil.getcommiteditor(edit=True, editform='histedit.mess')

def findoutgoing(ui, repo, remote=None, force=False, opts=None):
    """utility function to find the first outgoing changeset

    Used by initialization code"""
    if opts is None:
        opts = {}
    dest = ui.expandpath(remote or 'default-push', remote or 'default')
    dest, revs = hg.parseurl(dest, None)[:2]
    ui.status(_('comparing with %s\n') % util.hidepassword(dest))

    revs, checkout = hg.addbranchrevs(repo, repo, revs, None)
    other = hg.peer(repo, opts, dest)

    if revs:
        revs = [repo.lookup(rev) for rev in revs]

    outgoing = discovery.findcommonoutgoing(repo, other, revs, force=force)
    if not outgoing.missing:
        raise error.Abort(_('no outgoing ancestors'))
    roots = list(repo.revs("roots(%ln)", outgoing.missing))
    if 1 < len(roots):
        msg = _('there are ambiguous outgoing revisions')
        hint = _("see 'hg help histedit' for more detail")
        raise error.Abort(msg, hint=hint)
    return repo.lookup(roots[0])

@command('histedit',
    [('', 'commands', '',
      _('read history edits from the specified file'), _('FILE')),
     ('c', 'continue', False, _('continue an edit already in progress')),
     ('', 'edit-plan', False, _('edit remaining actions list')),
     ('k', 'keep', False,
      _("don't strip old nodes after edit is complete")),
     ('', 'abort', False, _('abort an edit in progress')),
     ('o', 'outgoing', False, _('changesets not found in destination')),
     ('f', 'force', False,
      _('force outgoing even for unrelated repositories')),
     ('r', 'rev', [], _('first revision to be edited'), _('REV'))],
    _("[OPTIONS] ([ANCESTOR] | --outgoing [URL])"))
def histedit(ui, repo, *freeargs, **opts):
    """interactively edit changeset history

    This command lets you edit a linear series of changesets (up to
    and including the working directory, which should be clean).
    You can:

    - `pick` to [re]order a changeset

    - `drop` to omit a changeset

    - `mess` to reword the changeset commit message

    - `fold` to combine it with the preceding changeset (using the later date)

    - `roll` like fold, but discarding this commit's description and date

    - `edit` to edit this changeset (preserving date)

    There are a number of ways to select the root changeset:

    - Specify ANCESTOR directly

    - Use --outgoing -- it will be the first linear changeset not
      included in destination. (See :hg:`help config.paths.default-push`)

    - Otherwise, the value from the "histedit.defaultrev" config option
      is used as a revset to select the base revision when ANCESTOR is not
      specified. The first revision returned by the revset is used. By
      default, this selects the editable history that is unique to the
      ancestry of the working directory.

    .. container:: verbose

       If you use --outgoing, this command will abort if there are ambiguous
       outgoing revisions. For example, if there are multiple branches
       containing outgoing revisions.

       Use "min(outgoing() and ::.)" or a similar revset specification
       instead of --outgoing to specify the edit target revision exactly in
       such an ambiguous situation. See :hg:`help revsets` for details about
       selecting revisions.

    .. container:: verbose

       Examples:

         - A number of changes have been made.
           Revision 3 is no longer needed.

           Start history editing from revision 3::

             hg histedit -r 3

           An editor opens, containing the list of revisions,
           with specific actions specified::

             pick 5339bf82f0ca 3 Zworgle the foobar
             pick 8ef592ce7cc4 4 Bedazzle the zerlog
             pick 0a9639fcda9d 5 Morgify the cromulancy

           Additional information about the possible actions
           to take appears below the list of revisions.

           To remove revision 3 from the history,
           its action (at the beginning of the relevant line)
           is changed to 'drop'::

             drop 5339bf82f0ca 3 Zworgle the foobar
             pick 8ef592ce7cc4 4 Bedazzle the zerlog
             pick 0a9639fcda9d 5 Morgify the cromulancy

         - A number of changes have been made.
           Revisions 2 and 4 need to be swapped.

           Start history editing from revision 2::

             hg histedit -r 2

           An editor opens, containing the list of revisions,
           with specific actions specified::

             pick 252a1af424ad 2 Blorb a morgwazzle
             pick 5339bf82f0ca 3 Zworgle the foobar
             pick 8ef592ce7cc4 4 Bedazzle the zerlog

           To swap revisions 2 and 4, their lines are swapped
           in the editor::

             pick 8ef592ce7cc4 4 Bedazzle the zerlog
             pick 5339bf82f0ca 3 Zworgle the foobar
             pick 252a1af424ad 2 Blorb a morgwazzle

    Returns 0 on success, 1 if user intervention is required (not only
    for the intentional "edit" command, but also for resolving unexpected
    conflicts).
    """
    state = histeditstate(repo)
    try:
        state.wlock = repo.wlock()
        state.lock = repo.lock()
        _histedit(ui, repo, state, *freeargs, **opts)
    finally:
        release(state.lock, state.wlock)

goalcontinue = 'continue'
goalabort = 'abort'
goaleditplan = 'edit-plan'
goalnew = 'new'

def _getgoal(opts):
    if opts.get('continue'):
        return goalcontinue
    if opts.get('abort'):
        return goalabort
    if opts.get('edit_plan'):
        return goaleditplan
    return goalnew

def _readfile(ui, path):
    if path == '-':
        with ui.timeblockedsection('histedit'):
            return ui.fin.read()
    else:
        with open(path, 'rb') as f:
            return f.read()

def _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs):
    # TODO only abort if we try to histedit mq patches, not just
    # blanket if mq patches are applied somewhere
    mq = getattr(repo, 'mq', None)
    if mq and mq.applied:
        raise error.Abort(_('source has mq patches applied'))

    # basic argument incompatibility processing
    outg = opts.get('outgoing')
    editplan = opts.get('edit_plan')
    abort = opts.get('abort')
    force = opts.get('force')
    if force and not outg:
        raise error.Abort(_('--force only allowed with --outgoing'))
    if goal == 'continue':
        if any((outg, abort, revs, freeargs, rules, editplan)):
            raise error.Abort(_('no arguments allowed with --continue'))
    elif goal == 'abort':
        if any((outg, revs, freeargs, rules, editplan)):
            raise error.Abort(_('no arguments allowed with --abort'))
    elif goal == 'edit-plan':
        if any((outg, revs, freeargs)):
            raise error.Abort(_('only --commands argument allowed with '
                                '--edit-plan'))
    else:
        if os.path.exists(os.path.join(repo.path, 'histedit-state')):
            raise error.Abort(_('history edit already in progress, try '
                                '--continue or --abort'))
        if outg:
            if revs:
                raise error.Abort(_('no revisions allowed with --outgoing'))
            if len(freeargs) > 1:
                raise error.Abort(
                    _('only one repo argument allowed with --outgoing'))
        else:
            revs.extend(freeargs)
            if len(revs) == 0:
                defaultrev = destutil.desthistedit(ui, repo)
                if defaultrev is not None:
                    revs.append(defaultrev)

            if len(revs) != 1:
                raise error.Abort(
                    _('histedit requires exactly one ancestor revision'))

def _histedit(ui, repo, state, *freeargs, **opts):
    goal = _getgoal(opts)
    revs = opts.get('rev', [])
    rules = opts.get('commands', '')
    state.keep = opts.get('keep', False)

    _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs)

    # rebuild state
    if goal == goalcontinue:
        state.read()
        state = bootstrapcontinue(ui, state, opts)
    elif goal == goaleditplan:
        _edithisteditplan(ui, repo, state, rules)
        return
    elif goal == goalabort:
        _aborthistedit(ui, repo, state)
        return
    else:
        # goal == goalnew
        _newhistedit(ui, repo, state, revs, freeargs, opts)

    _continuehistedit(ui, repo, state)
    _finishhistedit(ui, repo, state)

def _continuehistedit(ui, repo, state):
    """This function runs after either:
    - bootstrapcontinue (if the goal is 'continue')
    - _newhistedit (if the goal is 'new')
    """
    # preprocess rules so that we can hide inner folds from the user
    # and only show one editor
    actions = state.actions[:]
    for idx, (action, nextact) in enumerate(
            zip(actions, actions[1:] + [None])):
        if action.verb == 'fold' and nextact and nextact.verb == 'fold':
            state.actions[idx].__class__ = _multifold

    total = len(state.actions)
    pos = 0
    state.tr = None

    # Force an initial state file write, so the user can run --abort/continue
    # even if there's an exception before the first transaction serialize.
    state.write()
    try:
        # Don't use singletransaction by default since it rolls the entire
        # transaction back if an unexpected exception happens (like a
        # pretxncommit hook throws, or the user aborts the commit msg editor).
        if ui.configbool("histedit", "singletransaction", False):
            # Don't use a 'with' for the transaction, since actions may close
            # and reopen a transaction. For example, if the action executes an
            # external process it may choose to commit the transaction first.
            state.tr = repo.transaction('histedit')

        while state.actions:
            state.write(tr=state.tr)
            actobj = state.actions[0]
            pos += 1
            ui.progress(_("editing"), pos, actobj.torule(),
                        _('changes'), total)
            ui.debug('histedit: processing %s %s\n' % (actobj.verb,
                                                       actobj.torule()))
            parentctx, replacement_ = actobj.run()
            state.parentctxnode = parentctx.node()
            state.replacements.extend(replacement_)
            state.actions.pop(0)

        if state.tr is not None:
            state.tr.close()
    except error.InterventionRequired:
        if state.tr is not None:
            state.tr.close()
        raise
    except Exception:
        if state.tr is not None:
            state.tr.abort()
        raise

    state.write()
    ui.progress(_("editing"), None)

def _finishhistedit(ui, repo, state):
    """This action runs when histedit is finishing its session"""
    repo.ui.pushbuffer()
    hg.update(repo, state.parentctxnode, quietempty=True)
    repo.ui.popbuffer()

    mapping, tmpnodes, created, ntm = processreplacement(state)
    if mapping:
        for prec, succs in mapping.iteritems():
            if not succs:
                ui.debug('histedit: %s is dropped\n' % node.short(prec))
            else:
                ui.debug('histedit: %s is replaced by %s\n' % (
                    node.short(prec), node.short(succs[0])))
                if len(succs) > 1:
                    m = 'histedit: %s'
                    for n in succs[1:]:
                        ui.debug(m % node.short(n))

    safecleanupnode(ui, repo, 'temp', tmpnodes)

    if not state.keep:
        if mapping:
            movebookmarks(ui, repo, mapping, state.topmost, ntm)
            # TODO update mq state
        safecleanupnode(ui, repo, 'replaced', mapping)

    state.clear()
    if os.path.exists(repo.sjoin('undo')):
        os.unlink(repo.sjoin('undo'))
    if repo.vfs.exists('histedit-last-edit.txt'):
        repo.vfs.unlink('histedit-last-edit.txt')

def _aborthistedit(ui, repo, state):
    try:
        state.read()
        __, leafs, tmpnodes, __ = processreplacement(state)
        ui.debug('restore wc to old parent %s\n'
                 % node.short(state.topmost))

        # Recover our old commits if necessary
        if not state.topmost in repo and state.backupfile:
            backupfile = repo.vfs.join(state.backupfile)
            f = hg.openpath(ui, backupfile)
            gen = exchange.readbundle(ui, f, backupfile)
            with repo.transaction('histedit.abort') as tr:
                if not isinstance(gen, bundle2.unbundle20):
                    bundle2.applybundle1(repo, gen, tr,
                                         source='histedit',
                                         url='bundle:' + backupfile)
                else:
                    bundle2.applybundle(repo, gen, tr,
                                        source='histedit',
                                        url='bundle:' + backupfile)

            os.remove(backupfile)

        # check whether we should update away
        if repo.unfiltered().revs('parents() and (%n or %ln::)',
                                  state.parentctxnode, leafs | tmpnodes):
            hg.clean(repo, state.topmost, show_stats=True, quietempty=True)
        cleanupnode(ui, repo, 'created', tmpnodes)
        cleanupnode(ui, repo, 'temp', leafs)
    except Exception:
        if state.inprogress():
            ui.warn(_('warning: encountered an exception during histedit '
                      '--abort; the repository may not have been completely '
                      'cleaned up\n'))
        raise
    finally:
        state.clear()

def _edithisteditplan(ui, repo, state, rules):
    state.read()
    if not rules:
        comment = geteditcomment(ui,
                                 node.short(state.parentctxnode),
                                 node.short(state.topmost))
        rules = ruleeditor(repo, ui, state.actions, comment)
    else:
        rules = _readfile(ui, rules)
    actions = parserules(rules, state)
    ctxs = [repo[act.node]
            for act in state.actions if act.node]
    warnverifyactions(ui, repo, actions, state, ctxs)
    state.actions = actions
    state.write()

def _newhistedit(ui, repo, state, revs, freeargs, opts):
    outg = opts.get('outgoing')
    rules = opts.get('commands', '')
    force = opts.get('force')

    cmdutil.checkunfinished(repo)
    cmdutil.bailifchanged(repo)

    topmost, empty = repo.dirstate.parents()
    if outg:
        if freeargs:
            remote = freeargs[0]
        else:
            remote = None
        root = findoutgoing(ui, repo, remote, force, opts)
    else:
        rr = list(repo.set('roots(%ld)', scmutil.revrange(repo, revs)))
        if len(rr) != 1:
            raise error.Abort(_('The specified revisions must have '
                                'exactly one common root'))
        root = rr[0].node()

    revs = between(repo, root, topmost, state.keep)
    if not revs:
        raise error.Abort(_('%s is not an ancestor of working directory') %
                          node.short(root))

    ctxs = [repo[r] for r in revs]
    if not rules:
        comment = geteditcomment(ui, node.short(root), node.short(topmost))
        actions = [pick(state, r) for r in revs]
        rules = ruleeditor(repo, ui, actions, comment)
    else:
        rules = _readfile(ui, rules)
    actions = parserules(rules, state)
    warnverifyactions(ui, repo, actions, state, ctxs)

    parentctxnode = repo[root].parents()[0].node()

    state.parentctxnode = parentctxnode
    state.actions = actions
    state.topmost = topmost
    state.replacements = []

    # Create a backup so we can always abort completely.
    backupfile = None
    if not obsolete.isenabled(repo, obsolete.createmarkersopt):
        backupfile = repair._bundle(repo, [parentctxnode], [topmost], root,
                                    'histedit')
    state.backupfile = backupfile

def _getsummary(ctx):
    # a common pattern is to extract the summary but default to the empty
    # string
    summary = ctx.description() or ''
    if summary:
        summary = summary.splitlines()[0]
    return summary

def bootstrapcontinue(ui, state, opts):
    repo = state.repo

    ms = mergemod.mergestate.read(repo)
    mergeutil.checkunresolved(ms)

    if state.actions:
        actobj = state.actions.pop(0)

        if _isdirtywc(repo):
            actobj.continuedirty()
            if _isdirtywc(repo):
                abortdirty()

        parentctx, replacements = actobj.continueclean()

        state.parentctxnode = parentctx.node()
        state.replacements.extend(replacements)

    return state

def between(repo, old, new, keep):
    """select and validate the set of revisions to edit

    When keep is false, the specified set can't have children."""
    ctxs = list(repo.set('%n::%n', old, new))
    if ctxs and not keep:
        if (not obsolete.isenabled(repo, obsolete.allowunstableopt) and
            repo.revs('(%ld::) - (%ld)', ctxs, ctxs)):
            raise error.Abort(_('can only histedit a changeset together '
                                'with all its descendants'))
        if repo.revs('(%ld) and merge()', ctxs):
            raise error.Abort(_('cannot edit history that contains merges'))
        root = ctxs[0] # list is already sorted by repo.set
        if not root.mutable():
            raise error.Abort(_('cannot edit public changeset: %s') % root,
                              hint=_("see 'hg help phases' for details"))
    return [c.node() for c in ctxs]

def ruleeditor(repo, ui, actions, editcomment=""):
    """open an editor to edit rules

    rules are in the format [ [act, ctx], ...] like in state.rules
    """
    if repo.ui.configbool("experimental", "histedit.autoverb"):
        newact = util.sortdict()
        for act in actions:
            ctx = repo[act.node]
            summary = _getsummary(ctx)
            fword = summary.split(' ', 1)[0].lower()
            added = False

            # if it doesn't end with the special character '!' just skip this
            if fword.endswith('!'):
                fword = fword[:-1]
                if fword in primaryactions | secondaryactions | tertiaryactions:
                    act.verb = fword
                    # get the target summary
                    tsum = summary[len(fword) + 1:].lstrip()
                    # safe but slow: reverse iterate over the actions so we
                    # don't clash on two commits having the same summary
                    for na, l in reversed(list(newact.iteritems())):
                        actx = repo[na.node]
                        asum = _getsummary(actx)
                        if asum == tsum:
                            added = True
                            l.append(act)
                            break

            if not added:
                newact[act] = []

        # copy over and flatten the new list
        actions = []
        for na, l in newact.iteritems():
            actions.append(na)
            actions += l

    rules = '\n'.join([act.torule() for act in actions])
    rules += '\n\n'
    rules += editcomment
    rules = ui.edit(rules, ui.username(), {'prefix': 'histedit'},
                    repopath=repo.path)

    # Save edit rules in .hg/histedit-last-edit.txt in case
    # the user needs to ask for help after something
    # surprising happens.
    f = open(repo.vfs.join('histedit-last-edit.txt'), 'w')
    f.write(rules)
    f.close()

    return rules

def parserules(rules, state):
    """Read the histedit rules string and return list of action objects """
    rules = [l for l in (r.strip() for r in rules.splitlines())
             if l and not l.startswith('#')]
    actions = []
    for r in rules:
        if ' ' not in r:
            raise error.ParseError(_('malformed line "%s"') % r)
        verb, rest = r.split(' ', 1)

        if verb not in actiontable:
            raise error.ParseError(_('unknown action "%s"') % verb)

        action = actiontable[verb].fromrule(state, rest)
        actions.append(action)
    return actions

def warnverifyactions(ui, repo, actions, state, ctxs):
    try:
        verifyactions(actions, state, ctxs)
    except error.ParseError:
        if repo.vfs.exists('histedit-last-edit.txt'):
            ui.warn(_('warning: histedit rules saved '
                      'to: .hg/histedit-last-edit.txt\n'))
        raise

def verifyactions(actions, state, ctxs):
    """Verify that there exists exactly one action per given changeset and
    other constraints.

    Will abort if there are too many or too few rules, a malformed rule,
    or a rule on a changeset outside of the user-given range.
    """
    expected = set(c.node() for c in ctxs)
    seen = set()
    prev = None
    for action in actions:
        action.verify(prev, expected, seen)
        prev = action
        if action.node is not None:
            seen.add(action.node)
    missing = sorted(expected - seen) # sort to stabilize output

    if state.repo.ui.configbool('histedit', 'dropmissing'):
        if len(actions) == 0:
            raise error.ParseError(_('no rules provided'),
                    hint=_('use strip extension to remove commits'))

        drops = [drop(state, n) for n in missing]
        # put them at the beginning so they execute immediately and
        # don't show in the edit-plan in the future
        actions[:0] = drops
    elif missing:
        raise error.ParseError(_('missing rules for changeset %s') %
                node.short(missing[0]),
                hint=_('use "drop %s" to discard, see also: '
                       "'hg help -e histedit.config'")
                % node.short(missing[0]))

def adjustreplacementsfrommarkers(repo, oldreplacements):
    """Adjust replacements from obsolescence markers

    Replacements structure is originally generated based on
    histedit's state and does not account for changes that are
    not recorded there. This function fixes that by adding
    data read from obsolescence markers"""
    if not obsolete.isenabled(repo, obsolete.createmarkersopt):
        return oldreplacements

    unfi = repo.unfiltered()
    nm = unfi.changelog.nodemap
    obsstore = repo.obsstore
    newreplacements = list(oldreplacements)
    oldsuccs = [r[1] for r in oldreplacements]
    # successors that have already been added to succstocheck once
    seensuccs = set().union(*oldsuccs)  # create a set from an iterable of tuples
    succstocheck = list(seensuccs)
    while succstocheck:
        n = succstocheck.pop()
        missing = nm.get(n) is None
        markers = obsstore.successors.get(n, ())
        if missing and not markers:
            # dead end, mark it as such
            newreplacements.append((n, ()))
        for marker in markers:
            nsuccs = marker[1]
            newreplacements.append((n, nsuccs))
            for nsucc in nsuccs:
                if nsucc not in seensuccs:
                    seensuccs.add(nsucc)
                    succstocheck.append(nsucc)

    return newreplacements

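As a standalone illustration of the marker walk above, the sketch below mimics `adjustreplacementsfrommarkers` on plain values. It is a minimal sketch under stated assumptions: `successors` is a hypothetical dict standing in for `obsstore.successors` (node to list of successor tuples), and `known` stands in for the changelog nodemap.

```python
def expandreplacements(replacements, successors, known):
    # Follow successor markers transitively, like
    # adjustreplacementsfrommarkers: every marker adds an (old, succs)
    # pair, and a successor that is neither known to the repo nor
    # further rewritten is recorded as a dead end (old, ()).
    newreplacements = list(replacements)
    seen = set().union(*[succs for _, succs in replacements])
    tocheck = list(seen)
    while tocheck:
        n = tocheck.pop()
        markers = successors.get(n, ())
        if n not in known and not markers:
            newreplacements.append((n, ()))  # dead end
        for succs in markers:
            newreplacements.append((n, succs))
            for s in succs:
                if s not in seen:
                    seen.add(s)
                    tocheck.append(s)
    return newreplacements
```

For example, with markers `{'b': [('c',)]}` a replacement `('a', ('b',))` gains the transitive pair `('b', ('c',))`, because `'b'` was itself rewritten after histedit recorded its state.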
def processreplacement(state):
    """process the list of replacements to return

    1) the final mapping between original and created nodes
    2) the list of temporary nodes created by histedit
    3) the list of new commits created by histedit"""
    replacements = adjustreplacementsfrommarkers(state.repo, state.replacements)
    allsuccs = set()
    replaced = set()
    fullmapping = {}
    # initialize basic set
    # fullmapping records all operations recorded in replacement
    for rep in replacements:
        allsuccs.update(rep[1])
        replaced.add(rep[0])
        fullmapping.setdefault(rep[0], set()).update(rep[1])
    new = allsuccs - replaced
    tmpnodes = allsuccs & replaced
    # Reduce content of fullmapping into direct relation between original nodes
    # and final nodes created during history edition
    # Dropped changesets are replaced by an empty list
    toproceed = set(fullmapping)
    final = {}
    while toproceed:
        for x in list(toproceed):
            succs = fullmapping[x]
            for s in list(succs):
                if s in toproceed:
                    # non final node with unknown closure
                    # We can't process this now
                    break
                elif s in final:
                    # non final node, replace with closure
                    succs.remove(s)
                    succs.update(final[s])
            else:
                final[x] = succs
                toproceed.remove(x)
    # remove tmpnodes from final mapping
    for n in tmpnodes:
        del final[n]
    # we expect all changes involved in final to exist in the repo
    # turn `final` into list (topologically sorted)
    nm = state.repo.changelog.nodemap
    for prec, succs in final.items():
        final[prec] = sorted(succs, key=nm.get)

    # compute topmost element (necessary for bookmark)
    if new:
        newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
    elif not final:
        # Nothing rewritten at all. we won't need `newtopmost`
        # It is the same as `oldtopmost` and `processreplacement` knows it
        newtopmost = None
    else:
        # everybody died. The newtopmost is the parent of the root.
        r = state.repo.changelog.rev
        newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()

    return final, tmpnodes, new, newtopmost

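The fixed-point reduction above can be exercised on plain strings. This is a minimal sketch of the same loop, collapsing chained replacements into direct ones; it is an illustration, not the histedit implementation itself.

```python
def reducemapping(fullmapping):
    # Collapse chains (a -> b -> c) into direct relations (a -> c),
    # mirroring the reduction loop in processreplacement. Values are
    # copied so the caller's mapping is left untouched.
    fullmapping = {k: set(v) for k, v in fullmapping.items()}
    toproceed = set(fullmapping)
    final = {}
    while toproceed:
        for x in list(toproceed):
            succs = fullmapping[x]
            for s in list(succs):
                if s in toproceed:
                    break  # successor not reduced yet, retry next pass
                elif s in final:
                    succs.remove(s)
                    succs.update(final[s])
            else:
                final[x] = succs
                toproceed.remove(x)
    return final
```

A temporary node such as `'b'` below still appears as a key after reduction; `processreplacement` then deletes such tmpnodes from `final`.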
def movebookmarks(ui, repo, mapping, oldtopmost, newtopmost):
    """Move bookmark from old to newly created node"""
    if not mapping:
        # if nothing got rewritten there is no purpose for this function
        return
    moves = []
    for bk, old in sorted(repo._bookmarks.iteritems()):
        if old == oldtopmost:
            # special case to ensure the bookmark stays on tip.
            #
            # This is arguably a feature and we may only want that for the
            # active bookmark. But the behavior is kept compatible with the old
            # version for now.
            moves.append((bk, newtopmost))
            continue
        base = old
        new = mapping.get(base, None)
        if new is None:
            # nothing to move
            continue
        while not new:
            # base is killed, trying with parent
            base = repo[base].p1().node()
            new = mapping.get(base, (base,))
        moves.append((bk, new[-1]))
    if moves:
        lock = tr = None
        try:
            lock = repo.lock()
            tr = repo.transaction('histedit')
            marks = repo._bookmarks
            for mark, new in moves:
                old = marks[mark]
                ui.note(_('histedit: moving bookmarks %s from %s to %s\n')
                        % (mark, node.short(old), node.short(new)))
                marks[mark] = new
            marks.recordchange(tr)
            tr.close()
        finally:
            release(tr, lock)

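The retargeting logic above, sketched in isolation: `mapping` maps an old node to its (possibly empty) tuple of successors, and `parent` is a hypothetical dict standing in for `repo[base].p1().node()`. Both names are assumptions for the sake of the example.

```python
def bookmarktarget(old, mapping, parent):
    # Decide where a bookmark sitting on `old` should move, following
    # movebookmarks: an untouched node keeps its bookmark, and a dropped
    # node (empty successor tuple) hands it to the replacement of its
    # nearest surviving ancestor.
    new = mapping.get(old, None)
    if new is None:
        return old  # node was not rewritten; nothing to move
    base = old
    while not new:
        base = parent[base]  # base was dropped, try its parent
        new = mapping.get(base, (base,))
    return new[-1]
```

For instance, a bookmark on a dropped changeset `'c'` whose parent `'b'` was rewritten to `'b2'` ends up on `'b2'`.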
def cleanupnode(ui, repo, name, nodes):
    """strip a group of nodes from the repository

    The set of nodes to strip may contain unknown nodes."""
    ui.debug('should strip %s nodes %s\n' %
             (name, ', '.join([node.short(n) for n in nodes])))
    with repo.lock():
        # do not let filtering get in the way of the cleanse
        # we should probably get rid of obsolescence markers created during the
        # histedit, but we currently do not have such information.
        repo = repo.unfiltered()
        # Find all nodes that need to be stripped
        # (we use %lr instead of %ln to silently ignore unknown items)
        nm = repo.changelog.nodemap
        nodes = sorted(n for n in nodes if n in nm)
        roots = [c.node() for c in repo.set("roots(%ln)", nodes)]
        for c in roots:
            # We should process nodes in reverse order to strip tip most first,
            # but this triggers a bug in the changegroup hook.
            # This would reduce bundle overhead
            repair.strip(ui, repo, c)

def safecleanupnode(ui, repo, name, nodes):
    """strip or obsolete nodes

    nodes could be either a set or dict which maps to replacements.
    nodes could be unknown (outside the repo).
    """
    supportsmarkers = obsolete.isenabled(repo, obsolete.createmarkersopt)
    if supportsmarkers:
        if util.safehasattr(nodes, 'get'):
            # nodes is a dict-like mapping
            # use unfiltered repo for successors in case they are hidden
            urepo = repo.unfiltered()
            def getmarker(prec):
                succs = tuple(urepo[n] for n in nodes.get(prec, ()))
                return (repo[prec], succs)
        else:
            # nodes is a set-like
            def getmarker(prec):
                return (repo[prec], ())
        # sort by revision number because it sounds "right"
        sortednodes = sorted([n for n in nodes if n in repo],
                             key=repo.changelog.rev)
        markers = [getmarker(t) for t in sortednodes]
        if markers:
            obsolete.createmarkers(repo, markers, operation='histedit')
    else:
        return cleanupnode(ui, repo, name, nodes)

def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
    if isinstance(nodelist, str):
        nodelist = [nodelist]
    if os.path.exists(os.path.join(repo.path, 'histedit-state')):
        state = histeditstate(repo)
        state.read()
        histedit_nodes = {action.node for action
                          in state.actions if action.node}
        common_nodes = histedit_nodes & set(nodelist)
        if common_nodes:
            raise error.Abort(_("histedit in progress, can't strip %s")
                              % ', '.join(node.short(x) for x in common_nodes))
    return orig(ui, repo, nodelist, *args, **kwargs)

extensions.wrapfunction(repair, 'strip', stripwrapper)

def summaryhook(ui, repo):
    if not os.path.exists(repo.vfs.join('histedit-state')):
        return
    state = histeditstate(repo)
    state.read()
    if state.actions:
        # i18n: column positioning for "hg summary"
        ui.write(_('hist: %s (histedit --continue)\n') %
                 (ui.label(_('%d remaining'), 'histedit.remaining') %
                  len(state.actions)))

def extsetup(ui):
    cmdutil.summaryhooks.add('histedit', summaryhook)
    cmdutil.unfinishedstates.append(
        ['histedit-state', False, True, _('histedit in progress'),
         _("use 'hg histedit --continue' or 'hg histedit --abort'")])
    cmdutil.afterresolvedstates.append(
        ['histedit-state', _('hg histedit --continue')])
    if ui.configbool("experimental", "histeditng"):
        globals()['base'] = action(['base', 'b'],
                _('checkout changeset and apply further changesets from there')
                )(base)
# shelve.py - save/restore working directory state
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""save and restore changes to the working directory

The "hg shelve" command saves changes made to the working directory
and reverts those changes, resetting the working directory to a clean
state.

Later on, the "hg unshelve" command restores the changes saved by "hg
shelve". Changes can be restored even after updating to a different
parent, in which case Mercurial's merge machinery will resolve any
conflicts if necessary.

You can have more than one shelved change outstanding at a time; each
shelved change has a distinct name. For details, see the help for "hg
shelve".
"""
from __future__ import absolute_import

import collections
import errno
import itertools

from mercurial.i18n import _
from mercurial import (
    bookmarks,
    bundle2,
    bundlerepo,
    changegroup,
    cmdutil,
    error,
    exchange,
    hg,
    lock as lockmod,
    mdiff,
    merge,
    node as nodemod,
    patch,
    phases,
    registrar,
    repair,
    scmutil,
    templatefilters,
    util,
    vfs as vfsmod,
)

from . import (
    rebase,
)

cmdtable = {}
command = registrar.command(cmdtable)
# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

backupdir = 'shelve-backup'
shelvedir = 'shelved'
shelvefileextensions = ['hg', 'patch', 'oshelve']
# universal extension is present in all types of shelves
patchextension = 'patch'

# we never need the user, so we use a
# generic user for all shelve operations
shelveuser = 'shelve@localhost'

class shelvedfile(object):
    """Helper for the file storing a single shelve

    Handles common functions on shelve files (.hg/.patch) using
    the vfs layer"""
    def __init__(self, repo, name, filetype=None):
        self.repo = repo
        self.name = name
        self.vfs = vfsmod.vfs(repo.vfs.join(shelvedir))
        self.backupvfs = vfsmod.vfs(repo.vfs.join(backupdir))
        self.ui = self.repo.ui
        if filetype:
            self.fname = name + '.' + filetype
        else:
            self.fname = name

    def exists(self):
        return self.vfs.exists(self.fname)

    def filename(self):
        return self.vfs.join(self.fname)

    def backupfilename(self):
        def gennames(base):
            yield base
            base, ext = base.rsplit('.', 1)
            for i in itertools.count(1):
                yield '%s-%d.%s' % (base, i, ext)

        name = self.backupvfs.join(self.fname)
        for n in gennames(name):
            if not self.backupvfs.exists(n):
                return n

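The `gennames` helper above can be exercised on its own: it tries the plain name first and then numbered variants, and `backupfilename` takes the first candidate that does not exist yet. A self-contained copy of the generator:

```python
import itertools

def gennames(base):
    # First the unmodified name, then base-1.ext, base-2.ext, ...
    yield base
    stem, ext = base.rsplit('.', 1)
    for i in itertools.count(1):
        yield '%s-%d.%s' % (stem, i, ext)

names = gennames('default.patch')
first = [next(names) for _ in range(3)]
# first == ['default.patch', 'default-1.patch', 'default-2.patch']
```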
    def movetobackup(self):
        if not self.backupvfs.isdir():
            self.backupvfs.makedir()
        util.rename(self.filename(), self.backupfilename())

    def stat(self):
        return self.vfs.stat(self.fname)

    def opener(self, mode='rb'):
        try:
            return self.vfs(self.fname, mode)
        except IOError as err:
            if err.errno != errno.ENOENT:
                raise
            raise error.Abort(_("shelved change '%s' not found") % self.name)

    def applybundle(self):
        fp = self.opener()
        try:
            gen = exchange.readbundle(self.repo.ui, fp, self.fname, self.vfs)
            if not isinstance(gen, bundle2.unbundle20):
                bundle2.applybundle1(self.repo, gen,
                                     self.repo.currenttransaction(),
                                     source='unshelve',
                                     url='bundle:' + self.vfs.join(self.fname),
                                     targetphase=phases.secret)
            else:
                bundle2.applybundle(self.repo, gen,
                                    self.repo.currenttransaction(),
                                    source='unshelve',
                                    url='bundle:' + self.vfs.join(self.fname))
        finally:
            fp.close()

    def bundlerepo(self):
        return bundlerepo.bundlerepository(self.repo.baseui, self.repo.root,
                                           self.vfs.join(self.fname))

    def writebundle(self, bases, node):
        cgversion = changegroup.safeversion(self.repo)
        if cgversion == '01':
            btype = 'HG10BZ'
            compression = None
        else:
            btype = 'HG20'
            compression = 'BZ'

        cg = changegroup.changegroupsubset(self.repo, bases, [node], 'shelve',
                                           version=cgversion)
        bundle2.writebundle(self.ui, cg, self.fname, btype, self.vfs,
                            compression=compression)

    def writeobsshelveinfo(self, info):
        scmutil.simplekeyvaluefile(self.vfs, self.fname).write(info)

    def readobsshelveinfo(self):
        return scmutil.simplekeyvaluefile(self.vfs, self.fname).read()

class shelvedstate(object):
    """Handle persistence during unshelving operations.

    Handles saving and restoring a shelved state. Ensures that different
    versions of a shelved state are possible and handles them appropriately.
    """
    _version = 2
    _filename = 'shelvedstate'
    _keep = 'keep'
    _nokeep = 'nokeep'
    # colon is essential to differentiate from a real bookmark name
    _noactivebook = ':no-active-bookmark'

    @classmethod
    def _verifyandtransform(cls, d):
        """Some basic shelvestate syntactic verification and transformation"""
        try:
            d['originalwctx'] = nodemod.bin(d['originalwctx'])
            d['pendingctx'] = nodemod.bin(d['pendingctx'])
            d['parents'] = [nodemod.bin(h)
                            for h in d['parents'].split(' ')]
            d['nodestoremove'] = [nodemod.bin(h)
                                  for h in d['nodestoremove'].split(' ')]
        except (ValueError, TypeError, KeyError) as err:
            raise error.CorruptedState(str(err))

    @classmethod
    def _getversion(cls, repo):
        """Read version information from shelvestate file"""
        fp = repo.vfs(cls._filename)
        try:
            version = int(fp.readline().strip())
        except ValueError as err:
            raise error.CorruptedState(str(err))
        finally:
            fp.close()
        return version

    @classmethod
    def _readold(cls, repo):
        """Read the old position-based version of a shelvestate file"""
        # Order is important, because the old shelvestate file uses it
        # to determine values of fields (e.g. name is on the second line,
        # originalwctx is on the third and so forth). Please do not change.
        keys = ['version', 'name', 'originalwctx', 'pendingctx', 'parents',
                'nodestoremove', 'branchtorestore', 'keep', 'activebook']
        # this is executed only seldom, so it is not a big deal
        # that we open this file twice
        fp = repo.vfs(cls._filename)
        d = {}
        try:
            for key in keys:
                d[key] = fp.readline().strip()
        finally:
            fp.close()
        return d

    @classmethod
    def load(cls, repo):
        version = cls._getversion(repo)
        if version < cls._version:
            d = cls._readold(repo)
        elif version == cls._version:
            d = scmutil.simplekeyvaluefile(repo.vfs, cls._filename)\
                .read(firstlinenonkeyval=True)
        else:
            raise error.Abort(_('this version of shelve is incompatible '
                                'with the version used in this repo'))

        cls._verifyandtransform(d)
        try:
            obj = cls()
            obj.name = d['name']
            obj.wctx = repo[d['originalwctx']]
            obj.pendingctx = repo[d['pendingctx']]
            obj.parents = d['parents']
            obj.nodestoremove = d['nodestoremove']
            obj.branchtorestore = d.get('branchtorestore', '')
            obj.keep = d.get('keep') == cls._keep
            obj.activebookmark = ''
            if d.get('activebook', '') != cls._noactivebook:
                obj.activebookmark = d.get('activebook', '')
        except (error.RepoLookupError, KeyError) as err:
            raise error.CorruptedState(str(err))

        return obj

    @classmethod
    def save(cls, repo, name, originalwctx, pendingctx, nodestoremove,
             branchtorestore, keep=False, activebook=''):
        info = {
            "name": name,
            "originalwctx": nodemod.hex(originalwctx.node()),
            "pendingctx": nodemod.hex(pendingctx.node()),
            "parents": ' '.join([nodemod.hex(p)
                                 for p in repo.dirstate.parents()]),
            "nodestoremove": ' '.join([nodemod.hex(n)
                                       for n in nodestoremove]),
            "branchtorestore": branchtorestore,
            "keep": cls._keep if keep else cls._nokeep,
            "activebook": activebook or cls._noactivebook
        }
        scmutil.simplekeyvaluefile(repo.vfs, cls._filename)\
            .write(info, firstline=str(cls._version))

    @classmethod
    def clear(cls, repo):
        repo.vfs.unlinkpath(cls._filename, ignoremissing=True)

def cleanupoldbackups(repo):
    vfs = vfsmod.vfs(repo.vfs.join(backupdir))
    maxbackups = repo.ui.configint('shelve', 'maxbackups', 10)
    hgfiles = [f for f in vfs.listdir()
               if f.endswith('.' + patchextension)]
    hgfiles = sorted([(vfs.stat(f).st_mtime, f) for f in hgfiles])
    if 0 < maxbackups and maxbackups < len(hgfiles):
        bordermtime = hgfiles[-maxbackups][0]
    else:
        bordermtime = None
    for mtime, f in hgfiles[:len(hgfiles) - maxbackups]:
        if mtime == bordermtime:
            # keep it, because timestamp can't decide exact order of backups
            continue
        base = f[:-(1 + len(patchextension))]
        for ext in shelvefileextensions:
            vfs.tryunlink(base + '.' + ext)

293 def _backupactivebookmark(repo):
292 activebookmark = repo._activebookmark
294 activebookmark = repo._activebookmark
293 if activebookmark:
295 if activebookmark:
294 bookmarks.deactivate(repo)
296 bookmarks.deactivate(repo)
295 return activebookmark
297 return activebookmark
296
298
297 def _restoreactivebookmark(repo, mark):
299 def _restoreactivebookmark(repo, mark):
298 if mark:
300 if mark:
299 bookmarks.activate(repo, mark)
301 bookmarks.activate(repo, mark)
300
302
301 def _aborttransaction(repo):
303 def _aborttransaction(repo):
302 '''Abort current transaction for shelve/unshelve, but keep dirstate
304 '''Abort current transaction for shelve/unshelve, but keep dirstate
303 '''
305 '''
304 tr = repo.currenttransaction()
306 tr = repo.currenttransaction()
305 repo.dirstate.savebackup(tr, suffix='.shelve')
307 repo.dirstate.savebackup(tr, suffix='.shelve')
306 tr.abort()
308 tr.abort()
307 repo.dirstate.restorebackup(None, suffix='.shelve')
309 repo.dirstate.restorebackup(None, suffix='.shelve')
308
310
309 def createcmd(ui, repo, pats, opts):
311 def createcmd(ui, repo, pats, opts):
310 """subcommand that creates a new shelve"""
312 """subcommand that creates a new shelve"""
311 with repo.wlock():
313 with repo.wlock():
312 cmdutil.checkunfinished(repo)
314 cmdutil.checkunfinished(repo)
313 return _docreatecmd(ui, repo, pats, opts)
315 return _docreatecmd(ui, repo, pats, opts)
314
316
315 def getshelvename(repo, parent, opts):
317 def getshelvename(repo, parent, opts):
316 """Decide on the name this shelve is going to have"""
318 """Decide on the name this shelve is going to have"""
317 def gennames():
319 def gennames():
318 yield label
320 yield label
319 for i in itertools.count(1):
321 for i in itertools.count(1):
320 yield '%s-%02d' % (label, i)
322 yield '%s-%02d' % (label, i)
321 name = opts.get('name')
323 name = opts.get('name')
322 label = repo._activebookmark or parent.branch() or 'default'
324 label = repo._activebookmark or parent.branch() or 'default'
323 # slashes aren't allowed in filenames, therefore we rename it
325 # slashes aren't allowed in filenames, therefore we rename it
324 label = label.replace('/', '_')
326 label = label.replace('/', '_')
325 label = label.replace('\\', '_')
327 label = label.replace('\\', '_')
326 # filenames must not start with '.' as it should not be hidden
328 # filenames must not start with '.' as it should not be hidden
327 if label.startswith('.'):
329 if label.startswith('.'):
328 label = label.replace('.', '_', 1)
330 label = label.replace('.', '_', 1)
329
331
330 if name:
332 if name:
331 if shelvedfile(repo, name, patchextension).exists():
333 if shelvedfile(repo, name, patchextension).exists():
332 e = _("a shelved change named '%s' already exists") % name
334 e = _("a shelved change named '%s' already exists") % name
333 raise error.Abort(e)
335 raise error.Abort(e)
334
336
335 # ensure we are not creating a subdirectory or a hidden file
337 # ensure we are not creating a subdirectory or a hidden file
336 if '/' in name or '\\' in name:
338 if '/' in name or '\\' in name:
337 raise error.Abort(_('shelved change names can not contain slashes'))
339 raise error.Abort(_('shelved change names can not contain slashes'))
338 if name.startswith('.'):
340 if name.startswith('.'):
339 raise error.Abort(_("shelved change names can not start with '.'"))
341 raise error.Abort(_("shelved change names can not start with '.'"))
340
342
341 else:
343 else:
342 for n in gennames():
344 for n in gennames():
343 if not shelvedfile(repo, n, patchextension).exists():
345 if not shelvedfile(repo, n, patchextension).exists():
344 name = n
346 name = n
345 break
347 break
346
348
347 return name
349 return name
348
350
349 def mutableancestors(ctx):
351 def mutableancestors(ctx):
350 """return all mutable ancestors for ctx (included)
352 """return all mutable ancestors for ctx (included)
351
353
352 Much faster than the revset ancestors(ctx) & draft()"""
354 Much faster than the revset ancestors(ctx) & draft()"""
353 seen = {nodemod.nullrev}
355 seen = {nodemod.nullrev}
354 visit = collections.deque()
356 visit = collections.deque()
355 visit.append(ctx)
357 visit.append(ctx)
356 while visit:
358 while visit:
357 ctx = visit.popleft()
359 ctx = visit.popleft()
358 yield ctx.node()
360 yield ctx.node()
359 for parent in ctx.parents():
361 for parent in ctx.parents():
360 rev = parent.rev()
362 rev = parent.rev()
361 if rev not in seen:
363 if rev not in seen:
362 seen.add(rev)
364 seen.add(rev)
363 if parent.mutable():
365 if parent.mutable():
364 visit.append(parent)
366 visit.append(parent)
365
367
366 def getcommitfunc(extra, interactive, editor=False):
368 def getcommitfunc(extra, interactive, editor=False):
367 def commitfunc(ui, repo, message, match, opts):
369 def commitfunc(ui, repo, message, match, opts):
368 hasmq = util.safehasattr(repo, 'mq')
370 hasmq = util.safehasattr(repo, 'mq')
369 if hasmq:
371 if hasmq:
370 saved, repo.mq.checkapplied = repo.mq.checkapplied, False
372 saved, repo.mq.checkapplied = repo.mq.checkapplied, False
371 overrides = {('phases', 'new-commit'): phases.secret}
373 overrides = {('phases', 'new-commit'): phases.secret}
372 try:
374 try:
373 editor_ = False
375 editor_ = False
374 if editor:
376 if editor:
375 editor_ = cmdutil.getcommiteditor(editform='shelve.shelve',
377 editor_ = cmdutil.getcommiteditor(editform='shelve.shelve',
376 **opts)
378 **opts)
377 with repo.ui.configoverride(overrides):
379 with repo.ui.configoverride(overrides):
378 return repo.commit(message, shelveuser, opts.get('date'),
380 return repo.commit(message, shelveuser, opts.get('date'),
379 match, editor=editor_, extra=extra)
381 match, editor=editor_, extra=extra)
380 finally:
382 finally:
381 if hasmq:
383 if hasmq:
382 repo.mq.checkapplied = saved
384 repo.mq.checkapplied = saved
383
385
384 def interactivecommitfunc(ui, repo, *pats, **opts):
386 def interactivecommitfunc(ui, repo, *pats, **opts):
385 match = scmutil.match(repo['.'], pats, {})
387 match = scmutil.match(repo['.'], pats, {})
386 message = opts['message']
388 message = opts['message']
387 return commitfunc(ui, repo, message, match, opts)
389 return commitfunc(ui, repo, message, match, opts)
388
390
389 return interactivecommitfunc if interactive else commitfunc
391 return interactivecommitfunc if interactive else commitfunc
390
392
391 def _nothingtoshelvemessaging(ui, repo, pats, opts):
393 def _nothingtoshelvemessaging(ui, repo, pats, opts):
392 stat = repo.status(match=scmutil.match(repo[None], pats, opts))
394 stat = repo.status(match=scmutil.match(repo[None], pats, opts))
393 if stat.deleted:
395 if stat.deleted:
394 ui.status(_("nothing changed (%d missing files, see "
396 ui.status(_("nothing changed (%d missing files, see "
395 "'hg status')\n") % len(stat.deleted))
397 "'hg status')\n") % len(stat.deleted))
396 else:
398 else:
397 ui.status(_("nothing changed\n"))
399 ui.status(_("nothing changed\n"))
398
400
399 def _shelvecreatedcommit(repo, node, name):
401 def _shelvecreatedcommit(repo, node, name):
400 bases = list(mutableancestors(repo[node]))
402 bases = list(mutableancestors(repo[node]))
401 shelvedfile(repo, name, 'hg').writebundle(bases, node)
403 shelvedfile(repo, name, 'hg').writebundle(bases, node)
402 cmdutil.export(repo, [node],
404 cmdutil.export(repo, [node],
403 fp=shelvedfile(repo, name, patchextension).opener('wb'),
405 fp=shelvedfile(repo, name, patchextension).opener('wb'),
404 opts=mdiff.diffopts(git=True))
406 opts=mdiff.diffopts(git=True))
405
407
406 def _includeunknownfiles(repo, pats, opts, extra):
408 def _includeunknownfiles(repo, pats, opts, extra):
407 s = repo.status(match=scmutil.match(repo[None], pats, opts),
409 s = repo.status(match=scmutil.match(repo[None], pats, opts),
408 unknown=True)
410 unknown=True)
409 if s.unknown:
411 if s.unknown:
410 extra['shelve_unknown'] = '\0'.join(s.unknown)
412 extra['shelve_unknown'] = '\0'.join(s.unknown)
411 repo[None].add(s.unknown)
413 repo[None].add(s.unknown)
412
414
413 def _finishshelve(repo):
415 def _finishshelve(repo):
414 _aborttransaction(repo)
416 _aborttransaction(repo)
415
417
416 def _docreatecmd(ui, repo, pats, opts):
418 def _docreatecmd(ui, repo, pats, opts):
417 wctx = repo[None]
419 wctx = repo[None]
418 parents = wctx.parents()
420 parents = wctx.parents()
419 if len(parents) > 1:
421 if len(parents) > 1:
420 raise error.Abort(_('cannot shelve while merging'))
422 raise error.Abort(_('cannot shelve while merging'))
421 parent = parents[0]
423 parent = parents[0]
422 origbranch = wctx.branch()
424 origbranch = wctx.branch()
423
425
424 if parent.node() != nodemod.nullid:
426 if parent.node() != nodemod.nullid:
425 desc = "changes to: %s" % parent.description().split('\n', 1)[0]
427 desc = "changes to: %s" % parent.description().split('\n', 1)[0]
426 else:
428 else:
427 desc = '(changes in empty repository)'
429 desc = '(changes in empty repository)'
428
430
429 if not opts.get('message'):
431 if not opts.get('message'):
430 opts['message'] = desc
432 opts['message'] = desc
431
433
432 lock = tr = activebookmark = None
434 lock = tr = activebookmark = None
433 try:
435 try:
434 lock = repo.lock()
436 lock = repo.lock()
435
437
436 # use an uncommitted transaction to generate the bundle to avoid
438 # use an uncommitted transaction to generate the bundle to avoid
437 # pull races. ensure we don't print the abort message to stderr.
439 # pull races. ensure we don't print the abort message to stderr.
438 tr = repo.transaction('commit', report=lambda x: None)
440 tr = repo.transaction('commit', report=lambda x: None)
439
441
440 interactive = opts.get('interactive', False)
442 interactive = opts.get('interactive', False)
441 includeunknown = (opts.get('unknown', False) and
443 includeunknown = (opts.get('unknown', False) and
442 not opts.get('addremove', False))
444 not opts.get('addremove', False))
443
445
444 name = getshelvename(repo, parent, opts)
446 name = getshelvename(repo, parent, opts)
445 activebookmark = _backupactivebookmark(repo)
447 activebookmark = _backupactivebookmark(repo)
446 extra = {}
448 extra = {}
447 if includeunknown:
449 if includeunknown:
448 _includeunknownfiles(repo, pats, opts, extra)
450 _includeunknownfiles(repo, pats, opts, extra)
449
451
450 if _iswctxonnewbranch(repo) and not _isbareshelve(pats, opts):
452 if _iswctxonnewbranch(repo) and not _isbareshelve(pats, opts):
451 # In non-bare shelve we don't store newly created branch
453 # In non-bare shelve we don't store newly created branch
452 # at bundled commit
454 # at bundled commit
453 repo.dirstate.setbranch(repo['.'].branch())
455 repo.dirstate.setbranch(repo['.'].branch())
454
456
455 commitfunc = getcommitfunc(extra, interactive, editor=True)
457 commitfunc = getcommitfunc(extra, interactive, editor=True)
456 if not interactive:
458 if not interactive:
457 node = cmdutil.commit(ui, repo, commitfunc, pats, opts)
459 node = cmdutil.commit(ui, repo, commitfunc, pats, opts)
458 else:
460 else:
459 node = cmdutil.dorecord(ui, repo, commitfunc, None,
461 node = cmdutil.dorecord(ui, repo, commitfunc, None,
460 False, cmdutil.recordfilter, *pats,
462 False, cmdutil.recordfilter, *pats,
461 **opts)
463 **opts)
462 if not node:
464 if not node:
463 _nothingtoshelvemessaging(ui, repo, pats, opts)
465 _nothingtoshelvemessaging(ui, repo, pats, opts)
464 return 1
466 return 1
465
467
466 _shelvecreatedcommit(repo, node, name)
468 _shelvecreatedcommit(repo, node, name)
467
469
468 if ui.formatted():
470 if ui.formatted():
469 desc = util.ellipsis(desc, ui.termwidth())
471 desc = util.ellipsis(desc, ui.termwidth())
470 ui.status(_('shelved as %s\n') % name)
472 ui.status(_('shelved as %s\n') % name)
471 hg.update(repo, parent.node())
473 hg.update(repo, parent.node())
472 if origbranch != repo['.'].branch() and not _isbareshelve(pats, opts):
474 if origbranch != repo['.'].branch() and not _isbareshelve(pats, opts):
473 repo.dirstate.setbranch(origbranch)
475 repo.dirstate.setbranch(origbranch)
474
476
475 _finishshelve(repo)
477 _finishshelve(repo)
476 finally:
478 finally:
477 _restoreactivebookmark(repo, activebookmark)
479 _restoreactivebookmark(repo, activebookmark)
478 lockmod.release(tr, lock)
480 lockmod.release(tr, lock)
479
481
480 def _isbareshelve(pats, opts):
482 def _isbareshelve(pats, opts):
481 return (not pats
483 return (not pats
482 and not opts.get('interactive', False)
484 and not opts.get('interactive', False)
483 and not opts.get('include', False)
485 and not opts.get('include', False)
484 and not opts.get('exclude', False))
486 and not opts.get('exclude', False))
485
487
486 def _iswctxonnewbranch(repo):
488 def _iswctxonnewbranch(repo):
487 return repo[None].branch() != repo['.'].branch()
489 return repo[None].branch() != repo['.'].branch()
488
490
489 def cleanupcmd(ui, repo):
491 def cleanupcmd(ui, repo):
490 """subcommand that deletes all shelves"""
492 """subcommand that deletes all shelves"""
491
493
492 with repo.wlock():
494 with repo.wlock():
493 for (name, _type) in repo.vfs.readdir(shelvedir):
495 for (name, _type) in repo.vfs.readdir(shelvedir):
494 suffix = name.rsplit('.', 1)[-1]
496 suffix = name.rsplit('.', 1)[-1]
495 if suffix in shelvefileextensions:
497 if suffix in shelvefileextensions:
496 shelvedfile(repo, name).movetobackup()
498 shelvedfile(repo, name).movetobackup()
497 cleanupoldbackups(repo)
499 cleanupoldbackups(repo)
498
500
499 def deletecmd(ui, repo, pats):
501 def deletecmd(ui, repo, pats):
500 """subcommand that deletes a specific shelve"""
502 """subcommand that deletes a specific shelve"""
501 if not pats:
503 if not pats:
502 raise error.Abort(_('no shelved changes specified!'))
504 raise error.Abort(_('no shelved changes specified!'))
503 with repo.wlock():
505 with repo.wlock():
504 try:
506 try:
505 for name in pats:
507 for name in pats:
506 for suffix in shelvefileextensions:
508 for suffix in shelvefileextensions:
507 shfile = shelvedfile(repo, name, suffix)
509 shfile = shelvedfile(repo, name, suffix)
508 # patch file is necessary, as it should
510 # patch file is necessary, as it should
509 # be present for any kind of shelve,
511 # be present for any kind of shelve,
510 # but the .hg file is optional as in future we
512 # but the .hg file is optional as in future we
511 # will add obsolete shelve with does not create a
513 # will add obsolete shelve with does not create a
512 # bundle
514 # bundle
513 if shfile.exists() or suffix == patchextension:
515 if shfile.exists() or suffix == patchextension:
514 shfile.movetobackup()
516 shfile.movetobackup()
515 cleanupoldbackups(repo)
517 cleanupoldbackups(repo)
516 except OSError as err:
518 except OSError as err:
517 if err.errno != errno.ENOENT:
519 if err.errno != errno.ENOENT:
518 raise
520 raise
519 raise error.Abort(_("shelved change '%s' not found") % name)
521 raise error.Abort(_("shelved change '%s' not found") % name)
520
522
521 def listshelves(repo):
523 def listshelves(repo):
522 """return all shelves in repo as list of (time, filename)"""
524 """return all shelves in repo as list of (time, filename)"""
523 try:
525 try:
524 names = repo.vfs.readdir(shelvedir)
526 names = repo.vfs.readdir(shelvedir)
525 except OSError as err:
527 except OSError as err:
526 if err.errno != errno.ENOENT:
528 if err.errno != errno.ENOENT:
527 raise
529 raise
528 return []
530 return []
529 info = []
531 info = []
530 for (name, _type) in names:
532 for (name, _type) in names:
531 pfx, sfx = name.rsplit('.', 1)
533 pfx, sfx = name.rsplit('.', 1)
532 if not pfx or sfx != patchextension:
534 if not pfx or sfx != patchextension:
533 continue
535 continue
534 st = shelvedfile(repo, name).stat()
536 st = shelvedfile(repo, name).stat()
535 info.append((st.st_mtime, shelvedfile(repo, pfx).filename()))
537 info.append((st.st_mtime, shelvedfile(repo, pfx).filename()))
536 return sorted(info, reverse=True)
538 return sorted(info, reverse=True)
537
539
538 def listcmd(ui, repo, pats, opts):
540 def listcmd(ui, repo, pats, opts):
539 """subcommand that displays the list of shelves"""
541 """subcommand that displays the list of shelves"""
540 pats = set(pats)
542 pats = set(pats)
541 width = 80
543 width = 80
542 if not ui.plain():
544 if not ui.plain():
543 width = ui.termwidth()
545 width = ui.termwidth()
544 namelabel = 'shelve.newest'
546 namelabel = 'shelve.newest'
545 ui.pager('shelve')
547 ui.pager('shelve')
546 for mtime, name in listshelves(repo):
548 for mtime, name in listshelves(repo):
547 sname = util.split(name)[1]
549 sname = util.split(name)[1]
548 if pats and sname not in pats:
550 if pats and sname not in pats:
549 continue
551 continue
550 ui.write(sname, label=namelabel)
552 ui.write(sname, label=namelabel)
551 namelabel = 'shelve.name'
553 namelabel = 'shelve.name'
552 if ui.quiet:
554 if ui.quiet:
553 ui.write('\n')
555 ui.write('\n')
554 continue
556 continue
555 ui.write(' ' * (16 - len(sname)))
557 ui.write(' ' * (16 - len(sname)))
556 used = 16
558 used = 16
557 age = '(%s)' % templatefilters.age(util.makedate(mtime), abbrev=True)
559 age = '(%s)' % templatefilters.age(util.makedate(mtime), abbrev=True)
558 ui.write(age, label='shelve.age')
560 ui.write(age, label='shelve.age')
559 ui.write(' ' * (12 - len(age)))
561 ui.write(' ' * (12 - len(age)))
560 used += 12
562 used += 12
561 with open(name + '.' + patchextension, 'rb') as fp:
563 with open(name + '.' + patchextension, 'rb') as fp:
562 while True:
564 while True:
563 line = fp.readline()
565 line = fp.readline()
564 if not line:
566 if not line:
565 break
567 break
566 if not line.startswith('#'):
568 if not line.startswith('#'):
567 desc = line.rstrip()
569 desc = line.rstrip()
568 if ui.formatted():
570 if ui.formatted():
569 desc = util.ellipsis(desc, width - used)
571 desc = util.ellipsis(desc, width - used)
570 ui.write(desc)
572 ui.write(desc)
571 break
573 break
572 ui.write('\n')
574 ui.write('\n')
573 if not (opts['patch'] or opts['stat']):
575 if not (opts['patch'] or opts['stat']):
574 continue
576 continue
575 difflines = fp.readlines()
577 difflines = fp.readlines()
576 if opts['patch']:
578 if opts['patch']:
577 for chunk, label in patch.difflabel(iter, difflines):
579 for chunk, label in patch.difflabel(iter, difflines):
578 ui.write(chunk, label=label)
580 ui.write(chunk, label=label)
579 if opts['stat']:
581 if opts['stat']:
580 for chunk, label in patch.diffstatui(difflines, width=width):
582 for chunk, label in patch.diffstatui(difflines, width=width):
581 ui.write(chunk, label=label)
583 ui.write(chunk, label=label)
582
584
583 def patchcmds(ui, repo, pats, opts, subcommand):
585 def patchcmds(ui, repo, pats, opts, subcommand):
584 """subcommand that displays shelves"""
586 """subcommand that displays shelves"""
585 if len(pats) == 0:
587 if len(pats) == 0:
586 raise error.Abort(_("--%s expects at least one shelf") % subcommand)
588 raise error.Abort(_("--%s expects at least one shelf") % subcommand)
587
589
588 for shelfname in pats:
590 for shelfname in pats:
589 if not shelvedfile(repo, shelfname, patchextension).exists():
591 if not shelvedfile(repo, shelfname, patchextension).exists():
590 raise error.Abort(_("cannot find shelf %s") % shelfname)
592 raise error.Abort(_("cannot find shelf %s") % shelfname)
591
593
592 listcmd(ui, repo, pats, opts)
594 listcmd(ui, repo, pats, opts)
593
595
594 def checkparents(repo, state):
596 def checkparents(repo, state):
595 """check parent while resuming an unshelve"""
597 """check parent while resuming an unshelve"""
596 if state.parents != repo.dirstate.parents():
598 if state.parents != repo.dirstate.parents():
597 raise error.Abort(_('working directory parents do not match unshelve '
599 raise error.Abort(_('working directory parents do not match unshelve '
598 'state'))
600 'state'))
599
601
600 def pathtofiles(repo, files):
602 def pathtofiles(repo, files):
601 cwd = repo.getcwd()
603 cwd = repo.getcwd()
602 return [repo.pathto(f, cwd) for f in files]
604 return [repo.pathto(f, cwd) for f in files]
603
605
604 def unshelveabort(ui, repo, state, opts):
606 def unshelveabort(ui, repo, state, opts):
605 """subcommand that abort an in-progress unshelve"""
607 """subcommand that abort an in-progress unshelve"""
606 with repo.lock():
608 with repo.lock():
607 try:
609 try:
608 checkparents(repo, state)
610 checkparents(repo, state)
609
611
610 repo.vfs.rename('unshelverebasestate', 'rebasestate')
612 repo.vfs.rename('unshelverebasestate', 'rebasestate')
611 try:
613 try:
612 rebase.rebase(ui, repo, **{
614 rebase.rebase(ui, repo, **{
613 'abort' : True
615 'abort' : True
614 })
616 })
615 except Exception:
617 except Exception:
616 repo.vfs.rename('rebasestate', 'unshelverebasestate')
618 repo.vfs.rename('rebasestate', 'unshelverebasestate')
617 raise
619 raise
618
620
619 mergefiles(ui, repo, state.wctx, state.pendingctx)
621 mergefiles(ui, repo, state.wctx, state.pendingctx)
620 repair.strip(ui, repo, state.nodestoremove, backup=False,
622 repair.strip(ui, repo, state.nodestoremove, backup=False,
621 topic='shelve')
623 topic='shelve')
622 finally:
624 finally:
623 shelvedstate.clear(repo)
625 shelvedstate.clear(repo)
624 ui.warn(_("unshelve of '%s' aborted\n") % state.name)
626 ui.warn(_("unshelve of '%s' aborted\n") % state.name)
625
627
626 def mergefiles(ui, repo, wctx, shelvectx):
628 def mergefiles(ui, repo, wctx, shelvectx):
627 """updates to wctx and merges the changes from shelvectx into the
629 """updates to wctx and merges the changes from shelvectx into the
628 dirstate."""
630 dirstate."""
629 with ui.configoverride({('ui', 'quiet'): True}):
631 with ui.configoverride({('ui', 'quiet'): True}):
630 hg.update(repo, wctx.node())
632 hg.update(repo, wctx.node())
631 files = []
633 files = []
632 files.extend(shelvectx.files())
634 files.extend(shelvectx.files())
633 files.extend(shelvectx.parents()[0].files())
635 files.extend(shelvectx.parents()[0].files())
634
636
635 # revert will overwrite unknown files, so move them out of the way
637 # revert will overwrite unknown files, so move them out of the way
636 for file in repo.status(unknown=True).unknown:
638 for file in repo.status(unknown=True).unknown:
637 if file in files:
639 if file in files:
638 util.rename(file, scmutil.origpath(ui, repo, file))
640 util.rename(file, scmutil.origpath(ui, repo, file))
639 ui.pushbuffer(True)
641 ui.pushbuffer(True)
640 cmdutil.revert(ui, repo, shelvectx, repo.dirstate.parents(),
642 cmdutil.revert(ui, repo, shelvectx, repo.dirstate.parents(),
641 *pathtofiles(repo, files),
643 *pathtofiles(repo, files),
642 **{'no_backup': True})
644 **{'no_backup': True})
643 ui.popbuffer()
645 ui.popbuffer()
644
646
645 def restorebranch(ui, repo, branchtorestore):
647 def restorebranch(ui, repo, branchtorestore):
646 if branchtorestore and branchtorestore != repo.dirstate.branch():
648 if branchtorestore and branchtorestore != repo.dirstate.branch():
647 repo.dirstate.setbranch(branchtorestore)
649 repo.dirstate.setbranch(branchtorestore)
648 ui.status(_('marked working directory as branch %s\n')
650 ui.status(_('marked working directory as branch %s\n')
649 % branchtorestore)
651 % branchtorestore)
650
652
651 def unshelvecleanup(ui, repo, name, opts):
653 def unshelvecleanup(ui, repo, name, opts):
652 """remove related files after an unshelve"""
654 """remove related files after an unshelve"""
653 if not opts.get('keep'):
655 if not opts.get('keep'):
654 for filetype in shelvefileextensions:
656 for filetype in shelvefileextensions:
655 shfile = shelvedfile(repo, name, filetype)
657 shfile = shelvedfile(repo, name, filetype)
656 if shfile.exists():
658 if shfile.exists():
657 shfile.movetobackup()
659 shfile.movetobackup()
658 cleanupoldbackups(repo)
660 cleanupoldbackups(repo)
659
661
660 def unshelvecontinue(ui, repo, state, opts):
662 def unshelvecontinue(ui, repo, state, opts):
661 """subcommand to continue an in-progress unshelve"""
663 """subcommand to continue an in-progress unshelve"""
662 # We're finishing off a merge. First parent is our original
664 # We're finishing off a merge. First parent is our original
663 # parent, second is the temporary "fake" commit we're unshelving.
665 # parent, second is the temporary "fake" commit we're unshelving.
664 with repo.lock():
666 with repo.lock():
665 checkparents(repo, state)
667 checkparents(repo, state)
666 ms = merge.mergestate.read(repo)
668 ms = merge.mergestate.read(repo)
667 if [f for f in ms if ms[f] == 'u']:
669 if [f for f in ms if ms[f] == 'u']:
668 raise error.Abort(
670 raise error.Abort(
669 _("unresolved conflicts, can't continue"),
671 _("unresolved conflicts, can't continue"),
670 hint=_("see 'hg resolve', then 'hg unshelve --continue'"))
672 hint=_("see 'hg resolve', then 'hg unshelve --continue'"))
671
673
672 repo.vfs.rename('unshelverebasestate', 'rebasestate')
674 repo.vfs.rename('unshelverebasestate', 'rebasestate')
673 try:
675 try:
674 rebase.rebase(ui, repo, **{
676 rebase.rebase(ui, repo, **{
675 'continue' : True
677 'continue' : True
676 })
678 })
677 except Exception:
679 except Exception:
678 repo.vfs.rename('rebasestate', 'unshelverebasestate')
680 repo.vfs.rename('rebasestate', 'unshelverebasestate')
679 raise
681 raise
680
682
681 shelvectx = repo['tip']
683 shelvectx = repo['tip']
682 if state.pendingctx not in shelvectx.parents():
684 if state.pendingctx not in shelvectx.parents():
683 # rebase was a no-op, so it produced no child commit
685 # rebase was a no-op, so it produced no child commit
684 shelvectx = state.pendingctx
686 shelvectx = state.pendingctx
685 else:
687 else:
686 # only strip the shelvectx if the rebase produced it
688 # only strip the shelvectx if the rebase produced it
687 state.nodestoremove.append(shelvectx.node())
689 state.nodestoremove.append(shelvectx.node())
688
690
689 mergefiles(ui, repo, state.wctx, shelvectx)
691 mergefiles(ui, repo, state.wctx, shelvectx)
690 restorebranch(ui, repo, state.branchtorestore)
692 restorebranch(ui, repo, state.branchtorestore)
691
693
692 repair.strip(ui, repo, state.nodestoremove, backup=False,
694 repair.strip(ui, repo, state.nodestoremove, backup=False,
693 topic='shelve')
695 topic='shelve')
694 _restoreactivebookmark(repo, state.activebookmark)
696 _restoreactivebookmark(repo, state.activebookmark)
695 shelvedstate.clear(repo)
697 shelvedstate.clear(repo)
696 unshelvecleanup(ui, repo, state.name, opts)
698 unshelvecleanup(ui, repo, state.name, opts)
697 ui.status(_("unshelve of '%s' complete\n") % state.name)
699 ui.status(_("unshelve of '%s' complete\n") % state.name)
698
700
699 def _commitworkingcopychanges(ui, repo, opts, tmpwctx):
701 def _commitworkingcopychanges(ui, repo, opts, tmpwctx):
700 """Temporarily commit working copy changes before moving unshelve commit"""
702 """Temporarily commit working copy changes before moving unshelve commit"""
701 # Store pending changes in a commit and remember added in case a shelve
703 # Store pending changes in a commit and remember added in case a shelve
702 # contains unknown files that are part of the pending change
704 # contains unknown files that are part of the pending change
703 s = repo.status()
705 s = repo.status()
704 addedbefore = frozenset(s.added)
706 addedbefore = frozenset(s.added)
705 if not (s.modified or s.added or s.removed):
707 if not (s.modified or s.added or s.removed):
706 return tmpwctx, addedbefore
708 return tmpwctx, addedbefore
707 ui.status(_("temporarily committing pending changes "
709 ui.status(_("temporarily committing pending changes "
708 "(restore with 'hg unshelve --abort')\n"))
710 "(restore with 'hg unshelve --abort')\n"))
709 commitfunc = getcommitfunc(extra=None, interactive=False,
711 commitfunc = getcommitfunc(extra=None, interactive=False,
710 editor=False)
712 editor=False)
711 tempopts = {}
713 tempopts = {}
712 tempopts['message'] = "pending changes temporary commit"
714 tempopts['message'] = "pending changes temporary commit"
713 tempopts['date'] = opts.get('date')
715 tempopts['date'] = opts.get('date')
    with ui.configoverride({('ui', 'quiet'): True}):
        node = cmdutil.commit(ui, repo, commitfunc, [], tempopts)
    tmpwctx = repo[node]
    return tmpwctx, addedbefore

def _unshelverestorecommit(ui, repo, basename):
    """Recreate commit in the repository during the unshelve"""
    with ui.configoverride({('ui', 'quiet'): True}):
        shelvedfile(repo, basename, 'hg').applybundle()
    shelvectx = repo['tip']
    return repo, shelvectx

def _rebaserestoredcommit(ui, repo, opts, tr, oldtiprev, basename, pctx,
                          tmpwctx, shelvectx, branchtorestore,
                          activebookmark):
    """Rebase restored commit from its original location to a destination"""
    # If the shelve is not immediately on top of the commit
    # we'll be merging with, rebase it to be on top.
    if tmpwctx.node() == shelvectx.parents()[0].node():
        return shelvectx

    ui.status(_('rebasing shelved changes\n'))
    try:
        rebase.rebase(ui, repo, **{
            'rev': [shelvectx.rev()],
            'dest': str(tmpwctx.rev()),
            'keep': True,
            'tool': opts.get('tool', ''),
        })
    except error.InterventionRequired:
        tr.close()

        nodestoremove = [repo.changelog.node(rev)
                         for rev in xrange(oldtiprev, len(repo))]
        shelvedstate.save(repo, basename, pctx, tmpwctx, nodestoremove,
                          branchtorestore, opts.get('keep'), activebookmark)

        repo.vfs.rename('rebasestate', 'unshelverebasestate')
        raise error.InterventionRequired(
            _("unresolved conflicts (see 'hg resolve', then "
              "'hg unshelve --continue')"))

    # refresh ctx after rebase completes
    shelvectx = repo['tip']

    if tmpwctx not in shelvectx.parents():
        # rebase was a no-op, so it produced no child commit
        shelvectx = tmpwctx
    return shelvectx

def _forgetunknownfiles(repo, shelvectx, addedbefore):
    # Forget any files that were unknown before the shelve, unknown before
    # unshelve started, but are now added.
    shelveunknown = shelvectx.extra().get('shelve_unknown')
    if not shelveunknown:
        return
    shelveunknown = frozenset(shelveunknown.split('\0'))
    addedafter = frozenset(repo.status().added)
    toforget = (addedafter & shelveunknown) - addedbefore
    repo[None].forget(toforget)

def _finishunshelve(repo, oldtiprev, tr, activebookmark):
    _restoreactivebookmark(repo, activebookmark)
    # The transaction aborting will strip all the commits for us,
    # but it doesn't update the inmemory structures, so addchangegroup
    # hooks still fire and try to operate on the missing commits.
    # Clean up manually to prevent this.
    repo.unfiltered().changelog.strip(oldtiprev, tr)
    _aborttransaction(repo)

def _checkunshelveuntrackedproblems(ui, repo, shelvectx):
    """Check potential problems which may result from the working
    copy having untracked changes."""
    wcdeleted = set(repo.status().deleted)
    shelvetouched = set(shelvectx.files())
    intersection = wcdeleted.intersection(shelvetouched)
    if intersection:
        m = _("shelved change touches missing files")
        hint = _("run hg status to see which files are missing")
        raise error.Abort(m, hint=hint)

@command('unshelve',
         [('a', 'abort', None,
           _('abort an incomplete unshelve operation')),
          ('c', 'continue', None,
           _('continue an incomplete unshelve operation')),
          ('k', 'keep', None,
           _('keep shelve after unshelving')),
          ('n', 'name', '',
           _('restore shelved change with given name'), _('NAME')),
          ('t', 'tool', '', _('specify merge tool')),
          ('', 'date', '',
           _('set date for temporary commits (DEPRECATED)'), _('DATE'))],
         _('hg unshelve [[-n] SHELVED]'))
def unshelve(ui, repo, *shelved, **opts):
    """restore a shelved change to the working directory

    This command accepts an optional name of a shelved change to
    restore. If none is given, the most recent shelved change is used.

    If a shelved change is applied successfully, the bundle that
    contains the shelved changes is moved to a backup location
    (.hg/shelve-backup).

    Since you can restore a shelved change on top of an arbitrary
    commit, it is possible that unshelving will result in a conflict
    between your changes and the commits you are unshelving onto. If
    this occurs, you must resolve the conflict, then use
    ``--continue`` to complete the unshelve operation. (The bundle
    will not be moved until you successfully complete the unshelve.)

    (Alternatively, you can use ``--abort`` to abandon an unshelve
    that causes a conflict. This reverts the unshelved changes, and
    leaves the bundle in place.)

    If a bare shelved change (when no files are specified, without the
    interactive, include and exclude options) was done on a newly
    created branch, unshelving restores that branch information to the
    working directory.

    After a successful unshelve, the shelved changes are stored in a
    backup directory. Only the N most recent backups are kept. N
    defaults to 10 but can be overridden using the ``shelve.maxbackups``
    configuration option.

    .. container:: verbose

       A timestamp in seconds is used to decide the order of backups. If
       backups share the same timestamp, so that their exact order cannot
       be decided, more than ``maxbackups`` backups are kept, for safety.
    """
    with repo.wlock():
        return _dounshelve(ui, repo, *shelved, **opts)

def _dounshelve(ui, repo, *shelved, **opts):
    abortf = opts.get('abort')
    continuef = opts.get('continue')
    if not abortf and not continuef:
        cmdutil.checkunfinished(repo)
    shelved = list(shelved)
    if opts.get("name"):
        shelved.append(opts["name"])

    if abortf or continuef:
        if abortf and continuef:
            raise error.Abort(_('cannot use both abort and continue'))
        if shelved:
            raise error.Abort(_('cannot combine abort/continue with '
                                'naming a shelved change'))
        if abortf and opts.get('tool', False):
            ui.warn(_('tool option will be ignored\n'))

        try:
            state = shelvedstate.load(repo)
            if opts.get('keep') is None:
                opts['keep'] = state.keep
        except IOError as err:
            if err.errno != errno.ENOENT:
                raise
            cmdutil.wrongtooltocontinue(repo, _('unshelve'))
        except error.CorruptedState as err:
            ui.debug(str(err) + '\n')
            if continuef:
                msg = _('corrupted shelved state file')
                hint = _('please run hg unshelve --abort to abort unshelve '
                         'operation')
                raise error.Abort(msg, hint=hint)
            elif abortf:
                msg = _('could not read shelved state file, your working copy '
                        'may be in an unexpected state\nplease update to some '
                        'commit\n')
                ui.warn(msg)
                shelvedstate.clear(repo)
            return

        if abortf:
            return unshelveabort(ui, repo, state, opts)
        elif continuef:
            return unshelvecontinue(ui, repo, state, opts)
    elif len(shelved) > 1:
        raise error.Abort(_('can only unshelve one change at a time'))
    elif not shelved:
        shelved = listshelves(repo)
        if not shelved:
            raise error.Abort(_('no shelved changes to apply!'))
        basename = util.split(shelved[0][1])[1]
        ui.status(_("unshelving change '%s'\n") % basename)
    else:
        basename = shelved[0]

    if not shelvedfile(repo, basename, patchextension).exists():
        raise error.Abort(_("shelved change '%s' not found") % basename)

    lock = tr = None
    try:
        lock = repo.lock()
        tr = repo.transaction('unshelve', report=lambda x: None)
        oldtiprev = len(repo)

        pctx = repo['.']
        tmpwctx = pctx
        # The goal is to have a commit structure like so:
        # ...-> pctx -> tmpwctx -> shelvectx
        # where tmpwctx is an optional commit with the user's pending changes
        # and shelvectx is the unshelved changes. Then we merge it all down
        # to the original pctx.

        activebookmark = _backupactivebookmark(repo)
        overrides = {('ui', 'forcemerge'): opts.get('tool', '')}
        with ui.configoverride(overrides, 'unshelve'):
            tmpwctx, addedbefore = _commitworkingcopychanges(ui, repo, opts,
                                                             tmpwctx)
            repo, shelvectx = _unshelverestorecommit(ui, repo, basename)
            _checkunshelveuntrackedproblems(ui, repo, shelvectx)
            branchtorestore = ''
            if shelvectx.branch() != shelvectx.p1().branch():
                branchtorestore = shelvectx.branch()

            shelvectx = _rebaserestoredcommit(ui, repo, opts, tr, oldtiprev,
                                              basename, pctx, tmpwctx,
                                              shelvectx, branchtorestore,
                                              activebookmark)
            mergefiles(ui, repo, pctx, shelvectx)
            restorebranch(ui, repo, branchtorestore)
            _forgetunknownfiles(repo, shelvectx, addedbefore)

            shelvedstate.clear(repo)
            _finishunshelve(repo, oldtiprev, tr, activebookmark)
            unshelvecleanup(ui, repo, basename, opts)
    finally:
        if tr:
            tr.release()
        lockmod.release(lock)

@command('shelve',
         [('A', 'addremove', None,
           _('mark new/missing files as added/removed before shelving')),
          ('u', 'unknown', None,
           _('store unknown files in the shelve')),
          ('', 'cleanup', None,
           _('delete all shelved changes')),
          ('', 'date', '',
           _('shelve with the specified commit date'), _('DATE')),
          ('d', 'delete', None,
           _('delete the named shelved change(s)')),
          ('e', 'edit', False,
           _('invoke editor on commit messages')),
          ('l', 'list', None,
           _('list current shelves')),
          ('m', 'message', '',
           _('use text as shelve message'), _('TEXT')),
          ('n', 'name', '',
           _('use the given name for the shelved commit'), _('NAME')),
          ('p', 'patch', None,
           _('show patch')),
          ('i', 'interactive', None,
           _('interactive mode, only works while creating a shelve')),
          ('', 'stat', None,
           _('output diffstat-style summary of changes'))] + cmdutil.walkopts,
         _('hg shelve [OPTION]... [FILE]...'))
def shelvecmd(ui, repo, *pats, **opts):
    '''save and set aside changes from the working directory

    Shelving takes files that "hg status" reports as not clean, saves
    the modifications to a bundle (a shelved change), and reverts the
    files so that their state in the working directory becomes clean.

    To restore these changes to the working directory, use "hg
    unshelve"; this will work even if you switch to a different
    commit.

    When no files are specified, "hg shelve" saves all not-clean
    files. If specific files or directories are named, only changes to
    those files are shelved.

    In a bare shelve (when no files are specified, without the
    interactive, include and exclude options), shelving remembers
    whether the working directory was on a newly created branch, in
    other words whether the working directory was on a different branch
    than its first parent. In this situation, unshelving restores that
    branch information to the working directory.

    Each shelved change has a name that makes it easier to find later.
    The name of a shelved change defaults to being based on the active
    bookmark, or if there is no active bookmark, the current named
    branch. To specify a different name, use ``--name``.

    To see a list of existing shelved changes, use the ``--list``
    option. For each shelved change, this will print its name, age,
    and description; use ``--patch`` or ``--stat`` for more details.

    To delete specific shelved changes, use ``--delete``. To delete
    all shelved changes, use ``--cleanup``.
    '''
    allowables = [
        ('addremove', {'create'}), # 'create' is pseudo action
        ('unknown', {'create'}),
        ('cleanup', {'cleanup'}),
        # ('date', {'create'}), # ignored for passing '--date "0 0"' in tests
        ('delete', {'delete'}),
        ('edit', {'create'}),
        ('list', {'list'}),
        ('message', {'create'}),
        ('name', {'create'}),
        ('patch', {'patch', 'list'}),
        ('stat', {'stat', 'list'}),
    ]
    def checkopt(opt):
        if opts.get(opt):
            for i, allowable in allowables:
                if opts[i] and opt not in allowable:
                    raise error.Abort(_("options '--%s' and '--%s' may not be "
                                        "used together") % (opt, i))
            return True
    if checkopt('cleanup'):
        if pats:
            raise error.Abort(_("cannot specify names when using '--cleanup'"))
        return cleanupcmd(ui, repo)
    elif checkopt('delete'):
        return deletecmd(ui, repo, pats)
    elif checkopt('list'):
        return listcmd(ui, repo, pats, opts)
    elif checkopt('patch'):
        return patchcmds(ui, repo, pats, opts, subcommand='patch')
    elif checkopt('stat'):
        return patchcmds(ui, repo, pats, opts, subcommand='stat')
    else:
        return createcmd(ui, repo, pats, opts)

def extsetup(ui):
    cmdutil.unfinishedstates.append(
        [shelvedstate._filename, False, False,
         _('unshelve already in progress'),
         _("use 'hg unshelve --continue' or 'hg unshelve --abort'")])
    cmdutil.afterresolvedstates.append(
        [shelvedstate._filename, _('hg unshelve --continue')])
@@ -1,1839 +1,1843 b''
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows:

- magic string
- stream level parameters
- payload parts (any number)
- end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: int32

  The total number of Bytes used by the parameters

:params value: arbitrary number of Bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  A name MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage any
    crazy usage.
  - Textual data allow easy human inspection of a bundle2 header in case of
    troubles.

  Any application level options MUST go into a bundle2 part instead.

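The space separated, urlquoted `name=value` layout described above can be sketched in Python. This is a simplified illustration (Python 3, hypothetical helper names), not Mercurial's actual implementation:

```python
import struct
import urllib.parse

def encodestreamparams(params):
    """Encode stream-level parameters: an int32 size prefix followed by a
    space separated list of urlquoted `name` or `name=value` entries."""
    entries = []
    for name, value in params:
        entry = urllib.parse.quote(name, safe='')
        if value is not None:
            entry += '=' + urllib.parse.quote(value, safe='')
        entries.append(entry)
    blob = ' '.join(entries).encode('ascii')
    return struct.pack('>I', len(blob)) + blob

def decodestreamparams(data):
    """Decode the size-prefixed blob back into (name, value) pairs."""
    (size,) = struct.unpack_from('>I', data)
    blob = data[4:4 + size].decode('ascii')
    params = []
    for entry in (blob.split(' ') if blob else []):
        name, sep, value = entry.partition('=')
        params.append((urllib.parse.unquote(name),
                       urllib.parse.unquote(value) if sep else None))
    return params

# Round-trip: one capitalized (mandatory) parameter and one bare advisory one.
roundtrip = decodestreamparams(encodestreamparams([('Compression', 'GZ'),
                                                   ('flag', None)]))
```

Note that the mandatory/advisory distinction lives entirely in the capitalization of the first letter of the name; the encoding itself does not treat the two kinds differently.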
Payload part
------------------------

The binary format is as follows:

:header size: int32

  The total number of Bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

  The header defines how to interpret the part. It contains two pieces of
  data: the part type, and the part parameters.

  The part type is used to route the part to an application level handler,
  which can interpret its payload.

  Part parameters are passed to the application level handler. They are
  meant to convey information that will help the application level object
  interpret the part payload.

  The binary format of the header is as follows:

  :typesize: (one byte)

  :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

  :partid: A 32-bit integer (unique in the bundle) that can be used to refer
           to this part.

  :parameters:

    A part's parameters may have arbitrary content; the binary structure is::

        <mandatory-count><advisory-count><param-sizes><param-data>

    :mandatory-count: 1 byte, number of mandatory parameters

    :advisory-count: 1 byte, number of advisory parameters

    :param-sizes:

        N couples of bytes, where N is the total number of parameters. Each
        couple contains (<size-of-key>, <size-of-value>) for one parameter.

    :param-data:

        A blob of bytes from which each parameter key and value can be
        retrieved using the list of size couples stored in the previous
        field.

    Mandatory parameters come first, then the advisory ones.

    Each parameter's key MUST be unique within the part.
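The `<mandatory-count><advisory-count><param-sizes><param-data>` layout can be sketched as follows. This is an illustrative sketch with hypothetical helper names, assuming one-byte counts and one-byte key/value sizes as described above:

```python
import struct

def packpartparams(mandatory, advisory):
    """Pack part parameters as
    <mandatory-count><advisory-count><param-sizes><param-data>,
    mandatory parameters first, then advisory ones."""
    allparams = list(mandatory) + list(advisory)
    out = [struct.pack('>BB', len(mandatory), len(advisory))]
    # One (key size, value size) couple per parameter, one byte each.
    for key, value in allparams:
        out.append(struct.pack('>BB', len(key), len(value)))
    # Then the concatenated keys and values themselves.
    for key, value in allparams:
        out.append(key)
        out.append(value)
    return b''.join(out)

def unpackpartparams(data):
    """Reverse of packpartparams, returning (mandatory, advisory) lists."""
    mancount, advcount = struct.unpack_from('>BB', data)
    offset = 2
    sizes = []
    for _i in range(mancount + advcount):
        sizes.append(struct.unpack_from('>BB', data, offset))
        offset += 2
    params = []
    for keysize, valsize in sizes:
        key = data[offset:offset + keysize]
        offset += keysize
        value = data[offset:offset + valsize]
        offset += valsize
        params.append((key, value))
    return params[:mancount], params[mancount:]

# Round-trip one mandatory and one advisory parameter.
packed = packpartparams([(b'version', b'02')], [(b'nbchanges', b'3')])
mandatory, advisory = unpackpartparams(packed)
```

Because the size couples come before the data blob, a reader knows the exact length of every key and value before touching `param-data`, which is what makes the flat concatenation unambiguous.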

:payload:

    The payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as many as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.
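The chunk framing above can be read with a small generator. This is a sketch under the stated rules (int32 size prefix, zero size terminates, negative sizes reserved), not Mercurial's actual reader:

```python
import io
import struct

def iterchunks(read):
    """Yield payload chunks from a read(n) callable.  Each chunk is framed
    as an int32 size followed by that many bytes; a zero size concludes
    the payload (negative sizes are reserved for special-case handling)."""
    while True:
        (chunksize,) = struct.unpack('>i', read(4))
        if chunksize == 0:
            return
        if chunksize < 0:
            raise ValueError('special-case chunk sizes are not handled here')
        yield read(chunksize)

# Example payload: one 5-byte chunk followed by the zero-size terminator.
payload = struct.pack('>i', 5) + b'hello' + struct.pack('>i', 0)
chunks = list(iterchunks(io.BytesIO(payload).read))
```

The signed `'>i'` format matters here: it is what allows the negative sizes the specification reserves for future special-case processing.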

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part type
contains any uppercase char it is considered mandatory. When no handler is
known for a mandatory part, the process is aborted and an exception is raised.
If the part is advisory and no handler is known, the part is ignored. When the
process is aborted, the full bundle is still read from the stream to keep the
channel usable. But none of the parts read after an abort are processed. In
the future, dropping the stream may become an option for channels we do not
care to preserve.
146 """
146 """

from __future__ import absolute_import

import errno
import re
import string
import struct
import sys

from .i18n import _
from . import (
    changegroup,
    error,
    obsolete,
    phases,
    pushkey,
    pycompat,
    tags,
    url,
    util,
)

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

_fphasesentry = '>i20s'

preferedchunksize = 4096

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-output: %s\n' % message)

def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-input: %s\n' % message)

def validateparttype(parttype):
    """raise ValueError if a parttype contains an invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>'+('BB'*nbparams)

parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator
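The case rules described in the docstring above (handler matching is case insensitive, and an uppercase character anywhere in the advertised type marks the part as mandatory) can be sketched with two hypothetical helpers; these names do not exist in the module and are for illustration only:

```python
def ismandatory(parttype):
    # A part is mandatory when its advertised type contains any
    # uppercase character.
    return parttype != parttype.lower()

def findhandler(handlers, parttype):
    # Matching is case insensitive: handlers are registered under the
    # lower-cased type (see the parthandler decorator above).
    return handlers.get(parttype.lower())
```

A missing handler for a mandatory type is what triggers the abort-and-drain behaviour described in the "Bundle processing" section.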

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`, where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iteration happens in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__

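The record-keeping semantics documented above (per-category tuple access plus chronological iteration) can be illustrated with a stripped-down stand-in; `minirecords` is a made-up name and omits the reply-tracking machinery:

```python
class minirecords(object):
    # Minimal stand-in mirroring unbundlerecords' documented behaviour:
    # per-category lookup plus chronological iteration over all entries.
    def __init__(self):
        self._categories = {}
        self._sequences = []

    def add(self, category, entry):
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))

    def __getitem__(self, cat):
        # Unknown categories yield an empty tuple rather than raising.
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)
```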
class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None
        self.captureoutput = captureoutput

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def applybundle1(repo, cg, tr, source, url, **kwargs):
    ret, addednodes = cg.apply(repo, tr, source, url, **kwargs)
    return ret

def applybundle(repo, unbundler, tr, source=None, url=None):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    tr.hookargs['bundle2'] = '1'
    if source is not None and 'source' not in tr.hookargs:
        tr.hookargs['source'] = source
    if url is not None and 'url' not in tr.hookargs:
        tr.hookargs['url'] = url
    return processbundle(repo, unbundler, lambda: tr)

def processbundle(repo, unbundler, transactiongetter=None, op=None):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = ['bundle2-input-bundle:']
        if unbundler.params:
            msg.append(' %i params' % len(unbundler.params))
        if op.gettransaction is None or op.gettransaction is _notransaction:
            msg.append(' no-transaction')
        else:
            msg.append(' with-transaction')
        msg.append('\n')
        repo.ui.debug(''.join(msg))
    iterparts = enumerate(unbundler.iterparts())
    part = None
    nbpart = 0
    try:
        for nbpart, part in iterparts:
            _processpart(op, part)
    except Exception as exc:
        # Any exceptions seeking to the end of the bundle at this point are
        # almost certainly related to the underlying stream being bad.
        # And, chances are that the exception we're handling is related to
        # getting in that bad state. So, we swallow the seeking error and
        # re-raise the original error.
        seekerror = False
        try:
            for nbpart, part in iterparts:
                # consume the bundle content
                part.seek(0, 2)
        except Exception:
            seekerror = True

        # Small hack to let caller code distinguish exceptions from bundle2
        # processing from processing the old format. This is mostly
        # needed to handle different return codes to unbundle according to the
        # type of bundle. We should probably clean up or drop this return code
        # craziness in a future version.
        exc.duringunbundle2 = True
        salvaged = []
        replycaps = None
        if op.reply is not None:
            salvaged = op.reply.salvageoutput()
            replycaps = op.reply.capabilities
        exc._replycaps = replycaps
        exc._bundle2salvagedoutput = salvaged

        # Re-raising from a variable loses the original stack. So only use
        # that form if we need to.
        if seekerror:
            raise exc
        else:
            raise
    finally:
        repo.ui.debug('bundle2-input-bundle: %i parts total\n' % nbpart)

    return op

def _processchangegroup(op, cg, tr, source, url, **kwargs):
    ret, addednodes = cg.apply(op.repo, tr, source, url, **kwargs)
    op.records.add('changegroup', {
        'return': ret,
        'addednodes': addednodes,
    })
    return ret

def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    status = 'unknown' # used by debug output
    hardabort = False
    try:
        try:
            handler = parthandlermapping.get(part.type)
            if handler is None:
                status = 'unsupported-type'
                raise error.BundleUnknownFeatureError(parttype=part.type)
            indebug(op.ui, 'found a handler for part %r' % part.type)
            unknownparams = part.mandatorykeys - handler.params
            if unknownparams:
                unknownparams = list(unknownparams)
                unknownparams.sort()
                status = 'unsupported-params (%s)' % unknownparams
                raise error.BundleUnknownFeatureError(parttype=part.type,
                                                     params=unknownparams)
            status = 'supported'
        except error.BundleUnknownFeatureError as exc:
            if part.mandatory: # mandatory parts
                raise
            indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
            return # skip to part processing
        finally:
            if op.ui.debugflag:
                msg = ['bundle2-input-part: "%s"' % part.type]
                if not part.mandatory:
                    msg.append(' (advisory)')
                nbmp = len(part.mandatorykeys)
                nbap = len(part.params) - nbmp
                if nbmp or nbap:
                    msg.append(' (params:')
                    if nbmp:
                        msg.append(' %i mandatory' % nbmp)
                    if nbap:
                        msg.append(' %i advisory' % nbap)
                    msg.append(')')
                msg.append(' %s\n' % status)
                op.ui.debug(''.join(msg))

        # handler is called outside the above try block so that we don't
        # risk catching KeyErrors from anything other than the
        # parthandlermapping lookup (any KeyError raised by handler()
        # itself represents a defect of a different variety).
        output = None
        if op.captureoutput and op.reply is not None:
            op.ui.pushbuffer(error=True, subproc=True)
            output = ''
        try:
            handler(op, part)
        finally:
            if output is not None:
                output = op.ui.popbuffer()
            if output:
                outpart = op.reply.newpart('output', data=output,
                                           mandatory=False)
                outpart.addparam('in-reply-to', str(part.id), mandatory=False)
    # If exiting or interrupted, do not attempt to seek the stream in the
    # finally block below. This makes abort faster.
    except (SystemExit, KeyboardInterrupt):
        hardabort = True
        raise
    finally:
        # consume the part content to not corrupt the stream.
        if not hardabort:
            part.seek(0, 2)


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line).
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
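The caps encoding used by `decodecaps`/`encodecaps` round-trips cleanly. Here is an equivalent standalone sketch in Python 3, using `urllib.parse` in place of Mercurial's `urlreq` compatibility shim (the `_sketch` names are made up, and the empty-value case decodes to a list rather than the tuple the real `decodecaps` uses):

```python
from urllib.parse import quote, unquote

def encodecaps_sketch(caps):
    # One capability per line, sorted; values joined with ',' after '='.
    chunks = []
    for ca in sorted(caps):
        vals = [quote(v) for v in caps[ca]]
        ca = quote(ca)
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)

def decodecaps_sketch(blob):
    # Inverse of the above: split lines, then split values on ','.
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, []
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        caps[unquote(key)] = [unquote(v) for v in vals]
    return caps
```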

bundletypes = {
    "": ("", 'UN'), # only when using unbundle on ssh and old http servers
                    # since the unification ssh accepts a header but there
                    # is no capability signaling it.
    "HG20": (), # special-cased below
    "HG10UN": ("HG10UN", 'UN'),
    "HG10BZ": ("HG10", 'BZ'),
    "HG10GZ": ("HG10GZ", 'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add a stream level parameter and `newpart`
    to populate it. Then call `getchunks` to retrieve all the binary chunks
    of data that compose the bundle2 container."""

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype('UN')
        self._compopts = None

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, 'UN'):
            return
        assert not any(n.lower() == 'compression' for n, v in self._params)
        self.addparam('Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding one if you
        need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(' (%i params)' % len(self._params))
            msg.append(' %i parts total\n' % len(self._parts))
            self.ui.debug(''.join(msg))
        outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, 'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(self._getcorechunk(),
                                                     self._compopts):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)
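The stream-parameter encoding produced by `_paramchunk` above (space-separated, url-quoted `name` or `name=value` couples) can be reproduced standalone; this sketch uses `urllib.parse.quote` in place of the `urlreq` shim, and `paramchunk_sketch` is a made-up name:

```python
from urllib.parse import quote

def paramchunk_sketch(params):
    # params is a list of (name, value) couples; value None means a
    # bare flag parameter with no '=value' suffix.
    blocks = []
    for par, value in params:
        par = quote(par)
        if value is not None:
            par = '%s=%s' % (par, quote(value))
        blocks.append(par)
    return ' '.join(blocks)
```

The uppercase 'C' in a parameter name like `Compression` is meaningful: as with part types, case signals whether a stream parameter is mandatory.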
616
620
    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, 'start of parts')
        for part in self._parts:
            outdebug(self.ui, 'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, 'end of bundle')
        yield _pack(_fpartheadersize, 0)


    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith('output'):
                salvaged.append(part.copy())
        return salvaged


class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol
        only. It directly manipulates the low level stream, including
        bundle2 level instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol
        only. It directly manipulates the low level stream, including
        bundle2 level instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)

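`_unpack` and `_readexact` implement the usual length-prefixed framing: read a fixed-width struct, then exactly that many payload bytes. A self-contained sketch of one such read, with `'>i'` standing in for size formats like `_fstreamparamsize` (an assumption; the real formats are defined elsewhere in the module):

```python
import io
import struct

def readexact(fp, size):
    """Read exactly `size` bytes or raise, like changegroup.readexactly."""
    data = fp.read(size)
    if len(data) != size:
        raise IOError('stream ended unexpectedly')
    return data

def readframe(fp, sizefmt='>i'):
    """Read one size-prefixed frame: an unpacked size, then the payload."""
    size = struct.unpack(sizefmt, readexact(fp, struct.calcsize(sizefmt)))[0]
    return readexact(fp, size)
```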
def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != 'HG':
        raise error.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, 'start processing of %s stream' % magicstring)
    return unbundler

class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    _magicstring = 'HG20'

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        self._compengine = util.compengines.forbundletype('UN')
        self._compressed = None
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, 'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            params = self._processallparams(params)
        return params

    def _processallparams(self, paramsblock):
        """decode and process all parameters in a stream level block"""
        params = util.sortdict()
        for p in paramsblock.split(' '):
            p = p.split('=', 1)
            p = [urlreq.unquote(i) for i in p]
            if len(p) < 2:
                p.append(None)
            self._processparam(*p)
            params[p[0]] = p[1]
        return params

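Decoding mirrors the encoding exactly. A hedged sketch of the parsing step in `_processallparams`, again with stdlib `urllib.parse` instead of `urlreq` and a plain dict instead of `util.sortdict`:

```python
from urllib.parse import unquote

def decodeparams(paramsblock):
    """Split a parameter block on spaces and unquote each side of '=';
    entries without a value decode to None."""
    params = {}
    for p in paramsblock.split(' '):
        parts = [unquote(i) for i in p.split('=', 1)]
        if len(parts) < 2:
            parts.append(None)
        params[parts[0]] = parts[1]
    return params
```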

    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will
        be ignored when unknown. Those starting with an upper case letter
        are mandatory, and this function will raise a KeyError when they
        are unknown.

        Note: no options are currently supported. Any input will either be
        ignored or fail.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        try:
            handler = b2streamparamsmap[name.lower()]
        except KeyError:
            if name[0].islower():
                indebug(self.ui, "ignoring unknown parameter %r" % name)
            else:
                raise error.BundleUnknownFeatureError(params=(name,))
        else:
            handler(self, name, value)

    def _forwardchunks(self):
        """utility to transfer a bundle2 as binary

        This is made necessary by the fact that the 'getbundle' command
        over 'ssh' has no way to know when the reply ends, relying on the
        bundle to be interpreted to know its end. This is terrible and we
        are sorry, but we needed to move forward to get general delta
        enabled.
        """
        yield self._magicstring
        assert 'params' not in vars(self)
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        yield _pack(_fstreamparamsize, paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            self._processallparams(params)
            yield params
        assert self._compengine.bundletype == 'UN'
        # From there, payload might need to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        emptycount = 0
        while emptycount < 2:
            # so we can brainlessly loop
            assert _fpartheadersize == _fpayloadsize
            size = self._unpack(_fpartheadersize)[0]
            yield _pack(_fpartheadersize, size)
            if size:
                emptycount = 0
            else:
                emptycount += 1
                continue
            if size == flaginterrupt:
                continue
            elif size < 0:
                raise error.BundleValueError('negative chunk size: %i' % size)
            yield self._readexact(size)

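The `emptycount` loop above is the entire termination rule for a bundle2 stream: a zero size ends the current payload, and two consecutive zero sizes end the bundle. A simplified sketch of a reader for that framing (interrupt frames and decompression omitted for brevity):

```python
import io
import struct

def collectframes(fp):
    """Drain size-prefixed frames until two consecutive zero sizes,
    mirroring the emptycount termination in _forwardchunks."""
    frames = []
    emptycount = 0
    while emptycount < 2:
        size = struct.unpack('>i', fp.read(4))[0]
        if size:
            emptycount = 0
            frames.append(fp.read(size))
        else:
            emptycount += 1
    return frames
```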

    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        # From there, payload needs to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        indebug(self.ui, 'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            part.seek(0, 2)
            headerblock = self._readpartheader()
        indebug(self.ui, 'end of bundle2 stream')

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        self.params # load params
        return self._compressed

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

formatmap = {'20': unbundle20}

b2streamparamsmap = {}

def b2streamparamhandler(name):
    """register a handler for a stream level parameter"""
    def decorator(func):
        assert name not in b2streamparamsmap
        b2streamparamsmap[name] = func
        return func
    return decorator

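The handler registration above is a plain decorator-populated dict. A standalone sketch of the pattern, with a hypothetical `state` dict standing in for the unbundler object the real handlers receive:

```python
handlers = {}

def streamparamhandler(name):
    """Register func under a stream-level parameter name, like
    b2streamparamhandler does against b2streamparamsmap."""
    def decorator(func):
        assert name not in handlers
        handlers[name] = func
        return func
    return decorator

@streamparamhandler('compression')
def processcompression(state, param, value):
    # the real handler installs a decompression engine on the unbundler;
    # here we just record the chosen value
    state[param] = value

state = {}
handlers['compression'](state, 'compression', 'BZ')
```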
@b2streamparamhandler('compression')
def processcompression(unbundler, param, value):
    """read compression parameter and install payload decompression"""
    if value not in util.compengines.supportedbundletypes:
        raise error.BundleUnknownFeatureError(params=(param,),
                                              values=(value,))
    unbundler._compengine = util.compengines.forbundletype(value)
    if value is not None:
        unbundler._compressed = True

class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes
    or a generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. The remote
    side should be able to safely ignore the advisory ones.

    Neither the data nor the parameters can be modified after generation
    has begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data='', mandatory=True):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise error.ProgrammingError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def __repr__(self):
        cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
        return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
                % (cls, id(self), self.id, self.type, self.mandatory))

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(self.type, self._mandatoryparams,
                              self._advisoryparams, self._data,
                              self.mandatory)

    # methods used to define the part content
    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        """add a parameter to the part

        If 'mandatory' is set to True, the remote handler must claim
        support for this parameter or the unbundling will be aborted.

        The 'name' and 'value' cannot exceed 255 bytes each.
        """
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

    # methods used to generate the bundle2 stream
    def getchunks(self, ui):
        if self._generated is not None:
            raise error.ProgrammingError('part can only be consumed once')
        self._generated = False

        if ui.debugflag:
            msg = ['bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            if not self.data:
                msg.append(' empty payload')
            elif util.safehasattr(self.data, 'next'):
                msg.append(' streamed payload')
            else:
                msg.append(' %i bytes payload' % len(self.data))
            msg.append('\n')
            ui.debug(''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, 'part %s: "%s"' % (self.id, parttype))
        ## parttype
        header = [_pack(_fparttypesize, len(parttype)),
                  parttype, _pack(_fpartid, self.id),
                  ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        outdebug(ui, 'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, 'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug('bundle2-generatorexit\n')
            raise
        except BaseException as exc:
            # backup exception data for later
            ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
                     % exc)
            tb = sys.exc_info()[2]
            msg = 'unexpected error: %s' % exc
            interpart = bundlepart('error:abort', [('message', msg)],
                                   mandatory=False)
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, 'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            pycompat.raisewithtb(exc, tb)
        # end of payload
        outdebug(ui, 'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True

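Note how `getchunks` encodes the mandatory bit in the part type's letter case, matching the convention `_processparam` applies to stream parameters on the reading side. A one-line sketch of that mapping:

```python
def wireparttype(parttype, mandatory):
    """Mandatory parts go on the wire upper-cased, advisory ones
    lower-cased, as in bundlepart.getchunks."""
    return parttype.upper() if mandatory else parttype.lower()
```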
    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data

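`_payloadchunks` re-chunks whatever data form the part carries into fixed-size pieces. A minimal sketch, with `util.chunkbuffer` replaced by a plain join (an assumption for brevity; the real buffer avoids concatenating everything in memory):

```python
def payloadchunks(data, chunksize=4096):
    """Yield fixed-size chunks from bytes or an iterable of byte chunks,
    approximating bundlepart._payloadchunks; empty data yields nothing."""
    if not isinstance(data, bytes):
        data = b''.join(data)
    for i in range(0, len(data), chunksize):
        yield data[i:i + chunksize]
```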
1051
1055
1052 flaginterrupt = -1
1056 flaginterrupt = -1
1053
1057
1054 class interrupthandler(unpackermixin):
1058 class interrupthandler(unpackermixin):
1055 """read one part and process it with restricted capability
1059 """read one part and process it with restricted capability
1056
1060
1057 This allows to transmit exception raised on the producer size during part
1061 This allows to transmit exception raised on the producer size during part
1058 iteration while the consumer is reading a part.
1062 iteration while the consumer is reading a part.
1059
1063
1060 Part processed in this manner only have access to a ui object,"""
1064 Part processed in this manner only have access to a ui object,"""
1061
1065
1062 def __init__(self, ui, fp):
1066 def __init__(self, ui, fp):
1063 super(interrupthandler, self).__init__(fp)
1067 super(interrupthandler, self).__init__(fp)
1064 self.ui = ui
1068 self.ui = ui
1065
1069
1066 def _readpartheader(self):
1070 def _readpartheader(self):
1067 """reads a part header size and return the bytes blob
1071 """reads a part header size and return the bytes blob
1068
1072
1069 returns None if empty"""
1073 returns None if empty"""
1070 headersize = self._unpack(_fpartheadersize)[0]
1074 headersize = self._unpack(_fpartheadersize)[0]
1071 if headersize < 0:
1075 if headersize < 0:
1072 raise error.BundleValueError('negative part header size: %i'
1076 raise error.BundleValueError('negative part header size: %i'
1073 % headersize)
1077 % headersize)
1074 indebug(self.ui, 'part header size: %i\n' % headersize)
1078 indebug(self.ui, 'part header size: %i\n' % headersize)
1075 if headersize:
1079 if headersize:
1076 return self._readexact(headersize)
1080 return self._readexact(headersize)
1077 return None
1081 return None
1078
1082
1079 def __call__(self):
1083 def __call__(self):
1080
1084
1081 self.ui.debug('bundle2-input-stream-interrupt:'
1085 self.ui.debug('bundle2-input-stream-interrupt:'
1082 ' opening out of band context\n')
1086 ' opening out of band context\n')
1083 indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
1087 indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
1084 headerblock = self._readpartheader()
1088 headerblock = self._readpartheader()
1085 if headerblock is None:
1089 if headerblock is None:
1086 indebug(self.ui, 'no part found during interruption.')
1090 indebug(self.ui, 'no part found during interruption.')
1087 return
1091 return
1088 part = unbundlepart(self.ui, headerblock, self._fp)
1092 part = unbundlepart(self.ui, headerblock, self._fp)
1089 op = interruptoperation(self.ui)
1093 op = interruptoperation(self.ui)
1090 _processpart(op, part)
1094 _processpart(op, part)
1091 self.ui.debug('bundle2-input-stream-interrupt:'
1095 self.ui.debug('bundle2-input-stream-interrupt:'
1092 ' closing out of band context\n')
1096 ' closing out of band context\n')
1093
1097
class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption.

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise error.ProgrammingError('no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable('no repo access from stream interruption')

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._payloadstream = None
        self._readheader()
        self._mandatory = None
        self._chunkindex = [] #(payload, file) position tuples for chunk starts
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to setup all logic-related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = util.sortdict(self.mandatoryparams)
        self.params.update(self.advisoryparams)
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, 'Must start with chunk 0'
            self._chunkindex.append((0, self._tellfp()))
        else:
            assert chunknum < len(self._chunkindex), \
                   'Unknown chunk %d' % chunknum
            self._seekfp(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]
        payloadsize = self._unpack(_fpayloadsize)[0]
        indebug(self.ui, 'payload chunk size: %i' % payloadsize)
        while payloadsize:
            if payloadsize == flaginterrupt:
                # interruption detection, the handler will now read a
                # single part and process it.
                interrupthandler(self.ui, self._fp)()
            elif payloadsize < 0:
                msg = 'negative payload chunk size: %i' % payloadsize
                raise error.BundleValueError(msg)
            else:
                result = self._readexact(payloadsize)
                chunknum += 1
                pos += payloadsize
                if chunknum == len(self._chunkindex):
                    self._chunkindex.append((pos, self._tellfp()))
                yield result
            payloadsize = self._unpack(_fpayloadsize)[0]
            indebug(self.ui, 'payload chunk size: %i' % payloadsize)

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError('Unknown chunk')

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, 'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, 'part id: "%s"' % self.id)
        # extract mandatory bit from type
        self.mandatory = (self.type != self.type.lower())
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of pairs again
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param values
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug('bundle2-input-part: total payload size %i\n'
                              % self._pos)
            self.consumed = True
        return data

    def tell(self):
        return self._pos

    def seek(self, offset, whence=0):
        if whence == 0:
            newpos = offset
        elif whence == 1:
            newpos = self._pos + offset
        elif whence == 2:
            if not self.consumed:
                self.read()
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError('Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            self.read()
        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError('Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_('Seek failed\n'))
            self._pos = newpos

    def _seekfp(self, offset, whence=0):
        """move the underlying file pointer

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2-level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def _tellfp(self):
        """return the file offset, or None if file is not seekable

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2-level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError as e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None

# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {'HG20': (),
                'error': ('abort', 'unsupportedcontent', 'pushraced',
                          'pushkey'),
                'listkeys': (),
                'pushkey': (),
                'digests': tuple(sorted(util.DIGESTS.keys())),
                'remote-changegroup': ('http', 'https'),
                'hgtagsfnodes': (),
               }

def getrepocaps(repo, allowpushback=False):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.
    """
    caps = capabilities.copy()
    caps['changegroup'] = tuple(sorted(
        changegroup.supportedincomingversions(repo)))
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple('V%i' % v for v in obsolete.formats)
        caps['obsmarkers'] = supportedformat
    if allowpushback:
        caps['pushback'] = ()
    cpmode = repo.ui.config('server', 'concurrent-push-mode', 'strict')
    if cpmode == 'check-related':
        caps['checkheads'] = ('related',)
    return caps

def bundle2caps(remote):
    """return the bundle capabilities of a peer as a dict"""
    raw = remote.capable('bundle2')
    if not raw and raw != '':
        return {}
    capsblob = urlreq.unquote(remote.capable('bundle2'))
    return decodecaps(capsblob)

def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict
    """
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

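# Illustration (not part of the original module): obsmarkersversion() above
# simply parses the 'V<n>' capability strings, e.g.
#   obsmarkersversion({'obsmarkers': ('V0', 'V1')}) -> [0, 1]
#   obsmarkersversion({}) -> []
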
def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
                   vfs=None, compression=None, compopts=None):
    if bundletype.startswith('HG10'):
        cg = changegroup.getchangegroup(repo, source, outgoing, version='01')
        return writebundle(ui, cg, filename, bundletype, vfs=vfs,
                           compression=compression, compopts=compopts)
    elif not bundletype.startswith('HG20'):
        raise error.ProgrammingError('unknown bundle type: %s' % bundletype)

    caps = {}
    if 'obsolescence' in opts:
        caps['obsmarkers'] = ('V1',)
    bundle = bundle20(ui, caps)
    bundle.setcompression(compression, compopts)
    _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
    chunkiter = bundle.getchunks()

    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)

def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
    # We should eventually reconcile this logic with the one behind
    # 'exchange.getbundle2partsgenerator'.
    #
    # The types of input from 'getbundle' and 'writenewbundle' are a bit
    # different right now. So we keep them separated for now for the sake of
    # simplicity.

    # we always want a changegroup in such a bundle
    cgversion = opts.get('cg.version')
    if cgversion is None:
        cgversion = changegroup.safeversion(repo)
    cg = changegroup.getchangegroup(repo, source, outgoing,
                                    version=cgversion)
    part = bundler.newpart('changegroup', data=cg.getchunks())
    part.addparam('version', cg.version)
    if 'clcount' in cg.extras:
        part.addparam('nbchanges', str(cg.extras['clcount']),
                      mandatory=False)

    addparttagsfnodescache(repo, bundler, outgoing)

    if opts.get('obsolescence', False):
        obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
        buildobsmarkerspart(bundler, obsmarkers)

    if opts.get('phases', False):
        headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
        phasedata = []
        for phase in phases.allphases:
            for head in headsbyphase[phase]:
                phasedata.append(_pack(_fphasesentry, phase, head))
        bundler.newpart('phase-heads', data=''.join(phasedata))

def addparttagsfnodescache(repo, bundler, outgoing):
    # we include the tags fnode cache for the bundle changesets
    # (as an optional part)
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.missingheads:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode is not None:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart('hgtagsfnodes', data=''.join(chunks))

def buildobsmarkerspart(bundler, markers):
    """add an obsmarker part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker format.
    """
    if not markers:
        return None

    remoteversions = obsmarkersversion(bundler.capabilities)
    version = obsolete.commonversion(remoteversions)
    if version is None:
        raise ValueError('bundler does not support common obsmarker format')
    stream = obsolete.encodemarkers(markers, True, version=version)
    return bundler.newpart('obsmarkers', data=stream)

def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
                compopts=None):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
    """

    if bundletype == "HG20":
        bundle = bundle20(ui)
        bundle.setcompression(compression, compopts)
        part = bundle.newpart('changegroup', data=cg.getchunks())
        part.addparam('version', cg.version)
        if 'clcount' in cg.extras:
            part.addparam('nbchanges', str(cg.extras['clcount']),
                          mandatory=False)
        chunkiter = bundle.getchunks()
    else:
        # compression argument is only for the bundle2 case
        assert compression is None
        if cg.version != '01':
            raise error.Abort(_('old bundle types only support v1 '
                                'changegroups'))
        header, comp = bundletypes[bundletype]
        if comp not in util.compengines.supportedbundletypes:
            raise error.Abort(_('unknown stream compression type: %s')
                              % comp)
        compengine = util.compengines.forbundletype(comp)
        def chunkiter():
            yield header
            for chunk in compengine.compressstream(cg.getchunks(), compopts):
                yield chunk
        chunkiter = chunkiter()

    # parse the changegroup data, otherwise we will block
    # in case of sshrepo because we don't know the end of the stream
    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)

def combinechangegroupresults(op):
    """logic to combine 0 or more addchangegroup results into one"""
    results = [r.get('return', 0)
               for r in op.records['changegroup']]
    changedheads = 0
    result = 1
    for ret in results:
        # If any changegroup result is 0, return 0
        if ret == 0:
            result = 0
            break
        if ret < -1:
            changedheads += ret + 1
        elif ret > 1:
            changedheads += ret - 1
    if changedheads > 0:
        result = 1 + changedheads
    elif changedheads < 0:
        result = -1 + changedheads
    return result

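# Illustration (not part of the original module): an addchangegroup-style
# return code of n > 1 means (n - 1) heads were added, and n < -1 means
# (-n - 1) heads were removed. Combining results [3, -2] therefore gives
# changedheads = (+2) + (-1) = 1 and a combined result of 2.
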
@parthandler('changegroup', ('version', 'nbchanges', 'treemanifest'))
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will be massively reworked
    before being inflicted on any end-user.
    """
    tr = op.gettransaction()
    unpackerversion = inpart.params.get('version', '01')
    # We should raise an appropriate exception here
    cg = changegroup.getunbundler(unpackerversion, inpart, None)
    # the source and url passed here are overwritten by the ones contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    nbchangesets = None
    if 'nbchanges' in inpart.params:
        nbchangesets = int(inpart.params.get('nbchanges'))
    if ('treemanifest' in inpart.params and
        'treemanifest' not in op.repo.requirements):
        if len(op.repo.changelog) != 0:
            raise error.Abort(_(
                "bundle contains tree manifests, but local repo is "
                "non-empty and does not use tree manifests"))
        op.repo.requirements.add('treemanifest')
        op.repo._applyopenerreqs()
        op.repo._writerequirements()
    ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2',
                              expectedtotal=nbchangesets)
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup', mandatory=False)
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

_remotechangegroupparams = tuple(['url', 'size', 'digests'] +
                                 ['digest:%s' % k for k in util.DIGESTS.keys()])
@parthandler('remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given an url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
      - url: the url to the bundle10.
      - size: the bundle10 file size. It is used to validate that what was
        retrieved by the client matches the server's knowledge about the
        bundle.
      - digests: a space separated list of the digest types provided as
        parameters.
      - digest:<digest-type>: the hexadecimal representation of the digest
        with that name. Like the size, it is used to validate that what was
        retrieved by the client matches what the server knows about the
        bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params['url']
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
    parsed_url = util.url(raw_url)
    if parsed_url.scheme not in capabilities['remote-changegroup']:
        raise error.Abort(_('remote-changegroup does not support %s urls') %
                          parsed_url.scheme)

    try:
        size = int(inpart.params['size'])
    except ValueError:
        raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
                          % 'size')
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')

    digests = {}
    for typ in inpart.params.get('digests', '').split():
        param = 'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise error.Abort(_('remote-changegroup: missing "%s" param') %
                              param)
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    tr = op.gettransaction()
    from . import exchange
    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise error.Abort(_('%s: not a bundle version 1.0') %
                          util.hidepassword(raw_url))
    ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2')
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup')
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except error.Abort as e:
        raise error.Abort(_('bundle at %s is corrupted:\n%s') %
                          (util.hidepassword(raw_url), str(e)))
    assert not inpart.read()

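The `digests` loop in `handleremotechangegroup()` above turns the space-separated `digests` parameter plus its `digest:<type>` companions into a mapping of digest type to expected hex value. A minimal standalone sketch of that parsing, using a plain dict in place of `inpart.params` (that substitution is an assumption for illustration only):

```python
# Standalone sketch of the 'digests' parameter parsing performed by
# handleremotechangegroup() above. 'params' is a plain dict standing in
# for inpart.params; this helper is hypothetical, not part of bundle2.py.
def parse_digests(params):
    digests = {}
    for typ in params.get('digests', '').split():
        param = 'digest:%s' % typ
        if param not in params:
            raise ValueError('remote-changegroup: missing "%s" param' % param)
        digests[typ] = params[param]
    return digests

params = {'digests': 'sha1 md5',
          'digest:sha1': 'aa' * 20,
          'digest:md5': 'bb' * 16}
```

A missing `digest:<type>` entry for a listed type is an error, mirroring the `error.Abort` raised by the real handler.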
@parthandler('reply:changegroup', ('return', 'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    if sorted(heads) != sorted(op.repo.heads()):
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')

@parthandler('check:updated-heads')
def handlecheckupdatedheads(op, inpart):
    """check for a race on the heads touched by a push

    This is similar to 'check:heads' but focuses on the heads actually
    updated during the push. If other activity happens on unrelated heads,
    it is ignored.

    This allows a server with high traffic to avoid push contention as long
    as only unrelated parts of the graph are involved."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()

    currentheads = set()
    for ls in op.repo.branchmap().itervalues():
        currentheads.update(ls)

    for h in heads:
        if h not in currentheads:
            raise error.PushRaced('repository changed while pushing - '
                                  'please try again')

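Both check handlers above consume their part payload as a flat run of 20-byte binary node hashes with no separator, stopping at a short read. A self-contained sketch of that framing, using `io.BytesIO` in place of the bundle part (an assumption for illustration):

```python
import io

# Sketch of the fixed-width framing read by handlecheckheads() and
# handlecheckupdatedheads(): N concatenated 20-byte node hashes, where a
# read shorter than 20 bytes marks the end of the payload.
def read_heads(stream):
    heads = []
    h = stream.read(20)
    while len(h) == 20:
        heads.append(h)
        h = stream.read(20)
    assert not h  # a non-empty leftover would mean a truncated payload
    return heads

payload = b'\x11' * 20 + b'\x22' * 20  # two fake node hashes
```

The trailing `assert not h` is the same integrity check the handlers perform: any bytes left over that do not form a whole hash indicate corruption.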
@parthandler('output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(_('remote: %s\n') % line)

@parthandler('replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

class AbortFromPart(error.Abort):
    """Sub-class of Abort that denotes an error from a bundle2 part."""

@parthandler('error:abort', ('message', 'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit an abort error over the wire"""
    raise AbortFromPart(inpart.params['message'],
                        hint=inpart.params.get('hint'))

@parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
                               'in-reply-to'))
def handleerrorpushkey(op, inpart):
    """Used to transmit the failure of a mandatory pushkey over the wire"""
    kwargs = {}
    for name in ('namespace', 'key', 'new', 'old', 'ret'):
        value = inpart.params.get(name)
        if value is not None:
            kwargs[name] = value
    raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)

@parthandler('error:unsupportedcontent', ('parttype', 'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit an unknown-content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get('parttype')
    if parttype is not None:
        kwargs['parttype'] = parttype
    params = inpart.params.get('params')
    if params is not None:
        kwargs['params'] = params.split('\0')

    raise error.BundleUnknownFeatureError(**kwargs)

@parthandler('error:pushraced', ('message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit a push race error over the wire"""
    raise error.ResponseError(_('push failed:'), inpart.params['message'])

@parthandler('listkeys', ('namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params['namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add('listkeys', (namespace, r))

@parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params['namespace'])
    key = dec(inpart.params['key'])
    old = dec(inpart.params['old'])
    new = dec(inpart.params['new'])
    # Grab the transaction to ensure that we have the lock before
    # performing the pushkey.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {'namespace': namespace,
              'key': key,
              'old': old,
              'new': new}
    op.records.add('pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart('reply:pushkey')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('return', '%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in ('namespace', 'key', 'new', 'old', 'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)

def _readphaseheads(inpart):
    headsbyphase = [[] for i in phases.allphases]
    entrysize = struct.calcsize(_fphasesentry)
    while True:
        entry = inpart.read(entrysize)
        if len(entry) < entrysize:
            if entry:
                raise error.Abort(_('bad phase-heads bundle part'))
            break
        phase, node = struct.unpack(_fphasesentry, entry)
        headsbyphase[phase].append(node)
    return headsbyphase

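`_readphaseheads()` above parses a sequence of fixed-size binary entries, each pairing a phase number with a 20-byte node hash. A minimal sketch of that layout follows; the `'>i20s'` format string (big-endian int plus 20 bytes) is an assumption standing in for `_fphasesentry`, which is defined elsewhere in this module, and `read_phase_entries` is a hypothetical helper, not part of bundle2.py:

```python
import io
import struct

# Assumed entry layout for _fphasesentry: big-endian int phase number
# followed by a 20-byte node hash (24 bytes per entry).
PHASE_ENTRY = '>i20s'

def read_phase_entries(stream, nphases=3):
    """Parse concatenated (phase, node) entries, as _readphaseheads() does."""
    headsbyphase = [[] for _ in range(nphases)]
    entrysize = struct.calcsize(PHASE_ENTRY)
    while True:
        entry = stream.read(entrysize)
        if len(entry) < entrysize:
            if entry:  # partial entry left over: the part is malformed
                raise ValueError('bad phase-heads bundle part')
            break
        phase, node = struct.unpack(PHASE_ENTRY, entry)
        headsbyphase[phase].append(node)
    return headsbyphase

# Two fake entries: one public (phase 0) head and one secret (phase 2) head.
data = (struct.pack(PHASE_ENTRY, 0, b'\xaa' * 20) +
        struct.pack(PHASE_ENTRY, 2, b'\xbb' * 20))
```

The short-read check mirrors the real parser: a truncated final entry raises, while a clean end of stream terminates the loop.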
@parthandler('phase-heads')
def handlephases(op, inpart):
    """apply phases from bundle part to repo"""
    headsbyphase = _readphaseheads(inpart)
    addednodes = []
    for entry in op.records['changegroup']:
        addednodes.extend(entry['addednodes'])
    phases.updatephases(op.repo.unfiltered(), op.gettransaction(), headsbyphase,
                        addednodes)

@parthandler('reply:pushkey', ('return', 'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params['return'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('pushkey', {'return': ret}, partid)

@parthandler('obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
        op.ui.write(('obsmarker-exchange: %i bytes received\n')
                    % len(markerdata))
    # The mergemarkers call will crash if marker creation is not enabled.
    # We want to avoid this if the part is advisory.
    if not inpart.mandatory and op.repo.obsstore.readonly:
        op.repo.ui.debug('ignoring obsolescence markers, feature not enabled')
        return
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    op.repo.invalidatevolatilesets()
    if new:
        op.repo.ui.status(_('%i new obsolescence markers\n') % new)
    op.records.add('obsmarkers', {'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart('reply:obsmarkers')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('new', '%i' % new, mandatory=False)


@parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers request"""
    ret = int(inpart.params['new'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('obsmarkers', {'new': ret}, partid)

@parthandler('hgtagsfnodes')
def handlehgtagsfnodes(op, inpart):
    """Applies .hgtags fnodes cache entries to the local repo.

    Payload is pairs of 20 byte changeset nodes and filenodes.
    """
    # Grab the transaction so we ensure that we have the lock at this point.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    cache = tags.hgtagsfnodescache(op.repo.unfiltered())

    count = 0
    while True:
        node = inpart.read(20)
        fnode = inpart.read(20)
        if len(node) < 20 or len(fnode) < 20:
            op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
            break
        cache.setfnode(node, fnode)
        count += 1

    cache.write()
    op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
# commands.py - command processing for mercurial
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import difflib
import errno
import os
import re
import sys

from .i18n import _
from .node import (
    hex,
    nullid,
    nullrev,
    short,
)
from . import (
    archival,
    bookmarks,
    bundle2,
    changegroup,
    cmdutil,
    copies,
    debugcommands as debugcommandsmod,
    destutil,
    dirstateguard,
    discovery,
    encoding,
    error,
    exchange,
    extensions,
    formatter,
    graphmod,
    hbisect,
    help,
    hg,
    lock as lockmod,
    merge as mergemod,
    obsolete,
    patch,
    phases,
    pycompat,
    rcutil,
    registrar,
    revsetlang,
    scmutil,
    server,
    sshserver,
    streamclone,
    tags as tagsmod,
    templatekw,
    ui as uimod,
    util,
)

release = lockmod.release

table = {}
table.update(debugcommandsmod.command._table)

command = registrar.command(table)

# common command options

globalopts = [
    ('R', 'repository', '',
     _('repository root directory or name of overlay bundle file'),
     _('REPO')),
    ('', 'cwd', '',
     _('change working directory'), _('DIR')),
    ('y', 'noninteractive', None,
     _('do not prompt, automatically pick the first choice for all prompts')),
    ('q', 'quiet', None, _('suppress output')),
    ('v', 'verbose', None, _('enable additional output')),
    ('', 'color', '',
     # i18n: 'always', 'auto', 'never', and 'debug' are keywords
     # and should not be translated
     _("when to colorize (boolean, always, auto, never, or debug)"),
     _('TYPE')),
    ('', 'config', [],
     _('set/override config option (use \'section.name=value\')'),
     _('CONFIG')),
    ('', 'debug', None, _('enable debugging output')),
    ('', 'debugger', None, _('start debugger')),
    ('', 'encoding', encoding.encoding, _('set the charset encoding'),
     _('ENCODE')),
    ('', 'encodingmode', encoding.encodingmode,
     _('set the charset encoding mode'), _('MODE')),
    ('', 'traceback', None, _('always print a traceback on exception')),
    ('', 'time', None, _('time how long the command takes')),
    ('', 'profile', None, _('print command execution profile')),
    ('', 'version', None, _('output version information and exit')),
    ('h', 'help', None, _('display help and exit')),
    ('', 'hidden', False, _('consider hidden changesets')),
    ('', 'pager', 'auto',
     _("when to paginate (boolean, always, auto, or never)"), _('TYPE')),
]

dryrunopts = cmdutil.dryrunopts
remoteopts = cmdutil.remoteopts
walkopts = cmdutil.walkopts
commitopts = cmdutil.commitopts
commitopts2 = cmdutil.commitopts2
formatteropts = cmdutil.formatteropts
templateopts = cmdutil.templateopts
logopts = cmdutil.logopts
diffopts = cmdutil.diffopts
diffwsopts = cmdutil.diffwsopts
diffopts2 = cmdutil.diffopts2
mergetoolopts = cmdutil.mergetoolopts
similarityopts = cmdutil.similarityopts
subrepoopts = cmdutil.subrepoopts
debugrevlogopts = cmdutil.debugrevlogopts

# Commands start here, listed alphabetically

@command('^add',
    walkopts + subrepoopts + dryrunopts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def add(ui, repo, *pats, **opts):
    """add the specified files on the next commit

    Schedule files to be version controlled and added to the
    repository.

    The files will be added to the repository at the next commit. To
    undo an add before that, see :hg:`forget`.

    If no names are given, add all files to the repository (except
    files matching ``.hgignore``).

    .. container:: verbose

       Examples:

         - New (unknown) files are added
           automatically by :hg:`add`::

             $ ls
             foo.c
             $ hg status
             ? foo.c
             $ hg add
             adding foo.c
             $ hg status
             A foo.c

         - Specific files to be added can be specified::

             $ ls
             bar.c foo.c
             $ hg status
             ? bar.c
             ? foo.c
             $ hg add bar.c
             $ hg status
             A bar.c
             ? foo.c

    Returns 0 if all files are successfully added.
    """

    m = scmutil.match(repo[None], pats, pycompat.byteskwargs(opts))
    rejected = cmdutil.add(ui, repo, m, "", False, **opts)
    return rejected and 1 or 0

@command('addremove',
    similarityopts + subrepoopts + walkopts + dryrunopts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def addremove(ui, repo, *pats, **opts):
    """add all new files, delete all missing files

    Add all new files and remove all missing files from the
    repository.

    Unless names are given, new files are ignored if they match any of
    the patterns in ``.hgignore``. As with add, these changes take
    effect at the next commit.

    Use the -s/--similarity option to detect renamed files. This
189 option takes a percentage between 0 (disabled) and 100 (files must
189 option takes a percentage between 0 (disabled) and 100 (files must
190 be identical) as its parameter. With a parameter greater than 0,
190 be identical) as its parameter. With a parameter greater than 0,
191 this compares every removed file with every added file and records
191 this compares every removed file with every added file and records
192 those similar enough as renames. Detecting renamed files this way
192 those similar enough as renames. Detecting renamed files this way
193 can be expensive. After using this option, :hg:`status -C` can be
193 can be expensive. After using this option, :hg:`status -C` can be
194 used to check which files were identified as moved or renamed. If
194 used to check which files were identified as moved or renamed. If
195 not specified, -s/--similarity defaults to 100 and only renames of
195 not specified, -s/--similarity defaults to 100 and only renames of
196 identical files are detected.
196 identical files are detected.
197
197
198 .. container:: verbose
198 .. container:: verbose
199
199
200 Examples:
200 Examples:
201
201
202 - A number of files (bar.c and foo.c) are new,
202 - A number of files (bar.c and foo.c) are new,
203 while foobar.c has been removed (without using :hg:`remove`)
203 while foobar.c has been removed (without using :hg:`remove`)
204 from the repository::
204 from the repository::
205
205
206 $ ls
206 $ ls
207 bar.c foo.c
207 bar.c foo.c
208 $ hg status
208 $ hg status
209 ! foobar.c
209 ! foobar.c
210 ? bar.c
210 ? bar.c
211 ? foo.c
211 ? foo.c
212 $ hg addremove
212 $ hg addremove
213 adding bar.c
213 adding bar.c
214 adding foo.c
214 adding foo.c
215 removing foobar.c
215 removing foobar.c
216 $ hg status
216 $ hg status
217 A bar.c
217 A bar.c
218 A foo.c
218 A foo.c
219 R foobar.c
219 R foobar.c
220
220
221 - A file foobar.c was moved to foo.c without using :hg:`rename`.
221 - A file foobar.c was moved to foo.c without using :hg:`rename`.
222 Afterwards, it was edited slightly::
222 Afterwards, it was edited slightly::
223
223
224 $ ls
224 $ ls
225 foo.c
225 foo.c
226 $ hg status
226 $ hg status
227 ! foobar.c
227 ! foobar.c
228 ? foo.c
228 ? foo.c
229 $ hg addremove --similarity 90
229 $ hg addremove --similarity 90
230 removing foobar.c
230 removing foobar.c
231 adding foo.c
231 adding foo.c
232 recording removal of foobar.c as rename to foo.c (94% similar)
232 recording removal of foobar.c as rename to foo.c (94% similar)
233 $ hg status -C
233 $ hg status -C
234 A foo.c
234 A foo.c
235 foobar.c
235 foobar.c
236 R foobar.c
236 R foobar.c
237
237
238 Returns 0 if all files are successfully added.
238 Returns 0 if all files are successfully added.
239 """
239 """
240 opts = pycompat.byteskwargs(opts)
240 opts = pycompat.byteskwargs(opts)
241 try:
241 try:
242 sim = float(opts.get('similarity') or 100)
242 sim = float(opts.get('similarity') or 100)
243 except ValueError:
243 except ValueError:
244 raise error.Abort(_('similarity must be a number'))
244 raise error.Abort(_('similarity must be a number'))
245 if sim < 0 or sim > 100:
245 if sim < 0 or sim > 100:
246 raise error.Abort(_('similarity must be between 0 and 100'))
246 raise error.Abort(_('similarity must be between 0 and 100'))
247 matcher = scmutil.match(repo[None], pats, opts)
247 matcher = scmutil.match(repo[None], pats, opts)
248 return scmutil.addremove(repo, matcher, "", opts, similarity=sim / 100.0)
248 return scmutil.addremove(repo, matcher, "", opts, similarity=sim / 100.0)
249
249
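The docstring above describes how ``--similarity`` compares every removed file against every added file and records sufficiently similar pairs as renames. As a rough illustration of that idea (a hypothetical sketch using ``difflib`` — this is not Mercurial's internal algorithm, which lives in ``scmutil.addremove``):

```python
import difflib

def similarity(old_text, new_text):
    # Ratio in [0.0, 1.0], comparable to addremove's sim / 100.0 above.
    return difflib.SequenceMatcher(None, old_text, new_text).ratio()

def detect_renames(removed, added, threshold):
    # removed/added: dicts mapping filename -> contents (hypothetical shape).
    # threshold: float in [0.0, 1.0]; 1.0 means only identical files match.
    renames = {}
    for new_name, new_text in added.items():
        best = None
        for old_name, old_text in removed.items():
            score = similarity(old_text, new_text)
            if score >= threshold and (best is None or score > best[1]):
                best = (old_name, score)
        if best:
            renames[new_name] = best[0]
    return renames
```

As the docstring warns, this is quadratic in the number of added and removed files, which is why rename detection "can be expensive".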
@command('^annotate|blame',
    [('r', 'rev', '', _('annotate the specified revision'), _('REV')),
    ('', 'follow', None,
     _('follow copies/renames and list the filename (DEPRECATED)')),
    ('', 'no-follow', None, _("don't follow copies and renames")),
    ('a', 'text', None, _('treat all files as text')),
    ('u', 'user', None, _('list the author (long with -v)')),
    ('f', 'file', None, _('list the filename')),
    ('d', 'date', None, _('list the date (short with -q)')),
    ('n', 'number', None, _('list the revision number (default)')),
    ('c', 'changeset', None, _('list the changeset')),
    ('l', 'line-number', None, _('show line number at the first appearance')),
    ('', 'skip', [], _('revision to not display (EXPERIMENTAL)'), _('REV')),
    ] + diffwsopts + walkopts + formatteropts,
    _('[-r REV] [-f] [-a] [-u] [-d] [-n] [-c] [-l] FILE...'),
    inferrepo=True)
def annotate(ui, repo, *pats, **opts):
    """show changeset information by line for each file

    List changes in files, showing the revision id responsible for
    each line.

    This command is useful for discovering when a change was made and
    by whom.

    If you include --file, --user, or --date, the revision number is
    suppressed unless you also include --number.

    Without the -a/--text option, annotate will avoid processing files
    it detects as binary. With -a, annotate will annotate the file
    anyway, although the results will probably be neither useful
    nor desirable.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    if not pats:
        raise error.Abort(_('at least one filename or pattern is required'))

    if opts.get('follow'):
        # --follow is deprecated and now just an alias for -f/--file
        # to mimic the behavior of Mercurial before version 1.5
        opts['file'] = True

    ctx = scmutil.revsingle(repo, opts.get('rev'))

    rootfm = ui.formatter('annotate', opts)
    if ui.quiet:
        datefunc = util.shortdate
    else:
        datefunc = util.datestr
    if ctx.rev() is None:
        def hexfn(node):
            if node is None:
                return None
            else:
                return rootfm.hexfunc(node)
        if opts.get('changeset'):
            # omit "+" suffix which is appended to node hex
            def formatrev(rev):
                if rev is None:
                    return '%d' % ctx.p1().rev()
                else:
                    return '%d' % rev
        else:
            def formatrev(rev):
                if rev is None:
                    return '%d+' % ctx.p1().rev()
                else:
                    return '%d ' % rev
        def formathex(hex):
            if hex is None:
                return '%s+' % rootfm.hexfunc(ctx.p1().node())
            else:
                return '%s ' % hex
    else:
        hexfn = rootfm.hexfunc
        formatrev = formathex = str

    opmap = [('user', ' ', lambda x: x[0].user(), ui.shortuser),
             ('number', ' ', lambda x: x[0].rev(), formatrev),
             ('changeset', ' ', lambda x: hexfn(x[0].node()), formathex),
             ('date', ' ', lambda x: x[0].date(), util.cachefunc(datefunc)),
             ('file', ' ', lambda x: x[0].path(), str),
             ('line_number', ':', lambda x: x[1], str),
            ]
    fieldnamemap = {'number': 'rev', 'changeset': 'node'}

    if (not opts.get('user') and not opts.get('changeset')
        and not opts.get('date') and not opts.get('file')):
        opts['number'] = True

    linenumber = opts.get('line_number') is not None
    if linenumber and (not opts.get('changeset')) and (not opts.get('number')):
        raise error.Abort(_('at least one of -n/-c is required for -l'))

    ui.pager('annotate')

    if rootfm.isplain():
        def makefunc(get, fmt):
            return lambda x: fmt(get(x))
    else:
        def makefunc(get, fmt):
            return get
    funcmap = [(makefunc(get, fmt), sep) for op, sep, get, fmt in opmap
               if opts.get(op)]
    funcmap[0] = (funcmap[0][0], '') # no separator in front of first column
    fields = ' '.join(fieldnamemap.get(op, op) for op, sep, get, fmt in opmap
                      if opts.get(op))

    def bad(x, y):
        raise error.Abort("%s: %s" % (x, y))

    m = scmutil.match(ctx, pats, opts, badfn=bad)

    follow = not opts.get('no_follow')
    diffopts = patch.difffeatureopts(ui, opts, section='annotate',
                                     whitespace=True)
    skiprevs = opts.get('skip')
    if skiprevs:
        skiprevs = scmutil.revrange(repo, skiprevs)

    for abs in ctx.walk(m):
        fctx = ctx[abs]
        rootfm.startitem()
        rootfm.data(abspath=abs, path=m.rel(abs))
        if not opts.get('text') and fctx.isbinary():
            rootfm.plain(_("%s: binary file\n")
                         % ((pats and m.rel(abs)) or abs))
            continue

        fm = rootfm.nested('lines')
        lines = fctx.annotate(follow=follow, linenumber=linenumber,
                              skiprevs=skiprevs, diffopts=diffopts)
        if not lines:
            fm.end()
            continue
        formats = []
        pieces = []

        for f, sep in funcmap:
            l = [f(n) for n, dummy in lines]
            if fm.isplain():
                sizes = [encoding.colwidth(x) for x in l]
                ml = max(sizes)
                formats.append([sep + ' ' * (ml - w) + '%s' for w in sizes])
            else:
                formats.append(['%s' for x in l])
            pieces.append(l)

        for f, p, l in zip(zip(*formats), zip(*pieces), lines):
            fm.startitem()
            fm.write(fields, "".join(f), *p)
            fm.write('line', ": %s", l[1])

        if not lines[-1][1].endswith('\n'):
            fm.plain('\n')
        fm.end()

    rootfm.end()

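The plain-formatter branch above right-pads each annotate column to the width of its widest cell before joining the columns. The same alignment idea can be sketched standalone (a hypothetical helper, simplified from the `sizes`/`formats` loop above, which pads per file rather than per call):

```python
def align_columns(rows, sep=' '):
    # rows: list of equal-length tuples of strings, one tuple per output line.
    # Right-align every column to its widest entry, like annotate's
    # per-column width computation with encoding.colwidth().
    widths = [max(len(r[i]) for r in rows) for i in range(len(rows[0]))]
    return [sep.join(cell.rjust(w) for cell, w in zip(row, widths))
            for row in rows]
```

For example, `align_columns([('12', 'alice'), ('3', 'bob')])` yields lines whose revision and user columns line up, which is why `hg annotate` output stays readable across revisions of very different widths.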
@command('archive',
    [('', 'no-decode', None, _('do not pass files through decoders')),
    ('p', 'prefix', '', _('directory prefix for files in archive'),
     _('PREFIX')),
    ('r', 'rev', '', _('revision to distribute'), _('REV')),
    ('t', 'type', '', _('type of distribution to create'), _('TYPE')),
    ] + subrepoopts + walkopts,
    _('[OPTION]... DEST'))
def archive(ui, repo, dest, **opts):
    '''create an unversioned archive of a repository revision

    By default, the revision used is the parent of the working
    directory; use -r/--rev to specify a different revision.

    The archive type is automatically detected based on file
    extension (to override, use -t/--type).

    .. container:: verbose

      Examples:

      - create a zip file containing the 1.0 release::

          hg archive -r 1.0 project-1.0.zip

      - create a tarball excluding .hg files::

          hg archive project.tar.gz -X ".hg*"

    Valid types are:

    :``files``: a directory full of files (default)
    :``tar``: tar archive, uncompressed
    :``tbz2``: tar archive, compressed using bzip2
    :``tgz``: tar archive, compressed using gzip
    :``uzip``: zip archive, uncompressed
    :``zip``: zip archive, compressed using deflate

    The exact name of the destination archive or directory is given
    using a format string; see :hg:`help export` for details.

    Each member added to an archive file has a directory prefix
    prepended. Use -p/--prefix to specify a format string for the
    prefix. The default is the basename of the archive, with suffixes
    removed.

    Returns 0 on success.
    '''

    opts = pycompat.byteskwargs(opts)
    ctx = scmutil.revsingle(repo, opts.get('rev'))
    if not ctx:
        raise error.Abort(_('no working directory: please specify a revision'))
    node = ctx.node()
    dest = cmdutil.makefilename(repo, dest, node)
    if os.path.realpath(dest) == repo.root:
        raise error.Abort(_('repository root cannot be destination'))

    kind = opts.get('type') or archival.guesskind(dest) or 'files'
    prefix = opts.get('prefix')

    if dest == '-':
        if kind == 'files':
            raise error.Abort(_('cannot archive plain files to stdout'))
        dest = cmdutil.makefileobj(repo, dest)
        if not prefix:
            prefix = os.path.basename(repo.root) + '-%h'

    prefix = cmdutil.makefilename(repo, prefix, node)
    matchfn = scmutil.match(ctx, [], opts)
    archival.archive(repo, dest, node, kind, not opts.get('no_decode'),
                     matchfn, prefix, subrepos=opts.get('subrepos'))

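The `kind = opts.get('type') or archival.guesskind(dest) or 'files'` line above falls back to guessing the archive type from the destination's file extension, and finally to `files`. A rough, hypothetical stand-in for that extension lookup (the real `archival.guesskind` consults the registered archiver table, not a hard-coded dict):

```python
def guesskind(dest):
    # Map destination extensions to the archive kinds listed in the
    # docstring above; longest extensions are checked first so that
    # 'project.tar.gz' matches '.tar.gz' rather than '.gz'.
    exts = {'.zip': 'zip', '.tar': 'tar', '.tgz': 'tgz',
            '.tar.gz': 'tgz', '.tbz2': 'tbz2', '.tar.bz2': 'tbz2'}
    for ext in sorted(exts, key=len, reverse=True):
        if dest.endswith(ext):
            return exts[ext]
    return None  # caller falls back to -t/--type or 'files'
```

With this fallback chain, `hg archive project-1.0.zip` produces a zip archive with no `-t` flag needed, exactly as in the docstring's first example.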
484 @command('backout',
484 @command('backout',
485 [('', 'merge', None, _('merge with old dirstate parent after backout')),
485 [('', 'merge', None, _('merge with old dirstate parent after backout')),
486 ('', 'commit', None,
486 ('', 'commit', None,
487 _('commit if no conflicts were encountered (DEPRECATED)')),
487 _('commit if no conflicts were encountered (DEPRECATED)')),
488 ('', 'no-commit', None, _('do not commit')),
488 ('', 'no-commit', None, _('do not commit')),
489 ('', 'parent', '',
489 ('', 'parent', '',
490 _('parent to choose when backing out merge (DEPRECATED)'), _('REV')),
490 _('parent to choose when backing out merge (DEPRECATED)'), _('REV')),
491 ('r', 'rev', '', _('revision to backout'), _('REV')),
491 ('r', 'rev', '', _('revision to backout'), _('REV')),
492 ('e', 'edit', False, _('invoke editor on commit messages')),
492 ('e', 'edit', False, _('invoke editor on commit messages')),
493 ] + mergetoolopts + walkopts + commitopts + commitopts2,
493 ] + mergetoolopts + walkopts + commitopts + commitopts2,
494 _('[OPTION]... [-r] REV'))
494 _('[OPTION]... [-r] REV'))
495 def backout(ui, repo, node=None, rev=None, **opts):
495 def backout(ui, repo, node=None, rev=None, **opts):
496 '''reverse effect of earlier changeset
496 '''reverse effect of earlier changeset
497
497
498 Prepare a new changeset with the effect of REV undone in the
498 Prepare a new changeset with the effect of REV undone in the
499 current working directory. If no conflicts were encountered,
499 current working directory. If no conflicts were encountered,
500 it will be committed immediately.
500 it will be committed immediately.
501
501
502 If REV is the parent of the working directory, then this new changeset
502 If REV is the parent of the working directory, then this new changeset
503 is committed automatically (unless --no-commit is specified).
503 is committed automatically (unless --no-commit is specified).
504
504
505 .. note::
505 .. note::
506
506
507 :hg:`backout` cannot be used to fix either an unwanted or
507 :hg:`backout` cannot be used to fix either an unwanted or
508 incorrect merge.
508 incorrect merge.
509
509
510 .. container:: verbose
510 .. container:: verbose
511
511
512 Examples:
512 Examples:
513
513
514 - Reverse the effect of the parent of the working directory.
514 - Reverse the effect of the parent of the working directory.
515 This backout will be committed immediately::
515 This backout will be committed immediately::
516
516
517 hg backout -r .
517 hg backout -r .
518
518
519 - Reverse the effect of previous bad revision 23::
519 - Reverse the effect of previous bad revision 23::
520
520
521 hg backout -r 23
521 hg backout -r 23
522
522
523 - Reverse the effect of previous bad revision 23 and
523 - Reverse the effect of previous bad revision 23 and
524 leave changes uncommitted::
524 leave changes uncommitted::
525
525
526 hg backout -r 23 --no-commit
526 hg backout -r 23 --no-commit
527 hg commit -m "Backout revision 23"
527 hg commit -m "Backout revision 23"
528
528
529 By default, the pending changeset will have one parent,
529 By default, the pending changeset will have one parent,
530 maintaining a linear history. With --merge, the pending
530 maintaining a linear history. With --merge, the pending
531 changeset will instead have two parents: the old parent of the
531 changeset will instead have two parents: the old parent of the
532 working directory and a new child of REV that simply undoes REV.
532 working directory and a new child of REV that simply undoes REV.
533
533
534 Before version 1.7, the behavior without --merge was equivalent
534 Before version 1.7, the behavior without --merge was equivalent
535 to specifying --merge followed by :hg:`update --clean .` to
535 to specifying --merge followed by :hg:`update --clean .` to
536 cancel the merge and leave the child of REV as a head to be
536 cancel the merge and leave the child of REV as a head to be
537 merged separately.
537 merged separately.
538
538
539 See :hg:`help dates` for a list of formats valid for -d/--date.
539 See :hg:`help dates` for a list of formats valid for -d/--date.
540
540
541 See :hg:`help revert` for a way to restore files to the state
541 See :hg:`help revert` for a way to restore files to the state
542 of another revision.
542 of another revision.
543
543
544 Returns 0 on success, 1 if nothing to backout or there are unresolved
544 Returns 0 on success, 1 if nothing to backout or there are unresolved
545 files.
545 files.
546 '''
546 '''
547 wlock = lock = None
547 wlock = lock = None
548 try:
548 try:
549 wlock = repo.wlock()
549 wlock = repo.wlock()
550 lock = repo.lock()
550 lock = repo.lock()
551 return _dobackout(ui, repo, node, rev, **opts)
551 return _dobackout(ui, repo, node, rev, **opts)
552 finally:
552 finally:
553 release(lock, wlock)
553 release(lock, wlock)
554
554
555 def _dobackout(ui, repo, node=None, rev=None, **opts):
555 def _dobackout(ui, repo, node=None, rev=None, **opts):
556 opts = pycompat.byteskwargs(opts)
556 opts = pycompat.byteskwargs(opts)
557 if opts.get('commit') and opts.get('no_commit'):
557 if opts.get('commit') and opts.get('no_commit'):
558 raise error.Abort(_("cannot use --commit with --no-commit"))
558 raise error.Abort(_("cannot use --commit with --no-commit"))
559 if opts.get('merge') and opts.get('no_commit'):
559 if opts.get('merge') and opts.get('no_commit'):
560 raise error.Abort(_("cannot use --merge with --no-commit"))
560 raise error.Abort(_("cannot use --merge with --no-commit"))
561
561
562 if rev and node:
562 if rev and node:
563 raise error.Abort(_("please specify just one revision"))
563 raise error.Abort(_("please specify just one revision"))
564
564
565 if not rev:
565 if not rev:
566 rev = node
566 rev = node
567
567
568 if not rev:
568 if not rev:
569 raise error.Abort(_("please specify a revision to backout"))
569 raise error.Abort(_("please specify a revision to backout"))
570
570
571 date = opts.get('date')
571 date = opts.get('date')
572 if date:
572 if date:
573 opts['date'] = util.parsedate(date)
573 opts['date'] = util.parsedate(date)
574
574
575 cmdutil.checkunfinished(repo)
575 cmdutil.checkunfinished(repo)
576 cmdutil.bailifchanged(repo)
576 cmdutil.bailifchanged(repo)
577 node = scmutil.revsingle(repo, rev).node()
577 node = scmutil.revsingle(repo, rev).node()
578
578
579 op1, op2 = repo.dirstate.parents()
579 op1, op2 = repo.dirstate.parents()
580 if not repo.changelog.isancestor(node, op1):
580 if not repo.changelog.isancestor(node, op1):
581 raise error.Abort(_('cannot backout change that is not an ancestor'))
581 raise error.Abort(_('cannot backout change that is not an ancestor'))
582
582
583 p1, p2 = repo.changelog.parents(node)
583 p1, p2 = repo.changelog.parents(node)
584 if p1 == nullid:
584 if p1 == nullid:
585 raise error.Abort(_('cannot backout a change with no parents'))
585 raise error.Abort(_('cannot backout a change with no parents'))
586 if p2 != nullid:
586 if p2 != nullid:
587 if not opts.get('parent'):
587 if not opts.get('parent'):
588 raise error.Abort(_('cannot backout a merge changeset'))
588 raise error.Abort(_('cannot backout a merge changeset'))
589 p = repo.lookup(opts['parent'])
589 p = repo.lookup(opts['parent'])
590 if p not in (p1, p2):
590 if p not in (p1, p2):
591 raise error.Abort(_('%s is not a parent of %s') %
591 raise error.Abort(_('%s is not a parent of %s') %
592 (short(p), short(node)))
592 (short(p), short(node)))
593 parent = p
593 parent = p
594 else:
594 else:
595 if opts.get('parent'):
595 if opts.get('parent'):
596 raise error.Abort(_('cannot use --parent on non-merge changeset'))
596 raise error.Abort(_('cannot use --parent on non-merge changeset'))
597 parent = p1
597 parent = p1
598
598
599 # the backout should appear on the same branch
599 # the backout should appear on the same branch
600 branch = repo.dirstate.branch()
600 branch = repo.dirstate.branch()
601 bheads = repo.branchheads(branch)
601 bheads = repo.branchheads(branch)
602 rctx = scmutil.revsingle(repo, hex(parent))
602 rctx = scmutil.revsingle(repo, hex(parent))
603 if not opts.get('merge') and op1 != node:
603 if not opts.get('merge') and op1 != node:
604 dsguard = dirstateguard.dirstateguard(repo, 'backout')
604 dsguard = dirstateguard.dirstateguard(repo, 'backout')
605 try:
605 try:
606 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
606 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
607 'backout')
607 'backout')
608 stats = mergemod.update(repo, parent, True, True, node, False)
608 stats = mergemod.update(repo, parent, True, True, node, False)
609 repo.setparents(op1, op2)
609 repo.setparents(op1, op2)
610 dsguard.close()
610 dsguard.close()
611 hg._showstats(repo, stats)
            hg._showstats(repo, stats)
            if stats[3]:
                repo.ui.status(_("use 'hg resolve' to retry unresolved "
                                 "file merges\n"))
                return 1
        finally:
            ui.setconfig('ui', 'forcemerge', '', '')
            lockmod.release(dsguard)
    else:
        hg.clean(repo, node, show_stats=False)
        repo.dirstate.setbranch(branch)
        cmdutil.revert(ui, repo, rctx, repo.dirstate.parents())

    if opts.get('no_commit'):
        msg = _("changeset %s backed out, "
                "don't forget to commit.\n")
        ui.status(msg % short(node))
        return 0

    def commitfunc(ui, repo, message, match, opts):
        editform = 'backout'
        e = cmdutil.getcommiteditor(editform=editform,
                                    **pycompat.strkwargs(opts))
        if not message:
            # we don't translate commit messages
            message = "Backed out changeset %s" % short(node)
            e = cmdutil.getcommiteditor(edit=True, editform=editform)
        return repo.commit(message, opts.get('user'), opts.get('date'),
                           match, editor=e)
    newnode = cmdutil.commit(ui, repo, commitfunc, [], opts)
    if not newnode:
        ui.status(_("nothing changed\n"))
        return 1
    cmdutil.commitstatus(repo, newnode, branch, bheads)

    def nice(node):
        return '%d:%s' % (repo.changelog.rev(node), short(node))
    ui.status(_('changeset %s backs out changeset %s\n') %
              (nice(repo.changelog.tip()), nice(node)))
    if opts.get('merge') and op1 != node:
        hg.clean(repo, op1, show_stats=False)
        ui.status(_('merging with changeset %s\n')
                  % nice(repo.changelog.tip()))
        try:
            ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                         'backout')
            return hg.merge(repo, hex(repo.changelog.tip()))
        finally:
            ui.setconfig('ui', 'forcemerge', '', '')
    return 0

@command('bisect',
    [('r', 'reset', False, _('reset bisect state')),
    ('g', 'good', False, _('mark changeset good')),
    ('b', 'bad', False, _('mark changeset bad')),
    ('s', 'skip', False, _('skip testing changeset')),
    ('e', 'extend', False, _('extend the bisect range')),
    ('c', 'command', '', _('use command to check changeset state'), _('CMD')),
    ('U', 'noupdate', False, _('do not update to target'))],
    _("[-gbsr] [-U] [-c CMD] [REV]"))
def bisect(ui, repo, rev=None, extra=None, command=None,
           reset=None, good=None, bad=None, skip=None, extend=None,
           noupdate=None):
    """subdivision search of changesets

    This command helps to find changesets which introduce problems. To
    use, mark the earliest changeset you know exhibits the problem as
    bad, then mark the latest changeset which is free from the problem
    as good. Bisect will update your working directory to a revision
    for testing (unless the -U/--noupdate option is specified). Once
    you have performed tests, mark the working directory as good or
    bad, and bisect will either update to another candidate changeset
    or announce that it has found the bad revision.

    As a shortcut, you can also use the revision argument to mark a
    revision as good or bad without checking it out first.

    If you supply a command, it will be used for automatic bisection.
    The environment variable HG_NODE will contain the ID of the
    changeset being tested. The exit status of the command will be
    used to mark revisions as good or bad: status 0 means good, 125
    means to skip the revision, 127 (command not found) will abort the
    bisection, and any other non-zero exit status means the revision
    is bad.

    .. container:: verbose

      Some examples:

      - start a bisection with known bad revision 34, and good revision 12::

          hg bisect --bad 34
          hg bisect --good 12

      - advance the current bisection by marking current revision as good or
        bad::

          hg bisect --good
          hg bisect --bad

      - mark the current revision, or a known revision, to be skipped (e.g. if
        that revision is not usable because of another issue)::

          hg bisect --skip
          hg bisect --skip 23

      - skip all revisions that do not touch directories ``foo`` or ``bar``::

          hg bisect --skip "!( file('path:foo') & file('path:bar') )"

      - forget the current bisection::

          hg bisect --reset

      - use 'make && make tests' to automatically find the first broken
        revision::

          hg bisect --reset
          hg bisect --bad 34
          hg bisect --good 12
          hg bisect --command "make && make tests"

      - see all changesets whose states are already known in the current
        bisection::

          hg log -r "bisect(pruned)"

      - see the changeset currently being bisected (especially useful
        if running with -U/--noupdate)::

          hg log -r "bisect(current)"

      - see all changesets that took part in the current bisection::

          hg log -r "bisect(range)"

      - you can even get a nice graph::

          hg log --graph -r "bisect(range)"

      See :hg:`help revisions.bisect` for more about the `bisect()` predicate.

    Returns 0 on success.
    """
    # backward compatibility
    if rev in "good bad reset init".split():
        ui.warn(_("(use of 'hg bisect <cmd>' is deprecated)\n"))
        cmd, rev, extra = rev, extra, None
        if cmd == "good":
            good = True
        elif cmd == "bad":
            bad = True
        else:
            reset = True
    elif extra:
        raise error.Abort(_('incompatible arguments'))

    incompatibles = {
        '--bad': bad,
        '--command': bool(command),
        '--extend': extend,
        '--good': good,
        '--reset': reset,
        '--skip': skip,
    }

    enabled = [x for x in incompatibles if incompatibles[x]]

    if len(enabled) > 1:
        raise error.Abort(_('%s and %s are incompatible') %
                          tuple(sorted(enabled)[0:2]))

    if reset:
        hbisect.resetstate(repo)
        return

    state = hbisect.load_state(repo)

    # update state
    if good or bad or skip:
        if rev:
            nodes = [repo.lookup(i) for i in scmutil.revrange(repo, [rev])]
        else:
            nodes = [repo.lookup('.')]
        if good:
            state['good'] += nodes
        elif bad:
            state['bad'] += nodes
        elif skip:
            state['skip'] += nodes
        hbisect.save_state(repo, state)
        if not (state['good'] and state['bad']):
            return

    def mayupdate(repo, node, show_stats=True):
        """commonly used update sequence"""
        if noupdate:
            return
        cmdutil.checkunfinished(repo)
        cmdutil.bailifchanged(repo)
        return hg.clean(repo, node, show_stats=show_stats)

    displayer = cmdutil.show_changeset(ui, repo, {})

    if command:
        changesets = 1
        if noupdate:
            try:
                node = state['current'][0]
            except LookupError:
                raise error.Abort(_('current bisect revision is unknown - '
                                    'start a new bisect to fix'))
        else:
            node, p2 = repo.dirstate.parents()
            if p2 != nullid:
                raise error.Abort(_('current bisect revision is a merge'))
        if rev:
            node = repo[scmutil.revsingle(repo, rev, node)].node()
        try:
            while changesets:
                # update state
                state['current'] = [node]
                hbisect.save_state(repo, state)
                status = ui.system(command, environ={'HG_NODE': hex(node)},
                                   blockedtag='bisect_check')
                if status == 125:
                    transition = "skip"
                elif status == 0:
                    transition = "good"
                # status < 0 means process was killed
                elif status == 127:
                    raise error.Abort(_("failed to execute %s") % command)
                elif status < 0:
                    raise error.Abort(_("%s killed") % command)
                else:
                    transition = "bad"
                state[transition].append(node)
                ctx = repo[node]
                ui.status(_('changeset %d:%s: %s\n') % (ctx, ctx, transition))
                hbisect.checkstate(state)
                # bisect
                nodes, changesets, bgood = hbisect.bisect(repo.changelog, state)
                # update to next check
                node = nodes[0]
                mayupdate(repo, node, show_stats=False)
        finally:
            state['current'] = [node]
            hbisect.save_state(repo, state)
        hbisect.printresult(ui, repo, state, displayer, nodes, bgood)
        return

    hbisect.checkstate(state)

    # actually bisect
    nodes, changesets, good = hbisect.bisect(repo.changelog, state)
    if extend:
        if not changesets:
            extendnode = hbisect.extendrange(repo, state, nodes, good)
            if extendnode is not None:
                ui.write(_("Extending search to changeset %d:%s\n")
                         % (extendnode.rev(), extendnode))
                state['current'] = [extendnode.node()]
                hbisect.save_state(repo, state)
                return mayupdate(repo, extendnode.node())
        raise error.Abort(_("nothing to extend"))

    if changesets == 0:
        hbisect.printresult(ui, repo, state, displayer, nodes, good)
    else:
        assert len(nodes) == 1  # only a single node can be tested next
        node = nodes[0]
        # compute the approximate number of remaining tests
        tests, size = 0, 2
        while size <= changesets:
            tests, size = tests + 1, size * 2
        rev = repo.changelog.rev(node)
        ui.write(_("Testing changeset %d:%s "
                   "(%d changesets remaining, ~%d tests)\n")
                 % (rev, short(node), changesets, tests))
        state['current'] = [node]
        hbisect.save_state(repo, state)
        return mayupdate(repo, node)
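The `--command` loop above maps the test script's exit status to a bisect verdict (0 means good, 125 skips the revision, 127 or a negative status aborts, anything else is bad). That convention can be sketched in isolation; the following is an illustrative standalone function, not part of `commands.py`:

```python
def classify_bisect_status(status):
    """Map a test command's exit status to a bisect verdict.

    Mirrors the convention 'hg bisect --command' documents: 0 is good,
    125 skips the revision, 127 (command not found) and negative
    statuses (process killed by a signal) abort, anything else is bad.
    """
    if status == 125:
        return "skip"
    if status == 0:
        return "good"
    if status == 127 or status < 0:
        raise RuntimeError("bisect command failed or was killed")
    return "bad"

print(classify_bisect_status(0))    # good
print(classify_bisect_status(125))  # skip
print(classify_bisect_status(1))    # bad
```

Note that 125 is checked before the generic non-zero branch, which is why a skip-aware test script (e.g. one that fails to build) can opt out of marking a revision bad.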

@command('bookmarks|bookmark',
    [('f', 'force', False, _('force')),
    ('r', 'rev', '', _('revision for bookmark action'), _('REV')),
    ('d', 'delete', False, _('delete a given bookmark')),
    ('m', 'rename', '', _('rename a given bookmark'), _('OLD')),
    ('i', 'inactive', False, _('mark a bookmark inactive')),
    ] + formatteropts,
    _('hg bookmarks [OPTIONS]... [NAME]...'))
def bookmark(ui, repo, *names, **opts):
    '''create a new bookmark or list existing bookmarks

    Bookmarks are labels on changesets to help track lines of development.
    Bookmarks are unversioned and can be moved, renamed and deleted.
    Deleting or moving a bookmark has no effect on the associated changesets.

    Creating or updating to a bookmark causes it to be marked as 'active'.
    The active bookmark is indicated with a '*'.
    When a commit is made, the active bookmark will advance to the new commit.
    A plain :hg:`update` will also advance an active bookmark, if possible.
    Updating away from a bookmark will cause it to be deactivated.

    Bookmarks can be pushed and pulled between repositories (see
    :hg:`help push` and :hg:`help pull`). If a shared bookmark has
    diverged, a new 'divergent bookmark' of the form 'name@path' will
    be created. Using :hg:`merge` will resolve the divergence.

    A bookmark named '@' has the special property that :hg:`clone` will
    check it out by default if it exists.

    .. container:: verbose

      Examples:

      - create an active bookmark for a new line of development::

          hg book new-feature

      - create an inactive bookmark as a place marker::

          hg book -i reviewed

      - create an inactive bookmark on another changeset::

          hg book -r .^ tested

      - rename bookmark turkey to dinner::

          hg book -m turkey dinner

      - move the '@' bookmark from another branch::

          hg book -f @
    '''
    opts = pycompat.byteskwargs(opts)
    force = opts.get('force')
    rev = opts.get('rev')
    delete = opts.get('delete')
    rename = opts.get('rename')
    inactive = opts.get('inactive')

    if delete and rename:
        raise error.Abort(_("--delete and --rename are incompatible"))
    if delete and rev:
        raise error.Abort(_("--rev is incompatible with --delete"))
    if rename and rev:
        raise error.Abort(_("--rev is incompatible with --rename"))
    if not names and (delete or rev):
        raise error.Abort(_("bookmark name required"))

    if delete or rename or names or inactive:
        with repo.wlock(), repo.lock(), repo.transaction('bookmark') as tr:
            if delete:
                bookmarks.delete(repo, tr, names)
            elif rename:
                if not names:
                    raise error.Abort(_("new bookmark name required"))
                elif len(names) > 1:
                    raise error.Abort(_("only one new bookmark name allowed"))
                bookmarks.rename(repo, tr, rename, names[0], force, inactive)
            elif names:
                bookmarks.addbookmarks(repo, tr, names, rev, force, inactive)
            elif inactive:
                if len(repo._bookmarks) == 0:
                    ui.status(_("no bookmarks set\n"))
                elif not repo._activebookmark:
                    ui.status(_("no active bookmark\n"))
                else:
                    bookmarks.deactivate(repo)
    else:  # show bookmarks
        bookmarks.printbookmarks(ui, repo, **opts)

@command('branch',
    [('f', 'force', None,
     _('set branch name even if it shadows an existing branch')),
    ('C', 'clean', None, _('reset branch name to parent branch name'))],
    _('[-fC] [NAME]'))
def branch(ui, repo, label=None, **opts):
    """set or show the current branch name

    .. note::

       Branch names are permanent and global. Use :hg:`bookmark` to create a
       light-weight bookmark instead. See :hg:`help glossary` for more
       information about named branches and bookmarks.

    With no argument, show the current branch name. With one argument,
    set the working directory branch name (the branch will not exist
    in the repository until the next commit). Standard practice
    recommends that primary development take place on the 'default'
    branch.

    Unless -f/--force is specified, branch will not let you set a
    branch name that already exists.

    Use -C/--clean to reset the working directory branch to that of
    the parent of the working directory, negating a previous branch
    change.

    Use the command :hg:`update` to switch to an existing branch. Use
    :hg:`commit --close-branch` to mark this branch head as closed.
    When all heads of a branch are closed, the branch will be
    considered closed.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    if label:
        label = label.strip()

    if not opts.get('clean') and not label:
        ui.write("%s\n" % repo.dirstate.branch())
        return

    with repo.wlock():
        if opts.get('clean'):
            label = repo[None].p1().branch()
            repo.dirstate.setbranch(label)
            ui.status(_('reset working directory to branch %s\n') % label)
        elif label:
            if not opts.get('force') and label in repo.branchmap():
                if label not in [p.branch() for p in repo[None].parents()]:
                    raise error.Abort(_('a branch of the same name already'
                                        ' exists'),
                                      # i18n: "it" refers to an existing branch
                                      hint=_("use 'hg update' to switch to it"))
            scmutil.checknewlabel(repo, label, 'branch')
            repo.dirstate.setbranch(label)
            ui.status(_('marked working directory as branch %s\n') % label)

            # find any open named branches aside from default
            others = [n for n, h, t, c in repo.branchmap().iterbranches()
                      if n != "default" and not c]
            if not others:
                ui.status(_('(branches are permanent and global, '
                            'did you want a bookmark?)\n'))

@command('branches',
    [('a', 'active', False,
      _('show only branches that have unmerged heads (DEPRECATED)')),
    ('c', 'closed', False, _('show normal and closed branches')),
    ] + formatteropts,
    _('[-c]'))
def branches(ui, repo, active=False, closed=False, **opts):
    """list repository named branches

    List the repository's named branches, indicating which ones are
    inactive. If -c/--closed is specified, also list branches which have
    been marked closed (see :hg:`commit --close-branch`).

    Use the command :hg:`update` to switch to an existing branch.

    Returns 0.
    """

    opts = pycompat.byteskwargs(opts)
    ui.pager('branches')
    fm = ui.formatter('branches', opts)
    hexfunc = fm.hexfunc

    allheads = set(repo.heads())
    branches = []
    for tag, heads, tip, isclosed in repo.branchmap().iterbranches():
        isactive = not isclosed and bool(set(heads) & allheads)
        branches.append((tag, repo[tip], isactive, not isclosed))
    branches.sort(key=lambda i: (i[2], i[1].rev(), i[0], i[3]),
                  reverse=True)

    for tag, ctx, isactive, isopen in branches:
        if active and not isactive:
            continue
        if isactive:
            label = 'branches.active'
            notice = ''
        elif not isopen:
            if not closed:
                continue
            label = 'branches.closed'
            notice = _(' (closed)')
        else:
            label = 'branches.inactive'
            notice = _(' (inactive)')
        current = (tag == repo.dirstate.branch())
        if current:
            label = 'branches.current'

        fm.startitem()
        fm.write('branch', '%s', tag, label=label)
        rev = ctx.rev()
        padsize = max(31 - len(str(rev)) - encoding.colwidth(tag), 0)
        fmt = ' ' * padsize + ' %d:%s'
        fm.condwrite(not ui.quiet, 'rev node', fmt, rev, hexfunc(ctx.node()),
                     label='log.changeset changeset.%s' % ctx.phasestr())
        fm.context(ctx=ctx)
        fm.data(active=isactive, closed=not isopen, current=current)
        if not ui.quiet:
            fm.plain(notice)
        fm.plain('\n')
    fm.end()
1112
1112
@command('bundle',
    [('f', 'force', None, _('run even when the destination is unrelated')),
    ('r', 'rev', [], _('a changeset intended to be added to the destination'),
     _('REV')),
    ('b', 'branch', [], _('a specific branch you would like to bundle'),
     _('BRANCH')),
    ('', 'base', [],
     _('a base changeset assumed to be available at the destination'),
     _('REV')),
    ('a', 'all', None, _('bundle all changesets in the repository')),
    ('t', 'type', 'bzip2', _('bundle compression type to use'), _('TYPE')),
    ] + remoteopts,
    _('[-f] [-t BUNDLESPEC] [-a] [-r REV]... [--base REV]... FILE [DEST]'))
def bundle(ui, repo, fname, dest=None, **opts):
    """create a bundle file

    Generate a bundle file containing data to be added to a repository.

    To create a bundle containing all changesets, use -a/--all
    (or --base null). Otherwise, hg assumes the destination will have
    all the nodes you specify with --base parameters. If no --base is
    given, hg will assume the destination already has all the nodes
    present in the destination repository, or in default-push/default
    if no destination is specified.

    You can change bundle format with the -t/--type option. See
    :hg:`help bundlespec` for documentation on this format. By default,
    the most appropriate format is used and compression defaults to
    bzip2.

    The bundle file can then be transferred using conventional means
    and applied to another repository with the unbundle or pull
    command. This is useful when direct push and pull are not
    available or when exporting an entire repository is undesirable.

    Applying bundles preserves all changeset contents including
    permissions, copy/rename information, and revision history.

    Returns 0 on success, 1 if no changes found.
    """
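The --all/--base/destination interaction described above reduces to a small validation step. The helper below is our own sketch, not Mercurial API; where the real command raises `error.Abort`, this uses `ValueError`.

```python
def resolve_base(all_opt, base, dest):
    # --all conflicts with a destination and overrides --base with 'null';
    # --base likewise conflicts with an explicit destination.
    if all_opt:
        if dest:
            raise ValueError('--all is incompatible with a destination')
        return ['null']
    if base and dest:
        raise ValueError('--base is incompatible with a destination')
    return base
```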
    opts = pycompat.byteskwargs(opts)
    revs = None
    if 'rev' in opts:
        revstrings = opts['rev']
        revs = scmutil.revrange(repo, revstrings)
        if revstrings and not revs:
            raise error.Abort(_('no commits to bundle'))

    bundletype = opts.get('type', 'bzip2').lower()
    try:
        bcompression, cgversion, params = exchange.parsebundlespec(
            repo, bundletype, strict=False)
    except error.UnsupportedBundleSpecification as e:
        raise error.Abort(str(e),
                          hint=_("see 'hg help bundlespec' for supported "
                                 "values for --type"))

    # Packed bundles are a pseudo bundle format for now.
    if cgversion == 's1':
        raise error.Abort(_('packed bundles cannot be produced by "hg bundle"'),
                          hint=_("use 'hg debugcreatestreamclonebundle'"))

    if opts.get('all'):
        if dest:
            raise error.Abort(_("--all is incompatible with specifying "
                                "a destination"))
        if opts.get('base'):
            ui.warn(_("ignoring --base because --all was specified\n"))
        base = ['null']
    else:
        base = scmutil.revrange(repo, opts.get('base'))
    if cgversion not in changegroup.supportedoutgoingversions(repo):
        raise error.Abort(_("repository does not support bundle version %s") %
                          cgversion)

    if base:
        if dest:
            raise error.Abort(_("--base is incompatible with specifying "
                                "a destination"))
        common = [repo.lookup(rev) for rev in base]
        heads = revs and map(repo.lookup, revs) or None
        outgoing = discovery.outgoing(repo, common, heads)
    else:
        dest = ui.expandpath(dest or 'default-push', dest or 'default')
        dest, branches = hg.parseurl(dest, opts.get('branch'))
        other = hg.peer(repo, opts, dest)
        revs, checkout = hg.addbranchrevs(repo, repo, branches, revs)
        heads = revs and map(repo.lookup, revs) or revs
        outgoing = discovery.findcommonoutgoing(repo, other,
                                                onlyheads=heads,
                                                force=opts.get('force'),
                                                portable=True)

    if not outgoing.missing:
        scmutil.nochangesfound(ui, repo, not base and outgoing.excluded)
        return 1

    if cgversion == '01': #bundle1
        if bcompression is None:
            bcompression = 'UN'
        bversion = 'HG10' + bcompression
        bcompression = None
    elif cgversion in ('02', '03'):
        bversion = 'HG20'
    else:
        raise error.ProgrammingError(
            'bundle: unexpected changegroup version %s' % cgversion)

    # TODO compression options should be derived from bundlespec parsing.
    # This is a temporary hack to allow adjusting bundle compression
    # level without a) formalizing the bundlespec changes to declare it
    # b) introducing a command flag.
    compopts = {}
    complevel = ui.configint('experimental', 'bundlecomplevel')
    if complevel is not None:
        compopts['level'] = complevel

    contentopts = {'cg.version': cgversion}
    if repo.ui.configbool('experimental', 'evolution.bundle-obsmarker', False):
        contentopts['obsolescence'] = True
    if repo.ui.configbool('experimental', 'bundle-phases', False):
        contentopts['phases'] = True
    bundle2.writenewbundle(ui, repo, 'bundle', fname, bversion, outgoing,
                           contentopts, compression=bcompression,
                           compopts=compopts)

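The changegroup-version to bundle-header selection above reduces to a small mapping, sketched standalone below (the helper name is ours): bundle1 ('01') encodes the compression in the 'HG10xx' header itself, while '02'/'03' use the 'HG20' container and pass compression separately.

```python
def bundle_header(cgversion, bcompression=None):
    # bundle1: compression lives in the header suffix ('UN' if none given),
    # and no separate compression is passed on.
    if cgversion == '01':
        return 'HG10' + (bcompression or 'UN'), None
    # bundle2: fixed 'HG20' magic; compression is handled by the container.
    if cgversion in ('02', '03'):
        return 'HG20', bcompression
    raise ValueError('unexpected changegroup version %s' % cgversion)
```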
@command('cat',
    [('o', 'output', '',
     _('print output to file with formatted name'), _('FORMAT')),
    ('r', 'rev', '', _('print the given revision'), _('REV')),
    ('', 'decode', None, _('apply any matching decode filter')),
    ] + walkopts + formatteropts,
    _('[OPTION]... FILE...'),
    inferrepo=True)
def cat(ui, repo, file1, *pats, **opts):
    """output the current or given revision of files

    Print the specified files as they were at the given revision. If
    no revision is given, the parent of the working directory is used.

    Output may be to a file, in which case the name of the file is
    given using a format string. The formatting rules are as follows:

    :``%%``: literal "%" character
    :``%s``: basename of file being printed
    :``%d``: dirname of file being printed, or '.' if in repository root
    :``%p``: root-relative path name of file being printed
    :``%H``: changeset hash (40 hexadecimal digits)
    :``%R``: changeset revision number
    :``%h``: short-form changeset hash (12 hexadecimal digits)
    :``%r``: zero-padded changeset revision number
    :``%b``: basename of the exporting repository

    Returns 0 on success.
    """
    ctx = scmutil.revsingle(repo, opts.get('rev'))
    m = scmutil.match(ctx, (file1,) + pats, opts)
    fntemplate = opts.pop('output', '')
    if cmdutil.isstdiofilename(fntemplate):
        fntemplate = ''

    if fntemplate:
        fm = formatter.nullformatter(ui, 'cat')
    else:
        ui.pager('cat')
        fm = ui.formatter('cat', opts)
    with fm:
        return cmdutil.cat(ui, repo, ctx, m, fm, fntemplate, '', **opts)

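The path-related format keys from the cat help above (`%%`, `%s`, `%d`, `%p`) can be illustrated with a minimal expander. This is a hedged sketch with our own name and implementation, not Mercurial's actual template machinery, and it deliberately omits the changeset keys.

```python
import posixpath

def expand_name(fmt, path):
    # Substitutions for the path-valued keys documented above.
    repl = {
        '%': '%',                              # %% -> literal percent
        's': posixpath.basename(path),         # %s -> basename
        'd': posixpath.dirname(path) or '.',   # %d -> dirname, '.' at root
        'p': path,                             # %p -> root-relative path
    }
    out, i = [], 0
    while i < len(fmt):
        if fmt[i] == '%' and i + 1 < len(fmt) and fmt[i + 1] in repl:
            out.append(repl[fmt[i + 1]])
            i += 2
        else:
            out.append(fmt[i])
            i += 1
    return ''.join(out)
```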
@command('^clone',
    [('U', 'noupdate', None, _('the clone will include an empty working '
                               'directory (only a repository)')),
    ('u', 'updaterev', '', _('revision, tag, or branch to check out'),
     _('REV')),
    ('r', 'rev', [], _('include the specified changeset'), _('REV')),
    ('b', 'branch', [], _('clone only the specified branch'), _('BRANCH')),
    ('', 'pull', None, _('use pull protocol to copy metadata')),
    ('', 'uncompressed', None, _('use uncompressed transfer (fast over LAN)')),
    ] + remoteopts,
    _('[OPTION]... SOURCE [DEST]'),
    norepo=True)
def clone(ui, source, dest=None, **opts):
    """make a copy of an existing repository

    Create a copy of an existing repository in a new directory.

    If no destination directory name is specified, it defaults to the
    basename of the source.

    The location of the source is added to the new repository's
    ``.hg/hgrc`` file, as the default to be used for future pulls.

    Only local paths and ``ssh://`` URLs are supported as
    destinations. For ``ssh://`` destinations, no working directory or
    ``.hg/hgrc`` will be created on the remote side.

    If the source repository has a bookmark called '@' set, that
    revision will be checked out in the new repository by default.

    To check out a particular version, use -u/--update, or
    -U/--noupdate to create a clone with no working directory.

    To pull only a subset of changesets, specify one or more revision
    identifiers with -r/--rev or branches with -b/--branch. The
    resulting clone will contain only the specified changesets and
    their ancestors. These options (or 'clone src#rev dest') imply
    --pull, even for local source repositories.

    .. note::

       Specifying a tag will include the tagged changeset but not the
       changeset containing the tag.

    .. container:: verbose

      For efficiency, hardlinks are used for cloning whenever the
      source and destination are on the same filesystem (note this
      applies only to the repository data, not to the working
      directory). Some filesystems, such as AFS, implement hardlinking
      incorrectly, but do not report errors. In these cases, use the
      --pull option to avoid hardlinking.

      In some cases, you can clone repositories and the working
      directory using full hardlinks with ::

        $ cp -al REPO REPOCLONE

      This is the fastest way to clone, but it is not always safe. The
      operation is not atomic (making sure REPO is not modified during
      the operation is up to you) and you have to make sure your
      editor breaks hardlinks (Emacs and most Linux Kernel tools do
      so). Also, this is not compatible with certain extensions that
      place their metadata under the .hg directory, such as mq.

      Mercurial will update the working directory to the first applicable
      revision from this list:

      a) null if -U or the source repository has no changesets
      b) if -u . and the source repository is local, the first parent of
         the source repository's working directory
      c) the changeset specified with -u (if a branch name, this means the
         latest head of that branch)
      d) the changeset specified with -r
      e) the tipmost head specified with -b
      f) the tipmost head specified with the url#branch source syntax
      g) the revision marked with the '@' bookmark, if present
      h) the tipmost head of the default branch
      i) tip

      When cloning from servers that support it, Mercurial may fetch
      pre-generated data from a server-advertised URL. When this is done,
      hooks operating on incoming changesets and changegroups may fire twice,
      once for the bundle fetched from the URL and another for any additional
      data not fetched from this URL. In addition, if an error occurs, the
      repository may be rolled back to a partial clone. This behavior may
      change in future releases. See :hg:`help -e clonebundles` for more.

      Examples:

      - clone a remote repository to a new directory named hg/::

          hg clone https://www.mercurial-scm.org/repo/hg/

      - create a lightweight local clone::

          hg clone project/ project-feature/

      - clone from an absolute path on an ssh server (note double-slash)::

          hg clone ssh://user@server//home/projects/alpha/

      - do a high-speed clone over a LAN while checking out a
        specified version::

          hg clone --uncompressed http://server/repo -u 1.5

      - create a repository without changesets after a particular revision::

          hg clone -r 04e544 experimental/ good/

      - clone (and track) a particular named branch::

          hg clone https://www.mercurial-scm.org/repo/hg/#stable

    See :hg:`help urls` for details on specifying URLs.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    if opts.get('noupdate') and opts.get('updaterev'):
        raise error.Abort(_("cannot specify both --noupdate and --updaterev"))

    r = hg.clone(ui, opts, source, dest,
                 pull=opts.get('pull'),
                 stream=opts.get('uncompressed'),
                 rev=opts.get('rev'),
                 update=opts.get('updaterev') or not opts.get('noupdate'),
                 branch=opts.get('branch'),
                 shareopts=opts.get('shareopts'))

    return r is None

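The checkout-selection cascade a) through i) in the clone help above is a first-match-wins rule list. The function below is a simplified sketch with our own names; it collapses several of the rules (tags, url#branch syntax, local '-u .') into the nearest option.

```python
def pick_checkout(noupdate, updaterev, revs, branches, has_at_bookmark,
                  has_changesets):
    # First applicable rule wins, mirroring the ordered list above.
    if noupdate or not has_changesets:
        return 'null'
    if updaterev:
        return updaterev
    if revs:
        return revs[-1]
    if branches:
        return 'tip of ' + branches[-1]
    if has_at_bookmark:
        return '@'
    return 'tip of default'
```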
@command('^commit|ci',
    [('A', 'addremove', None,
     _('mark new/missing files as added/removed before committing')),
    ('', 'close-branch', None,
     _('mark a branch head as closed')),
    ('', 'amend', None, _('amend the parent of the working directory')),
    ('s', 'secret', None, _('use the secret phase for committing')),
    ('e', 'edit', None, _('invoke editor on commit messages')),
    ('i', 'interactive', None, _('use interactive mode')),
    ] + walkopts + commitopts + commitopts2 + subrepoopts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def commit(ui, repo, *pats, **opts):
    """commit the specified files or all outstanding changes

    Commit changes to the given files into the repository. Unlike a
    centralized SCM, this operation is a local operation. See
    :hg:`push` for a way to actively distribute your changes.

    If a list of files is omitted, all changes reported by :hg:`status`
    will be committed.

    If you are committing the result of a merge, do not provide any
    filenames or -I/-X filters.

    If no commit message is specified, Mercurial starts your
    configured editor where you can enter a message. In case your
    commit fails, you will find a backup of your message in
    ``.hg/last-message.txt``.

    The --close-branch flag can be used to mark the current branch
    head closed. When all heads of a branch are closed, the branch
    will be considered closed and no longer listed.

    The --amend flag can be used to amend the parent of the
    working directory with a new commit that contains the changes
    in the parent in addition to those currently reported by :hg:`status`,
    if there are any. The old commit is stored in a backup bundle in
    ``.hg/strip-backup`` (see :hg:`help bundle` and :hg:`help unbundle`
    on how to restore it).

    Message, user and date are taken from the amended commit unless
    specified. When a message isn't specified on the command line,
    the editor will open with the message of the amended commit.

    It is not possible to amend public changesets (see :hg:`help phases`)
    or changesets that have children.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    Returns 0 on success, 1 if nothing changed.

    .. container:: verbose

      Examples:

      - commit all files ending in .py::

          hg commit --include "set:**.py"

      - commit all non-binary files::

          hg commit --exclude "set:binary()"

      - amend the current commit and set the date to now::

          hg commit --amend --date now
    """
    wlock = lock = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        return _docommit(ui, repo, *pats, **opts)
    finally:
        release(lock, wlock)

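The commit wrapper above shows the lock discipline: the working-directory lock is taken before the store lock, and both are released in reverse order even if the body raises. The classes and names below are our own minimal stand-ins for illustration, not Mercurial's lock objects.

```python
class DummyLock:
    """Records acquire/release events in a shared log, like repo locks."""
    def __init__(self, name, log):
        self.name, self.log = name, log
        log.append('acquire ' + name)
    def release(self):
        self.log.append('release ' + self.name)

def locked_commit(log, body):
    # wlock (working directory) is acquired before lock (store), and
    # the finally clause releases them in reverse acquisition order.
    wlock = lock = None
    try:
        wlock = DummyLock('wlock', log)
        lock = DummyLock('lock', log)
        return body()
    finally:
        for l in (lock, wlock):
            if l is not None:
                l.release()
```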
def _docommit(ui, repo, *pats, **opts):
    if opts.get(r'interactive'):
        opts.pop(r'interactive')
        ret = cmdutil.dorecord(ui, repo, commit, None, False,
                               cmdutil.recordfilter, *pats,
                               **opts)
        # ret can be 0 (no changes to record) or the value returned by
        # commit(), 1 if nothing changed or None on success.
        return 1 if ret == 0 else ret

    opts = pycompat.byteskwargs(opts)
    if opts.get('subrepos'):
        if opts.get('amend'):
            raise error.Abort(_('cannot amend with --subrepos'))
        # Let --subrepos on the command line override config setting.
        ui.setconfig('ui', 'commitsubrepos', True, 'commit')

    cmdutil.checkunfinished(repo, commit=True)

    branch = repo[None].branch()
    bheads = repo.branchheads(branch)

    extra = {}
    if opts.get('close_branch'):
        extra['close'] = 1

        if not bheads:
            raise error.Abort(_('can only close branch heads'))
        elif opts.get('amend'):
            if repo[None].parents()[0].p1().branch() != branch and \
               repo[None].parents()[0].p2().branch() != branch:
                raise error.Abort(_('can only close branch heads'))

    if opts.get('amend'):
        if ui.configbool('ui', 'commitsubrepos'):
            raise error.Abort(_('cannot amend with ui.commitsubrepos enabled'))

        old = repo['.']
        if not old.mutable():
            raise error.Abort(_('cannot amend public changesets'))
        if len(repo[None].parents()) > 1:
            raise error.Abort(_('cannot amend while merging'))
        allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
        if not allowunstable and old.children():
            raise error.Abort(_('cannot amend changeset with children'))

        # Currently histedit gets confused if an amend happens while histedit
        # is in progress. Since we have a checkunfinished command, we are
        # temporarily honoring it.
        #
        # Note: eventually this guard will be removed. Please do not expect
        # this behavior to remain.
        if not obsolete.isenabled(repo, obsolete.createmarkersopt):
            cmdutil.checkunfinished(repo)

        # commitfunc is used only for temporary amend commit by cmdutil.amend
1546 # commitfunc is used only for temporary amend commit by cmdutil.amend
1547 def commitfunc(ui, repo, message, match, opts):
1547 def commitfunc(ui, repo, message, match, opts):
1548 return repo.commit(message,
1548 return repo.commit(message,
1549 opts.get('user') or old.user(),
1549 opts.get('user') or old.user(),
1550 opts.get('date') or old.date(),
1550 opts.get('date') or old.date(),
1551 match,
1551 match,
1552 extra=extra)
1552 extra=extra)
1553
1553
1554 node = cmdutil.amend(ui, repo, commitfunc, old, extra, pats, opts)
1554 node = cmdutil.amend(ui, repo, commitfunc, old, extra, pats, opts)
1555 if node == old.node():
1555 if node == old.node():
1556 ui.status(_("nothing changed\n"))
1556 ui.status(_("nothing changed\n"))
1557 return 1
1557 return 1
1558 else:
1558 else:
1559 def commitfunc(ui, repo, message, match, opts):
1559 def commitfunc(ui, repo, message, match, opts):
1560 overrides = {}
1560 overrides = {}
1561 if opts.get('secret'):
1561 if opts.get('secret'):
1562 overrides[('phases', 'new-commit')] = 'secret'
1562 overrides[('phases', 'new-commit')] = 'secret'
1563
1563
1564 baseui = repo.baseui
1564 baseui = repo.baseui
1565 with baseui.configoverride(overrides, 'commit'):
1565 with baseui.configoverride(overrides, 'commit'):
1566 with ui.configoverride(overrides, 'commit'):
1566 with ui.configoverride(overrides, 'commit'):
1567 editform = cmdutil.mergeeditform(repo[None],
1567 editform = cmdutil.mergeeditform(repo[None],
1568 'commit.normal')
1568 'commit.normal')
1569 editor = cmdutil.getcommiteditor(
1569 editor = cmdutil.getcommiteditor(
1570 editform=editform, **pycompat.strkwargs(opts))
1570 editform=editform, **pycompat.strkwargs(opts))
1571 return repo.commit(message,
1571 return repo.commit(message,
1572 opts.get('user'),
1572 opts.get('user'),
1573 opts.get('date'),
1573 opts.get('date'),
1574 match,
1574 match,
1575 editor=editor,
1575 editor=editor,
1576 extra=extra)
1576 extra=extra)
1577
1577
1578 node = cmdutil.commit(ui, repo, commitfunc, pats, opts)
1578 node = cmdutil.commit(ui, repo, commitfunc, pats, opts)
1579
1579
1580 if not node:
1580 if not node:
1581 stat = cmdutil.postcommitstatus(repo, pats, opts)
1581 stat = cmdutil.postcommitstatus(repo, pats, opts)
1582 if stat[3]:
1582 if stat[3]:
1583 ui.status(_("nothing changed (%d missing files, see "
1583 ui.status(_("nothing changed (%d missing files, see "
1584 "'hg status')\n") % len(stat[3]))
1584 "'hg status')\n") % len(stat[3]))
1585 else:
1585 else:
1586 ui.status(_("nothing changed\n"))
1586 ui.status(_("nothing changed\n"))
1587 return 1
1587 return 1
1588
1588
1589 cmdutil.commitstatus(repo, node, branch, bheads, opts)
1589 cmdutil.commitstatus(repo, node, branch, bheads, opts)
1590
1590
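The interactive path above funnels the result of dorecord() through a small normalization step: 0 from dorecord means "no changes to record", while commit() itself returns 1 when nothing changed and None on success. A minimal standalone sketch of that mapping follows; the helper name is hypothetical and not part of Mercurial's API:

```python
def normalizeret(ret):
    # hypothetical helper illustrating the mapping used above:
    # dorecord() returns 0 when there was nothing to record, but the
    # command's contract is 1 for "nothing changed" and None for success
    return 1 if ret == 0 else ret

# "nothing to record" is reported the same way as "nothing changed"
assert normalizeret(0) == 1
# a successful commit() result (None) passes through untouched
assert normalizeret(None) is None
```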
@command('config|showconfig|debugconfig',
    [('u', 'untrusted', None, _('show untrusted configuration options')),
     ('e', 'edit', None, _('edit user config')),
     ('l', 'local', None, _('edit repository config')),
     ('g', 'global', None, _('edit global config'))] + formatteropts,
    _('[-u] [NAME]...'),
    optionalrepo=True)
def config(ui, repo, *values, **opts):
    """show combined config settings from all hgrc files

    With no arguments, print names and values of all config items.

    With one argument of the form section.name, print just the value
    of that config item.

    With multiple arguments, print names and values of all config
    items with matching section names.

    With --edit, start an editor on the user-level config file. With
    --global, edit the system-wide config file. With --local, edit the
    repository-level config file.

    With --debug, the source (filename and line number) is printed
    for each config item.

    See :hg:`help config` for more information about config files.

    Returns 0 on success, 1 if NAME does not exist.

    """

    opts = pycompat.byteskwargs(opts)
    if opts.get('edit') or opts.get('local') or opts.get('global'):
        if opts.get('local') and opts.get('global'):
            raise error.Abort(_("can't use --local and --global together"))

        if opts.get('local'):
            if not repo:
                raise error.Abort(_("can't use --local outside a repository"))
            paths = [repo.vfs.join('hgrc')]
        elif opts.get('global'):
            paths = rcutil.systemrcpath()
        else:
            paths = rcutil.userrcpath()

        for f in paths:
            if os.path.exists(f):
                break
        else:
            if opts.get('global'):
                samplehgrc = uimod.samplehgrcs['global']
            elif opts.get('local'):
                samplehgrc = uimod.samplehgrcs['local']
            else:
                samplehgrc = uimod.samplehgrcs['user']

            f = paths[0]
            fp = open(f, "w")
            fp.write(samplehgrc)
            fp.close()

        editor = ui.geteditor()
        ui.system("%s \"%s\"" % (editor, f),
                  onerr=error.Abort, errprefix=_("edit failed"),
                  blockedtag='config_edit')
        return
    ui.pager('config')
    fm = ui.formatter('config', opts)
    for t, f in rcutil.rccomponents():
        if t == 'path':
            ui.debug('read config from: %s\n' % f)
        elif t == 'items':
            for section, name, value, source in f:
                ui.debug('set config by: %s\n' % source)
        else:
            raise error.ProgrammingError('unknown rctype: %s' % t)
    untrusted = bool(opts.get('untrusted'))
    if values:
        sections = [v for v in values if '.' not in v]
        items = [v for v in values if '.' in v]
        if len(items) > 1 or items and sections:
            raise error.Abort(_('only one config item permitted'))
    matched = False
    for section, name, value in ui.walkconfig(untrusted=untrusted):
        source = ui.configsource(section, name, untrusted)
        value = pycompat.bytestr(value)
        if fm.isplain():
            source = source or 'none'
            value = value.replace('\n', '\\n')
        entryname = section + '.' + name
        if values:
            for v in values:
                if v == section:
                    fm.startitem()
                    fm.condwrite(ui.debugflag, 'source', '%s: ', source)
                    fm.write('name value', '%s=%s\n', entryname, value)
                    matched = True
                elif v == entryname:
                    fm.startitem()
                    fm.condwrite(ui.debugflag, 'source', '%s: ', source)
                    fm.write('value', '%s\n', value)
                    fm.data(name=entryname)
                    matched = True
        else:
            fm.startitem()
            fm.condwrite(ui.debugflag, 'source', '%s: ', source)
            fm.write('name value', '%s=%s\n', entryname, value)
            matched = True
    fm.end()
    if matched:
        return 0
    return 1

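config() distinguishes section names from item names purely by the presence of a dot, and it permits at most one fully qualified item per invocation. A self-contained sketch of that classification rule; the function name is hypothetical:

```python
def classifyconfigargs(values):
    # a bare name like 'ui' selects a whole section; a dotted name like
    # 'ui.username' selects a single item -- asking for more than one
    # item, or mixing items with sections, is rejected as in config()
    sections = [v for v in values if '.' not in v]
    items = [v for v in values if '.' in v]
    if len(items) > 1 or (items and sections):
        raise ValueError('only one config item permitted')
    return sections, items
```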
@command('copy|cp',
    [('A', 'after', None, _('record a copy that has already occurred')),
    ('f', 'force', None, _('forcibly copy over an existing managed file')),
    ] + walkopts + dryrunopts,
    _('[OPTION]... [SOURCE]... DEST'))
def copy(ui, repo, *pats, **opts):
    """mark files as copied for the next commit

    Mark dest as having copies of source files. If dest is a
    directory, copies are put in that directory. If dest is a file,
    the source must be a single file.

    By default, this command copies the contents of files as they
    exist in the working directory. If invoked with -A/--after, the
    operation is recorded, but no copying is performed.

    This command takes effect with the next commit. To undo a copy
    before that, see :hg:`revert`.

    Returns 0 on success, 1 if errors are encountered.
    """
    opts = pycompat.byteskwargs(opts)
    with repo.wlock(False):
        return cmdutil.copy(ui, repo, pats, opts)

@command('debugcommands', [], _('[COMMAND]'), norepo=True)
def debugcommands(ui, cmd='', *args):
    """list all available commands and options"""
    for cmd, vals in sorted(table.iteritems()):
        cmd = cmd.split('|')[0].strip('^')
        opts = ', '.join([i[1] for i in vals[1]])
        ui.write('%s: %s\n' % (cmd, opts))

@command('debugcomplete',
    [('o', 'options', None, _('show the command options'))],
    _('[-o] CMD'),
    norepo=True)
def debugcomplete(ui, cmd='', **opts):
    """returns the completion list associated with the given command"""

    if opts.get('options'):
        options = []
        otables = [globalopts]
        if cmd:
            aliases, entry = cmdutil.findcmd(cmd, table, False)
            otables.append(entry[1])
        for t in otables:
            for o in t:
                if "(DEPRECATED)" in o[3]:
                    continue
                if o[0]:
                    options.append('-%s' % o[0])
                options.append('--%s' % o[1])
        ui.write("%s\n" % "\n".join(options))
        return

    cmdlist, unused_allcmds = cmdutil.findpossible(cmd, table)
    if ui.verbose:
        cmdlist = [' '.join(c[0]) for c in cmdlist.values()]
    ui.write("%s\n" % "\n".join(sorted(cmdlist)))

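The --options branch of debugcomplete walks one or more option tables, hiding deprecated entries and emitting both the short and long form of each flag. The same loop over a plain list of (short, long, default, help) tuples, as a sketch with a hypothetical function name:

```python
def completionoptions(opttable):
    # mirrors the harvesting loop in debugcomplete above: entries whose
    # help text carries "(DEPRECATED)" are skipped, and a short form is
    # only emitted when the entry actually defines one
    options = []
    for o in opttable:
        if "(DEPRECATED)" in o[3]:
            continue
        if o[0]:
            options.append('-%s' % o[0])
        options.append('--%s' % o[1])
    return options
```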
@command('^diff',
    [('r', 'rev', [], _('revision'), _('REV')),
     ('c', 'change', '', _('change made by revision'), _('REV'))
    ] + diffopts + diffopts2 + walkopts + subrepoopts,
    _('[OPTION]... ([-c REV] | [-r REV1 [-r REV2]]) [FILE]...'),
    inferrepo=True)
def diff(ui, repo, *pats, **opts):
    """diff repository (or selected files)

    Show differences between revisions for the specified files.

    Differences between files are shown using the unified diff format.

    .. note::

       :hg:`diff` may generate unexpected results for merges, as it will
       default to comparing against the working directory's first
       parent changeset if no revisions are specified.

    When two revision arguments are given, then changes are shown
    between those revisions. If only one revision is specified then
    that revision is compared to the working directory, and, when no
    revisions are specified, the working directory files are compared
    to its first parent.

    Alternatively you can specify -c/--change with a revision to see
    the changes in that changeset relative to its first parent.

    Without the -a/--text option, diff will avoid generating diffs of
    files it detects as binary. With -a, diff will generate a diff
    anyway, probably with undesirable results.

    Use the -g/--git option to generate diffs in the git extended diff
    format. For more information, read :hg:`help diffs`.

    .. container:: verbose

      Examples:

      - compare a file in the current working directory to its parent::

          hg diff foo.c

      - compare two historical versions of a directory, with rename info::

          hg diff --git -r 1.0:1.2 lib/

      - get change stats relative to the last change on some date::

          hg diff --stat -r "date('may 2')"

      - diff all newly-added files that contain a keyword::

          hg diff "set:added() and grep(GNU)"

      - compare a revision and its parents::

          hg diff -c 9353         # compare against first parent
          hg diff -r 9353^:9353   # same using revset syntax
          hg diff -r 9353^2:9353  # compare against the second parent

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    revs = opts.get('rev')
    change = opts.get('change')
    stat = opts.get('stat')
    reverse = opts.get('reverse')

    if revs and change:
        msg = _('cannot specify --rev and --change at the same time')
        raise error.Abort(msg)
    elif change:
        node2 = scmutil.revsingle(repo, change, None).node()
        node1 = repo[node2].p1().node()
    else:
        node1, node2 = scmutil.revpair(repo, revs)

    if reverse:
        node1, node2 = node2, node1

    diffopts = patch.diffallopts(ui, opts)
    m = scmutil.match(repo[node2], pats, opts)
    ui.pager('diff')
    cmdutil.diffordiffstat(ui, repo, diffopts, node1, node2, m, stat=stat,
                           listsubrepos=opts.get('subrepos'),
                           root=opts.get('root'))

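diff() resolves its two comparison endpoints before doing any work: --change pins the pair to a revision and its first parent, otherwise a rev pair (with working-directory defaults) is used, and --reverse simply swaps the two. A repo-free sketch of that selection, where both the function name and the parent_of callback are hypothetical stand-ins for the real scmutil/repo lookups:

```python
def diffendpoints(revs, change, reverse, parent_of):
    # parent_of stands in for a repo[node].p1() lookup
    if revs and change:
        raise ValueError('cannot specify --rev and --change at the same time')
    if change:
        node1, node2 = parent_of(change), change
    elif len(revs) == 2:
        node1, node2 = revs[0], revs[1]
    elif len(revs) == 1:
        node1, node2 = revs[0], None   # None means the working directory
    else:
        node1, node2 = '.', None       # working dir vs. its first parent
    if reverse:
        node1, node2 = node2, node1
    return node1, node2
```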
@command('^export',
    [('o', 'output', '',
     _('print output to file with formatted name'), _('FORMAT')),
    ('', 'switch-parent', None, _('diff against the second parent')),
    ('r', 'rev', [], _('revisions to export'), _('REV')),
    ] + diffopts,
    _('[OPTION]... [-o OUTFILESPEC] [-r] [REV]...'))
def export(ui, repo, *changesets, **opts):
    """dump the header and diffs for one or more changesets

    Print the changeset header and diffs for one or more revisions.
    If no revision is given, the parent of the working directory is used.

    The information shown in the changeset header is: author, date,
    branch name (if non-default), changeset hash, parent(s) and commit
    comment.

    .. note::

       :hg:`export` may generate unexpected diff output for merge
       changesets, as it will compare the merge changeset against its
       first parent only.

    Output may be to a file, in which case the name of the file is
    given using a format string. The formatting rules are as follows:

    :``%%``: literal "%" character
    :``%H``: changeset hash (40 hexadecimal digits)
    :``%N``: number of patches being generated
    :``%R``: changeset revision number
    :``%b``: basename of the exporting repository
    :``%h``: short-form changeset hash (12 hexadecimal digits)
    :``%m``: first line of the commit message (only alphanumeric characters)
    :``%n``: zero-padded sequence number, starting at 1
    :``%r``: zero-padded changeset revision number

    Without the -a/--text option, export will avoid generating diffs
    of files it detects as binary. With -a, export will generate a
    diff anyway, probably with undesirable results.

    Use the -g/--git option to generate diffs in the git extended diff
    format. See :hg:`help diffs` for more information.

    With the --switch-parent option, the diff will be against the
    second parent. It can be useful to review a merge.

    .. container:: verbose

      Examples:

      - use export and import to transplant a bugfix to the current
        branch::

          hg export -r 9353 | hg import -

      - export all the changesets between two revisions to a file with
        rename information::

          hg export --git -r 123:150 > changes.txt

      - split outgoing changes into a series of patches with
        descriptive names::

          hg export -r "outgoing()" -o "%n-%m.patch"

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    changesets += tuple(opts.get('rev', []))
    if not changesets:
        changesets = ['.']
    revs = scmutil.revrange(repo, changesets)
    if not revs:
        raise error.Abort(_("export requires at least one changeset"))
    if len(revs) > 1:
        ui.note(_('exporting patches:\n'))
    else:
        ui.note(_('exporting patch:\n'))
    ui.pager('export')
    cmdutil.export(repo, revs, fntemplate=opts.get('output'),
                   switch_parent=opts.get('switch_parent'),
                   opts=patch.diffallopts(ui, opts))

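The output format keys documented above are expanded per patch inside cmdutil. As an illustration only, here is a tiny, hypothetical expander covering just ``%%``, ``%N`` and ``%n``; it assumes ``%n`` is zero-padded to the width of the total patch count, and it is not the real implementation (which supports the full key table):

```python
def expandpatchname(fmt, seqno, total):
    # minimal, hypothetical expander for a subset of the export format
    # keys; literal percents are protected first so '%%n' is not
    # mistaken for a '%n' sequence-number key
    pad = len(str(total))
    return (fmt.replace('%%', '\0')
               .replace('%N', str(total))
               .replace('%n', str(seqno).zfill(pad))
               .replace('\0', '%'))
```

For example, ``expandpatchname('%n-of-%N.patch', 3, 12)`` yields ``'03-of-12.patch'``, matching the descriptive-names example in the docstring.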
1937 @command('files',
1937 @command('files',
1938 [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
1938 [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
1939 ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
1939 ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
1940 ] + walkopts + formatteropts + subrepoopts,
1940 ] + walkopts + formatteropts + subrepoopts,
1941 _('[OPTION]... [FILE]...'))
1941 _('[OPTION]... [FILE]...'))
1942 def files(ui, repo, *pats, **opts):
1942 def files(ui, repo, *pats, **opts):
1943 """list tracked files
1943 """list tracked files
1944
1944
1945 Print files under Mercurial control in the working directory or
1945 Print files under Mercurial control in the working directory or
1946 specified revision for given files (excluding removed files).
1946 specified revision for given files (excluding removed files).
1947 Files can be specified as filenames or filesets.
1947 Files can be specified as filenames or filesets.
1948
1948
    If no files are given to match, this command prints the names
    of all files under Mercurial control.

    .. container:: verbose

      Examples:

      - list all files under the current directory::

          hg files .

      - show sizes and flags for the current revision::

          hg files -vr .

      - list all files named README::

          hg files -I "**/README"

      - list all binary files::

          hg files "set:binary()"

      - find files containing a regular expression::

          hg files "set:grep('bob')"

      - search tracked file contents with xargs and grep::

          hg files -0 | xargs -0 grep foo

    See :hg:`help patterns` and :hg:`help filesets` for more information
    on specifying file patterns.

    Returns 0 if a match is found, 1 otherwise.

    """

    opts = pycompat.byteskwargs(opts)
    ctx = scmutil.revsingle(repo, opts.get('rev'), None)

    end = '\n'
    if opts.get('print0'):
        end = '\0'
    fmt = '%s' + end

    m = scmutil.match(ctx, pats, opts)
    ui.pager('files')
    with ui.formatter('files', opts) as fm:
        return cmdutil.files(ui, ctx, m, fm, fmt, opts.get('subrepos'))

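The `-0/--print0` handling above only switches the record terminator from a newline to NUL before each filename is formatted, which is what makes the `xargs -0` pipeline in the docstring safe for filenames containing spaces or newlines. A minimal standalone sketch of that logic (`format_files` is a hypothetical helper, not part of Mercurial):

```python
# Format a list of paths the way `hg files` does: one record per path,
# terminated by '\n' normally, or by NUL when print0 is requested.
def format_files(paths, print0=False):
    end = '\0' if print0 else '\n'
    fmt = '%s' + end
    return ''.join(fmt % p for p in paths)

print(repr(format_files(['a.txt', 'b.txt'])))        # newline-terminated records
print(repr(format_files(['a.txt', 'b.txt'], True)))  # NUL-terminated, xargs -0 friendly
```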
@command('^forget', walkopts, _('[OPTION]... FILE...'), inferrepo=True)
def forget(ui, repo, *pats, **opts):
    """forget the specified files on the next commit

    Mark the specified files so they will no longer be tracked
    after the next commit.

    This only removes files from the current branch, not from the
    entire project history, and it does not delete them from the
    working directory.

    To delete the file from the working directory, see :hg:`remove`.

    To undo a forget before the next commit, see :hg:`add`.

    .. container:: verbose

      Examples:

      - forget newly-added binary files::

          hg forget "set:added() and binary()"

      - forget files that would be excluded by .hgignore::

          hg forget "set:hgignore()"

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    if not pats:
        raise error.Abort(_('no files specified'))

    m = scmutil.match(repo[None], pats, opts)
    rejected = cmdutil.forget(ui, repo, m, prefix="", explicitonly=False)[0]
    return rejected and 1 or 0

@command(
    'graft',
    [('r', 'rev', [], _('revisions to graft'), _('REV')),
     ('c', 'continue', False, _('resume interrupted graft')),
     ('e', 'edit', False, _('invoke editor on commit messages')),
     ('', 'log', None, _('append graft info to log message')),
     ('f', 'force', False, _('force graft')),
     ('D', 'currentdate', False,
      _('record the current date as commit date')),
     ('U', 'currentuser', False,
      _('record the current user as committer'))]
    + commitopts2 + mergetoolopts + dryrunopts,
    _('[OPTION]... [-r REV]... REV...'))
def graft(ui, repo, *revs, **opts):
    '''copy changes from other branches onto the current branch

    This command uses Mercurial's merge logic to copy individual
    changes from other branches without merging branches in the
    history graph. This is sometimes known as 'backporting' or
    'cherry-picking'. By default, graft will copy the user, date, and
    description from the source changesets.

    Changesets that are ancestors of the current revision, that have
    already been grafted, or that are merges will be skipped.

    If --log is specified, log messages will have a comment appended
    of the form::

      (grafted from CHANGESETHASH)

    If --force is specified, revisions will be grafted even if they
    are already ancestors of, or have been grafted to, the destination.
    This is useful when the revisions have since been backed out.

    If a graft merge results in conflicts, the graft process is
    interrupted so that the current merge can be manually resolved.
    Once all conflicts are addressed, the graft process can be
    continued with the -c/--continue option.

    .. note::

       The -c/--continue option does not reapply earlier options, except
       for --force.

    .. container:: verbose

      Examples:

      - copy a single change to the stable branch and edit its description::

          hg update stable
          hg graft --edit 9393

      - graft a range of changesets with one exception, updating dates::

          hg graft -D "2085::2093 and not 2091"

      - continue a graft after resolving conflicts::

          hg graft -c

      - show the source of a grafted changeset::

          hg log --debug -r .

      - show revisions sorted by date::

          hg log -r "sort(all(), date)"

    See :hg:`help revisions` for more about specifying revisions.

    Returns 0 on successful completion.
    '''
    with repo.wlock():
        return _dograft(ui, repo, *revs, **opts)

def _dograft(ui, repo, *revs, **opts):
    opts = pycompat.byteskwargs(opts)
    if revs and opts.get('rev'):
        ui.warn(_('warning: inconsistent use of --rev might give unexpected '
                  'revision ordering!\n'))

    revs = list(revs)
    revs.extend(opts.get('rev'))

    if not opts.get('user') and opts.get('currentuser'):
        opts['user'] = ui.username()
    if not opts.get('date') and opts.get('currentdate'):
        opts['date'] = "%d %d" % util.makedate()

    editor = cmdutil.getcommiteditor(editform='graft',
                                     **pycompat.strkwargs(opts))

    cont = False
    if opts.get('continue'):
        cont = True
        if revs:
            raise error.Abort(_("can't specify --continue and revisions"))
        # read in unfinished revisions
        try:
            nodes = repo.vfs.read('graftstate').splitlines()
            revs = [repo[node].rev() for node in nodes]
        except IOError as inst:
            if inst.errno != errno.ENOENT:
                raise
            cmdutil.wrongtooltocontinue(repo, _('graft'))
    else:
        cmdutil.checkunfinished(repo)
        cmdutil.bailifchanged(repo)
        if not revs:
            raise error.Abort(_('no revisions specified'))
        revs = scmutil.revrange(repo, revs)

    skipped = set()
    # check for merges
    for rev in repo.revs('%ld and merge()', revs):
        ui.warn(_('skipping ungraftable merge revision %s\n') % rev)
        skipped.add(rev)
    revs = [r for r in revs if r not in skipped]
    if not revs:
        return -1

    # Don't check in the --continue case, in effect retaining --force across
    # --continues. That's because without --force, any revisions we decided to
    # skip would have been filtered out here, so they wouldn't have made their
    # way to the graftstate. With --force, any revisions we would have otherwise
    # skipped would not have been filtered out, and if they hadn't been applied
    # already, they'd have been in the graftstate.
    if not (cont or opts.get('force')):
        # check for ancestors of dest branch
        crev = repo['.'].rev()
        ancestors = repo.changelog.ancestors([crev], inclusive=True)
        # XXX make this lazy in the future
        # don't mutate while iterating, create a copy
        for rev in list(revs):
            if rev in ancestors:
                ui.warn(_('skipping ancestor revision %d:%s\n') %
                        (rev, repo[rev]))
                # XXX remove on list is slow
                revs.remove(rev)
        if not revs:
            return -1

        # analyze revs for earlier grafts
        ids = {}
        for ctx in repo.set("%ld", revs):
            ids[ctx.hex()] = ctx.rev()
            n = ctx.extra().get('source')
            if n:
                ids[n] = ctx.rev()

        # check ancestors for earlier grafts
        ui.debug('scanning for duplicate grafts\n')

        # The only changesets we can be sure don't contain grafts of any
        # revs are the ones that are common ancestors of *all* revs:
        for rev in repo.revs('only(%d,ancestor(%ld))', crev, revs):
            ctx = repo[rev]
            n = ctx.extra().get('source')
            if n in ids:
                try:
                    r = repo[n].rev()
                except error.RepoLookupError:
                    r = None
                if r in revs:
                    ui.warn(_('skipping revision %d:%s '
                              '(already grafted to %d:%s)\n')
                            % (r, repo[r], rev, ctx))
                    revs.remove(r)
                elif ids[n] in revs:
                    if r is None:
                        ui.warn(_('skipping already grafted revision %d:%s '
                                  '(%d:%s also has unknown origin %s)\n')
                                % (ids[n], repo[ids[n]], rev, ctx, n[:12]))
                    else:
                        ui.warn(_('skipping already grafted revision %d:%s '
                                  '(%d:%s also has origin %d:%s)\n')
                                % (ids[n], repo[ids[n]], rev, ctx, r, n[:12]))
                    revs.remove(ids[n])
            elif ctx.hex() in ids:
                r = ids[ctx.hex()]
                ui.warn(_('skipping already grafted revision %d:%s '
                          '(was grafted from %d:%s)\n') %
                        (r, repo[r], rev, ctx))
                revs.remove(r)
        if not revs:
            return -1

    for pos, ctx in enumerate(repo.set("%ld", revs)):
        desc = '%d:%s "%s"' % (ctx.rev(), ctx,
                               ctx.description().split('\n', 1)[0])
        names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
        if names:
            desc += ' (%s)' % ' '.join(names)
        ui.status(_('grafting %s\n') % desc)
        if opts.get('dry_run'):
            continue

        source = ctx.extra().get('source')
        extra = {}
        if source:
            extra['source'] = source
            extra['intermediate-source'] = ctx.hex()
        else:
            extra['source'] = ctx.hex()
        user = ctx.user()
        if opts.get('user'):
            user = opts['user']
        date = ctx.date()
        if opts.get('date'):
            date = opts['date']
        message = ctx.description()
        if opts.get('log'):
            message += '\n(grafted from %s)' % ctx.hex()

        # we don't merge the first commit when continuing
        if not cont:
            # perform the graft merge with p1(rev) as 'ancestor'
            try:
                # ui.forcemerge is an internal variable, do not document
                repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                                  'graft')
                stats = mergemod.graft(repo, ctx, ctx.p1(),
                                       ['local', 'graft'])
            finally:
                repo.ui.setconfig('ui', 'forcemerge', '', 'graft')
            # report any conflicts
            if stats and stats[3] > 0:
                # write out state for --continue
                nodelines = [repo[rev].hex() + "\n" for rev in revs[pos:]]
                repo.vfs.write('graftstate', ''.join(nodelines))
                extra = ''
                if opts.get('user'):
                    extra += ' --user %s' % util.shellquote(opts['user'])
                if opts.get('date'):
                    extra += ' --date %s' % util.shellquote(opts['date'])
                if opts.get('log'):
                    extra += ' --log'
                hint = _("use 'hg resolve' and 'hg graft --continue%s'") % extra
                raise error.Abort(
                    _("unresolved conflicts, can't continue"),
                    hint=hint)
        else:
            cont = False

        # commit
        node = repo.commit(text=message, user=user,
                           date=date, extra=extra, editor=editor)
        if node is None:
            ui.warn(
                _('note: graft of %d:%s created no changes to commit\n') %
                (ctx.rev(), ctx))

    # remove state when we complete successfully
    if not opts.get('dry_run'):
        repo.vfs.unlinkpath('graftstate', ignoremissing=True)

    return 0

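On conflict, the code above persists the not-yet-grafted nodes to the `graftstate` file, one hex node per line, and `--continue` reads them back with `splitlines()`. A toy roundtrip of that on-disk format (plain strings standing in for real node hashes; the helper names are illustrative, not Mercurial API):

```python
# Serialize pending nodes one-per-line, as graft does when writing
# graftstate, then parse them back the way `hg graft --continue` does.
def write_graftstate(nodes):
    return ''.join(n + '\n' for n in nodes)

def read_graftstate(data):
    return data.splitlines()

pending = ['b82615af0ab1', '7c2fd3b9020c']
assert read_graftstate(write_graftstate(pending)) == pending
```

The line-per-node layout is what lets the conflict handler resume from `revs[pos:]`: everything already grafted has simply been dropped from the front of the list.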
@command('grep',
    [('0', 'print0', None, _('end fields with NUL')),
     ('', 'all', None, _('print all revisions that match')),
     ('a', 'text', None, _('treat all files as text')),
     ('f', 'follow', None,
      _('follow changeset history,'
        ' or file history across copies and renames')),
     ('i', 'ignore-case', None, _('ignore case when matching')),
     ('l', 'files-with-matches', None,
      _('print only filenames and revisions that match')),
     ('n', 'line-number', None, _('print matching line numbers')),
     ('r', 'rev', [],
      _('only search files changed within revision range'), _('REV')),
     ('u', 'user', None, _('list the author (long with -v)')),
     ('d', 'date', None, _('list the date (short with -q)')),
    ] + formatteropts + walkopts,
    _('[OPTION]... PATTERN [FILE]...'),
    inferrepo=True)
def grep(ui, repo, pattern, *pats, **opts):
    """search revision history for a pattern in specified files

    Search revision history for a regular expression in the specified
    files or the entire project.

    By default, grep prints the most recent revision number for each
    file in which it finds a match. To get it to print every revision
    that contains a change in match status ("-" for a match that becomes
    a non-match, or "+" for a non-match that becomes a match), use the
    --all flag.

    PATTERN can be any Python (roughly Perl-compatible) regular
    expression.

    If no FILEs are specified (and -f/--follow isn't set), all files in
    the repository are searched, including those that don't exist in the
    current branch or have been deleted in a prior changeset.

    Returns 0 if a match is found, 1 otherwise.
    """
    opts = pycompat.byteskwargs(opts)
    reflags = re.M
    if opts.get('ignore_case'):
        reflags |= re.I
    try:
        regexp = util.re.compile(pattern, reflags)
    except re.error as inst:
        ui.warn(_("grep: invalid match pattern: %s\n") % inst)
        return 1
    sep, eol = ':', '\n'
    if opts.get('print0'):
        sep = eol = '\0'

    getfile = util.lrucachefunc(repo.file)

    def matchlines(body):
        begin = 0
        linenum = 0
        while begin < len(body):
            match = regexp.search(body, begin)
            if not match:
                break
            mstart, mend = match.span()
            linenum += body.count('\n', begin, mstart) + 1
            lstart = body.rfind('\n', begin, mstart) + 1 or begin
            begin = body.find('\n', mend) + 1 or len(body) + 1
            lend = begin - 1
            yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]

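matchlines above scans the file body with `regexp.search` and derives the 1-based line number of each hit by counting newlines between the previous scan position and the match start; `rfind`/`find` then recover the enclosing line so column offsets can be reported relative to it. The same scan reduced to a standalone function (a sketch using the stdlib `re` module directly, rather than Mercurial's `util.re` wrapper):

```python
import re

def match_lines(body, pattern):
    """Return (linenum, colstart, colend, line) for each matching line,
    mirroring the incremental scan in grep's matchlines()."""
    regexp = re.compile(pattern)
    results = []
    begin = 0
    linenum = 0
    while begin < len(body):
        match = regexp.search(body, begin)
        if not match:
            break
        mstart, mend = match.span()
        # count newlines since the last match to advance the line number
        linenum += body.count('\n', begin, mstart) + 1
        lstart = body.rfind('\n', begin, mstart) + 1 or begin
        # jump past the end of the matching line before searching again
        begin = body.find('\n', mend) + 1 or len(body) + 1
        lend = begin - 1
        results.append((linenum, mstart - lstart, mend - lstart,
                        body[lstart:lend]))
    return results

# Yields one record per matching line; further matches on the same line
# are recovered later by linestate.findpos().
print(match_lines("foo\nbar foo\n", "foo"))
```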
    class linestate(object):
        def __init__(self, line, linenum, colstart, colend):
            self.line = line
            self.linenum = linenum
            self.colstart = colstart
            self.colend = colend

        def __hash__(self):
            return hash((self.linenum, self.line))

        def __eq__(self, other):
            return self.line == other.line

        def findpos(self):
            """Iterate all (start, end) indices of matches"""
            yield self.colstart, self.colend
            p = self.colend
            while p < len(self.line):
                m = regexp.search(self.line, p)
                if not m:
                    break
                yield m.span()
                p = m.end()

    matches = {}
    copies = {}
    def grepbody(fn, rev, body):
        matches[rev].setdefault(fn, [])
        m = matches[rev][fn]
        for lnum, cstart, cend, line in matchlines(body):
            s = linestate(line, lnum, cstart, cend)
            m.append(s)

    def difflinestates(a, b):
        sm = difflib.SequenceMatcher(None, a, b)
        for tag, alo, ahi, blo, bhi in sm.get_opcodes():
            if tag == 'insert':
                for i in xrange(blo, bhi):
                    yield ('+', b[i])
            elif tag == 'delete':
                for i in xrange(alo, ahi):
                    yield ('-', a[i])
            elif tag == 'replace':
                for i in xrange(alo, ahi):
                    yield ('-', a[i])
                for i in xrange(blo, bhi):
                    yield ('+', b[i])

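difflinestates above classifies each opcode from `difflib.SequenceMatcher` into '-'/'+' line events; this is what drives the match-status output of `hg grep --all` (a match disappearing between a parent and child revision yields '-', one appearing yields '+'). A self-contained version of the same idea, using plain strings in place of the linestate objects:

```python
import difflib

def diff_line_states(a, b):
    """Yield ('-', line) / ('+', line) events between two line lists,
    the way grep --all diffs parent and child match states."""
    sm = difflib.SequenceMatcher(None, a, b)
    for tag, alo, ahi, blo, bhi in sm.get_opcodes():
        if tag == 'insert':
            for i in range(blo, bhi):
                yield ('+', b[i])
        elif tag == 'delete':
            for i in range(alo, ahi):
                yield ('-', a[i])
        elif tag == 'replace':
            # a replace is reported as all removals, then all additions
            for i in range(alo, ahi):
                yield ('-', a[i])
            for i in range(blo, bhi):
                yield ('+', b[i])
        # 'equal' opcodes produce no events

print(list(diff_line_states(['x'], ['y'])))       # replaced line: '-' then '+'
print(list(diff_line_states(['a', 'b'], ['a'])))  # deleted line: '-' only
```

Note that `'equal'` spans are deliberately skipped: lines whose match status did not change between the two revisions produce no output at all.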
    def display(fm, fn, ctx, pstates, states):
        rev = ctx.rev()
        if fm.isplain():
            formatuser = ui.shortuser
        else:
            formatuser = str
        if ui.quiet:
            datefmt = '%Y-%m-%d'
        else:
            datefmt = '%a %b %d %H:%M:%S %Y %1%2'
        found = False
        @util.cachefunc
        def binary():
            flog = getfile(fn)
            return util.binary(flog.read(ctx.filenode(fn)))

        fieldnamemap = {'filename': 'file', 'linenumber': 'line_number'}
        if opts.get('all'):
            iter = difflinestates(pstates, states)
        else:
            iter = [('', l) for l in states]
        for change, l in iter:
            fm.startitem()
            fm.data(node=fm.hexfunc(ctx.node()))
            cols = [
                ('filename', fn, True),
                ('rev', rev, True),
                ('linenumber', l.linenum, opts.get('line_number')),
            ]
            if opts.get('all'):
                cols.append(('change', change, True))
            cols.extend([
                ('user', formatuser(ctx.user()), opts.get('user')),
                ('date', fm.formatdate(ctx.date(), datefmt), opts.get('date')),
            ])
            lastcol = next(name for name, data, cond in reversed(cols) if cond)
            for name, data, cond in cols:
                field = fieldnamemap.get(name, name)
                fm.condwrite(cond, field, '%s', data, label='grep.%s' % name)
                if cond and name != lastcol:
                    fm.plain(sep, label='grep.sep')
            if not opts.get('files_with_matches'):
                fm.plain(sep, label='grep.sep')
                if not opts.get('text') and binary():
                    fm.plain(_(" Binary file matches"))
                else:
                    displaymatches(fm.nested('texts'), l)
            fm.plain(eol)
            found = True
            if opts.get('files_with_matches'):
                break
        return found

    def displaymatches(fm, l):
        p = 0
        for s, e in l.findpos():
            if p < s:
                fm.startitem()
                fm.write('text', '%s', l.line[p:s])
                fm.data(matched=False)
            fm.startitem()
            fm.write('text', '%s', l.line[s:e], label='grep.match')
            fm.data(matched=True)
            p = e
        if p < len(l.line):
            fm.startitem()
            fm.write('text', '%s', l.line[p:])
            fm.data(matched=False)
        fm.end()

    skip = {}
    revfiles = {}
    matchfn = scmutil.match(repo[None], pats, opts)
    found = False
    follow = opts.get('follow')

    def prep(ctx, fns):
        rev = ctx.rev()
        pctx = ctx.p1()
        parent = pctx.rev()
        matches.setdefault(rev, {})
        matches.setdefault(parent, {})
        files = revfiles.setdefault(rev, [])
        for fn in fns:
            flog = getfile(fn)
            try:
                fnode = ctx.filenode(fn)
            except error.LookupError:
                continue

            copied = flog.renamed(fnode)
            copy = follow and copied and copied[0]
            if copy:
                copies.setdefault(rev, {})[fn] = copy
            if fn in skip:
                if copy:
                    skip[copy] = True
                continue
            files.append(fn)

            if fn not in matches[rev]:
                grepbody(fn, rev, flog.read(fnode))

            pfn = copy or fn
            if pfn not in matches[parent]:
                try:
                    fnode = pctx.filenode(pfn)
                    grepbody(pfn, parent, flog.read(fnode))
2520 grepbody(pfn, parent, flog.read(fnode))
2521 except error.LookupError:
2521 except error.LookupError:
2522 pass
2522 pass
2523
2523
2524 ui.pager('grep')
2524 ui.pager('grep')
2525 fm = ui.formatter('grep', opts)
2525 fm = ui.formatter('grep', opts)
2526 for ctx in cmdutil.walkchangerevs(repo, matchfn, opts, prep):
2526 for ctx in cmdutil.walkchangerevs(repo, matchfn, opts, prep):
2527 rev = ctx.rev()
2527 rev = ctx.rev()
2528 parent = ctx.p1().rev()
2528 parent = ctx.p1().rev()
2529 for fn in sorted(revfiles.get(rev, [])):
2529 for fn in sorted(revfiles.get(rev, [])):
2530 states = matches[rev][fn]
2530 states = matches[rev][fn]
2531 copy = copies.get(rev, {}).get(fn)
2531 copy = copies.get(rev, {}).get(fn)
2532 if fn in skip:
2532 if fn in skip:
2533 if copy:
2533 if copy:
2534 skip[copy] = True
2534 skip[copy] = True
2535 continue
2535 continue
2536 pstates = matches.get(parent, {}).get(copy or fn, [])
2536 pstates = matches.get(parent, {}).get(copy or fn, [])
2537 if pstates or states:
2537 if pstates or states:
2538 r = display(fm, fn, ctx, pstates, states)
2538 r = display(fm, fn, ctx, pstates, states)
2539 found = found or r
2539 found = found or r
2540 if r and not opts.get('all'):
2540 if r and not opts.get('all'):
2541 skip[fn] = True
2541 skip[fn] = True
2542 if copy:
2542 if copy:
2543 skip[copy] = True
2543 skip[copy] = True
2544 del matches[rev]
2544 del matches[rev]
2545 del revfiles[rev]
2545 del revfiles[rev]
2546 fm.end()
2546 fm.end()
2547
2547
2548 return not found
2548 return not found
2549
2549
@command('heads',
    [('r', 'rev', '',
     _('show only heads which are descendants of STARTREV'), _('STARTREV')),
    ('t', 'topo', False, _('show topological heads only')),
    ('a', 'active', False, _('show active branchheads only (DEPRECATED)')),
    ('c', 'closed', False, _('show normal and closed branch heads')),
    ] + templateopts,
    _('[-ct] [-r STARTREV] [REV]...'))
def heads(ui, repo, *branchrevs, **opts):
    """show branch heads

    With no arguments, show all open branch heads in the repository.
    Branch heads are changesets that have no descendants on the
    same branch. They are where development generally takes place and
    are the usual targets for update and merge operations.

    If one or more REVs are given, only open branch heads on the
    branches associated with the specified changesets are shown. This
    means that you can use :hg:`heads .` to see the heads on the
    currently checked-out branch.

    If -c/--closed is specified, also show branch heads marked closed
    (see :hg:`commit --close-branch`).

    If STARTREV is specified, only those heads that are descendants of
    STARTREV will be displayed.

    If -t/--topo is specified, named branch mechanics will be ignored and only
    topological heads (changesets with no children) will be shown.

    Returns 0 if matching heads are found, 1 if not.
    """

    opts = pycompat.byteskwargs(opts)
    start = None
    if 'rev' in opts:
        start = scmutil.revsingle(repo, opts['rev'], None).node()

    if opts.get('topo'):
        heads = [repo[h] for h in repo.heads(start)]
    else:
        heads = []
        for branch in repo.branchmap():
            heads += repo.branchheads(branch, start, opts.get('closed'))
        heads = [repo[h] for h in heads]

    if branchrevs:
        branches = set(repo[br].branch() for br in branchrevs)
        heads = [h for h in heads if h.branch() in branches]

    if opts.get('active') and branchrevs:
        dagheads = repo.heads(start)
        heads = [h for h in heads if h.node() in dagheads]

    if branchrevs:
        haveheads = set(h.branch() for h in heads)
        if branches - haveheads:
            headless = ', '.join(b for b in branches - haveheads)
            msg = _('no open branch heads found on branches %s')
            if opts.get('rev'):
                msg += _(' (started at %s)') % opts['rev']
            ui.warn((msg + '\n') % headless)

    if not heads:
        return 1

    ui.pager('heads')
    heads = sorted(heads, key=lambda x: -x.rev())
    displayer = cmdutil.show_changeset(ui, repo, opts)
    for ctx in heads:
        displayer.show(ctx)
    displayer.close()

@command('help',
    [('e', 'extension', None, _('show only help for extensions')),
    ('c', 'command', None, _('show only help for commands')),
    ('k', 'keyword', None, _('show topics matching keyword')),
    ('s', 'system', [], _('show help for specific platform(s)')),
    ],
    _('[-ecks] [TOPIC]'),
    norepo=True)
def help_(ui, name=None, **opts):
    """show help for a given topic or a help overview

    With no arguments, print a list of commands with short help messages.

    Given a topic, extension, or command name, print help for that
    topic.

    Returns 0 if successful.
    """

    keep = opts.get(r'system') or []
    if len(keep) == 0:
        if pycompat.sysplatform.startswith('win'):
            keep.append('windows')
        elif pycompat.sysplatform == 'OpenVMS':
            keep.append('vms')
        elif pycompat.sysplatform == 'plan9':
            keep.append('plan9')
        else:
            keep.append('unix')
        keep.append(pycompat.sysplatform.lower())
    if ui.verbose:
        keep.append('verbose')

    commands = sys.modules[__name__]
    formatted = help.formattedhelp(ui, commands, name, keep=keep, **opts)
    ui.pager('help')
    ui.write(formatted)


@command('identify|id',
    [('r', 'rev', '',
     _('identify the specified revision'), _('REV')),
    ('n', 'num', None, _('show local revision number')),
    ('i', 'id', None, _('show global revision id')),
    ('b', 'branch', None, _('show branch')),
    ('t', 'tags', None, _('show tags')),
    ('B', 'bookmarks', None, _('show bookmarks')),
    ] + remoteopts,
    _('[-nibtB] [-r REV] [SOURCE]'),
    optionalrepo=True)
def identify(ui, repo, source=None, rev=None,
             num=None, id=None, branch=None, tags=None, bookmarks=None, **opts):
    """identify the working directory or specified revision

    Print a summary identifying the repository state at REV using one or
    two parent hash identifiers, followed by a "+" if the working
    directory has uncommitted changes, the branch name (if not default),
    a list of tags, and a list of bookmarks.

    When REV is not given, print a summary of the current state of the
    repository.

    Specifying a path to a repository root or Mercurial bundle will
    cause lookup to operate on that repository/bundle.

    .. container:: verbose

      Examples:

      - generate a build identifier for the working directory::

          hg id --id > build-id.dat

      - find the revision corresponding to a tag::

          hg id -n -r 1.3

      - check the most recent revision of a remote repository::

          hg id -r tip https://www.mercurial-scm.org/repo/hg/

    See :hg:`log` for generating more information about specific revisions,
    including full hash identifiers.

    Returns 0 if successful.
    """

    opts = pycompat.byteskwargs(opts)
    if not repo and not source:
        raise error.Abort(_("there is no Mercurial repository here "
                            "(.hg not found)"))

    if ui.debugflag:
        hexfunc = hex
    else:
        hexfunc = short
    default = not (num or id or branch or tags or bookmarks)
    output = []
    revs = []

    if source:
        source, branches = hg.parseurl(ui.expandpath(source))
        peer = hg.peer(repo or ui, opts, source) # only pass ui when no repo
        repo = peer.local()
        revs, checkout = hg.addbranchrevs(repo, peer, branches, None)

    if not repo:
        if num or branch or tags:
            raise error.Abort(
                _("can't query remote revision number, branch, or tags"))
        if not rev and revs:
            rev = revs[0]
        if not rev:
            rev = "tip"

        remoterev = peer.lookup(rev)
        if default or id:
            output = [hexfunc(remoterev)]

        def getbms():
            bms = []

            if 'bookmarks' in peer.listkeys('namespaces'):
                hexremoterev = hex(remoterev)
                bms = [bm for bm, bmr in peer.listkeys('bookmarks').iteritems()
                       if bmr == hexremoterev]

            return sorted(bms)

        if bookmarks:
            output.extend(getbms())
        elif default and not ui.quiet:
            # multiple bookmarks for a single parent separated by '/'
            bm = '/'.join(getbms())
            if bm:
                output.append(bm)
    else:
        ctx = scmutil.revsingle(repo, rev, None)

        if ctx.rev() is None:
            ctx = repo[None]
            parents = ctx.parents()
            taglist = []
            for p in parents:
                taglist.extend(p.tags())

            changed = ""
            if default or id or num:
                if (any(repo.status())
                    or any(ctx.sub(s).dirty() for s in ctx.substate)):
                    changed = '+'
            if default or id:
                output = ["%s%s" %
                  ('+'.join([hexfunc(p.node()) for p in parents]), changed)]
            if num:
                output.append("%s%s" %
                  ('+'.join(["%d" % p.rev() for p in parents]), changed))
        else:
            if default or id:
                output = [hexfunc(ctx.node())]
            if num:
                output.append(pycompat.bytestr(ctx.rev()))
            taglist = ctx.tags()

        if default and not ui.quiet:
            b = ctx.branch()
            if b != 'default':
                output.append("(%s)" % b)

            # multiple tags for a single parent separated by '/'
            t = '/'.join(taglist)
            if t:
                output.append(t)

            # multiple bookmarks for a single parent separated by '/'
            bm = '/'.join(ctx.bookmarks())
            if bm:
                output.append(bm)
        else:
            if branch:
                output.append(ctx.branch())

            if tags:
                output.extend(taglist)

            if bookmarks:
                output.extend(ctx.bookmarks())

    ui.write("%s\n" % ' '.join(output))

@command('import|patch',
    [('p', 'strip', 1,
     _('directory strip option for patch. This has the same '
       'meaning as the corresponding patch option'), _('NUM')),
    ('b', 'base', '', _('base path (DEPRECATED)'), _('PATH')),
    ('e', 'edit', False, _('invoke editor on commit messages')),
    ('f', 'force', None,
     _('skip check for outstanding uncommitted changes (DEPRECATED)')),
    ('', 'no-commit', None,
     _("don't commit, just update the working directory")),
    ('', 'bypass', None,
     _("apply patch without touching the working directory")),
    ('', 'partial', None,
     _('commit even if some hunks fail')),
    ('', 'exact', None,
     _('abort if patch would apply lossily')),
    ('', 'prefix', '',
     _('apply patch to subdirectory'), _('DIR')),
    ('', 'import-branch', None,
     _('use any branch information in patch (implied by --exact)'))] +
    commitopts + commitopts2 + similarityopts,
    _('[OPTION]... PATCH...'))
def import_(ui, repo, patch1=None, *patches, **opts):
    """import an ordered set of patches

    Import a list of patches and commit them individually (unless
    --no-commit is specified).

    To read a patch from standard input (stdin), use "-" as the patch
    name. If a URL is specified, the patch will be downloaded from
    there.

    Import first applies changes to the working directory (unless
    --bypass is specified); import will abort if there are outstanding
    changes.

    Use --bypass to apply and commit patches directly to the
    repository, without affecting the working directory. Without
    --exact, patches will be applied on top of the working directory
    parent revision.

    You can import a patch straight from a mail message. Even patches
    as attachments work (to use the body part, it must have type
    text/plain or text/x-patch). The From and Subject headers of the
    email message are used as the default committer and commit message.
    All text/plain body parts before the first diff are added to the
    commit message.

    If the imported patch was generated by :hg:`export`, user and
    description from patch override values from message headers and
    body. Values given on command line with -m/--message and -u/--user
    override these.

    If --exact is specified, import will set the working directory to
    the parent of each patch before applying it, and will abort if the
    resulting changeset has a different ID than the one recorded in
    the patch. This will guard against various ways that portable
    patch formats and mail systems might fail to transfer Mercurial
    data or metadata. See :hg:`bundle` for lossless transmission.

    Use --partial to ensure a changeset will be created from the patch
    even if some hunks fail to apply. Hunks that fail to apply will be
    written to a <target-file>.rej file. Conflicts can then be resolved
    by hand before :hg:`commit --amend` is run to update the created
    changeset. This flag exists to let people import patches that
    partially apply without losing the associated metadata (author,
    date, description, ...).

    .. note::

       When no hunks apply cleanly, :hg:`import --partial` will create
       an empty changeset, importing only the patch metadata.

    With -s/--similarity, hg will attempt to discover renames and
    copies in the patch in the same way as :hg:`addremove`.

    It is possible to use external patch programs to perform the patch
    by setting the ``ui.patch`` configuration option. For the default
    internal tool, the fuzz can also be configured via ``patch.fuzz``.
    See :hg:`help config` for more information about configuration
    files and how to use these options.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    .. container:: verbose

      Examples:

      - import a traditional patch from a website and detect renames::

          hg import -s 80 http://example.com/bugfix.patch

      - import a changeset from an hgweb server::

          hg import https://www.mercurial-scm.org/repo/hg/rev/5ca8c111e9aa

      - import all the patches in a Unix-style mbox::

          hg import incoming-patches.mbox

      - import patches from stdin::

          hg import -

      - attempt to exactly restore an exported changeset (not always
        possible)::

          hg import --exact proposed-fix.patch

      - use an external tool to apply a patch which is too fuzzy for
        the default internal tool::

          hg import --config ui.patch="patch --merge" fuzzy.patch

      - change the default fuzzing from 2 to a less strict 7::

          hg import --config ui.fuzz=7 fuzz.patch

    Returns 0 on success, 1 on partial success (see --partial).
    """

    opts = pycompat.byteskwargs(opts)
    if not patch1:
        raise error.Abort(_('need at least one patch to import'))

    patches = (patch1,) + patches

    date = opts.get('date')
    if date:
        opts['date'] = util.parsedate(date)

    exact = opts.get('exact')
    update = not opts.get('bypass')
    if not update and opts.get('no_commit'):
        raise error.Abort(_('cannot use --no-commit with --bypass'))
    try:
        sim = float(opts.get('similarity') or 0)
    except ValueError:
        raise error.Abort(_('similarity must be a number'))
    if sim < 0 or sim > 100:
        raise error.Abort(_('similarity must be between 0 and 100'))
    if sim and not update:
        raise error.Abort(_('cannot use --similarity with --bypass'))
    if exact:
        if opts.get('edit'):
            raise error.Abort(_('cannot use --exact with --edit'))
        if opts.get('prefix'):
            raise error.Abort(_('cannot use --exact with --prefix'))

    base = opts["base"]
    wlock = dsguard = lock = tr = None
    msgs = []
    ret = 0

    try:
        wlock = repo.wlock()

        if update:
            cmdutil.checkunfinished(repo)
            if (exact or not opts.get('force')):
                cmdutil.bailifchanged(repo)

        if not opts.get('no_commit'):
            lock = repo.lock()
            tr = repo.transaction('import')
        else:
            dsguard = dirstateguard.dirstateguard(repo, 'import')
        parents = repo[None].parents()
        for patchurl in patches:
            if patchurl == '-':
                ui.status(_('applying patch from stdin\n'))
                patchfile = ui.fin
                patchurl = 'stdin' # for error message
            else:
                patchurl = os.path.join(base, patchurl)
                ui.status(_('applying %s\n') % patchurl)
                patchfile = hg.openpath(ui, patchurl)

            haspatch = False
            for hunk in patch.split(patchfile):
                (msg, node, rej) = cmdutil.tryimportone(ui, repo, hunk,
                                                        parents, opts,
                                                        msgs, hg.clean)
                if msg:
                    haspatch = True
                    ui.note(msg + '\n')
                if update or exact:
                    parents = repo[None].parents()
                else:
                    parents = [repo[node]]
                if rej:
3004 if rej:
3005 ui.write_err(_("patch applied partially\n"))
3005 ui.write_err(_("patch applied partially\n"))
3006 ui.write_err(_("(fix the .rej files and run "
3006 ui.write_err(_("(fix the .rej files and run "
3007 "`hg commit --amend`)\n"))
3007 "`hg commit --amend`)\n"))
3008 ret = 1
3008 ret = 1
3009 break
3009 break
3010
3010
3011 if not haspatch:
3011 if not haspatch:
3012 raise error.Abort(_('%s: no diffs found') % patchurl)
3012 raise error.Abort(_('%s: no diffs found') % patchurl)
3013
3013
3014 if tr:
3014 if tr:
3015 tr.close()
3015 tr.close()
3016 if msgs:
3016 if msgs:
3017 repo.savecommitmessage('\n* * *\n'.join(msgs))
3017 repo.savecommitmessage('\n* * *\n'.join(msgs))
3018 if dsguard:
3018 if dsguard:
3019 dsguard.close()
3019 dsguard.close()
3020 return ret
3020 return ret
3021 finally:
3021 finally:
3022 if tr:
3022 if tr:
3023 tr.release()
3023 tr.release()
3024 release(lock, dsguard, wlock)
3024 release(lock, dsguard, wlock)
3025
3025
@command('incoming|in',
    [('f', 'force', None,
      _('run even if remote repository is unrelated')),
    ('n', 'newest-first', None, _('show newest record first')),
    ('', 'bundle', '',
     _('file to store the bundles into'), _('FILE')),
    ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
    ('B', 'bookmarks', False, _("compare bookmarks")),
    ('b', 'branch', [],
     _('a specific branch you would like to pull'), _('BRANCH')),
    ] + logopts + remoteopts + subrepoopts,
    _('[-p] [-n] [-M] [-f] [-r REV]... [--bundle FILENAME] [SOURCE]'))
def incoming(ui, repo, source="default", **opts):
    """show new changesets found in source

    Show new changesets found in the specified path/URL or the default
    pull location. These are the changesets that would have been pulled
    if a pull was requested at the time you issued this command.

    See pull for valid source format details.

    .. container:: verbose

      With -B/--bookmarks, the result of bookmark comparison between
      local and remote repositories is displayed. With -v/--verbose,
      status is also displayed for each bookmark like below::

        BM1               01234567890a added
        BM2               1234567890ab advanced
        BM3               234567890abc diverged
        BM4               34567890abcd changed

      The action taken locally when pulling depends on the
      status of each bookmark:

      :``added``: pull will create it
      :``advanced``: pull will update it
      :``diverged``: pull will create a divergent bookmark
      :``changed``: result depends on remote changesets

      From the point of view of pulling behavior, a bookmark
      existing only in the remote repository is treated as ``added``,
      even if it is in fact locally deleted.

    .. container:: verbose

      For a remote repository, using --bundle avoids downloading the
      changesets twice if the incoming command is followed by a pull.

      Examples:

      - show incoming changes with patches and full description::

          hg incoming -vp

      - show incoming changes excluding merges, store a bundle::

          hg in -vpM --bundle incoming.hg
          hg pull incoming.hg

      - briefly list changes inside a bundle::

          hg in changes.hg -T "{desc|firstline}\\n"

    Returns 0 if there are incoming changes, 1 otherwise.
    """
    opts = pycompat.byteskwargs(opts)
    if opts.get('graph'):
        cmdutil.checkunsupportedgraphflags([], opts)
        def display(other, chlist, displayer):
            revdag = cmdutil.graphrevs(other, chlist, opts)
            cmdutil.displaygraph(ui, repo, revdag, displayer,
                                 graphmod.asciiedges)

        hg._incoming(display, lambda: 1, ui, repo, source, opts, buffered=True)
        return 0

    if opts.get('bundle') and opts.get('subrepos'):
        raise error.Abort(_('cannot combine --bundle and --subrepos'))

    if opts.get('bookmarks'):
        source, branches = hg.parseurl(ui.expandpath(source),
                                       opts.get('branch'))
        other = hg.peer(repo, opts, source)
        if 'bookmarks' not in other.listkeys('namespaces'):
            ui.warn(_("remote doesn't support bookmarks\n"))
            return 0
        ui.pager('incoming')
        ui.status(_('comparing with %s\n') % util.hidepassword(source))
        return bookmarks.incoming(ui, repo, other)

    repo._subtoppath = ui.expandpath(source)
    try:
        return hg.incoming(ui, repo, source, opts)
    finally:
        del repo._subtoppath

@command('^init', remoteopts, _('[-e CMD] [--remotecmd CMD] [DEST]'),
         norepo=True)
def init(ui, dest=".", **opts):
    """create a new repository in the given directory

    Initialize a new repository in the given directory. If the given
    directory does not exist, it will be created.

    If no directory is given, the current directory is used.

    It is possible to specify an ``ssh://`` URL as the destination.
    See :hg:`help urls` for more information.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    hg.peer(ui, opts, ui.expandpath(dest), create=True)

@command('locate',
    [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
    ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
    ('f', 'fullpath', None, _('print complete paths from the filesystem root')),
    ] + walkopts,
    _('[OPTION]... [PATTERN]...'))
def locate(ui, repo, *pats, **opts):
    """locate files matching specific patterns (DEPRECATED)

    Print files under Mercurial control in the working directory whose
    names match the given patterns.

    By default, this command searches all directories in the working
    directory. To search just the current directory and its
    subdirectories, use "--include .".

    If no patterns are given to match, this command prints the names
    of all files under Mercurial control in the working directory.

    If you want to feed the output of this command into the "xargs"
    command, use the -0 option to both this command and "xargs". This
    will avoid the problem of "xargs" treating single filenames that
    contain whitespace as multiple filenames.

    See :hg:`help files` for a more versatile command.

    Returns 0 if a match is found, 1 otherwise.
    """
    opts = pycompat.byteskwargs(opts)
    if opts.get('print0'):
        end = '\0'
    else:
        end = '\n'
    rev = scmutil.revsingle(repo, opts.get('rev'), None).node()

    ret = 1
    ctx = repo[rev]
    m = scmutil.match(ctx, pats, opts, default='relglob',
                      badfn=lambda x, y: False)

    ui.pager('locate')
    for abs in ctx.matches(m):
        if opts.get('fullpath'):
            ui.write(repo.wjoin(abs), end)
        else:
            ui.write(((pats and m.rel(abs)) or abs), end)
        ret = 0

    return ret

@command('^log|history',
    [('f', 'follow', None,
     _('follow changeset history, or file history across copies and renames')),
    ('', 'follow-first', None,
     _('only follow the first parent of merge changesets (DEPRECATED)')),
    ('d', 'date', '', _('show revisions matching date spec'), _('DATE')),
    ('C', 'copies', None, _('show copied files')),
    ('k', 'keyword', [],
     _('do case-insensitive search for a given text'), _('TEXT')),
    ('r', 'rev', [], _('show the specified revision or revset'), _('REV')),
    ('', 'removed', None, _('include revisions where files were removed')),
    ('m', 'only-merges', None, _('show only merges (DEPRECATED)')),
    ('u', 'user', [], _('revisions committed by user'), _('USER')),
    ('', 'only-branch', [],
     _('show only changesets within the given named branch (DEPRECATED)'),
     _('BRANCH')),
    ('b', 'branch', [],
     _('show changesets within the given named branch'), _('BRANCH')),
    ('P', 'prune', [],
     _('do not display revision or any of its ancestors'), _('REV')),
    ] + logopts + walkopts,
    _('[OPTION]... [FILE]'),
    inferrepo=True)
def log(ui, repo, *pats, **opts):
    """show revision history of entire repository or files

    Print the revision history of the specified files or the entire
    project.

    If no revision range is specified, the default is ``tip:0`` unless
    --follow is set, in which case the working directory parent is
    used as the starting revision.

    File history is shown without following rename or copy history of
    files. Use -f/--follow with a filename to follow history across
    renames and copies. --follow without a filename will only show
    ancestors or descendants of the starting revision.

    By default this command prints revision number and changeset id,
    tags, non-trivial parents, user, date and time, and a summary for
    each commit. When the -v/--verbose switch is used, the list of
    changed files and full commit message are shown.

    With --graph the revisions are shown as an ASCII art DAG with the most
    recent changeset at the top.
    'o' is a changeset, '@' is a working directory parent, 'x' is obsolete,
    and '+' represents a fork where the changeset from the lines below is a
    parent of the 'o' merge on the same line.
    Paths in the DAG are represented with '|', '/' and so forth. ':' in place
    of a '|' indicates one or more revisions in a path are omitted.

    .. note::

       :hg:`log --patch` may generate unexpected diff output for merge
       changesets, as it will only compare the merge changeset against
       its first parent. Also, only files different from BOTH parents
       will appear in files:.

    .. note::

       For performance reasons, :hg:`log FILE` may omit duplicate changes
       made on branches and will not show removals or mode changes. To
       see all such changes, use the --removed switch.

    .. container:: verbose

      Some examples:

      - changesets with full descriptions and file lists::

          hg log -v

      - changesets ancestral to the working directory::

          hg log -f

      - last 10 commits on the current branch::

          hg log -l 10 -b .

      - changesets showing all modifications of a file, including removals::

          hg log --removed file.c

      - all changesets that touch a directory, with diffs, excluding merges::

          hg log -Mp lib/

      - all revision numbers that match a keyword::

          hg log -k bug --template "{rev}\\n"

      - the full hash identifier of the working directory parent::

          hg log -r . --template "{node}\\n"

      - list available log templates::

          hg log -T list

      - check if a given changeset is included in a tagged release::

          hg log -r "a21ccf and ancestor(1.9)"

      - find all changesets by some user in a date range::

          hg log -k alice -d "may 2008 to jul 2008"

      - summary of all changesets after the last tag::

          hg log -r "last(tagged())::" --template "{desc|firstline}\\n"

    See :hg:`help dates` for a list of formats valid for -d/--date.

    See :hg:`help revisions` for more about specifying and ordering
    revisions.

    See :hg:`help templates` for more about pre-packaged styles and
    specifying custom templates.

    Returns 0 on success.

    """
    opts = pycompat.byteskwargs(opts)
    if opts.get('follow') and opts.get('rev'):
        opts['rev'] = [revsetlang.formatspec('reverse(::%lr)', opts.get('rev'))]
        del opts['follow']

    if opts.get('graph'):
        return cmdutil.graphlog(ui, repo, pats, opts)

    revs, expr, filematcher = cmdutil.getlogrevs(repo, pats, opts)
    limit = cmdutil.loglimit(opts)
    count = 0

    getrenamed = None
    if opts.get('copies'):
        endrev = None
        if opts.get('rev'):
            endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
        getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)

    ui.pager('log')
    displayer = cmdutil.show_changeset(ui, repo, opts, buffered=True)
    for rev in revs:
        if count == limit:
            break
        ctx = repo[rev]
        copies = None
        if getrenamed is not None and rev:
            copies = []
            for fn in ctx.files():
                rename = getrenamed(fn, rev)
                if rename:
                    copies.append((fn, rename[0]))
        if filematcher:
            revmatchfn = filematcher(ctx.rev())
        else:
            revmatchfn = None
        displayer.show(ctx, copies=copies, matchfn=revmatchfn)
        if displayer.flush(ctx):
            count += 1

    displayer.close()

@command('manifest',
    [('r', 'rev', '', _('revision to display'), _('REV')),
     ('', 'all', False, _("list files from all revisions"))]
    + formatteropts,
    _('[-r REV]'))
def manifest(ui, repo, node=None, rev=None, **opts):
    """output the current or given revision of the project manifest

    Print a list of version controlled files for the given revision.
    If no revision is given, the first parent of the working directory
    is used, or the null revision if no revision is checked out.

    With -v, print file permissions, symlink and executable bits.
    With --debug, print file revision hashes.

    If option --all is specified, the list of all files from all revisions
    is printed. This includes deleted and renamed files.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    fm = ui.formatter('manifest', opts)

    if opts.get('all'):
        if rev or node:
            raise error.Abort(_("can't specify a revision with --all"))

        res = []
        prefix = "data/"
        suffix = ".i"
        plen = len(prefix)
        slen = len(suffix)
        with repo.lock():
            for fn, b, size in repo.store.datafiles():
                if size != 0 and fn[-slen:] == suffix and fn[:plen] == prefix:
                    res.append(fn[plen:-slen])
        ui.pager('manifest')
        for f in res:
            fm.startitem()
            fm.write("path", '%s\n', f)
        fm.end()
        return

    if rev and node:
        raise error.Abort(_("please specify just one revision"))

    if not node:
        node = rev

    char = {'l': '@', 'x': '*', '': ''}
    mode = {'l': '644', 'x': '755', '': '644'}
    ctx = scmutil.revsingle(repo, node)
    mf = ctx.manifest()
    ui.pager('manifest')
    for f in ctx:
        fm.startitem()
        fl = ctx[f].flags()
        fm.condwrite(ui.debugflag, 'hash', '%s ', hex(mf[f]))
        fm.condwrite(ui.verbose, 'mode type', '%s %1s ', mode[fl], char[fl])
        fm.write('path', '%s\n', f)
    fm.end()

3419 @command('^merge',
3419 @command('^merge',
3420 [('f', 'force', None,
3420 [('f', 'force', None,
3421 _('force a merge including outstanding changes (DEPRECATED)')),
3421 _('force a merge including outstanding changes (DEPRECATED)')),
3422 ('r', 'rev', '', _('revision to merge'), _('REV')),
3422 ('r', 'rev', '', _('revision to merge'), _('REV')),
3423 ('P', 'preview', None,
3423 ('P', 'preview', None,
3424 _('review revisions to merge (no merge is performed)'))
3424 _('review revisions to merge (no merge is performed)'))
3425 ] + mergetoolopts,
3425 ] + mergetoolopts,
3426 _('[-P] [[-r] REV]'))
3426 _('[-P] [[-r] REV]'))
def merge(ui, repo, node=None, **opts):
    """merge another revision into working directory

    The current working directory is updated with all changes made in
    the requested revision since the last common predecessor revision.

    Files that changed between either parent are marked as changed for
    the next commit and a commit must be performed before any further
    updates to the repository are allowed. The next commit will have
    two parents.

    ``--tool`` can be used to specify the merge tool used for file
    merges. It overrides the HGMERGE environment variable and your
    configuration files. See :hg:`help merge-tools` for options.

    If no revision is specified, the working directory's parent is a
    head revision, and the current branch contains exactly one other
    head, the other head is merged with by default. Otherwise, an
    explicit revision with which to merge must be provided.

    See :hg:`help resolve` for information on handling file conflicts.

    To undo an uncommitted merge, use :hg:`update --clean .` which
    will check out a clean copy of the original merge parent, losing
    all changes.

    Returns 0 on success, 1 if there are unresolved files.
    """

    opts = pycompat.byteskwargs(opts)
    if opts.get('rev') and node:
        raise error.Abort(_("please specify just one revision"))
    if not node:
        node = opts.get('rev')

    if node:
        node = scmutil.revsingle(repo, node).node()

    if not node:
        node = repo[destutil.destmerge(repo)].node()

    if opts.get('preview'):
        # find nodes that are ancestors of p2 but not of p1
        p1 = repo.lookup('.')
        p2 = repo.lookup(node)
        nodes = repo.changelog.findmissing(common=[p1], heads=[p2])

        displayer = cmdutil.show_changeset(ui, repo, opts)
        for node in nodes:
            displayer.show(repo[node])
        displayer.close()
        return 0

    try:
        # ui.forcemerge is an internal variable, do not document
        repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''), 'merge')
        force = opts.get('force')
        labels = ['working copy', 'merge rev']
        return hg.merge(repo, node, force=force, mergeforce=force,
                        labels=labels)
    finally:
        ui.setconfig('ui', 'forcemerge', '', 'merge')

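# The default-destination rule described in the merge docstring above can be
# sketched as a small standalone helper. This is an illustration only, not
# part of Mercurial; the function and argument names are hypothetical:
def _examplemergetarget(currentparent, branchheads):
    # With no revision given, merging is only allowed when the working
    # directory parent is a head and the branch has exactly one other head;
    # that other head becomes the implicit merge target.
    if currentparent not in branchheads:
        raise ValueError('working directory parent is not a branch head')
    others = [h for h in branchheads if h != currentparent]
    if len(others) != 1:
        raise ValueError('explicit revision required: %d other heads'
                         % len(others))
    return others[0]
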
@command('outgoing|out',
    [('f', 'force', None, _('run even when the destination is unrelated')),
    ('r', 'rev', [],
     _('a changeset intended to be included in the destination'), _('REV')),
    ('n', 'newest-first', None, _('show newest record first')),
    ('B', 'bookmarks', False, _('compare bookmarks')),
    ('b', 'branch', [], _('a specific branch you would like to push'),
     _('BRANCH')),
    ] + logopts + remoteopts + subrepoopts,
    _('[-M] [-p] [-n] [-f] [-r REV]... [DEST]'))
def outgoing(ui, repo, dest=None, **opts):
    """show changesets not found in the destination

    Show changesets not found in the specified destination repository
    or the default push location. These are the changesets that would
    be pushed if a push was requested.

    See pull for details of valid destination formats.

    .. container:: verbose

      With -B/--bookmarks, the result of bookmark comparison between
      local and remote repositories is displayed. With -v/--verbose,
      status is also displayed for each bookmark like below::

        BM1               01234567890a added
        BM2                            deleted
        BM3               234567890abc advanced
        BM4               34567890abcd diverged
        BM5               4567890abcde changed

      The action taken when pushing depends on the
      status of each bookmark:

      :``added``: push with ``-B`` will create it
      :``deleted``: push with ``-B`` will delete it
      :``advanced``: push will update it
      :``diverged``: push with ``-B`` will update it
      :``changed``: push with ``-B`` will update it

      From the point of view of pushing behavior, bookmarks
      existing only in the remote repository are treated as
      ``deleted``, even if they were in fact added remotely.

    Returns 0 if there are outgoing changes, 1 otherwise.
    """
    opts = pycompat.byteskwargs(opts)
    if opts.get('graph'):
        cmdutil.checkunsupportedgraphflags([], opts)
        o, other = hg._outgoing(ui, repo, dest, opts)
        if not o:
            cmdutil.outgoinghooks(ui, repo, other, opts, o)
            return

        revdag = cmdutil.graphrevs(repo, o, opts)
        ui.pager('outgoing')
        displayer = cmdutil.show_changeset(ui, repo, opts, buffered=True)
        cmdutil.displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges)
        cmdutil.outgoinghooks(ui, repo, other, opts, o)
        return 0

    if opts.get('bookmarks'):
        dest = ui.expandpath(dest or 'default-push', dest or 'default')
        dest, branches = hg.parseurl(dest, opts.get('branch'))
        other = hg.peer(repo, opts, dest)
        if 'bookmarks' not in other.listkeys('namespaces'):
            ui.warn(_("remote doesn't support bookmarks\n"))
            return 0
        ui.status(_('comparing with %s\n') % util.hidepassword(dest))
        ui.pager('outgoing')
        return bookmarks.outgoing(ui, repo, other)

    repo._subtoppath = ui.expandpath(dest or 'default-push', dest or 'default')
    try:
        return hg.outgoing(ui, repo, dest, opts)
    finally:
        del repo._subtoppath

@command('parents',
    [('r', 'rev', '', _('show parents of the specified revision'), _('REV')),
    ] + templateopts,
    _('[-r REV] [FILE]'),
    inferrepo=True)
def parents(ui, repo, file_=None, **opts):
    """show the parents of the working directory or revision (DEPRECATED)

    Print the working directory's parent revisions. If a revision is
    given via -r/--rev, the parent of that revision will be printed.
    If a file argument is given, the revision in which the file was
    last changed (before the working directory revision or the
    argument to --rev if given) is printed.

    This command is equivalent to::

        hg log -r "p1()+p2()" or
        hg log -r "p1(REV)+p2(REV)" or
        hg log -r "max(::p1() and file(FILE))+max(::p2() and file(FILE))" or
        hg log -r "max(::p1(REV) and file(FILE))+max(::p2(REV) and file(FILE))"

    See :hg:`summary` and :hg:`help revsets` for related information.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    ctx = scmutil.revsingle(repo, opts.get('rev'), None)

    if file_:
        m = scmutil.match(ctx, (file_,), opts)
        if m.anypats() or len(m.files()) != 1:
            raise error.Abort(_('can only specify an explicit filename'))
        file_ = m.files()[0]
        filenodes = []
        for cp in ctx.parents():
            if not cp:
                continue
            try:
                filenodes.append(cp.filenode(file_))
            except error.LookupError:
                pass
        if not filenodes:
            raise error.Abort(_("'%s' not found in manifest!") % file_)
        p = []
        for fn in filenodes:
            fctx = repo.filectx(file_, fileid=fn)
            p.append(fctx.node())
    else:
        p = [cp.node() for cp in ctx.parents()]

    displayer = cmdutil.show_changeset(ui, repo, opts)
    for n in p:
        if n != nullid:
            displayer.show(repo[n])
    displayer.close()

@command('paths', formatteropts, _('[NAME]'), optionalrepo=True)
def paths(ui, repo, search=None, **opts):
    """show aliases for remote repositories

    Show definition of symbolic path name NAME. If no name is given,
    show definition of all available names.

    Option -q/--quiet suppresses all output when searching for NAME
    and shows only the path names when listing all definitions.

    Path names are defined in the [paths] section of your
    configuration file and in ``/etc/mercurial/hgrc``. If run inside a
    repository, ``.hg/hgrc`` is used, too.

    The path names ``default`` and ``default-push`` have a special
    meaning. When performing a push or pull operation, they are used
    as fallbacks if no location is specified on the command-line.
    When ``default-push`` is set, it will be used for push and
    ``default`` will be used for pull; otherwise ``default`` is used
    as the fallback for both. When cloning a repository, the clone
    source is written as ``default`` in ``.hg/hgrc``.

    .. note::

       ``default`` and ``default-push`` apply to all inbound (e.g.
       :hg:`incoming`) and outbound (e.g. :hg:`outgoing`, :hg:`email`
       and :hg:`bundle`) operations.

    See :hg:`help urls` for more information.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    ui.pager('paths')
    if search:
        pathitems = [(name, path) for name, path in ui.paths.iteritems()
                     if name == search]
    else:
        pathitems = sorted(ui.paths.iteritems())

    fm = ui.formatter('paths', opts)
    if fm.isplain():
        hidepassword = util.hidepassword
    else:
        hidepassword = str
    if ui.quiet:
        namefmt = '%s\n'
    else:
        namefmt = '%s = '
    showsubopts = not search and not ui.quiet

    for name, path in pathitems:
        fm.startitem()
        fm.condwrite(not search, 'name', namefmt, name)
        fm.condwrite(not ui.quiet, 'url', '%s\n', hidepassword(path.rawloc))
        for subopt, value in sorted(path.suboptions.items()):
            assert subopt not in ('name', 'url')
            if showsubopts:
                fm.plain('%s:%s = ' % (name, subopt))
            fm.condwrite(showsubopts, subopt, '%s\n', value)

    fm.end()

    if search and not pathitems:
        if not ui.quiet:
            ui.warn(_("not found!\n"))
        return 1
    else:
        return 0

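# The ``default``/``default-push`` fallback rule described in the paths
# docstring above, as a standalone sketch. Illustration only; 'pathtable'
# is a hypothetical name-to-URL mapping, not Mercurial's ui.paths:
def _examplepathfallback(pathtable, pushing):
    # For push operations, prefer 'default-push' and fall back to
    # 'default'; pull operations only ever consult 'default'.
    if pushing and 'default-push' in pathtable:
        return pathtable['default-push']
    return pathtable.get('default')
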
@command('phase',
    [('p', 'public', False, _('set changeset phase to public')),
     ('d', 'draft', False, _('set changeset phase to draft')),
     ('s', 'secret', False, _('set changeset phase to secret')),
     ('f', 'force', False, _('allow to move boundary backward')),
     ('r', 'rev', [], _('target revision'), _('REV')),
    ],
    _('[-p|-d|-s] [-f] [-r] [REV...]'))
def phase(ui, repo, *revs, **opts):
    """set or show the current phase name

    With no argument, show the phase name of the current revision(s).

    With one of -p/--public, -d/--draft or -s/--secret, change the
    phase value of the specified revisions.

    Unless -f/--force is specified, :hg:`phase` won't move changesets from a
    lower phase to a higher phase. Phases are ordered as follows::

        public < draft < secret

    Returns 0 on success, 1 if some phases could not be changed.

    (For more information about the phases concept, see :hg:`help phases`.)
    """
    opts = pycompat.byteskwargs(opts)
    # search for a unique phase argument
    targetphase = None
    for idx, name in enumerate(phases.phasenames):
        if opts[name]:
            if targetphase is not None:
                raise error.Abort(_('only one phase can be specified'))
            targetphase = idx

    # look for specified revision
    revs = list(revs)
    revs.extend(opts['rev'])
    if not revs:
        # display both parents as the second parent phase can influence
        # the phase of a merge commit
        revs = [c.rev() for c in repo[None].parents()]

    revs = scmutil.revrange(repo, revs)

    lock = None
    ret = 0
    if targetphase is None:
        # display
        for r in revs:
            ctx = repo[r]
            ui.write('%i: %s\n' % (ctx.rev(), ctx.phasestr()))
    else:
        tr = None
        lock = repo.lock()
        try:
            tr = repo.transaction("phase")
            # set phase
            if not revs:
                raise error.Abort(_('empty revision set'))
            nodes = [repo[r].node() for r in revs]
            # moving revisions from public to draft may hide them
            # We have to check the result on an unfiltered repository
            unfi = repo.unfiltered()
            getphase = unfi._phasecache.phase
            olddata = [getphase(unfi, r) for r in unfi]
            phases.advanceboundary(repo, tr, targetphase, nodes)
            if opts['force']:
                phases.retractboundary(repo, tr, targetphase, nodes)
            tr.close()
        finally:
            if tr is not None:
                tr.release()
            lock.release()
        getphase = unfi._phasecache.phase
        newdata = [getphase(unfi, r) for r in unfi]
        changes = sum(newdata[r] != olddata[r] for r in unfi)
        cl = unfi.changelog
        rejected = [n for n in nodes
                    if newdata[cl.rev(n)] < targetphase]
        if rejected:
            ui.warn(_('cannot move %i changesets to a higher '
                      'phase, use --force\n') % len(rejected))
            ret = 1
        if changes:
            msg = _('phase changed for %i changesets\n') % changes
            if ret:
                ui.status(msg)
            else:
                ui.note(msg)
        else:
            ui.warn(_('no phases changed\n'))
    return ret

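# The ordering rule the phase command enforces above (public < draft <
# secret) as a standalone sketch: without --force, only moves toward a
# lower phase value are accepted. Illustration only; the integer values
# mirror the indices of phases.phasenames (public=0, draft=1, secret=2):
def _examplephasemoveneedsforce(oldphase, targetphase):
    # Raising the phase value (e.g. public -> secret) retracts the
    # boundary backward and therefore requires --force; lowering or
    # keeping it is an ordinary boundary advance.
    return targetphase > oldphase
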
def postincoming(ui, repo, modheads, optupdate, checkout, brev):
    """Run after a changegroup has been added via pull/unbundle

    This takes the following arguments:

    :modheads: change of heads by pull/unbundle
    :optupdate: whether updating the working directory is needed
    :checkout: update destination revision (or None for the default destination)
    :brev: a name, which might be a bookmark to be activated after updating
    """
    if modheads == 0:
        return
    if optupdate:
        try:
            return hg.updatetotally(ui, repo, checkout, brev)
        except error.UpdateAbort as inst:
            msg = _("not updating: %s") % str(inst)
            hint = inst.hint
            raise error.UpdateAbort(msg, hint=hint)
    if modheads > 1:
        currentbranchheads = len(repo.branchheads())
        if currentbranchheads == modheads:
            ui.status(_("(run 'hg heads' to see heads, 'hg merge' to merge)\n"))
        elif currentbranchheads > 1:
            ui.status(_("(run 'hg heads .' to see heads, 'hg merge' to "
                        "merge)\n"))
        else:
            ui.status(_("(run 'hg heads' to see heads)\n"))
    else:
        ui.status(_("(run 'hg update' to get a working copy)\n"))

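# The head-count messaging in postincoming() can be summarized by a small
# standalone helper that returns which command the hint suggests running.
# Illustration only; the helper and its return strings are hypothetical,
# it merely mirrors the branching above for the no-update case:
def _exampleheadshint(modheads, currentbranchheads):
    if modheads == 0:
        # nothing was added, so no hint is printed
        return None
    if modheads > 1:
        if currentbranchheads == modheads:
            return "hg heads / hg merge"
        elif currentbranchheads > 1:
            return "hg heads . / hg merge"
        else:
            return "hg heads"
    return "hg update"
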
@command('^pull',
    [('u', 'update', None,
     _('update to new branch head if changesets were pulled')),
    ('f', 'force', None, _('run even when remote repository is unrelated')),
    ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
    ('B', 'bookmark', [], _("bookmark to pull"), _('BOOKMARK')),
    ('b', 'branch', [], _('a specific branch you would like to pull'),
     _('BRANCH')),
    ] + remoteopts,
    _('[-u] [-f] [-r REV]... [-e CMD] [--remotecmd CMD] [SOURCE]'))
def pull(ui, repo, source="default", **opts):
    """pull changes from the specified source

    Pull changes from a remote repository to a local one.

    This finds all changes from the repository at the specified path
    or URL and adds them to a local repository (the current one unless
    -R is specified). By default, this does not update the copy of the
    project in the working directory.

    Use :hg:`incoming` if you want to see what would have been added
    by a pull at the time you issued this command. If you then decide
    to add those changes to the repository, you should use :hg:`pull
    -r X` where ``X`` is the last changeset listed by :hg:`incoming`.

    If SOURCE is omitted, the 'default' path will be used.
    See :hg:`help urls` for more information.

    Specifying a bookmark as ``.`` is equivalent to specifying the active
    bookmark's name.

    Returns 0 on success, 1 if an update had unresolved files.
    """

    opts = pycompat.byteskwargs(opts)
    if ui.configbool('commands', 'update.requiredest') and opts.get('update'):
        msg = _('update destination required by configuration')
        hint = _('use hg pull followed by hg update DEST')
        raise error.Abort(msg, hint=hint)

    source, branches = hg.parseurl(ui.expandpath(source), opts.get('branch'))
    ui.status(_('pulling from %s\n') % util.hidepassword(source))
    other = hg.peer(repo, opts, source)
    try:
        revs, checkout = hg.addbranchrevs(repo, other, branches,
                                          opts.get('rev'))

        pullopargs = {}
        if opts.get('bookmark'):
            if not revs:
                revs = []
            # The list of bookmarks used here is not the one used to actually
            # update the bookmark name. This can result in the revision pulled
            # not ending up with the name of the bookmark because of a race
            # condition on the server. (See issue 4689 for details)
            remotebookmarks = other.listkeys('bookmarks')
            pullopargs['remotebookmarks'] = remotebookmarks
            for b in opts['bookmark']:
                b = repo._bookmarks.expandname(b)
                if b not in remotebookmarks:
                    raise error.Abort(_('remote bookmark %s not found!') % b)
                revs.append(remotebookmarks[b])

        if revs:
            try:
                # When 'rev' is a bookmark name, we cannot guarantee that it
                # will be updated with that name because of a race condition
                # server side. (See issue 4689 for details)
                oldrevs = revs
                revs = []  # actually, nodes
                for r in oldrevs:
                    node = other.lookup(r)
                    revs.append(node)
                    if r == checkout:
                        checkout = node
            except error.CapabilityError:
                err = _("other repository doesn't support revision lookup, "
                        "so a rev cannot be specified.")
                raise error.Abort(err)

        pullopargs.update(opts.get('opargs', {}))
        modheads = exchange.pull(repo, other, heads=revs,
                                 force=opts.get('force'),
                                 bookmarks=opts.get('bookmark', ()),
                                 opargs=pullopargs).cgresult

        # brev is a name, which might be a bookmark to be activated at
        # the end of the update. In other words, it is an explicit
        # destination of the update
        brev = None

        if checkout:
3913 checkout = str(repo.changelog.rev(checkout))
3913 checkout = str(repo.changelog.rev(checkout))
3914
3914
3915 # order below depends on implementation of
3915 # order below depends on implementation of
3916 # hg.addbranchrevs(). opts['bookmark'] is ignored,
3916 # hg.addbranchrevs(). opts['bookmark'] is ignored,
3917 # because 'checkout' is determined without it.
3917 # because 'checkout' is determined without it.
3918 if opts.get('rev'):
3918 if opts.get('rev'):
3919 brev = opts['rev'][0]
3919 brev = opts['rev'][0]
3920 elif opts.get('branch'):
3920 elif opts.get('branch'):
3921 brev = opts['branch'][0]
3921 brev = opts['branch'][0]
3922 else:
3922 else:
3923 brev = branches[0]
3923 brev = branches[0]
3924 repo._subtoppath = source
3924 repo._subtoppath = source
3925 try:
3925 try:
3926 ret = postincoming(ui, repo, modheads, opts.get('update'),
3926 ret = postincoming(ui, repo, modheads, opts.get('update'),
3927 checkout, brev)
3927 checkout, brev)
3928
3928
3929 finally:
3929 finally:
3930 del repo._subtoppath
3930 del repo._subtoppath
3931
3931
3932 finally:
3932 finally:
3933 other.close()
3933 other.close()
3934 return ret
3934 return ret
3935
3935
@command('^push',
    [('f', 'force', None, _('force push')),
    ('r', 'rev', [],
     _('a changeset intended to be included in the destination'),
     _('REV')),
    ('B', 'bookmark', [], _("bookmark to push"), _('BOOKMARK')),
    ('b', 'branch', [],
     _('a specific branch you would like to push'), _('BRANCH')),
    ('', 'new-branch', False, _('allow pushing a new branch')),
    ] + remoteopts,
    _('[-f] [-r REV]... [-e CMD] [--remotecmd CMD] [DEST]'))
def push(ui, repo, dest=None, **opts):
    """push changes to the specified destination

    Push changesets from the local repository to the specified
    destination.

    This operation is symmetrical to pull: it is identical to a pull
    in the destination repository from the current one.

    By default, push will not allow creation of new heads at the
    destination, since multiple heads would make it unclear which head
    to use. In this situation, it is recommended to pull and merge
    before pushing.

    Use --new-branch if you want to allow push to create a new named
    branch that is not present at the destination. This allows you to
    only create a new branch without forcing other changes.

    .. note::

       Extra care should be taken with the -f/--force option,
       which will push all new heads on all branches, an action which will
       almost always cause confusion for collaborators.

    If -r/--rev is used, the specified revision and all its ancestors
    will be pushed to the remote repository.

    If -B/--bookmark is used, the specified bookmarked revision, its
    ancestors, and the bookmark will be pushed to the remote
    repository. Specifying ``.`` is equivalent to specifying the active
    bookmark's name.

    Please see :hg:`help urls` for important details about ``ssh://``
    URLs. If DESTINATION is omitted, a default path will be used.

    Returns 0 if push was successful, 1 if nothing to push.
    """

    opts = pycompat.byteskwargs(opts)
    if opts.get('bookmark'):
        ui.setconfig('bookmarks', 'pushing', opts['bookmark'], 'push')
        for b in opts['bookmark']:
            # translate -B options to -r so changesets get pushed
            b = repo._bookmarks.expandname(b)
            if b in repo._bookmarks:
                opts.setdefault('rev', []).append(b)
            else:
                # if we try to push a deleted bookmark, translate it to null
                # this lets simultaneous -r, -b options continue working
                opts.setdefault('rev', []).append("null")

    path = ui.paths.getpath(dest, default=('default-push', 'default'))
    if not path:
        raise error.Abort(_('default repository not configured!'),
                          hint=_("see 'hg help config.paths'"))
    dest = path.pushloc or path.loc
    branches = (path.branch, opts.get('branch') or [])
    ui.status(_('pushing to %s\n') % util.hidepassword(dest))
    revs, checkout = hg.addbranchrevs(repo, repo, branches, opts.get('rev'))
    other = hg.peer(repo, opts, dest)

    if revs:
        revs = [repo.lookup(r) for r in scmutil.revrange(repo, revs)]
        if not revs:
            raise error.Abort(_("specified revisions evaluate to an empty set"),
                              hint=_("use different revision arguments"))
    elif path.pushrev:
        # It doesn't make any sense to specify ancestor revisions. So limit
        # to DAG heads to make discovery simpler.
        expr = revsetlang.formatspec('heads(%r)', path.pushrev)
        revs = scmutil.revrange(repo, [expr])
        revs = [repo[rev].node() for rev in revs]
        if not revs:
            raise error.Abort(_('default push revset for path evaluates to an '
                                'empty set'))

    repo._subtoppath = dest
    try:
        # push subrepos depth-first for coherent ordering
        c = repo['']
        subs = c.substate # only repos that are committed
        for s in sorted(subs):
            result = c.sub(s).push(opts)
            if result == 0:
                return not result
    finally:
        del repo._subtoppath
    pushop = exchange.push(repo, other, opts.get('force'), revs=revs,
                           newbranch=opts.get('new_branch'),
                           bookmarks=opts.get('bookmark', ()),
                           opargs=opts.get('opargs'))

    result = not pushop.cgresult

    if pushop.bkresult is not None:
        if pushop.bkresult == 2:
            result = 2
        elif not result and pushop.bkresult:
            result = 2

    return result

@command('recover', [])
def recover(ui, repo):
    """roll back an interrupted transaction

    Recover from an interrupted commit or pull.

    This command tries to fix the repository status after an
    interrupted operation. It should only be necessary when Mercurial
    suggests it.

    Returns 0 if successful, 1 if nothing to recover or verify fails.
    """
    if repo.recover():
        return hg.verify(repo)
    return 1

@command('^remove|rm',
    [('A', 'after', None, _('record delete for missing files')),
    ('f', 'force', None,
     _('forget added files, delete modified files')),
    ] + subrepoopts + walkopts,
    _('[OPTION]... FILE...'),
    inferrepo=True)
def remove(ui, repo, *pats, **opts):
    """remove the specified files on the next commit

    Schedule the indicated files for removal from the current branch.

    This command schedules the files to be removed at the next commit.
    To undo a remove before that, see :hg:`revert`. To undo added
    files, see :hg:`forget`.

    .. container:: verbose

      -A/--after can be used to remove only files that have already
      been deleted, -f/--force can be used to force deletion, and -Af
      can be used to remove files from the next revision without
      deleting them from the working directory.

      The following table details the behavior of remove for different
      file states (columns) and option combinations (rows). The file
      states are Added [A], Clean [C], Modified [M] and Missing [!]
      (as reported by :hg:`status`). The actions are Warn, Remove
      (from branch) and Delete (from disk):

      ========= == == == ==
      opt/state A  C  M  !
      ========= == == == ==
      none      W  RD W  R
      -f        R  RD RD R
      -A        W  W  W  R
      -Af       R  R  R  R
      ========= == == == ==

      .. note::

         :hg:`remove` never deletes files in Added [A] state from the
         working directory, not even if ``--force`` is specified.

    Returns 0 on success, 1 if any warnings encountered.
    """

    opts = pycompat.byteskwargs(opts)
    after, force = opts.get('after'), opts.get('force')
    if not pats and not after:
        raise error.Abort(_('no files specified'))

    m = scmutil.match(repo[None], pats, opts)
    subrepos = opts.get('subrepos')
    return cmdutil.remove(ui, repo, m, "", after, force, subrepos)

@command('rename|move|mv',
    [('A', 'after', None, _('record a rename that has already occurred')),
    ('f', 'force', None, _('forcibly copy over an existing managed file')),
    ] + walkopts + dryrunopts,
    _('[OPTION]... SOURCE... DEST'))
def rename(ui, repo, *pats, **opts):
    """rename files; equivalent of copy + remove

    Mark dest as copies of sources; mark sources for deletion. If dest
    is a directory, copies are put in that directory. If dest is a
    file, there can only be one source.

    By default, this command copies the contents of files as they
    exist in the working directory. If invoked with -A/--after, the
    operation is recorded, but no copying is performed.

    This command takes effect at the next commit. To undo a rename
    before that, see :hg:`revert`.

    Returns 0 on success, 1 if errors are encountered.
    """
    opts = pycompat.byteskwargs(opts)
    with repo.wlock(False):
        return cmdutil.copy(ui, repo, pats, opts, rename=True)

4145 @command('resolve',
4145 @command('resolve',
4146 [('a', 'all', None, _('select all unresolved files')),
4146 [('a', 'all', None, _('select all unresolved files')),
4147 ('l', 'list', None, _('list state of files needing merge')),
4147 ('l', 'list', None, _('list state of files needing merge')),
4148 ('m', 'mark', None, _('mark files as resolved')),
4148 ('m', 'mark', None, _('mark files as resolved')),
4149 ('u', 'unmark', None, _('mark files as unresolved')),
4149 ('u', 'unmark', None, _('mark files as unresolved')),
4150 ('n', 'no-status', None, _('hide status prefix'))]
4150 ('n', 'no-status', None, _('hide status prefix'))]
4151 + mergetoolopts + walkopts + formatteropts,
4151 + mergetoolopts + walkopts + formatteropts,
4152 _('[OPTION]... [FILE]...'),
4152 _('[OPTION]... [FILE]...'),
4153 inferrepo=True)
4153 inferrepo=True)
4154 def resolve(ui, repo, *pats, **opts):
4154 def resolve(ui, repo, *pats, **opts):
4155 """redo merges or set/view the merge status of files
4155 """redo merges or set/view the merge status of files
4156
4156
4157 Merges with unresolved conflicts are often the result of
4157 Merges with unresolved conflicts are often the result of
4158 non-interactive merging using the ``internal:merge`` configuration
4158 non-interactive merging using the ``internal:merge`` configuration
4159 setting, or a command-line merge tool like ``diff3``. The resolve
4159 setting, or a command-line merge tool like ``diff3``. The resolve
4160 command is used to manage the files involved in a merge, after
4160 command is used to manage the files involved in a merge, after
4161 :hg:`merge` has been run, and before :hg:`commit` is run (i.e. the
4161 :hg:`merge` has been run, and before :hg:`commit` is run (i.e. the
4162 working directory must have two parents). See :hg:`help
4162 working directory must have two parents). See :hg:`help
4163 merge-tools` for information on configuring merge tools.
4163 merge-tools` for information on configuring merge tools.
4164
4164
4165 The resolve command can be used in the following ways:
4165 The resolve command can be used in the following ways:
4166
4166
4167 - :hg:`resolve [--tool TOOL] FILE...`: attempt to re-merge the specified
4167 - :hg:`resolve [--tool TOOL] FILE...`: attempt to re-merge the specified
4168 files, discarding any previous merge attempts. Re-merging is not
4168 files, discarding any previous merge attempts. Re-merging is not
4169 performed for files already marked as resolved. Use ``--all/-a``
4169 performed for files already marked as resolved. Use ``--all/-a``
4170 to select all unresolved files. ``--tool`` can be used to specify
4170 to select all unresolved files. ``--tool`` can be used to specify
4171 the merge tool used for the given files. It overrides the HGMERGE
4171 the merge tool used for the given files. It overrides the HGMERGE
4172 environment variable and your configuration files. Previous file
4172 environment variable and your configuration files. Previous file
4173 contents are saved with a ``.orig`` suffix.
4173 contents are saved with a ``.orig`` suffix.
4174
4174
4175 - :hg:`resolve -m [FILE]`: mark a file as having been resolved
4175 - :hg:`resolve -m [FILE]`: mark a file as having been resolved
4176 (e.g. after having manually fixed-up the files). The default is
4176 (e.g. after having manually fixed-up the files). The default is
4177 to mark all unresolved files.
4177 to mark all unresolved files.
4178
4178
4179 - :hg:`resolve -u [FILE]...`: mark a file as unresolved. The
4179 - :hg:`resolve -u [FILE]...`: mark a file as unresolved. The
4180 default is to mark all resolved files.
4180 default is to mark all resolved files.
4181
4181
4182 - :hg:`resolve -l`: list files which had or still have conflicts.
4182 - :hg:`resolve -l`: list files which had or still have conflicts.
4183 In the printed list, ``U`` = unresolved and ``R`` = resolved.
4183 In the printed list, ``U`` = unresolved and ``R`` = resolved.
4184 You can use ``set:unresolved()`` or ``set:resolved()`` to filter
4184 You can use ``set:unresolved()`` or ``set:resolved()`` to filter
4185 the list. See :hg:`help filesets` for details.
4185 the list. See :hg:`help filesets` for details.
4186
4186
4187 .. note::
4187 .. note::
4188
4188
4189 Mercurial will not let you commit files with unresolved merge
4189 Mercurial will not let you commit files with unresolved merge
4190 conflicts. You must use :hg:`resolve -m ...` before you can
4190 conflicts. You must use :hg:`resolve -m ...` before you can
4191 commit after a conflicting merge.
4191 commit after a conflicting merge.
4192
4192
4193 Returns 0 on success, 1 if any files fail a resolve attempt.
4193 Returns 0 on success, 1 if any files fail a resolve attempt.
4194 """
4194 """
4195
4195
4196 opts = pycompat.byteskwargs(opts)
4196 opts = pycompat.byteskwargs(opts)
4197 flaglist = 'all mark unmark list no_status'.split()
4197 flaglist = 'all mark unmark list no_status'.split()
4198 all, mark, unmark, show, nostatus = \
4198 all, mark, unmark, show, nostatus = \
4199 [opts.get(o) for o in flaglist]
4199 [opts.get(o) for o in flaglist]
4200
4200
4201 if (show and (mark or unmark)) or (mark and unmark):
4201 if (show and (mark or unmark)) or (mark and unmark):
4202 raise error.Abort(_("too many options specified"))
4202 raise error.Abort(_("too many options specified"))
4203 if pats and all:
4203 if pats and all:
4204 raise error.Abort(_("can't specify --all and patterns"))
4204 raise error.Abort(_("can't specify --all and patterns"))
4205 if not (all or pats or show or mark or unmark):
4205 if not (all or pats or show or mark or unmark):
4206 raise error.Abort(_('no files or directories specified'),
4206 raise error.Abort(_('no files or directories specified'),
4207 hint=('use --all to re-merge all unresolved files'))
4207 hint=('use --all to re-merge all unresolved files'))
4208
4208
4209 if show:
4209 if show:
4210 ui.pager('resolve')
4210 ui.pager('resolve')
4211 fm = ui.formatter('resolve', opts)
4211 fm = ui.formatter('resolve', opts)
4212 ms = mergemod.mergestate.read(repo)
4212 ms = mergemod.mergestate.read(repo)
4213 m = scmutil.match(repo[None], pats, opts)
4213 m = scmutil.match(repo[None], pats, opts)
4214 for f in ms:
4214 for f in ms:
4215 if not m(f):
4215 if not m(f):
4216 continue
4216 continue
4217 l = 'resolve.' + {'u': 'unresolved', 'r': 'resolved',
4217 l = 'resolve.' + {'u': 'unresolved', 'r': 'resolved',
4218 'd': 'driverresolved'}[ms[f]]
4218 'd': 'driverresolved'}[ms[f]]
4219 fm.startitem()
4219 fm.startitem()
4220 fm.condwrite(not nostatus, 'status', '%s ', ms[f].upper(), label=l)
4220 fm.condwrite(not nostatus, 'status', '%s ', ms[f].upper(), label=l)
4221 fm.write('path', '%s\n', f, label=l)
4221 fm.write('path', '%s\n', f, label=l)
4222 fm.end()
4222 fm.end()
4223 return 0
4223 return 0
4224
4224
4225 with repo.wlock():
4225 with repo.wlock():
4226 ms = mergemod.mergestate.read(repo)
4226 ms = mergemod.mergestate.read(repo)
4227
4227
4228 if not (ms.active() or repo.dirstate.p2() != nullid):
4228 if not (ms.active() or repo.dirstate.p2() != nullid):
4229 raise error.Abort(
4229 raise error.Abort(
4230 _('resolve command not applicable when not merging'))
4230 _('resolve command not applicable when not merging'))
4231
4231
4232 wctx = repo[None]
4232 wctx = repo[None]
4233
4233
4234 if ms.mergedriver and ms.mdstate() == 'u':
4234 if ms.mergedriver and ms.mdstate() == 'u':
4235 proceed = mergemod.driverpreprocess(repo, ms, wctx)
4235 proceed = mergemod.driverpreprocess(repo, ms, wctx)
4236 ms.commit()
4236 ms.commit()
4237 # allow mark and unmark to go through
4237 # allow mark and unmark to go through
4238 if not mark and not unmark and not proceed:
4238 if not mark and not unmark and not proceed:
4239 return 1
4239 return 1
4240
4240
4241 m = scmutil.match(wctx, pats, opts)
4241 m = scmutil.match(wctx, pats, opts)
4242 ret = 0
4242 ret = 0
4243 didwork = False
4243 didwork = False
4244 runconclude = False
4244 runconclude = False
4245
4245
4246 tocomplete = []
4246 tocomplete = []
4247 for f in ms:
4247 for f in ms:
4248 if not m(f):
4248 if not m(f):
4249 continue
4249 continue
4250
4250
4251 didwork = True
4251 didwork = True
4252
4252
4253 # don't let driver-resolved files be marked, and run the conclude
4253 # don't let driver-resolved files be marked, and run the conclude
4254 # step if asked to resolve
4254 # step if asked to resolve
4255 if ms[f] == "d":
4255 if ms[f] == "d":
4256 exact = m.exact(f)
4256 exact = m.exact(f)
4257 if mark:
4257 if mark:
4258 if exact:
4258 if exact:
4259 ui.warn(_('not marking %s as it is driver-resolved\n')
4259 ui.warn(_('not marking %s as it is driver-resolved\n')
4260 % f)
4260 % f)
4261 elif unmark:
4261 elif unmark:
4262 if exact:
4262 if exact:
4263 ui.warn(_('not unmarking %s as it is driver-resolved\n')
4263 ui.warn(_('not unmarking %s as it is driver-resolved\n')
4264 % f)
4264 % f)
4265 else:
4265 else:
4266 runconclude = True
4266 runconclude = True
4267 continue
4267 continue
4268
4268
4269 if mark:
4269 if mark:
4270 ms.mark(f, "r")
4270 ms.mark(f, "r")
4271 elif unmark:
4271 elif unmark:
4272 ms.mark(f, "u")
4272 ms.mark(f, "u")
4273 else:
4273 else:
4274 # backup pre-resolve (merge uses .orig for its own purposes)
4274 # backup pre-resolve (merge uses .orig for its own purposes)
4275 a = repo.wjoin(f)
4275 a = repo.wjoin(f)
4276 try:
4276 try:
4277 util.copyfile(a, a + ".resolve")
4277 util.copyfile(a, a + ".resolve")
4278 except (IOError, OSError) as inst:
4278 except (IOError, OSError) as inst:
4279 if inst.errno != errno.ENOENT:
4279 if inst.errno != errno.ENOENT:
4280 raise
4280 raise
4281
4281
4282 try:
4282 try:
4283 # preresolve file
4283 # preresolve file
4284 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
4284 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
4285 'resolve')
4285 'resolve')
4286 complete, r = ms.preresolve(f, wctx)
4286 complete, r = ms.preresolve(f, wctx)
4287 if not complete:
4287 if not complete:
4288 tocomplete.append(f)
4288 tocomplete.append(f)
4289 elif r:
4289 elif r:
4290 ret = 1
4290 ret = 1
4291 finally:
4291 finally:
4292 ui.setconfig('ui', 'forcemerge', '', 'resolve')
4292 ui.setconfig('ui', 'forcemerge', '', 'resolve')
4293 ms.commit()
4293 ms.commit()
4294
4294
4295 # replace filemerge's .orig file with our resolve file, but only
4295 # replace filemerge's .orig file with our resolve file, but only
4296 # for merges that are complete
4296 # for merges that are complete
4297 if complete:
4297 if complete:
4298 try:
4298 try:
4299 util.rename(a + ".resolve",
4299 util.rename(a + ".resolve",
4300 scmutil.origpath(ui, repo, a))
4300 scmutil.origpath(ui, repo, a))
4301 except OSError as inst:
4301 except OSError as inst:
4302 if inst.errno != errno.ENOENT:
4302 if inst.errno != errno.ENOENT:
4303 raise
4303 raise
4304
4304
4305 for f in tocomplete:
4305 for f in tocomplete:
4306 try:
4306 try:
4307 # resolve file
4307 # resolve file
4308 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
4308 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
4309 'resolve')
4309 'resolve')
4310 r = ms.resolve(f, wctx)
4310 r = ms.resolve(f, wctx)
4311 if r:
4311 if r:
4312 ret = 1
4312 ret = 1
            finally:
                ui.setconfig('ui', 'forcemerge', '', 'resolve')
                ms.commit()

            # replace filemerge's .orig file with our resolve file
            a = repo.wjoin(f)
            try:
                util.rename(a + ".resolve", scmutil.origpath(ui, repo, a))
            except OSError as inst:
                if inst.errno != errno.ENOENT:
                    raise

        ms.commit()
        ms.recordactions()

        if not didwork and pats:
            hint = None
            if not any([p for p in pats if p.find(':') >= 0]):
                pats = ['path:%s' % p for p in pats]
                m = scmutil.match(wctx, pats, opts)
                for f in ms:
                    if not m(f):
                        continue
                    flags = ''.join(['-%s ' % o[0] for o in flaglist
                                     if opts.get(o)])
                    hint = _("(try: hg resolve %s%s)\n") % (
                        flags,
                        ' '.join(pats))
                    break
            ui.warn(_("arguments do not match paths that need resolving\n"))
            if hint:
                ui.warn(hint)
        elif ms.mergedriver and ms.mdstate() != 's':
            # run conclude step when either a driver-resolved file is requested
            # or there are no driver-resolved files
            # we can't use 'ret' to determine whether any files are unresolved
            # because we might not have tried to resolve some
            if ((runconclude or not list(ms.driverresolved()))
                    and not list(ms.unresolved())):
                proceed = mergemod.driverconclude(repo, ms, wctx)
                ms.commit()
                if not proceed:
                    return 1

        # Nudge users into finishing an unfinished operation
        unresolvedf = list(ms.unresolved())
        driverresolvedf = list(ms.driverresolved())
        if not unresolvedf and not driverresolvedf:
            ui.status(_('(no more unresolved files)\n'))
            cmdutil.checkafterresolved(repo)
        elif not unresolvedf:
            ui.status(_('(no more unresolved files -- '
                        'run "hg resolve --all" to conclude)\n'))

    return ret

@command('revert',
    [('a', 'all', None, _('revert all changes when no arguments given')),
    ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
    ('r', 'rev', '', _('revert to the specified revision'), _('REV')),
    ('C', 'no-backup', None, _('do not save backup copies of files')),
    ('i', 'interactive', None,
            _('interactively select the changes (EXPERIMENTAL)')),
    ] + walkopts + dryrunopts,
    _('[OPTION]... [-r REV] [NAME]...'))
def revert(ui, repo, *pats, **opts):
    """restore files to their checkout state

    .. note::

       To check out earlier revisions, you should use :hg:`update REV`.
       To cancel an uncommitted merge (and lose your changes),
       use :hg:`update --clean .`.

    With no revision specified, revert the specified files or directories
    to the contents they had in the parent of the working directory.
    This restores the contents of files to an unmodified
    state and unschedules adds, removes, copies, and renames. If the
    working directory has two parents, you must explicitly specify a
    revision.

    Using the -r/--rev or -d/--date options, revert the given files or
    directories to their states as of a specific revision. Because
    revert does not change the working directory parents, this will
    cause these files to appear modified. This can be helpful to "back
    out" some or all of an earlier change. See :hg:`backout` for a
    related method.

    Modified files are saved with a .orig suffix before reverting.
    To disable these backups, use --no-backup. It is possible to store
    the backup files in a custom directory relative to the root of the
    repository by setting the ``ui.origbackuppath`` configuration
    option.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    See :hg:`help backout` for a way to reverse the effect of an
    earlier changeset.

    Returns 0 on success.
    """

    if opts.get("date"):
        if opts.get("rev"):
            raise error.Abort(_("you can't specify a revision and a date"))
        opts["rev"] = cmdutil.finddate(ui, repo, opts["date"])

    parent, p2 = repo.dirstate.parents()
    if not opts.get('rev') and p2 != nullid:
        # revert after merge is a trap for new users (issue2915)
        raise error.Abort(_('uncommitted merge with no revision specified'),
                          hint=_("use 'hg update' or see 'hg help revert'"))

    ctx = scmutil.revsingle(repo, opts.get('rev'))

    if (not (pats or opts.get('include') or opts.get('exclude') or
             opts.get('all') or opts.get('interactive'))):
        msg = _("no files or directories specified")
        if p2 != nullid:
            hint = _("uncommitted merge, use --all to discard all changes,"
                     " or 'hg update -C .' to abort the merge")
            raise error.Abort(msg, hint=hint)
        dirty = any(repo.status())
        node = ctx.node()
        if node != parent:
            if dirty:
                hint = _("uncommitted changes, use --all to discard all"
                         " changes, or 'hg update %s' to update") % ctx.rev()
            else:
                hint = _("use --all to revert all files,"
                         " or 'hg update %s' to update") % ctx.rev()
        elif dirty:
            hint = _("uncommitted changes, use --all to discard all changes")
        else:
            hint = _("use --all to revert all files")
        raise error.Abort(msg, hint=hint)

    return cmdutil.revert(ui, repo, ctx, (parent, p2), *pats, **opts)

@command('rollback', dryrunopts +
         [('f', 'force', False, _('ignore safety measures'))])
def rollback(ui, repo, **opts):
    """roll back the last transaction (DANGEROUS) (DEPRECATED)

    Please use :hg:`commit --amend` instead of rollback to correct
    mistakes in the last commit.

    This command should be used with care. There is only one level of
    rollback, and there is no way to undo a rollback. It will also
    restore the dirstate at the time of the last transaction, losing
    any dirstate changes since that time. This command does not alter
    the working directory.

    Transactions are used to encapsulate the effects of all commands
    that create new changesets or propagate existing changesets into a
    repository.

    .. container:: verbose

      For example, the following commands are transactional, and their
      effects can be rolled back:

      - commit
      - import
      - pull
      - push (with this repository as the destination)
      - unbundle

      To avoid permanent data loss, rollback will refuse to rollback a
      commit transaction if it isn't checked out. Use --force to
      override this protection.

      The rollback command can be entirely disabled by setting the
      ``ui.rollback`` configuration setting to false. If you're here
      because you want to use rollback and it's disabled, you can
      re-enable the command by setting ``ui.rollback`` to true.

    This command is not intended for use on public repositories. Once
    changes are visible for pull by other users, rolling a transaction
    back locally is ineffective (someone else may already have pulled
    the changes). Furthermore, a race is possible with readers of the
    repository; for example an in-progress pull from the repository
    may fail if a rollback is performed.

    Returns 0 on success, 1 if no rollback data is available.
    """
    if not ui.configbool('ui', 'rollback', True):
        raise error.Abort(_('rollback is disabled because it is unsafe'),
                          hint=('see `hg help -v rollback` for information'))
    return repo.rollback(dryrun=opts.get(r'dry_run'),
                         force=opts.get(r'force'))

@command('root', [])
def root(ui, repo):
    """print the root (top) of the current working directory

    Print the root directory of the current repository.

    Returns 0 on success.
    """
    ui.write(repo.root + "\n")

@command('^serve',
    [('A', 'accesslog', '', _('name of access log file to write to'),
     _('FILE')),
    ('d', 'daemon', None, _('run server in background')),
    ('', 'daemon-postexec', [], _('used internally by daemon mode')),
    ('E', 'errorlog', '', _('name of error log file to write to'), _('FILE')),
    # use string type, then we can check if something was passed
    ('p', 'port', '', _('port to listen on (default: 8000)'), _('PORT')),
    ('a', 'address', '', _('address to listen on (default: all interfaces)'),
     _('ADDR')),
    ('', 'prefix', '', _('prefix path to serve from (default: server root)'),
     _('PREFIX')),
    ('n', 'name', '',
     _('name to show in web pages (default: working directory)'), _('NAME')),
    ('', 'web-conf', '',
     _("name of the hgweb config file (see 'hg help hgweb')"), _('FILE')),
    ('', 'webdir-conf', '', _('name of the hgweb config file (DEPRECATED)'),
     _('FILE')),
    ('', 'pid-file', '', _('name of file to write process ID to'), _('FILE')),
    ('', 'stdio', None, _('for remote clients (ADVANCED)')),
    ('', 'cmdserver', '', _('for remote clients (ADVANCED)'), _('MODE')),
    ('t', 'templates', '', _('web templates to use'), _('TEMPLATE')),
    ('', 'style', '', _('template style to use'), _('STYLE')),
    ('6', 'ipv6', None, _('use IPv6 in addition to IPv4')),
    ('', 'certificate', '', _('SSL certificate file'), _('FILE'))]
    + subrepoopts,
    _('[OPTION]...'),
    optionalrepo=True)
def serve(ui, repo, **opts):
    """start stand-alone webserver

    Start a local HTTP repository browser and pull server. You can use
    this for ad-hoc sharing and browsing of repositories. It is
    recommended to use a real web server to serve a repository for
    longer periods of time.

    Please note that the server does not implement access control.
    This means that, by default, anybody can read from the server and
    nobody can write to it. Set the ``web.allow_push`` option to
    ``*`` to allow everybody to push to the server. You should use a
    real web server if you need to authenticate users.

    By default, the server logs accesses to stdout and errors to
    stderr. Use the -A/--accesslog and -E/--errorlog options to log to
    files.

    To have the server choose a free port number to listen on, specify
    a port number of 0; in this case, the server will print the port
    number it uses.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    if opts["stdio"] and opts["cmdserver"]:
        raise error.Abort(_("cannot use --stdio with --cmdserver"))

    if opts["stdio"]:
        if repo is None:
            raise error.RepoError(_("there is no Mercurial repository here"
                                    " (.hg not found)"))
        s = sshserver.sshserver(ui, repo)
        s.serve_forever()

    service = server.createservice(ui, repo, opts)
    return server.runservice(opts, initfn=service.init, runfn=service.run)

@command('^status|st',
    [('A', 'all', None, _('show status of all files')),
    ('m', 'modified', None, _('show only modified files')),
    ('a', 'added', None, _('show only added files')),
    ('r', 'removed', None, _('show only removed files')),
    ('d', 'deleted', None, _('show only deleted (but tracked) files')),
    ('c', 'clean', None, _('show only files without changes')),
    ('u', 'unknown', None, _('show only unknown (not tracked) files')),
    ('i', 'ignored', None, _('show only ignored files')),
    ('n', 'no-status', None, _('hide status prefix')),
    ('C', 'copies', None, _('show source of copied files')),
    ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
    ('', 'rev', [], _('show difference from revision'), _('REV')),
    ('', 'change', '', _('list the changed files of a revision'), _('REV')),
    ] + walkopts + subrepoopts + formatteropts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def status(ui, repo, *pats, **opts):
    """show changed files in the working directory

    Show status of files in the repository. If names are given, only
    files that match are shown. Files that are clean or ignored or
    the source of a copy/move operation are not listed unless
    -c/--clean, -i/--ignored, -C/--copies or -A/--all are given.
    Unless options described with "show only ..." are given, the
    options -mardu are used.

    Option -q/--quiet hides untracked (unknown and ignored) files
    unless explicitly requested with -u/--unknown or -i/--ignored.

    .. note::

       :hg:`status` may appear to disagree with diff if permissions have
       changed or a merge has occurred. The standard diff format does
       not report permission changes and diff only reports changes
       relative to one merge parent.

    If one revision is given, it is used as the base revision.
    If two revisions are given, the differences between them are
    shown. The --change option can also be used as a shortcut to list
    the changed files of a revision from its first parent.

    The codes used to show the status of files are::

      M = modified
      A = added
      R = removed
      C = clean
      ! = missing (deleted by non-hg command, but still tracked)
      ? = not tracked
      I = ignored
        = origin of the previous file (with --copies)

    .. container:: verbose

      Examples:

      - show changes in the working directory relative to a
        changeset::

          hg status --rev 9353

      - show changes in the working directory relative to the
        current directory (see :hg:`help patterns` for more information)::

          hg status re:

      - show all changes including copies in an existing changeset::

          hg status --copies --change 9353

      - get a NUL separated list of added files, suitable for xargs::

          hg status -an0

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    revs = opts.get('rev')
    change = opts.get('change')

    if revs and change:
        msg = _('cannot specify --rev and --change at the same time')
        raise error.Abort(msg)
    elif change:
        node2 = scmutil.revsingle(repo, change, None).node()
        node1 = repo[node2].p1().node()
    else:
        node1, node2 = scmutil.revpair(repo, revs)

    if pats or ui.configbool('commands', 'status.relative'):
        cwd = repo.getcwd()
    else:
        cwd = ''

    if opts.get('print0'):
        end = '\0'
    else:
        end = '\n'
    copy = {}
    states = 'modified added removed deleted unknown ignored clean'.split()
    show = [k for k in states if opts.get(k)]
    if opts.get('all'):
        show += ui.quiet and (states[:4] + ['clean']) or states
    if not show:
        if ui.quiet:
            show = states[:4]
        else:
            show = states[:5]

    m = scmutil.match(repo[node2], pats, opts)
    stat = repo.status(node1, node2, m,
                       'ignored' in show, 'clean' in show, 'unknown' in show,
                       opts.get('subrepos'))
    changestates = zip(states, pycompat.iterbytestr('MAR!?IC'), stat)

    if (opts.get('all') or opts.get('copies')
        or ui.configbool('ui', 'statuscopies')) and not opts.get('no_status'):
        copy = copies.pathcopies(repo[node1], repo[node2], m)

    ui.pager('status')
    fm = ui.formatter('status', opts)
    fmt = '%s' + end
    showchar = not opts.get('no_status')

    for state, char, files in changestates:
        if state in show:
            label = 'status.' + state
            for f in files:
                fm.startitem()
                fm.condwrite(showchar, 'status', '%s ', char, label=label)
                fm.write('path', fmt, repo.pathto(f, cwd), label=label)
                if f in copy:
                    fm.write("copy", ' %s' + end, repo.pathto(copy[f], cwd),
                             label='status.copied')
    fm.end()

@command('^summary|sum',
    [('', 'remote', None, _('check for push and pull'))], '[--remote]')
def summary(ui, repo, **opts):
    """summarize working directory state

    This generates a brief summary of the working directory state,
    including parents, branch, commit status, phase and available updates.

    With the --remote option, this will check the default paths for
    incoming and outgoing changes. This can be time-consuming.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    ui.pager('summary')
    ctx = repo[None]
    parents = ctx.parents()
    pnode = parents[0].node()
    marks = []

    ms = None
    try:
        ms = mergemod.mergestate.read(repo)
    except error.UnsupportedMergeRecords as e:
        s = ' '.join(e.recordtypes)
        ui.warn(
            _('warning: merge state has unsupported record types: %s\n') % s)
        unresolved = 0
    else:
        unresolved = [f for f in ms if ms[f] == 'u']

    for p in parents:
        # label with log.changeset (instead of log.parent) since this
        # shows a working directory parent *changeset*:
        # i18n: column positioning for "hg summary"
        ui.write(_('parent: %d:%s ') % (p.rev(), p),
                 label=cmdutil._changesetlabels(p))
        ui.write(' '.join(p.tags()), label='log.tag')
        if p.bookmarks():
            marks.extend(p.bookmarks())
        if p.rev() == -1:
            if not len(repo):
                ui.write(_(' (empty repository)'))
            else:
                ui.write(_(' (no revision checked out)'))
        if p.obsolete():
            ui.write(_(' (obsolete)'))
        if p.troubled():
            ui.write(' ('
                     + ', '.join(ui.label(trouble, 'trouble.%s' % trouble)
                                 for trouble in p.troubles())
                     + ')')
        ui.write('\n')
        if p.description():
            ui.status(' ' + p.description().splitlines()[0].strip() + '\n',
                      label='log.summary')

    branch = ctx.branch()
    bheads = repo.branchheads(branch)
    # i18n: column positioning for "hg summary"
    m = _('branch: %s\n') % branch
    if branch != 'default':
        ui.write(m, label='log.branch')
    else:
        ui.status(m, label='log.branch')

    if marks:
        active = repo._activebookmark
        # i18n: column positioning for "hg summary"
        ui.write(_('bookmarks:'), label='log.bookmark')
        if active is not None:
            if active in marks:
                ui.write(' *' + active, label=bookmarks.activebookmarklabel)
4793 ui.write(' *' + active, label=bookmarks.activebookmarklabel)
4794 marks.remove(active)
4794 marks.remove(active)
4795 else:
4795 else:
4796 ui.write(' [%s]' % active, label=bookmarks.activebookmarklabel)
4796 ui.write(' [%s]' % active, label=bookmarks.activebookmarklabel)
4797 for m in marks:
4797 for m in marks:
4798 ui.write(' ' + m, label='log.bookmark')
4798 ui.write(' ' + m, label='log.bookmark')
4799 ui.write('\n', label='log.bookmark')
4799 ui.write('\n', label='log.bookmark')
4800
4800
4801 status = repo.status(unknown=True)
4801 status = repo.status(unknown=True)
4802
4802
4803 c = repo.dirstate.copies()
4803 c = repo.dirstate.copies()
4804 copied, renamed = [], []
4804 copied, renamed = [], []
4805 for d, s in c.iteritems():
4805 for d, s in c.iteritems():
4806 if s in status.removed:
4806 if s in status.removed:
4807 status.removed.remove(s)
4807 status.removed.remove(s)
4808 renamed.append(d)
4808 renamed.append(d)
4809 else:
4809 else:
4810 copied.append(d)
4810 copied.append(d)
4811 if d in status.added:
4811 if d in status.added:
4812 status.added.remove(d)
4812 status.added.remove(d)
4813
4813
    subs = [s for s in ctx.substate if ctx.sub(s).dirty()]

    labels = [(ui.label(_('%d modified'), 'status.modified'), status.modified),
              (ui.label(_('%d added'), 'status.added'), status.added),
              (ui.label(_('%d removed'), 'status.removed'), status.removed),
              (ui.label(_('%d renamed'), 'status.copied'), renamed),
              (ui.label(_('%d copied'), 'status.copied'), copied),
              (ui.label(_('%d deleted'), 'status.deleted'), status.deleted),
              (ui.label(_('%d unknown'), 'status.unknown'), status.unknown),
              (ui.label(_('%d unresolved'), 'resolve.unresolved'), unresolved),
              (ui.label(_('%d subrepos'), 'status.modified'), subs)]
    t = []
    for l, s in labels:
        if s:
            t.append(l % len(s))

    t = ', '.join(t)
    cleanworkdir = False

    if repo.vfs.exists('graftstate'):
        t += _(' (graft in progress)')
    if repo.vfs.exists('updatestate'):
        t += _(' (interrupted update)')
    elif len(parents) > 1:
        t += _(' (merge)')
    elif branch != parents[0].branch():
        t += _(' (new branch)')
    elif (parents[0].closesbranch() and
          pnode in repo.branchheads(branch, closed=True)):
        t += _(' (head closed)')
    elif not (status.modified or status.added or status.removed or renamed or
              copied or subs):
        t += _(' (clean)')
        cleanworkdir = True
    elif pnode not in bheads:
        t += _(' (new branch head)')

    if parents:
        pendingphase = max(p.phase() for p in parents)
    else:
        pendingphase = phases.public

    if pendingphase > phases.newcommitphase(ui):
        t += ' (%s)' % phases.phasenames[pendingphase]

    if cleanworkdir:
        # i18n: column positioning for "hg summary"
        ui.status(_('commit: %s\n') % t.strip())
    else:
        # i18n: column positioning for "hg summary"
        ui.write(_('commit: %s\n') % t.strip())

    # all ancestors of branch heads - all ancestors of parent = new csets
    new = len(repo.changelog.findmissing([pctx.node() for pctx in parents],
                                         bheads))

    if new == 0:
        # i18n: column positioning for "hg summary"
        ui.status(_('update: (current)\n'))
    elif pnode not in bheads:
        # i18n: column positioning for "hg summary"
        ui.write(_('update: %d new changesets (update)\n') % new)
    else:
        # i18n: column positioning for "hg summary"
        ui.write(_('update: %d new changesets, %d branch heads (merge)\n') %
                 (new, len(bheads)))

    t = []
    draft = len(repo.revs('draft()'))
    if draft:
        t.append(_('%d draft') % draft)
    secret = len(repo.revs('secret()'))
    if secret:
        t.append(_('%d secret') % secret)

    if draft or secret:
        ui.status(_('phases: %s\n') % ', '.join(t))

    if obsolete.isenabled(repo, obsolete.createmarkersopt):
        for trouble in ("unstable", "divergent", "bumped"):
            numtrouble = len(repo.revs(trouble + "()"))
            # We write all the possibilities to ease translation
            troublemsg = {
                "unstable": _("unstable: %d changesets"),
                "divergent": _("divergent: %d changesets"),
                "bumped": _("bumped: %d changesets"),
            }
            if numtrouble > 0:
                ui.status(troublemsg[trouble] % numtrouble + "\n")

    cmdutil.summaryhooks(ui, repo)

    if opts.get('remote'):
        needsincoming, needsoutgoing = True, True
    else:
        needsincoming, needsoutgoing = False, False
        for i, o in cmdutil.summaryremotehooks(ui, repo, opts, None):
            if i:
                needsincoming = True
            if o:
                needsoutgoing = True
        if not needsincoming and not needsoutgoing:
            return

    def getincoming():
        source, branches = hg.parseurl(ui.expandpath('default'))
        sbranch = branches[0]
        try:
            other = hg.peer(repo, {}, source)
        except error.RepoError:
            if opts.get('remote'):
                raise
            return source, sbranch, None, None, None
        revs, checkout = hg.addbranchrevs(repo, other, branches, None)
        if revs:
            revs = [other.lookup(rev) for rev in revs]
        ui.debug('comparing with %s\n' % util.hidepassword(source))
        repo.ui.pushbuffer()
        commoninc = discovery.findcommonincoming(repo, other, heads=revs)
        repo.ui.popbuffer()
        return source, sbranch, other, commoninc, commoninc[1]

    if needsincoming:
        source, sbranch, sother, commoninc, incoming = getincoming()
    else:
        source = sbranch = sother = commoninc = incoming = None

    def getoutgoing():
        dest, branches = hg.parseurl(ui.expandpath('default-push', 'default'))
        dbranch = branches[0]
        revs, checkout = hg.addbranchrevs(repo, repo, branches, None)
        if source != dest:
            try:
                dother = hg.peer(repo, {}, dest)
            except error.RepoError:
                if opts.get('remote'):
                    raise
                return dest, dbranch, None, None
            ui.debug('comparing with %s\n' % util.hidepassword(dest))
        elif sother is None:
            # there is no explicit destination peer, but source one is invalid
            return dest, dbranch, None, None
        else:
            dother = sother
        if (source != dest or (sbranch is not None and sbranch != dbranch)):
            common = None
        else:
            common = commoninc
        if revs:
            revs = [repo.lookup(rev) for rev in revs]
        repo.ui.pushbuffer()
        outgoing = discovery.findcommonoutgoing(repo, dother, onlyheads=revs,
                                                commoninc=common)
        repo.ui.popbuffer()
        return dest, dbranch, dother, outgoing

    if needsoutgoing:
        dest, dbranch, dother, outgoing = getoutgoing()
    else:
        dest = dbranch = dother = outgoing = None

    if opts.get('remote'):
        t = []
        if incoming:
            t.append(_('1 or more incoming'))
        o = outgoing.missing
        if o:
            t.append(_('%d outgoing') % len(o))
        other = dother or sother
        if 'bookmarks' in other.listkeys('namespaces'):
            counts = bookmarks.summary(repo, other)
            if counts[0] > 0:
                t.append(_('%d incoming bookmarks') % counts[0])
            if counts[1] > 0:
                t.append(_('%d outgoing bookmarks') % counts[1])

        if t:
            # i18n: column positioning for "hg summary"
            ui.write(_('remote: %s\n') % (', '.join(t)))
        else:
            # i18n: column positioning for "hg summary"
            ui.status(_('remote: (synced)\n'))

    cmdutil.summaryremotehooks(ui, repo, opts,
                               ((source, sbranch, sother, commoninc),
                                (dest, dbranch, dother, outgoing)))

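# Illustrative, standalone sketch (the helper name below is hypothetical
# and not part of Mercurial's API): summary() assembles its "commit:" line
# by formatting each non-empty status bucket with a count template and
# joining the results with commas, as the `labels` loop above does.
def _sketch_commitline(buckets):
    # buckets: iterable of (template, items) pairs; empty buckets are skipped
    return ', '.join(tmpl % len(items) for tmpl, items in buckets if items)

# e.g. _sketch_commitline([('%d modified', ['a', 'b']), ('%d added', [])])
# yields '2 modified'.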
@command('tag',
    [('f', 'force', None, _('force tag')),
    ('l', 'local', None, _('make the tag local')),
    ('r', 'rev', '', _('revision to tag'), _('REV')),
    ('', 'remove', None, _('remove a tag')),
    # -l/--local is already there, commitopts cannot be used
    ('e', 'edit', None, _('invoke editor on commit messages')),
    ('m', 'message', '', _('use text as commit message'), _('TEXT')),
    ] + commitopts2,
    _('[-f] [-l] [-m TEXT] [-d DATE] [-u USER] [-r REV] NAME...'))
def tag(ui, repo, name1, *names, **opts):
    """add one or more tags for the current or given revision

    Name a particular revision using <name>.

    Tags are used to name particular revisions of the repository and are
    very useful to compare different revisions, to go back to significant
    earlier versions or to mark branch points as releases, etc. Changing
    an existing tag is normally disallowed; use -f/--force to override.

    If no revision is given, the parent of the working directory is
    used.

    To facilitate version control, distribution, and merging of tags,
    they are stored as a file named ".hgtags" which is managed similarly
    to other project files and can be hand-edited if necessary. This
    also means that tagging creates a new commit. The file
    ".hg/localtags" is used for local tags (not shared among
    repositories).

    Tag commits are usually made at the head of a branch. If the parent
    of the working directory is not a branch head, :hg:`tag` aborts; use
    -f/--force to force the tag commit to be based on a non-head
    changeset.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    Since tag names have priority over branch names during revision
    lookup, using an existing branch name as a tag name is discouraged.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    wlock = lock = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        rev_ = "."
        names = [t.strip() for t in (name1,) + names]
        if len(names) != len(set(names)):
            raise error.Abort(_('tag names must be unique'))
        for n in names:
            scmutil.checknewlabel(repo, n, 'tag')
            if not n:
                raise error.Abort(_('tag names cannot consist entirely of '
                                    'whitespace'))
        if opts.get('rev') and opts.get('remove'):
            raise error.Abort(_("--rev and --remove are incompatible"))
        if opts.get('rev'):
            rev_ = opts['rev']
        message = opts.get('message')
        if opts.get('remove'):
            if opts.get('local'):
                expectedtype = 'local'
            else:
                expectedtype = 'global'

            for n in names:
                if not repo.tagtype(n):
                    raise error.Abort(_("tag '%s' does not exist") % n)
                if repo.tagtype(n) != expectedtype:
                    if expectedtype == 'global':
                        raise error.Abort(_("tag '%s' is not a global tag") % n)
                    else:
                        raise error.Abort(_("tag '%s' is not a local tag") % n)
            rev_ = 'null'
            if not message:
                # we don't translate commit messages
                message = 'Removed tag %s' % ', '.join(names)
        elif not opts.get('force'):
            for n in names:
                if n in repo.tags():
                    raise error.Abort(_("tag '%s' already exists "
                                        "(use -f to force)") % n)
        if not opts.get('local'):
            p1, p2 = repo.dirstate.parents()
            if p2 != nullid:
                raise error.Abort(_('uncommitted merge'))
            bheads = repo.branchheads()
            if not opts.get('force') and bheads and p1 not in bheads:
                raise error.Abort(_('working directory is not at a branch head '
                                    '(use -f to force)'))
        r = scmutil.revsingle(repo, rev_).node()

        if not message:
            # we don't translate commit messages
            message = ('Added tag %s for changeset %s' %
                       (', '.join(names), short(r)))

        date = opts.get('date')
        if date:
            date = util.parsedate(date)

        if opts.get('remove'):
            editform = 'tag.remove'
        else:
            editform = 'tag.add'
        editor = cmdutil.getcommiteditor(editform=editform,
                                         **pycompat.strkwargs(opts))

        # don't allow tagging the null rev
        if (not opts.get('remove') and
            scmutil.revsingle(repo, rev_).rev() == nullrev):
            raise error.Abort(_("cannot tag null revision"))

        tagsmod.tag(repo, names, r, message, opts.get('local'),
                    opts.get('user'), date, editor=editor)
    finally:
        release(lock, wlock)

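# Illustrative, standalone sketch (the helper name below is hypothetical
# and not part of Mercurial's API): before committing, tag() strips each
# name, rejects duplicates, and rejects names that are empty after
# stripping, mirroring the validation at the top of the function above.
def _sketch_checktagnames(names):
    # normalize, then apply the same uniqueness/whitespace checks as tag()
    names = [n.strip() for n in names]
    if len(names) != len(set(names)):
        raise ValueError('tag names must be unique')
    for n in names:
        if not n:
            raise ValueError('tag names cannot consist entirely of whitespace')
    return names

# e.g. _sketch_checktagnames([' v1.0 ', 'v1.1']) returns ['v1.0', 'v1.1'],
# while duplicate or all-whitespace names raise ValueError.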
@command('tags', formatteropts, '')
def tags(ui, repo, **opts):
    """list repository tags

    This lists both regular and local tags. When the -v/--verbose
    switch is used, a third column "local" is printed for local tags.
    When the -q/--quiet switch is used, only the tag name is printed.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    ui.pager('tags')
    fm = ui.formatter('tags', opts)
    hexfunc = fm.hexfunc
    tagtype = ""

    for t, n in reversed(repo.tagslist()):
        hn = hexfunc(n)
        label = 'tags.normal'
        tagtype = ''
        if repo.tagtype(t) == 'local':
            label = 'tags.local'
            tagtype = 'local'

        fm.startitem()
        fm.write('tag', '%s', t, label=label)
        fmt = " " * (30 - encoding.colwidth(t)) + ' %5d:%s'
        fm.condwrite(not ui.quiet, 'rev node', fmt,
                     repo.changelog.rev(n), hn, label=label)
        fm.condwrite(ui.verbose and tagtype, 'type', ' %s',
                     tagtype, label=label)
        fm.plain('\n')
    fm.end()

@command('tip',
    [('p', 'patch', None, _('show patch')),
    ('g', 'git', None, _('use git extended diff format')),
    ] + templateopts,
    _('[-p] [-g]'))
def tip(ui, repo, **opts):
    """show the tip revision (DEPRECATED)

    The tip revision (usually just called the tip) is the changeset
    most recently added to the repository (and therefore the most
    recently changed head).

    If you have just made a commit, that commit will be the tip. If
    you have just pulled changes from another repository, the tip of
    that repository becomes the current tip. The "tip" tag is special
    and cannot be renamed or assigned to a different changeset.

    This command is deprecated, please use :hg:`heads` instead.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    displayer = cmdutil.show_changeset(ui, repo, opts)
    displayer.show(repo['tip'])
    displayer.close()

@command('unbundle',
    [('u', 'update', None,
     _('update to new branch head if changesets were unbundled'))],
    _('[-u] FILE...'))
def unbundle(ui, repo, fname1, *fnames, **opts):
    """apply one or more bundle files

    Apply one or more bundle files generated by :hg:`bundle`.

    Returns 0 on success, 1 if an update has unresolved files.
    """
    fnames = (fname1,) + fnames

    with repo.lock():
        for fname in fnames:
            f = hg.openpath(ui, fname)
            gen = exchange.readbundle(ui, f, fname)
            if isinstance(gen, streamclone.streamcloneapplier):
                raise error.Abort(
                    _('packed bundles cannot be applied with '
                      '"hg unbundle"'),
                    hint=_('use "hg debugapplystreamclonebundle"'))
            url = 'bundle:' + fname
            if isinstance(gen, bundle2.unbundle20):
                with repo.transaction('unbundle') as tr:
                    try:
                        op = bundle2.applybundle(repo, gen, tr,
                                                 source='unbundle',
                                                 url=url)
                    except error.BundleUnknownFeatureError as exc:
                        raise error.Abort(
                            _('%s: unknown bundle feature, %s') % (fname, exc),
                            hint=_("see https://mercurial-scm.org/"
                                   "wiki/BundleFeature for more "
                                   "information"))
                    modheads = bundle2.combinechangegroupresults(op)
            else:
                txnname = 'unbundle\n%s' % util.hidepassword(url)
                with repo.transaction(txnname) as tr:
                    modheads = bundle2.applybundle1(repo, gen, tr,
                                                    source='unbundle',
                                                    url=url)

    return postincoming(ui, repo, modheads, opts.get(r'update'), None, None)
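
# Illustrative, standalone sketch (the class and helper names below are
# hypothetical stand-ins, not Mercurial types): unbundle() dispatches on
# the parsed bundle's type -- stream-clone bundles are rejected outright,
# bundle2 payloads go through applybundle(), and everything else goes
# through the legacy-format applybundle1() path introduced here.
class StreamClone(object):
    """Stand-in for a packed stream-clone bundle."""

class Bundle2(object):
    """Stand-in for a bundle2 payload."""

def _sketch_dispatch(gen):
    # mirror the isinstance() checks in unbundle() above
    if isinstance(gen, StreamClone):
        raise ValueError('packed bundles cannot be applied with "hg unbundle"')
    if isinstance(gen, Bundle2):
        return 'applybundle'
    return 'applybundle1'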
5224
5226
5227 @command('^update|up|checkout|co',
5228     [('C', 'clean', None, _('discard uncommitted changes (no backup)')),
5229      ('c', 'check', None, _('require clean working directory')),
5230      ('m', 'merge', None, _('merge uncommitted changes')),
5231      ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
5232      ('r', 'rev', '', _('revision'), _('REV'))
5233     ] + mergetoolopts,
5234     _('[-C|-c|-m] [-d DATE] [[-r] REV]'))
5235 def update(ui, repo, node=None, rev=None, clean=False, date=None, check=False,
5236            merge=None, tool=None):
5237     """update working directory (or switch revisions)
5238
5239     Update the repository's working directory to the specified
5240     changeset. If no changeset is specified, update to the tip of the
5241     current named branch and move the active bookmark (see :hg:`help
5242     bookmarks`).
5243
5244     Update sets the working directory's parent revision to the specified
5245     changeset (see :hg:`help parents`).
5246
5247     If the changeset is not a descendant or ancestor of the working
5248     directory's parent and there are uncommitted changes, the update is
5249     aborted. With the -c/--check option, the working directory is checked
5250     for uncommitted changes; if none are found, the working directory is
5251     updated to the specified changeset.
5252
5253     .. container:: verbose
5254
5255       The -C/--clean, -c/--check, and -m/--merge options control what
5256       happens if the working directory contains uncommitted changes.
5257       At most one of them can be specified.
5258
5259       1. If no option is specified, and if
5260          the requested changeset is an ancestor or descendant of
5261          the working directory's parent, the uncommitted changes
5262          are merged into the requested changeset and the merged
5263          result is left uncommitted. If the requested changeset is
5264          not an ancestor or descendant (that is, it is on another
5265          branch), the update is aborted and the uncommitted changes
5266          are preserved.
5267
5268       2. With the -m/--merge option, the update is allowed even if the
5269          requested changeset is not an ancestor or descendant of
5270          the working directory's parent.
5271
5272       3. With the -c/--check option, the update is aborted and the
5273          uncommitted changes are preserved.
5274
5275       4. With the -C/--clean option, uncommitted changes are discarded and
5276          the working directory is updated to the requested changeset.
5277
5278     To cancel an uncommitted merge (and lose your changes), use
5279     :hg:`update --clean .`.
5280
5281     Use null as the changeset to remove the working directory (like
5282     :hg:`clone -U`).
5283
5284     If you want to revert just one file to an older revision, use
5285     :hg:`revert [-r REV] NAME`.
5286
5287     See :hg:`help dates` for a list of formats valid for -d/--date.
5288
5289     Returns 0 on success, 1 if there are unresolved files.
5290     """
5291     if rev and node:
5292         raise error.Abort(_("please specify just one revision"))
5293
5294     if ui.configbool('commands', 'update.requiredest'):
5295         if not node and not rev and not date:
5296             raise error.Abort(_('you must specify a destination'),
5297                               hint=_('for example: hg update ".::"'))
5298
5299     if rev is None or rev == '':
5300         rev = node
5301
5302     if date and rev is not None:
5303         raise error.Abort(_("you can't specify a revision and a date"))
5304
5305     if len([x for x in (clean, check, merge) if x]) > 1:
5306         raise error.Abort(_("can only specify one of -C/--clean, -c/--check, "
5307                             "or -m/--merge"))
5308
5309     updatecheck = None
5310     if check:
5311         updatecheck = 'abort'
5312     elif merge:
5313         updatecheck = 'none'
5314
5315     with repo.wlock():
5316         cmdutil.clearunfinished(repo)
5317
5318         if date:
5319             rev = cmdutil.finddate(ui, repo, date)
5320
5321         # if we defined a bookmark, we have to remember the original name
5322         brev = rev
5323         rev = scmutil.revsingle(repo, rev, rev).rev()
5324
5325         repo.ui.setconfig('ui', 'forcemerge', tool, 'update')
5326
5327         return hg.updatetotally(ui, repo, rev, brev, clean=clean,
5328                                 updatecheck=updatecheck)
5329
5330 @command('verify', [])
5331 def verify(ui, repo):
5332     """verify the integrity of the repository
5333
5334     Verify the integrity of the current repository.
5335
5336     This will perform an extensive check of the repository's
5337     integrity, validating the hashes and checksums of each entry in
5338     the changelog, manifest, and tracked files, as well as the
5339     integrity of their crosslinks and indices.
5340
5341     Please see https://mercurial-scm.org/wiki/RepositoryCorruption
5342     for more information about recovery from corruption of the
5343     repository.
5344
5345     Returns 0 on success, 1 if errors are encountered.
5346     """
5347     return hg.verify(repo)
5348
5349 @command('version', [] + formatteropts, norepo=True)
5350 def version_(ui, **opts):
5351     """output version and copyright information"""
5352     opts = pycompat.byteskwargs(opts)
5353     if ui.verbose:
5354         ui.pager('version')
5355     fm = ui.formatter("version", opts)
5356     fm.startitem()
5357     fm.write("ver", _("Mercurial Distributed SCM (version %s)\n"),
5358              util.version())
5359     license = _(
5360         "(see https://mercurial-scm.org for more information)\n"
5361         "\nCopyright (C) 2005-2017 Matt Mackall and others\n"
5362         "This is free software; see the source for copying conditions. "
5363         "There is NO\nwarranty; "
5364         "not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n"
5365     )
5366     if not ui.quiet:
5367         fm.plain(license)
5368
5369     if ui.verbose:
5370         fm.plain(_("\nEnabled extensions:\n\n"))
5371         # format names and versions into columns
5372         names = []
5373         vers = []
5374         isinternals = []
5375         for name, module in extensions.extensions():
5376             names.append(name)
5377             vers.append(extensions.moduleversion(module) or None)
5378             isinternals.append(extensions.ismoduleinternal(module))
5379         fn = fm.nested("extensions")
5380         if names:
5381             namefmt = " %%-%ds " % max(len(n) for n in names)
5382             places = [_("external"), _("internal")]
5383             for n, v, p in zip(names, vers, isinternals):
5384                 fn.startitem()
5385                 fn.condwrite(ui.verbose, "name", namefmt, n)
5386                 if ui.verbose:
5387                     fn.plain("%s " % places[p])
5388                 fn.data(bundled=p)
5389                 fn.condwrite(ui.verbose and v, "ver", "%s", v)
5390                 if ui.verbose:
5391                     fn.plain("\n")
5392         fn.end()
5393     fm.end()
5394
5395 def loadcmdtable(ui, name, cmdtable):
5396     """Load command functions from specified cmdtable
5397     """
5398     overrides = [cmd for cmd in cmdtable if cmd in table]
5399     if overrides:
5400         ui.warn(_("extension '%s' overrides commands: %s\n")
5401                 % (name, " ".join(overrides)))
5402     table.update(cmdtable)
@@ -1,2012 +1,2012 b''
1 # exchange.py - utility to exchange data between repos.
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
7
8 from __future__ import absolute_import
9
10 import errno
11 import hashlib
12
13 from .i18n import _
14 from .node import (
15     hex,
16     nullid,
17 )
18 from . import (
19     bookmarks as bookmod,
20     bundle2,
21     changegroup,
22     discovery,
23     error,
24     lock as lockmod,
25     obsolete,
26     phases,
27     pushkey,
28     pycompat,
29     scmutil,
30     sslutil,
31     streamclone,
32     url as urlmod,
33     util,
34 )
35
36 urlerr = util.urlerr
37 urlreq = util.urlreq
38
39 # Maps bundle version human names to changegroup versions.
40 _bundlespeccgversions = {'v1': '01',
41                          'v2': '02',
42                          'packed1': 's1',
43                          'bundle2': '02', #legacy
44                         }
45
46 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
47 _bundlespecv1compengines = {'gzip', 'bzip2', 'none'}
48
49 def parsebundlespec(repo, spec, strict=True, externalnames=False):
50     """Parse a bundle string specification into parts.
51
52     Bundle specifications denote a well-defined bundle/exchange format.
53     The content of a given specification should not change over time in
54     order to ensure that bundles produced by a newer version of Mercurial are
55     readable from an older version.
56
57     The string currently has the form:
58
59        <compression>-<type>[;<parameter0>[;<parameter1>]]
60
61     Where <compression> is one of the supported compression formats
62     and <type> is (currently) a version string. A ";" can follow the type and
63     all text afterwards is interpreted as URI encoded, ";" delimited key=value
64     pairs.
65
66     If ``strict`` is True (the default) <compression> is required. Otherwise,
67     it is optional.
68
69     If ``externalnames`` is False (the default), the human-centric names will
70     be converted to their internal representation.
71
72     Returns a 3-tuple of (compression, version, parameters). Compression will
73     be ``None`` if not in strict mode and a compression isn't defined.
74
75     An ``InvalidBundleSpecification`` is raised when the specification is
76     not syntactically well formed.
77
78     An ``UnsupportedBundleSpecification`` is raised when the compression or
79     bundle type/version is not recognized.
80
81     Note: this function will likely eventually return a more complex data
82     structure, including bundle2 part information.
83     """
84     def parseparams(s):
85         if ';' not in s:
86             return s, {}
87
88         params = {}
89         version, paramstr = s.split(';', 1)
90
91         for p in paramstr.split(';'):
92             if '=' not in p:
93                 raise error.InvalidBundleSpecification(
94                     _('invalid bundle specification: '
95                       'missing "=" in parameter: %s') % p)
96
97             key, value = p.split('=', 1)
98             key = urlreq.unquote(key)
99             value = urlreq.unquote(value)
100             params[key] = value
101
102         return version, params
103
104
105     if strict and '-' not in spec:
106         raise error.InvalidBundleSpecification(
107             _('invalid bundle specification; '
108               'must be prefixed with compression: %s') % spec)
109
110     if '-' in spec:
111         compression, version = spec.split('-', 1)
112
113         if compression not in util.compengines.supportedbundlenames:
114             raise error.UnsupportedBundleSpecification(
115                 _('%s compression is not supported') % compression)
116
117         version, params = parseparams(version)
118
119         if version not in _bundlespeccgversions:
120             raise error.UnsupportedBundleSpecification(
121                 _('%s is not a recognized bundle version') % version)
122     else:
123         # Value could be just the compression or just the version, in which
124         # case some defaults are assumed (but only when not in strict mode).
125         assert not strict
126
127         spec, params = parseparams(spec)
128
129         if spec in util.compengines.supportedbundlenames:
130             compression = spec
131             version = 'v1'
132             # Generaldelta repos require v2.
133             if 'generaldelta' in repo.requirements:
134                 version = 'v2'
135             # Modern compression engines require v2.
136             if compression not in _bundlespecv1compengines:
137                 version = 'v2'
138         elif spec in _bundlespeccgversions:
139             if spec == 'packed1':
140                 compression = 'none'
141             else:
142                 compression = 'bzip2'
143             version = spec
144         else:
145             raise error.UnsupportedBundleSpecification(
146                 _('%s is not a recognized bundle specification') % spec)
147
148     # Bundle version 1 only supports a known set of compression engines.
149     if version == 'v1' and compression not in _bundlespecv1compengines:
150         raise error.UnsupportedBundleSpecification(
151             _('compression engine %s is not supported on v1 bundles') %
152             compression)
153
154     # The specification for packed1 can optionally declare the data formats
155     # required to apply it. If we see this metadata, compare against what the
156     # repo supports and error if the bundle isn't compatible.
157     if version == 'packed1' and 'requirements' in params:
158         requirements = set(params['requirements'].split(','))
159         missingreqs = requirements - repo.supportedformats
160         if missingreqs:
161             raise error.UnsupportedBundleSpecification(
162                 _('missing support for repository features: %s') %
163                 ', '.join(sorted(missingreqs)))
164
165     if not externalnames:
166         engine = util.compengines.forbundlename(compression)
167         compression = engine.bundletype()[1]
168         version = _bundlespeccgversions[version]
169     return compression, version, params
170
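The `<compression>-<type>[;k=v...]` grammar described in the docstring can be exercised outside Mercurial. The following is a standalone sketch of the strict-mode parsing path, with simplified stand-ins (`CGVERSIONS`, `COMPRESSIONS`) for the real `_bundlespeccgversions` table and compression-engine registry:

```python
from urllib.parse import unquote

# Simplified stand-ins for the registries used by parsebundlespec().
CGVERSIONS = {'v1': '01', 'v2': '02', 'packed1': 's1', 'bundle2': '02'}
COMPRESSIONS = {'gzip', 'bzip2', 'none', 'zstd'}

def parse_spec(spec):
    """Split '<compression>-<type>[;k=v...]' into (compression, version, params)."""
    def parseparams(s):
        # Everything after the first ';' is URI-encoded key=value pairs.
        if ';' not in s:
            return s, {}
        version, paramstr = s.split(';', 1)
        params = {}
        for p in paramstr.split(';'):
            key, value = p.split('=', 1)
            params[unquote(key)] = unquote(value)
        return version, params

    compression, version = spec.split('-', 1)
    if compression not in COMPRESSIONS:
        raise ValueError('%s compression is not supported' % compression)
    version, params = parseparams(version)
    if version not in CGVERSIONS:
        raise ValueError('%s is not a recognized bundle version' % version)
    return compression, version, params
```

For example, `parse_spec('none-packed1;requirements=generaldelta%2Crevlogv1')` yields a `requirements` parameter with the comma decoded, matching the packed1 requirements check above.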
171 def readbundle(ui, fh, fname, vfs=None):
172     header = changegroup.readexactly(fh, 4)
173
174     alg = None
175     if not fname:
176         fname = "stream"
177     if not header.startswith('HG') and header.startswith('\0'):
178         fh = changegroup.headerlessfixup(fh, header)
179         header = "HG10"
180         alg = 'UN'
181     elif vfs:
182         fname = vfs.join(fname)
183
184     magic, version = header[0:2], header[2:4]
185
186     if magic != 'HG':
187         raise error.Abort(_('%s: not a Mercurial bundle') % fname)
188     if version == '10':
189         if alg is None:
190             alg = changegroup.readexactly(fh, 2)
191         return changegroup.cg1unpacker(fh, alg)
192     elif version.startswith('2'):
193         return bundle2.getunbundler(ui, fh, magicstring=magic + version)
194     elif version == 'S1':
195         return streamclone.streamcloneapplier(fh)
196     else:
197         raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))
198
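`readbundle()` dispatches entirely on the first four header bytes: `HG10` is a bundle1 changegroup, `HG2x` is bundle2, `HGS1` is a packed stream clone, and headerless data starting with a NUL byte is fixed up as uncompressed `HG10`. A small sketch of just that classification logic (names here are illustrative, not Mercurial API):

```python
def identify_bundle(header):
    """Classify a bundle by its 4-byte header, mirroring readbundle's dispatch.

    Returns 'cg1' (HG10 changegroup), 'bundle2' (HG2x), or 'stream' (HGS1).
    """
    if not header.startswith(b'HG') and header.startswith(b'\0'):
        # Headerless changegroup: treated as HG10 with the 'UN' algorithm.
        return 'cg1'
    magic, version = header[0:2], header[2:4]
    if magic != b'HG':
        raise ValueError('not a Mercurial bundle')
    if version == b'10':
        return 'cg1'
    elif version.startswith(b'2'):
        return 'bundle2'
    elif version == b'S1':
        return 'stream'
    raise ValueError('unknown bundle version %r' % version)
```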
199 def getbundlespec(ui, fh):
200     """Infer the bundlespec from a bundle file handle.
201
202     The input file handle is seeked and the original seek position is not
203     restored.
204     """
205     def speccompression(alg):
206         try:
207             return util.compengines.forbundletype(alg).bundletype()[0]
208         except KeyError:
209             return None
210
211     b = readbundle(ui, fh, None)
212     if isinstance(b, changegroup.cg1unpacker):
213         alg = b._type
214         if alg == '_truncatedBZ':
215             alg = 'BZ'
216         comp = speccompression(alg)
217         if not comp:
218             raise error.Abort(_('unknown compression algorithm: %s') % alg)
219         return '%s-v1' % comp
220     elif isinstance(b, bundle2.unbundle20):
221         if 'Compression' in b.params:
222             comp = speccompression(b.params['Compression'])
223             if not comp:
224                 raise error.Abort(_('unknown compression algorithm: %s') % comp)
225         else:
226             comp = 'none'
227
228         version = None
229         for part in b.iterparts():
230             if part.type == 'changegroup':
231                 version = part.params['version']
232                 if version in ('01', '02'):
233                     version = 'v2'
234                 else:
235                     raise error.Abort(_('changegroup version %s does not have '
236                                         'a known bundlespec') % version,
237                                       hint=_('try upgrading your Mercurial '
238                                              'client'))
239
240         if not version:
241             raise error.Abort(_('could not identify changegroup version in '
242                                 'bundle'))
243
244         return '%s-%s' % (comp, version)
245     elif isinstance(b, streamclone.streamcloneapplier):
246         requirements = streamclone.readbundle1header(fh)[2]
247         params = 'requirements=%s' % ','.join(sorted(requirements))
248         return 'none-packed1;%s' % urlreq.quote(params)
249     else:
250         raise error.Abort(_('unknown bundle type: %s') % b)
251
252 def _computeoutgoing(repo, heads, common):
253     """Computes which revs are outgoing given a set of common
254     and a set of heads.
255
256     This is a separate function so extensions can have access to
257     the logic.
258
259     Returns a discovery.outgoing object.
260     """
261     cl = repo.changelog
262     if common:
263         hasnode = cl.hasnode
264         common = [n for n in common if hasnode(n)]
265     else:
266         common = [nullid]
267     if not heads:
268         heads = cl.heads()
269     return discovery.outgoing(repo, common, heads)
270
271 def _forcebundle1(op):
272     """return true if a pull/push must use bundle1
273
274     This function is used to allow testing of the older bundle version"""
275     ui = op.repo.ui
276     forcebundle1 = False
277     # The goal of this config is to allow developers to choose the bundle
278     # version used during exchange. This is especially handy during tests.
279     # Value is a list of bundle versions to pick from; the highest supported
280     # version should be used.
281     #
282     # developer config: devel.legacy.exchange
283     exchange = ui.configlist('devel', 'legacy.exchange')
284     forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
285     return forcebundle1 or not op.remote.capable('bundle2')
286
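The decision in `_forcebundle1()` combines two independent conditions: the developer override in `devel.legacy.exchange`, and whether the remote advertises the `bundle2` capability at all. A self-contained sketch of that rule, with the config list and capability set passed in as plain values (hypothetical helper, not Mercurial API):

```python
def force_bundle1(exchange_list, remote_caps):
    """Decide whether an exchange must fall back to bundle1.

    ``exchange_list`` models ui.configlist('devel', 'legacy.exchange');
    ``remote_caps`` models the remote peer's capability set.
    """
    # The override only forces bundle1 when 'bundle1' is listed and
    # 'bundle2' is not; listing both lets the highest version win.
    forced = 'bundle2' not in exchange_list and 'bundle1' in exchange_list
    return forced or 'bundle2' not in remote_caps
```

So a remote without bundle2 support always gets bundle1, regardless of the developer config.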
287 class pushoperation(object):
288     """An object that represents a single push operation.
289
290     Its purpose is to carry push related state and very common operations.
291
292     A new pushoperation should be created at the beginning of each push and
293     discarded afterward.
294     """
295
296     def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
297                  bookmarks=()):
298         # repo we push from
299         self.repo = repo
300         self.ui = repo.ui
301         # repo we push to
302         self.remote = remote
303         # force option provided
304         self.force = force
305         # revs to be pushed (None is "all")
306         self.revs = revs
307         # bookmarks explicitly pushed
308         self.bookmarks = bookmarks
309         # allow push of new branch
310         self.newbranch = newbranch
311         # did a local lock get acquired?
312         self.locallocked = None
313         # steps already performed
314         # (used to check what steps have been already performed through bundle2)
315         self.stepsdone = set()
316         # Integer version of the changegroup push result
317         # - None means nothing to push
318         # - 0 means HTTP error
319         # - 1 means we pushed and remote head count is unchanged *or*
320         #   we have outgoing changesets but refused to push
321         # - other values as described by addchangegroup()
322         self.cgresult = None
323         # Boolean value for the bookmark push
324         self.bkresult = None
325         # discovery.outgoing object (contains common and outgoing data)
326         self.outgoing = None
327         # all remote topological heads before the push
328         self.remoteheads = None
329 # Details of the remote branch pre and post push
329 # Details of the remote branch pre and post push
330 #
330 #
331 # mapping: {'branch': ([remoteheads],
331 # mapping: {'branch': ([remoteheads],
332 # [newheads],
332 # [newheads],
333 # [unsyncedheads],
333 # [unsyncedheads],
334 # [discardedheads])}
334 # [discardedheads])}
335 # - branch: the branch name
335 # - branch: the branch name
336 # - remoteheads: the list of remote heads known locally
336 # - remoteheads: the list of remote heads known locally
337 # None if the branch is new
337 # None if the branch is new
338 # - newheads: the new remote heads (known locally) with outgoing pushed
338 # - newheads: the new remote heads (known locally) with outgoing pushed
339 # - unsyncedheads: the list of remote heads unknown locally.
339 # - unsyncedheads: the list of remote heads unknown locally.
340 # - discardedheads: the list of remote heads made obsolete by the push
340 # - discardedheads: the list of remote heads made obsolete by the push
341 self.pushbranchmap = None
341 self.pushbranchmap = None
342 # testable as a boolean indicating if any nodes are missing locally.
342 # testable as a boolean indicating if any nodes are missing locally.
343 self.incoming = None
343 self.incoming = None
344 # phases changes that must be pushed along side the changesets
344 # phases changes that must be pushed along side the changesets
345 self.outdatedphases = None
345 self.outdatedphases = None
346 # phases changes that must be pushed if changeset push fails
346 # phases changes that must be pushed if changeset push fails
347 self.fallbackoutdatedphases = None
347 self.fallbackoutdatedphases = None
348 # outgoing obsmarkers
348 # outgoing obsmarkers
349 self.outobsmarkers = set()
349 self.outobsmarkers = set()
350 # outgoing bookmarks
350 # outgoing bookmarks
351 self.outbookmarks = []
351 self.outbookmarks = []
352 # transaction manager
352 # transaction manager
353 self.trmanager = None
353 self.trmanager = None
354 # map { pushkey partid -> callback handling failure}
354 # map { pushkey partid -> callback handling failure}
355 # used to handle exception from mandatory pushkey part failure
355 # used to handle exception from mandatory pushkey part failure
356 self.pkfailcb = {}
356 self.pkfailcb = {}

    @util.propertycache
    def futureheads(self):
        """future remote heads if the changeset push succeeds"""
        return self.outgoing.missingheads

    @util.propertycache
    def fallbackheads(self):
        """future remote heads if the changeset push fails"""
        if self.revs is None:
            # no target to push, all common heads are relevant
            return self.outgoing.commonheads
        unfi = self.repo.unfiltered()
        # I want cheads = heads(::missingheads and ::commonheads)
        # (missingheads is revs with secret changesets filtered out)
        #
        # This can be expressed as:
        #     cheads = ( (missingheads and ::commonheads)
        #              + (commonheads and ::missingheads))
        #
        # while trying to push we already computed the following:
        #     common = (::commonheads)
        #     missing = ((commonheads::missingheads) - commonheads)
        #
        # We can pick:
        # * missingheads part of common (::commonheads)
        common = self.outgoing.common
        nm = self.repo.changelog.nodemap
        cheads = [node for node in self.revs if nm[node] in common]
        # and
        # * commonheads parents of missing
        revset = unfi.set('%ln and parents(roots(%ln))',
                          self.outgoing.commonheads,
                          self.outgoing.missing)
        cheads.extend(c.node() for c in revset)
        return cheads

    @property
    def commonheads(self):
        """set of all common heads after changeset bundle push"""
        if self.cgresult:
            return self.futureheads
        else:
            return self.fallbackheads

# mapping of messages used when pushing bookmarks
bookmsgmap = {'update': (_("updating bookmark %s\n"),
                         _('updating bookmark %s failed!\n')),
              'export': (_("exporting bookmark %s\n"),
                         _('exporting bookmark %s failed!\n')),
              'delete': (_("deleting remote bookmark %s\n"),
                         _('deleting remote bookmark %s failed!\n')),
              }
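Each entry in `bookmsgmap` pairs a success message (index 0) with a failure message (index 1). A minimal self-contained sketch of how a caller selects one; the `report` helper and the `_` stub are illustrative, not Mercurial API:

```python
def _(msg):
    # stand-in for Mercurial's gettext wrapper
    return msg

bookmsgmap = {'update': (_("updating bookmark %s\n"),
                         _("updating bookmark %s failed!\n"))}

def report(action, bookmark, ok):
    """Pick the success (0) or failure (1) message and fill in the name."""
    return bookmsgmap[action][0 if ok else 1] % bookmark
```

For instance, `report('update', 'feature-x', False)` yields the failure variant of the `update` message.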


def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
         opargs=None):
    '''Push outgoing changesets (limited by revs) from a local
    repository to remote. Return an integer:
      - None means nothing to push
      - 0 means HTTP error
      - 1 means we pushed and remote head count is unchanged *or*
        we have outgoing changesets but refused to push
      - other values as described by addchangegroup()
    '''
    if opargs is None:
        opargs = {}
    pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
                           **opargs)
    if pushop.remote.local():
        missing = (set(pushop.repo.requirements)
                   - pushop.remote.local().supported)
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    # there are two ways to push to remote repo:
    #
    # addchangegroup assumes local user can lock remote
    # repo (local filesystem, old ssh servers).
    #
    # unbundle assumes local user cannot lock remote repo (new ssh
    # servers, http servers).

    if not pushop.remote.canpush():
        raise error.Abort(_("destination does not support push"))
    # get local lock as we might write phase data
    localwlock = locallock = None
    try:
        # bundle2 push may receive a reply bundle touching bookmarks or other
        # things requiring the wlock. Take it now to ensure proper ordering.
        maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
        if (not _forcebundle1(pushop)) and maypushback:
            localwlock = pushop.repo.wlock()
        locallock = pushop.repo.lock()
        pushop.locallocked = True
    except IOError as err:
        pushop.locallocked = False
        if err.errno != errno.EACCES:
            raise
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = 'cannot lock source repository: %s\n' % err
        pushop.ui.debug(msg)
    try:
        if pushop.locallocked:
            pushop.trmanager = transactionmanager(pushop.repo,
                                                  'push-response',
                                                  pushop.remote.url())
        pushop.repo.checkpush(pushop)
        lock = None
        unbundle = pushop.remote.capable('unbundle')
        if not unbundle:
            lock = pushop.remote.lock()
        try:
            _pushdiscovery(pushop)
            if not _forcebundle1(pushop):
                _pushbundle2(pushop)
            _pushchangeset(pushop)
            _pushsyncphase(pushop)
            _pushobsolete(pushop)
            _pushbookmark(pushop)
        finally:
            if lock is not None:
                lock.release()
            if pushop.trmanager:
                pushop.trmanager.close()
    finally:
        if pushop.trmanager:
            pushop.trmanager.release()
        if locallock is not None:
            locallock.release()
        if localwlock is not None:
            localwlock.release()

    return pushop

# list of steps to perform discovery before push
pushdiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pushdiscoverymapping = {}

def pushdiscovery(stepname):
    """decorator for functions performing discovery before push

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pushdiscoverymapping dictionary directly."""
    def dec(func):
        assert stepname not in pushdiscoverymapping
        pushdiscoverymapping[stepname] = func
        pushdiscoveryorder.append(stepname)
        return func
    return dec
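The registration pattern above (an ordered list of step names plus a name-to-function mapping, run in order by `_pushdiscovery`) can be demonstrated in isolation. All names below are illustrative stand-ins, not Mercurial's API:

```python
steporder = []    # step names, in registration order
stepmapping = {}  # step name -> function

def registerstep(stepname):
    """Register a function as a named step, preserving registration order."""
    def dec(func):
        assert stepname not in stepmapping
        stepmapping[stepname] = func
        steporder.append(stepname)
        return func
    return dec

@registerstep('first')
def _first(state):
    state.append('first ran')

@registerstep('second')
def _second(state):
    state.append('second ran')

def runall(state):
    """Run every registered step in order, as _pushdiscovery() does."""
    for name in steporder:
        stepmapping[name](state)

state = []
runall(state)
# state is now ['first ran', 'second ran']
```

Because the mapping is looked up by name at run time, an extension can wrap a single step by replacing its entry in the mapping without touching the order list.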

def _pushdiscovery(pushop):
    """Run all discovery steps"""
    for stepname in pushdiscoveryorder:
        step = pushdiscoverymapping[stepname]
        step(pushop)

@pushdiscovery('changeset')
def _pushdiscoverychangeset(pushop):
    """discover the changesets that need to be pushed"""
    fci = discovery.findcommonincoming
    commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
                   commoninc=commoninc, force=pushop.force)
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc

@pushdiscovery('phase')
def _pushdiscoveryphase(pushop):
    """discover the phases that need to be pushed

    (computed for both the success and failure case of the changeset push)"""
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    remotephases = pushop.remote.listkeys('phases')
    publishing = remotephases.get('publishing', False)
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and not pushop.outgoing.missing # no changesets to be pushed
        and publishing):
        # When:
        # - this is a subrepo push
        # - and remote supports phases
        # - and no changesets are to be pushed
        # - and remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    ana = phases.analyzeremotephases(pushop.repo,
                                     pushop.fallbackheads,
                                     remotephases)
    pheads, droots = ana
    extracond = ''
    if not publishing:
        extracond = ' and public()'
    revset = 'heads((%%ln::%%ln) %s)' % extracond
    # Get the list of all revs draft on the remote but public here.
    # XXX Beware that the revset breaks if droots is not strictly
    # XXX roots; we may want to ensure it is, but that is costly.
    fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
    if not outgoing.missing:
        future = fallback
    else:
        # adds changesets we are going to push as draft
        #
        # should not be necessary for publishing server, but because of an
        # issue fixed in xxxxx we have to do it anyway.
        fdroots = list(unfi.set('roots(%ln + %ln::)',
                                outgoing.missing, droots))
        fdroots = [f.node() for f in fdroots]
        future = list(unfi.set(revset, fdroots, pushop.futureheads))
    pushop.outdatedphases = future
    pushop.fallbackoutdatedphases = fallback

@pushdiscovery('obsmarker')
def _pushdiscoveryobsmarkers(pushop):
    if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
        and pushop.repo.obsstore
        and 'obsolete' in pushop.remote.listkeys('namespaces')):
        repo = pushop.repo
        # very naive computation, which can be quite expensive on big repos.
        # However: evolution is currently slow on them anyway.
        nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
        pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)

@pushdiscovery('bookmarks')
def _pushdiscoverybookmarks(pushop):
    ui = pushop.ui
    repo = pushop.repo.unfiltered()
    remote = pushop.remote
    ui.debug("checking for updated bookmarks\n")
    ancestors = ()
    if pushop.revs:
        revnums = map(repo.changelog.rev, pushop.revs)
        ancestors = repo.changelog.ancestors(revnums, inclusive=True)
    remotebookmark = remote.listkeys('bookmarks')

    explicit = set([repo._bookmarks.expandname(bookmark)
                    for bookmark in pushop.bookmarks])

    remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
    comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)

    def safehex(x):
        if x is None:
            return x
        return hex(x)

    def hexifycompbookmarks(bookmarks):
        for b, scid, dcid in bookmarks:
            yield b, safehex(scid), safehex(dcid)

    comp = [hexifycompbookmarks(marks) for marks in comp]
    addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp

    for b, scid, dcid in advsrc:
        if b in explicit:
            explicit.remove(b)
        if not ancestors or repo[scid].rev() in ancestors:
            pushop.outbookmarks.append((b, dcid, scid))
    # search for added bookmarks
    for b, scid, dcid in addsrc:
        if b in explicit:
            explicit.remove(b)
            pushop.outbookmarks.append((b, '', scid))
    # search for overwritten bookmarks
    for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
        if b in explicit:
            explicit.remove(b)
            pushop.outbookmarks.append((b, dcid, scid))
    # search for bookmarks to delete
    for b, scid, dcid in adddst:
        if b in explicit:
            explicit.remove(b)
            # treat as "deleted locally"
            pushop.outbookmarks.append((b, dcid, ''))
    # identical bookmarks shouldn't get reported
    for b, scid, dcid in same:
        if b in explicit:
            explicit.remove(b)

    if explicit:
        explicit = sorted(explicit)
        # we should probably list all of them
        ui.warn(_('bookmark %s does not exist on the local '
                  'or remote repository!\n') % explicit[0])
        pushop.bkresult = 2

    pushop.outbookmarks.sort()

def _pushcheckoutgoing(pushop):
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    if not outgoing.missing:
        # nothing to push
        scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
        return False
    # something to push
    if not pushop.force:
        # if repo.obsstore == False --> no obsolete
        # then, save the iteration
        if unfi.obsstore:
            # these messages are here for the 80-char limit reason
            mso = _("push includes obsolete changeset: %s!")
            mst = {"unstable": _("push includes unstable changeset: %s!"),
                   "bumped": _("push includes bumped changeset: %s!"),
                   "divergent": _("push includes divergent changeset: %s!")}
            # If we are to push and there is at least one
            # obsolete or unstable changeset in missing, at
            # least one of the missing heads will be obsolete or
            # unstable. So checking heads only is ok.
            for node in outgoing.missingheads:
                ctx = unfi[node]
                if ctx.obsolete():
                    raise error.Abort(mso % ctx)
                elif ctx.troubled():
                    raise error.Abort(mst[ctx.troubles()[0]] % ctx)

    discovery.checkheads(pushop)
    return True

# List of names of steps to perform for an outgoing bundle2, order matters.
b2partsgenorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
b2partsgenmapping = {}

def b2partsgenerator(stepname, idx=None):
    """decorator for functions generating bundle2 parts

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, change the b2partsgenmapping dictionary directly."""
    def dec(func):
        assert stepname not in b2partsgenmapping
        b2partsgenmapping[stepname] = func
        if idx is None:
            b2partsgenorder.append(stepname)
        else:
            b2partsgenorder.insert(idx, stepname)
        return func
    return dec
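The optional `idx` argument lets a part generator be spliced into a specific position in the order list rather than appended at the end. A minimal sketch with illustrative names (`register`, `order`, `mapping` are stand-ins, not Mercurial's API):

```python
order = []    # part-generation order
mapping = {}  # step name -> generator function

def register(stepname, idx=None):
    """Append the step, or insert it at position idx when one is given."""
    def dec(func):
        assert stepname not in mapping
        mapping[stepname] = func
        if idx is None:
            order.append(stepname)
        else:
            order.insert(idx, stepname)
        return func
    return dec

@register('changeset')
def genchangeset(bundler):
    pass

@register('check:heads', idx=0)  # splice this part in ahead of the others
def gencheckheads(bundler):
    pass
```

After both registrations, `order` is `['check:heads', 'changeset']`: the later registration with `idx=0` runs first.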

def _pushb2ctxcheckheads(pushop, bundler):
    """Generate race condition checking parts

    Exists as an independent function to aid extensions
    """
    # * 'force' does not check for push races,
    # * if we don't push anything, there is nothing to check.
    if not pushop.force and pushop.outgoing.missingheads:
        allowunrelated = 'related' in bundler.capabilities.get('checkheads', ())
        if not allowunrelated:
            bundler.newpart('check:heads', data=iter(pushop.remoteheads))
        else:
            affected = set()
            for branch, heads in pushop.pushbranchmap.iteritems():
                remoteheads, newheads, unsyncedheads, discardedheads = heads
                if remoteheads is not None:
                    remote = set(remoteheads)
                    affected |= set(discardedheads) & remote
                    affected |= remote - set(newheads)
            if affected:
                data = iter(sorted(affected))
                bundler.newpart('check:updated-heads', data=data)

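The affected-head computation above is plain set arithmetic: remote heads the push will discard, plus remote heads that will no longer be heads afterwards. A standalone sketch with hypothetical node names (`affected_heads` is an illustrative helper, not a Mercurial function):

```python
def affected_heads(pushbranchmap):
    """Collect remote heads whose state the server must re-verify.

    pushbranchmap maps a branch name to
    (remoteheads, newheads, unsyncedheads, discardedheads);
    remoteheads is None for a branch that is new on the remote.
    """
    affected = set()
    for branch, heads in pushbranchmap.items():
        remoteheads, newheads, unsyncedheads, discardedheads = heads
        if remoteheads is None:
            continue  # new branch: nothing on the remote to race against
        remote = set(remoteheads)
        # heads we know this push makes obsolete...
        affected |= set(discardedheads) & remote
        # ...plus remote heads that stop being heads once we push
        affected |= remote - set(newheads)
    return sorted(affected)

branchmap = {'default': (['a1', 'b2'],  # remote heads before the push
                         ['b2', 'c3'],  # heads after the push
                         [],            # unsynced heads
                         ['a1'])}       # heads discarded by the push
# affected_heads(branchmap) -> ['a1']
```

Only `a1` needs re-verification: `b2` survives as a head, and `c3` is new, so neither can be the victim of a push race.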
@b2partsgenerator('changeset')
def _pushb2ctx(pushop, bundler):
    """handle changegroup push through bundle2

    addchangegroup result is stored in the ``pushop.cgresult`` attribute.
    """
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    # Send known heads to the server for race detection.
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop)

    _pushb2ctxcheckheads(pushop, bundler)

    b2caps = bundle2.bundle2caps(pushop.remote)
    version = '01'
    cgversions = b2caps.get('changegroup')
    if cgversions:  # 3.1 and 3.2 ship with an empty value
        cgversions = [v for v in cgversions
                      if v in changegroup.supportedoutgoingversions(
                          pushop.repo)]
        if not cgversions:
            raise ValueError(_('no common changegroup version'))
        version = max(cgversions)
    cg = changegroup.getlocalchangegroupraw(pushop.repo, 'push',
                                            pushop.outgoing,
                                            version=version)
    cgpart = bundler.newpart('changegroup', data=cg)
    if cgversions:
        cgpart.addparam('version', version)
    if 'treemanifest' in pushop.repo.requirements:
        cgpart.addparam('treemanifest', '1')
    def handlereply(op):
        """extract addchangegroup returns from server reply"""
        cgreplies = op.records.getreplies(cgpart.id)
        assert len(cgreplies['changegroup']) == 1
        pushop.cgresult = cgreplies['changegroup'][0]['return']
    return handlereply

@b2partsgenerator('phase')
def _pushb2phases(pushop, bundler):
    """handle phase push through bundle2"""
    if 'phases' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'pushkey' not in b2caps:
        return
    pushop.stepsdone.add('phases')
    part2node = []

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, node in part2node:
            if partid == targetid:
                raise error.Abort(_('updating %s to public failed') % node)

    enc = pushkey.encode
    for newremotehead in pushop.outdatedphases:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('phases'))
        part.addparam('key', enc(newremotehead.hex()))
        part.addparam('old', enc(str(phases.draft)))
        part.addparam('new', enc(str(phases.public)))
        part2node.append((part.id, newremotehead))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        for partid, node in part2node:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            msg = None
            if not results:
                msg = _('server ignored update of %s to public!\n') % node
            elif not int(results[0]['return']):
                msg = _('updating %s to public failed!\n') % node
            if msg is not None:
                pushop.ui.warn(msg)
    return handlereply

@b2partsgenerator('obsmarkers')
def _pushb2obsmarkers(pushop, bundler):
    if 'obsmarkers' in pushop.stepsdone:
        return
    remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
    if obsolete.commonversion(remoteversions) is None:
        return
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        markers = sorted(pushop.outobsmarkers)
        bundle2.buildobsmarkerspart(bundler, markers)

@b2partsgenerator('bookmarks')
def _pushb2bookmarks(pushop, bundler):
    """handle bookmark push through bundle2"""
    if 'bookmarks' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'pushkey' not in b2caps:
        return
    pushop.stepsdone.add('bookmarks')
    part2book = []
    enc = pushkey.encode

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, book, action in part2book:
            if partid == targetid:
                raise error.Abort(bookmsgmap[action][1].rstrip() % book)
        # we should not be called for a part we did not generate
        assert False

    for book, old, new in pushop.outbookmarks:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('bookmarks'))
        part.addparam('key', enc(book))
        part.addparam('old', enc(old))
        part.addparam('new', enc(new))
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        part2book.append((part.id, book, action))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        ui = pushop.ui
        for partid, book, action in part2book:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            if not results:
                pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
            else:
                ret = int(results[0]['return'])
                if ret:
                    ui.status(bookmsgmap[action][0] % book)
                else:
                    ui.warn(bookmsgmap[action][1] % book)
        if pushop.bkresult is not None:
            pushop.bkresult = 1
    return handlereply


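The old/new → action rule used for bookmark pushes above is small enough to sketch on its own: an empty old value means the bookmark is new (export), an empty new value means it is being removed (delete), otherwise it moves (update).

```python
# Minimal sketch of the bookmark action classification used above.
def bookmarkaction(old, new):
    """Classify a bookmark change from its old and new node values."""
    if not old:
        return 'export'   # bookmark did not exist on the remote yet
    elif not new:
        return 'delete'   # bookmark is being removed
    return 'update'       # bookmark moves to a new node

print(bookmarkaction('', 'abc123'))        # export
print(bookmarkaction('abc123', ''))        # delete
print(bookmarkaction('abc123', 'def456'))  # update
```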
def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup but this will
    evolve in the future."""
    bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
    pushback = (pushop.trmanager
                and pushop.ui.configbool('experimental', 'bundle2.pushback'))

    # create reply capability
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
                                                      allowpushback=pushback))
    bundler.newpart('replycaps', data=capsblob)
    replyhandlers = []
    for partgenname in b2partsgenorder:
        partgen = b2partsgenmapping[partgenname]
        ret = partgen(pushop, bundler)
        if callable(ret):
            replyhandlers.append(ret)
    # do not push if nothing to push
    if bundler.nbparts <= 1:
        return
    stream = util.chunkbuffer(bundler.getchunks())
    try:
        try:
            reply = pushop.remote.unbundle(
                stream, ['force'], pushop.remote.url())
        except error.BundleValueError as exc:
            raise error.Abort(_('missing support for %s') % exc)
        try:
            trgetter = None
            if pushback:
                trgetter = pushop.trmanager.transaction
            op = bundle2.processbundle(pushop.repo, reply, trgetter)
        except error.BundleValueError as exc:
            raise error.Abort(_('missing support for %s') % exc)
        except bundle2.AbortFromPart as exc:
            pushop.ui.status(_('remote: %s\n') % exc)
            if exc.hint is not None:
                pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
            raise error.Abort(_('push failed on remote'))
    except error.PushkeyFailed as exc:
        partid = int(exc.partid)
        if partid not in pushop.pkfailcb:
            raise
        pushop.pkfailcb[partid](pushop, exc)
    for rephand in replyhandlers:
        rephand(op)

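The dispatch loop above runs each registered part generator and keeps only the callables it returns as reply handlers, to be invoked once the server replies. A standalone sketch of that pattern, with hypothetical generators in place of the real bundle2 ones:

```python
# Sketch of the part-generator dispatch above: each generator may return a
# callable reply handler; non-callables (e.g. None) are skipped.
def gen_a(state):
    state.append('a')
    def handle(reply):
        state.append('reply:a')
    return handle

def gen_b(state):
    state.append('b')   # returns None: nothing to handle on reply

def run(generators):
    state = []
    handlers = [h for h in (g(state) for g in generators) if callable(h)]
    for h in handlers:
        h('reply')      # stand-in for the server's bundle2 reply
    return state

print(run([gen_a, gen_b]))  # ['a', 'b', 'reply:a']
```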
def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop)
    outgoing = pushop.outgoing
    unbundle = pushop.remote.capable('unbundle')
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (outgoing.excluded
                                    or pushop.repo.changelog.filteredrevs):
        # push everything,
        # use the fast path, no race possible on push
        bundler = changegroup.cg1packer(pushop.repo, bundlecaps)
        cg = changegroup.getsubset(pushop.repo,
                                   outgoing,
                                   bundler,
                                   'push',
                                   fastpath=True)
    else:
        cg = changegroup.getchangegroup(pushop.repo, 'push', outgoing,
                                        bundlecaps=bundlecaps)

    # apply changegroup to remote
    if unbundle:
        # local repo finds heads on server, finds out what
        # revs it must push. once revs transferred, if server
        # finds it has different heads (someone else won
        # commit/push race), server aborts.
        if pushop.force:
            remoteheads = ['force']
        else:
            remoteheads = pushop.remoteheads
        # ssh: return remote's addchangegroup()
        # http: return remote's addchangegroup() or 0 for error
        pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
                                                 pushop.repo.url())
    else:
        # we return an integer indicating remote head count
        # change
        pushop.cgresult = pushop.remote.addchangegroup(cg, 'push',
                                                       pushop.repo.url())

def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = pushop.remote.listkeys('phases')
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and pushop.cgresult is None # nothing was pushed
        and remotephases.get('publishing', False)):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changeset was pushed
        # - and the remote is publishing
        # We may be in the issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    if not remotephases: # old server or public only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads,
                                         remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get('publishing', False):
            _localphasemove(pushop, cheads)
        else: # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        if pushop.cgresult:
            if 'phases' in pushop.stepsdone:
                # phases already pushed through bundle2
                return
            outdated = pushop.outdatedphases
        else:
            outdated = pushop.fallbackoutdatedphases

        pushop.stepsdone.add('phases')

        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        # fallback to independent pushkey command
        for newremotehead in outdated:
            r = pushop.remote.pushkey('phases',
                                      newremotehead.hex(),
                                      str(phases.draft),
                                      str(phases.public))
            if not r:
                pushop.ui.warn(_('updating %s to public failed!\n')
                               % newremotehead)

def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.trmanager:
        phases.advanceboundary(pushop.repo,
                               pushop.trmanager.transaction(),
                               phase,
                               nodes)
    else:
        # repo is not locked, do not change any phases!
        # Informs the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(_('cannot lock source repo, skipping '
                               'local %s phase update\n') % phasestr)

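The `actualmoves` filter above relies on Mercurial encoding phases as integers (public=0, draft=1, secret=2), so `phase < current phase` selects nodes that would actually move toward a more public phase. A small standalone sketch with a hypothetical node-to-phase mapping:

```python
# Sketch of the "actualmoves" filter above. Phase integers follow
# Mercurial's convention: public=0, draft=1, secret=2.
public, draft, secret = 0, 1, 2

# hypothetical nodes and their current phases
nodephases = {'n1': public, 'n2': draft, 'n3': secret}

def actualmoves(nodes, targetphase):
    """Nodes whose phase would actually change when advanced to targetphase."""
    return [n for n in nodes if targetphase < nodephases[n]]

print(actualmoves(['n1', 'n2', 'n3'], public))  # ['n2', 'n3']
```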
def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if 'obsmarkers' in pushop.stepsdone:
        return
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        pushop.ui.debug('try to push obsolete markers to remote\n')
        rslts = []
        remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add('bookmarks')
    ui = pushop.ui
    remote = pushop.remote

    for b, old, new in pushop.outbookmarks:
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        if remote.pushkey('bookmarks', b, old, new):
            ui.status(bookmsgmap[action][0] % b)
        else:
            ui.warn(bookmsgmap[action][1] % b)
    # discovery can have set the value from an invalid entry
    if pushop.bkresult is not None:
        pushop.bkresult = 1

class pulloperation(object):
    """An object that represents a single pull operation

    Its purpose is to carry pull-related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
                 remotebookmarks=None, streamclonerequested=None):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # revision we try to pull (None is "all")
        self.heads = heads
        # bookmarks pulled explicitly
        self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
                                  for bookmark in bookmarks]
        # do we force pull?
        self.force = force
        # whether a streaming clone was requested
        self.streamclonerequested = streamclonerequested
        # transaction manager
        self.trmanager = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # remote bookmarks data
        self.remotebookmarks = remotebookmarks
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps already done
        self.stepsdone = set()
        # Whether we attempted a clone from pre-generated bundles.
        self.clonebundleattempted = False

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    @util.propertycache
    def canusebundle2(self):
        return not _forcebundle1(self)

    @util.propertycache
    def remotebundle2caps(self):
        return bundle2.bundle2caps(self.remote)

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()

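The `pulledsubset` computation above is order-preserving deduplication: the common nodes first, then any remote heads not already in the common set. A minimal standalone sketch with hypothetical node names:

```python
# Sketch of the pulledsubset computation above, with hypothetical node names.
def pulledsubset(common, rheads):
    """Common nodes plus remote heads not already common, in order."""
    c = set(common)
    ret = list(common)
    for n in rheads:
        if n not in c:
            ret.append(n)
    return ret

print(pulledsubset(['a', 'b'], ['b', 'c']))  # ['a', 'b', 'c']
```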
class transactionmanager(object):
    """An object to manage the life cycle of a transaction

    It creates the transaction on demand and calls the appropriate hooks when
    closing the transaction."""
    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs['source'] = self.source
            self._tr.hookargs['url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()

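The class above implements a lazy life cycle: the transaction is only created on first use, then reused until closed or released. A runnable sketch of that pattern, using stub repo and transaction classes (not real Mercurial objects) so it stands alone:

```python
# Usage sketch of the lazy transaction life cycle above, with stubs in place
# of a real repo/transaction. The stub classes are hypothetical.
class stubtransaction(object):
    def __init__(self):
        self.hookargs = {}
        self.closed = False
    def close(self):
        self.closed = True

class stubrepo(object):
    def transaction(self, name):
        return stubtransaction()

class lazymanager(object):  # mirrors transactionmanager's structure
    def __init__(self, repo):
        self.repo = repo
        self._tr = None
    def transaction(self):
        if not self._tr:                     # created only on first use
            self._tr = self.repo.transaction('pull')
        return self._tr
    def close(self):
        if self._tr is not None:
            self._tr.close()

tm = lazymanager(stubrepo())
assert tm._tr is None          # nothing created yet
tr = tm.transaction()
assert tm.transaction() is tr  # the same transaction is reused
tm.close()
assert tr.closed
```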
def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
         streamclonerequested=None):
    """Fetch repository data from a remote.

    This is the main function used to retrieve data from a remote repository.

    ``repo`` is the local repository to clone into.
    ``remote`` is a peer instance.
    ``heads`` is an iterable of revisions we want to pull. ``None`` (the
    default) means to pull everything from the remote.
    ``bookmarks`` is an iterable of bookmarks requesting to be pulled. By
    default, all remote bookmarks are pulled.
    ``opargs`` are additional keyword arguments to pass to ``pulloperation``
    initialization.
    ``streamclonerequested`` is a boolean indicating whether a "streaming
    clone" is requested. A "streaming clone" is essentially a raw file copy
    of revlogs from the server. This only works when the local repository is
    empty. The default value of ``None`` means to respect the server
    configuration for preferring stream clones.

    Returns the ``pulloperation`` created for this pull.
    """
    if opargs is None:
        opargs = {}
    pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
                           streamclonerequested=streamclonerequested, **opargs)
    if pullop.remote.local():
        missing = set(pullop.remote.requirements) - pullop.repo.supported
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    wlock = lock = None
    try:
        wlock = pullop.repo.wlock()
        lock = pullop.repo.lock()
        pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
        streamclone.maybeperformlegacystreamclone(pullop)
        # This should ideally be in _pullbundle2(). However, it needs to run
        # before discovery to avoid extra work.
        _maybeapplyclonebundle(pullop)
        _pulldiscovery(pullop)
        if pullop.canusebundle2:
            _pullbundle2(pullop)
        _pullchangeset(pullop)
        _pullphase(pullop)
        _pullbookmarks(pullop)
        _pullobsolete(pullop)
        pullop.trmanager.close()
    finally:
        lockmod.release(pullop.trmanager, lock, wlock)

    return pullop

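The `pull()` function above never opens a transaction eagerly: `transactionmanager` creates one only when a step first asks for it, `close()` commits it, and the `finally` clause releases (rolls back) anything left open. The sketch below is a minimal, self-contained illustration of that lazy-transaction pattern; all names (`LazyTxnManager`, `run`) are illustrative stand-ins, not Mercurial APIs.

```python
class LazyTxnManager:
    """Create a transaction only on first use; commit via close(),
    roll back via release() unless already committed."""

    def __init__(self):
        self._tr = None
        self.committed = False
        self.rolledback = False

    def transaction(self):
        """Return the open transaction, creating it on first use."""
        if self._tr is None:
            self._tr = object()  # stand-in for a real transaction object
        return self._tr

    def close(self):
        """Commit the transaction if one was created."""
        if self._tr is not None:
            self.committed = True

    def release(self):
        """Roll back the transaction unless it was committed."""
        if self._tr is not None and not self.committed:
            self.rolledback = True

def run(steps):
    # mirror pull()'s shape: run steps, close on success, release in finally
    tm = LazyTxnManager()
    try:
        for step in steps:
            step(tm)
        tm.close()
    finally:
        tm.release()
    return tm
```

The point of the pattern is that a pull which transfers nothing (no step calls `transaction()`) never pays the cost of opening and rolling back a transaction.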
# list of steps to perform discovery before pull
pulldiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pulldiscoverymapping = {}

def pulldiscovery(stepname):
    """decorator for a function performing discovery before pull

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pulldiscoverymapping dictionary directly."""
    def dec(func):
        assert stepname not in pulldiscoverymapping
        pulldiscoverymapping[stepname] = func
        pulldiscoveryorder.append(stepname)
        return func
    return dec

def _pulldiscovery(pullop):
    """Run all discovery steps"""
    for stepname in pulldiscoveryorder:
        step = pulldiscoverymapping[stepname]
        step(pullop)

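The step-registry pattern implemented by `pulldiscovery()` above can be sketched in isolation: a decorator records each step in a name -> function mapping plus an ordered list, and a driver runs them in registration order. The names below (`registerstep`, `runsteps`, the step functions) are illustrative only, not part of Mercurial's API.

```python
steporder = []    # step names, in registration order
stepmapping = {}  # step name -> function

def registerstep(stepname):
    """Decorator: register a step under a unique name."""
    def dec(func):
        assert stepname not in stepmapping, 'duplicate step: %s' % stepname
        stepmapping[stepname] = func
        steporder.append(stepname)
        return func
    return dec

def runsteps(state):
    # run every registered step, in the order it was registered
    for stepname in steporder:
        stepmapping[stepname](state)

@registerstep('first')
def _first(state):
    state.append('first')

@registerstep('second')
def _second(state):
    state.append('second')
```

Because dispatch goes through the mutable `stepmapping`, an extension can wrap a step by reassigning `stepmapping['first']` without touching the order list, which is exactly the extension point the docstring describes.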
@pulldiscovery('b1:bookmarks')
def _pullbookmarkbundle1(pullop):
    """fetch bookmark data in the bundle1 case

    If not using bundle2, we have to fetch bookmarks before changeset
    discovery to reduce the chance and impact of race conditions."""
    if pullop.remotebookmarks is not None:
        return
    if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
        # all known bundle2 servers now support listkeys, but let's be nice
        # with new implementations.
        return
    pullop.remotebookmarks = pullop.remote.listkeys('bookmarks')


@pulldiscovery('changegroup')
def _pulldiscoverychangegroup(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; will be changed to handle all
    discovery at some point."""
    tmp = discovery.findcommonincoming(pullop.repo,
                                       pullop.remote,
                                       heads=pullop.heads,
                                       force=pullop.force)
    common, fetch, rheads = tmp
    nm = pullop.repo.unfiltered().changelog.nodemap
    if fetch and rheads:
        # If a remote head is filtered locally, drop it from the unknown
        # remote heads and put it back in common.
        #
        # This is a hackish solution to catch most of the "common but locally
        # hidden" situations. We do not perform discovery on the unfiltered
        # repository because it ends up doing a pathological number of round
        # trips for a huge number of changesets we do not care about.
        #
        # If a set of such "common but filtered" changesets exists on the
        # server but does not include a remote head, we will not be able to
        # detect it.
        scommon = set(common)
        filteredrheads = []
        for n in rheads:
            if n in nm:
                if n not in scommon:
                    common.append(n)
            else:
                filteredrheads.append(n)
        if not filteredrheads:
            fetch = []
        rheads = filteredrheads
    pullop.common = common
    pullop.fetch = fetch
    pullop.rheads = rheads

def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data are changegroups."""
    kwargs = {'bundlecaps': caps20to10(pullop.repo)}

    # At the moment we don't do stream clones over bundle2. If that is
    # implemented then here's where the check for that will go.
    streaming = False

    # pulling changegroup
    pullop.stepsdone.add('changegroup')

    kwargs['common'] = pullop.common
    kwargs['heads'] = pullop.heads or pullop.rheads
    kwargs['cg'] = pullop.fetch
    if 'listkeys' in pullop.remotebundle2caps:
        kwargs['listkeys'] = ['phases']
        if pullop.remotebookmarks is None:
            # make sure to always include bookmark data when migrating
            # `hg incoming --bundle` to using this function.
            kwargs['listkeys'].append('bookmarks')

    # If this is a full pull / clone and the server supports the clone bundles
    # feature, tell the server whether we attempted a clone bundle. The
    # presence of this flag indicates the client supports clone bundles. This
    # will enable the server to treat clients that support clone bundles
    # differently from those that don't.
    if (pullop.remote.capable('clonebundles')
        and pullop.heads is None and list(pullop.common) == [nullid]):
        kwargs['cbattempted'] = pullop.clonebundleattempted

    if streaming:
        pullop.repo.ui.status(_('streaming all changes\n'))
    elif not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [nullid]:
            pullop.repo.ui.status(_("requesting all changes\n"))
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
        if obsolete.commonversion(remoteversions) is not None:
            kwargs['obsmarkers'] = True
            pullop.stepsdone.add('obsmarkers')
    _pullbundle2extraprepare(pullop, kwargs)
    bundle = pullop.remote.getbundle('pull', **pycompat.strkwargs(kwargs))
    try:
        op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
    except bundle2.AbortFromPart as exc:
        pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
        raise error.Abort(_('pull failed on remote'), hint=exc.hint)
    except error.BundleValueError as exc:
        raise error.Abort(_('missing support for %s') % exc)

    if pullop.fetch:
        pullop.cgresult = bundle2.combinechangegroupresults(op)

    # processing phases change
    for namespace, value in op.records['listkeys']:
        if namespace == 'phases':
            _pullapplyphases(pullop, value)

    # processing bookmark update
    for namespace, value in op.records['listkeys']:
        if namespace == 'bookmarks':
            pullop.remotebookmarks = value

    # bookmark data was either already there or pulled in the bundle
    if pullop.remotebookmarks is not None:
        _pullbookmarks(pullop)

def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""
    pass

def _pullchangeset(pullop):
    """pull changesets from an unbundle into the local repo"""
    # We delay opening the transaction as late as possible so we don't open
    # a transaction for nothing and don't break a future useful rollback
    # call
    if 'changegroup' in pullop.stepsdone:
        return
    pullop.stepsdone.add('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
        return
    tr = pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_("requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable('getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle('pull', common=pullop.common,
                                     heads=pullop.heads or pullop.rheads)
    elif pullop.heads is None:
        cg = pullop.remote.changegroup(pullop.fetch, 'pull')
    elif not pullop.remote.capable('changegroupsubset'):
        raise error.Abort(_("partial pull cannot be done because "
                            "other repository doesn't support "
                            "changegroupsubset."))
    else:
        cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
    pullop.cgresult = bundle2.applybundle1(pullop.repo, cg, tr, 'pull',
                                           pullop.remote.url())

def _pullphase(pullop):
    # Get remote phases data from remote
    if 'phases' in pullop.stepsdone:
        return
    remotephases = pullop.remote.listkeys('phases')
    _pullapplyphases(pullop, remotephases)

def _pullapplyphases(pullop, remotephases):
    """apply phase movement from observed remote state"""
    if 'phases' in pullop.stepsdone:
        return
    pullop.stepsdone.add('phases')
    publishing = bool(remotephases.get('publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(pullop.repo,
                                                 pullop.pulledsubset,
                                                 remotephases)
        dheads = pullop.pulledsubset
    else:
        # Remote is old or publishing; all common changesets
        # should be seen as public
        pheads = pullop.pulledsubset
        dheads = []
    unfi = pullop.repo.unfiltered()
    phase = unfi._phasecache.phase
    rev = unfi.changelog.nodemap.get
    public = phases.public
    draft = phases.draft

    # exclude changesets already public locally and update the others
    pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
    if pheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, public, pheads)

    # exclude changesets already draft locally and update the others
    dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
    if dheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, draft, dheads)

def _pullbookmarks(pullop):
    """process the remote bookmark information to update the local one"""
    if 'bookmarks' in pullop.stepsdone:
        return
    pullop.stepsdone.add('bookmarks')
    repo = pullop.repo
    remotebookmarks = pullop.remotebookmarks
    remotebookmarks = bookmod.unhexlifybookmarks(remotebookmarks)
    bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
                             pullop.remote.url(),
                             pullop.gettransaction,
                             explicit=pullop.explicitbookmarks)

def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    `gettransaction` is a function that returns the pull transaction, creating
    one if necessary. We return the transaction to inform the calling code
    that a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    if 'obsmarkers' in pullop.stepsdone:
        return
    pullop.stepsdone.add('obsmarkers')
    tr = None
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        pullop.repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = pullop.remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = pullop.gettransaction()
            markers = []
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = util.b85decode(remoteobs[key])
                    version, newmarks = obsolete._readmarkers(data)
                    markers += newmarks
            if markers:
                pullop.repo.obsstore.add(tr, markers)
        pullop.repo.invalidatevolatilesets()
    return tr

def caps20to10(repo):
    """return a set with appropriate options to use bundle20 during getbundle"""
    caps = {'HG20'}
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
    caps.add('bundle2=' + urlreq.quote(capsblob))
    return caps

# List of names of steps to perform for a bundle2 for getbundle, order matters.
getbundle2partsorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
getbundle2partsmapping = {}

def getbundle2partsgenerator(stepname, idx=None):
    """decorator for a function generating a bundle2 part for getbundle

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, change the getbundle2partsmapping dictionary directly."""
    def dec(func):
        assert stepname not in getbundle2partsmapping
        getbundle2partsmapping[stepname] = func
        if idx is None:
            getbundle2partsorder.append(stepname)
        else:
            getbundle2partsorder.insert(idx, stepname)
        return func
    return dec

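`getbundle2partsgenerator()` above extends the plain registry pattern with an optional `idx`: by default a step is appended, but an explicit index lets it be spliced in before already-registered steps. A self-contained sketch of that variant (all names illustrative, not Mercurial's API):

```python
partsorder = []    # part-generator names, in dispatch order
partsmapping = {}  # name -> generator function

def registerpart(stepname, idx=None):
    """Decorator: register a part generator, appending by default or
    inserting at position idx when given."""
    def dec(func):
        assert stepname not in partsmapping
        partsmapping[stepname] = func
        if idx is None:
            partsorder.append(stepname)
        else:
            partsorder.insert(idx, stepname)
        return func
    return dec

@registerpart('changegroup')
def _cg(out):
    out.append('changegroup')

@registerpart('listkeys')
def _lk(out):
    out.append('listkeys')

# registered last, but spliced in first via idx=0
@registerpart('check', idx=0)
def _check(out):
    out.append('check')
```

This is how a part that must run first (for example, a validation part) can be registered after the fact without renumbering the existing steps.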
def bundle2requested(bundlecaps):
    if bundlecaps is not None:
        return any(cap.startswith('HG2') for cap in bundlecaps)
    return False

def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
                    **kwargs):
    """Return chunks constituting a bundle's raw data.

    Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
    passed.

    Returns an iterator over raw chunks (of varying sizes).
    """
    kwargs = pycompat.byteskwargs(kwargs)
    usebundle2 = bundle2requested(bundlecaps)
    # bundle10 case
    if not usebundle2:
        if bundlecaps and not kwargs.get('cg', True):
            raise ValueError(_('request for bundle10 must include changegroup'))

        if kwargs:
            raise ValueError(_('unsupported getbundle arguments: %s')
                             % ', '.join(sorted(kwargs.keys())))
        outgoing = _computeoutgoing(repo, heads, common)
        bundler = changegroup.getbundler('01', repo, bundlecaps)
        return changegroup.getsubsetraw(repo, outgoing, bundler, source)

    # bundle20 case
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith('bundle2='):
            blob = urlreq.unquote(bcaps[len('bundle2='):])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)

    kwargs['heads'] = heads
    kwargs['common'] = common

    for name in getbundle2partsorder:
        func = getbundle2partsmapping[name]
        func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
             **pycompat.strkwargs(kwargs))

    return bundler.getchunks()

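`caps20to10()` and `getbundlechunks()` above pass bundle2 capabilities over the wire as a single percent-quoted `bundle2=...` token, which the receiving side unquotes and parses back into a mapping. The round-trip can be sketched in isolation; the serialization format below is deliberately simplified and the helper names are illustrative, not Mercurial's actual encoding.

```python
from urllib.parse import quote, unquote

def encodecaps(caps):
    # one 'name=v1,v2' line per capability, newline separated
    return '\n'.join('%s=%s' % (k, ','.join(v))
                     for k, v in sorted(caps.items()))

def decodecaps(blob):
    caps = {}
    for line in blob.splitlines():
        key, vals = line.split('=', 1)
        caps[key] = vals.split(',') if vals else []
    return caps

def caps20to10(caps):
    # advertise bundle2 support plus the quoted capability blob
    wire = {'HG20'}
    wire.add('bundle2=' + quote(encodecaps(caps)))
    return wire

def parsewire(bundlecaps):
    # mirror of the bundle20 branch in getbundlechunks(): find the
    # 'bundle2=' token, unquote it, and rebuild the capability mapping
    b2caps = {}
    for cap in bundlecaps:
        if cap.startswith('bundle2='):
            b2caps.update(decodecaps(unquote(cap[len('bundle2='):])))
    return b2caps
```

Quoting matters because the capability blob contains newlines and `=` characters that would otherwise collide with the token syntax of the capability list itself.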
1617 @getbundle2partsgenerator('changegroup')
1617 @getbundle2partsgenerator('changegroup')
1618 def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
1618 def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
1619 b2caps=None, heads=None, common=None, **kwargs):
1619 b2caps=None, heads=None, common=None, **kwargs):
1620 """add a changegroup part to the requested bundle"""
1620 """add a changegroup part to the requested bundle"""
1621 cg = None
1621 cg = None
1622 if kwargs.get('cg', True):
1622 if kwargs.get('cg', True):
1623 # build changegroup bundle here.
1623 # build changegroup bundle here.
1624 version = '01'
1624 version = '01'
1625 cgversions = b2caps.get('changegroup')
1625 cgversions = b2caps.get('changegroup')
1626 if cgversions: # 3.1 and 3.2 ship with an empty value
1626 if cgversions: # 3.1 and 3.2 ship with an empty value
1627 cgversions = [v for v in cgversions
1627 cgversions = [v for v in cgversions
1628 if v in changegroup.supportedoutgoingversions(repo)]
1628 if v in changegroup.supportedoutgoingversions(repo)]
1629 if not cgversions:
1629 if not cgversions:
1630 raise ValueError(_('no common changegroup version'))
1630 raise ValueError(_('no common changegroup version'))
1631 version = max(cgversions)
1631 version = max(cgversions)
1632 outgoing = _computeoutgoing(repo, heads, common)
1632 outgoing = _computeoutgoing(repo, heads, common)
1633 cg = changegroup.getlocalchangegroupraw(repo, source, outgoing,
1633 cg = changegroup.getlocalchangegroupraw(repo, source, outgoing,
1634 bundlecaps=bundlecaps,
1634 bundlecaps=bundlecaps,
1635 version=version)
1635 version=version)
1636
1636
1637 if cg:
1637 if cg:
1638 part = bundler.newpart('changegroup', data=cg)
1638 part = bundler.newpart('changegroup', data=cg)
1639 if cgversions:
1639 if cgversions:
1640 part.addparam('version', version)
1640 part.addparam('version', version)
1641 part.addparam('nbchanges', str(len(outgoing.missing)), mandatory=False)
1641 part.addparam('nbchanges', str(len(outgoing.missing)), mandatory=False)
1642 if 'treemanifest' in repo.requirements:
1642 if 'treemanifest' in repo.requirements:
1643 part.addparam('treemanifest', '1')
1643 part.addparam('treemanifest', '1')
1644
1644
1645 @getbundle2partsgenerator('listkeys')
1646 def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
1647                             b2caps=None, **kwargs):
1648     """add parts containing listkeys namespaces to the requested bundle"""
1649     listkeys = kwargs.get('listkeys', ())
1650     for namespace in listkeys:
1651         part = bundler.newpart('listkeys')
1652         part.addparam('namespace', namespace)
1653         keys = repo.listkeys(namespace).items()
1654         part.data = pushkey.encodekeys(keys)
1655
1656 @getbundle2partsgenerator('obsmarkers')
1657 def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
1658                             b2caps=None, heads=None, **kwargs):
1659     """add an obsolescence markers part to the requested bundle"""
1660     if kwargs.get('obsmarkers', False):
1661         if heads is None:
1662             heads = repo.heads()
1663         subset = [c.node() for c in repo.set('::%ln', heads)]
1664         markers = repo.obsstore.relevantmarkers(subset)
1665         markers = sorted(markers)
1666         bundle2.buildobsmarkerspart(bundler, markers)
1667
1668 @getbundle2partsgenerator('hgtagsfnodes')
1669 def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
1670                          b2caps=None, heads=None, common=None,
1671                          **kwargs):
1672     """Transfer the .hgtags filenodes mapping.
1673
1674     Only values for heads in this bundle will be transferred.
1675
1676     The part data consists of pairs of 20 byte changeset node and .hgtags
1677     filenodes raw values.
1678     """
1679     # Don't send unless:
1680     # - changesets are being exchanged,
1681     # - the client supports it.
1682     if not (kwargs.get('cg', True) and 'hgtagsfnodes' in b2caps):
1683         return
1684
1685     outgoing = _computeoutgoing(repo, heads, common)
1686     bundle2.addparttagsfnodescache(repo, bundler, outgoing)
1687
1688 def _getbookmarks(repo, **kwargs):
1689     """Returns a bookmark to node mapping.
1690
1691     This function is primarily used to generate the `bookmarks` bundle2 part.
1692     It is a separate function in order to make it easy to wrap it
1693     in extensions. Passing `kwargs` to the function makes it easy to
1694     add new parameters in extensions.
1695     """
1696
1697     return dict(bookmod.listbinbookmarks(repo))
1698
1699 def check_heads(repo, their_heads, context):
1700     """check if the heads of a repo have been modified
1701
1702     Used by peer for unbundling.
1703     """
1704     heads = repo.heads()
1705     heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
1706     if not (their_heads == ['force'] or their_heads == heads or
1707             their_heads == ['hashed', heads_hash]):
1708         # someone else committed/pushed/unbundled while we
1709         # were transferring data
1710         raise error.PushRaced('repository changed while %s - '
1711                               'please try again' % context)
1712
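`check_heads` fingerprints the server's head set as a SHA-1 over the sorted, concatenated binary head nodes, so the client can detect a push race without sending every head. A minimal standalone sketch of that fingerprint (the sample nodes below are made up):

```python
import hashlib

def heads_fingerprint(heads):
    # heads: iterable of 20-byte binary changeset nodes.
    # Sorting first makes the digest independent of head order,
    # matching check_heads' sha1 over ''.join(sorted(heads)).
    return hashlib.sha1(b''.join(sorted(heads))).digest()

one = heads_fingerprint([b'\x01' * 20, b'\x02' * 20])
two = heads_fingerprint([b'\x02' * 20, b'\x01' * 20])
print(one == two)
# True
```

If the repository gains or loses a head between the client computing this digest and the server checking it, the digests differ and `PushRaced` is raised.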
1713 def unbundle(repo, cg, heads, source, url):
1714     """Apply a bundle to a repo.
1715
1716     This function makes sure the repo is locked during the application and
1717     has a mechanism to check that no push race occurred between the creation
1718     of the bundle and its application.
1719
1720     If the push was raced, a PushRaced exception is raised."""
1721     r = 0
1722     # need a transaction when processing a bundle2 stream
1723     # [wlock, lock, tr] - needs to be an array so nested functions can modify it
1724     lockandtr = [None, None, None]
1725     recordout = None
1726     # quick fix for output mismatch with bundle2 in 3.4
1727     captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture',
1728                                        False)
1729     if url.startswith('remote:http:') or url.startswith('remote:https:'):
1730         captureoutput = True
1731     try:
1732         # note: outside bundle1, 'heads' is expected to be empty and this
1733         # 'check_heads' call will be a no-op
1734         check_heads(repo, heads, 'uploading changes')
1735         # push can proceed
1736         if not isinstance(cg, bundle2.unbundle20):
1737             # legacy case: bundle1 (changegroup 01)
1738             txnname = "\n".join([source, util.hidepassword(url)])
1739             with repo.lock(), repo.transaction(txnname) as tr:
1740 -               r, addednodes = cg.apply(repo, tr, source, url)
1740 +               r = bundle2.applybundle1(repo, cg, tr, source, url)
1741         else:
1742             r = None
1743             try:
1744                 def gettransaction():
1745                     if not lockandtr[2]:
1746                         lockandtr[0] = repo.wlock()
1747                         lockandtr[1] = repo.lock()
1748                         lockandtr[2] = repo.transaction(source)
1749                         lockandtr[2].hookargs['source'] = source
1750                         lockandtr[2].hookargs['url'] = url
1751                         lockandtr[2].hookargs['bundle2'] = '1'
1752                     return lockandtr[2]
1753
1754                 # Do greedy locking by default until we're satisfied with lazy
1755                 # locking.
1756                 if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
1757                     gettransaction()
1758
1759                 op = bundle2.bundleoperation(repo, gettransaction,
1760                                              captureoutput=captureoutput)
1761                 try:
1762                     op = bundle2.processbundle(repo, cg, op=op)
1763                 finally:
1764                     r = op.reply
1765                     if captureoutput and r is not None:
1766                         repo.ui.pushbuffer(error=True, subproc=True)
1767                         def recordout(output):
1768                             r.newpart('output', data=output, mandatory=False)
1769                 if lockandtr[2] is not None:
1770                     lockandtr[2].close()
1771             except BaseException as exc:
1772                 exc.duringunbundle2 = True
1773                 if captureoutput and r is not None:
1774                     parts = exc._bundle2salvagedoutput = r.salvageoutput()
1775                     def recordout(output):
1776                         part = bundle2.bundlepart('output', data=output,
1777                                                   mandatory=False)
1778                         parts.append(part)
1779                 raise
1780     finally:
1781         lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
1782         if recordout is not None:
1783             recordout(repo.ui.popbuffer())
1784     return r
1785
1785
1786 def _maybeapplyclonebundle(pullop):
1786 def _maybeapplyclonebundle(pullop):
1787 """Apply a clone bundle from a remote, if possible."""
1787 """Apply a clone bundle from a remote, if possible."""
1788
1788
1789 repo = pullop.repo
1789 repo = pullop.repo
1790 remote = pullop.remote
1790 remote = pullop.remote
1791
1791
1792 if not repo.ui.configbool('ui', 'clonebundles', True):
1792 if not repo.ui.configbool('ui', 'clonebundles', True):
1793 return
1793 return
1794
1794
1795 # Only run if local repo is empty.
1795 # Only run if local repo is empty.
1796 if len(repo):
1796 if len(repo):
1797 return
1797 return
1798
1798
1799 if pullop.heads:
1799 if pullop.heads:
1800 return
1800 return
1801
1801
1802 if not remote.capable('clonebundles'):
1802 if not remote.capable('clonebundles'):
1803 return
1803 return
1804
1804
1805 res = remote._call('clonebundles')
1805 res = remote._call('clonebundles')
1806
1806
1807 # If we call the wire protocol command, that's good enough to record the
1807 # If we call the wire protocol command, that's good enough to record the
1808 # attempt.
1808 # attempt.
1809 pullop.clonebundleattempted = True
1809 pullop.clonebundleattempted = True
1810
1810
1811 entries = parseclonebundlesmanifest(repo, res)
1811 entries = parseclonebundlesmanifest(repo, res)
1812 if not entries:
1812 if not entries:
1813 repo.ui.note(_('no clone bundles available on remote; '
1813 repo.ui.note(_('no clone bundles available on remote; '
1814 'falling back to regular clone\n'))
1814 'falling back to regular clone\n'))
1815 return
1815 return
1816
1816
1817 entries = filterclonebundleentries(repo, entries)
1817 entries = filterclonebundleentries(repo, entries)
1818 if not entries:
1818 if not entries:
1819 # There is a thundering herd concern here. However, if a server
1819 # There is a thundering herd concern here. However, if a server
1820 # operator doesn't advertise bundles appropriate for its clients,
1820 # operator doesn't advertise bundles appropriate for its clients,
1821 # they deserve what's coming. Furthermore, from a client's
1821 # they deserve what's coming. Furthermore, from a client's
1822 # perspective, no automatic fallback would mean not being able to
1822 # perspective, no automatic fallback would mean not being able to
1823 # clone!
1823 # clone!
1824 repo.ui.warn(_('no compatible clone bundles available on server; '
1824 repo.ui.warn(_('no compatible clone bundles available on server; '
1825 'falling back to regular clone\n'))
1825 'falling back to regular clone\n'))
1826 repo.ui.warn(_('(you may want to report this to the server '
1826 repo.ui.warn(_('(you may want to report this to the server '
1827 'operator)\n'))
1827 'operator)\n'))
1828 return
1828 return
1829
1829
1830 entries = sortclonebundleentries(repo.ui, entries)
1830 entries = sortclonebundleentries(repo.ui, entries)
1831
1831
1832 url = entries[0]['URL']
1832 url = entries[0]['URL']
1833 repo.ui.status(_('applying clone bundle from %s\n') % url)
1833 repo.ui.status(_('applying clone bundle from %s\n') % url)
1834 if trypullbundlefromurl(repo.ui, repo, url):
1834 if trypullbundlefromurl(repo.ui, repo, url):
1835 repo.ui.status(_('finished applying clone bundle\n'))
1835 repo.ui.status(_('finished applying clone bundle\n'))
1836 # Bundle failed.
1836 # Bundle failed.
1837 #
1837 #
1838 # We abort by default to avoid the thundering herd of
1838 # We abort by default to avoid the thundering herd of
1839 # clients flooding a server that was expecting expensive
1839 # clients flooding a server that was expecting expensive
1840 # clone load to be offloaded.
1840 # clone load to be offloaded.
1841 elif repo.ui.configbool('ui', 'clonebundlefallback', False):
1841 elif repo.ui.configbool('ui', 'clonebundlefallback', False):
1842 repo.ui.warn(_('falling back to normal clone\n'))
1842 repo.ui.warn(_('falling back to normal clone\n'))
1843 else:
1843 else:
1844 raise error.Abort(_('error applying bundle'),
1844 raise error.Abort(_('error applying bundle'),
1845 hint=_('if this error persists, consider contacting '
1845 hint=_('if this error persists, consider contacting '
1846 'the server operator or disable clone '
1846 'the server operator or disable clone '
1847 'bundles via '
1847 'bundles via '
1848 '"--config ui.clonebundles=false"'))
1848 '"--config ui.clonebundles=false"'))
1849
1849
1850 def parseclonebundlesmanifest(repo, s):
1851     """Parses the raw text of a clone bundles manifest.
1852
1853     Returns a list of dicts. The dicts have a ``URL`` key corresponding
1854     to the URL; the other keys are the attributes for the entry.
1855     """
1856     m = []
1857     for line in s.splitlines():
1858         fields = line.split()
1859         if not fields:
1860             continue
1861         attrs = {'URL': fields[0]}
1862         for rawattr in fields[1:]:
1863             key, value = rawattr.split('=', 1)
1864             key = urlreq.unquote(key)
1865             value = urlreq.unquote(value)
1866             attrs[key] = value
1867
1868             # Parse BUNDLESPEC into components. This makes client-side
1869             # preferences easier to specify since you can prefer a single
1870             # component of the BUNDLESPEC.
1871             if key == 'BUNDLESPEC':
1872                 try:
1873                     comp, version, params = parsebundlespec(repo, value,
1874                                                             externalnames=True)
1875                     attrs['COMPRESSION'] = comp
1876                     attrs['VERSION'] = version
1877                 except error.InvalidBundleSpecification:
1878                     pass
1879                 except error.UnsupportedBundleSpecification:
1880                     pass
1881
1882         m.append(attrs)
1883
1884     return m
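Each manifest line is a URL followed by space-separated, percent-encoded `key=value` attributes. A self-contained sketch of that core parsing, without the Mercurial-specific BUNDLESPEC expansion (the manifest content below is a made-up example):

```python
try:
    from urllib.parse import unquote  # Python 3; hg's urlreq wraps the 2/3 split
except ImportError:
    from urllib import unquote  # Python 2 fallback

def parsemanifest(s):
    # One entry per non-empty line: URL first, then optional key=value attrs.
    entries = []
    for line in s.splitlines():
        fields = line.split()
        if not fields:
            continue
        attrs = {'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split('=', 1)
            attrs[unquote(key)] = unquote(value)
        entries.append(attrs)
    return entries

manifest = "https://example.com/full.hg BUNDLESPEC=gzip-v2 REQUIRESNI=true\n"
print(parsemanifest(manifest))
# [{'URL': 'https://example.com/full.hg', 'BUNDLESPEC': 'gzip-v2', 'REQUIRESNI': 'true'}]
```

Splitting on whitespace first and percent-decoding afterwards is what lets attribute values themselves contain encoded spaces.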
1885
1886 def filterclonebundleentries(repo, entries):
1887     """Remove incompatible clone bundle manifest entries.
1888
1889     Accepts a list of entries parsed with ``parseclonebundlesmanifest``
1890     and returns a new list consisting of only the entries that this client
1891     should be able to apply.
1892
1893     There is no guarantee we'll be able to apply all returned entries because
1894     the metadata we use to filter on may be missing or wrong.
1895     """
1896     newentries = []
1897     for entry in entries:
1898         spec = entry.get('BUNDLESPEC')
1899         if spec:
1900             try:
1901                 parsebundlespec(repo, spec, strict=True)
1902             except error.InvalidBundleSpecification as e:
1903                 repo.ui.debug(str(e) + '\n')
1904                 continue
1905             except error.UnsupportedBundleSpecification as e:
1906                 repo.ui.debug('filtering %s because unsupported bundle '
1907                               'spec: %s\n' % (entry['URL'], str(e)))
1908                 continue
1909
1910         if 'REQUIRESNI' in entry and not sslutil.hassni:
1911             repo.ui.debug('filtering %s because SNI not supported\n' %
1912                           entry['URL'])
1913             continue
1914
1915         newentries.append(entry)
1916
1917     return newentries
1918
1919 class clonebundleentry(object):
1920     """Represents an item in a clone bundles manifest.
1921
1922     This rich class is needed to support sorting since sorted() in Python 3
1923     doesn't support ``cmp`` and our comparison is complex enough that ``key=``
1924     won't work.
1925     """
1926
1927     def __init__(self, value, prefers):
1928         self.value = value
1929         self.prefers = prefers
1930
1931     def _cmp(self, other):
1932         for prefkey, prefvalue in self.prefers:
1933             avalue = self.value.get(prefkey)
1934             bvalue = other.value.get(prefkey)
1935
1936             # Special case for b missing attribute and a matches exactly.
1937             if avalue is not None and bvalue is None and avalue == prefvalue:
1938                 return -1
1939
1940             # Special case for a missing attribute and b matches exactly.
1941             if bvalue is not None and avalue is None and bvalue == prefvalue:
1942                 return 1
1943
1944             # We can't compare unless attribute present on both.
1945             if avalue is None or bvalue is None:
1946                 continue
1947
1948             # Same values should fall back to next attribute.
1949             if avalue == bvalue:
1950                 continue
1951
1952             # Exact matches come first.
1953             if avalue == prefvalue:
1954                 return -1
1955             if bvalue == prefvalue:
1956                 return 1
1957
1958             # Fall back to next attribute.
1959             continue
1960
1961         # If we got here we couldn't sort by attributes and prefers. Fall
1962         # back to index order.
1963         return 0
1964
1965     def __lt__(self, other):
1966         return self._cmp(other) < 0
1967
1968     def __gt__(self, other):
1969         return self._cmp(other) > 0
1970
1971     def __eq__(self, other):
1972         return self._cmp(other) == 0
1973
1974     def __le__(self, other):
1975         return self._cmp(other) <= 0
1976
1977     def __ge__(self, other):
1978         return self._cmp(other) >= 0
1979
1980     def __ne__(self, other):
1981         return self._cmp(other) != 0
1982
1983 def sortclonebundleentries(ui, entries):
1984     prefers = ui.configlist('ui', 'clonebundleprefers')
1985     if not prefers:
1986         return list(entries)
1987
1988     prefers = [p.split('=', 1) for p in prefers]
1989
1990     items = sorted(clonebundleentry(v, prefers) for v in entries)
1991     return [i.value for i in items]
1992
1993 def trypullbundlefromurl(ui, repo, url):
1994     """Attempt to apply a bundle from a URL."""
1995     with repo.lock(), repo.transaction('bundleurl') as tr:
1996         try:
1997             fh = urlmod.open(ui, url)
1998             cg = readbundle(ui, fh, 'stream')
1999
2000             if isinstance(cg, bundle2.unbundle20):
2001                 bundle2.applybundle(repo, cg, tr, 'clonebundles', url)
2002             elif isinstance(cg, streamclone.streamcloneapplier):
2003                 cg.apply(repo)
2004             else:
2005 -               cg.apply(repo, tr, 'clonebundles', url)
2005 +               bundle2.applybundle1(repo, cg, tr, 'clonebundles', url)
2006             return True
2007         except urlerr.httperror as e:
2008             ui.warn(_('HTTP error fetching bundle: %s\n') % str(e))
2009         except urlerr.urlerror as e:
2010             ui.warn(_('error fetching bundle: %s\n') % e.reason)
2011
2012     return False
@@ -1,378 +1,379 b''
1 # repair.py - functions for repository repair for mercurial
2 #
3 # Copyright 2005, 2006 Chris Mason <mason@suse.com>
4 # Copyright 2007 Matt Mackall
5 #
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
8
9 from __future__ import absolute_import
10
11 import errno
12 import hashlib
13
14 from .i18n import _
15 from .node import short
16 from . import (
17     bundle2,
18     changegroup,
19     discovery,
20     error,
21     exchange,
22     obsolete,
23     util,
24 )
25
26 def _bundle(repo, bases, heads, node, suffix, compress=True, obsolescence=True):
27     """create a bundle with the specified revisions as a backup"""
28
29     backupdir = "strip-backup"
30     vfs = repo.vfs
31     if not vfs.isdir(backupdir):
32         vfs.mkdir(backupdir)
33
34     # Include a hash of all the nodes in the filename for uniqueness
35     allcommits = repo.set('%ln::%ln', bases, heads)
36     allhashes = sorted(c.hex() for c in allcommits)
37     totalhash = hashlib.sha1(''.join(allhashes)).hexdigest()
38     name = "%s/%s-%s-%s.hg" % (backupdir, short(node), totalhash[:8], suffix)
39
40     cgversion = changegroup.safeversion(repo)
41     comp = None
42     if cgversion != '01':
43         bundletype = "HG20"
44         if compress:
45             comp = 'BZ'
46     elif compress:
47         bundletype = "HG10BZ"
48     else:
49         bundletype = "HG10UN"
50
51     outgoing = discovery.outgoing(repo, missingroots=bases, missingheads=heads)
52     contentopts = {
53         'cg.version': cgversion,
54         'obsolescence': obsolescence,
55         'phases': True,
56     }
57     return bundle2.writenewbundle(repo.ui, repo, 'strip', name, bundletype,
58                                   outgoing, contentopts, vfs, compression=comp)
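`_bundle` makes the strip-backup filename unique by combining the short form of the stripped node with the first 8 hex digits of a SHA-1 over all affected commit hashes. A standalone sketch of that naming scheme (the node and hashes are made up, and the `.encode('ascii')` is a Python 3 adaptation of the bytes-native original):

```python
import hashlib

def backupname(shortnode, allhexhashes, suffix):
    # Sort before hashing so the digest is independent of traversal order,
    # mirroring how _bundle builds strip-backup filenames.
    joined = ''.join(sorted(allhexhashes)).encode('ascii')
    totalhash = hashlib.sha1(joined).hexdigest()
    return "strip-backup/%s-%s-%s.hg" % (shortnode, totalhash[:8], suffix)

# Hypothetical 12-char short node and 40-char hex commit hashes.
name = backupname('7c2fd3b9020c', ['aa' * 20, 'bb' * 20], 'backup')
print(name)
```

Truncating the digest to 8 digits keeps the filename short while still making collisions between different stripped sets unlikely.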
59
60 def _collectfiles(repo, striprev):
61     """find out the filelogs affected by the strip"""
62     files = set()
63
64     for x in xrange(striprev, len(repo)):
65         files.update(repo[x].files())
66
67     return sorted(files)
68
69 def _collectbrokencsets(repo, files, striprev):
70     """return the changesets which will be broken by the truncation"""
71     s = set()
72     def collectone(revlog):
73         _, brokenset = revlog.getstrippoint(striprev)
74         s.update([revlog.linkrev(r) for r in brokenset])
75
76     collectone(repo.manifestlog._revlog)
77     for fname in files:
78         collectone(repo.file(fname))
79
80     return s
81
82 def strip(ui, repo, nodelist, backup=True, topic='backup'):
83     # This function requires the caller to lock the repo, but it operates
84     # within a transaction of its own, and thus requires there to be no current
85     # transaction when it is called.
86     if repo.currenttransaction() is not None:
87         raise error.ProgrammingError('cannot strip from inside a transaction')
88
89     # Simple way to maintain backwards compatibility for this
90     # argument.
91     if backup in ['none', 'strip']:
92         backup = False
93
94     repo = repo.unfiltered()
95     repo.destroying()
96
97     cl = repo.changelog
98     # TODO handle undo of merge sets
99     if isinstance(nodelist, str):
100         nodelist = [nodelist]
101     striplist = [cl.rev(node) for node in nodelist]
102     striprev = min(striplist)
103
104     files = _collectfiles(repo, striprev)
105     saverevs = _collectbrokencsets(repo, files, striprev)
106
107     # Some revisions with rev > striprev may not be descendants of striprev.
108     # We have to find these revisions and put them in a bundle, so that
109     # we can restore them after the truncations.
110     # To create the bundle we use repo.changegroupsubset which requires
111     # the list of heads and bases of the set of interesting revisions.
112     # (head = revision in the set that has no descendant in the set;
113     # base = revision in the set that has no ancestor in the set)
114     tostrip = set(striplist)
115     saveheads = set(saverevs)
116     for r in cl.revs(start=striprev + 1):
117         if any(p in tostrip for p in cl.parentrevs(r)):
118             tostrip.add(r)
119
120         if r not in tostrip:
121             saverevs.add(r)
122             saveheads.difference_update(cl.parentrevs(r))
123             saveheads.add(r)
124     saveheads = [cl.node(r) for r in saveheads]
125
126     # compute base nodes
127     if saverevs:
128         descendants = set(cl.descendants(saverevs))
129         saverevs.difference_update(descendants)
130     savebases = [cl.node(r) for r in saverevs]
131     stripbases = [cl.node(r) for r in tostrip]
132
133     stripobsidx = obsmarkers = ()
134     if repo.ui.configbool('devel', 'strip-obsmarkers', True):
134 if repo.ui.configbool('devel', 'strip-obsmarkers', True):
135 obsmarkers = obsolete.exclusivemarkers(repo, stripbases)
135 obsmarkers = obsolete.exclusivemarkers(repo, stripbases)
136 if obsmarkers:
136 if obsmarkers:
137 stripobsidx = [i for i, m in enumerate(repo.obsstore)
137 stripobsidx = [i for i, m in enumerate(repo.obsstore)
138 if m in obsmarkers]
138 if m in obsmarkers]
139
139
140 # For a set s, max(parents(s) - s) is the same as max(heads(::s - s)), but
140 # For a set s, max(parents(s) - s) is the same as max(heads(::s - s)), but
141 # is much faster
141 # is much faster
142 newbmtarget = repo.revs('max(parents(%ld) - (%ld))', tostrip, tostrip)
142 newbmtarget = repo.revs('max(parents(%ld) - (%ld))', tostrip, tostrip)
143 if newbmtarget:
143 if newbmtarget:
144 newbmtarget = repo[newbmtarget.first()].node()
144 newbmtarget = repo[newbmtarget.first()].node()
145 else:
145 else:
146 newbmtarget = '.'
146 newbmtarget = '.'
147
147
148 bm = repo._bookmarks
148 bm = repo._bookmarks
149 updatebm = []
149 updatebm = []
150 for m in bm:
150 for m in bm:
151 rev = repo[bm[m]].rev()
151 rev = repo[bm[m]].rev()
152 if rev in tostrip:
152 if rev in tostrip:
153 updatebm.append(m)
153 updatebm.append(m)
154
154
155 # create a changegroup for all the branches we need to keep
155 # create a changegroup for all the branches we need to keep
156 backupfile = None
156 backupfile = None
157 vfs = repo.vfs
157 vfs = repo.vfs
158 node = nodelist[-1]
158 node = nodelist[-1]
159 if backup:
159 if backup:
160 backupfile = _bundle(repo, stripbases, cl.heads(), node, topic)
160 backupfile = _bundle(repo, stripbases, cl.heads(), node, topic)
161 repo.ui.status(_("saved backup bundle to %s\n") %
161 repo.ui.status(_("saved backup bundle to %s\n") %
162 vfs.join(backupfile))
162 vfs.join(backupfile))
163 repo.ui.log("backupbundle", "saved backup bundle to %s\n",
163 repo.ui.log("backupbundle", "saved backup bundle to %s\n",
164 vfs.join(backupfile))
164 vfs.join(backupfile))
165 tmpbundlefile = None
165 tmpbundlefile = None
166 if saveheads:
166 if saveheads:
167 # do not compress temporary bundle if we remove it from disk later
167 # do not compress temporary bundle if we remove it from disk later
168 #
168 #
169 # We do not include obsolescence, it might re-introduce prune markers
169 # We do not include obsolescence, it might re-introduce prune markers
170 # we are trying to strip. This is harmless since the stripped markers
170 # we are trying to strip. This is harmless since the stripped markers
171 # are already backed up and we did not touched the markers for the
171 # are already backed up and we did not touched the markers for the
172 # saved changesets.
172 # saved changesets.
173 tmpbundlefile = _bundle(repo, savebases, saveheads, node, 'temp',
173 tmpbundlefile = _bundle(repo, savebases, saveheads, node, 'temp',
174 compress=False, obsolescence=False)
174 compress=False, obsolescence=False)
175
175
176 mfst = repo.manifestlog._revlog
176 mfst = repo.manifestlog._revlog
177
177
178 try:
178 try:
179 with repo.transaction("strip") as tr:
179 with repo.transaction("strip") as tr:
180 offset = len(tr.entries)
180 offset = len(tr.entries)
181
181
182 tr.startgroup()
182 tr.startgroup()
183 cl.strip(striprev, tr)
183 cl.strip(striprev, tr)
184 mfst.strip(striprev, tr)
184 mfst.strip(striprev, tr)
185 striptrees(repo, tr, striprev, files)
185 striptrees(repo, tr, striprev, files)
186
186
187 for fn in files:
187 for fn in files:
188 repo.file(fn).strip(striprev, tr)
188 repo.file(fn).strip(striprev, tr)
189 tr.endgroup()
189 tr.endgroup()
190
190
191 for i in xrange(offset, len(tr.entries)):
191 for i in xrange(offset, len(tr.entries)):
192 file, troffset, ignore = tr.entries[i]
192 file, troffset, ignore = tr.entries[i]
193 with repo.svfs(file, 'a', checkambig=True) as fp:
193 with repo.svfs(file, 'a', checkambig=True) as fp:
194 fp.truncate(troffset)
194 fp.truncate(troffset)
195 if troffset == 0:
195 if troffset == 0:
196 repo.store.markremoved(file)
196 repo.store.markremoved(file)
197
197
198 deleteobsmarkers(repo.obsstore, stripobsidx)
198 deleteobsmarkers(repo.obsstore, stripobsidx)
199 del repo.obsstore
199 del repo.obsstore
200
200
201 repo._phasecache.filterunknown(repo)
201 repo._phasecache.filterunknown(repo)
202 if tmpbundlefile:
202 if tmpbundlefile:
203 ui.note(_("adding branch\n"))
203 ui.note(_("adding branch\n"))
204 f = vfs.open(tmpbundlefile, "rb")
204 f = vfs.open(tmpbundlefile, "rb")
205 gen = exchange.readbundle(ui, f, tmpbundlefile, vfs)
205 gen = exchange.readbundle(ui, f, tmpbundlefile, vfs)
206 if not repo.ui.verbose:
206 if not repo.ui.verbose:
207 # silence internal shuffling chatter
207 # silence internal shuffling chatter
208 repo.ui.pushbuffer()
208 repo.ui.pushbuffer()
209 tmpbundleurl = 'bundle:' + vfs.join(tmpbundlefile)
209 tmpbundleurl = 'bundle:' + vfs.join(tmpbundlefile)
210 if isinstance(gen, bundle2.unbundle20):
210 if isinstance(gen, bundle2.unbundle20):
211 with repo.transaction('strip') as tr:
211 with repo.transaction('strip') as tr:
212 bundle2.applybundle(repo, gen, tr, source='strip',
212 bundle2.applybundle(repo, gen, tr, source='strip',
213 url=tmpbundleurl)
213 url=tmpbundleurl)
214 else:
214 else:
215 txnname = "strip\n%s" % util.hidepassword(tmpbundleurl)
215 txnname = "strip\n%s" % util.hidepassword(tmpbundleurl)
216 with repo.transaction(txnname) as tr:
216 with repo.transaction(txnname) as tr:
217 gen.apply(repo, tr, 'strip', tmpbundleurl, True)
217 bundle2.applybundle1(repo, gen, tr, 'strip', tmpbundleurl,
218 emptyok=True)
218 if not repo.ui.verbose:
219 if not repo.ui.verbose:
219 repo.ui.popbuffer()
220 repo.ui.popbuffer()
220 f.close()
221 f.close()
221 repo._phasecache.invalidate()
222 repo._phasecache.invalidate()
222
223
223 for m in updatebm:
224 for m in updatebm:
224 bm[m] = repo[newbmtarget].node()
225 bm[m] = repo[newbmtarget].node()
225
226
226 with repo.transaction('repair') as tr:
227 with repo.transaction('repair') as tr:
227 bm.recordchange(tr)
228 bm.recordchange(tr)
228
229
229 # remove undo files
230 # remove undo files
230 for undovfs, undofile in repo.undofiles():
231 for undovfs, undofile in repo.undofiles():
231 try:
232 try:
232 undovfs.unlink(undofile)
233 undovfs.unlink(undofile)
233 except OSError as e:
234 except OSError as e:
234 if e.errno != errno.ENOENT:
235 if e.errno != errno.ENOENT:
235 ui.warn(_('error removing %s: %s\n') %
236 ui.warn(_('error removing %s: %s\n') %
236 (undovfs.join(undofile), str(e)))
237 (undovfs.join(undofile), str(e)))
237
238
238 except: # re-raises
239 except: # re-raises
239 if backupfile:
240 if backupfile:
240 ui.warn(_("strip failed, backup bundle stored in '%s'\n")
241 ui.warn(_("strip failed, backup bundle stored in '%s'\n")
241 % vfs.join(backupfile))
242 % vfs.join(backupfile))
242 if tmpbundlefile:
243 if tmpbundlefile:
243 ui.warn(_("strip failed, unrecovered changes stored in '%s'\n")
244 ui.warn(_("strip failed, unrecovered changes stored in '%s'\n")
244 % vfs.join(tmpbundlefile))
245 % vfs.join(tmpbundlefile))
245 ui.warn(_("(fix the problem, then recover the changesets with "
246 ui.warn(_("(fix the problem, then recover the changesets with "
246 "\"hg unbundle '%s'\")\n") % vfs.join(tmpbundlefile))
247 "\"hg unbundle '%s'\")\n") % vfs.join(tmpbundlefile))
247 raise
248 raise
248 else:
249 else:
249 if tmpbundlefile:
250 if tmpbundlefile:
250 # Remove temporary bundle only if there were no exceptions
251 # Remove temporary bundle only if there were no exceptions
251 vfs.unlink(tmpbundlefile)
252 vfs.unlink(tmpbundlefile)
252
253
253 repo.destroyed()
254 repo.destroyed()
254 # return the backup file path (or None if 'backup' was False) so
255 # return the backup file path (or None if 'backup' was False) so
255 # extensions can use it
256 # extensions can use it
256 return backupfile
257 return backupfile
257
258
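The save-set bookkeeping in strip() above can be sketched outside Mercurial on a toy revision DAG. This is a minimal sketch, not the real changelog API: `parentrevs` is modelled as a plain dict mapping each revision number to its parent tuple (with -1 as the null parent), and `computesaveset` is a hypothetical helper name.

```python
# Toy model of strip()'s save-set computation: revisions are ints,
# parentrevs maps rev -> tuple of parent revs (-1 = null parent).
def computesaveset(parentrevs, striprev, tostrip):
    tostrip = set(tostrip)
    saverevs = set()
    saveheads = set()
    for r in sorted(p for p in parentrevs if p > striprev):
        if any(p in tostrip for p in parentrevs[r]):
            tostrip.add(r)          # a descendant of a stripped rev is stripped too
        if r not in tostrip:
            saverevs.add(r)         # must survive the truncation
            saveheads.difference_update(parentrevs[r])
            saveheads.add(r)        # r is currently a head of the saved set
    return tostrip, saverevs, saveheads

# Linear history 0-1-2-3 plus a branch 4 off of rev 1; strip rev 2.
parents = {0: (-1, -1), 1: (0, -1), 2: (1, -1), 3: (2, -1), 4: (1, -1)}
tostrip, saverevs, saveheads = computesaveset(parents, 2, {2})
```

Rev 3 follows its stripped parent into `tostrip`, while rev 4 has a higher revision number than the strip point but is not a descendant, so it lands in the save set and must be bundled and reapplied after truncation.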
def striptrees(repo, tr, striprev, files):
    if 'treemanifest' in repo.requirements: # safe but unnecessary
                                            # otherwise
        for unencoded, encoded, size in repo.store.datafiles():
            if (unencoded.startswith('meta/') and
                unencoded.endswith('00manifest.i')):
                dir = unencoded[5:-12]
                repo.manifestlog._revlog.dirlog(dir).strip(striprev, tr)

def rebuildfncache(ui, repo):
    """Rebuilds the fncache file from repo history.

    Missing entries will be added. Extra entries will be removed.
    """
    repo = repo.unfiltered()

    if 'fncache' not in repo.requirements:
        ui.warn(_('(not rebuilding fncache because repository does not '
                  'support fncache)\n'))
        return

    with repo.lock():
        fnc = repo.store.fncache
        # Trigger load of fncache.
        if 'irrelevant' in fnc:
            pass

        oldentries = set(fnc.entries)
        newentries = set()
        seenfiles = set()

        repolen = len(repo)
        for rev in repo:
            ui.progress(_('rebuilding'), rev, total=repolen,
                        unit=_('changesets'))

            ctx = repo[rev]
            for f in ctx.files():
                # This is to minimize I/O.
                if f in seenfiles:
                    continue
                seenfiles.add(f)

                i = 'data/%s.i' % f
                d = 'data/%s.d' % f

                if repo.store._exists(i):
                    newentries.add(i)
                if repo.store._exists(d):
                    newentries.add(d)

        ui.progress(_('rebuilding'), None)

        if 'treemanifest' in repo.requirements: # safe but unnecessary otherwise
            for dir in util.dirs(seenfiles):
                i = 'meta/%s/00manifest.i' % dir
                d = 'meta/%s/00manifest.d' % dir

                if repo.store._exists(i):
                    newentries.add(i)
                if repo.store._exists(d):
                    newentries.add(d)

        addcount = len(newentries - oldentries)
        removecount = len(oldentries - newentries)
        for p in sorted(oldentries - newentries):
            ui.write(_('removing %s\n') % p)
        for p in sorted(newentries - oldentries):
            ui.write(_('adding %s\n') % p)

        if addcount or removecount:
            ui.write(_('%d items added, %d removed from fncache\n') %
                     (addcount, removecount))
            fnc.entries = newentries
            fnc._dirty = True

            with repo.transaction('fncache') as tr:
                fnc.write(tr)
        else:
            ui.write(_('fncache already up to date\n'))

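rebuildfncache() reduces to a pair of set differences: entries only in the rebuilt set are added, entries only in the on-disk set are removed. A minimal sketch of that diff step, with `fncachediff` as a hypothetical helper name:

```python
def fncachediff(oldentries, newentries):
    # entries only present in the rebuilt set must be added;
    # entries only present in the on-disk set must be removed
    added = sorted(newentries - oldentries)
    removed = sorted(oldentries - newentries)
    return added, removed

added, removed = fncachediff({'data/a.i', 'data/b.i'},
                             {'data/a.i', 'data/c.i'})
```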
def stripbmrevset(repo, mark):
    """
    The revset to strip when strip is called with -B mark

    Needs to live here so extensions can use it and wrap it even when strip is
    not enabled or not present on a box.
    """
    return repo.revs("ancestors(bookmark(%s)) - "
                     "ancestors(head() and not bookmark(%s)) - "
                     "ancestors(bookmark() and not bookmark(%s))",
                     mark, mark, mark)

def deleteobsmarkers(obsstore, indices):
    """Delete some obsmarkers from obsstore and return how many were deleted

    'indices' is a list of ints which are the indices
    of the markers to be deleted.

    Every invocation of this function completely rewrites the obsstore file,
    skipping the markers we want to be removed. The new temporary file is
    created, remaining markers are written there and on .close() this file
    gets atomically renamed to obsstore, thus guaranteeing consistency."""
    if not indices:
        # we don't want to rewrite the obsstore with the same content
        return

    left = []
    current = obsstore._all
    n = 0
    for i, m in enumerate(current):
        if i in indices:
            n += 1
            continue
        left.append(m)

    newobsstorefile = obsstore.svfs('obsstore', 'w', atomictemp=True)
    for bytes in obsolete.encodemarkers(left, True, obsstore._version):
        newobsstorefile.write(bytes)
    newobsstorefile.close()
    return n
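The rewrite-skipping-indices pattern used by deleteobsmarkers() can be sketched with the standard library alone. This is a minimal sketch, not Mercurial's vfs or obsstore encoding: `rewriteskipping` is a hypothetical helper, and the atomic-rename step uses `os.replace` in place of the vfs's atomictemp file.

```python
import os
import tempfile

def rewriteskipping(path, records, indices):
    """Rewrite a record file atomically, skipping the given indices.

    Surviving records go to a temporary file in the same directory,
    which is then atomically renamed over the original on success.
    Returns how many records were dropped.
    """
    indices = set(indices)
    left = [r for i, r in enumerate(records) if i not in indices]
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
    with os.fdopen(fd, 'wb') as fp:
        for record in left:
            fp.write(record)
    os.replace(tmp, path)  # atomic rename on POSIX and modern Windows
    return len(records) - len(left)

# demo: three records on disk, drop the one at index 1
demodir = tempfile.mkdtemp()
demopath = os.path.join(demodir, 'obsstore')
with open(demopath, 'wb') as fp:
    fp.write(b'a\nb\nc\n')
ndeleted = rewriteskipping(demopath, [b'a\n', b'b\n', b'c\n'], [1])
```

Writing to a fresh temporary file and renaming it into place means a crash mid-write leaves the original file untouched, which is the consistency guarantee the docstring above describes.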