bundle: move combineresults() from changegroup to bundle2...
Martin von Zweigbergk
r33036:52c7060b default
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic container to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows:

- magic string
- stream level parameters
- payload parts (any number)
- end of stream marker.

the Binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

Binary format is as follows:

:params size: int32

  The total number of Bytes used by the parameters

:params value: arbitrary number of Bytes

  A blob of `params size` containing the serialized version of all stream level
  parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are obviously forbidden.

  Names MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

Stream parameters use a simple textual format for two main reasons:

- Stream level parameters should remain simple and we want to discourage any
  crazy usage.
- Textual data allow easy human inspection of a bundle2 header in case of
  trouble.

Any application level options MUST go into a bundle2 part instead.

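The stream-parameter framing above (an int32 size prefix followed by a
space-separated, url-quoted blob) can be sketched in plain Python 3. The
function names here are illustrative only, and `urllib.parse` stands in for
Mercurial's `util.urlreq` wrappers:

```python
import struct
from urllib.parse import quote, unquote

def encodestreamparams(params):
    # params is a list of (name, value-or-None) pairs; value-less
    # parameters are stored as the bare quoted name.
    blob = ' '.join(
        quote(name) if value is None
        else '%s=%s' % (quote(name), quote(value))
        for name, value in params
    ).encode('ascii')
    # big-endian int32 size prefix, then the blob itself
    return struct.pack('>i', len(blob)) + blob

def decodestreamparams(data):
    # read the size prefix, then split and unquote the blob
    (size,) = struct.unpack('>i', data[:4])
    blob = data[4:4 + size].decode('ascii')
    params = []
    for entry in blob.split(' ') if blob else []:
        if '=' in entry:
            name, value = entry.split('=', 1)
            params.append((unquote(name), unquote(value)))
        else:
            params.append((unquote(entry), None))
    return params
```

A round trip preserves both valued and value-less parameters, and an empty
parameter list encodes to just the four-byte zero size prefix.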
Payload part
------------------------

Binary format is as follows:

:header size: int32

  The total number of Bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

  The header defines how to interpret the part. It contains two pieces of
  data: the part type, and the part parameters.

  The part type is used to route to an application level handler that can
  interpret the payload.

  Part parameters are passed to the application level handler. They are
  meant to convey information that will help the application level object
  interpret the part payload.

  The binary format of the header is as follows:

  :typesize: (one byte)

  :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

  :partid: A 32-bit integer (unique in the bundle) that can be used to refer
           to this part.

  :parameters:

    Part parameters may have arbitrary content; the binary structure is::

        <mandatory-count><advisory-count><param-sizes><param-data>

    :mandatory-count: 1 byte, number of mandatory parameters

    :advisory-count: 1 byte, number of advisory parameters

    :param-sizes:

      N couples of bytes, where N is the total number of parameters. Each
      couple contains (<size-of-key>, <size-of-value>) for one parameter.

    :param-data:

      A blob of bytes from which each parameter key and value can be
      retrieved using the list of size couples stored in the previous
      field.

    Mandatory parameters come first, then the advisory ones.

    Each parameter's key MUST be unique within the part.

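The parameter block layout above can be exercised with a hypothetical
pack/unpack pair (these names do not exist in bundle2; the real logic lives in
the part classes further down in this file):

```python
import struct

def packpartparams(mandatory, advisory):
    # <mandatory-count><advisory-count><param-sizes><param-data>,
    # mandatory (key, value) byte pairs first, then advisory ones
    allparams = list(mandatory) + list(advisory)
    header = struct.pack('>BB', len(mandatory), len(advisory))
    sizes = b''.join(struct.pack('>BB', len(k), len(v))
                     for k, v in allparams)
    data = b''.join(k + v for k, v in allparams)
    return header + sizes + data

def unpackpartparams(blob):
    # inverse operation: returns (mandatory, advisory) lists of pairs
    nbm, nba = struct.unpack('>BB', blob[:2])
    total = nbm + nba
    end = 2 + 2 * total
    # one (key size, value size) couple per parameter
    sizes = struct.unpack('>' + 'BB' * total, blob[2:end])
    params, offset = [], end
    for i in range(total):
        ksize, vsize = sizes[2 * i], sizes[2 * i + 1]
        key = blob[offset:offset + ksize]
        value = blob[offset + ksize:offset + ksize + vsize]
        params.append((key, value))
        offset += ksize + vsize
    return params[:nbm], params[nbm:]
```

Because keys and values are recovered purely from the size couples, no
delimiter bytes are needed in `param-data`.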
:payload:

  payload is a series of `<chunksize><chunkdata>`.

  `chunksize` is an int32, `chunkdata` are plain bytes (as many as
  `chunksize` says). The payload part is concluded by a zero size chunk.

  The current implementation always produces either zero or one chunk.
  This is an implementation limitation that will ultimately be lifted.

  `chunksize` can be negative to trigger special case processing. No such
  processing is in place yet.

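The chunked payload framing can be read with a short standalone sketch (the
function name is illustrative, not part of bundle2's API):

```python
import io
import struct

def iterchunks(stream):
    # yield each chunkdata blob from a <chunksize><chunkdata> series,
    # stopping at the zero-size chunk that concludes the payload
    while True:
        (size,) = struct.unpack('>i', stream.read(4))
        if size == 0:
            return
        if size < 0:
            # reserved for special case processing (none defined yet)
            raise NotImplementedError('negative chunk size')
        yield stream.read(size)
```

For example, a payload consisting of one 5-byte chunk is the bytes
`\x00\x00\x00\x05hello` followed by the `\x00\x00\x00\x00` terminator.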
Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part type
contains any uppercase char it is considered mandatory. When no handler is
known for a mandatory part, the process is aborted and an exception is raised.
If the part is advisory and no handler is known, the part is ignored. When the
process is aborted, the full bundle is still read from the stream to keep the
channel usable. But none of the parts read after an abort are processed. In
the future, dropping the stream may become an option for channels we do not
care to preserve.
"""
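The dispatch rule described above (case-insensitive matching, any uppercase
character makes the part mandatory) can be sketched with a toy registry; all
names here are hypothetical stand-ins for the real dispatch in `_processpart`
below:

```python
# toy registry: handlers are keyed by lower-cased part type
_handlers = {'changegroup': lambda op, part: None}

def resolvehandler(parttype):
    lparttype = parttype.lower()
    # the part is mandatory iff its type contains any uppercase char
    mandatory = parttype != lparttype
    handler = _handlers.get(lparttype)
    if handler is None and mandatory:
        raise LookupError(
            'missing handler for mandatory part %r' % parttype)
    # None means "advisory part with no handler, silently ignored"
    return handler
```

So `'CHANGEGROUP'` and `'changegroup'` resolve to the same handler, while an
unknown `'OBSMARKERS'` aborts and an unknown `'obsmarkers'` is skipped.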

from __future__ import absolute_import

import errno
import re
import string
import struct
import sys

from .i18n import _
from . import (
    changegroup,
    error,
    obsolete,
    phases,
    pushkey,
    pycompat,
    tags,
    url,
    util,
)

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

_fphasesentry = '>i20s'

preferedchunksize = 4096

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-output: %s\n' % message)

def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-input: %s\n' % message)

def validateparttype(parttype):
    """raise ValueError if a parttype contains an invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>'+('BB'*nbparams)

parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`. Where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__

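The record-keeping behaviour can be exercised with a trimmed-down copy of the
class (only the methods needed for the demonstration; the reply-tracking part
is omitted):

```python
class _records(object):
    # trimmed-down sketch of unbundlerecords, enough to show the API
    def __init__(self):
        self._categories = {}
        self._sequences = []

    def add(self, category, entry):
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

records = _records()
records.add('changegroup', {'return': 1})
records.add('pushkey', {'namespace': 'phases'})
records.add('changegroup', {'return': 0})
```

Indexing by category returns only that category's entries, while iterating the
object replays every record in chronological order.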
279 class bundleoperation(object):
279 class bundleoperation(object):
280 """an object that represents a single bundling process
280 """an object that represents a single bundling process
281
281
282 Its purpose is to carry unbundle-related objects and states.
282 Its purpose is to carry unbundle-related objects and states.
283
283
284 A new object should be created at the beginning of each bundle processing.
284 A new object should be created at the beginning of each bundle processing.
285 The object is to be returned by the processing function.
285 The object is to be returned by the processing function.
286
286
287 The object has very little content now it will ultimately contain:
287 The object has very little content now it will ultimately contain:
288 * an access to the repo the bundle is applied to,
288 * an access to the repo the bundle is applied to,
289 * a ui object,
289 * a ui object,
290 * a way to retrieve a transaction to add changes to the repo,
290 * a way to retrieve a transaction to add changes to the repo,
291 * a way to record the result of processing each part,
291 * a way to record the result of processing each part,
292 * a way to construct a bundle response when applicable.
292 * a way to construct a bundle response when applicable.
293 """
293 """
294
294
295 def __init__(self, repo, transactiongetter, captureoutput=True):
295 def __init__(self, repo, transactiongetter, captureoutput=True):
296 self.repo = repo
296 self.repo = repo
297 self.ui = repo.ui
297 self.ui = repo.ui
298 self.records = unbundlerecords()
298 self.records = unbundlerecords()
299 self.gettransaction = transactiongetter
299 self.gettransaction = transactiongetter
300 self.reply = None
300 self.reply = None
301 self.captureoutput = captureoutput
301 self.captureoutput = captureoutput
302
302
303 class TransactionUnavailable(RuntimeError):
303 class TransactionUnavailable(RuntimeError):
304 pass
304 pass
305
305
306 def _notransaction():
306 def _notransaction():
307 """default method to get a transaction while processing a bundle
307 """default method to get a transaction while processing a bundle
308
308
309 Raise an exception to highlight the fact that no transaction was expected
309 Raise an exception to highlight the fact that no transaction was expected
310 to be created"""
310 to be created"""
311 raise TransactionUnavailable()
311 raise TransactionUnavailable()
312
312
313 def applybundle(repo, unbundler, tr, source=None, url=None):
313 def applybundle(repo, unbundler, tr, source=None, url=None):
314 # transform me into unbundler.apply() as soon as the freeze is lifted
314 # transform me into unbundler.apply() as soon as the freeze is lifted
315 tr.hookargs['bundle2'] = '1'
315 tr.hookargs['bundle2'] = '1'
316 if source is not None and 'source' not in tr.hookargs:
316 if source is not None and 'source' not in tr.hookargs:
317 tr.hookargs['source'] = source
317 tr.hookargs['source'] = source
318 if url is not None and 'url' not in tr.hookargs:
318 if url is not None and 'url' not in tr.hookargs:
319 tr.hookargs['url'] = url
319 tr.hookargs['url'] = url
320 return processbundle(repo, unbundler, lambda: tr)
320 return processbundle(repo, unbundler, lambda: tr)
321
321
322 def processbundle(repo, unbundler, transactiongetter=None, op=None):
322 def processbundle(repo, unbundler, transactiongetter=None, op=None):
323 """This function process a bundle, apply effect to/from a repo
323 """This function process a bundle, apply effect to/from a repo
324
324
325 It iterates over each part then searches for and uses the proper handling
325 It iterates over each part then searches for and uses the proper handling
326 code to process the part. Parts are processed in order.
326 code to process the part. Parts are processed in order.
327
327
328 Unknown Mandatory part will abort the process.
328 Unknown Mandatory part will abort the process.
329
329
330 It is temporarily possible to provide a prebuilt bundleoperation to the
330 It is temporarily possible to provide a prebuilt bundleoperation to the
331 function. This is used to ensure output is properly propagated in case of
331 function. This is used to ensure output is properly propagated in case of
332 an error during the unbundling. This output capturing part will likely be
332 an error during the unbundling. This output capturing part will likely be
333 reworked and this ability will probably go away in the process.
333 reworked and this ability will probably go away in the process.
334 """
334 """
335 if op is None:
335 if op is None:
336 if transactiongetter is None:
336 if transactiongetter is None:
337 transactiongetter = _notransaction
337 transactiongetter = _notransaction
338 op = bundleoperation(repo, transactiongetter)
338 op = bundleoperation(repo, transactiongetter)
339 # todo:
339 # todo:
340 # - replace this is a init function soon.
340 # - replace this is a init function soon.
341 # - exception catching
341 # - exception catching
342 unbundler.params
342 unbundler.params
343 if repo.ui.debugflag:
343 if repo.ui.debugflag:
344 msg = ['bundle2-input-bundle:']
344 msg = ['bundle2-input-bundle:']
345 if unbundler.params:
345 if unbundler.params:
346 msg.append(' %i params')
346 msg.append(' %i params')
347 if op.gettransaction is None or op.gettransaction is _notransaction:
347 if op.gettransaction is None or op.gettransaction is _notransaction:
348 msg.append(' no-transaction')
348 msg.append(' no-transaction')
349 else:
349 else:
350 msg.append(' with-transaction')
350 msg.append(' with-transaction')
351 msg.append('\n')
351 msg.append('\n')
352 repo.ui.debug(''.join(msg))
352 repo.ui.debug(''.join(msg))
353 iterparts = enumerate(unbundler.iterparts())
353 iterparts = enumerate(unbundler.iterparts())
354 part = None
354 part = None
355 nbpart = 0
355 nbpart = 0
356 try:
356 try:
357 for nbpart, part in iterparts:
357 for nbpart, part in iterparts:
358 _processpart(op, part)
358 _processpart(op, part)
359 except Exception as exc:
359 except Exception as exc:
360 # Any exceptions seeking to the end of the bundle at this point are
360 # Any exceptions seeking to the end of the bundle at this point are
361 # almost certainly related to the underlying stream being bad.
361 # almost certainly related to the underlying stream being bad.
362 # And, chances are that the exception we're handling is related to
362 # And, chances are that the exception we're handling is related to
363 # getting in that bad state. So, we swallow the seeking error and
363 # getting in that bad state. So, we swallow the seeking error and
364 # re-raise the original error.
364 # re-raise the original error.
365 seekerror = False
365 seekerror = False
366 try:
366 try:
367 for nbpart, part in iterparts:
367 for nbpart, part in iterparts:
368 # consume the bundle content
368 # consume the bundle content
369 part.seek(0, 2)
369 part.seek(0, 2)
370 except Exception:
370 except Exception:
371 seekerror = True
371 seekerror = True
372
372
373 # Small hack to let caller code distinguish exceptions from bundle2
373 # Small hack to let caller code distinguish exceptions from bundle2
374 # processing from processing the old format. This is mostly
374 # processing from processing the old format. This is mostly
375 # needed to handle different return codes to unbundle according to the
375 # needed to handle different return codes to unbundle according to the
376 # type of bundle. We should probably clean up or drop this return code
376 # type of bundle. We should probably clean up or drop this return code
377 # craziness in a future version.
377 # craziness in a future version.
378 exc.duringunbundle2 = True
378 exc.duringunbundle2 = True
379 salvaged = []
379 salvaged = []
380 replycaps = None
380 replycaps = None
381 if op.reply is not None:
381 if op.reply is not None:
382 salvaged = op.reply.salvageoutput()
382 salvaged = op.reply.salvageoutput()
383 replycaps = op.reply.capabilities
383 replycaps = op.reply.capabilities
384 exc._replycaps = replycaps
384 exc._replycaps = replycaps
385 exc._bundle2salvagedoutput = salvaged
385 exc._bundle2salvagedoutput = salvaged
386
386
387 # Re-raising from a variable loses the original stack. So only use
387 # Re-raising from a variable loses the original stack. So only use
388 # that form if we need to.
388 # that form if we need to.
389 if seekerror:
389 if seekerror:
390 raise exc
390 raise exc
391 else:
391 else:
392 raise
392 raise
393 finally:
393 finally:
394 repo.ui.debug('bundle2-input-bundle: %i parts total\n' % nbpart)
394 repo.ui.debug('bundle2-input-bundle: %i parts total\n' % nbpart)
395
395
396 return op
396 return op
397
397
398 def _processpart(op, part):
398 def _processpart(op, part):
399 """process a single part from a bundle
399 """process a single part from a bundle
400
400
401 The part is guaranteed to have been fully consumed when the function exits
401 The part is guaranteed to have been fully consumed when the function exits
402 (even if an exception is raised)."""
402 (even if an exception is raised)."""
403 status = 'unknown' # used by debug output
403 status = 'unknown' # used by debug output
404 hardabort = False
404 hardabort = False
405 try:
405 try:
406 try:
406 try:
407 handler = parthandlermapping.get(part.type)
407 handler = parthandlermapping.get(part.type)
408 if handler is None:
408 if handler is None:
409 status = 'unsupported-type'
409 status = 'unsupported-type'
410 raise error.BundleUnknownFeatureError(parttype=part.type)
410 raise error.BundleUnknownFeatureError(parttype=part.type)
411 indebug(op.ui, 'found a handler for part %r' % part.type)
411 indebug(op.ui, 'found a handler for part %r' % part.type)
412 unknownparams = part.mandatorykeys - handler.params
412 unknownparams = part.mandatorykeys - handler.params
413 if unknownparams:
413 if unknownparams:
414 unknownparams = list(unknownparams)
414 unknownparams = list(unknownparams)
415 unknownparams.sort()
415 unknownparams.sort()
416 status = 'unsupported-params (%s)' % unknownparams
416 status = 'unsupported-params (%s)' % unknownparams
417 raise error.BundleUnknownFeatureError(parttype=part.type,
417 raise error.BundleUnknownFeatureError(parttype=part.type,
418 params=unknownparams)
418 params=unknownparams)
419 status = 'supported'
419 status = 'supported'
420 except error.BundleUnknownFeatureError as exc:
420 except error.BundleUnknownFeatureError as exc:
421 if part.mandatory: # mandatory parts
421 if part.mandatory: # mandatory parts
422 raise
422 raise
423 indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
423 indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
424 return # skip to part processing
424 return # skip to part processing
425 finally:
425 finally:
426 if op.ui.debugflag:
426 if op.ui.debugflag:
427 msg = ['bundle2-input-part: "%s"' % part.type]
427 msg = ['bundle2-input-part: "%s"' % part.type]
428 if not part.mandatory:
428 if not part.mandatory:
429 msg.append(' (advisory)')
429 msg.append(' (advisory)')
430 nbmp = len(part.mandatorykeys)
430 nbmp = len(part.mandatorykeys)
431 nbap = len(part.params) - nbmp
431 nbap = len(part.params) - nbmp
432 if nbmp or nbap:
432 if nbmp or nbap:
433 msg.append(' (params:')
433 msg.append(' (params:')
434 if nbmp:
434 if nbmp:
435 msg.append(' %i mandatory' % nbmp)
435 msg.append(' %i mandatory' % nbmp)
436 if nbap:
436 if nbap:
437 msg.append(' %i advisory' % nbmp)
437 msg.append(' %i advisory' % nbmp)
438 msg.append(')')
438 msg.append(')')
439 msg.append(' %s\n' % status)
439 msg.append(' %s\n' % status)
440 op.ui.debug(''.join(msg))
440 op.ui.debug(''.join(msg))
441
441
442 # handler is called outside the above try block so that we don't
442 # handler is called outside the above try block so that we don't
443 # risk catching KeyErrors from anything other than the
443 # risk catching KeyErrors from anything other than the
444 # parthandlermapping lookup (any KeyError raised by handler()
444 # parthandlermapping lookup (any KeyError raised by handler()
445 # itself represents a defect of a different variety).
445 # itself represents a defect of a different variety).
446 output = None
446 output = None
447 if op.captureoutput and op.reply is not None:
447 if op.captureoutput and op.reply is not None:
448 op.ui.pushbuffer(error=True, subproc=True)
448 op.ui.pushbuffer(error=True, subproc=True)
449 output = ''
449 output = ''
450 try:
450 try:
451 handler(op, part)
451 handler(op, part)
452 finally:
452 finally:
453 if output is not None:
453 if output is not None:
454 output = op.ui.popbuffer()
454 output = op.ui.popbuffer()
455 if output:
455 if output:
456 outpart = op.reply.newpart('output', data=output,
456 outpart = op.reply.newpart('output', data=output,
457 mandatory=False)
457 mandatory=False)
458 outpart.addparam('in-reply-to', str(part.id), mandatory=False)
            outpart.addparam('in-reply-to', str(part.id), mandatory=False)
    # If exiting or interrupted, do not attempt to seek the stream in the
    # finally block below. This makes abort faster.
    except (SystemExit, KeyboardInterrupt):
        hardabort = True
        raise
    finally:
        # consume the part content to not corrupt the stream.
        if not hardabort:
            part.seek(0, 2)


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)

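The caps blob handled by `decodecaps`/`encodecaps` above is plain text: one url-quoted `key` or `key=v1,v2` line per capability. The following is a standalone sketch of that round-trip using Python 3's `urllib.parse` in place of Mercurial's `urlreq` wrapper; the `_sketch` names are illustrative, not the Mercurial API.

```python
# Standalone sketch of the caps blob round-trip (illustrative only; the
# real code uses Mercurial's urlreq wrapper and operates on byte strings).
from urllib.parse import quote, unquote

def encodecaps_sketch(caps):
    # one "key" or "key=v1,v2" line per capability; keys sorted for
    # deterministic output
    chunks = []
    for ca in sorted(caps):
        vals = [quote(v) for v in caps[ca]]
        ca = quote(ca)
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)

def decodecaps_sketch(blob):
    # inverse of the above; values are always returned as a list
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        caps[unquote(key)] = [unquote(v) for v in vals]
    return caps
```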
bundletypes = {
    "": ("", 'UN'),       # only when using unbundle on ssh and old http servers
                          # since the unification ssh accepts a header but there
                          # is no capability signaling it.
    "HG20": (), # special-cased below
    "HG10UN": ("HG10UN", 'UN'),
    "HG10BZ": ("HG10", 'BZ'),
    "HG10GZ": ("HG10GZ", 'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype('UN')
        self._compopts = None

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, 'UN'):
            return
        assert not any(n.lower() == 'compression' for n, v in self._params)
        self.addparam('Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual application payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means that
        any failure to properly initialize the part after calling ``newpart``
        should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding the part if
        you need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(' (%i params)' % len(self._params))
            msg.append(' %i parts total\n' % len(self._parts))
            self.ui.debug(''.join(msg))
        outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, 'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(self._getcorechunk(),
                                                     self._compopts):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)
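The stream-level parameter chunk built by `_paramchunk` is a space-separated list of url-quoted `name` or `name=value` pairs. A standalone sketch of that encoding and its inverse (mirroring `_processallparams` below), using `urllib.parse` instead of `urlreq` and plain dicts instead of `util.sortdict`; names are illustrative:

```python
# Sketch of the stream-level parameter chunk format (illustrative only;
# the real code validates parameter names and dispatches handlers too).
from urllib.parse import quote, unquote

def encodeparams(params):
    # params is a list of (name, value) pairs, value may be None
    blocks = []
    for par, value in params:
        par = quote(par)
        if value is not None:
            par = '%s=%s' % (par, quote(value))
        blocks.append(par)
    return ' '.join(blocks)

def decodeparams(paramsblock):
    # split on spaces, then on the first '='; missing values become None
    params = {}
    for p in paramsblock.split(' '):
        parts = [unquote(i) for i in p.split('=', 1)]
        if len(parts) < 2:
            parts.append(None)
        params[parts[0]] = parts[1]
    return params
```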

    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, 'start of parts')
        for part in self._parts:
            outdebug(self.ui, 'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, 'end of bundle')
        yield _pack(_fpartheadersize, 0)


    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith('output'):
                salvaged.append(part.copy())
        return salvaged


class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2-level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2-level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)

def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != 'HG':
        raise error.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, 'start processing of %s stream' % magicstring)
    return unbundler
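`getunbundler` splits the 4-byte magic string into a 2-byte magic (`HG`) and a 2-byte version, then dispatches through `formatmap`. A minimal standalone sketch of just that header check, on byte streams; `parsemagic` is a hypothetical name and the real function returns an unbundler object, not the version:

```python
# Sketch of the 4-byte magic-string dispatch (illustrative only; the real
# formatmap maps version '20' to the unbundle20 class and returns an
# instance of it).
import io

def parsemagic(fp):
    magicstring = fp.read(4)
    if len(magicstring) < 4:
        raise ValueError('stream ended while reading bundle header')
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != b'HG':
        raise ValueError('not a Mercurial bundle')
    if version not in (b'20',):
        raise ValueError('unknown bundle version %s' % version.decode())
    return version
```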

class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    _magicstring = 'HG20'

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        self._compengine = util.compengines.forbundletype('UN')
        self._compressed = None
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, 'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            params = self._processallparams(params)
        return params

    def _processallparams(self, paramsblock):
        """process the whole block of stream level parameters"""
        params = util.sortdict()
        for p in paramsblock.split(' '):
            p = p.split('=', 1)
            p = [urlreq.unquote(i) for i in p]
            if len(p) < 2:
                p.append(None)
            self._processparam(*p)
            params[p[0]] = p[1]
        return params


    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory, and this function will raise a KeyError when they are
        unknown.

        Note: no options are currently supported. Any input will either be
        ignored or fail.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        try:
            handler = b2streamparamsmap[name.lower()]
        except KeyError:
            if name[0].islower():
                indebug(self.ui, "ignoring unknown parameter %r" % name)
            else:
                raise error.BundleUnknownFeatureError(params=(name,))
        else:
            handler(self, name, value)

    def _forwardchunks(self):
        """utility to transfer a bundle2 as binary

        This is made necessary by the fact that the 'getbundle' command over
        'ssh' has no way to know when the reply ends, relying on the bundle
        being interpreted to find its end. This is terrible and we are sorry,
        but we needed to move forward to get general delta enabled.
        """
        yield self._magicstring
        assert 'params' not in vars(self)
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        yield _pack(_fstreamparamsize, paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            self._processallparams(params)
            yield params
        assert self._compengine.bundletype == 'UN'
        # From there, payload might need to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        emptycount = 0
        while emptycount < 2:
            # so we can brainlessly loop
            assert _fpartheadersize == _fpayloadsize
            size = self._unpack(_fpartheadersize)[0]
            yield _pack(_fpartheadersize, size)
            if size:
                emptycount = 0
            else:
                emptycount += 1
                continue
            if size == flaginterrupt:
                continue
            elif size < 0:
                raise error.BundleValueError('negative chunk size: %i' % size)
            yield self._readexact(size)
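The loop in `_forwardchunks` copies size-prefixed chunks until it sees two consecutive zero sizes, which terminate the bundle. A simplified standalone sketch of that framing with `struct` (illustrative only; the real code also forwards the size words verbatim and handles the interrupt flag and negative sizes, which are omitted here):

```python
# Sketch of the size-prefixed chunk framing: each chunk is a big-endian
# int32 length followed by its payload; a zero length is an end marker,
# and two zero lengths in a row end the stream (illustrative only).
import io
import struct

def frame(chunks):
    # serialize chunks with length prefixes and a double-zero terminator
    out = b''
    for chunk in chunks:
        out += struct.pack('>i', len(chunk)) + chunk
    return out + struct.pack('>i', 0) * 2

def iterchunks(fp):
    # read chunks back until two consecutive empty markers are seen
    emptycount = 0
    while emptycount < 2:
        size = struct.unpack('>i', fp.read(4))[0]
        if size == 0:
            emptycount += 1
            continue
        emptycount = 0
        yield fp.read(size)
```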


    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        # From there, payload needs to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        indebug(self.ui, 'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            part.seek(0, 2)
            headerblock = self._readpartheader()
        indebug(self.ui, 'end of bundle2 stream')

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        self.params # load params
        return self._compressed

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

formatmap = {'20': unbundle20}

b2streamparamsmap = {}

def b2streamparamhandler(name):
    """register a handler for a stream level parameter"""
    def decorator(func):
        assert name not in b2streamparamsmap
        b2streamparamsmap[name] = func
        return func
    return decorator

@b2streamparamhandler('compression')
def processcompression(unbundler, param, value):
    """read compression parameter and install payload decompression"""
    if value not in util.compengines.supportedbundletypes:
        raise error.BundleUnknownFeatureError(params=(param,),
                                             values=(value,))
    unbundler._compengine = util.compengines.forbundletype(value)
    if value is not None:
        unbundler._compressed = True

class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. Remote side
    should be able to safely ignore the advisory ones.

    Both data and parameters cannot be modified after the generation has begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data='', mandatory=True):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise error.ProgrammingError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def __repr__(self):
        cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
        return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
                % (cls, id(self), self.id, self.type, self.mandatory))

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(self.type, self._mandatoryparams,
                              self._advisoryparams, self._data, self.mandatory)

    # methods used to define the part content
    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        """add a parameter to the part

        If 'mandatory' is set to True, the remote handler must claim support
        for this parameter or the unbundling will be aborted.

        The 'name' and 'value' cannot exceed 255 bytes each.
        """
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

    # methods used to generate the bundle2 stream
    def getchunks(self, ui):
        if self._generated is not None:
            raise error.ProgrammingError('part can only be consumed once')
        self._generated = False

        if ui.debugflag:
            msg = ['bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            if not self.data:
                msg.append(' empty payload')
            elif util.safehasattr(self.data, 'next'):
                msg.append(' streamed payload')
            else:
                msg.append(' %i bytes payload' % len(self.data))
            msg.append('\n')
            ui.debug(''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, 'part %s: "%s"' % (self.id, parttype))
        ## parttype
        header = [_pack(_fparttypesize, len(parttype)),
                  parttype, _pack(_fpartid, self.id),
                 ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        outdebug(ui, 'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, 'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug('bundle2-generatorexit\n')
            raise
        except BaseException as exc:
1008 # backup exception data for later
1008 # backup exception data for later
1009 ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
1009 ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
1010 % exc)
1010 % exc)
1011 tb = sys.exc_info()[2]
1011 tb = sys.exc_info()[2]
1012 msg = 'unexpected error: %s' % exc
1012 msg = 'unexpected error: %s' % exc
1013 interpart = bundlepart('error:abort', [('message', msg)],
1013 interpart = bundlepart('error:abort', [('message', msg)],
1014 mandatory=False)
1014 mandatory=False)
1015 interpart.id = 0
1015 interpart.id = 0
1016 yield _pack(_fpayloadsize, -1)
1016 yield _pack(_fpayloadsize, -1)
1017 for chunk in interpart.getchunks(ui=ui):
1017 for chunk in interpart.getchunks(ui=ui):
1018 yield chunk
1018 yield chunk
1019 outdebug(ui, 'closing payload chunk')
1019 outdebug(ui, 'closing payload chunk')
1020 # abort current part payload
1020 # abort current part payload
1021 yield _pack(_fpayloadsize, 0)
1021 yield _pack(_fpayloadsize, 0)
1022 pycompat.raisewithtb(exc, tb)
1022 pycompat.raisewithtb(exc, tb)
1023 # end of payload
1023 # end of payload
1024 outdebug(ui, 'closing payload chunk')
1024 outdebug(ui, 'closing payload chunk')
1025 yield _pack(_fpayloadsize, 0)
1025 yield _pack(_fpayloadsize, 0)
1026 self._generated = True
1026 self._generated = True
1027
1027
    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data

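# Illustrative sketch (not used by bundle2 itself): the payload framing that
# getchunks() produces above is a sequence of size-prefixed chunks closed by
# a zero-size chunk. The '>i' format here mirrors what this module's
# _fpayloadsize constant is assumed to be (big-endian signed 32-bit).
import struct

def _framepayloadsketch(chunks):
    """yield size-prefixed chunks followed by the closing 0-size marker"""
    for chunk in chunks:
        yield struct.pack('>i', len(chunk))
        yield chunk
    yield struct.pack('>i', 0)  # end-of-payload marker

# b''.join(_framepayloadsketch([b'abc', b'defg']))
# -> b'\x00\x00\x00\x03abc\x00\x00\x00\x04defg\x00\x00\x00\x00'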
flaginterrupt = -1

class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows transmitting exceptions raised on the producer side during
    part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """reads a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):

        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' opening out of band context\n')
        indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, 'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        _processpart(op, part)
        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' closing out of band context\n')

class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise error.ProgrammingError('no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable('no repo access from stream interruption')

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._payloadstream = None
        self._readheader()
        self._mandatory = None
        self._chunkindex = [] #(payload, file) position tuples for chunk starts
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to setup all logic related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = util.sortdict(self.mandatoryparams)
        self.params.update(self.advisoryparams)
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, 'Must start with chunk 0'
            self._chunkindex.append((0, self._tellfp()))
        else:
            assert chunknum < len(self._chunkindex), \
                   'Unknown chunk %d' % chunknum
            self._seekfp(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]
        payloadsize = self._unpack(_fpayloadsize)[0]
        indebug(self.ui, 'payload chunk size: %i' % payloadsize)
        while payloadsize:
            if payloadsize == flaginterrupt:
                # interruption detection, the handler will now read a
                # single part and process it.
                interrupthandler(self.ui, self._fp)()
            elif payloadsize < 0:
                msg = 'negative payload chunk size: %i' % payloadsize
                raise error.BundleValueError(msg)
            else:
                result = self._readexact(payloadsize)
                chunknum += 1
                pos += payloadsize
                if chunknum == len(self._chunkindex):
                    self._chunkindex.append((pos, self._tellfp()))
                yield result
            payloadsize = self._unpack(_fpayloadsize)[0]
            indebug(self.ui, 'payload chunk size: %i' % payloadsize)

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError('Unknown chunk')

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, 'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, 'part id: "%s"' % self.id)
        # extract mandatory bit from type
        self.mandatory = (self.type != self.type.lower())
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of pairs again
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param value
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # we read the data, tell it
        self._initialized = True

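# Illustrative sketch (not used by bundle2 itself): how _readheader() above
# turns the flat list of parameter sizes back into (keysize, valuesize)
# pairs and then splits the mandatory pairs from the advisory ones.
def _pairsizessketch(paramsizes, mancount):
    """return (mandatory, advisory) lists of (keysize, valuesize) pairs"""
    pairs = list(zip(paramsizes[::2], paramsizes[1::2]))
    return pairs[:mancount], pairs[mancount:]

# _pairsizessketch([3, 5, 4, 2], 1) -> ([(3, 5)], [(4, 2)])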
    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug('bundle2-input-part: total payload size %i\n'
                              % self._pos)
            self.consumed = True
        return data

    def tell(self):
        return self._pos

    def seek(self, offset, whence=0):
        if whence == 0:
            newpos = offset
        elif whence == 1:
            newpos = self._pos + offset
        elif whence == 2:
            if not self.consumed:
                self.read()
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError('Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            self.read()
        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError('Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_('Seek failed\n'))
            self._pos = newpos

    def _seekfp(self, offset, whence=0):
        """move the underlying file pointer

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2-level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def _tellfp(self):
        """return the file offset, or None if file is not seekable

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2-level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError as e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None

# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {'HG20': (),
                'error': ('abort', 'unsupportedcontent', 'pushraced',
                          'pushkey'),
                'listkeys': (),
                'pushkey': (),
                'digests': tuple(sorted(util.DIGESTS.keys())),
                'remote-changegroup': ('http', 'https'),
                'hgtagsfnodes': (),
               }

def getrepocaps(repo, allowpushback=False):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.
    """
    caps = capabilities.copy()
    caps['changegroup'] = tuple(sorted(
        changegroup.supportedincomingversions(repo)))
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple('V%i' % v for v in obsolete.formats)
        caps['obsmarkers'] = supportedformat
    if allowpushback:
        caps['pushback'] = ()
    cpmode = repo.ui.config('server', 'concurrent-push-mode', 'strict')
    if cpmode == 'check-related':
        caps['checkheads'] = ('related',)
    return caps

def bundle2caps(remote):
    """return the bundle capabilities of a peer as dict"""
    raw = remote.capable('bundle2')
    if not raw and raw != '':
        return {}
    capsblob = urlreq.unquote(remote.capable('bundle2'))
    return decodecaps(capsblob)

def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict
    """
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

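# Usage sketch for obsmarkersversion() above, on a hypothetical capabilities
# dict: the advertised 'obsmarkers' capability is a tuple of 'V<n>' strings
# and the helper extracts the integer versions. The extraction is restated
# here so the sketch stands alone.
def _obsversionssketch(caps):
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

# _obsversionssketch({'obsmarkers': ('V0', 'V1')}) -> [0, 1]
# _obsversionssketch({}) -> []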
def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
                   vfs=None, compression=None, compopts=None):
    if bundletype.startswith('HG10'):
        cg = changegroup.getchangegroup(repo, source, outgoing, version='01')
        return writebundle(ui, cg, filename, bundletype, vfs=vfs,
                           compression=compression, compopts=compopts)
    elif not bundletype.startswith('HG20'):
        raise error.ProgrammingError('unknown bundle type: %s' % bundletype)

    caps = {}
    if 'obsolescence' in opts:
        caps['obsmarkers'] = ('V1',)
    bundle = bundle20(ui, caps)
    bundle.setcompression(compression, compopts)
    _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
    chunkiter = bundle.getchunks()

    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)

def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
    # We should eventually reconcile this logic with the one behind
    # 'exchange.getbundle2partsgenerator'.
    #
    # The types of input from 'getbundle' and 'writenewbundle' are a bit
    # different right now. So we keep them separated for now for the sake of
    # simplicity.

    # we always want a changegroup in such bundle
    cgversion = opts.get('cg.version')
    if cgversion is None:
        cgversion = changegroup.safeversion(repo)
    cg = changegroup.getchangegroup(repo, source, outgoing,
                                    version=cgversion)
    part = bundler.newpart('changegroup', data=cg.getchunks())
    part.addparam('version', cg.version)
    if 'clcount' in cg.extras:
        part.addparam('nbchanges', str(cg.extras['clcount']),
                      mandatory=False)

    addparttagsfnodescache(repo, bundler, outgoing)

    if opts.get('obsolescence', False):
        obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
        buildobsmarkerspart(bundler, obsmarkers)

    if opts.get('phases', False):
        headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
        phasedata = []
        for phase in phases.allphases:
            for head in headsbyphase[phase]:
                phasedata.append(_pack(_fphasesentry, phase, head))
        bundler.newpart('phase-heads', data=''.join(phasedata))

def addparttagsfnodescache(repo, bundler, outgoing):
    # we include the tags fnode cache for the bundle changesets
    # (as an optional part)
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.missingheads:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode is not None:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart('hgtagsfnodes', data=''.join(chunks))

def buildobsmarkerspart(bundler, markers):
    """add an obsmarker part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker format.
    """
    if not markers:
        return None

    remoteversions = obsmarkersversion(bundler.capabilities)
    version = obsolete.commonversion(remoteversions)
    if version is None:
        raise ValueError('bundler does not support common obsmarker format')
    stream = obsolete.encodemarkers(markers, True, version=version)
    return bundler.newpart('obsmarkers', data=stream)

def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
                compopts=None):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
1448 The bundle file will be deleted in case of errors.
1449 """
1449 """
1450
1450
1451 if bundletype == "HG20":
1451 if bundletype == "HG20":
1452 bundle = bundle20(ui)
1452 bundle = bundle20(ui)
1453 bundle.setcompression(compression, compopts)
1453 bundle.setcompression(compression, compopts)
1454 part = bundle.newpart('changegroup', data=cg.getchunks())
1454 part = bundle.newpart('changegroup', data=cg.getchunks())
1455 part.addparam('version', cg.version)
1455 part.addparam('version', cg.version)
1456 if 'clcount' in cg.extras:
1456 if 'clcount' in cg.extras:
1457 part.addparam('nbchanges', str(cg.extras['clcount']),
1457 part.addparam('nbchanges', str(cg.extras['clcount']),
1458 mandatory=False)
1458 mandatory=False)
1459 chunkiter = bundle.getchunks()
1459 chunkiter = bundle.getchunks()
1460 else:
1460 else:
1461 # compression argument is only for the bundle2 case
1461 # compression argument is only for the bundle2 case
1462 assert compression is None
1462 assert compression is None
1463 if cg.version != '01':
1463 if cg.version != '01':
1464 raise error.Abort(_('old bundle types only supports v1 '
1464 raise error.Abort(_('old bundle types only supports v1 '
1465 'changegroups'))
1465 'changegroups'))
1466 header, comp = bundletypes[bundletype]
1466 header, comp = bundletypes[bundletype]
1467 if comp not in util.compengines.supportedbundletypes:
1467 if comp not in util.compengines.supportedbundletypes:
1468 raise error.Abort(_('unknown stream compression type: %s')
1468 raise error.Abort(_('unknown stream compression type: %s')
1469 % comp)
1469 % comp)
1470 compengine = util.compengines.forbundletype(comp)
1470 compengine = util.compengines.forbundletype(comp)
1471 def chunkiter():
1471 def chunkiter():
1472 yield header
1472 yield header
1473 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1473 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1474 yield chunk
1474 yield chunk
1475 chunkiter = chunkiter()
1475 chunkiter = chunkiter()
1476
1476
1477 # parse the changegroup data, otherwise we will block
1477 # parse the changegroup data, otherwise we will block
1478 # in case of sshrepo because we don't know the end of the stream
1478 # in case of sshrepo because we don't know the end of the stream
1479 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1479 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1480
1480
1481 def combinechangegroupresults(results):
1482 """logic to combine 0 or more addchangegroup results into one"""
1483 changedheads = 0
1484 result = 1
1485 for ret in results:
1486 # If any changegroup result is 0, return 0
1487 if ret == 0:
1488 result = 0
1489 break
1490 if ret < -1:
1491 changedheads += ret + 1
1492 elif ret > 1:
1493 changedheads += ret - 1
1494 if changedheads > 0:
1495 result = 1 + changedheads
1496 elif changedheads < 0:
1497 result = -1 + changedheads
1498 return result
1499
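The new `combinechangegroupresults()` relies on the addchangegroup return-value convention: 1 means the heads were unchanged, 1 + n means n heads were added, -1 - n means n heads were removed, and 0 means an error. A standalone copy of the function above can be exercised to see how results fold together:

```python
def combinechangegroupresults(results):
    """Combine 0 or more addchangegroup results into one (copied from the
    function added by this commit)."""
    changedheads = 0
    result = 1
    for ret in results:
        # If any changegroup result is 0 (error), the combined result is 0.
        if ret == 0:
            result = 0
            break
        if ret < -1:       # ret == -1 - n: n heads were removed
            changedheads += ret + 1
        elif ret > 1:      # ret == 1 + n: n heads were added
            changedheads += ret - 1
    if changedheads > 0:
        result = 1 + changedheads
    elif changedheads < 0:
        result = -1 + changedheads
    return result

print(combinechangegroupresults([]))         # 1: no results, heads unchanged
print(combinechangegroupresults([2, 2]))     # 3: one head added per result
print(combinechangegroupresults([-3, 1]))    # -3: two heads removed overall
print(combinechangegroupresults([1, 0, 5]))  # 0: any error wins
```

Note that the loop accumulates head deltas, so two results that each add one head combine to a single "+2 heads" result, matching what a single addchangegroup over both changegroups would have reported.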
@parthandler('changegroup', ('version', 'nbchanges', 'treemanifest'))
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will see massive rework before
    being inflicted on any end-user.
    """
    tr = op.gettransaction()
    unpackerversion = inpart.params.get('version', '01')
    # We should raise an appropriate exception here
    cg = changegroup.getunbundler(unpackerversion, inpart, None)
    # the source and url passed here are overwritten by the ones contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    nbchangesets = None
    if 'nbchanges' in inpart.params:
        nbchangesets = int(inpart.params.get('nbchanges'))
    if ('treemanifest' in inpart.params and
        'treemanifest' not in op.repo.requirements):
        if len(op.repo.changelog) != 0:
            raise error.Abort(_(
                "bundle contains tree manifests, but local repo is "
                "non-empty and does not use tree manifests"))
        op.repo.requirements.add('treemanifest')
        op.repo._applyopenerreqs()
        op.repo._writerequirements()
    ret, addednodes = cg.apply(op.repo, tr, 'bundle2', 'bundle2',
                               expectedtotal=nbchangesets)
    op.records.add('changegroup', {
        'return': ret,
        'addednodes': addednodes,
    })
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup', mandatory=False)
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

_remotechangegroupparams = tuple(['url', 'size', 'digests'] +
    ['digest:%s' % k for k in util.DIGESTS.keys()])
@parthandler('remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given an url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
    - url: the url to the bundle10.
    - size: the bundle10 file size. It is used to validate what was
      retrieved by the client matches the server knowledge about the bundle.
    - digests: a space separated list of the digest types provided as
      parameters.
    - digest:<digest-type>: the hexadecimal representation of the digest with
      that name. Like the size, it is used to validate what was retrieved by
      the client matches what the server knows about the bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params['url']
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
    parsed_url = util.url(raw_url)
    if parsed_url.scheme not in capabilities['remote-changegroup']:
        raise error.Abort(_('remote-changegroup does not support %s urls') %
                          parsed_url.scheme)

    try:
        size = int(inpart.params['size'])
    except ValueError:
        raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
                          % 'size')
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')

    digests = {}
    for typ in inpart.params.get('digests', '').split():
        param = 'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise error.Abort(_('remote-changegroup: missing "%s" param') %
                              param)
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    tr = op.gettransaction()
    from . import exchange
    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise error.Abort(_('%s: not a bundle version 1.0') %
                          util.hidepassword(raw_url))
    ret, addednodes = cg.apply(op.repo, tr, 'bundle2', 'bundle2')
    op.records.add('changegroup', {
        'return': ret,
        'addednodes': addednodes,
    })
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup')
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except error.Abort as e:
        raise error.Abort(_('bundle at %s is corrupted:\n%s') %
                          (util.hidepassword(raw_url), str(e)))
    assert not inpart.read()

@parthandler('reply:changegroup', ('return', 'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    if sorted(heads) != sorted(op.repo.heads()):
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')

@parthandler('check:updated-heads')
def handlecheckupdatedheads(op, inpart):
    """check for race on the heads touched by a push

    This is similar to 'check:heads' but focuses on the heads actually updated
    during the push. If other activities happen on unrelated heads, they are
    ignored.

    This allows servers with high traffic to avoid push contention as long as
    only unrelated parts of the graph are involved."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()

    currentheads = set()
    for ls in op.repo.branchmap().itervalues():
        currentheads.update(ls)

    for h in heads:
        if h not in currentheads:
            raise error.PushRaced('repository changed while pushing - '
                                  'please try again')

@parthandler('output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(_('remote: %s\n') % line)

@parthandler('replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

class AbortFromPart(error.Abort):
    """Sub-class of Abort that denotes an error from a bundle2 part."""

@parthandler('error:abort', ('message', 'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise AbortFromPart(inpart.params['message'],
                        hint=inpart.params.get('hint'))

@parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
                               'in-reply-to'))
def handleerrorpushkey(op, inpart):
    """Used to transmit failure of a mandatory pushkey over the wire"""
    kwargs = {}
    for name in ('namespace', 'key', 'new', 'old', 'ret'):
        value = inpart.params.get(name)
        if value is not None:
            kwargs[name] = value
    raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)

@parthandler('error:unsupportedcontent', ('parttype', 'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get('parttype')
    if parttype is not None:
        kwargs['parttype'] = parttype
    params = inpart.params.get('params')
    if params is not None:
        kwargs['params'] = params.split('\0')

    raise error.BundleUnknownFeatureError(**kwargs)

@parthandler('error:pushraced', ('message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_('push failed:'), inpart.params['message'])

@parthandler('listkeys', ('namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params['namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add('listkeys', (namespace, r))

@parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params['namespace'])
    key = dec(inpart.params['key'])
    old = dec(inpart.params['old'])
    new = dec(inpart.params['new'])
    # Grab the transaction to ensure that we have the lock before performing
    # the pushkey.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {'namespace': namespace,
              'key': key,
              'old': old,
              'new': new}
    op.records.add('pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart('reply:pushkey')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('return', '%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in ('namespace', 'key', 'new', 'old', 'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)

def _readphaseheads(inpart):
    headsbyphase = [[] for i in phases.allphases]
    entrysize = struct.calcsize(_fphasesentry)
    while True:
        entry = inpart.read(entrysize)
        if len(entry) < entrysize:
            if entry:
                raise error.Abort(_('bad phase-heads bundle part'))
            break
        phase, node = struct.unpack(_fphasesentry, entry)
        headsbyphase[phase].append(node)
    return headsbyphase

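The phase-heads payload parsed by `_readphaseheads()` above is a flat sequence of fixed-size entries, each a phase number plus a 20-byte node. A small round-trip sketch of that decode loop, assuming `'>i20s'` for `_fphasesentry` (the real format string is defined elsewhere in bundle2.py, but a big-endian int plus 20-byte node is consistent with the fixed-size loop above):

```python
import struct
from io import BytesIO

# Assumption: stand-ins for _fphasesentry and phases.allphases, which are
# defined outside this excerpt.
_fphasesentry = '>i20s'
allphases = [0, 1, 2]  # public, draft, secret

def readphaseheads(stream):
    """Mirror of _readphaseheads(): read fixed-size (phase, node) entries
    until the stream is exhausted; a trailing partial entry is an error."""
    headsbyphase = [[] for i in allphases]
    entrysize = struct.calcsize(_fphasesentry)
    while True:
        entry = stream.read(entrysize)
        if len(entry) < entrysize:
            if entry:
                raise ValueError('bad phase-heads bundle part')
            break
        phase, node = struct.unpack(_fphasesentry, entry)
        headsbyphase[phase].append(node)
    return headsbyphase

# Round-trip: one public (0) head and one draft (1) head.
payload = (struct.pack(_fphasesentry, 0, b'\x11' * 20) +
           struct.pack(_fphasesentry, 1, b'\x22' * 20))
heads = readphaseheads(BytesIO(payload))
```

The length check doubles as both the end-of-stream test and a corruption guard: an empty read ends the loop cleanly, while a short non-empty read aborts.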
@parthandler('phase-heads')
def handlephases(op, inpart):
    """apply phases from bundle part to repo"""
    headsbyphase = _readphaseheads(inpart)
    addednodes = []
    for entry in op.records['changegroup']:
        addednodes.extend(entry['addednodes'])
    phases.updatephases(op.repo.unfiltered(), op.gettransaction(), headsbyphase,
                        addednodes)

@parthandler('reply:pushkey', ('return', 'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params['return'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('pushkey', {'return': ret}, partid)

@parthandler('obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
        op.ui.write(('obsmarker-exchange: %i bytes received\n')
                    % len(markerdata))
    # The mergemarkers call will crash if marker creation is not enabled.
    # We want to avoid this if the part is advisory.
    if not inpart.mandatory and op.repo.obsstore.readonly:
        op.repo.ui.debug('ignoring obsolescence markers, feature not enabled')
        return
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    op.repo.invalidatevolatilesets()
    if new:
        op.repo.ui.status(_('%i new obsolescence markers\n') % new)
    op.records.add('obsmarkers', {'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart('reply:obsmarkers')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('new', '%i' % new, mandatory=False)


@parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers request"""
    ret = int(inpart.params['new'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('obsmarkers', {'new': ret}, partid)

@parthandler('hgtagsfnodes')
def handlehgtagsfnodes(op, inpart):
    """Applies .hgtags fnodes cache entries to the local repo.

    Payload is pairs of 20 byte changeset nodes and filenodes.
    """
    # Grab the transaction so we ensure that we have the lock at this point.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    cache = tags.hgtagsfnodescache(op.repo.unfiltered())

    count = 0
    while True:
        node = inpart.read(20)
        fnode = inpart.read(20)
        if len(node) < 20 or len(fnode) < 20:
            op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
            break
        cache.setfnode(node, fnode)
        count += 1

    cache.write()
    op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
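The hgtagsfnodes payload read by `handlehgtagsfnodes()` above (and produced by `addparttagsfnodescache()` earlier) is just concatenated 20-byte node/fnode pairs with no framing. The read loop can be sketched standalone:

```python
from io import BytesIO

def iterfnodepairs(stream):
    """Yield (node, fnode) pairs from an hgtagsfnodes payload, mirroring the
    20-byte read loop in handlehgtagsfnodes; incomplete trailing data is
    silently ignored, as in the handler."""
    while True:
        node = stream.read(20)
        fnode = stream.read(20)
        if len(node) < 20 or len(fnode) < 20:
            break
        yield node, fnode

# Two entries: (node, fnode) for two head changesets.
payload = b'\xaa' * 20 + b'\xbb' * 20 + b'\xcc' * 20 + b'\xdd' * 20
pairs = list(iterfnodepairs(BytesIO(payload)))
```

This is why the producer side checks `if chunks:` before creating the part: an empty payload would be indistinguishable from a present-but-useless one, so the part is simply omitted.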
@@ -1,1026 +1,1007 b''
1 # changegroup.py - Mercurial changegroup manipulation functions
1 # changegroup.py - Mercurial changegroup manipulation functions
2 #
2 #
3 # Copyright 2006 Matt Mackall <mpm@selenic.com>
3 # Copyright 2006 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import os
10 import os
11 import struct
11 import struct
12 import tempfile
12 import tempfile
13 import weakref
13 import weakref
14
14
15 from .i18n import _
15 from .i18n import _
16 from .node import (
16 from .node import (
17 hex,
17 hex,
18 nullrev,
18 nullrev,
19 short,
19 short,
20 )
20 )
21
21
22 from . import (
22 from . import (
23 dagutil,
23 dagutil,
24 discovery,
24 discovery,
25 error,
25 error,
26 mdiff,
26 mdiff,
27 phases,
27 phases,
28 pycompat,
28 pycompat,
29 util,
29 util,
30 )
30 )
31
31
32 _CHANGEGROUPV1_DELTA_HEADER = "20s20s20s20s"
32 _CHANGEGROUPV1_DELTA_HEADER = "20s20s20s20s"
33 _CHANGEGROUPV2_DELTA_HEADER = "20s20s20s20s20s"
33 _CHANGEGROUPV2_DELTA_HEADER = "20s20s20s20s20s"
34 _CHANGEGROUPV3_DELTA_HEADER = ">20s20s20s20s20sH"
34 _CHANGEGROUPV3_DELTA_HEADER = ">20s20s20s20s20sH"
35
35
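The three header formats above differ only in the extra deltabase node (cg2) and the trailing 16-bit flags field (cg3). Their fixed sizes can be checked with `struct` (a quick sketch mirroring the constants above):

```python
import struct

# Delta header formats, copied from the constants above.
V1 = "20s20s20s20s"         # node, p1, p2, linknode (cs)
V2 = "20s20s20s20s20s"      # adds an explicit deltabase node
V3 = ">20s20s20s20s20sH"    # adds 16-bit revlog flags, big-endian

assert struct.calcsize(V1) == 80
assert struct.calcsize(V2) == 100
assert struct.calcsize(V3) == 102
```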
36 def readexactly(stream, n):
36 def readexactly(stream, n):
37 '''read n bytes from stream.read and abort if fewer were available'''
37 '''read n bytes from stream.read and abort if fewer were available'''
38 s = stream.read(n)
38 s = stream.read(n)
39 if len(s) < n:
39 if len(s) < n:
40 raise error.Abort(_("stream ended unexpectedly"
40 raise error.Abort(_("stream ended unexpectedly"
41 " (got %d bytes, expected %d)")
41 " (got %d bytes, expected %d)")
42 % (len(s), n))
42 % (len(s), n))
43 return s
43 return s
44
44
45 def getchunk(stream):
45 def getchunk(stream):
46 """return the next chunk from stream as a string"""
46 """return the next chunk from stream as a string"""
47 d = readexactly(stream, 4)
47 d = readexactly(stream, 4)
48 l = struct.unpack(">l", d)[0]
48 l = struct.unpack(">l", d)[0]
49 if l <= 4:
49 if l <= 4:
50 if l:
50 if l:
51 raise error.Abort(_("invalid chunk length %d") % l)
51 raise error.Abort(_("invalid chunk length %d") % l)
52 return ""
52 return ""
53 return readexactly(stream, l - 4)
53 return readexactly(stream, l - 4)
54
54
55 def chunkheader(length):
55 def chunkheader(length):
56 """return a changegroup chunk header (string)"""
56 """return a changegroup chunk header (string)"""
57 return struct.pack(">l", length + 4)
57 return struct.pack(">l", length + 4)
58
58
59 def closechunk():
59 def closechunk():
60 """return a changegroup chunk header (string) for a zero-length chunk"""
60 """return a changegroup chunk header (string) for a zero-length chunk"""
61 return struct.pack(">l", 0)
61 return struct.pack(">l", 0)
62
62
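The chunk framing defined by `chunkheader`, `closechunk`, and `getchunk` above is a simple length-prefixed scheme: the 4-byte big-endian length includes the header itself, and a zero-length chunk terminates a group. A self-contained round-trip sketch, simplified from the functions above (the abort-on-short-read handling is omitted):

```python
import io
import struct

def chunkheader(length):
    # Stored length covers the payload plus the 4-byte header itself.
    return struct.pack(">l", length + 4)

def getchunk(stream):
    l = struct.unpack(">l", stream.read(4))[0]
    if l <= 4:
        return b""  # zero-length chunk: end of the group
    return stream.read(l - 4)

payload = b"hello"
buf = io.BytesIO(chunkheader(len(payload)) + payload + struct.pack(">l", 0))
assert getchunk(buf) == b"hello"
assert getchunk(buf) == b""  # closing chunk
```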
63 def combineresults(results):
64 """logic to combine 0 or more addchangegroup results into one"""
65 changedheads = 0
66 result = 1
67 for ret in results:
68 # If any changegroup result is 0, return 0
69 if ret == 0:
70 result = 0
71 break
72 if ret < -1:
73 changedheads += ret + 1
74 elif ret > 1:
75 changedheads += ret - 1
76 if changedheads > 0:
77 result = 1 + changedheads
78 elif changedheads < 0:
79 result = -1 + changedheads
80 return result
81
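The head-count encoding combined by `combineresults()` above (being moved to bundle2 in this commit) can be exercised directly. Each input is an addchangegroup() return code: 0 for failure, 1 for no head change, 1+n for n added heads, -1-n for n removed heads. A standalone copy with a worked case:

```python
def combineresults(results):
    """combine 0 or more addchangegroup results into one (as above)"""
    changedheads = 0
    result = 1
    for ret in results:
        # If any changegroup result is 0, return 0
        if ret == 0:
            result = 0
            break
        if ret < -1:
            changedheads += ret + 1
        elif ret > 1:
            changedheads += ret - 1
    if changedheads > 0:
        result = 1 + changedheads
    elif changedheads < 0:
        result = -1 + changedheads
    return result

# One group adding 2 heads (3) plus one removing 1 head (-2) nets +1 head.
assert combineresults([3, -2]) == 2
```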
82 def writechunks(ui, chunks, filename, vfs=None):
63 def writechunks(ui, chunks, filename, vfs=None):
83 """Write chunks to a file and return its filename.
64 """Write chunks to a file and return its filename.
84
65
85 The stream is assumed to be a bundle file.
66 The stream is assumed to be a bundle file.
86 Existing files will not be overwritten.
67 Existing files will not be overwritten.
87 If no filename is specified, a temporary file is created.
68 If no filename is specified, a temporary file is created.
88 """
69 """
89 fh = None
70 fh = None
90 cleanup = None
71 cleanup = None
91 try:
72 try:
92 if filename:
73 if filename:
93 if vfs:
74 if vfs:
94 fh = vfs.open(filename, "wb")
75 fh = vfs.open(filename, "wb")
95 else:
76 else:
96 # Increase default buffer size because default is usually
77 # Increase default buffer size because default is usually
97 # small (4k is common on Linux).
78 # small (4k is common on Linux).
98 fh = open(filename, "wb", 131072)
79 fh = open(filename, "wb", 131072)
99 else:
80 else:
100 fd, filename = tempfile.mkstemp(prefix="hg-bundle-", suffix=".hg")
81 fd, filename = tempfile.mkstemp(prefix="hg-bundle-", suffix=".hg")
101 fh = os.fdopen(fd, pycompat.sysstr("wb"))
82 fh = os.fdopen(fd, pycompat.sysstr("wb"))
102 cleanup = filename
83 cleanup = filename
103 for c in chunks:
84 for c in chunks:
104 fh.write(c)
85 fh.write(c)
105 cleanup = None
86 cleanup = None
106 return filename
87 return filename
107 finally:
88 finally:
108 if fh is not None:
89 if fh is not None:
109 fh.close()
90 fh.close()
110 if cleanup is not None:
91 if cleanup is not None:
111 if filename and vfs:
92 if filename and vfs:
112 vfs.unlink(cleanup)
93 vfs.unlink(cleanup)
113 else:
94 else:
114 os.unlink(cleanup)
95 os.unlink(cleanup)
115
96
116 class cg1unpacker(object):
97 class cg1unpacker(object):
117 """Unpacker for cg1 changegroup streams.
98 """Unpacker for cg1 changegroup streams.
118
99
119 A changegroup unpacker handles the framing of the revision data in
100 A changegroup unpacker handles the framing of the revision data in
120 the wire format. Most consumers will want to use the apply()
101 the wire format. Most consumers will want to use the apply()
121 method to add the changes from the changegroup to a repository.
102 method to add the changes from the changegroup to a repository.
122
103
123 If you're forwarding a changegroup unmodified to another consumer,
104 If you're forwarding a changegroup unmodified to another consumer,
124 use getchunks(), which returns an iterator of changegroup
105 use getchunks(), which returns an iterator of changegroup
125 chunks. This is mostly useful for cases where you need to know the
106 chunks. This is mostly useful for cases where you need to know the
126 data stream has ended by observing the end of the changegroup.
107 data stream has ended by observing the end of the changegroup.
127
108
128 deltachunk() is useful only if you're applying delta data. Most
109 deltachunk() is useful only if you're applying delta data. Most
129 consumers should prefer apply() instead.
110 consumers should prefer apply() instead.
130
111
131 A few other public methods exist. Those are used only for
112 A few other public methods exist. Those are used only for
132 bundlerepo and some debug commands - their use is discouraged.
113 bundlerepo and some debug commands - their use is discouraged.
133 """
114 """
134 deltaheader = _CHANGEGROUPV1_DELTA_HEADER
115 deltaheader = _CHANGEGROUPV1_DELTA_HEADER
135 deltaheadersize = struct.calcsize(deltaheader)
116 deltaheadersize = struct.calcsize(deltaheader)
136 version = '01'
117 version = '01'
137 _grouplistcount = 1 # One list of files after the manifests
118 _grouplistcount = 1 # One list of files after the manifests
138
119
139 def __init__(self, fh, alg, extras=None):
120 def __init__(self, fh, alg, extras=None):
140 if alg is None:
121 if alg is None:
141 alg = 'UN'
122 alg = 'UN'
142 if alg not in util.compengines.supportedbundletypes:
123 if alg not in util.compengines.supportedbundletypes:
143 raise error.Abort(_('unknown stream compression type: %s')
124 raise error.Abort(_('unknown stream compression type: %s')
144 % alg)
125 % alg)
145 if alg == 'BZ':
126 if alg == 'BZ':
146 alg = '_truncatedBZ'
127 alg = '_truncatedBZ'
147
128
148 compengine = util.compengines.forbundletype(alg)
129 compengine = util.compengines.forbundletype(alg)
149 self._stream = compengine.decompressorreader(fh)
130 self._stream = compengine.decompressorreader(fh)
150 self._type = alg
131 self._type = alg
151 self.extras = extras or {}
132 self.extras = extras or {}
152 self.callback = None
133 self.callback = None
153
134
154 # These methods (compressed, read, seek, tell) all appear to only
135 # These methods (compressed, read, seek, tell) all appear to only
155 # be used by bundlerepo, but it's a little hard to tell.
136 # be used by bundlerepo, but it's a little hard to tell.
156 def compressed(self):
137 def compressed(self):
157 return self._type is not None and self._type != 'UN'
138 return self._type is not None and self._type != 'UN'
158 def read(self, l):
139 def read(self, l):
159 return self._stream.read(l)
140 return self._stream.read(l)
160 def seek(self, pos):
141 def seek(self, pos):
161 return self._stream.seek(pos)
142 return self._stream.seek(pos)
162 def tell(self):
143 def tell(self):
163 return self._stream.tell()
144 return self._stream.tell()
164 def close(self):
145 def close(self):
165 return self._stream.close()
146 return self._stream.close()
166
147
167 def _chunklength(self):
148 def _chunklength(self):
168 d = readexactly(self._stream, 4)
149 d = readexactly(self._stream, 4)
169 l = struct.unpack(">l", d)[0]
150 l = struct.unpack(">l", d)[0]
170 if l <= 4:
151 if l <= 4:
171 if l:
152 if l:
172 raise error.Abort(_("invalid chunk length %d") % l)
153 raise error.Abort(_("invalid chunk length %d") % l)
173 return 0
154 return 0
174 if self.callback:
155 if self.callback:
175 self.callback()
156 self.callback()
176 return l - 4
157 return l - 4
177
158
178 def changelogheader(self):
159 def changelogheader(self):
179 """v10 does not have a changelog header chunk"""
160 """v10 does not have a changelog header chunk"""
180 return {}
161 return {}
181
162
182 def manifestheader(self):
163 def manifestheader(self):
183 """v10 does not have a manifest header chunk"""
164 """v10 does not have a manifest header chunk"""
184 return {}
165 return {}
185
166
186 def filelogheader(self):
167 def filelogheader(self):
187 """return the header of the filelogs chunk, v10 only has the filename"""
168 """return the header of the filelogs chunk, v10 only has the filename"""
188 l = self._chunklength()
169 l = self._chunklength()
189 if not l:
170 if not l:
190 return {}
171 return {}
191 fname = readexactly(self._stream, l)
172 fname = readexactly(self._stream, l)
192 return {'filename': fname}
173 return {'filename': fname}
193
174
194 def _deltaheader(self, headertuple, prevnode):
175 def _deltaheader(self, headertuple, prevnode):
195 node, p1, p2, cs = headertuple
176 node, p1, p2, cs = headertuple
196 if prevnode is None:
177 if prevnode is None:
197 deltabase = p1
178 deltabase = p1
198 else:
179 else:
199 deltabase = prevnode
180 deltabase = prevnode
200 flags = 0
181 flags = 0
201 return node, p1, p2, deltabase, cs, flags
182 return node, p1, p2, deltabase, cs, flags
202
183
203 def deltachunk(self, prevnode):
184 def deltachunk(self, prevnode):
204 l = self._chunklength()
185 l = self._chunklength()
205 if not l:
186 if not l:
206 return {}
187 return {}
207 headerdata = readexactly(self._stream, self.deltaheadersize)
188 headerdata = readexactly(self._stream, self.deltaheadersize)
208 header = struct.unpack(self.deltaheader, headerdata)
189 header = struct.unpack(self.deltaheader, headerdata)
209 delta = readexactly(self._stream, l - self.deltaheadersize)
190 delta = readexactly(self._stream, l - self.deltaheadersize)
210 node, p1, p2, deltabase, cs, flags = self._deltaheader(header, prevnode)
191 node, p1, p2, deltabase, cs, flags = self._deltaheader(header, prevnode)
211 return {'node': node, 'p1': p1, 'p2': p2, 'cs': cs,
192 return {'node': node, 'p1': p1, 'p2': p2, 'cs': cs,
212 'deltabase': deltabase, 'delta': delta, 'flags': flags}
193 'deltabase': deltabase, 'delta': delta, 'flags': flags}
213
194
214 def getchunks(self):
195 def getchunks(self):
215 """returns all the chunks contained in the bundle
196 """returns all the chunks contained in the bundle
216
197
217 Used when you need to forward the binary stream to a file or another
198 Used when you need to forward the binary stream to a file or another
218 network API. To do so, it parses the changegroup data; otherwise it would
199 network API. To do so, it parses the changegroup data; otherwise it would
219 block on an sshrepo because it doesn't know the end of the stream.
200 block on an sshrepo because it doesn't know the end of the stream.
220 """
201 """
221 # an empty chunkgroup is the end of the changegroup
202 # an empty chunkgroup is the end of the changegroup
222 # a changegroup has at least 2 chunkgroups (changelog and manifest).
203 # a changegroup has at least 2 chunkgroups (changelog and manifest).
223 # after that, changegroup versions 1 and 2 have a series of groups
204 # after that, changegroup versions 1 and 2 have a series of groups
224 # with one group per file. changegroup 3 has a series of directory
205 # with one group per file. changegroup 3 has a series of directory
225 # manifests before the files.
206 # manifests before the files.
226 count = 0
207 count = 0
227 emptycount = 0
208 emptycount = 0
228 while emptycount < self._grouplistcount:
209 while emptycount < self._grouplistcount:
229 empty = True
210 empty = True
230 count += 1
211 count += 1
231 while True:
212 while True:
232 chunk = getchunk(self)
213 chunk = getchunk(self)
233 if not chunk:
214 if not chunk:
234 if empty and count > 2:
215 if empty and count > 2:
235 emptycount += 1
216 emptycount += 1
236 break
217 break
237 empty = False
218 empty = False
238 yield chunkheader(len(chunk))
219 yield chunkheader(len(chunk))
239 pos = 0
220 pos = 0
240 while pos < len(chunk):
221 while pos < len(chunk):
241 next = pos + 2**20
222 next = pos + 2**20
242 yield chunk[pos:next]
223 yield chunk[pos:next]
243 pos = next
224 pos = next
244 yield closechunk()
225 yield closechunk()
245
226
246 def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
227 def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
247 # We know that we'll never have more manifests than we had
228 # We know that we'll never have more manifests than we had
248 # changesets.
229 # changesets.
249 self.callback = prog(_('manifests'), numchanges)
230 self.callback = prog(_('manifests'), numchanges)
250 # no need to check for empty manifest group here:
231 # no need to check for empty manifest group here:
251 # if the result of the merge of 1 and 2 is the same in 3 and 4,
232 # if the result of the merge of 1 and 2 is the same in 3 and 4,
252 # no new manifest will be created and the manifest group will
233 # no new manifest will be created and the manifest group will
253 # be empty during the pull
234 # be empty during the pull
254 self.manifestheader()
235 self.manifestheader()
255 repo.manifestlog._revlog.addgroup(self, revmap, trp)
236 repo.manifestlog._revlog.addgroup(self, revmap, trp)
256 repo.ui.progress(_('manifests'), None)
237 repo.ui.progress(_('manifests'), None)
257 self.callback = None
238 self.callback = None
258
239
259 def apply(self, repo, tr, srctype, url, emptyok=False,
240 def apply(self, repo, tr, srctype, url, emptyok=False,
260 targetphase=phases.draft, expectedtotal=None):
241 targetphase=phases.draft, expectedtotal=None):
261 """Add the changegroup returned by source.read() to this repo.
242 """Add the changegroup returned by source.read() to this repo.
262 srctype is a string like 'push', 'pull', or 'unbundle'. url is
243 srctype is a string like 'push', 'pull', or 'unbundle'. url is
263 the URL of the repo where this changegroup is coming from.
244 the URL of the repo where this changegroup is coming from.
264
245
265 Return an integer summarizing the change to this repo:
246 Return an integer summarizing the change to this repo:
266 - nothing changed or no source: 0
247 - nothing changed or no source: 0
267 - more heads than before: 1+added heads (2..n)
248 - more heads than before: 1+added heads (2..n)
268 - fewer heads than before: -1-removed heads (-2..-n)
249 - fewer heads than before: -1-removed heads (-2..-n)
269 - number of heads stays the same: 1
250 - number of heads stays the same: 1
270 """
251 """
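The integer encoding documented in the docstring above can be decoded mechanically. `describeresult` is a hypothetical helper for illustration only, not part of Mercurial:

```python
def describeresult(ret):
    # Interpret the integer returned by cg1unpacker.apply(), per the
    # docstring above. (Hypothetical helper, not in Mercurial itself.)
    if ret == 0:
        return "nothing changed"
    if ret > 1:
        return "+%d heads" % (ret - 1)
    if ret < -1:
        return "%d heads" % (ret + 1)
    return "head count unchanged"

assert describeresult(3) == "+2 heads"
assert describeresult(-2) == "-1 heads"
```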
271 repo = repo.unfiltered()
252 repo = repo.unfiltered()
272 def csmap(x):
253 def csmap(x):
273 repo.ui.debug("add changeset %s\n" % short(x))
254 repo.ui.debug("add changeset %s\n" % short(x))
274 return len(cl)
255 return len(cl)
275
256
276 def revmap(x):
257 def revmap(x):
277 return cl.rev(x)
258 return cl.rev(x)
278
259
279 changesets = files = revisions = 0
260 changesets = files = revisions = 0
280
261
281 try:
262 try:
282 # The transaction may already carry source information. In this
263 # The transaction may already carry source information. In this
283 # case we use the top level data. We overwrite the argument
264 # case we use the top level data. We overwrite the argument
284 # because we need to use the top level value (if they exist)
265 # because we need to use the top level value (if they exist)
285 # in this function.
266 # in this function.
286 srctype = tr.hookargs.setdefault('source', srctype)
267 srctype = tr.hookargs.setdefault('source', srctype)
287 url = tr.hookargs.setdefault('url', url)
268 url = tr.hookargs.setdefault('url', url)
288 repo.hook('prechangegroup', throw=True, **tr.hookargs)
269 repo.hook('prechangegroup', throw=True, **tr.hookargs)
289
270
290 # write changelog data to temp files so concurrent readers
271 # write changelog data to temp files so concurrent readers
291 # will not see an inconsistent view
272 # will not see an inconsistent view
292 cl = repo.changelog
273 cl = repo.changelog
293 cl.delayupdate(tr)
274 cl.delayupdate(tr)
294 oldheads = set(cl.heads())
275 oldheads = set(cl.heads())
295
276
296 trp = weakref.proxy(tr)
277 trp = weakref.proxy(tr)
297 # pull off the changeset group
278 # pull off the changeset group
298 repo.ui.status(_("adding changesets\n"))
279 repo.ui.status(_("adding changesets\n"))
299 clstart = len(cl)
280 clstart = len(cl)
300 class prog(object):
281 class prog(object):
301 def __init__(self, step, total):
282 def __init__(self, step, total):
302 self._step = step
283 self._step = step
303 self._total = total
284 self._total = total
304 self._count = 1
285 self._count = 1
305 def __call__(self):
286 def __call__(self):
306 repo.ui.progress(self._step, self._count, unit=_('chunks'),
287 repo.ui.progress(self._step, self._count, unit=_('chunks'),
307 total=self._total)
288 total=self._total)
308 self._count += 1
289 self._count += 1
309 self.callback = prog(_('changesets'), expectedtotal)
290 self.callback = prog(_('changesets'), expectedtotal)
310
291
311 efiles = set()
292 efiles = set()
312 def onchangelog(cl, node):
293 def onchangelog(cl, node):
313 efiles.update(cl.readfiles(node))
294 efiles.update(cl.readfiles(node))
314
295
315 self.changelogheader()
296 self.changelogheader()
316 cgnodes = cl.addgroup(self, csmap, trp, addrevisioncb=onchangelog)
297 cgnodes = cl.addgroup(self, csmap, trp, addrevisioncb=onchangelog)
317 efiles = len(efiles)
298 efiles = len(efiles)
318
299
319 if not (cgnodes or emptyok):
300 if not (cgnodes or emptyok):
320 raise error.Abort(_("received changelog group is empty"))
301 raise error.Abort(_("received changelog group is empty"))
321 clend = len(cl)
302 clend = len(cl)
322 changesets = clend - clstart
303 changesets = clend - clstart
323 repo.ui.progress(_('changesets'), None)
304 repo.ui.progress(_('changesets'), None)
324 self.callback = None
305 self.callback = None
325
306
326 # pull off the manifest group
307 # pull off the manifest group
327 repo.ui.status(_("adding manifests\n"))
308 repo.ui.status(_("adding manifests\n"))
328 self._unpackmanifests(repo, revmap, trp, prog, changesets)
309 self._unpackmanifests(repo, revmap, trp, prog, changesets)
329
310
330 needfiles = {}
311 needfiles = {}
331 if repo.ui.configbool('server', 'validate', default=False):
312 if repo.ui.configbool('server', 'validate', default=False):
332 cl = repo.changelog
313 cl = repo.changelog
333 ml = repo.manifestlog
314 ml = repo.manifestlog
334 # validate incoming csets have their manifests
315 # validate incoming csets have their manifests
335 for cset in xrange(clstart, clend):
316 for cset in xrange(clstart, clend):
336 mfnode = cl.changelogrevision(cset).manifest
317 mfnode = cl.changelogrevision(cset).manifest
337 mfest = ml[mfnode].readdelta()
318 mfest = ml[mfnode].readdelta()
338 # store file cgnodes we must see
319 # store file cgnodes we must see
339 for f, n in mfest.iteritems():
320 for f, n in mfest.iteritems():
340 needfiles.setdefault(f, set()).add(n)
321 needfiles.setdefault(f, set()).add(n)
341
322
342 # process the files
323 # process the files
343 repo.ui.status(_("adding file changes\n"))
324 repo.ui.status(_("adding file changes\n"))
344 newrevs, newfiles = _addchangegroupfiles(
325 newrevs, newfiles = _addchangegroupfiles(
345 repo, self, revmap, trp, efiles, needfiles)
326 repo, self, revmap, trp, efiles, needfiles)
346 revisions += newrevs
327 revisions += newrevs
347 files += newfiles
328 files += newfiles
348
329
349 deltaheads = 0
330 deltaheads = 0
350 if oldheads:
331 if oldheads:
351 heads = cl.heads()
332 heads = cl.heads()
352 deltaheads = len(heads) - len(oldheads)
333 deltaheads = len(heads) - len(oldheads)
353 for h in heads:
334 for h in heads:
354 if h not in oldheads and repo[h].closesbranch():
335 if h not in oldheads and repo[h].closesbranch():
355 deltaheads -= 1
336 deltaheads -= 1
356 htext = ""
337 htext = ""
357 if deltaheads:
338 if deltaheads:
358 htext = _(" (%+d heads)") % deltaheads
339 htext = _(" (%+d heads)") % deltaheads
359
340
360 repo.ui.status(_("added %d changesets"
341 repo.ui.status(_("added %d changesets"
361 " with %d changes to %d files%s\n")
342 " with %d changes to %d files%s\n")
362 % (changesets, revisions, files, htext))
343 % (changesets, revisions, files, htext))
363 repo.invalidatevolatilesets()
344 repo.invalidatevolatilesets()
364
345
365 if changesets > 0:
346 if changesets > 0:
366 if 'node' not in tr.hookargs:
347 if 'node' not in tr.hookargs:
367 tr.hookargs['node'] = hex(cl.node(clstart))
348 tr.hookargs['node'] = hex(cl.node(clstart))
368 tr.hookargs['node_last'] = hex(cl.node(clend - 1))
349 tr.hookargs['node_last'] = hex(cl.node(clend - 1))
369 hookargs = dict(tr.hookargs)
350 hookargs = dict(tr.hookargs)
370 else:
351 else:
371 hookargs = dict(tr.hookargs)
352 hookargs = dict(tr.hookargs)
372 hookargs['node'] = hex(cl.node(clstart))
353 hookargs['node'] = hex(cl.node(clstart))
373 hookargs['node_last'] = hex(cl.node(clend - 1))
354 hookargs['node_last'] = hex(cl.node(clend - 1))
374 repo.hook('pretxnchangegroup', throw=True, **hookargs)
355 repo.hook('pretxnchangegroup', throw=True, **hookargs)
375
356
376 added = [cl.node(r) for r in xrange(clstart, clend)]
357 added = [cl.node(r) for r in xrange(clstart, clend)]
377 if srctype in ('push', 'serve'):
358 if srctype in ('push', 'serve'):
378 # Old servers cannot push the boundary themselves.
359 # Old servers cannot push the boundary themselves.
379 # New servers won't push the boundary if changeset already
360 # New servers won't push the boundary if changeset already
380 # exists locally as secret
361 # exists locally as secret
381 #
362 #
382 # We should not use added here but the list of all changes in
363 # We should not use added here but the list of all changes in
383 # the bundle
364 # the bundle
384 if repo.publishing():
365 if repo.publishing():
385 phases.advanceboundary(repo, tr, phases.public, cgnodes)
366 phases.advanceboundary(repo, tr, phases.public, cgnodes)
386 else:
367 else:
387 # Those changesets have been pushed from the
368 # Those changesets have been pushed from the
388 # outside, their phases are going to be pushed
369 # outside, their phases are going to be pushed
389 # alongside. Therefore `targetphase` is
370 # alongside. Therefore `targetphase` is
390 # ignored.
371 # ignored.
391 phases.advanceboundary(repo, tr, phases.draft, cgnodes)
372 phases.advanceboundary(repo, tr, phases.draft, cgnodes)
392 phases.retractboundary(repo, tr, phases.draft, added)
373 phases.retractboundary(repo, tr, phases.draft, added)
393 elif srctype != 'strip':
374 elif srctype != 'strip':
394 # publishing only alters behavior during push
375 # publishing only alters behavior during push
395 #
376 #
396 # strip should not touch boundary at all
377 # strip should not touch boundary at all
397 phases.retractboundary(repo, tr, targetphase, added)
378 phases.retractboundary(repo, tr, targetphase, added)
398
379
399 if changesets > 0:
380 if changesets > 0:
400
381
401 def runhooks():
382 def runhooks():
402 # These hooks run when the lock releases, not when the
383 # These hooks run when the lock releases, not when the
403 # transaction closes. So it's possible for the changelog
384 # transaction closes. So it's possible for the changelog
404 # to have changed since we last saw it.
385 # to have changed since we last saw it.
405 if clstart >= len(repo):
386 if clstart >= len(repo):
406 return
387 return
407
388
408 repo.hook("changegroup", **hookargs)
389 repo.hook("changegroup", **hookargs)
409
390
410 for n in added:
391 for n in added:
411 args = hookargs.copy()
392 args = hookargs.copy()
412 args['node'] = hex(n)
393 args['node'] = hex(n)
413 del args['node_last']
394 del args['node_last']
414 repo.hook("incoming", **args)
395 repo.hook("incoming", **args)
415
396
416 newheads = [h for h in repo.heads()
397 newheads = [h for h in repo.heads()
417 if h not in oldheads]
398 if h not in oldheads]
418 repo.ui.log("incoming",
399 repo.ui.log("incoming",
419 "%s incoming changes - new heads: %s\n",
400 "%s incoming changes - new heads: %s\n",
420 len(added),
401 len(added),
421 ', '.join([hex(c[:6]) for c in newheads]))
402 ', '.join([hex(c[:6]) for c in newheads]))
422
403
423 tr.addpostclose('changegroup-runhooks-%020i' % clstart,
404 tr.addpostclose('changegroup-runhooks-%020i' % clstart,
424 lambda tr: repo._afterlock(runhooks))
405 lambda tr: repo._afterlock(runhooks))
425 finally:
406 finally:
426 repo.ui.flush()
407 repo.ui.flush()
427 # never return 0 here:
408 # never return 0 here:
428 if deltaheads < 0:
409 if deltaheads < 0:
429 ret = deltaheads - 1
410 ret = deltaheads - 1
430 else:
411 else:
431 ret = deltaheads + 1
412 ret = deltaheads + 1
432 return ret, added
413 return ret, added
433
414
434 class cg2unpacker(cg1unpacker):
415 class cg2unpacker(cg1unpacker):
435 """Unpacker for cg2 streams.
416 """Unpacker for cg2 streams.
436
417
437 cg2 streams add support for generaldelta, so the delta header
418 cg2 streams add support for generaldelta, so the delta header
438 format is slightly different. All other features about the data
419 format is slightly different. All other features about the data
439 remain the same.
420 remain the same.
440 """
421 """
441 deltaheader = _CHANGEGROUPV2_DELTA_HEADER
422 deltaheader = _CHANGEGROUPV2_DELTA_HEADER
442 deltaheadersize = struct.calcsize(deltaheader)
423 deltaheadersize = struct.calcsize(deltaheader)
443 version = '02'
424 version = '02'
444
425
445 def _deltaheader(self, headertuple, prevnode):
426 def _deltaheader(self, headertuple, prevnode):
446 node, p1, p2, deltabase, cs = headertuple
427 node, p1, p2, deltabase, cs = headertuple
447 flags = 0
428 flags = 0
448 return node, p1, p2, deltabase, cs, flags
429 return node, p1, p2, deltabase, cs, flags
449
430
450 class cg3unpacker(cg2unpacker):
431 class cg3unpacker(cg2unpacker):
451 """Unpacker for cg3 streams.
432 """Unpacker for cg3 streams.
452
433
453 cg3 streams add support for exchanging treemanifests and revlog
434 cg3 streams add support for exchanging treemanifests and revlog
454 flags. It adds the revlog flags to the delta header and an empty chunk
435 flags. It adds the revlog flags to the delta header and an empty chunk
455 separating manifests and files.
436 separating manifests and files.
456 """
437 """
457 deltaheader = _CHANGEGROUPV3_DELTA_HEADER
438 deltaheader = _CHANGEGROUPV3_DELTA_HEADER
458 deltaheadersize = struct.calcsize(deltaheader)
439 deltaheadersize = struct.calcsize(deltaheader)
459 version = '03'
440 version = '03'
460 _grouplistcount = 2 # One list of manifests and one list of files
441 _grouplistcount = 2 # One list of manifests and one list of files
461
442
462 def _deltaheader(self, headertuple, prevnode):
443 def _deltaheader(self, headertuple, prevnode):
463 node, p1, p2, deltabase, cs, flags = headertuple
444 node, p1, p2, deltabase, cs, flags = headertuple
464 return node, p1, p2, deltabase, cs, flags
445 return node, p1, p2, deltabase, cs, flags
465
446
466 def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
447 def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
467 super(cg3unpacker, self)._unpackmanifests(repo, revmap, trp, prog,
448 super(cg3unpacker, self)._unpackmanifests(repo, revmap, trp, prog,
468 numchanges)
449 numchanges)
469 for chunkdata in iter(self.filelogheader, {}):
450 for chunkdata in iter(self.filelogheader, {}):
470 # If we get here, there are directory manifests in the changegroup
451 # If we get here, there are directory manifests in the changegroup
471 d = chunkdata["filename"]
452 d = chunkdata["filename"]
472 repo.ui.debug("adding %s revisions\n" % d)
453 repo.ui.debug("adding %s revisions\n" % d)
473 dirlog = repo.manifestlog._revlog.dirlog(d)
454 dirlog = repo.manifestlog._revlog.dirlog(d)
474 if not dirlog.addgroup(self, revmap, trp):
455 if not dirlog.addgroup(self, revmap, trp):
475 raise error.Abort(_("received dir revlog group is empty"))
456 raise error.Abort(_("received dir revlog group is empty"))
476
457
477 class headerlessfixup(object):
458 class headerlessfixup(object):
478 def __init__(self, fh, h):
459 def __init__(self, fh, h):
479 self._h = h
460 self._h = h
480 self._fh = fh
461 self._fh = fh
481 def read(self, n):
462 def read(self, n):
482 if self._h:
463 if self._h:
483 d, self._h = self._h[:n], self._h[n:]
464 d, self._h = self._h[:n], self._h[n:]
484 if len(d) < n:
465 if len(d) < n:
485 d += readexactly(self._fh, n - len(d))
466 d += readexactly(self._fh, n - len(d))
486 return d
467 return d
487 return readexactly(self._fh, n)
468 return readexactly(self._fh, n)
488
469
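The `headerlessfixup` wrapper above simply re-prepends bytes that were already consumed from a stream (for example, a sniffed magic string) before handing out the rest. A quick standalone check, using plain `read` instead of `readexactly` for brevity:

```python
import io

class headerlessfixup(object):
    # Simplified copy of the class above: serve the saved header bytes
    # first, then fall through to the underlying stream.
    def __init__(self, fh, h):
        self._h = h
        self._fh = fh
    def read(self, n):
        if self._h:
            d, self._h = self._h[:n], self._h[n:]
            if len(d) < n:
                d += self._fh.read(n - len(d))
            return d
        return self._fh.read(n)

fh = headerlessfixup(io.BytesIO(b"rest-of-stream"), b"HG10")
assert fh.read(4) == b"HG10"   # the re-prepended header comes back first
assert fh.read(4) == b"rest"   # then the underlying stream
```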
489 class cg1packer(object):
470 class cg1packer(object):
490 deltaheader = _CHANGEGROUPV1_DELTA_HEADER
471 deltaheader = _CHANGEGROUPV1_DELTA_HEADER
    version = '01'
    def __init__(self, repo, bundlecaps=None):
        """Given a source repo, construct a bundler.

        bundlecaps is optional and can be used to specify the set of
        capabilities which can be used to build the bundle. While bundlecaps is
        unused in core Mercurial, extensions rely on this feature to communicate
        capabilities to customize the changegroup packer.
        """
        # Set of capabilities we can use to build the bundle.
        if bundlecaps is None:
            bundlecaps = set()
        self._bundlecaps = bundlecaps
        # experimental config: bundle.reorder
        reorder = repo.ui.config('bundle', 'reorder', 'auto')
        if reorder == 'auto':
            reorder = None
        else:
            reorder = util.parsebool(reorder)
        self._repo = repo
        self._reorder = reorder
        self._progress = repo.ui.progress
        if self._repo.ui.verbose and not self._repo.ui.debugflag:
            self._verbosenote = self._repo.ui.note
        else:
            self._verbosenote = lambda s: None

    def close(self):
        return closechunk()

    def fileheader(self, fname):
        return chunkheader(len(fname)) + fname

    # Extracted both for clarity and for overriding in extensions.
    def _sortgroup(self, revlog, nodelist, lookup):
        """Sort nodes for change group and turn them into revnums."""
        # for generaldelta revlogs, we linearize the revs; this will both be
        # much quicker and generate a much smaller bundle
        if (revlog._generaldelta and self._reorder is None) or self._reorder:
            dag = dagutil.revlogdag(revlog)
            return dag.linearize(set(revlog.rev(n) for n in nodelist))
        else:
            return sorted([revlog.rev(n) for n in nodelist])

    def group(self, nodelist, revlog, lookup, units=None):
        """Calculate a delta group, yielding a sequence of changegroup chunks
        (strings).

        Given a list of changeset revs, return a set of deltas and
        metadata corresponding to nodes. The first delta is
        first parent(nodelist[0]) -> nodelist[0], the receiver is
        guaranteed to have this parent as it has all history before
        these changesets. In the case firstparent is nullrev the
        changegroup starts with a full revision.

        If units is not None, progress detail will be generated, units specifies
        the type of revlog that is touched (changelog, manifest, etc.).
        """
        # if we don't have any revisions touched by these changesets, bail
        if len(nodelist) == 0:
            yield self.close()
            return

        revs = self._sortgroup(revlog, nodelist, lookup)

        # add the parent of the first rev
        p = revlog.parentrevs(revs[0])[0]
        revs.insert(0, p)

        # build deltas
        total = len(revs) - 1
        msgbundling = _('bundling')
        for r in xrange(len(revs) - 1):
            if units is not None:
                self._progress(msgbundling, r + 1, unit=units, total=total)
            prev, curr = revs[r], revs[r + 1]
            linknode = lookup(revlog.node(curr))
            for c in self.revchunk(revlog, curr, prev, linknode):
                yield c

        if units is not None:
            self._progress(msgbundling, None)
        yield self.close()

    # filter any nodes that claim to be part of the known set
    def prune(self, revlog, missing, commonrevs):
        rr, rl = revlog.rev, revlog.linkrev
        return [n for n in missing if rl(rr(n)) not in commonrevs]

    def _packmanifests(self, dir, mfnodes, lookuplinknode):
        """Pack flat manifests into a changegroup stream."""
        assert not dir
        for chunk in self.group(mfnodes, self._repo.manifestlog._revlog,
                                lookuplinknode, units=_('manifests')):
            yield chunk

    def _manifestsdone(self):
        return ''

    def generate(self, commonrevs, clnodes, fastpathlinkrev, source):
        '''yield a sequence of changegroup chunks (strings)'''
        repo = self._repo
        cl = repo.changelog

        clrevorder = {}
        mfs = {} # needed manifests
        fnodes = {} # needed file nodes
        changedfiles = set()

        # Callback for the changelog, used to collect changed files and manifest
        # nodes.
        # Returns the linkrev node (identity in the changelog case).
        def lookupcl(x):
            c = cl.read(x)
            clrevorder[x] = len(clrevorder)
            n = c[0]
            # record the first changeset introducing this manifest version
            mfs.setdefault(n, x)
            # Record a complete list of potentially-changed files in
            # this manifest.
            changedfiles.update(c[3])
            return x

        self._verbosenote(_('uncompressed size of bundle content:\n'))
        size = 0
        for chunk in self.group(clnodes, cl, lookupcl, units=_('changesets')):
            size += len(chunk)
            yield chunk
        self._verbosenote(_('%8.i (changelog)\n') % size)

        # We need to make sure that the linkrev in the changegroup refers to
        # the first changeset that introduced the manifest or file revision.
        # The fastpath is usually safer than the slowpath, because the filelogs
        # are walked in revlog order.
        #
        # When taking the slowpath with reorder=None and the manifest revlog
        # uses generaldelta, the manifest may be walked in the "wrong" order.
        # Without 'clrevorder', we would get an incorrect linkrev (see fix in
        # cc0ff93d0c0c).
        #
        # When taking the fastpath, we are only vulnerable to reordering
        # of the changelog itself. The changelog never uses generaldelta, so
        # it is only reordered when reorder=True. To handle this case, we
        # simply take the slowpath, which already has the 'clrevorder' logic.
        # This was also fixed in cc0ff93d0c0c.
        fastpathlinkrev = fastpathlinkrev and not self._reorder
        # Treemanifests don't work correctly with fastpathlinkrev
        # either, because we don't discover which directory nodes to
        # send along with files. This could probably be fixed.
        fastpathlinkrev = fastpathlinkrev and (
            'treemanifest' not in repo.requirements)

        for chunk in self.generatemanifests(commonrevs, clrevorder,
                                            fastpathlinkrev, mfs, fnodes):
            yield chunk
        mfs.clear()
        clrevs = set(cl.rev(x) for x in clnodes)

        if not fastpathlinkrev:
            def linknodes(unused, fname):
                return fnodes.get(fname, {})
        else:
            cln = cl.node
            def linknodes(filerevlog, fname):
                llr = filerevlog.linkrev
                fln = filerevlog.node
                revs = ((r, llr(r)) for r in filerevlog)
                return dict((fln(r), cln(lr)) for r, lr in revs if lr in clrevs)

        for chunk in self.generatefiles(changedfiles, linknodes, commonrevs,
                                        source):
            yield chunk

        yield self.close()

        if clnodes:
            repo.hook('outgoing', node=hex(clnodes[0]), source=source)

    def generatemanifests(self, commonrevs, clrevorder, fastpathlinkrev, mfs,
                          fnodes):
        repo = self._repo
        mfl = repo.manifestlog
        dirlog = mfl._revlog.dirlog
        tmfnodes = {'': mfs}

        # Callback for the manifest, used to collect linkrevs for filelog
        # revisions.
        # Returns the linkrev node (collected in lookupcl).
        def makelookupmflinknode(dir):
            if fastpathlinkrev:
                assert not dir
                return mfs.__getitem__

            def lookupmflinknode(x):
                """Callback for looking up the linknode for manifests.

                Returns the linkrev node for the specified manifest.

                SIDE EFFECT:

                1) fclnodes gets populated with the list of relevant
                   file nodes if we're not using fastpathlinkrev
                2) When treemanifests are in use, collects treemanifest nodes
                   to send

                Note that this means manifests must be completely sent to
                the client before you can trust the list of files and
                treemanifests to send.
                """
                clnode = tmfnodes[dir][x]
                mdata = mfl.get(dir, x).readfast(shallow=True)
                for p, n, fl in mdata.iterentries():
                    if fl == 't': # subdirectory manifest
                        subdir = dir + p + '/'
                        tmfclnodes = tmfnodes.setdefault(subdir, {})
                        tmfclnode = tmfclnodes.setdefault(n, clnode)
                        if clrevorder[clnode] < clrevorder[tmfclnode]:
                            tmfclnodes[n] = clnode
                    else:
                        f = dir + p
                        fclnodes = fnodes.setdefault(f, {})
                        fclnode = fclnodes.setdefault(n, clnode)
                        if clrevorder[clnode] < clrevorder[fclnode]:
                            fclnodes[n] = clnode
                return clnode
            return lookupmflinknode

        size = 0
        while tmfnodes:
            dir = min(tmfnodes)
            nodes = tmfnodes[dir]
            prunednodes = self.prune(dirlog(dir), nodes, commonrevs)
            if not dir or prunednodes:
                for x in self._packmanifests(dir, prunednodes,
                                             makelookupmflinknode(dir)):
                    size += len(x)
                    yield x
            del tmfnodes[dir]
        self._verbosenote(_('%8.i (manifests)\n') % size)
        yield self._manifestsdone()

    # The 'source' parameter is useful for extensions
    def generatefiles(self, changedfiles, linknodes, commonrevs, source):
        repo = self._repo
        progress = self._progress
        msgbundling = _('bundling')

        total = len(changedfiles)
        # for progress output
        msgfiles = _('files')
        for i, fname in enumerate(sorted(changedfiles)):
            filerevlog = repo.file(fname)
            if not filerevlog:
                raise error.Abort(_("empty or missing revlog for %s") % fname)

            linkrevnodes = linknodes(filerevlog, fname)
            # Lookup for filenodes, we collected the linkrev nodes above in the
            # fastpath case and with lookupmf in the slowpath case.
            def lookupfilelog(x):
                return linkrevnodes[x]

            filenodes = self.prune(filerevlog, linkrevnodes, commonrevs)
            if filenodes:
                progress(msgbundling, i + 1, item=fname, unit=msgfiles,
                         total=total)
                h = self.fileheader(fname)
                size = len(h)
                yield h
                for chunk in self.group(filenodes, filerevlog, lookupfilelog):
                    size += len(chunk)
                    yield chunk
                self._verbosenote(_('%8.i %s\n') % (size, fname))
        progress(msgbundling, None)

    def deltaparent(self, revlog, rev, p1, p2, prev):
        return prev

    def revchunk(self, revlog, rev, prev, linknode):
        node = revlog.node(rev)
        p1, p2 = revlog.parentrevs(rev)
        base = self.deltaparent(revlog, rev, p1, p2, prev)

        prefix = ''
        if revlog.iscensored(base) or revlog.iscensored(rev):
            try:
                delta = revlog.revision(node, raw=True)
            except error.CensoredNodeError as e:
                delta = e.tombstone
            if base == nullrev:
                prefix = mdiff.trivialdiffheader(len(delta))
            else:
                baselen = revlog.rawsize(base)
                prefix = mdiff.replacediffheader(baselen, len(delta))
        elif base == nullrev:
            delta = revlog.revision(node, raw=True)
            prefix = mdiff.trivialdiffheader(len(delta))
        else:
            delta = revlog.revdiff(base, rev)
        p1n, p2n = revlog.parents(node)
        basenode = revlog.node(base)
        flags = revlog.flags(rev)
        meta = self.builddeltaheader(node, p1n, p2n, basenode, linknode, flags)
        meta += prefix
        l = len(meta) + len(delta)
        yield chunkheader(l)
        yield meta
        yield delta

    def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
        # do nothing with basenode, it is implicitly the previous one in HG10
        # do nothing with flags, it is implicitly 0 for cg1 and cg2
        return struct.pack(self.deltaheader, node, p1n, p2n, linknode)

class cg2packer(cg1packer):
    version = '02'
    deltaheader = _CHANGEGROUPV2_DELTA_HEADER

    def __init__(self, repo, bundlecaps=None):
        super(cg2packer, self).__init__(repo, bundlecaps)
        if self._reorder is None:
            # Since generaldelta is directly supported by cg2, reordering
            # generally doesn't help, so we disable it by default (treating
            # bundle.reorder=auto just like bundle.reorder=False).
            self._reorder = False

    def deltaparent(self, revlog, rev, p1, p2, prev):
        dp = revlog.deltaparent(rev)
        if dp == nullrev and revlog.storedeltachains:
            # Avoid sending full revisions when delta parent is null. Pick prev
            # in that case. It's tempting to pick p1 in this case, as p1 will
            # be smaller in the common case. However, computing a delta against
            # p1 may require resolving the raw text of p1, which could be
            # expensive. The revlog caches should have prev cached, meaning
            # less CPU for changegroup generation. There is likely room to add
            # a flag and/or config option to control this behavior.
            return prev
        elif dp == nullrev:
            # revlog is configured to use full snapshot for a reason,
            # stick to full snapshot.
            return nullrev
        elif dp not in (p1, p2, prev):
            # Pick prev when we can't be sure remote has the base revision.
            return prev
        else:
            return dp

    def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
        # Do nothing with flags, it is implicitly 0 in cg1 and cg2
        return struct.pack(self.deltaheader, node, p1n, p2n, basenode, linknode)

class cg3packer(cg2packer):
    version = '03'
    deltaheader = _CHANGEGROUPV3_DELTA_HEADER

    def _packmanifests(self, dir, mfnodes, lookuplinknode):
        if dir:
            yield self.fileheader(dir)

        dirlog = self._repo.manifestlog._revlog.dirlog(dir)
        for chunk in self.group(mfnodes, dirlog, lookuplinknode,
                                units=_('manifests')):
            yield chunk

    def _manifestsdone(self):
        return self.close()

    def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
        return struct.pack(
            self.deltaheader, node, p1n, p2n, basenode, linknode, flags)

_packermap = {'01': (cg1packer, cg1unpacker),
              # cg2 adds support for exchanging generaldelta
              '02': (cg2packer, cg2unpacker),
              # cg3 adds support for exchanging revlog flags and treemanifests
              '03': (cg3packer, cg3unpacker),
             }

def allsupportedversions(repo):
    versions = set(_packermap.keys())
    if not (repo.ui.configbool('experimental', 'changegroup3') or
            repo.ui.configbool('experimental', 'treemanifest') or
            'treemanifest' in repo.requirements):
        versions.discard('03')
    return versions

# Changegroup versions that can be applied to the repo
def supportedincomingversions(repo):
    return allsupportedversions(repo)

# Changegroup versions that can be created from the repo
def supportedoutgoingversions(repo):
    versions = allsupportedversions(repo)
    if 'treemanifest' in repo.requirements:
        # Versions 01 and 02 support only flat manifests and it's just too
        # expensive to convert between the flat manifest and tree manifest on
        # the fly. Since tree manifests are hashed differently, all of history
        # would have to be converted. Instead, we simply don't even pretend to
        # support versions 01 and 02.
        versions.discard('01')
        versions.discard('02')
    return versions

def safeversion(repo):
    # Finds the smallest version that it's safe to assume clients of the repo
    # will support. For example, all hg versions that support generaldelta also
    # support changegroup 02.
    versions = supportedoutgoingversions(repo)
    if 'generaldelta' in repo.requirements:
        versions.discard('01')
    assert versions
    return min(versions)

def getbundler(version, repo, bundlecaps=None):
    assert version in supportedoutgoingversions(repo)
    return _packermap[version][0](repo, bundlecaps)

def getunbundler(version, fh, alg, extras=None):
    return _packermap[version][1](fh, alg, extras=extras)

def _changegroupinfo(repo, nodes, source):
    if repo.ui.verbose or source == 'bundle':
        repo.ui.status(_("%d changesets found\n") % len(nodes))
    if repo.ui.debugflag:
        repo.ui.debug("list of changesets:\n")
        for node in nodes:
            repo.ui.debug("%s\n" % hex(node))

def getsubsetraw(repo, outgoing, bundler, source, fastpath=False):
    repo = repo.unfiltered()
    commonrevs = outgoing.common
    csets = outgoing.missing
    heads = outgoing.missingheads
    # We go through the fast path if we get told to, or if all (unfiltered)
    # heads have been requested (since we then know that all linkrevs will
    # be pulled by the client).
    heads.sort()
    fastpathlinkrev = fastpath or (
        repo.filtername is None and heads == sorted(repo.heads()))

    repo.hook('preoutgoing', throw=True, source=source)
    _changegroupinfo(repo, csets, source)
    return bundler.generate(commonrevs, csets, fastpathlinkrev, source)

def getsubset(repo, outgoing, bundler, source, fastpath=False):
    gengroup = getsubsetraw(repo, outgoing, bundler, source, fastpath)
    return getunbundler(bundler.version, util.chunkbuffer(gengroup), None,
                        {'clcount': len(outgoing.missing)})

def changegroupsubset(repo, roots, heads, source, version='01'):
    """Compute a changegroup consisting of all the nodes that are
    descendants of any of the roots and ancestors of any of the heads.
    Return a chunkbuffer object whose read() method will return
    successive changegroup chunks.

    It is fairly complex as determining which filenodes and which
    manifest nodes need to be included for the changeset to be complete
    is non-trivial.

    Another wrinkle is doing the reverse, figuring out which changeset in
    the changegroup a particular filenode or manifestnode belongs to.
    """
    outgoing = discovery.outgoing(repo, missingroots=roots, missingheads=heads)
    bundler = getbundler(version, repo)
    return getsubset(repo, outgoing, bundler, source)

def getlocalchangegroupraw(repo, source, outgoing, bundlecaps=None,
                           version='01'):
    """Like getbundle, but taking a discovery.outgoing as an argument.

    This is only implemented for local repos and reuses potentially
    precomputed sets in outgoing. Returns a raw changegroup generator."""
    if not outgoing.missing:
        return None
    bundler = getbundler(version, repo, bundlecaps)
    return getsubsetraw(repo, outgoing, bundler, source)

def getchangegroup(repo, source, outgoing, bundlecaps=None,
                   version='01'):
    """Like getbundle, but taking a discovery.outgoing as an argument.

    This is only implemented for local repos and reuses potentially
    precomputed sets in outgoing."""
    if not outgoing.missing:
        return None
    bundler = getbundler(version, repo, bundlecaps)
    return getsubset(repo, outgoing, bundler, source)

def getlocalchangegroup(repo, *args, **kwargs):
958 def getlocalchangegroup(repo, *args, **kwargs):
978 repo.ui.deprecwarn('getlocalchangegroup is deprecated, use getchangegroup',
959 repo.ui.deprecwarn('getlocalchangegroup is deprecated, use getchangegroup',
979 '4.3')
960 '4.3')
980 return getchangegroup(repo, *args, **kwargs)
961 return getchangegroup(repo, *args, **kwargs)
981
962
982 def changegroup(repo, basenodes, source):
963 def changegroup(repo, basenodes, source):
983 # to avoid a race we use changegroupsubset() (issue1320)
964 # to avoid a race we use changegroupsubset() (issue1320)
984 return changegroupsubset(repo, basenodes, repo.heads(), source)
965 return changegroupsubset(repo, basenodes, repo.heads(), source)
985
966
986 def _addchangegroupfiles(repo, source, revmap, trp, expectedfiles, needfiles):
967 def _addchangegroupfiles(repo, source, revmap, trp, expectedfiles, needfiles):
987 revisions = 0
968 revisions = 0
988 files = 0
969 files = 0
989 for chunkdata in iter(source.filelogheader, {}):
970 for chunkdata in iter(source.filelogheader, {}):
990 files += 1
971 files += 1
991 f = chunkdata["filename"]
972 f = chunkdata["filename"]
992 repo.ui.debug("adding %s revisions\n" % f)
973 repo.ui.debug("adding %s revisions\n" % f)
993 repo.ui.progress(_('files'), files, unit=_('files'),
974 repo.ui.progress(_('files'), files, unit=_('files'),
994 total=expectedfiles)
975 total=expectedfiles)
995 fl = repo.file(f)
976 fl = repo.file(f)
996 o = len(fl)
977 o = len(fl)
997 try:
978 try:
998 if not fl.addgroup(source, revmap, trp):
979 if not fl.addgroup(source, revmap, trp):
999 raise error.Abort(_("received file revlog group is empty"))
980 raise error.Abort(_("received file revlog group is empty"))
1000 except error.CensoredBaseError as e:
981 except error.CensoredBaseError as e:
1001 raise error.Abort(_("received delta base is censored: %s") % e)
982 raise error.Abort(_("received delta base is censored: %s") % e)
1002 revisions += len(fl) - o
983 revisions += len(fl) - o
1003 if f in needfiles:
984 if f in needfiles:
1004 needs = needfiles[f]
985 needs = needfiles[f]
1005 for new in xrange(o, len(fl)):
986 for new in xrange(o, len(fl)):
1006 n = fl.node(new)
987 n = fl.node(new)
1007 if n in needs:
988 if n in needs:
1008 needs.remove(n)
989 needs.remove(n)
1009 else:
990 else:
1010 raise error.Abort(
991 raise error.Abort(
1011 _("received spurious file revlog entry"))
992 _("received spurious file revlog entry"))
1012 if not needs:
993 if not needs:
1013 del needfiles[f]
994 del needfiles[f]
1014 repo.ui.progress(_('files'), None)
995 repo.ui.progress(_('files'), None)
1015
996
1016 for f, needs in needfiles.iteritems():
997 for f, needs in needfiles.iteritems():
1017 fl = repo.file(f)
998 fl = repo.file(f)
1018 for n in needs:
999 for n in needs:
1019 try:
1000 try:
1020 fl.rev(n)
1001 fl.rev(n)
1021 except error.LookupError:
1002 except error.LookupError:
1022 raise error.Abort(
1003 raise error.Abort(
1023 _('missing file data for %s:%s - run hg verify') %
1004 _('missing file data for %s:%s - run hg verify') %
1024 (f, hex(n)))
1005 (f, hex(n)))
1025
1006
1026 return revisions, files
1007 return revisions, files
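`_addchangegroupfiles` tracks, per filename, the set of nodes that must still arrive: each received node is checked off, an unexpected node aborts, and a fully satisfied file is dropped from the map. That bookkeeping can be sketched in isolation (hypothetical node strings and plain dicts standing in for revlogs and chunked input):

```python
# Sketch of the needfiles accounting in _addchangegroupfiles.
# Hypothetical data; the real code walks revlogs and a chunked stream.

def check_off_needed(needfiles, received):
    """Remove received (filename, node) pairs from needfiles.

    needfiles maps filename -> set of nodes still expected.
    Raises ValueError for a node that was never requested,
    mirroring the "received spurious file revlog entry" abort.
    """
    for f, node in received:
        if f not in needfiles:
            continue  # file not tracked as needed; nothing to verify
        needs = needfiles[f]
        if node in needs:
            needs.remove(node)
        else:
            raise ValueError('received spurious entry for %s' % f)
        if not needs:
            del needfiles[f]  # everything for this file has arrived
    return needfiles

needed = {'foo.c': {'n1', 'n2'}, 'bar.c': {'n3'}}
check_off_needed(needed, [('foo.c', 'n1'), ('bar.c', 'n3')])
# 'bar.c' is fully satisfied and dropped; 'foo.c' still waits on 'n2'
```

The final loop in `_addchangegroupfiles` is the converse check: anything left in the map after the stream ends means data is missing, hence the "run hg verify" abort.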
@@ -1,5402 +1,5402 b''
# commands.py - command processing for mercurial
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import difflib
import errno
import os
import re
import sys

from .i18n import _
from .node import (
    hex,
    nullid,
    nullrev,
    short,
)
from . import (
    archival,
    bookmarks,
    bundle2,
    changegroup,
    cmdutil,
    copies,
    debugcommands as debugcommandsmod,
    destutil,
    dirstateguard,
    discovery,
    encoding,
    error,
    exchange,
    extensions,
    formatter,
    graphmod,
    hbisect,
    help,
    hg,
    lock as lockmod,
    merge as mergemod,
    obsolete,
    patch,
    phases,
    pycompat,
    rcutil,
    registrar,
    revsetlang,
    scmutil,
    server,
    sshserver,
    streamclone,
    tags as tagsmod,
    templatekw,
    ui as uimod,
    util,
)

release = lockmod.release

table = {}
table.update(debugcommandsmod.command._table)

command = registrar.command(table)

# common command options

globalopts = [
    ('R', 'repository', '',
     _('repository root directory or name of overlay bundle file'),
     _('REPO')),
    ('', 'cwd', '',
     _('change working directory'), _('DIR')),
    ('y', 'noninteractive', None,
     _('do not prompt, automatically pick the first choice for all prompts')),
    ('q', 'quiet', None, _('suppress output')),
    ('v', 'verbose', None, _('enable additional output')),
    ('', 'color', '',
     # i18n: 'always', 'auto', 'never', and 'debug' are keywords
     # and should not be translated
     _("when to colorize (boolean, always, auto, never, or debug)"),
     _('TYPE')),
    ('', 'config', [],
     _('set/override config option (use \'section.name=value\')'),
     _('CONFIG')),
    ('', 'debug', None, _('enable debugging output')),
    ('', 'debugger', None, _('start debugger')),
    ('', 'encoding', encoding.encoding, _('set the charset encoding'),
     _('ENCODE')),
    ('', 'encodingmode', encoding.encodingmode,
     _('set the charset encoding mode'), _('MODE')),
    ('', 'traceback', None, _('always print a traceback on exception')),
    ('', 'time', None, _('time how long the command takes')),
    ('', 'profile', None, _('print command execution profile')),
    ('', 'version', None, _('output version information and exit')),
    ('h', 'help', None, _('display help and exit')),
    ('', 'hidden', False, _('consider hidden changesets')),
    ('', 'pager', 'auto',
     _("when to paginate (boolean, always, auto, or never)"), _('TYPE')),
]
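Each `globalopts` entry is a flag tuple: short name, long name, default value, help text, and an optional value placeholder. A small illustrative sketch (plain Python, not Mercurial's actual `fancyopts` parser) of how such a table can be indexed by long option name:

```python
# Hypothetical helper: index an options table like globalopts by long name.
# Entries are (short, long, default, help[, valuename]) tuples.

_ = lambda s: s  # stand-in for the i18n gettext wrapper used above

opts_table = [
    ('R', 'repository', '', _('repository root directory'), _('REPO')),
    ('q', 'quiet', None, _('suppress output')),
    ('', 'hidden', False, _('consider hidden changesets')),
]

def byname(table):
    """Map long option name -> (short, default, help).

    The trailing *rest absorbs the optional value placeholder, so
    both 4-tuples and 5-tuples are accepted.
    """
    return {longname: (short, default, helptext)
            for short, longname, default, helptext, *rest in table}

index = byname(opts_table)
# index['quiet'] -> ('q', None, 'suppress output')
```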

dryrunopts = cmdutil.dryrunopts
remoteopts = cmdutil.remoteopts
walkopts = cmdutil.walkopts
commitopts = cmdutil.commitopts
commitopts2 = cmdutil.commitopts2
formatteropts = cmdutil.formatteropts
templateopts = cmdutil.templateopts
logopts = cmdutil.logopts
diffopts = cmdutil.diffopts
diffwsopts = cmdutil.diffwsopts
diffopts2 = cmdutil.diffopts2
mergetoolopts = cmdutil.mergetoolopts
similarityopts = cmdutil.similarityopts
subrepoopts = cmdutil.subrepoopts
debugrevlogopts = cmdutil.debugrevlogopts

# Commands start here, listed alphabetically

@command('^add',
    walkopts + subrepoopts + dryrunopts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def add(ui, repo, *pats, **opts):
    """add the specified files on the next commit

    Schedule files to be version controlled and added to the
    repository.

    The files will be added to the repository at the next commit. To
    undo an add before that, see :hg:`forget`.

    If no names are given, add all files to the repository (except
    files matching ``.hgignore``).

    .. container:: verbose

       Examples:

         - New (unknown) files are added
           automatically by :hg:`add`::

             $ ls
             foo.c
             $ hg status
             ? foo.c
             $ hg add
             adding foo.c
             $ hg status
             A foo.c

         - Specific files to be added can be specified::

             $ ls
             bar.c foo.c
             $ hg status
             ? bar.c
             ? foo.c
             $ hg add bar.c
             $ hg status
             A bar.c
             ? foo.c

    Returns 0 if all files are successfully added.
    """

    m = scmutil.match(repo[None], pats, pycompat.byteskwargs(opts))
    rejected = cmdutil.add(ui, repo, m, "", False, **opts)
    return rejected and 1 or 0

@command('addremove',
    similarityopts + subrepoopts + walkopts + dryrunopts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def addremove(ui, repo, *pats, **opts):
    """add all new files, delete all missing files

    Add all new files and remove all missing files from the
    repository.

    Unless names are given, new files are ignored if they match any of
    the patterns in ``.hgignore``. As with add, these changes take
    effect at the next commit.

    Use the -s/--similarity option to detect renamed files. This
    option takes a percentage between 0 (disabled) and 100 (files must
    be identical) as its parameter. With a parameter greater than 0,
    this compares every removed file with every added file and records
    those similar enough as renames. Detecting renamed files this way
    can be expensive. After using this option, :hg:`status -C` can be
    used to check which files were identified as moved or renamed. If
    not specified, -s/--similarity defaults to 100 and only renames of
    identical files are detected.

    .. container:: verbose

       Examples:

         - A number of files (bar.c and foo.c) are new,
           while foobar.c has been removed (without using :hg:`remove`)
           from the repository::

             $ ls
             bar.c foo.c
             $ hg status
             ! foobar.c
             ? bar.c
             ? foo.c
             $ hg addremove
             adding bar.c
             adding foo.c
             removing foobar.c
             $ hg status
             A bar.c
             A foo.c
             R foobar.c

         - A file foobar.c was moved to foo.c without using :hg:`rename`.
           Afterwards, it was edited slightly::

             $ ls
             foo.c
             $ hg status
             ! foobar.c
             ? foo.c
             $ hg addremove --similarity 90
             removing foobar.c
             adding foo.c
             recording removal of foobar.c as rename to foo.c (94% similar)
             $ hg status -C
             A foo.c
               foobar.c
             R foobar.c

    Returns 0 if all files are successfully added.
    """
    opts = pycompat.byteskwargs(opts)
    try:
        sim = float(opts.get('similarity') or 100)
    except ValueError:
        raise error.Abort(_('similarity must be a number'))
    if sim < 0 or sim > 100:
        raise error.Abort(_('similarity must be between 0 and 100'))
    matcher = scmutil.match(repo[None], pats, opts)
    return scmutil.addremove(repo, matcher, "", opts, similarity=sim / 100.0)

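`addremove`'s -s/--similarity compares every removed file against every added file and records sufficiently similar pairs as renames, with the percentage converted to a 0..1 ratio before the comparison. A rough, self-contained sketch of that idea using `difflib` ratios (not Mercurial's actual matcher, which scores real file contents inside `scmutil.addremove`):

```python
import difflib

def find_renames(removed, added, threshold=0.9):
    """Pair each removed file with its most similar added file.

    removed/added map hypothetical filenames to their text contents;
    threshold is the 0..1 equivalent of --similarity's 0..100.
    Returns {old_name: new_name} for pairs meeting the threshold.
    """
    renames = {}
    for oldname, oldtext in removed.items():
        best = None
        for newname, newtext in added.items():
            ratio = difflib.SequenceMatcher(None, oldtext, newtext).ratio()
            if ratio >= threshold and (best is None or ratio > best[1]):
                best = (newname, ratio)  # keep the closest match so far
        if best:
            renames[oldname] = best[0]
    return renames

removed = {'foobar.c': 'int main() { return 0; }\n'}
added = {'foo.c': 'int main() { return 0; }\n/* moved */\n'}
# A low threshold pairs them despite the edit:
# find_renames(removed, added, threshold=0.5) -> {'foobar.c': 'foo.c'}
```

Comparing every removed file with every added file is quadratic in the number of candidates, which is why the docstring above warns that rename detection can be expensive.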
@command('^annotate|blame',
    [('r', 'rev', '', _('annotate the specified revision'), _('REV')),
    ('', 'follow', None,
     _('follow copies/renames and list the filename (DEPRECATED)')),
    ('', 'no-follow', None, _("don't follow copies and renames")),
    ('a', 'text', None, _('treat all files as text')),
    ('u', 'user', None, _('list the author (long with -v)')),
    ('f', 'file', None, _('list the filename')),
    ('d', 'date', None, _('list the date (short with -q)')),
    ('n', 'number', None, _('list the revision number (default)')),
    ('c', 'changeset', None, _('list the changeset')),
    ('l', 'line-number', None, _('show line number at the first appearance')),
    ('', 'skip', [], _('revision to not display (EXPERIMENTAL)'), _('REV')),
    ] + diffwsopts + walkopts + formatteropts,
    _('[-r REV] [-f] [-a] [-u] [-d] [-n] [-c] [-l] FILE...'),
    inferrepo=True)
def annotate(ui, repo, *pats, **opts):
    """show changeset information by line for each file

    List changes in files, showing the revision id responsible for
    each line.

    This command is useful for discovering when a change was made and
    by whom.

    If you include --file, --user, or --date, the revision number is
    suppressed unless you also include --number.

    Without the -a/--text option, annotate will avoid processing files
    it detects as binary. With -a, annotate will annotate the file
    anyway, although the results will probably be neither useful
    nor desirable.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    if not pats:
        raise error.Abort(_('at least one filename or pattern is required'))

    if opts.get('follow'):
        # --follow is deprecated and now just an alias for -f/--file
        # to mimic the behavior of Mercurial before version 1.5
        opts['file'] = True

    ctx = scmutil.revsingle(repo, opts.get('rev'))

    rootfm = ui.formatter('annotate', opts)
    if ui.quiet:
        datefunc = util.shortdate
    else:
        datefunc = util.datestr
    if ctx.rev() is None:
        def hexfn(node):
            if node is None:
                return None
            else:
                return rootfm.hexfunc(node)
        if opts.get('changeset'):
            # omit "+" suffix which is appended to node hex
            def formatrev(rev):
                if rev is None:
                    return '%d' % ctx.p1().rev()
                else:
                    return '%d' % rev
        else:
            def formatrev(rev):
                if rev is None:
                    return '%d+' % ctx.p1().rev()
                else:
                    return '%d ' % rev
        def formathex(hex):
            if hex is None:
                return '%s+' % rootfm.hexfunc(ctx.p1().node())
            else:
                return '%s ' % hex
    else:
        hexfn = rootfm.hexfunc
        formatrev = formathex = str

    opmap = [('user', ' ', lambda x: x[0].user(), ui.shortuser),
             ('number', ' ', lambda x: x[0].rev(), formatrev),
             ('changeset', ' ', lambda x: hexfn(x[0].node()), formathex),
             ('date', ' ', lambda x: x[0].date(), util.cachefunc(datefunc)),
             ('file', ' ', lambda x: x[0].path(), str),
             ('line_number', ':', lambda x: x[1], str),
            ]
    fieldnamemap = {'number': 'rev', 'changeset': 'node'}

    if (not opts.get('user') and not opts.get('changeset')
        and not opts.get('date') and not opts.get('file')):
        opts['number'] = True

    linenumber = opts.get('line_number') is not None
    if linenumber and (not opts.get('changeset')) and (not opts.get('number')):
        raise error.Abort(_('at least one of -n/-c is required for -l'))

    ui.pager('annotate')

    if rootfm.isplain():
        def makefunc(get, fmt):
            return lambda x: fmt(get(x))
    else:
        def makefunc(get, fmt):
            return get
    funcmap = [(makefunc(get, fmt), sep) for op, sep, get, fmt in opmap
               if opts.get(op)]
    funcmap[0] = (funcmap[0][0], '') # no separator in front of first column
    fields = ' '.join(fieldnamemap.get(op, op) for op, sep, get, fmt in opmap
                      if opts.get(op))

    def bad(x, y):
        raise error.Abort("%s: %s" % (x, y))

    m = scmutil.match(ctx, pats, opts, badfn=bad)

    follow = not opts.get('no_follow')
    diffopts = patch.difffeatureopts(ui, opts, section='annotate',
                                     whitespace=True)
    skiprevs = opts.get('skip')
    if skiprevs:
        skiprevs = scmutil.revrange(repo, skiprevs)

    for abs in ctx.walk(m):
        fctx = ctx[abs]
        rootfm.startitem()
        rootfm.data(abspath=abs, path=m.rel(abs))
        if not opts.get('text') and fctx.isbinary():
            rootfm.plain(_("%s: binary file\n")
                         % ((pats and m.rel(abs)) or abs))
            continue

        fm = rootfm.nested('lines')
        lines = fctx.annotate(follow=follow, linenumber=linenumber,
                              skiprevs=skiprevs, diffopts=diffopts)
        if not lines:
            fm.end()
            continue
        formats = []
        pieces = []

        for f, sep in funcmap:
            l = [f(n) for n, dummy in lines]
            if fm.isplain():
                sizes = [encoding.colwidth(x) for x in l]
                ml = max(sizes)
                formats.append([sep + ' ' * (ml - w) + '%s' for w in sizes])
            else:
                formats.append(['%s' for x in l])
            pieces.append(l)

        for f, p, l in zip(zip(*formats), zip(*pieces), lines):
            fm.startitem()
            fm.write(fields, "".join(f), *p)
            fm.write('line', ": %s", l[1])

        if not lines[-1][1].endswith('\n'):
            fm.plain('\n')
        fm.end()

    rootfm.end()

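For plain output, `annotate` right-aligns each metadata column to the width of its widest cell (`sep + ' ' * (ml - w) + '%s'`), using `encoding.colwidth` for display width. The padding scheme can be sketched with plain `len()` on ASCII-only data (a simplification; `colwidth` exists precisely because `len()` is wrong for wide characters):

```python
# Sketch of annotate's per-column right-alignment on ASCII data.

def align_columns(columns, sep=' '):
    """columns: list of lists of cell strings, one inner list per column.

    Returns output lines with every column right-padded to the width
    of its longest cell, columns joined by sep.
    """
    padded = []
    for col in columns:
        width = max(len(cell) for cell in col)
        padded.append([cell.rjust(width) for cell in col])
    # zip(*padded) transposes columns back into rows, as the real
    # code does with zip(zip(*formats), zip(*pieces), lines)
    return [sep.join(cells) for cells in zip(*padded)]

revs = ['0', '12', '3']
users = ['mpm', 'alice', 'bob']
for line in align_columns([revs, users]):
    print(line)
# " 0   mpm"
# "12 alice"
# " 3   bob"
```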
@command('archive',
    [('', 'no-decode', None, _('do not pass files through decoders')),
    ('p', 'prefix', '', _('directory prefix for files in archive'),
     _('PREFIX')),
    ('r', 'rev', '', _('revision to distribute'), _('REV')),
    ('t', 'type', '', _('type of distribution to create'), _('TYPE')),
    ] + subrepoopts + walkopts,
    _('[OPTION]... DEST'))
def archive(ui, repo, dest, **opts):
    '''create an unversioned archive of a repository revision

    By default, the revision used is the parent of the working
    directory; use -r/--rev to specify a different revision.

    The archive type is automatically detected based on file
    extension (to override, use -t/--type).

    .. container:: verbose

      Examples:

      - create a zip file containing the 1.0 release::

          hg archive -r 1.0 project-1.0.zip

      - create a tarball excluding .hg files::

          hg archive project.tar.gz -X ".hg*"

    Valid types are:

    :``files``: a directory full of files (default)
    :``tar``: tar archive, uncompressed
    :``tbz2``: tar archive, compressed using bzip2
    :``tgz``: tar archive, compressed using gzip
    :``uzip``: zip archive, uncompressed
    :``zip``: zip archive, compressed using deflate

    The exact name of the destination archive or directory is given
    using a format string; see :hg:`help export` for details.

    Each member added to an archive file has a directory prefix
    prepended. Use -p/--prefix to specify a format string for the
    prefix. The default is the basename of the archive, with suffixes
    removed.

    Returns 0 on success.
    '''

    opts = pycompat.byteskwargs(opts)
    ctx = scmutil.revsingle(repo, opts.get('rev'))
    if not ctx:
        raise error.Abort(_('no working directory: please specify a revision'))
    node = ctx.node()
    dest = cmdutil.makefilename(repo, dest, node)
    if os.path.realpath(dest) == repo.root:
        raise error.Abort(_('repository root cannot be destination'))

    kind = opts.get('type') or archival.guesskind(dest) or 'files'
    prefix = opts.get('prefix')

    if dest == '-':
        if kind == 'files':
            raise error.Abort(_('cannot archive plain files to stdout'))
        dest = cmdutil.makefileobj(repo, dest)
        if not prefix:
            prefix = os.path.basename(repo.root) + '-%h'

    prefix = cmdutil.makefilename(repo, prefix, node)
480 matchfn = scmutil.match(ctx, [], opts)
480 matchfn = scmutil.match(ctx, [], opts)
481 archival.archive(repo, dest, node, kind, not opts.get('no_decode'),
481 archival.archive(repo, dest, node, kind, not opts.get('no_decode'),
482 matchfn, prefix, subrepos=opts.get('subrepos'))
482 matchfn, prefix, subrepos=opts.get('subrepos'))
483
483
@command('backout',
    [('', 'merge', None, _('merge with old dirstate parent after backout')),
    ('', 'commit', None,
     _('commit if no conflicts were encountered (DEPRECATED)')),
    ('', 'no-commit', None, _('do not commit')),
    ('', 'parent', '',
     _('parent to choose when backing out merge (DEPRECATED)'), _('REV')),
    ('r', 'rev', '', _('revision to backout'), _('REV')),
    ('e', 'edit', False, _('invoke editor on commit messages')),
    ] + mergetoolopts + walkopts + commitopts + commitopts2,
    _('[OPTION]... [-r] REV'))
def backout(ui, repo, node=None, rev=None, **opts):
    '''reverse effect of earlier changeset

    Prepare a new changeset with the effect of REV undone in the
    current working directory. If no conflicts were encountered,
    it will be committed immediately.

    If REV is the parent of the working directory, then this new changeset
    is committed automatically (unless --no-commit is specified).

    .. note::

       :hg:`backout` cannot be used to fix either an unwanted or
       incorrect merge.

    .. container:: verbose

      Examples:

      - Reverse the effect of the parent of the working directory.
        This backout will be committed immediately::

          hg backout -r .

      - Reverse the effect of previous bad revision 23::

          hg backout -r 23

      - Reverse the effect of previous bad revision 23 and
        leave changes uncommitted::

          hg backout -r 23 --no-commit
          hg commit -m "Backout revision 23"

      By default, the pending changeset will have one parent,
      maintaining a linear history. With --merge, the pending
      changeset will instead have two parents: the old parent of the
      working directory and a new child of REV that simply undoes REV.

      Before version 1.7, the behavior without --merge was equivalent
      to specifying --merge followed by :hg:`update --clean .` to
      cancel the merge and leave the child of REV as a head to be
      merged separately.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    See :hg:`help revert` for a way to restore files to the state
    of another revision.

    Returns 0 on success, 1 if nothing to backout or there are unresolved
    files.
    '''
    wlock = lock = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        return _dobackout(ui, repo, node, rev, **opts)
    finally:
        release(lock, wlock)

def _dobackout(ui, repo, node=None, rev=None, **opts):
    opts = pycompat.byteskwargs(opts)
    if opts.get('commit') and opts.get('no_commit'):
        raise error.Abort(_("cannot use --commit with --no-commit"))
    if opts.get('merge') and opts.get('no_commit'):
        raise error.Abort(_("cannot use --merge with --no-commit"))

    if rev and node:
        raise error.Abort(_("please specify just one revision"))

    if not rev:
        rev = node

    if not rev:
        raise error.Abort(_("please specify a revision to backout"))

    date = opts.get('date')
    if date:
        opts['date'] = util.parsedate(date)

    cmdutil.checkunfinished(repo)
    cmdutil.bailifchanged(repo)
    node = scmutil.revsingle(repo, rev).node()

    op1, op2 = repo.dirstate.parents()
    if not repo.changelog.isancestor(node, op1):
        raise error.Abort(_('cannot backout change that is not an ancestor'))

    p1, p2 = repo.changelog.parents(node)
    if p1 == nullid:
        raise error.Abort(_('cannot backout a change with no parents'))
    if p2 != nullid:
        if not opts.get('parent'):
            raise error.Abort(_('cannot backout a merge changeset'))
        p = repo.lookup(opts['parent'])
        if p not in (p1, p2):
            raise error.Abort(_('%s is not a parent of %s') %
                              (short(p), short(node)))
        parent = p
    else:
        if opts.get('parent'):
            raise error.Abort(_('cannot use --parent on non-merge changeset'))
        parent = p1

    # the backout should appear on the same branch
    branch = repo.dirstate.branch()
    bheads = repo.branchheads(branch)
    rctx = scmutil.revsingle(repo, hex(parent))
    if not opts.get('merge') and op1 != node:
        dsguard = dirstateguard.dirstateguard(repo, 'backout')
        try:
            ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                         'backout')
            stats = mergemod.update(repo, parent, True, True, node, False)
            repo.setparents(op1, op2)
            dsguard.close()
            hg._showstats(repo, stats)
            if stats[3]:
                repo.ui.status(_("use 'hg resolve' to retry unresolved "
                                 "file merges\n"))
                return 1
        finally:
            ui.setconfig('ui', 'forcemerge', '', '')
            lockmod.release(dsguard)
    else:
        hg.clean(repo, node, show_stats=False)
        repo.dirstate.setbranch(branch)
        cmdutil.revert(ui, repo, rctx, repo.dirstate.parents())

    if opts.get('no_commit'):
        msg = _("changeset %s backed out, "
                "don't forget to commit.\n")
        ui.status(msg % short(node))
        return 0

    def commitfunc(ui, repo, message, match, opts):
        editform = 'backout'
        e = cmdutil.getcommiteditor(editform=editform,
                                    **pycompat.strkwargs(opts))
        if not message:
            # we don't translate commit messages
            message = "Backed out changeset %s" % short(node)
            e = cmdutil.getcommiteditor(edit=True, editform=editform)
        return repo.commit(message, opts.get('user'), opts.get('date'),
                           match, editor=e)
    newnode = cmdutil.commit(ui, repo, commitfunc, [], opts)
    if not newnode:
        ui.status(_("nothing changed\n"))
        return 1
    cmdutil.commitstatus(repo, newnode, branch, bheads)

    def nice(node):
        return '%d:%s' % (repo.changelog.rev(node), short(node))
    ui.status(_('changeset %s backs out changeset %s\n') %
              (nice(repo.changelog.tip()), nice(node)))
    if opts.get('merge') and op1 != node:
        hg.clean(repo, op1, show_stats=False)
        ui.status(_('merging with changeset %s\n')
                  % nice(repo.changelog.tip()))
        try:
            ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                         'backout')
            return hg.merge(repo, hex(repo.changelog.tip()))
        finally:
            ui.setconfig('ui', 'forcemerge', '', '')
    return 0

@command('bisect',
    [('r', 'reset', False, _('reset bisect state')),
    ('g', 'good', False, _('mark changeset good')),
    ('b', 'bad', False, _('mark changeset bad')),
    ('s', 'skip', False, _('skip testing changeset')),
    ('e', 'extend', False, _('extend the bisect range')),
    ('c', 'command', '', _('use command to check changeset state'), _('CMD')),
    ('U', 'noupdate', False, _('do not update to target'))],
    _("[-gbsr] [-U] [-c CMD] [REV]"))
def bisect(ui, repo, rev=None, extra=None, command=None,
           reset=None, good=None, bad=None, skip=None, extend=None,
           noupdate=None):
    """subdivision search of changesets

    This command helps to find changesets which introduce problems. To
    use, mark the earliest changeset you know exhibits the problem as
    bad, then mark the latest changeset which is free from the problem
    as good. Bisect will update your working directory to a revision
    for testing (unless the -U/--noupdate option is specified). Once
    you have performed tests, mark the working directory as good or
    bad, and bisect will either update to another candidate changeset
    or announce that it has found the bad revision.

    As a shortcut, you can also use the revision argument to mark a
    revision as good or bad without checking it out first.

    If you supply a command, it will be used for automatic bisection.
    The environment variable HG_NODE will contain the ID of the
    changeset being tested. The exit status of the command will be
    used to mark revisions as good or bad: status 0 means good, 125
    means to skip the revision, 127 (command not found) will abort the
    bisection, and any other non-zero exit status means the revision
    is bad.

    .. container:: verbose

      Some examples:

      - start a bisection with known bad revision 34, and good revision 12::

          hg bisect --bad 34
          hg bisect --good 12

      - advance the current bisection by marking current revision as good or
        bad::

          hg bisect --good
          hg bisect --bad

      - mark the current revision, or a known revision, to be skipped (e.g. if
        that revision is not usable because of another issue)::

          hg bisect --skip
          hg bisect --skip 23

      - skip all revisions that do not touch directories ``foo`` or ``bar``::

          hg bisect --skip "!( file('path:foo') & file('path:bar') )"

      - forget the current bisection::

          hg bisect --reset

      - use 'make && make tests' to automatically find the first broken
        revision::

          hg bisect --reset
          hg bisect --bad 34
          hg bisect --good 12
          hg bisect --command "make && make tests"

      - see all changesets whose states are already known in the current
        bisection::

          hg log -r "bisect(pruned)"

      - see the changeset currently being bisected (especially useful
        if running with -U/--noupdate)::

          hg log -r "bisect(current)"

      - see all changesets that took part in the current bisection::

          hg log -r "bisect(range)"

      - you can even get a nice graph::

          hg log --graph -r "bisect(range)"

      See :hg:`help revisions.bisect` for more about the `bisect()` predicate.

    Returns 0 on success.
    """
    # backward compatibility
    if rev in "good bad reset init".split():
        ui.warn(_("(use of 'hg bisect <cmd>' is deprecated)\n"))
        cmd, rev, extra = rev, extra, None
        if cmd == "good":
            good = True
        elif cmd == "bad":
            bad = True
        else:
            reset = True
    elif extra:
        raise error.Abort(_('incompatible arguments'))

    incompatibles = {
        '--bad': bad,
        '--command': bool(command),
        '--extend': extend,
        '--good': good,
        '--reset': reset,
        '--skip': skip,
    }

    enabled = [x for x in incompatibles if incompatibles[x]]

    if len(enabled) > 1:
        raise error.Abort(_('%s and %s are incompatible') %
                          tuple(sorted(enabled)[0:2]))

    if reset:
        hbisect.resetstate(repo)
        return

    state = hbisect.load_state(repo)

    # update state
    if good or bad or skip:
        if rev:
            nodes = [repo.lookup(i) for i in scmutil.revrange(repo, [rev])]
        else:
            nodes = [repo.lookup('.')]
        if good:
            state['good'] += nodes
        elif bad:
            state['bad'] += nodes
        elif skip:
            state['skip'] += nodes
        hbisect.save_state(repo, state)
        if not (state['good'] and state['bad']):
            return

    def mayupdate(repo, node, show_stats=True):
        """common used update sequence"""
        if noupdate:
            return
        cmdutil.checkunfinished(repo)
        cmdutil.bailifchanged(repo)
        return hg.clean(repo, node, show_stats=show_stats)

    displayer = cmdutil.show_changeset(ui, repo, {})

    if command:
        changesets = 1
        if noupdate:
            try:
                node = state['current'][0]
            except LookupError:
                raise error.Abort(_('current bisect revision is unknown - '
                                    'start a new bisect to fix'))
        else:
            node, p2 = repo.dirstate.parents()
            if p2 != nullid:
                raise error.Abort(_('current bisect revision is a merge'))
        if rev:
            node = repo[scmutil.revsingle(repo, rev, node)].node()
        try:
            while changesets:
                # update state
                state['current'] = [node]
                hbisect.save_state(repo, state)
                status = ui.system(command, environ={'HG_NODE': hex(node)},
                                   blockedtag='bisect_check')
                if status == 125:
                    transition = "skip"
                elif status == 0:
                    transition = "good"
                # status < 0 means process was killed
                elif status == 127:
                    raise error.Abort(_("failed to execute %s") % command)
                elif status < 0:
                    raise error.Abort(_("%s killed") % command)
                else:
                    transition = "bad"
                state[transition].append(node)
                ctx = repo[node]
                ui.status(_('changeset %d:%s: %s\n') % (ctx, ctx, transition))
                hbisect.checkstate(state)
                # bisect
                nodes, changesets, bgood = hbisect.bisect(repo.changelog, state)
                # update to next check
                node = nodes[0]
                mayupdate(repo, node, show_stats=False)
        finally:
            state['current'] = [node]
            hbisect.save_state(repo, state)
        hbisect.printresult(ui, repo, state, displayer, nodes, bgood)
        return

    hbisect.checkstate(state)

    # actually bisect
    nodes, changesets, good = hbisect.bisect(repo.changelog, state)
    if extend:
        if not changesets:
            extendnode = hbisect.extendrange(repo, state, nodes, good)
            if extendnode is not None:
                ui.write(_("Extending search to changeset %d:%s\n")
                         % (extendnode.rev(), extendnode))
                state['current'] = [extendnode.node()]
                hbisect.save_state(repo, state)
                return mayupdate(repo, extendnode.node())
        raise error.Abort(_("nothing to extend"))

    if changesets == 0:
        hbisect.printresult(ui, repo, state, displayer, nodes, good)
    else:
        assert len(nodes) == 1 # only a single node can be tested next
        node = nodes[0]
        # compute the approximate number of remaining tests
        tests, size = 0, 2
        while size <= changesets:
            tests, size = tests + 1, size * 2
        rev = repo.changelog.rev(node)
        ui.write(_("Testing changeset %d:%s "
                   "(%d changesets remaining, ~%d tests)\n")
                 % (rev, short(node), changesets, tests))
        state['current'] = [node]
        hbisect.save_state(repo, state)
        return mayupdate(repo, node)

@command('bookmarks|bookmark',
    [('f', 'force', False, _('force')),
    ('r', 'rev', '', _('revision for bookmark action'), _('REV')),
    ('d', 'delete', False, _('delete a given bookmark')),
    ('m', 'rename', '', _('rename a given bookmark'), _('OLD')),
    ('i', 'inactive', False, _('mark a bookmark inactive')),
    ] + formatteropts,
    _('hg bookmarks [OPTIONS]... [NAME]...'))
def bookmark(ui, repo, *names, **opts):
    '''create a new bookmark or list existing bookmarks

    Bookmarks are labels on changesets to help track lines of development.
    Bookmarks are unversioned and can be moved, renamed and deleted.
    Deleting or moving a bookmark has no effect on the associated changesets.

    Creating or updating to a bookmark causes it to be marked as 'active'.
    The active bookmark is indicated with a '*'.
    When a commit is made, the active bookmark will advance to the new commit.
    A plain :hg:`update` will also advance an active bookmark, if possible.
    Updating away from a bookmark will cause it to be deactivated.

    Bookmarks can be pushed and pulled between repositories (see
    :hg:`help push` and :hg:`help pull`). If a shared bookmark has
    diverged, a new 'divergent bookmark' of the form 'name@path' will
    be created. Using :hg:`merge` will resolve the divergence.

    A bookmark named '@' has the special property that :hg:`clone` will
    check it out by default if it exists.

    .. container:: verbose

      Examples:

      - create an active bookmark for a new line of development::

          hg book new-feature

      - create an inactive bookmark as a place marker::

          hg book -i reviewed

      - create an inactive bookmark on another changeset::

          hg book -r .^ tested

      - rename bookmark turkey to dinner::

          hg book -m turkey dinner

      - move the '@' bookmark from another branch::

          hg book -f @
    '''
    opts = pycompat.byteskwargs(opts)
    force = opts.get('force')
    rev = opts.get('rev')
    delete = opts.get('delete')
    rename = opts.get('rename')
    inactive = opts.get('inactive')

    if delete and rename:
        raise error.Abort(_("--delete and --rename are incompatible"))
    if delete and rev:
        raise error.Abort(_("--rev is incompatible with --delete"))
    if rename and rev:
        raise error.Abort(_("--rev is incompatible with --rename"))
    if not names and (delete or rev):
        raise error.Abort(_("bookmark name required"))
961 raise error.Abort(_("bookmark name required"))
962
962
963 if delete or rename or names or inactive:
963 if delete or rename or names or inactive:
964 with repo.wlock(), repo.lock(), repo.transaction('bookmark') as tr:
964 with repo.wlock(), repo.lock(), repo.transaction('bookmark') as tr:
965 if delete:
965 if delete:
966 bookmarks.delete(repo, tr, names)
966 bookmarks.delete(repo, tr, names)
967 elif rename:
967 elif rename:
968 if not names:
968 if not names:
969 raise error.Abort(_("new bookmark name required"))
969 raise error.Abort(_("new bookmark name required"))
970 elif len(names) > 1:
970 elif len(names) > 1:
971 raise error.Abort(_("only one new bookmark name allowed"))
971 raise error.Abort(_("only one new bookmark name allowed"))
972 bookmarks.rename(repo, tr, rename, names[0], force, inactive)
972 bookmarks.rename(repo, tr, rename, names[0], force, inactive)
973 elif names:
973 elif names:
974 bookmarks.addbookmarks(repo, tr, names, rev, force, inactive)
974 bookmarks.addbookmarks(repo, tr, names, rev, force, inactive)
975 elif inactive:
975 elif inactive:
976 if len(repo._bookmarks) == 0:
976 if len(repo._bookmarks) == 0:
977 ui.status(_("no bookmarks set\n"))
977 ui.status(_("no bookmarks set\n"))
978 elif not repo._activebookmark:
978 elif not repo._activebookmark:
979 ui.status(_("no active bookmark\n"))
979 ui.status(_("no active bookmark\n"))
980 else:
980 else:
981 bookmarks.deactivate(repo)
981 bookmarks.deactivate(repo)
982 else: # show bookmarks
982 else: # show bookmarks
983 bookmarks.printbookmarks(ui, repo, **opts)
983 bookmarks.printbookmarks(ui, repo, **opts)
984
984
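The option checks at the top of the bookmark command form a small mutual-exclusion table. A standalone sketch of the same validation logic (function name invented for illustration; the real code raises `error.Abort` with translated messages rather than `ValueError`):

```python
def checkbookmarkopts(delete=False, rename=None, rev=None, names=()):
    """Reject incompatible combinations of bookmark options.

    Illustrative stand-in for the checks in the bookmark command above.
    """
    if delete and rename:
        raise ValueError("--delete and --rename are incompatible")
    if delete and rev:
        raise ValueError("--rev is incompatible with --delete")
    if rename and rev:
        raise ValueError("--rev is incompatible with --rename")
    if not names and (delete or rev):
        raise ValueError("bookmark name required")
```

Note that `--rev` alone is fine as long as at least one bookmark name is given; it only conflicts with `--delete` and `--rename`.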
@command('branch',
    [('f', 'force', None,
     _('set branch name even if it shadows an existing branch')),
    ('C', 'clean', None, _('reset branch name to parent branch name'))],
    _('[-fC] [NAME]'))
def branch(ui, repo, label=None, **opts):
    """set or show the current branch name

    .. note::

       Branch names are permanent and global. Use :hg:`bookmark` to create a
       light-weight bookmark instead. See :hg:`help glossary` for more
       information about named branches and bookmarks.

    With no argument, show the current branch name. With one argument,
    set the working directory branch name (the branch will not exist
    in the repository until the next commit). Standard practice
    recommends that primary development take place on the 'default'
    branch.

    Unless -f/--force is specified, branch will not let you set a
    branch name that already exists.

    Use -C/--clean to reset the working directory branch to that of
    the parent of the working directory, negating a previous branch
    change.

    Use the command :hg:`update` to switch to an existing branch. Use
    :hg:`commit --close-branch` to mark this branch head as closed.
    When all heads of a branch are closed, the branch will be
    considered closed.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    if label:
        label = label.strip()

    if not opts.get('clean') and not label:
        ui.write("%s\n" % repo.dirstate.branch())
        return

    with repo.wlock():
        if opts.get('clean'):
            label = repo[None].p1().branch()
            repo.dirstate.setbranch(label)
            ui.status(_('reset working directory to branch %s\n') % label)
        elif label:
            if not opts.get('force') and label in repo.branchmap():
                if label not in [p.branch() for p in repo[None].parents()]:
                    raise error.Abort(_('a branch of the same name already'
                                        ' exists'),
                                      # i18n: "it" refers to an existing branch
                                      hint=_("use 'hg update' to switch to it"))
            scmutil.checknewlabel(repo, label, 'branch')
            repo.dirstate.setbranch(label)
            ui.status(_('marked working directory as branch %s\n') % label)

            # find any open named branches aside from default
            others = [n for n, h, t, c in repo.branchmap().iterbranches()
                      if n != "default" and not c]
            if not others:
                ui.status(_('(branches are permanent and global, '
                            'did you want a bookmark?)\n'))

@command('branches',
    [('a', 'active', False,
      _('show only branches that have unmerged heads (DEPRECATED)')),
    ('c', 'closed', False, _('show normal and closed branches')),
    ] + formatteropts,
    _('[-c]'))
def branches(ui, repo, active=False, closed=False, **opts):
    """list repository named branches

    List the repository's named branches, indicating which ones are
    inactive. If -c/--closed is specified, also list branches which have
    been marked closed (see :hg:`commit --close-branch`).

    Use the command :hg:`update` to switch to an existing branch.

    Returns 0.
    """

    opts = pycompat.byteskwargs(opts)
    ui.pager('branches')
    fm = ui.formatter('branches', opts)
    hexfunc = fm.hexfunc

    allheads = set(repo.heads())
    branches = []
    for tag, heads, tip, isclosed in repo.branchmap().iterbranches():
        isactive = not isclosed and bool(set(heads) & allheads)
        branches.append((tag, repo[tip], isactive, not isclosed))
    branches.sort(key=lambda i: (i[2], i[1].rev(), i[0], i[3]),
                  reverse=True)

    for tag, ctx, isactive, isopen in branches:
        if active and not isactive:
            continue
        if isactive:
            label = 'branches.active'
            notice = ''
        elif not isopen:
            if not closed:
                continue
            label = 'branches.closed'
            notice = _(' (closed)')
        else:
            label = 'branches.inactive'
            notice = _(' (inactive)')
        current = (tag == repo.dirstate.branch())
        if current:
            label = 'branches.current'

        fm.startitem()
        fm.write('branch', '%s', tag, label=label)
        rev = ctx.rev()
        padsize = max(31 - len(str(rev)) - encoding.colwidth(tag), 0)
        fmt = ' ' * padsize + ' %d:%s'
        fm.condwrite(not ui.quiet, 'rev node', fmt, rev, hexfunc(ctx.node()),
                     label='log.changeset changeset.%s' % ctx.phasestr())
        fm.context(ctx=ctx)
        fm.data(active=isactive, closed=not isopen, current=current)
        if not ui.quiet:
            fm.plain(notice)
        fm.plain('\n')
    fm.end()

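The sort in `branches` orders the output by a descending tuple key: active branches first, then newer tips, then name. A self-contained illustration, with `(name, rev, isactive, isopen)` tuples standing in for the real `(tag, ctx, isactive, isopen)` entries so the key needs no `.rev()` call:

```python
# Stand-in data: two active branches and one inactive one.
entries = [
    ("stable", 12, False, True),   # inactive, newest tip
    ("default", 10, True, True),   # active
    ("feature", 7, True, True),    # active
]
# Same key shape as the real code: (isactive, rev, name, isopen), reversed,
# so True (active) sorts before False, and higher revs before lower ones.
entries.sort(key=lambda i: (i[2], i[1], i[0], i[3]), reverse=True)
names = [e[0] for e in entries]
```

With this data, `names` comes out `["default", "feature", "stable"]`: the inactive `stable` sinks below both active branches even though its tip revision is the newest.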
@command('bundle',
    [('f', 'force', None, _('run even when the destination is unrelated')),
    ('r', 'rev', [], _('a changeset intended to be added to the destination'),
     _('REV')),
    ('b', 'branch', [], _('a specific branch you would like to bundle'),
     _('BRANCH')),
    ('', 'base', [],
     _('a base changeset assumed to be available at the destination'),
     _('REV')),
    ('a', 'all', None, _('bundle all changesets in the repository')),
    ('t', 'type', 'bzip2', _('bundle compression type to use'), _('TYPE')),
    ] + remoteopts,
    _('[-f] [-t BUNDLESPEC] [-a] [-r REV]... [--base REV]... FILE [DEST]'))
def bundle(ui, repo, fname, dest=None, **opts):
    """create a bundle file

    Generate a bundle file containing data to be added to a repository.

    To create a bundle containing all changesets, use -a/--all
    (or --base null). Otherwise, hg assumes the destination will have
    all the nodes you specify with --base parameters. If no base is
    given, hg will assume the repository has all the nodes in the
    destination, or default-push/default if no destination is specified.

    You can change bundle format with the -t/--type option. See
    :hg:`help bundlespec` for documentation on this format. By default,
    the most appropriate format is used and compression defaults to
    bzip2.

    The bundle file can then be transferred using conventional means
    and applied to another repository with the unbundle or pull
    command. This is useful when direct push and pull are not
    available or when exporting an entire repository is undesirable.

    Applying bundles preserves all changeset contents including
    permissions, copy/rename information, and revision history.

    Returns 0 on success, 1 if no changes found.
    """
    opts = pycompat.byteskwargs(opts)
    revs = None
    if 'rev' in opts:
        revstrings = opts['rev']
        revs = scmutil.revrange(repo, revstrings)
        if revstrings and not revs:
            raise error.Abort(_('no commits to bundle'))

    bundletype = opts.get('type', 'bzip2').lower()
    try:
        bcompression, cgversion, params = exchange.parsebundlespec(
            repo, bundletype, strict=False)
    except error.UnsupportedBundleSpecification as e:
        raise error.Abort(str(e),
                          hint=_("see 'hg help bundlespec' for supported "
                                 "values for --type"))

    # Packed bundles are a pseudo bundle format for now.
    if cgversion == 's1':
        raise error.Abort(_('packed bundles cannot be produced by "hg bundle"'),
                          hint=_("use 'hg debugcreatestreamclonebundle'"))

    if opts.get('all'):
        if dest:
            raise error.Abort(_("--all is incompatible with specifying "
                                "a destination"))
        if opts.get('base'):
            ui.warn(_("ignoring --base because --all was specified\n"))
        base = ['null']
    else:
        base = scmutil.revrange(repo, opts.get('base'))
    if cgversion not in changegroup.supportedoutgoingversions(repo):
        raise error.Abort(_("repository does not support bundle version %s") %
                          cgversion)

    if base:
        if dest:
            raise error.Abort(_("--base is incompatible with specifying "
                                "a destination"))
        common = [repo.lookup(rev) for rev in base]
        heads = revs and map(repo.lookup, revs) or None
        outgoing = discovery.outgoing(repo, common, heads)
    else:
        dest = ui.expandpath(dest or 'default-push', dest or 'default')
        dest, branches = hg.parseurl(dest, opts.get('branch'))
        other = hg.peer(repo, opts, dest)
        revs, checkout = hg.addbranchrevs(repo, repo, branches, revs)
        heads = revs and map(repo.lookup, revs) or revs
        outgoing = discovery.findcommonoutgoing(repo, other,
                                                onlyheads=heads,
                                                force=opts.get('force'),
                                                portable=True)

    if not outgoing.missing:
        scmutil.nochangesfound(ui, repo, not base and outgoing.excluded)
        return 1

    if cgversion == '01': #bundle1
        if bcompression is None:
            bcompression = 'UN'
        bversion = 'HG10' + bcompression
        bcompression = None
    elif cgversion in ('02', '03'):
        bversion = 'HG20'
    else:
        raise error.ProgrammingError(
            'bundle: unexpected changegroup version %s' % cgversion)

    # TODO compression options should be derived from bundlespec parsing.
    # This is a temporary hack to allow adjusting bundle compression
    # level without a) formalizing the bundlespec changes to declare it
    # b) introducing a command flag.
    compopts = {}
    complevel = ui.configint('experimental', 'bundlecomplevel')
    if complevel is not None:
        compopts['level'] = complevel

    contentopts = {'cg.version': cgversion}
    if repo.ui.configbool('experimental', 'evolution.bundle-obsmarker', False):
        contentopts['obsolescence'] = True
    if repo.ui.configbool('experimental', 'bundle-phases', False):
        contentopts['phases'] = True
    bundle2.writenewbundle(ui, repo, 'bundle', fname, bversion, outgoing,
                           contentopts, compression=bcompression,
                           compopts=compopts)

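The `cgversion` branching above reduces to a small mapping: bundle1 (changegroup '01') bakes the compression into an `HG10xx` header and passes no separate compression along, while bundle2 ('02'/'03') always uses the `HG20` container and hands compression to the writer. A hedged sketch of just that mapping (function name invented; the real code sets `bversion`/`bcompression` inline):

```python
def bundleheader(cgversion, compression=None):
    """Return (bundle header, compression to pass to the bundle writer).

    Illustrative only; mirrors the version dispatch in `hg bundle`.
    """
    if cgversion == '01':            # bundle1: compression lives in the header
        return 'HG10' + (compression or 'UN'), None
    if cgversion in ('02', '03'):    # bundle2: compression handled separately
        return 'HG20', compression
    raise ValueError('unexpected changegroup version %s' % cgversion)
```

So an uncompressed bundle1 gets the header `HG10UN`, a bzip2 bundle1 gets `HG10BZ`, and every bundle2 file starts with `HG20` regardless of compression.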
@command('cat',
    [('o', 'output', '',
     _('print output to file with formatted name'), _('FORMAT')),
    ('r', 'rev', '', _('print the given revision'), _('REV')),
    ('', 'decode', None, _('apply any matching decode filter')),
    ] + walkopts + formatteropts,
    _('[OPTION]... FILE...'),
    inferrepo=True)
def cat(ui, repo, file1, *pats, **opts):
    """output the current or given revision of files

    Print the specified files as they were at the given revision. If
    no revision is given, the parent of the working directory is used.

    Output may be to a file, in which case the name of the file is
    given using a format string. The formatting rules are as follows:

    :``%%``: literal "%" character
    :``%s``: basename of file being printed
    :``%d``: dirname of file being printed, or '.' if in repository root
    :``%p``: root-relative path name of file being printed
    :``%H``: changeset hash (40 hexadecimal digits)
    :``%R``: changeset revision number
    :``%h``: short-form changeset hash (12 hexadecimal digits)
    :``%r``: zero-padded changeset revision number
    :``%b``: basename of the exporting repository

    Returns 0 on success.
    """
    ctx = scmutil.revsingle(repo, opts.get('rev'))
    m = scmutil.match(ctx, (file1,) + pats, opts)
    fntemplate = opts.pop('output', '')
    if cmdutil.isstdiofilename(fntemplate):
        fntemplate = ''

    if fntemplate:
        fm = formatter.nullformatter(ui, 'cat')
    else:
        ui.pager('cat')
        fm = ui.formatter('cat', opts)
    with fm:
        return cmdutil.cat(ui, repo, ctx, m, fm, fntemplate, '', **opts)

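The %-escapes documented in the cat docstring behave like a tiny template language. Mercurial's real expansion lives elsewhere (in `cmdutil`); the sketch below only illustrates the substitution keys, with the zero-padding width for `%r` chosen arbitrarily and `%b` omitted because it needs a repository object:

```python
import os

def expandoutput(tmpl, path, node, rev):
    """Illustrative expansion of the `hg cat -o` %-keys (not Mercurial's code)."""
    keys = {
        '%': '%',                    # literal percent
        's': os.path.basename(path),
        'd': os.path.dirname(path) or '.',
        'p': path,                   # root-relative path
        'H': node,                   # full 40-digit hash
        'h': node[:12],              # short hash
        'R': str(rev),
        'r': '%010d' % rev,          # padding width is an assumption
    }
    out, i = [], 0
    while i < len(tmpl):
        if tmpl[i] == '%' and i + 1 < len(tmpl):
            out.append(keys.get(tmpl[i + 1], ''))
            i += 2
        else:
            out.append(tmpl[i])
            i += 1
    return ''.join(out)
```

For example, `hg cat -o '%s-r%R.txt'` applied to `src/util.py` at revision 42 would write to `util.py-r42.txt`.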
@command('^clone',
    [('U', 'noupdate', None, _('the clone will include an empty working '
                               'directory (only a repository)')),
    ('u', 'updaterev', '', _('revision, tag, or branch to check out'),
     _('REV')),
    ('r', 'rev', [], _('include the specified changeset'), _('REV')),
    ('b', 'branch', [], _('clone only the specified branch'), _('BRANCH')),
    ('', 'pull', None, _('use pull protocol to copy metadata')),
    ('', 'uncompressed', None, _('use uncompressed transfer (fast over LAN)')),
    ] + remoteopts,
    _('[OPTION]... SOURCE [DEST]'),
    norepo=True)
def clone(ui, source, dest=None, **opts):
    """make a copy of an existing repository

    Create a copy of an existing repository in a new directory.

    If no destination directory name is specified, it defaults to the
    basename of the source.

    The location of the source is added to the new repository's
    ``.hg/hgrc`` file, as the default to be used for future pulls.

    Only local paths and ``ssh://`` URLs are supported as
    destinations. For ``ssh://`` destinations, no working directory or
    ``.hg/hgrc`` will be created on the remote side.

    If the source repository has a bookmark called '@' set, that
    revision will be checked out in the new repository by default.

    To check out a particular version, use -u/--update, or
    -U/--noupdate to create a clone with no working directory.

    To pull only a subset of changesets, specify one or more revision
    identifiers with -r/--rev or branches with -b/--branch. The
    resulting clone will contain only the specified changesets and
    their ancestors. These options (or 'clone src#rev dest') imply
    --pull, even for local source repositories.

    .. note::

       Specifying a tag will include the tagged changeset but not the
       changeset containing the tag.

    .. container:: verbose

      For efficiency, hardlinks are used for cloning whenever the
      source and destination are on the same filesystem (note this
      applies only to the repository data, not to the working
      directory). Some filesystems, such as AFS, implement hardlinking
      incorrectly, but do not report errors. In these cases, use the
      --pull option to avoid hardlinking.

      In some cases, you can clone repositories and the working
      directory using full hardlinks with ::

        $ cp -al REPO REPOCLONE

      This is the fastest way to clone, but it is not always safe. The
      operation is not atomic (making sure REPO is not modified during
      the operation is up to you) and you have to make sure your
      editor breaks hardlinks (Emacs and most Linux Kernel tools do
      so). Also, this is not compatible with certain extensions that
      place their metadata under the .hg directory, such as mq.

      Mercurial will update the working directory to the first applicable
      revision from this list:

      a) null if -U or the source repository has no changesets
      b) if -u . and the source repository is local, the first parent of
         the source repository's working directory
      c) the changeset specified with -u (if a branch name, this means the
         latest head of that branch)
      d) the changeset specified with -r
      e) the tipmost head specified with -b
      f) the tipmost head specified with the url#branch source syntax
      g) the revision marked with the '@' bookmark, if present
      h) the tipmost head of the default branch
      i) tip

      When cloning from servers that support it, Mercurial may fetch
      pre-generated data from a server-advertised URL. When this is done,
      hooks operating on incoming changesets and changegroups may fire twice,
      once for the bundle fetched from the URL and another for any additional
1366 data not fetched from this URL. In addition, if an error occurs, the
1366 data not fetched from this URL. In addition, if an error occurs, the
1367 repository may be rolled back to a partial clone. This behavior may
1367 repository may be rolled back to a partial clone. This behavior may
1368 change in future releases. See :hg:`help -e clonebundles` for more.
1368 change in future releases. See :hg:`help -e clonebundles` for more.
1369
1369
1370 Examples:
1370 Examples:
1371
1371
1372 - clone a remote repository to a new directory named hg/::
1372 - clone a remote repository to a new directory named hg/::
1373
1373
1374 hg clone https://www.mercurial-scm.org/repo/hg/
1374 hg clone https://www.mercurial-scm.org/repo/hg/
1375
1375
1376 - create a lightweight local clone::
1376 - create a lightweight local clone::
1377
1377
1378 hg clone project/ project-feature/
1378 hg clone project/ project-feature/
1379
1379
1380 - clone from an absolute path on an ssh server (note double-slash)::
1380 - clone from an absolute path on an ssh server (note double-slash)::
1381
1381
1382 hg clone ssh://user@server//home/projects/alpha/
1382 hg clone ssh://user@server//home/projects/alpha/
1383
1383
1384 - do a high-speed clone over a LAN while checking out a
1384 - do a high-speed clone over a LAN while checking out a
1385 specified version::
1385 specified version::
1386
1386
1387 hg clone --uncompressed http://server/repo -u 1.5
1387 hg clone --uncompressed http://server/repo -u 1.5
1388
1388
1389 - create a repository without changesets after a particular revision::
1389 - create a repository without changesets after a particular revision::
1390
1390
1391 hg clone -r 04e544 experimental/ good/
1391 hg clone -r 04e544 experimental/ good/
1392
1392
1393 - clone (and track) a particular named branch::
1393 - clone (and track) a particular named branch::
1394
1394
1395 hg clone https://www.mercurial-scm.org/repo/hg/#stable
1395 hg clone https://www.mercurial-scm.org/repo/hg/#stable
1396
1396
1397 See :hg:`help urls` for details on specifying URLs.
1397 See :hg:`help urls` for details on specifying URLs.
1398
1398
1399 Returns 0 on success.
1399 Returns 0 on success.
1400 """
1400 """
1401 opts = pycompat.byteskwargs(opts)
1401 opts = pycompat.byteskwargs(opts)
1402 if opts.get('noupdate') and opts.get('updaterev'):
1402 if opts.get('noupdate') and opts.get('updaterev'):
1403 raise error.Abort(_("cannot specify both --noupdate and --updaterev"))
1403 raise error.Abort(_("cannot specify both --noupdate and --updaterev"))
1404
1404
1405 r = hg.clone(ui, opts, source, dest,
1405 r = hg.clone(ui, opts, source, dest,
1406 pull=opts.get('pull'),
1406 pull=opts.get('pull'),
1407 stream=opts.get('uncompressed'),
1407 stream=opts.get('uncompressed'),
1408 rev=opts.get('rev'),
1408 rev=opts.get('rev'),
1409 update=opts.get('updaterev') or not opts.get('noupdate'),
1409 update=opts.get('updaterev') or not opts.get('noupdate'),
1410 branch=opts.get('branch'),
1410 branch=opts.get('branch'),
1411 shareopts=opts.get('shareopts'))
1411 shareopts=opts.get('shareopts'))
1412
1412
1413 return r is None
1413 return r is None

@command('^commit|ci',
    [('A', 'addremove', None,
     _('mark new/missing files as added/removed before committing')),
    ('', 'close-branch', None,
     _('mark a branch head as closed')),
    ('', 'amend', None, _('amend the parent of the working directory')),
    ('s', 'secret', None, _('use the secret phase for committing')),
    ('e', 'edit', None, _('invoke editor on commit messages')),
    ('i', 'interactive', None, _('use interactive mode')),
    ] + walkopts + commitopts + commitopts2 + subrepoopts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def commit(ui, repo, *pats, **opts):
    """commit the specified files or all outstanding changes

    Commit changes to the given files into the repository. Unlike a
    centralized SCM, this operation is a local operation. See
    :hg:`push` for a way to actively distribute your changes.

    If a list of files is omitted, all changes reported by :hg:`status`
    will be committed.

    If you are committing the result of a merge, do not provide any
    filenames or -I/-X filters.

    If no commit message is specified, Mercurial starts your
    configured editor where you can enter a message. In case your
    commit fails, you will find a backup of your message in
    ``.hg/last-message.txt``.

    The --close-branch flag can be used to mark the current branch
    head closed. When all heads of a branch are closed, the branch
    will be considered closed and no longer listed.

    The --amend flag can be used to amend the parent of the
    working directory with a new commit that contains the changes
    in the parent in addition to those currently reported by :hg:`status`,
    if there are any. The old commit is stored in a backup bundle in
    ``.hg/strip-backup`` (see :hg:`help bundle` and :hg:`help unbundle`
    on how to restore it).

    Message, user and date are taken from the amended commit unless
    specified. When a message isn't specified on the command line,
    the editor will open with the message of the amended commit.

    It is not possible to amend public changesets (see :hg:`help phases`)
    or changesets that have children.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    Returns 0 on success, 1 if nothing changed.

    .. container:: verbose

      Examples:

      - commit all files ending in .py::

          hg commit --include "set:**.py"

      - commit all non-binary files::

          hg commit --exclude "set:binary()"

      - amend the current commit and set the date to now::

          hg commit --amend --date now
    """
    wlock = lock = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        return _docommit(ui, repo, *pats, **opts)
    finally:
        release(lock, wlock)

def _docommit(ui, repo, *pats, **opts):
    if opts.get(r'interactive'):
        opts.pop(r'interactive')
        ret = cmdutil.dorecord(ui, repo, commit, None, False,
                               cmdutil.recordfilter, *pats,
                               **opts)
        # ret can be 0 (no changes to record) or the value returned by
        # commit(), 1 if nothing changed or None on success.
        return 1 if ret == 0 else ret

    opts = pycompat.byteskwargs(opts)
    if opts.get('subrepos'):
        if opts.get('amend'):
            raise error.Abort(_('cannot amend with --subrepos'))
        # Let --subrepos on the command line override config setting.
        ui.setconfig('ui', 'commitsubrepos', True, 'commit')

    cmdutil.checkunfinished(repo, commit=True)

    branch = repo[None].branch()
    bheads = repo.branchheads(branch)

    extra = {}
    if opts.get('close_branch'):
        extra['close'] = 1

        if not bheads:
            raise error.Abort(_('can only close branch heads'))
        elif opts.get('amend'):
            if repo[None].parents()[0].p1().branch() != branch and \
               repo[None].parents()[0].p2().branch() != branch:
                raise error.Abort(_('can only close branch heads'))

    if opts.get('amend'):
        if ui.configbool('ui', 'commitsubrepos'):
            raise error.Abort(_('cannot amend with ui.commitsubrepos enabled'))

        old = repo['.']
        if not old.mutable():
            raise error.Abort(_('cannot amend public changesets'))
        if len(repo[None].parents()) > 1:
            raise error.Abort(_('cannot amend while merging'))
        allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
        if not allowunstable and old.children():
            raise error.Abort(_('cannot amend changeset with children'))

        # Currently histedit gets confused if an amend happens while histedit
        # is in progress. Since we have a checkunfinished command, we are
        # temporarily honoring it.
        #
        # Note: eventually this guard will be removed. Please do not expect
        # this behavior to remain.
        if not obsolete.isenabled(repo, obsolete.createmarkersopt):
            cmdutil.checkunfinished(repo)

        # commitfunc is used only for temporary amend commit by cmdutil.amend
        def commitfunc(ui, repo, message, match, opts):
            return repo.commit(message,
                               opts.get('user') or old.user(),
                               opts.get('date') or old.date(),
                               match,
                               extra=extra)

        node = cmdutil.amend(ui, repo, commitfunc, old, extra, pats, opts)
        if node == old.node():
            ui.status(_("nothing changed\n"))
            return 1
    else:
        def commitfunc(ui, repo, message, match, opts):
            overrides = {}
            if opts.get('secret'):
                overrides[('phases', 'new-commit')] = 'secret'

            baseui = repo.baseui
            with baseui.configoverride(overrides, 'commit'):
                with ui.configoverride(overrides, 'commit'):
                    editform = cmdutil.mergeeditform(repo[None],
                                                     'commit.normal')
                    editor = cmdutil.getcommiteditor(
                        editform=editform, **pycompat.strkwargs(opts))
                    return repo.commit(message,
                                       opts.get('user'),
                                       opts.get('date'),
                                       match,
                                       editor=editor,
                                       extra=extra)

        node = cmdutil.commit(ui, repo, commitfunc, pats, opts)

        if not node:
            stat = cmdutil.postcommitstatus(repo, pats, opts)
            if stat[3]:
                ui.status(_("nothing changed (%d missing files, see "
                            "'hg status')\n") % len(stat[3]))
            else:
                ui.status(_("nothing changed\n"))
            return 1

    cmdutil.commitstatus(repo, node, branch, bheads, opts)

@command('config|showconfig|debugconfig',
    [('u', 'untrusted', None, _('show untrusted configuration options')),
     ('e', 'edit', None, _('edit user config')),
     ('l', 'local', None, _('edit repository config')),
     ('g', 'global', None, _('edit global config'))] + formatteropts,
    _('[-u] [NAME]...'),
    optionalrepo=True)
def config(ui, repo, *values, **opts):
    """show combined config settings from all hgrc files

    With no arguments, print names and values of all config items.

    With one argument of the form section.name, print just the value
    of that config item.

    With multiple arguments, print names and values of all config
    items with matching section names.

    With --edit, start an editor on the user-level config file. With
    --global, edit the system-wide config file. With --local, edit the
    repository-level config file.

    With --debug, the source (filename and line number) is printed
    for each config item.

    See :hg:`help config` for more information about config files.

    Returns 0 on success, 1 if NAME does not exist.

    """

    opts = pycompat.byteskwargs(opts)
    if opts.get('edit') or opts.get('local') or opts.get('global'):
        if opts.get('local') and opts.get('global'):
            raise error.Abort(_("can't use --local and --global together"))

        if opts.get('local'):
            if not repo:
                raise error.Abort(_("can't use --local outside a repository"))
            paths = [repo.vfs.join('hgrc')]
        elif opts.get('global'):
            paths = rcutil.systemrcpath()
        else:
            paths = rcutil.userrcpath()

        for f in paths:
            if os.path.exists(f):
                break
        else:
            if opts.get('global'):
                samplehgrc = uimod.samplehgrcs['global']
            elif opts.get('local'):
                samplehgrc = uimod.samplehgrcs['local']
            else:
                samplehgrc = uimod.samplehgrcs['user']

            f = paths[0]
            fp = open(f, "w")
            fp.write(samplehgrc)
            fp.close()

        editor = ui.geteditor()
        ui.system("%s \"%s\"" % (editor, f),
                  onerr=error.Abort, errprefix=_("edit failed"),
                  blockedtag='config_edit')
        return
    ui.pager('config')
    fm = ui.formatter('config', opts)
    for t, f in rcutil.rccomponents():
        if t == 'path':
            ui.debug('read config from: %s\n' % f)
        elif t == 'items':
            for section, name, value, source in f:
                ui.debug('set config by: %s\n' % source)
        else:
            raise error.ProgrammingError('unknown rctype: %s' % t)
    untrusted = bool(opts.get('untrusted'))
    if values:
        sections = [v for v in values if '.' not in v]
        items = [v for v in values if '.' in v]
        if len(items) > 1 or items and sections:
            raise error.Abort(_('only one config item permitted'))
    matched = False
    for section, name, value in ui.walkconfig(untrusted=untrusted):
        source = ui.configsource(section, name, untrusted)
        value = pycompat.bytestr(value)
        if fm.isplain():
            source = source or 'none'
            value = value.replace('\n', '\\n')
        entryname = section + '.' + name
        if values:
            for v in values:
                if v == section:
                    fm.startitem()
                    fm.condwrite(ui.debugflag, 'source', '%s: ', source)
                    fm.write('name value', '%s=%s\n', entryname, value)
                    matched = True
                elif v == entryname:
                    fm.startitem()
                    fm.condwrite(ui.debugflag, 'source', '%s: ', source)
                    fm.write('value', '%s\n', value)
                    fm.data(name=entryname)
                    matched = True
        else:
            fm.startitem()
            fm.condwrite(ui.debugflag, 'source', '%s: ', source)
            fm.write('name value', '%s=%s\n', entryname, value)
            matched = True
    fm.end()
    if matched:
        return 0
    return 1

@command('copy|cp',
    [('A', 'after', None, _('record a copy that has already occurred')),
     ('f', 'force', None, _('forcibly copy over an existing managed file')),
    ] + walkopts + dryrunopts,
    _('[OPTION]... [SOURCE]... DEST'))
def copy(ui, repo, *pats, **opts):
    """mark files as copied for the next commit

    Mark dest as having copies of source files. If dest is a
    directory, copies are put in that directory. If dest is a file,
    the source must be a single file.

    By default, this command copies the contents of files as they
    exist in the working directory. If invoked with -A/--after, the
    operation is recorded, but no copying is performed.

    This command takes effect with the next commit. To undo a copy
    before that, see :hg:`revert`.

    Returns 0 on success, 1 if errors are encountered.
    """
    opts = pycompat.byteskwargs(opts)
    with repo.wlock(False):
        return cmdutil.copy(ui, repo, pats, opts)

@command('debugcommands', [], _('[COMMAND]'), norepo=True)
def debugcommands(ui, cmd='', *args):
    """list all available commands and options"""
    for cmd, vals in sorted(table.iteritems()):
        cmd = cmd.split('|')[0].strip('^')
        opts = ', '.join([i[1] for i in vals[1]])
        ui.write('%s: %s\n' % (cmd, opts))

@command('debugcomplete',
    [('o', 'options', None, _('show the command options'))],
    _('[-o] CMD'),
    norepo=True)
def debugcomplete(ui, cmd='', **opts):
    """returns the completion list associated with the given command"""

    if opts.get('options'):
        options = []
        otables = [globalopts]
        if cmd:
            aliases, entry = cmdutil.findcmd(cmd, table, False)
            otables.append(entry[1])
        for t in otables:
            for o in t:
                if "(DEPRECATED)" in o[3]:
                    continue
                if o[0]:
                    options.append('-%s' % o[0])
                options.append('--%s' % o[1])
        ui.write("%s\n" % "\n".join(options))
        return

    cmdlist, unused_allcmds = cmdutil.findpossible(cmd, table)
    if ui.verbose:
        cmdlist = [' '.join(c[0]) for c in cmdlist.values()]
    ui.write("%s\n" % "\n".join(sorted(cmdlist)))

@command('^diff',
    [('r', 'rev', [], _('revision'), _('REV')),
     ('c', 'change', '', _('change made by revision'), _('REV'))
    ] + diffopts + diffopts2 + walkopts + subrepoopts,
    _('[OPTION]... ([-c REV] | [-r REV1 [-r REV2]]) [FILE]...'),
    inferrepo=True)
def diff(ui, repo, *pats, **opts):
    """diff repository (or selected files)

    Show differences between revisions for the specified files.

    Differences between files are shown using the unified diff format.

    .. note::

       :hg:`diff` may generate unexpected results for merges, as it will
       default to comparing against the working directory's first
       parent changeset if no revisions are specified.

    When two revision arguments are given, then changes are shown
    between those revisions. If only one revision is specified then
    that revision is compared to the working directory, and, when no
    revisions are specified, the working directory files are compared
    to its first parent.

    Alternatively you can specify -c/--change with a revision to see
    the changes in that changeset relative to its first parent.

    Without the -a/--text option, diff will avoid generating diffs of
    files it detects as binary. With -a, diff will generate a diff
    anyway, probably with undesirable results.

    Use the -g/--git option to generate diffs in the git extended diff
    format. For more information, read :hg:`help diffs`.

    .. container:: verbose

      Examples:

      - compare a file in the current working directory to its parent::

          hg diff foo.c

      - compare two historical versions of a directory, with rename info::

          hg diff --git -r 1.0:1.2 lib/

      - get change stats relative to the last change on some date::

          hg diff --stat -r "date('may 2')"

      - diff all newly-added files that contain a keyword::

          hg diff "set:added() and grep(GNU)"

      - compare a revision and its parents::

          hg diff -c 9353         # compare against first parent
          hg diff -r 9353^:9353   # same using revset syntax
1823 hg diff -r 9353^:9353 # same using revset syntax
1824 hg diff -r 9353^2:9353 # compare against the second parent
1824 hg diff -r 9353^2:9353 # compare against the second parent
1825
1825
1826 Returns 0 on success.
1826 Returns 0 on success.
1827 """
1827 """
1828
1828
1829 opts = pycompat.byteskwargs(opts)
1829 opts = pycompat.byteskwargs(opts)
1830 revs = opts.get('rev')
1830 revs = opts.get('rev')
1831 change = opts.get('change')
1831 change = opts.get('change')
1832 stat = opts.get('stat')
1832 stat = opts.get('stat')
1833 reverse = opts.get('reverse')
1833 reverse = opts.get('reverse')
1834
1834
1835 if revs and change:
1835 if revs and change:
1836 msg = _('cannot specify --rev and --change at the same time')
1836 msg = _('cannot specify --rev and --change at the same time')
1837 raise error.Abort(msg)
1837 raise error.Abort(msg)
1838 elif change:
1838 elif change:
1839 node2 = scmutil.revsingle(repo, change, None).node()
1839 node2 = scmutil.revsingle(repo, change, None).node()
1840 node1 = repo[node2].p1().node()
1840 node1 = repo[node2].p1().node()
1841 else:
1841 else:
1842 node1, node2 = scmutil.revpair(repo, revs)
1842 node1, node2 = scmutil.revpair(repo, revs)
1843
1843
1844 if reverse:
1844 if reverse:
1845 node1, node2 = node2, node1
1845 node1, node2 = node2, node1
1846
1846
1847 diffopts = patch.diffallopts(ui, opts)
1847 diffopts = patch.diffallopts(ui, opts)
1848 m = scmutil.match(repo[node2], pats, opts)
1848 m = scmutil.match(repo[node2], pats, opts)
1849 ui.pager('diff')
1849 ui.pager('diff')
1850 cmdutil.diffordiffstat(ui, repo, diffopts, node1, node2, m, stat=stat,
1850 cmdutil.diffordiffstat(ui, repo, diffopts, node1, node2, m, stat=stat,
1851 listsubrepos=opts.get('subrepos'),
1851 listsubrepos=opts.get('subrepos'),
1852 root=opts.get('root'))
1852 root=opts.get('root'))
1853
1853
@command('^export',
    [('o', 'output', '',
      _('print output to file with formatted name'), _('FORMAT')),
    ('', 'switch-parent', None, _('diff against the second parent')),
    ('r', 'rev', [], _('revisions to export'), _('REV')),
    ] + diffopts,
    _('[OPTION]... [-o OUTFILESPEC] [-r] [REV]...'))
def export(ui, repo, *changesets, **opts):
    """dump the header and diffs for one or more changesets

    Print the changeset header and diffs for one or more revisions.
    If no revision is given, the parent of the working directory is used.

    The information shown in the changeset header is: author, date,
    branch name (if non-default), changeset hash, parent(s) and commit
    comment.

    .. note::

       :hg:`export` may generate unexpected diff output for merge
       changesets, as it will compare the merge changeset against its
       first parent only.

    Output may be to a file, in which case the name of the file is
    given using a format string. The formatting rules are as follows:

    :``%%``: literal "%" character
    :``%H``: changeset hash (40 hexadecimal digits)
    :``%N``: number of patches being generated
    :``%R``: changeset revision number
    :``%b``: basename of the exporting repository
    :``%h``: short-form changeset hash (12 hexadecimal digits)
    :``%m``: first line of the commit message (only alphanumeric characters)
    :``%n``: zero-padded sequence number, starting at 1
    :``%r``: zero-padded changeset revision number

    Without the -a/--text option, export will avoid generating diffs
    of files it detects as binary. With -a, export will generate a
    diff anyway, probably with undesirable results.

    Use the -g/--git option to generate diffs in the git extended diff
    format. See :hg:`help diffs` for more information.

    With the --switch-parent option, the diff will be against the
    second parent. It can be useful to review a merge.

    .. container:: verbose

      Examples:

      - use export and import to transplant a bugfix to the current
        branch::

          hg export -r 9353 | hg import -

      - export all the changesets between two revisions to a file with
        rename information::

          hg export --git -r 123:150 > changes.txt

      - split outgoing changes into a series of patches with
        descriptive names::

          hg export -r "outgoing()" -o "%n-%m.patch"

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    changesets += tuple(opts.get('rev', []))
    if not changesets:
        changesets = ['.']
    revs = scmutil.revrange(repo, changesets)
    if not revs:
        raise error.Abort(_("export requires at least one changeset"))
    if len(revs) > 1:
        ui.note(_('exporting patches:\n'))
    else:
        ui.note(_('exporting patch:\n'))
    ui.pager('export')
    cmdutil.export(repo, revs, fntemplate=opts.get('output'),
                   switch_parent=opts.get('switch_parent'),
                   opts=patch.diffallopts(ui, opts))

@command('files',
    [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
     ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
    ] + walkopts + formatteropts + subrepoopts,
    _('[OPTION]... [FILE]...'))
def files(ui, repo, *pats, **opts):
    """list tracked files

    Print files under Mercurial control in the working directory or
    specified revision for given files (excluding removed files).
    Files can be specified as filenames or filesets.

    If no files are given to match, this command prints the names
    of all files under Mercurial control.

    .. container:: verbose

      Examples:

      - list all files under the current directory::

          hg files .

      - shows sizes and flags for current revision::

          hg files -vr .

      - list all files named README::

          hg files -I "**/README"

      - list all binary files::

          hg files "set:binary()"

      - find files containing a regular expression::

          hg files "set:grep('bob')"

      - search tracked file contents with xargs and grep::

          hg files -0 | xargs -0 grep foo

    See :hg:`help patterns` and :hg:`help filesets` for more information
    on specifying file patterns.

    Returns 0 if a match is found, 1 otherwise.

    """

    opts = pycompat.byteskwargs(opts)
    ctx = scmutil.revsingle(repo, opts.get('rev'), None)

    end = '\n'
    if opts.get('print0'):
        end = '\0'
    fmt = '%s' + end

    m = scmutil.match(ctx, pats, opts)
    ui.pager('files')
    with ui.formatter('files', opts) as fm:
        return cmdutil.files(ui, ctx, m, fm, fmt, opts.get('subrepos'))

@command('^forget', walkopts, _('[OPTION]... FILE...'), inferrepo=True)
def forget(ui, repo, *pats, **opts):
    """forget the specified files on the next commit

    Mark the specified files so they will no longer be tracked
    after the next commit.

    This only removes files from the current branch, not from the
    entire project history, and it does not delete them from the
    working directory.

    To delete the file from the working directory, see :hg:`remove`.

    To undo a forget before the next commit, see :hg:`add`.

    .. container:: verbose

      Examples:

      - forget newly-added binary files::

          hg forget "set:added() and binary()"

      - forget files that would be excluded by .hgignore::

          hg forget "set:hgignore()"

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    if not pats:
        raise error.Abort(_('no files specified'))

    m = scmutil.match(repo[None], pats, opts)
    rejected = cmdutil.forget(ui, repo, m, prefix="", explicitonly=False)[0]
    return rejected and 1 or 0

2038 @command(
2038 @command(
2039 'graft',
2039 'graft',
2040 [('r', 'rev', [], _('revisions to graft'), _('REV')),
2040 [('r', 'rev', [], _('revisions to graft'), _('REV')),
2041 ('c', 'continue', False, _('resume interrupted graft')),
2041 ('c', 'continue', False, _('resume interrupted graft')),
2042 ('e', 'edit', False, _('invoke editor on commit messages')),
2042 ('e', 'edit', False, _('invoke editor on commit messages')),
2043 ('', 'log', None, _('append graft info to log message')),
2043 ('', 'log', None, _('append graft info to log message')),
2044 ('f', 'force', False, _('force graft')),
2044 ('f', 'force', False, _('force graft')),
2045 ('D', 'currentdate', False,
2045 ('D', 'currentdate', False,
2046 _('record the current date as commit date')),
2046 _('record the current date as commit date')),
2047 ('U', 'currentuser', False,
2047 ('U', 'currentuser', False,
2048 _('record the current user as committer'), _('DATE'))]
2048 _('record the current user as committer'), _('DATE'))]
2049 + commitopts2 + mergetoolopts + dryrunopts,
2049 + commitopts2 + mergetoolopts + dryrunopts,
2050 _('[OPTION]... [-r REV]... REV...'))
2050 _('[OPTION]... [-r REV]... REV...'))
2051 def graft(ui, repo, *revs, **opts):
2051 def graft(ui, repo, *revs, **opts):
2052 '''copy changes from other branches onto the current branch
2052 '''copy changes from other branches onto the current branch
2053
2053
2054 This command uses Mercurial's merge logic to copy individual
2054 This command uses Mercurial's merge logic to copy individual
2055 changes from other branches without merging branches in the
2055 changes from other branches without merging branches in the
2056 history graph. This is sometimes known as 'backporting' or
2056 history graph. This is sometimes known as 'backporting' or
2057 'cherry-picking'. By default, graft will copy user, date, and
2057 'cherry-picking'. By default, graft will copy user, date, and
2058 description from the source changesets.
2058 description from the source changesets.
2059
2059
2060 Changesets that are ancestors of the current revision, that have
2060 Changesets that are ancestors of the current revision, that have
2061 already been grafted, or that are merges will be skipped.
2061 already been grafted, or that are merges will be skipped.
2062
2062
2063 If --log is specified, log messages will have a comment appended
2063 If --log is specified, log messages will have a comment appended
2064 of the form::
2064 of the form::
2065
2065
2066 (grafted from CHANGESETHASH)
2066 (grafted from CHANGESETHASH)
2067
2067
2068 If --force is specified, revisions will be grafted even if they
2068 If --force is specified, revisions will be grafted even if they
2069 are already ancestors of or have been grafted to the destination.
2069 are already ancestors of or have been grafted to the destination.
2070 This is useful when the revisions have since been backed out.
2070 This is useful when the revisions have since been backed out.
2071
2071
2072 If a graft merge results in conflicts, the graft process is
2072 If a graft merge results in conflicts, the graft process is
2073 interrupted so that the current merge can be manually resolved.
2073 interrupted so that the current merge can be manually resolved.
2074 Once all conflicts are addressed, the graft process can be
2074 Once all conflicts are addressed, the graft process can be
2075 continued with the -c/--continue option.
2075 continued with the -c/--continue option.
2076
2076
2077 .. note::
2077 .. note::
2078
2078
2079 The -c/--continue option does not reapply earlier options, except
2079 The -c/--continue option does not reapply earlier options, except
2080 for --force.
2080 for --force.
2081
2081
2082 .. container:: verbose
2082 .. container:: verbose
2083
2083
2084 Examples:
2084 Examples:
2085
2085
2086 - copy a single change to the stable branch and edit its description::
2086 - copy a single change to the stable branch and edit its description::
2087
2087
2088 hg update stable
2088 hg update stable
2089 hg graft --edit 9393
2089 hg graft --edit 9393
2090
2090
2091 - graft a range of changesets with one exception, updating dates::
2091 - graft a range of changesets with one exception, updating dates::
2092
2092
2093 hg graft -D "2085::2093 and not 2091"
2093 hg graft -D "2085::2093 and not 2091"
2094
2094
2095 - continue a graft after resolving conflicts::
2095 - continue a graft after resolving conflicts::
2096
2096
2097 hg graft -c
2097 hg graft -c
2098
2098
2099 - show the source of a grafted changeset::
2099 - show the source of a grafted changeset::
2100
2100
2101 hg log --debug -r .
2101 hg log --debug -r .
2102
2102
2103 - show revisions sorted by date::
2103 - show revisions sorted by date::
2104
2104
2105 hg log -r "sort(all(), date)"
2105 hg log -r "sort(all(), date)"
2106
2106
2107 See :hg:`help revisions` for more about specifying revisions.
2107 See :hg:`help revisions` for more about specifying revisions.
2108
2108
2109 Returns 0 on successful completion.
2109 Returns 0 on successful completion.
2110 '''
2110 '''
2111 with repo.wlock():
2111 with repo.wlock():
2112 return _dograft(ui, repo, *revs, **opts)
2112 return _dograft(ui, repo, *revs, **opts)
2113
2113
2114 def _dograft(ui, repo, *revs, **opts):
2114 def _dograft(ui, repo, *revs, **opts):
2115 opts = pycompat.byteskwargs(opts)
2115 opts = pycompat.byteskwargs(opts)
2116 if revs and opts.get('rev'):
2116 if revs and opts.get('rev'):
2117 ui.warn(_('warning: inconsistent use of --rev might give unexpected '
2117 ui.warn(_('warning: inconsistent use of --rev might give unexpected '
2118 'revision ordering!\n'))
2118 'revision ordering!\n'))
2119
2119
2120 revs = list(revs)
2120 revs = list(revs)
2121 revs.extend(opts.get('rev'))
2121 revs.extend(opts.get('rev'))
2122
2122
2123 if not opts.get('user') and opts.get('currentuser'):
2123 if not opts.get('user') and opts.get('currentuser'):
2124 opts['user'] = ui.username()
2124 opts['user'] = ui.username()
2125 if not opts.get('date') and opts.get('currentdate'):
2125 if not opts.get('date') and opts.get('currentdate'):
2126 opts['date'] = "%d %d" % util.makedate()
2126 opts['date'] = "%d %d" % util.makedate()
2127
2127
2128 editor = cmdutil.getcommiteditor(editform='graft',
2128 editor = cmdutil.getcommiteditor(editform='graft',
2129 **pycompat.strkwargs(opts))
2129 **pycompat.strkwargs(opts))
2130
2130
2131 cont = False
2131 cont = False
2132 if opts.get('continue'):
2132 if opts.get('continue'):
2133 cont = True
2133 cont = True
2134 if revs:
2134 if revs:
2135 raise error.Abort(_("can't specify --continue and revisions"))
2135 raise error.Abort(_("can't specify --continue and revisions"))
2136 # read in unfinished revisions
2136 # read in unfinished revisions
2137 try:
2137 try:
2138 nodes = repo.vfs.read('graftstate').splitlines()
2138 nodes = repo.vfs.read('graftstate').splitlines()
2139 revs = [repo[node].rev() for node in nodes]
2139 revs = [repo[node].rev() for node in nodes]
2140 except IOError as inst:
2140 except IOError as inst:
2141 if inst.errno != errno.ENOENT:
2141 if inst.errno != errno.ENOENT:
2142 raise
2142 raise
2143 cmdutil.wrongtooltocontinue(repo, _('graft'))
2143 cmdutil.wrongtooltocontinue(repo, _('graft'))
2144 else:
2144 else:
2145 cmdutil.checkunfinished(repo)
2145 cmdutil.checkunfinished(repo)
2146 cmdutil.bailifchanged(repo)
2146 cmdutil.bailifchanged(repo)
2147 if not revs:
2147 if not revs:
2148 raise error.Abort(_('no revisions specified'))
2148 raise error.Abort(_('no revisions specified'))
2149 revs = scmutil.revrange(repo, revs)
2149 revs = scmutil.revrange(repo, revs)
2150
2150
2151 skipped = set()
2151 skipped = set()
2152 # check for merges
2152 # check for merges
2153 for rev in repo.revs('%ld and merge()', revs):
2153 for rev in repo.revs('%ld and merge()', revs):
2154 ui.warn(_('skipping ungraftable merge revision %s\n') % rev)
2154 ui.warn(_('skipping ungraftable merge revision %s\n') % rev)
2155 skipped.add(rev)
2155 skipped.add(rev)
2156 revs = [r for r in revs if r not in skipped]
2156 revs = [r for r in revs if r not in skipped]
2157 if not revs:
2157 if not revs:
2158 return -1
2158 return -1
2159
2159
2160 # Don't check in the --continue case, in effect retaining --force across
2160 # Don't check in the --continue case, in effect retaining --force across
2161 # --continues. That's because without --force, any revisions we decided to
2161 # --continues. That's because without --force, any revisions we decided to
2162 # skip would have been filtered out here, so they wouldn't have made their
2162 # skip would have been filtered out here, so they wouldn't have made their
2163 # way to the graftstate. With --force, any revisions we would have otherwise
2163 # way to the graftstate. With --force, any revisions we would have otherwise
2164 # skipped would not have been filtered out, and if they hadn't been applied
2164 # skipped would not have been filtered out, and if they hadn't been applied
2165 # already, they'd have been in the graftstate.
2165 # already, they'd have been in the graftstate.
2166 if not (cont or opts.get('force')):
2166 if not (cont or opts.get('force')):
2167 # check for ancestors of dest branch
2167 # check for ancestors of dest branch
2168 crev = repo['.'].rev()
2168 crev = repo['.'].rev()
2169 ancestors = repo.changelog.ancestors([crev], inclusive=True)
2169 ancestors = repo.changelog.ancestors([crev], inclusive=True)
2170 # XXX make this lazy in the future
2170 # XXX make this lazy in the future
2171 # don't mutate while iterating, create a copy
2171 # don't mutate while iterating, create a copy
2172 for rev in list(revs):
2172 for rev in list(revs):
2173 if rev in ancestors:
2173 if rev in ancestors:
2174 ui.warn(_('skipping ancestor revision %d:%s\n') %
2174 ui.warn(_('skipping ancestor revision %d:%s\n') %
2175 (rev, repo[rev]))
2175 (rev, repo[rev]))
2176 # XXX remove on list is slow
2176 # XXX remove on list is slow
2177 revs.remove(rev)
2177 revs.remove(rev)
2178 if not revs:
2178 if not revs:
2179 return -1
2179 return -1
2180
2180
2181 # analyze revs for earlier grafts
2181 # analyze revs for earlier grafts
2182 ids = {}
2182 ids = {}
2183 for ctx in repo.set("%ld", revs):
2183 for ctx in repo.set("%ld", revs):
2184 ids[ctx.hex()] = ctx.rev()
2184 ids[ctx.hex()] = ctx.rev()
2185 n = ctx.extra().get('source')
2185 n = ctx.extra().get('source')
2186 if n:
2186 if n:
2187 ids[n] = ctx.rev()
2187 ids[n] = ctx.rev()
2188
2188
2189 # check ancestors for earlier grafts
2189 # check ancestors for earlier grafts
2190 ui.debug('scanning for duplicate grafts\n')
2190 ui.debug('scanning for duplicate grafts\n')
2191
2191
2192 # The only changesets we can be sure doesn't contain grafts of any
2192 # The only changesets we can be sure doesn't contain grafts of any
2193 # revs, are the ones that are common ancestors of *all* revs:
2193 # revs, are the ones that are common ancestors of *all* revs:
2194 for rev in repo.revs('only(%d,ancestor(%ld))', crev, revs):
2194 for rev in repo.revs('only(%d,ancestor(%ld))', crev, revs):
2195 ctx = repo[rev]
2195 ctx = repo[rev]
2196 n = ctx.extra().get('source')
2196 n = ctx.extra().get('source')
2197 if n in ids:
2197 if n in ids:
2198 try:
2198 try:
2199 r = repo[n].rev()
2199 r = repo[n].rev()
2200 except error.RepoLookupError:
2200 except error.RepoLookupError:
2201 r = None
2201 r = None
2202 if r in revs:
2202 if r in revs:
2203 ui.warn(_('skipping revision %d:%s '
2203 ui.warn(_('skipping revision %d:%s '
2204 '(already grafted to %d:%s)\n')
2204 '(already grafted to %d:%s)\n')
2205 % (r, repo[r], rev, ctx))
2205 % (r, repo[r], rev, ctx))
2206 revs.remove(r)
2206 revs.remove(r)
2207 elif ids[n] in revs:
2207 elif ids[n] in revs:
2208 if r is None:
2208 if r is None:
2209 ui.warn(_('skipping already grafted revision %d:%s '
2209 ui.warn(_('skipping already grafted revision %d:%s '
2210 '(%d:%s also has unknown origin %s)\n')
2210 '(%d:%s also has unknown origin %s)\n')
2211 % (ids[n], repo[ids[n]], rev, ctx, n[:12]))
2211 % (ids[n], repo[ids[n]], rev, ctx, n[:12]))
2212 else:
2212 else:
2213 ui.warn(_('skipping already grafted revision %d:%s '
2213 ui.warn(_('skipping already grafted revision %d:%s '
2214 '(%d:%s also has origin %d:%s)\n')
2214 '(%d:%s also has origin %d:%s)\n')
2215 % (ids[n], repo[ids[n]], rev, ctx, r, n[:12]))
2215 % (ids[n], repo[ids[n]], rev, ctx, r, n[:12]))
2216 revs.remove(ids[n])
2216 revs.remove(ids[n])
2217 elif ctx.hex() in ids:
2217 elif ctx.hex() in ids:
2218 r = ids[ctx.hex()]
2218 r = ids[ctx.hex()]
2219 ui.warn(_('skipping already grafted revision %d:%s '
2219 ui.warn(_('skipping already grafted revision %d:%s '
2220 '(was grafted from %d:%s)\n') %
2220 '(was grafted from %d:%s)\n') %
2221 (r, repo[r], rev, ctx))
2221 (r, repo[r], rev, ctx))
2222 revs.remove(r)
2222 revs.remove(r)
2223 if not revs:
2223 if not revs:
2224 return -1
2224 return -1
2225
2225
2226 for pos, ctx in enumerate(repo.set("%ld", revs)):
2226 for pos, ctx in enumerate(repo.set("%ld", revs)):
2227 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
2227 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
2228 ctx.description().split('\n', 1)[0])
2228 ctx.description().split('\n', 1)[0])
2229 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
2229 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
2230 if names:
2230 if names:
2231 desc += ' (%s)' % ' '.join(names)
2231 desc += ' (%s)' % ' '.join(names)
2232 ui.status(_('grafting %s\n') % desc)
2232 ui.status(_('grafting %s\n') % desc)
2233 if opts.get('dry_run'):
2233 if opts.get('dry_run'):
2234 continue
2234 continue
2235
2235
2236 source = ctx.extra().get('source')
2236 source = ctx.extra().get('source')
2237 extra = {}
2237 extra = {}
2238 if source:
2238 if source:
2239 extra['source'] = source
2239 extra['source'] = source
2240 extra['intermediate-source'] = ctx.hex()
2240 extra['intermediate-source'] = ctx.hex()
2241 else:
2241 else:
2242 extra['source'] = ctx.hex()
2242 extra['source'] = ctx.hex()
2243 user = ctx.user()
2243 user = ctx.user()
2244 if opts.get('user'):
2244 if opts.get('user'):
2245 user = opts['user']
2245 user = opts['user']
2246 date = ctx.date()
        date = ctx.date()
        if opts.get('date'):
            date = opts['date']
        message = ctx.description()
        if opts.get('log'):
            message += '\n(grafted from %s)' % ctx.hex()

        # we don't merge the first commit when continuing
        if not cont:
            # perform the graft merge with p1(rev) as 'ancestor'
            try:
                # ui.forcemerge is an internal variable, do not document
                repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                                  'graft')
                stats = mergemod.graft(repo, ctx, ctx.p1(),
                                       ['local', 'graft'])
            finally:
                repo.ui.setconfig('ui', 'forcemerge', '', 'graft')
            # report any conflicts
            if stats and stats[3] > 0:
                # write out state for --continue
                nodelines = [repo[rev].hex() + "\n" for rev in revs[pos:]]
                repo.vfs.write('graftstate', ''.join(nodelines))
                extra = ''
                if opts.get('user'):
                    extra += ' --user %s' % util.shellquote(opts['user'])
                if opts.get('date'):
                    extra += ' --date %s' % util.shellquote(opts['date'])
                if opts.get('log'):
                    extra += ' --log'
                hint = _("use 'hg resolve' and 'hg graft --continue%s'") % extra
                raise error.Abort(
                    _("unresolved conflicts, can't continue"),
                    hint=hint)
        else:
            cont = False

        # commit
        node = repo.commit(text=message, user=user,
                           date=date, extra=extra, editor=editor)
        if node is None:
            ui.warn(
                _('note: graft of %d:%s created no changes to commit\n') %
                (ctx.rev(), ctx))

    # remove state when we complete successfully
    if not opts.get('dry_run'):
        repo.vfs.unlinkpath('graftstate', ignoremissing=True)

    return 0

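When graft hits merge conflicts, the code above rebuilds the exact flags the user must repeat on `hg graft --continue`. A minimal standalone sketch of that hint construction, using the stdlib's `shlex.quote` as a stand-in for Mercurial's `util.shellquote` (an assumption; the real helper handles bytes and platform-specific quoting differently):

```python
import shlex

def continue_hint(opts):
    # Rebuild the flags that must be repeated on 'hg graft --continue',
    # mirroring the hint construction in the conflict branch above.
    # shlex.quote stands in for Mercurial's util.shellquote here.
    extra = ''
    if opts.get('user'):
        extra += ' --user %s' % shlex.quote(opts['user'])
    if opts.get('date'):
        extra += ' --date %s' % shlex.quote(opts['date'])
    if opts.get('log'):
        extra += ' --log'
    return "use 'hg resolve' and 'hg graft --continue%s'" % extra
```

Values containing shell metacharacters come back quoted, so the suggested command line stays safe to paste.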
@command('grep',
    [('0', 'print0', None, _('end fields with NUL')),
    ('', 'all', None, _('print all revisions that match')),
    ('a', 'text', None, _('treat all files as text')),
    ('f', 'follow', None,
     _('follow changeset history,'
       ' or file history across copies and renames')),
    ('i', 'ignore-case', None, _('ignore case when matching')),
    ('l', 'files-with-matches', None,
     _('print only filenames and revisions that match')),
    ('n', 'line-number', None, _('print matching line numbers')),
    ('r', 'rev', [],
     _('only search files changed within revision range'), _('REV')),
    ('u', 'user', None, _('list the author (long with -v)')),
    ('d', 'date', None, _('list the date (short with -q)')),
    ] + formatteropts + walkopts,
    _('[OPTION]... PATTERN [FILE]...'),
    inferrepo=True)
def grep(ui, repo, pattern, *pats, **opts):
    """search revision history for a pattern in specified files

    Search revision history for a regular expression in the specified
    files or the entire project.

    By default, grep prints the most recent revision number for each
    file in which it finds a match. To get it to print every revision
    that contains a change in match status ("-" for a match that becomes
    a non-match, or "+" for a non-match that becomes a match), use the
    --all flag.

    PATTERN can be any Python (roughly Perl-compatible) regular
    expression.

    If no FILEs are specified (and -f/--follow isn't set), all files in
    the repository are searched, including those that don't exist in the
    current branch or have been deleted in a prior changeset.

    Returns 0 if a match is found, 1 otherwise.
    """
    opts = pycompat.byteskwargs(opts)
    reflags = re.M
    if opts.get('ignore_case'):
        reflags |= re.I
    try:
        regexp = util.re.compile(pattern, reflags)
    except re.error as inst:
        ui.warn(_("grep: invalid match pattern: %s\n") % inst)
        return 1
    sep, eol = ':', '\n'
    if opts.get('print0'):
        sep = eol = '\0'

    getfile = util.lrucachefunc(repo.file)

    def matchlines(body):
        begin = 0
        linenum = 0
        while begin < len(body):
            match = regexp.search(body, begin)
            if not match:
                break
            mstart, mend = match.span()
            linenum += body.count('\n', begin, mstart) + 1
            lstart = body.rfind('\n', begin, mstart) + 1 or begin
            begin = body.find('\n', mend) + 1 or len(body) + 1
            lend = begin - 1
            yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]

    class linestate(object):
        def __init__(self, line, linenum, colstart, colend):
            self.line = line
            self.linenum = linenum
            self.colstart = colstart
            self.colend = colend

        def __hash__(self):
            return hash((self.linenum, self.line))

        def __eq__(self, other):
            return self.line == other.line

        def findpos(self):
            """Iterate all (start, end) indices of matches"""
            yield self.colstart, self.colend
            p = self.colend
            while p < len(self.line):
                m = regexp.search(self.line, p)
                if not m:
                    break
                yield m.span()
                p = m.end()

    matches = {}
    copies = {}
    def grepbody(fn, rev, body):
        matches[rev].setdefault(fn, [])
        m = matches[rev][fn]
        for lnum, cstart, cend, line in matchlines(body):
            s = linestate(line, lnum, cstart, cend)
            m.append(s)

    def difflinestates(a, b):
        sm = difflib.SequenceMatcher(None, a, b)
        for tag, alo, ahi, blo, bhi in sm.get_opcodes():
            if tag == 'insert':
                for i in xrange(blo, bhi):
                    yield ('+', b[i])
            elif tag == 'delete':
                for i in xrange(alo, ahi):
                    yield ('-', a[i])
            elif tag == 'replace':
                for i in xrange(alo, ahi):
                    yield ('-', a[i])
                for i in xrange(blo, bhi):
                    yield ('+', b[i])

    def display(fm, fn, ctx, pstates, states):
        rev = ctx.rev()
        if fm.isplain():
            formatuser = ui.shortuser
        else:
            formatuser = str
        if ui.quiet:
            datefmt = '%Y-%m-%d'
        else:
            datefmt = '%a %b %d %H:%M:%S %Y %1%2'
        found = False
        @util.cachefunc
        def binary():
            flog = getfile(fn)
            return util.binary(flog.read(ctx.filenode(fn)))

        fieldnamemap = {'filename': 'file', 'linenumber': 'line_number'}
        if opts.get('all'):
            iter = difflinestates(pstates, states)
        else:
            iter = [('', l) for l in states]
        for change, l in iter:
            fm.startitem()
            fm.data(node=fm.hexfunc(ctx.node()))
            cols = [
                ('filename', fn, True),
                ('rev', rev, True),
                ('linenumber', l.linenum, opts.get('line_number')),
            ]
            if opts.get('all'):
                cols.append(('change', change, True))
            cols.extend([
                ('user', formatuser(ctx.user()), opts.get('user')),
                ('date', fm.formatdate(ctx.date(), datefmt), opts.get('date')),
            ])
            lastcol = next(name for name, data, cond in reversed(cols) if cond)
            for name, data, cond in cols:
                field = fieldnamemap.get(name, name)
                fm.condwrite(cond, field, '%s', data, label='grep.%s' % name)
                if cond and name != lastcol:
                    fm.plain(sep, label='grep.sep')
            if not opts.get('files_with_matches'):
                fm.plain(sep, label='grep.sep')
                if not opts.get('text') and binary():
                    fm.plain(_(" Binary file matches"))
                else:
                    displaymatches(fm.nested('texts'), l)
            fm.plain(eol)
            found = True
            if opts.get('files_with_matches'):
                break
        return found

    def displaymatches(fm, l):
        p = 0
        for s, e in l.findpos():
            if p < s:
                fm.startitem()
                fm.write('text', '%s', l.line[p:s])
                fm.data(matched=False)
            fm.startitem()
            fm.write('text', '%s', l.line[s:e], label='grep.match')
            fm.data(matched=True)
            p = e
        if p < len(l.line):
            fm.startitem()
            fm.write('text', '%s', l.line[p:])
            fm.data(matched=False)
        fm.end()

    skip = {}
    revfiles = {}
    matchfn = scmutil.match(repo[None], pats, opts)
    found = False
    follow = opts.get('follow')

    def prep(ctx, fns):
        rev = ctx.rev()
        pctx = ctx.p1()
        parent = pctx.rev()
        matches.setdefault(rev, {})
        matches.setdefault(parent, {})
        files = revfiles.setdefault(rev, [])
        for fn in fns:
            flog = getfile(fn)
            try:
                fnode = ctx.filenode(fn)
            except error.LookupError:
                continue

            copied = flog.renamed(fnode)
            copy = follow and copied and copied[0]
            if copy:
                copies.setdefault(rev, {})[fn] = copy
            if fn in skip:
                if copy:
                    skip[copy] = True
                continue
            files.append(fn)

            if fn not in matches[rev]:
                grepbody(fn, rev, flog.read(fnode))

            pfn = copy or fn
            if pfn not in matches[parent]:
                try:
                    fnode = pctx.filenode(pfn)
                    grepbody(pfn, parent, flog.read(fnode))
                except error.LookupError:
                    pass

    ui.pager('grep')
    fm = ui.formatter('grep', opts)
    for ctx in cmdutil.walkchangerevs(repo, matchfn, opts, prep):
        rev = ctx.rev()
        parent = ctx.p1().rev()
        for fn in sorted(revfiles.get(rev, [])):
            states = matches[rev][fn]
            copy = copies.get(rev, {}).get(fn)
            if fn in skip:
                if copy:
                    skip[copy] = True
                continue
            pstates = matches.get(parent, {}).get(copy or fn, [])
            if pstates or states:
                r = display(fm, fn, ctx, pstates, states)
                found = found or r
                if r and not opts.get('all'):
                    skip[fn] = True
                    if copy:
                        skip[copy] = True
        del matches[rev]
        del revfiles[rev]
    fm.end()

    return not found

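The `matchlines` generator in grep above scans a whole file body with one compiled regex, tracking line numbers incrementally instead of splitting the body into lines first. A self-contained sketch of the same loop, using plain `re` in place of `util.re` (an internal Mercurial wrapper):

```python
import re

def matchlines(body, regexp):
    # Yield (linenum, colstart, colend, line) for every match in body,
    # mirroring the scanning loop in grep's matchlines above.
    begin = 0
    linenum = 0
    while begin < len(body):
        match = regexp.search(body, begin)
        if not match:
            break
        mstart, mend = match.span()
        # count newlines since the previous match to advance the line number
        linenum += body.count('\n', begin, mstart) + 1
        lstart = body.rfind('\n', begin, mstart) + 1 or begin
        begin = body.find('\n', mend) + 1 or len(body) + 1
        lend = begin - 1
        yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]

body = "spam\neggs and spam\nham\n"
hits = list(matchlines(body, re.compile(r'spam')))
# hits -> [(1, 0, 4, 'spam'), (2, 9, 13, 'eggs and spam')]
```

Column offsets are relative to the start of the matched line, which is what lets `displaymatches` highlight the matched span within each printed line.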
@command('heads',
    [('r', 'rev', '',
      _('show only heads which are descendants of STARTREV'), _('STARTREV')),
    ('t', 'topo', False, _('show topological heads only')),
    ('a', 'active', False, _('show active branchheads only (DEPRECATED)')),
    ('c', 'closed', False, _('show normal and closed branch heads')),
    ] + templateopts,
    _('[-ct] [-r STARTREV] [REV]...'))
def heads(ui, repo, *branchrevs, **opts):
    """show branch heads

    With no arguments, show all open branch heads in the repository.
    Branch heads are changesets that have no descendants on the
    same branch. They are where development generally takes place and
    are the usual targets for update and merge operations.

    If one or more REVs are given, only open branch heads on the
    branches associated with the specified changesets are shown. This
    means that you can use :hg:`heads .` to see the heads on the
    currently checked-out branch.

    If -c/--closed is specified, also show branch heads marked closed
    (see :hg:`commit --close-branch`).

    If STARTREV is specified, only those heads that are descendants of
    STARTREV will be displayed.

    If -t/--topo is specified, named branch mechanics will be ignored and only
    topological heads (changesets with no children) will be shown.

    Returns 0 if matching heads are found, 1 if not.
    """

    opts = pycompat.byteskwargs(opts)
    start = None
    if 'rev' in opts:
        start = scmutil.revsingle(repo, opts['rev'], None).node()

    if opts.get('topo'):
        heads = [repo[h] for h in repo.heads(start)]
    else:
        heads = []
        for branch in repo.branchmap():
            heads += repo.branchheads(branch, start, opts.get('closed'))
        heads = [repo[h] for h in heads]

    if branchrevs:
        branches = set(repo[br].branch() for br in branchrevs)
        heads = [h for h in heads if h.branch() in branches]

    if opts.get('active') and branchrevs:
        dagheads = repo.heads(start)
        heads = [h for h in heads if h.node() in dagheads]

    if branchrevs:
        haveheads = set(h.branch() for h in heads)
        if branches - haveheads:
            headless = ', '.join(b for b in branches - haveheads)
            msg = _('no open branch heads found on branches %s')
            if opts.get('rev'):
                msg += _(' (started at %s)') % opts['rev']
            ui.warn((msg + '\n') % headless)

    if not heads:
        return 1

    ui.pager('heads')
    heads = sorted(heads, key=lambda x: -x.rev())
    displayer = cmdutil.show_changeset(ui, repo, opts)
    for ctx in heads:
        displayer.show(ctx)
    displayer.close()

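The branch filtering and ordering in `heads` above reduces to two steps: keep only heads whose branch was requested, then sort newest-first by revision number. A minimal sketch over plain `(rev, branch)` tuples, hypothetical stand-ins for the changectx objects the real command filters:

```python
def select_heads(heads, branches=None):
    # heads: iterable of (rev, branch) pairs; branches: set of branch
    # names to keep, or None/empty for all. Mirrors the filter + the
    # sorted(..., key=lambda x: -x.rev()) ordering in heads above.
    if branches:
        heads = [h for h in heads if h[1] in branches]
    return sorted(heads, key=lambda h: -h[0])
```

Sorting by negated revision number keeps the newest heads first without reversing, matching the display order of the command.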
@command('help',
    [('e', 'extension', None, _('show only help for extensions')),
    ('c', 'command', None, _('show only help for commands')),
    ('k', 'keyword', None, _('show topics matching keyword')),
    ('s', 'system', [], _('show help for specific platform(s)')),
    ],
    _('[-ecks] [TOPIC]'),
    norepo=True)
def help_(ui, name=None, **opts):
    """show help for a given topic or a help overview

    With no arguments, print a list of commands with short help messages.

    Given a topic, extension, or command name, print help for that
    topic.

    Returns 0 if successful.
    """

    keep = opts.get(r'system') or []
    if len(keep) == 0:
        if pycompat.sysplatform.startswith('win'):
            keep.append('windows')
        elif pycompat.sysplatform == 'OpenVMS':
            keep.append('vms')
        elif pycompat.sysplatform == 'plan9':
            keep.append('plan9')
        else:
            keep.append('unix')
            keep.append(pycompat.sysplatform.lower())
    if ui.verbose:
        keep.append('verbose')

    commands = sys.modules[__name__]
    formatted = help.formattedhelp(ui, commands, name, keep=keep, **opts)
    ui.pager('help')
    ui.write(formatted)


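The default keep-list built by `help_` above selects which platform-specific help sections to show: one coarse keyword for the OS family, plus the exact platform string on Unix-like systems. A standalone sketch of that selection, taking the platform string as a parameter instead of reading `pycompat.sysplatform`:

```python
def platform_keywords(sysplatform, verbose=False):
    # Mirror help_'s default keep-list: a coarse platform keyword,
    # plus the lowercased exact platform string on unix-like systems,
    # and 'verbose' when verbose help is requested.
    keep = []
    if sysplatform.startswith('win'):
        keep.append('windows')
    elif sysplatform == 'OpenVMS':
        keep.append('vms')
    elif sysplatform == 'plan9':
        keep.append('plan9')
    else:
        keep.append('unix')
        keep.append(sysplatform.lower())
    if verbose:
        keep.append('verbose')
    return keep
```

The keep-list is then passed to `help.formattedhelp`, which uses it to decide which `.. container::` sections of the help text to render.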
@command('identify|id',
    [('r', 'rev', '',
      _('identify the specified revision'), _('REV')),
    ('n', 'num', None, _('show local revision number')),
    ('i', 'id', None, _('show global revision id')),
    ('b', 'branch', None, _('show branch')),
    ('t', 'tags', None, _('show tags')),
    ('B', 'bookmarks', None, _('show bookmarks')),
    ] + remoteopts,
    _('[-nibtB] [-r REV] [SOURCE]'),
    optionalrepo=True)
def identify(ui, repo, source=None, rev=None,
             num=None, id=None, branch=None, tags=None, bookmarks=None, **opts):
    """identify the working directory or specified revision

    Print a summary identifying the repository state at REV using one or
    two parent hash identifiers, followed by a "+" if the working
    directory has uncommitted changes, the branch name (if not default),
    a list of tags, and a list of bookmarks.

    When REV is not given, print a summary of the current state of the
    repository.

    Specifying a path to a repository root or Mercurial bundle will
    cause lookup to operate on that repository/bundle.

    .. container:: verbose

      Examples:

      - generate a build identifier for the working directory::

          hg id --id > build-id.dat

      - find the revision corresponding to a tag::

          hg id -n -r 1.3

      - check the most recent revision of a remote repository::

          hg id -r tip https://www.mercurial-scm.org/repo/hg/

    See :hg:`log` for generating more information about specific revisions,
    including full hash identifiers.

    Returns 0 if successful.
    """

    opts = pycompat.byteskwargs(opts)
    if not repo and not source:
        raise error.Abort(_("there is no Mercurial repository here "
                            "(.hg not found)"))

    if ui.debugflag:
        hexfunc = hex
    else:
        hexfunc = short
    default = not (num or id or branch or tags or bookmarks)
    output = []
    revs = []

    if source:
        source, branches = hg.parseurl(ui.expandpath(source))
        peer = hg.peer(repo or ui, opts, source) # only pass ui when no repo
        repo = peer.local()
        revs, checkout = hg.addbranchrevs(repo, peer, branches, None)

    if not repo:
        if num or branch or tags:
            raise error.Abort(
                _("can't query remote revision number, branch, or tags"))
        if not rev and revs:
            rev = revs[0]
        if not rev:
            rev = "tip"

        remoterev = peer.lookup(rev)
        if default or id:
            output = [hexfunc(remoterev)]

        def getbms():
            bms = []

            if 'bookmarks' in peer.listkeys('namespaces'):
                hexremoterev = hex(remoterev)
                bms = [bm for bm, bmr in peer.listkeys('bookmarks').iteritems()
                       if bmr == hexremoterev]

            return sorted(bms)

        if bookmarks:
            output.extend(getbms())
        elif default and not ui.quiet:
            # multiple bookmarks for a single parent separated by '/'
            bm = '/'.join(getbms())
            if bm:
                output.append(bm)
    else:
        ctx = scmutil.revsingle(repo, rev, None)

        if ctx.rev() is None:
            ctx = repo[None]
            parents = ctx.parents()
            taglist = []
            for p in parents:
                taglist.extend(p.tags())

            changed = ""
            if default or id or num:
                if (any(repo.status())
                    or any(ctx.sub(s).dirty() for s in ctx.substate)):
                    changed = '+'
            if default or id:
                output = ["%s%s" %
                    ('+'.join([hexfunc(p.node()) for p in parents]), changed)]
            if num:
                output.append("%s%s" %
                    ('+'.join(["%d" % p.rev() for p in parents]), changed))
        else:
            if default or id:
                output = [hexfunc(ctx.node())]
            if num:
                output.append(pycompat.bytestr(ctx.rev()))
            taglist = ctx.tags()

        if default and not ui.quiet:
            b = ctx.branch()
            if b != 'default':
                output.append("(%s)" % b)

            # multiple tags for a single parent separated by '/'
            t = '/'.join(taglist)
            if t:
                output.append(t)

            # multiple bookmarks for a single parent separated by '/'
            bm = '/'.join(ctx.bookmarks())
            if bm:
                output.append(bm)
        else:
            if branch:
                output.append(ctx.branch())

            if tags:
                output.extend(taglist)

            if bookmarks:
                output.extend(ctx.bookmarks())

    ui.write("%s\n" % ' '.join(output))

@command('import|patch',
    [('p', 'strip', 1,
      _('directory strip option for patch. This has the same '
        'meaning as the corresponding patch option'), _('NUM')),
    ('b', 'base', '', _('base path (DEPRECATED)'), _('PATH')),
    ('e', 'edit', False, _('invoke editor on commit messages')),
    ('f', 'force', None,
     _('skip check for outstanding uncommitted changes (DEPRECATED)')),
    ('', 'no-commit', None,
     _("don't commit, just update the working directory")),
    ('', 'bypass', None,
     _("apply patch without touching the working directory")),
    ('', 'partial', None,
     _('commit even if some hunks fail')),
    ('', 'exact', None,
     _('abort if patch would apply lossily')),
    ('', 'prefix', '',
     _('apply patch to subdirectory'), _('DIR')),
    ('', 'import-branch', None,
     _('use any branch information in patch (implied by --exact)'))] +
    commitopts + commitopts2 + similarityopts,
    _('[OPTION]... PATCH...'))
def import_(ui, repo, patch1=None, *patches, **opts):
2836 """import an ordered set of patches
2836 """import an ordered set of patches
2837
2837
2838 Import a list of patches and commit them individually (unless
2838 Import a list of patches and commit them individually (unless
2839 --no-commit is specified).
2839 --no-commit is specified).
2840
2840
2841 To read a patch from standard input (stdin), use "-" as the patch
2841 To read a patch from standard input (stdin), use "-" as the patch
2842 name. If a URL is specified, the patch will be downloaded from
2842 name. If a URL is specified, the patch will be downloaded from
2843 there.
2843 there.
2844
2844
2845 Import first applies changes to the working directory (unless
2845 Import first applies changes to the working directory (unless
2846 --bypass is specified), import will abort if there are outstanding
2846 --bypass is specified), import will abort if there are outstanding
2847 changes.
2847 changes.
2848
2848
2849 Use --bypass to apply and commit patches directly to the
2849 Use --bypass to apply and commit patches directly to the
2850 repository, without affecting the working directory. Without
2850 repository, without affecting the working directory. Without
2851 --exact, patches will be applied on top of the working directory
2851 --exact, patches will be applied on top of the working directory
2852 parent revision.
2852 parent revision.
2853
2853
2854 You can import a patch straight from a mail message. Even patches
2854 You can import a patch straight from a mail message. Even patches
2855 as attachments work (to use the body part, it must have type
2855 as attachments work (to use the body part, it must have type
2856 text/plain or text/x-patch). From and Subject headers of email
2856 text/plain or text/x-patch). From and Subject headers of email
2857 message are used as default committer and commit message. All
2857 message are used as default committer and commit message. All
2858 text/plain body parts before first diff are added to the commit
2858 text/plain body parts before first diff are added to the commit
2859 message.
2859 message.
2860
2860
2861 If the imported patch was generated by :hg:`export`, user and
2861 If the imported patch was generated by :hg:`export`, user and
2862 description from patch override values from message headers and
2862 description from patch override values from message headers and
2863 body. Values given on command line with -m/--message and -u/--user
2863 body. Values given on command line with -m/--message and -u/--user
2864 override these.
2864 override these.
2865
2865
2866 If --exact is specified, import will set the working directory to
2866 If --exact is specified, import will set the working directory to
2867 the parent of each patch before applying it, and will abort if the
2867 the parent of each patch before applying it, and will abort if the
2868 resulting changeset has a different ID than the one recorded in
2868 resulting changeset has a different ID than the one recorded in
2869 the patch. This will guard against various ways that portable
2869 the patch. This will guard against various ways that portable
2870 patch formats and mail systems might fail to transfer Mercurial
2870 patch formats and mail systems might fail to transfer Mercurial
2871 data or metadata. See :hg:`bundle` for lossless transmission.
2871 data or metadata. See :hg:`bundle` for lossless transmission.
2872
2872
2873 Use --partial to ensure a changeset will be created from the patch
2873 Use --partial to ensure a changeset will be created from the patch
2874 even if some hunks fail to apply. Hunks that fail to apply will be
2874 even if some hunks fail to apply. Hunks that fail to apply will be
2875 written to a <target-file>.rej file. Conflicts can then be resolved
2875 written to a <target-file>.rej file. Conflicts can then be resolved
2876 by hand before :hg:`commit --amend` is run to update the created
2876 by hand before :hg:`commit --amend` is run to update the created
2877 changeset. This flag exists to let people import patches that
2877 changeset. This flag exists to let people import patches that
2878 partially apply without losing the associated metadata (author,
2878 partially apply without losing the associated metadata (author,
2879 date, description, ...).
2879 date, description, ...).
2880
2880
2881 .. note::
2881 .. note::
2882
2882
2883 When no hunks apply cleanly, :hg:`import --partial` will create
2883 When no hunks apply cleanly, :hg:`import --partial` will create
2884 an empty changeset, importing only the patch metadata.
2884 an empty changeset, importing only the patch metadata.
2885
2885
2886 With -s/--similarity, hg will attempt to discover renames and
2886 With -s/--similarity, hg will attempt to discover renames and
2887 copies in the patch in the same way as :hg:`addremove`.
2887 copies in the patch in the same way as :hg:`addremove`.
2888
2888
2889 It is possible to use external patch programs to perform the patch
2889 It is possible to use external patch programs to perform the patch
2890 by setting the ``ui.patch`` configuration option. For the default
2890 by setting the ``ui.patch`` configuration option. For the default
2891 internal tool, the fuzz can also be configured via ``patch.fuzz``.
2891 internal tool, the fuzz can also be configured via ``patch.fuzz``.
2892 See :hg:`help config` for more information about configuration
2892 See :hg:`help config` for more information about configuration
2893 files and how to use these options.
2893 files and how to use these options.
2894
2894
2895 See :hg:`help dates` for a list of formats valid for -d/--date.
2895 See :hg:`help dates` for a list of formats valid for -d/--date.
2896
2896
2897 .. container:: verbose
2897 .. container:: verbose
2898
2898
2899 Examples:
2899 Examples:
2900
2900
2901 - import a traditional patch from a website and detect renames::
2901 - import a traditional patch from a website and detect renames::
2902
2902
2903 hg import -s 80 http://example.com/bugfix.patch
2903 hg import -s 80 http://example.com/bugfix.patch
2904
2904
2905 - import a changeset from an hgweb server::
2905 - import a changeset from an hgweb server::
2906
2906
2907 hg import https://www.mercurial-scm.org/repo/hg/rev/5ca8c111e9aa
2907 hg import https://www.mercurial-scm.org/repo/hg/rev/5ca8c111e9aa
2908
2908
2909 - import all the patches in an Unix-style mbox::
2909 - import all the patches in an Unix-style mbox::
2910
2910
2911 hg import incoming-patches.mbox
2911 hg import incoming-patches.mbox
2912
2912
2913 - import patches from stdin::
2913 - import patches from stdin::
2914
2914
2915 hg import -
2915 hg import -
2916
2916
2917 - attempt to exactly restore an exported changeset (not always
2917 - attempt to exactly restore an exported changeset (not always
2918 possible)::
2918 possible)::
2919
2919
2920 hg import --exact proposed-fix.patch
2920 hg import --exact proposed-fix.patch
2921
2921
2922 - use an external tool to apply a patch which is too fuzzy for
2922 - use an external tool to apply a patch which is too fuzzy for
2923 the default internal tool.
2923 the default internal tool.
2924
2924
2925 hg import --config ui.patch="patch --merge" fuzzy.patch
2925 hg import --config ui.patch="patch --merge" fuzzy.patch
2926
2926
2927 - change the default fuzzing from 2 to a less strict 7
2927 - change the default fuzzing from 2 to a less strict 7
2928
2928
2929 hg import --config ui.fuzz=7 fuzz.patch
2929 hg import --config ui.fuzz=7 fuzz.patch
2930
2930
2931 Returns 0 on success, 1 on partial success (see --partial).
2931 Returns 0 on success, 1 on partial success (see --partial).
2932 """
2932 """

    opts = pycompat.byteskwargs(opts)
    if not patch1:
        raise error.Abort(_('need at least one patch to import'))

    patches = (patch1,) + patches

    date = opts.get('date')
    if date:
        opts['date'] = util.parsedate(date)

    exact = opts.get('exact')
    update = not opts.get('bypass')
    if not update and opts.get('no_commit'):
        raise error.Abort(_('cannot use --no-commit with --bypass'))
    try:
        sim = float(opts.get('similarity') or 0)
    except ValueError:
        raise error.Abort(_('similarity must be a number'))
    if sim < 0 or sim > 100:
        raise error.Abort(_('similarity must be between 0 and 100'))
    if sim and not update:
        raise error.Abort(_('cannot use --similarity with --bypass'))
    if exact:
        if opts.get('edit'):
            raise error.Abort(_('cannot use --exact with --edit'))
        if opts.get('prefix'):
            raise error.Abort(_('cannot use --exact with --prefix'))

    base = opts["base"]
    wlock = dsguard = lock = tr = None
    msgs = []
    ret = 0

    try:
        wlock = repo.wlock()

        if update:
            cmdutil.checkunfinished(repo)
        if (exact or not opts.get('force')):
            cmdutil.bailifchanged(repo)

        if not opts.get('no_commit'):
            lock = repo.lock()
            tr = repo.transaction('import')
        else:
            dsguard = dirstateguard.dirstateguard(repo, 'import')
        parents = repo[None].parents()
        for patchurl in patches:
            if patchurl == '-':
                ui.status(_('applying patch from stdin\n'))
                patchfile = ui.fin
                patchurl = 'stdin' # for error message
            else:
                patchurl = os.path.join(base, patchurl)
                ui.status(_('applying %s\n') % patchurl)
                patchfile = hg.openpath(ui, patchurl)

            haspatch = False
            for hunk in patch.split(patchfile):
                (msg, node, rej) = cmdutil.tryimportone(ui, repo, hunk,
                                                        parents, opts,
                                                        msgs, hg.clean)
                if msg:
                    haspatch = True
                    ui.note(msg + '\n')
                if update or exact:
                    parents = repo[None].parents()
                else:
                    parents = [repo[node]]
                if rej:
                    ui.write_err(_("patch applied partially\n"))
                    ui.write_err(_("(fix the .rej files and run "
                                   "`hg commit --amend`)\n"))
                    ret = 1
                    break

            if not haspatch:
                raise error.Abort(_('%s: no diffs found') % patchurl)

        if tr:
            tr.close()
        if msgs:
            repo.savecommitmessage('\n* * *\n'.join(msgs))
        if dsguard:
            dsguard.close()
        return ret
    finally:
        if tr:
            tr.release()
        release(lock, dsguard, wlock)

@command('incoming|in',
    [('f', 'force', None,
      _('run even if remote repository is unrelated')),
    ('n', 'newest-first', None, _('show newest record first')),
    ('', 'bundle', '',
     _('file to store the bundles into'), _('FILE')),
    ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
    ('B', 'bookmarks', False, _("compare bookmarks")),
    ('b', 'branch', [],
     _('a specific branch you would like to pull'), _('BRANCH')),
    ] + logopts + remoteopts + subrepoopts,
    _('[-p] [-n] [-M] [-f] [-r REV]... [--bundle FILENAME] [SOURCE]'))
def incoming(ui, repo, source="default", **opts):
    """show new changesets found in source

    Show new changesets found in the specified path/URL or the default
    pull location. These are the changesets that would have been pulled
    if a pull was requested at the time you issued this command.

    See pull for valid source format details.

    .. container:: verbose

      With -B/--bookmarks, the result of bookmark comparison between
      local and remote repositories is displayed. With -v/--verbose,
      status is also displayed for each bookmark like below::

        BM1               01234567890a added
        BM2               1234567890ab advanced
        BM3               234567890abc diverged
        BM4               34567890abcd changed

      The action taken locally when pulling depends on the
      status of each bookmark:

      :``added``: pull will create it
      :``advanced``: pull will update it
      :``diverged``: pull will create a divergent bookmark
      :``changed``: result depends on remote changesets

      From the point of view of pulling behavior, bookmarks
      existing only in the remote repository are treated as ``added``,
      even if they were in fact locally deleted.

    .. container:: verbose

      For a remote repository, using --bundle avoids downloading the
      changesets twice if the incoming is followed by a pull.

      Examples:

      - show incoming changes with patches and full description::

          hg incoming -vp

      - show incoming changes excluding merges, store a bundle::

          hg in -vpM --bundle incoming.hg
          hg pull incoming.hg

      - briefly list changes inside a bundle::

          hg in changes.hg -T "{desc|firstline}\\n"

    Returns 0 if there are incoming changes, 1 otherwise.
    """
    opts = pycompat.byteskwargs(opts)
    if opts.get('graph'):
        cmdutil.checkunsupportedgraphflags([], opts)
        def display(other, chlist, displayer):
            revdag = cmdutil.graphrevs(other, chlist, opts)
            cmdutil.displaygraph(ui, repo, revdag, displayer,
                                 graphmod.asciiedges)

        hg._incoming(display, lambda: 1, ui, repo, source, opts, buffered=True)
        return 0

    if opts.get('bundle') and opts.get('subrepos'):
        raise error.Abort(_('cannot combine --bundle and --subrepos'))

    if opts.get('bookmarks'):
        source, branches = hg.parseurl(ui.expandpath(source),
                                       opts.get('branch'))
        other = hg.peer(repo, opts, source)
        if 'bookmarks' not in other.listkeys('namespaces'):
            ui.warn(_("remote doesn't support bookmarks\n"))
            return 0
        ui.pager('incoming')
        ui.status(_('comparing with %s\n') % util.hidepassword(source))
        return bookmarks.incoming(ui, repo, other)

    repo._subtoppath = ui.expandpath(source)
    try:
        return hg.incoming(ui, repo, source, opts)
    finally:
        del repo._subtoppath


@command('^init', remoteopts, _('[-e CMD] [--remotecmd CMD] [DEST]'),
         norepo=True)
def init(ui, dest=".", **opts):
    """create a new repository in the given directory

    Initialize a new repository in the given directory. If the given
    directory does not exist, it will be created.

    If no directory is given, the current directory is used.

    It is possible to specify an ``ssh://`` URL as the destination.
    See :hg:`help urls` for more information.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    hg.peer(ui, opts, ui.expandpath(dest), create=True)

@command('locate',
    [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
    ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
    ('f', 'fullpath', None, _('print complete paths from the filesystem root')),
    ] + walkopts,
    _('[OPTION]... [PATTERN]...'))
def locate(ui, repo, *pats, **opts):
    """locate files matching specific patterns (DEPRECATED)

    Print files under Mercurial control in the working directory whose
    names match the given patterns.

    By default, this command searches all directories in the working
    directory. To search just the current directory and its
    subdirectories, use "--include .".

    If no patterns are given to match, this command prints the names
    of all files under Mercurial control in the working directory.

    If you want to feed the output of this command into the "xargs"
    command, use the -0 option to both this command and "xargs". This
    will avoid the problem of "xargs" treating single filenames that
    contain whitespace as multiple filenames.

    See :hg:`help files` for a more versatile command.

    Returns 0 if a match is found, 1 otherwise.
    """
    opts = pycompat.byteskwargs(opts)
    if opts.get('print0'):
        end = '\0'
    else:
        end = '\n'
    rev = scmutil.revsingle(repo, opts.get('rev'), None).node()

    ret = 1
    ctx = repo[rev]
    m = scmutil.match(ctx, pats, opts, default='relglob',
                      badfn=lambda x, y: False)

    ui.pager('locate')
    for abs in ctx.matches(m):
        if opts.get('fullpath'):
            ui.write(repo.wjoin(abs), end)
        else:
            ui.write(((pats and m.rel(abs)) or abs), end)
        ret = 0

    return ret

@command('^log|history',
    [('f', 'follow', None,
     _('follow changeset history, or file history across copies and renames')),
    ('', 'follow-first', None,
     _('only follow the first parent of merge changesets (DEPRECATED)')),
    ('d', 'date', '', _('show revisions matching date spec'), _('DATE')),
    ('C', 'copies', None, _('show copied files')),
    ('k', 'keyword', [],
     _('do case-insensitive search for a given text'), _('TEXT')),
    ('r', 'rev', [], _('show the specified revision or revset'), _('REV')),
    ('', 'removed', None, _('include revisions where files were removed')),
    ('m', 'only-merges', None, _('show only merges (DEPRECATED)')),
    ('u', 'user', [], _('revisions committed by user'), _('USER')),
    ('', 'only-branch', [],
     _('show only changesets within the given named branch (DEPRECATED)'),
     _('BRANCH')),
    ('b', 'branch', [],
     _('show changesets within the given named branch'), _('BRANCH')),
    ('P', 'prune', [],
     _('do not display revision or any of its ancestors'), _('REV')),
    ] + logopts + walkopts,
    _('[OPTION]... [FILE]'),
    inferrepo=True)
def log(ui, repo, *pats, **opts):
    """show revision history of entire repository or files

    Print the revision history of the specified files or the entire
    project.

    If no revision range is specified, the default is ``tip:0`` unless
    --follow is set, in which case the working directory parent is
    used as the starting revision.

    File history is shown without following rename or copy history of
    files. Use -f/--follow with a filename to follow history across
    renames and copies. --follow without a filename will only show
    ancestors or descendants of the starting revision.

    By default this command prints revision number and changeset id,
    tags, non-trivial parents, user, date and time, and a summary for
    each commit. When the -v/--verbose switch is used, the list of
    changed files and full commit message are shown.

    With --graph the revisions are shown as an ASCII art DAG with the most
    recent changeset at the top.
    'o' is a changeset, '@' is a working directory parent, 'x' is obsolete,
    and '+' represents a fork where the changeset from the lines below is a
    parent of the 'o' merge on the same line.
    Paths in the DAG are represented with '|', '/' and so forth. ':' in place
    of a '|' indicates one or more revisions in a path are omitted.

    .. note::

       :hg:`log --patch` may generate unexpected diff output for merge
       changesets, as it will only compare the merge changeset against
       its first parent. Also, only files different from BOTH parents
       will appear in files:.

    .. note::

       For performance reasons, :hg:`log FILE` may omit duplicate changes
       made on branches and will not show removals or mode changes. To
       see all such changes, use the --removed switch.

    .. container:: verbose

      Some examples:

      - changesets with full descriptions and file lists::
3261
3261
3262 hg log -v
3262 hg log -v
3263
3263
3264 - changesets ancestral to the working directory::
3264 - changesets ancestral to the working directory::
3265
3265
3266 hg log -f
3266 hg log -f
3267
3267
3268 - last 10 commits on the current branch::
3268 - last 10 commits on the current branch::
3269
3269
3270 hg log -l 10 -b .
3270 hg log -l 10 -b .
3271
3271
3272 - changesets showing all modifications of a file, including removals::
3272 - changesets showing all modifications of a file, including removals::
3273
3273
3274 hg log --removed file.c
3274 hg log --removed file.c
3275
3275
3276 - all changesets that touch a directory, with diffs, excluding merges::
3276 - all changesets that touch a directory, with diffs, excluding merges::
3277
3277
3278 hg log -Mp lib/
3278 hg log -Mp lib/
3279
3279
3280 - all revision numbers that match a keyword::
3280 - all revision numbers that match a keyword::
3281
3281
3282 hg log -k bug --template "{rev}\\n"
3282 hg log -k bug --template "{rev}\\n"
3283
3283
3284 - the full hash identifier of the working directory parent::
3284 - the full hash identifier of the working directory parent::
3285
3285
3286 hg log -r . --template "{node}\\n"
3286 hg log -r . --template "{node}\\n"
3287
3287
3288 - list available log templates::
3288 - list available log templates::
3289
3289
3290 hg log -T list
3290 hg log -T list
3291
3291
3292 - check if a given changeset is included in a tagged release::
3292 - check if a given changeset is included in a tagged release::
3293
3293
3294 hg log -r "a21ccf and ancestor(1.9)"
3294 hg log -r "a21ccf and ancestor(1.9)"
3295
3295
3296 - find all changesets by some user in a date range::
3296 - find all changesets by some user in a date range::
3297
3297
3298 hg log -k alice -d "may 2008 to jul 2008"
3298 hg log -k alice -d "may 2008 to jul 2008"
3299
3299
3300 - summary of all changesets after the last tag::
3300 - summary of all changesets after the last tag::
3301
3301
3302 hg log -r "last(tagged())::" --template "{desc|firstline}\\n"
3302 hg log -r "last(tagged())::" --template "{desc|firstline}\\n"
3303
3303
3304 See :hg:`help dates` for a list of formats valid for -d/--date.
3304 See :hg:`help dates` for a list of formats valid for -d/--date.
3305
3305
3306 See :hg:`help revisions` for more about specifying and ordering
3306 See :hg:`help revisions` for more about specifying and ordering
3307 revisions.
3307 revisions.
3308
3308
3309 See :hg:`help templates` for more about pre-packaged styles and
3309 See :hg:`help templates` for more about pre-packaged styles and
3310 specifying custom templates.
3310 specifying custom templates.
3311
3311
3312 Returns 0 on success.
3312 Returns 0 on success.
3313
3313
3314 """
3314 """
3315 opts = pycompat.byteskwargs(opts)
3315 opts = pycompat.byteskwargs(opts)
3316 if opts.get('follow') and opts.get('rev'):
3316 if opts.get('follow') and opts.get('rev'):
3317 opts['rev'] = [revsetlang.formatspec('reverse(::%lr)', opts.get('rev'))]
3317 opts['rev'] = [revsetlang.formatspec('reverse(::%lr)', opts.get('rev'))]
3318 del opts['follow']
3318 del opts['follow']
3319
3319
3320 if opts.get('graph'):
3320 if opts.get('graph'):
3321 return cmdutil.graphlog(ui, repo, pats, opts)
3321 return cmdutil.graphlog(ui, repo, pats, opts)
3322
3322
3323 revs, expr, filematcher = cmdutil.getlogrevs(repo, pats, opts)
3323 revs, expr, filematcher = cmdutil.getlogrevs(repo, pats, opts)
3324 limit = cmdutil.loglimit(opts)
3324 limit = cmdutil.loglimit(opts)
3325 count = 0
3325 count = 0
3326
3326
3327 getrenamed = None
3327 getrenamed = None
3328 if opts.get('copies'):
3328 if opts.get('copies'):
3329 endrev = None
3329 endrev = None
3330 if opts.get('rev'):
3330 if opts.get('rev'):
3331 endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
3331 endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
3332 getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
3332 getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
3333
3333
3334 ui.pager('log')
3334 ui.pager('log')
3335 displayer = cmdutil.show_changeset(ui, repo, opts, buffered=True)
3335 displayer = cmdutil.show_changeset(ui, repo, opts, buffered=True)
3336 for rev in revs:
3336 for rev in revs:
3337 if count == limit:
3337 if count == limit:
3338 break
3338 break
3339 ctx = repo[rev]
3339 ctx = repo[rev]
3340 copies = None
3340 copies = None
3341 if getrenamed is not None and rev:
3341 if getrenamed is not None and rev:
3342 copies = []
3342 copies = []
3343 for fn in ctx.files():
3343 for fn in ctx.files():
3344 rename = getrenamed(fn, rev)
3344 rename = getrenamed(fn, rev)
3345 if rename:
3345 if rename:
3346 copies.append((fn, rename[0]))
3346 copies.append((fn, rename[0]))
3347 if filematcher:
3347 if filematcher:
3348 revmatchfn = filematcher(ctx.rev())
3348 revmatchfn = filematcher(ctx.rev())
3349 else:
3349 else:
3350 revmatchfn = None
3350 revmatchfn = None
3351 displayer.show(ctx, copies=copies, matchfn=revmatchfn)
3351 displayer.show(ctx, copies=copies, matchfn=revmatchfn)
3352 if displayer.flush(ctx):
3352 if displayer.flush(ctx):
3353 count += 1
3353 count += 1
3354
3354
3355 displayer.close()
3355 displayer.close()
3356
3356
@command('manifest',
    [('r', 'rev', '', _('revision to display'), _('REV')),
     ('', 'all', False, _("list files from all revisions"))]
    + formatteropts,
    _('[-r REV]'))
def manifest(ui, repo, node=None, rev=None, **opts):
    """output the current or given revision of the project manifest

    Print a list of version controlled files for the given revision.
    If no revision is given, the first parent of the working directory
    is used, or the null revision if no revision is checked out.

    With -v, print file permissions, symlink and executable bits.
    With --debug, print file revision hashes.

    If option --all is specified, the list of all files from all revisions
    is printed. This includes deleted and renamed files.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    fm = ui.formatter('manifest', opts)

    if opts.get('all'):
        if rev or node:
            raise error.Abort(_("can't specify a revision with --all"))

        res = []
        prefix = "data/"
        suffix = ".i"
        plen = len(prefix)
        slen = len(suffix)
        with repo.lock():
            for fn, b, size in repo.store.datafiles():
                if size != 0 and fn[-slen:] == suffix and fn[:plen] == prefix:
                    res.append(fn[plen:-slen])
        ui.pager('manifest')
        for f in res:
            fm.startitem()
            fm.write("path", '%s\n', f)
        fm.end()
        return

    if rev and node:
        raise error.Abort(_("please specify just one revision"))

    if not node:
        node = rev

    char = {'l': '@', 'x': '*', '': ''}
    mode = {'l': '644', 'x': '755', '': '644'}
    ctx = scmutil.revsingle(repo, node)
    mf = ctx.manifest()
    ui.pager('manifest')
    for f in ctx:
        fm.startitem()
        fl = ctx[f].flags()
        fm.condwrite(ui.debugflag, 'hash', '%s ', hex(mf[f]))
        fm.condwrite(ui.verbose, 'mode type', '%s %1s ', mode[fl], char[fl])
        fm.write('path', '%s\n', f)
    fm.end()

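The `--all` branch above recovers tracked file names by slicing the `data/` prefix and `.i` suffix off store filenames (`fn[plen:-slen]`). A minimal standalone sketch of that slicing idiom, with hypothetical store entries for illustration (not taken from a real repository):

```python
# Strip a known prefix and suffix from store entries to recover file paths,
# mirroring the fn[plen:-slen] slicing used by `hg manifest --all`.
def strippaths(entries, prefix="data/", suffix=".i"):
    plen, slen = len(prefix), len(suffix)
    res = []
    for fn, size in entries:
        # skip empty revlogs and names lacking the expected prefix/suffix
        if size != 0 and fn.startswith(prefix) and fn.endswith(suffix):
            res.append(fn[plen:-slen])
    return res

# hypothetical store entries (filename, size)
entries = [("data/foo.c.i", 10), ("data/sub/bar.py.i", 4), ("00changelog.i", 8)]
strippaths(entries)  # ['foo.c', 'sub/bar.py']
```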
@command('^merge',
    [('f', 'force', None,
      _('force a merge including outstanding changes (DEPRECATED)')),
     ('r', 'rev', '', _('revision to merge'), _('REV')),
     ('P', 'preview', None,
      _('review revisions to merge (no merge is performed)'))
     ] + mergetoolopts,
    _('[-P] [[-r] REV]'))
def merge(ui, repo, node=None, **opts):
    """merge another revision into working directory

    The current working directory is updated with all changes made in
    the requested revision since the last common predecessor revision.

    Files that changed between either parent are marked as changed for
    the next commit and a commit must be performed before any further
    updates to the repository are allowed. The next commit will have
    two parents.

    ``--tool`` can be used to specify the merge tool used for file
    merges. It overrides the HGMERGE environment variable and your
    configuration files. See :hg:`help merge-tools` for options.

    If no revision is specified, the working directory's parent is a
    head revision, and the current branch contains exactly one other
    head, the other head is merged with by default. Otherwise, an
    explicit revision with which to merge must be provided.

    See :hg:`help resolve` for information on handling file conflicts.

    To undo an uncommitted merge, use :hg:`update --clean .` which
    will check out a clean copy of the original merge parent, losing
    all changes.

    Returns 0 on success, 1 if there are unresolved files.
    """

    opts = pycompat.byteskwargs(opts)
    if opts.get('rev') and node:
        raise error.Abort(_("please specify just one revision"))
    if not node:
        node = opts.get('rev')

    if node:
        node = scmutil.revsingle(repo, node).node()

    if not node:
        node = repo[destutil.destmerge(repo)].node()

    if opts.get('preview'):
        # find nodes that are ancestors of p2 but not of p1
        p1 = repo.lookup('.')
        p2 = repo.lookup(node)
        nodes = repo.changelog.findmissing(common=[p1], heads=[p2])

        displayer = cmdutil.show_changeset(ui, repo, opts)
        for node in nodes:
            displayer.show(repo[node])
        displayer.close()
        return 0

    try:
        # ui.forcemerge is an internal variable, do not document
        repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''), 'merge')
        force = opts.get('force')
        labels = ['working copy', 'merge rev']
        return hg.merge(repo, node, force=force, mergeforce=force,
                        labels=labels)
    finally:
        ui.setconfig('ui', 'forcemerge', '', 'merge')

@command('outgoing|out',
    [('f', 'force', None, _('run even when the destination is unrelated')),
     ('r', 'rev', [],
      _('a changeset intended to be included in the destination'), _('REV')),
     ('n', 'newest-first', None, _('show newest record first')),
     ('B', 'bookmarks', False, _('compare bookmarks')),
     ('b', 'branch', [], _('a specific branch you would like to push'),
      _('BRANCH')),
    ] + logopts + remoteopts + subrepoopts,
    _('[-M] [-p] [-n] [-f] [-r REV]... [DEST]'))
def outgoing(ui, repo, dest=None, **opts):
    """show changesets not found in the destination

    Show changesets not found in the specified destination repository
    or the default push location. These are the changesets that would
    be pushed if a push was requested.

    See pull for details of valid destination formats.

    .. container:: verbose

      With -B/--bookmarks, the result of bookmark comparison between
      local and remote repositories is displayed. With -v/--verbose,
      status is also displayed for each bookmark like below::

        BM1               01234567890a added
        BM2                            deleted
        BM3               234567890abc advanced
        BM4               34567890abcd diverged
        BM5               4567890abcde changed

      The action taken when pushing depends on the
      status of each bookmark:

      :``added``: push with ``-B`` will create it
      :``deleted``: push with ``-B`` will delete it
      :``advanced``: push will update it
      :``diverged``: push with ``-B`` will update it
      :``changed``: push with ``-B`` will update it

      From the point of view of pushing behavior, bookmarks
      existing only in the remote repository are treated as
      ``deleted``, even if it is in fact added remotely.

    Returns 0 if there are outgoing changes, 1 otherwise.
    """
    opts = pycompat.byteskwargs(opts)
    if opts.get('graph'):
        cmdutil.checkunsupportedgraphflags([], opts)
        o, other = hg._outgoing(ui, repo, dest, opts)
        if not o:
            cmdutil.outgoinghooks(ui, repo, other, opts, o)
            return

        revdag = cmdutil.graphrevs(repo, o, opts)
        ui.pager('outgoing')
        displayer = cmdutil.show_changeset(ui, repo, opts, buffered=True)
        cmdutil.displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges)
        cmdutil.outgoinghooks(ui, repo, other, opts, o)
        return 0

    if opts.get('bookmarks'):
        dest = ui.expandpath(dest or 'default-push', dest or 'default')
        dest, branches = hg.parseurl(dest, opts.get('branch'))
        other = hg.peer(repo, opts, dest)
        if 'bookmarks' not in other.listkeys('namespaces'):
            ui.warn(_("remote doesn't support bookmarks\n"))
            return 0
        ui.status(_('comparing with %s\n') % util.hidepassword(dest))
        ui.pager('outgoing')
        return bookmarks.outgoing(ui, repo, other)

    repo._subtoppath = ui.expandpath(dest or 'default-push', dest or 'default')
    try:
        return hg.outgoing(ui, repo, dest, opts)
    finally:
        del repo._subtoppath

@command('parents',
    [('r', 'rev', '', _('show parents of the specified revision'), _('REV')),
    ] + templateopts,
    _('[-r REV] [FILE]'),
    inferrepo=True)
def parents(ui, repo, file_=None, **opts):
    """show the parents of the working directory or revision (DEPRECATED)

    Print the working directory's parent revisions. If a revision is
    given via -r/--rev, the parent of that revision will be printed.
    If a file argument is given, the revision in which the file was
    last changed (before the working directory revision or the
    argument to --rev if given) is printed.

    This command is equivalent to::

        hg log -r "p1()+p2()" or
        hg log -r "p1(REV)+p2(REV)" or
        hg log -r "max(::p1() and file(FILE))+max(::p2() and file(FILE))" or
        hg log -r "max(::p1(REV) and file(FILE))+max(::p2(REV) and file(FILE))"

    See :hg:`summary` and :hg:`help revsets` for related information.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    ctx = scmutil.revsingle(repo, opts.get('rev'), None)

    if file_:
        m = scmutil.match(ctx, (file_,), opts)
        if m.anypats() or len(m.files()) != 1:
            raise error.Abort(_('can only specify an explicit filename'))
        file_ = m.files()[0]
        filenodes = []
        for cp in ctx.parents():
            if not cp:
                continue
            try:
                filenodes.append(cp.filenode(file_))
            except error.LookupError:
                pass
        if not filenodes:
            raise error.Abort(_("'%s' not found in manifest!") % file_)
        p = []
        for fn in filenodes:
            fctx = repo.filectx(file_, fileid=fn)
            p.append(fctx.node())
    else:
        p = [cp.node() for cp in ctx.parents()]

    displayer = cmdutil.show_changeset(ui, repo, opts)
    for n in p:
        if n != nullid:
            displayer.show(repo[n])
    displayer.close()

@command('paths', formatteropts, _('[NAME]'), optionalrepo=True)
def paths(ui, repo, search=None, **opts):
    """show aliases for remote repositories

    Show definition of symbolic path name NAME. If no name is given,
    show definition of all available names.

    Option -q/--quiet suppresses all output when searching for NAME
    and shows only the path names when listing all definitions.

    Path names are defined in the [paths] section of your
    configuration file and in ``/etc/mercurial/hgrc``. If run inside a
    repository, ``.hg/hgrc`` is used, too.

    The path names ``default`` and ``default-push`` have a special
    meaning. When performing a push or pull operation, they are used
    as fallbacks if no location is specified on the command-line.
    When ``default-push`` is set, it will be used for push and
    ``default`` will be used for pull; otherwise ``default`` is used
    as the fallback for both. When cloning a repository, the clone
    source is written as ``default`` in ``.hg/hgrc``.

    .. note::

       ``default`` and ``default-push`` apply to all inbound (e.g.
       :hg:`incoming`) and outbound (e.g. :hg:`outgoing`, :hg:`email`
       and :hg:`bundle`) operations.

    See :hg:`help urls` for more information.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    ui.pager('paths')
    if search:
        pathitems = [(name, path) for name, path in ui.paths.iteritems()
                     if name == search]
    else:
        pathitems = sorted(ui.paths.iteritems())

    fm = ui.formatter('paths', opts)
    if fm.isplain():
        hidepassword = util.hidepassword
    else:
        hidepassword = str
    if ui.quiet:
        namefmt = '%s\n'
    else:
        namefmt = '%s = '
    showsubopts = not search and not ui.quiet

    for name, path in pathitems:
        fm.startitem()
        fm.condwrite(not search, 'name', namefmt, name)
        fm.condwrite(not ui.quiet, 'url', '%s\n', hidepassword(path.rawloc))
        for subopt, value in sorted(path.suboptions.items()):
            assert subopt not in ('name', 'url')
            if showsubopts:
                fm.plain('%s:%s = ' % (name, subopt))
            fm.condwrite(showsubopts, subopt, '%s\n', value)

    fm.end()

    if search and not pathitems:
        if not ui.quiet:
            ui.warn(_("not found!\n"))
        return 1
    else:
        return 0

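The quiet/non-quiet handling above boils down to picking a name format string up front and letting the formatter conditionally emit the URL. A standalone sketch of that selection on a plain dict (the helper name and flag are illustrative, not part of Mercurial's API):

```python
# Pick output formats the way `hg paths` does: quiet mode prints bare
# names, normal mode prints "name = url" pairs.
def formatpaths(pathitems, quiet=False):
    namefmt = '%s\n' if quiet else '%s = %s\n'
    out = []
    for name, url in sorted(pathitems.items()):
        if quiet:
            out.append(namefmt % name)
        else:
            out.append(namefmt % (name, url))
    return ''.join(out)

paths = {'default': 'https://example.com/repo'}
formatpaths(paths)        # 'default = https://example.com/repo\n'
formatpaths(paths, True)  # 'default\n'
```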
3696 @command('phase',
3696 @command('phase',
3697 [('p', 'public', False, _('set changeset phase to public')),
3697 [('p', 'public', False, _('set changeset phase to public')),
3698 ('d', 'draft', False, _('set changeset phase to draft')),
3698 ('d', 'draft', False, _('set changeset phase to draft')),
3699 ('s', 'secret', False, _('set changeset phase to secret')),
3699 ('s', 'secret', False, _('set changeset phase to secret')),
3700 ('f', 'force', False, _('allow to move boundary backward')),
3700 ('f', 'force', False, _('allow to move boundary backward')),
3701 ('r', 'rev', [], _('target revision'), _('REV')),
3701 ('r', 'rev', [], _('target revision'), _('REV')),
3702 ],
3702 ],
3703 _('[-p|-d|-s] [-f] [-r] [REV...]'))
3703 _('[-p|-d|-s] [-f] [-r] [REV...]'))
3704 def phase(ui, repo, *revs, **opts):
3704 def phase(ui, repo, *revs, **opts):
3705 """set or show the current phase name
3705 """set or show the current phase name
3706
3706
3707 With no argument, show the phase name of the current revision(s).
3707 With no argument, show the phase name of the current revision(s).
3708
3708
3709 With one of -p/--public, -d/--draft or -s/--secret, change the
3709 With one of -p/--public, -d/--draft or -s/--secret, change the
3710 phase value of the specified revisions.
3710 phase value of the specified revisions.
3711
3711
3712 Unless -f/--force is specified, :hg:`phase` won't move changeset from a
3712 Unless -f/--force is specified, :hg:`phase` won't move changeset from a
3713 lower phase to an higher phase. Phases are ordered as follows::
3713 lower phase to an higher phase. Phases are ordered as follows::
3714
3714
3715 public < draft < secret
3715 public < draft < secret
3716
3716
3717 Returns 0 on success, 1 if some phases could not be changed.
3717 Returns 0 on success, 1 if some phases could not be changed.
3718
3718
3719 (For more information about the phases concept, see :hg:`help phases`.)
3719 (For more information about the phases concept, see :hg:`help phases`.)
3720 """
3720 """
3721 opts = pycompat.byteskwargs(opts)
3721 opts = pycompat.byteskwargs(opts)
3722 # search for a unique phase argument
3722 # search for a unique phase argument
3723 targetphase = None
3723 targetphase = None
3724 for idx, name in enumerate(phases.phasenames):
3724 for idx, name in enumerate(phases.phasenames):
3725 if opts[name]:
3725 if opts[name]:
3726 if targetphase is not None:
3726 if targetphase is not None:
3727 raise error.Abort(_('only one phase can be specified'))
3727 raise error.Abort(_('only one phase can be specified'))
3728 targetphase = idx
3728 targetphase = idx
3729
3729
3730 # look for specified revision
3730 # look for specified revision
3731 revs = list(revs)
3731 revs = list(revs)
3732 revs.extend(opts['rev'])
3732 revs.extend(opts['rev'])
3733 if not revs:
3733 if not revs:
3734 # display both parents as the second parent phase can influence
3734 # display both parents as the second parent phase can influence
3735 # the phase of a merge commit
3735 # the phase of a merge commit
3736 revs = [c.rev() for c in repo[None].parents()]
3736 revs = [c.rev() for c in repo[None].parents()]
3737
3737
3738 revs = scmutil.revrange(repo, revs)
3738 revs = scmutil.revrange(repo, revs)
3739
3739
3740 lock = None
3740 lock = None
3741 ret = 0
3741 ret = 0
3742 if targetphase is None:
3742 if targetphase is None:
3743 # display
3743 # display
3744 for r in revs:
3744 for r in revs:
3745 ctx = repo[r]
3745 ctx = repo[r]
3746 ui.write('%i: %s\n' % (ctx.rev(), ctx.phasestr()))
3746 ui.write('%i: %s\n' % (ctx.rev(), ctx.phasestr()))
3747 else:
3747 else:
3748 tr = None
3748 tr = None
3749 lock = repo.lock()
3749 lock = repo.lock()
3750 try:
3750 try:
3751 tr = repo.transaction("phase")
3751 tr = repo.transaction("phase")
3752 # set phase
3752 # set phase
3753 if not revs:
3753 if not revs:
3754 raise error.Abort(_('empty revision set'))
3754 raise error.Abort(_('empty revision set'))
3755 nodes = [repo[r].node() for r in revs]
3755 nodes = [repo[r].node() for r in revs]
3756 # moving revision from public to draft may hide them
3756 # moving revision from public to draft may hide them
3757 # We have to check result on an unfiltered repository
3757 # We have to check result on an unfiltered repository
3758 unfi = repo.unfiltered()
3758 unfi = repo.unfiltered()
3759 getphase = unfi._phasecache.phase
3759 getphase = unfi._phasecache.phase
3760 olddata = [getphase(unfi, r) for r in unfi]
3760 olddata = [getphase(unfi, r) for r in unfi]
3761 phases.advanceboundary(repo, tr, targetphase, nodes)
3761 phases.advanceboundary(repo, tr, targetphase, nodes)
3762 if opts['force']:
3762 if opts['force']:
3763 phases.retractboundary(repo, tr, targetphase, nodes)
3763 phases.retractboundary(repo, tr, targetphase, nodes)
3764 tr.close()
3764 tr.close()
3765 finally:
3765 finally:
3766 if tr is not None:
3766 if tr is not None:
3767 tr.release()
3767 tr.release()
3768 lock.release()
3768 lock.release()
3769 getphase = unfi._phasecache.phase
3769 getphase = unfi._phasecache.phase
3770 newdata = [getphase(unfi, r) for r in unfi]
3770 newdata = [getphase(unfi, r) for r in unfi]
3771 changes = sum(newdata[r] != olddata[r] for r in unfi)
3771 changes = sum(newdata[r] != olddata[r] for r in unfi)
3772 cl = unfi.changelog
3772 cl = unfi.changelog
3773 rejected = [n for n in nodes
3773 rejected = [n for n in nodes
3774 if newdata[cl.rev(n)] < targetphase]
3774 if newdata[cl.rev(n)] < targetphase]
3775 if rejected:
3775 if rejected:
3776 ui.warn(_('cannot move %i changesets to a higher '
3776 ui.warn(_('cannot move %i changesets to a higher '
3777 'phase, use --force\n') % len(rejected))
3777 'phase, use --force\n') % len(rejected))
3778 ret = 1
3778 ret = 1
3779 if changes:
3779 if changes:
3780 msg = _('phase changed for %i changesets\n') % changes
3780 msg = _('phase changed for %i changesets\n') % changes
3781 if ret:
3781 if ret:
3782 ui.status(msg)
3782 ui.status(msg)
3783 else:
3783 else:
3784 ui.note(msg)
3784 ui.note(msg)
3785 else:
3785 else:
3786 ui.warn(_('no phases changed\n'))
3786 ui.warn(_('no phases changed\n'))
3787 return ret
3787 return ret
3788
3788
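# Illustration only (hypothetical helper, not part of Mercurial's API): the
# bookkeeping at the end of phase() can be sketched on plain lists of
# per-revision phase values taken before and after the boundary move.
# `changes` counts revisions whose phase moved; `rejected` lists the
# requested revisions still below the target phase (public=0, draft=1,
# secret=2).
def _sketch_phasemove_summary(olddata, newdata, targetrevs, targetphase):
    # count revisions whose recorded phase value changed
    changes = sum(newdata[r] != olddata[r] for r in range(len(olddata)))
    # requested revisions that did not reach the target phase
    rejected = [r for r in targetrevs if newdata[r] < targetphase]
    return changes, rejected
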
def postincoming(ui, repo, modheads, optupdate, checkout, brev):
    """Run after a changegroup has been added via pull/unbundle

    This takes the following arguments:

    :modheads: change of heads by pull/unbundle
    :optupdate: whether updating the working directory is needed
    :checkout: update destination revision (or None to default destination)
    :brev: a name, which might be a bookmark to be activated after updating
    """
    if modheads == 0:
        return
    if optupdate:
        try:
            return hg.updatetotally(ui, repo, checkout, brev)
        except error.UpdateAbort as inst:
            msg = _("not updating: %s") % str(inst)
            hint = inst.hint
            raise error.UpdateAbort(msg, hint=hint)
    if modheads > 1:
        currentbranchheads = len(repo.branchheads())
        if currentbranchheads == modheads:
            ui.status(_("(run 'hg heads' to see heads, 'hg merge' to merge)\n"))
        elif currentbranchheads > 1:
            ui.status(_("(run 'hg heads .' to see heads, 'hg merge' to "
                        "merge)\n"))
        else:
            ui.status(_("(run 'hg heads' to see heads)\n"))
    else:
        ui.status(_("(run 'hg update' to get a working copy)\n"))

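# Illustration only (hypothetical helper, not Mercurial API): the hint
# selection in postincoming() above, restated as a pure function of the
# number of heads added (modheads) and the current branch-head count.
# Returns None when no changegroup was added.
def _sketch_postincoming_hint(modheads, currentbranchheads):
    if modheads == 0:
        return None
    if modheads > 1:
        if currentbranchheads == modheads:
            return "run 'hg heads' to see heads, 'hg merge' to merge"
        elif currentbranchheads > 1:
            return "run 'hg heads .' to see heads, 'hg merge' to merge"
        else:
            return "run 'hg heads' to see heads"
    return "run 'hg update' to get a working copy"
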
@command('^pull',
    [('u', 'update', None,
     _('update to new branch head if changesets were pulled')),
    ('f', 'force', None, _('run even when remote repository is unrelated')),
    ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
    ('B', 'bookmark', [], _("bookmark to pull"), _('BOOKMARK')),
    ('b', 'branch', [], _('a specific branch you would like to pull'),
     _('BRANCH')),
    ] + remoteopts,
    _('[-u] [-f] [-r REV]... [-e CMD] [--remotecmd CMD] [SOURCE]'))
def pull(ui, repo, source="default", **opts):
    """pull changes from the specified source

    Pull changes from a remote repository to a local one.

    This finds all changes from the repository at the specified path
    or URL and adds them to a local repository (the current one unless
    -R is specified). By default, this does not update the copy of the
    project in the working directory.

    Use :hg:`incoming` if you want to see what would have been added
    by a pull at the time you issued this command. If you then decide
    to add those changes to the repository, you should use :hg:`pull
    -r X` where ``X`` is the last changeset listed by :hg:`incoming`.

    If SOURCE is omitted, the 'default' path will be used.
    See :hg:`help urls` for more information.

    Specifying bookmark as ``.`` is equivalent to specifying the active
    bookmark's name.

    Returns 0 on success, 1 if an update had unresolved files.
    """

    opts = pycompat.byteskwargs(opts)
    if ui.configbool('commands', 'update.requiredest') and opts.get('update'):
        msg = _('update destination required by configuration')
        hint = _('use hg pull followed by hg update DEST')
        raise error.Abort(msg, hint=hint)

    source, branches = hg.parseurl(ui.expandpath(source), opts.get('branch'))
    ui.status(_('pulling from %s\n') % util.hidepassword(source))
    other = hg.peer(repo, opts, source)
    try:
        revs, checkout = hg.addbranchrevs(repo, other, branches,
                                          opts.get('rev'))

        pullopargs = {}
        if opts.get('bookmark'):
            if not revs:
                revs = []
            # The list of bookmarks used here is not the one used to actually
            # update the bookmark name. This can result in the revision pulled
            # not ending up with the name of the bookmark because of a race
            # condition on the server. (See issue 4689 for details)
            remotebookmarks = other.listkeys('bookmarks')
            pullopargs['remotebookmarks'] = remotebookmarks
            for b in opts['bookmark']:
                b = repo._bookmarks.expandname(b)
                if b not in remotebookmarks:
                    raise error.Abort(_('remote bookmark %s not found!') % b)
                revs.append(remotebookmarks[b])

        if revs:
            try:
                # When 'rev' is a bookmark name, we cannot guarantee that it
                # will be updated with that name because of a race condition
                # server side. (See issue 4689 for details)
                oldrevs = revs
                revs = []  # actually, nodes
                for r in oldrevs:
                    node = other.lookup(r)
                    revs.append(node)
                    if r == checkout:
                        checkout = node
            except error.CapabilityError:
                err = _("other repository doesn't support revision lookup, "
                        "so a rev cannot be specified.")
                raise error.Abort(err)

        pullopargs.update(opts.get('opargs', {}))
        modheads = exchange.pull(repo, other, heads=revs,
                                 force=opts.get('force'),
                                 bookmarks=opts.get('bookmark', ()),
                                 opargs=pullopargs).cgresult

        # brev is a name, which might be a bookmark to be activated at
        # the end of the update. In other words, it is an explicit
        # destination of the update
        brev = None

        if checkout:
            checkout = str(repo.changelog.rev(checkout))

            # order below depends on implementation of
            # hg.addbranchrevs(). opts['bookmark'] is ignored,
            # because 'checkout' is determined without it.
            if opts.get('rev'):
                brev = opts['rev'][0]
            elif opts.get('branch'):
                brev = opts['branch'][0]
            else:
                brev = branches[0]
        repo._subtoppath = source
        try:
            ret = postincoming(ui, repo, modheads, opts.get('update'),
                               checkout, brev)

        finally:
            del repo._subtoppath

    finally:
        other.close()
    return ret

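# Illustration only (hypothetical helper, not Mercurial API): the update
# destination chosen near the end of pull() follows a simple precedence:
# an explicit -r wins, then -b, then the branch parsed from the source
# URL.
def _sketch_choose_brev(revopts, branchopts, urlbranches):
    if revopts:
        return revopts[0]
    if branchopts:
        return branchopts[0]
    return urlbranches[0]
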
@command('^push',
    [('f', 'force', None, _('force push')),
    ('r', 'rev', [],
     _('a changeset intended to be included in the destination'),
     _('REV')),
    ('B', 'bookmark', [], _("bookmark to push"), _('BOOKMARK')),
    ('b', 'branch', [],
     _('a specific branch you would like to push'), _('BRANCH')),
    ('', 'new-branch', False, _('allow pushing a new branch')),
    ] + remoteopts,
    _('[-f] [-r REV]... [-e CMD] [--remotecmd CMD] [DEST]'))
def push(ui, repo, dest=None, **opts):
    """push changes to the specified destination

    Push changesets from the local repository to the specified
    destination.

    This operation is symmetrical to pull: it is identical to a pull
    in the destination repository from the current one.

    By default, push will not allow creation of new heads at the
    destination, since multiple heads would make it unclear which head
    to use. In this situation, it is recommended to pull and merge
    before pushing.

    Use --new-branch if you want to allow push to create a new named
    branch that is not present at the destination. This allows you to
    only create a new branch without forcing other changes.

    .. note::

       Extra care should be taken with the -f/--force option,
       which will push all new heads on all branches, an action which will
       almost always cause confusion for collaborators.

    If -r/--rev is used, the specified revision and all its ancestors
    will be pushed to the remote repository.

    If -B/--bookmark is used, the specified bookmarked revision, its
    ancestors, and the bookmark will be pushed to the remote
    repository. Specifying ``.`` is equivalent to specifying the active
    bookmark's name.

    Please see :hg:`help urls` for important details about ``ssh://``
    URLs. If DESTINATION is omitted, a default path will be used.

    Returns 0 if push was successful, 1 if nothing to push.
    """

    opts = pycompat.byteskwargs(opts)
    if opts.get('bookmark'):
        ui.setconfig('bookmarks', 'pushing', opts['bookmark'], 'push')
        for b in opts['bookmark']:
            # translate -B options to -r so changesets get pushed
            b = repo._bookmarks.expandname(b)
            if b in repo._bookmarks:
                opts.setdefault('rev', []).append(b)
            else:
                # if we try to push a deleted bookmark, translate it to null
                # this lets simultaneous -r, -b options continue working
                opts.setdefault('rev', []).append("null")

    path = ui.paths.getpath(dest, default=('default-push', 'default'))
    if not path:
        raise error.Abort(_('default repository not configured!'),
                          hint=_("see 'hg help config.paths'"))
    dest = path.pushloc or path.loc
    branches = (path.branch, opts.get('branch') or [])
    ui.status(_('pushing to %s\n') % util.hidepassword(dest))
    revs, checkout = hg.addbranchrevs(repo, repo, branches, opts.get('rev'))
    other = hg.peer(repo, opts, dest)

    if revs:
        revs = [repo.lookup(r) for r in scmutil.revrange(repo, revs)]
        if not revs:
            raise error.Abort(_("specified revisions evaluate to an empty set"),
                              hint=_("use different revision arguments"))
    elif path.pushrev:
        # It doesn't make any sense to specify ancestor revisions. So limit
        # to DAG heads to make discovery simpler.
        expr = revsetlang.formatspec('heads(%r)', path.pushrev)
        revs = scmutil.revrange(repo, [expr])
        revs = [repo[rev].node() for rev in revs]
        if not revs:
            raise error.Abort(_('default push revset for path evaluates to an '
                                'empty set'))

    repo._subtoppath = dest
    try:
        # push subrepos depth-first for coherent ordering
        c = repo['']
        subs = c.substate  # only repos that are committed
        for s in sorted(subs):
            result = c.sub(s).push(opts)
            if result == 0:
                return not result
    finally:
        del repo._subtoppath
    pushop = exchange.push(repo, other, opts.get('force'), revs=revs,
                           newbranch=opts.get('new_branch'),
                           bookmarks=opts.get('bookmark', ()),
                           opargs=opts.get('opargs'))

    result = not pushop.cgresult

    if pushop.bkresult is not None:
        if pushop.bkresult == 2:
            result = 2
        elif not result and pushop.bkresult:
            result = 2

    return result

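# Illustration only (hypothetical helper, not Mercurial API): the exit
# code computed at the end of push() combines the changegroup result
# (cgresult, truthy on success) with the bookmark result (bkresult, or
# None when no bookmarks were pushed); any bookmark problem maps to 2.
def _sketch_push_exitcode(cgresult, bkresult):
    result = 0 if cgresult else 1
    if bkresult is not None:
        if bkresult == 2:
            result = 2
        elif result == 0 and bkresult:
            result = 2
    return result
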
@command('recover', [])
def recover(ui, repo):
    """roll back an interrupted transaction

    Recover from an interrupted commit or pull.

    This command tries to fix the repository status after an
    interrupted operation. It should only be necessary when Mercurial
    suggests it.

    Returns 0 if successful, 1 if nothing to recover or verify fails.
    """
    if repo.recover():
        return hg.verify(repo)
    return 1

@command('^remove|rm',
    [('A', 'after', None, _('record delete for missing files')),
    ('f', 'force', None,
     _('forget added files, delete modified files')),
    ] + subrepoopts + walkopts,
    _('[OPTION]... FILE...'),
    inferrepo=True)
def remove(ui, repo, *pats, **opts):
    """remove the specified files on the next commit

    Schedule the indicated files for removal from the current branch.

    This command schedules the files to be removed at the next commit.
    To undo a remove before that, see :hg:`revert`. To undo added
    files, see :hg:`forget`.

    .. container:: verbose

      -A/--after can be used to remove only files that have already
      been deleted, -f/--force can be used to force deletion, and -Af
      can be used to remove files from the next revision without
      deleting them from the working directory.

      The following table details the behavior of remove for different
      file states (columns) and option combinations (rows). The file
      states are Added [A], Clean [C], Modified [M] and Missing [!]
      (as reported by :hg:`status`). The actions are Warn, Remove
      (from branch) and Delete (from disk):

      ========= == == == ==
      opt/state A  C  M  !
      ========= == == == ==
      none      W  RD W  R
      -f        R  RD RD R
      -A        W  W  W  R
      -Af       R  R  R  R
      ========= == == == ==

      .. note::

         :hg:`remove` never deletes files in Added [A] state from the
         working directory, not even if ``--force`` is specified.

    Returns 0 on success, 1 if any warnings encountered.
    """

    opts = pycompat.byteskwargs(opts)
    after, force = opts.get('after'), opts.get('force')
    if not pats and not after:
        raise error.Abort(_('no files specified'))

    m = scmutil.match(repo[None], pats, opts)
    subrepos = opts.get('subrepos')
    return cmdutil.remove(ui, repo, m, "", after, force, subrepos)

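# Illustration only (hypothetical table, not Mercurial API): the
# option/state matrix from the remove() docstring as a lookup, mapping an
# option combination and a file state to the actions Warn (W), Remove
# from branch (R) and Delete from disk (D).
_SKETCH_REMOVE_ACTIONS = {
    'none': {'A': 'W', 'C': 'RD', 'M': 'W', '!': 'R'},
    '-f': {'A': 'R', 'C': 'RD', 'M': 'RD', '!': 'R'},
    '-A': {'A': 'W', 'C': 'W', 'M': 'W', '!': 'R'},
    '-Af': {'A': 'R', 'C': 'R', 'M': 'R', '!': 'R'},
}
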
@command('rename|move|mv',
    [('A', 'after', None, _('record a rename that has already occurred')),
    ('f', 'force', None, _('forcibly copy over an existing managed file')),
    ] + walkopts + dryrunopts,
    _('[OPTION]... SOURCE... DEST'))
def rename(ui, repo, *pats, **opts):
    """rename files; equivalent of copy + remove

    Mark dest as copies of sources; mark sources for deletion. If dest
    is a directory, copies are put in that directory. If dest is a
    file, there can only be one source.

    By default, this command copies the contents of files as they
    exist in the working directory. If invoked with -A/--after, the
    operation is recorded, but no copying is performed.

    This command takes effect at the next commit. To undo a rename
    before that, see :hg:`revert`.

    Returns 0 on success, 1 if errors are encountered.
    """
    opts = pycompat.byteskwargs(opts)
    with repo.wlock(False):
        return cmdutil.copy(ui, repo, pats, opts, rename=True)

@command('resolve',
    [('a', 'all', None, _('select all unresolved files')),
    ('l', 'list', None, _('list state of files needing merge')),
    ('m', 'mark', None, _('mark files as resolved')),
    ('u', 'unmark', None, _('mark files as unresolved')),
    ('n', 'no-status', None, _('hide status prefix'))]
    + mergetoolopts + walkopts + formatteropts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def resolve(ui, repo, *pats, **opts):
    """redo merges or set/view the merge status of files

    Merges with unresolved conflicts are often the result of
    non-interactive merging using the ``internal:merge`` configuration
    setting, or a command-line merge tool like ``diff3``. The resolve
    command is used to manage the files involved in a merge, after
    :hg:`merge` has been run, and before :hg:`commit` is run (i.e. the
    working directory must have two parents). See :hg:`help
    merge-tools` for information on configuring merge tools.

    The resolve command can be used in the following ways:
4166
4166
4167 - :hg:`resolve [--tool TOOL] FILE...`: attempt to re-merge the specified
4167 - :hg:`resolve [--tool TOOL] FILE...`: attempt to re-merge the specified
4168 files, discarding any previous merge attempts. Re-merging is not
4168 files, discarding any previous merge attempts. Re-merging is not
4169 performed for files already marked as resolved. Use ``--all/-a``
4169 performed for files already marked as resolved. Use ``--all/-a``
4170 to select all unresolved files. ``--tool`` can be used to specify
4170 to select all unresolved files. ``--tool`` can be used to specify
4171 the merge tool used for the given files. It overrides the HGMERGE
4171 the merge tool used for the given files. It overrides the HGMERGE
4172 environment variable and your configuration files. Previous file
4172 environment variable and your configuration files. Previous file
4173 contents are saved with a ``.orig`` suffix.
4173 contents are saved with a ``.orig`` suffix.
4174
4174
4175 - :hg:`resolve -m [FILE]`: mark a file as having been resolved
4175 - :hg:`resolve -m [FILE]`: mark a file as having been resolved
4176 (e.g. after having manually fixed-up the files). The default is
4176 (e.g. after having manually fixed-up the files). The default is
4177 to mark all unresolved files.
4177 to mark all unresolved files.
4178
4178
4179 - :hg:`resolve -u [FILE]...`: mark a file as unresolved. The
4179 - :hg:`resolve -u [FILE]...`: mark a file as unresolved. The
4180 default is to mark all resolved files.
4180 default is to mark all resolved files.
4181
4181
4182 - :hg:`resolve -l`: list files which had or still have conflicts.
4182 - :hg:`resolve -l`: list files which had or still have conflicts.
4183 In the printed list, ``U`` = unresolved and ``R`` = resolved.
4183 In the printed list, ``U`` = unresolved and ``R`` = resolved.
4184 You can use ``set:unresolved()`` or ``set:resolved()`` to filter
4184 You can use ``set:unresolved()`` or ``set:resolved()`` to filter
4185 the list. See :hg:`help filesets` for details.
4185 the list. See :hg:`help filesets` for details.
4186
4186
4187 .. note::
4187 .. note::
4188
4188
4189 Mercurial will not let you commit files with unresolved merge
4189 Mercurial will not let you commit files with unresolved merge
4190 conflicts. You must use :hg:`resolve -m ...` before you can
4190 conflicts. You must use :hg:`resolve -m ...` before you can
4191 commit after a conflicting merge.
4191 commit after a conflicting merge.
4192
4192
4193 Returns 0 on success, 1 if any files fail a resolve attempt.
4193 Returns 0 on success, 1 if any files fail a resolve attempt.
4194 """
4194 """
4195
4195
4196 opts = pycompat.byteskwargs(opts)
4196 opts = pycompat.byteskwargs(opts)
4197 flaglist = 'all mark unmark list no_status'.split()
4197 flaglist = 'all mark unmark list no_status'.split()
4198 all, mark, unmark, show, nostatus = \
4198 all, mark, unmark, show, nostatus = \
4199 [opts.get(o) for o in flaglist]
4199 [opts.get(o) for o in flaglist]
4200
4200
4201 if (show and (mark or unmark)) or (mark and unmark):
4201 if (show and (mark or unmark)) or (mark and unmark):
4202 raise error.Abort(_("too many options specified"))
4202 raise error.Abort(_("too many options specified"))
4203 if pats and all:
4203 if pats and all:
4204 raise error.Abort(_("can't specify --all and patterns"))
4204 raise error.Abort(_("can't specify --all and patterns"))
4205 if not (all or pats or show or mark or unmark):
4205 if not (all or pats or show or mark or unmark):
4206 raise error.Abort(_('no files or directories specified'),
4206 raise error.Abort(_('no files or directories specified'),
4207 hint=('use --all to re-merge all unresolved files'))
4207 hint=('use --all to re-merge all unresolved files'))
4208
4208
4209 if show:
4209 if show:
4210 ui.pager('resolve')
4210 ui.pager('resolve')
4211 fm = ui.formatter('resolve', opts)
4211 fm = ui.formatter('resolve', opts)
4212 ms = mergemod.mergestate.read(repo)
4212 ms = mergemod.mergestate.read(repo)
4213 m = scmutil.match(repo[None], pats, opts)
4213 m = scmutil.match(repo[None], pats, opts)
4214 for f in ms:
4214 for f in ms:
4215 if not m(f):
4215 if not m(f):
4216 continue
4216 continue
4217 l = 'resolve.' + {'u': 'unresolved', 'r': 'resolved',
4217 l = 'resolve.' + {'u': 'unresolved', 'r': 'resolved',
4218 'd': 'driverresolved'}[ms[f]]
4218 'd': 'driverresolved'}[ms[f]]
4219 fm.startitem()
4219 fm.startitem()
4220 fm.condwrite(not nostatus, 'status', '%s ', ms[f].upper(), label=l)
4220 fm.condwrite(not nostatus, 'status', '%s ', ms[f].upper(), label=l)
4221 fm.write('path', '%s\n', f, label=l)
4221 fm.write('path', '%s\n', f, label=l)
4222 fm.end()
4222 fm.end()
4223 return 0
4223 return 0

    with repo.wlock():
        ms = mergemod.mergestate.read(repo)

        if not (ms.active() or repo.dirstate.p2() != nullid):
            raise error.Abort(
                _('resolve command not applicable when not merging'))

        wctx = repo[None]

        if ms.mergedriver and ms.mdstate() == 'u':
            proceed = mergemod.driverpreprocess(repo, ms, wctx)
            ms.commit()
            # allow mark and unmark to go through
            if not mark and not unmark and not proceed:
                return 1

        m = scmutil.match(wctx, pats, opts)
        ret = 0
        didwork = False
        runconclude = False

        tocomplete = []
        for f in ms:
            if not m(f):
                continue

            didwork = True

            # don't let driver-resolved files be marked, and run the conclude
            # step if asked to resolve
            if ms[f] == "d":
                exact = m.exact(f)
                if mark:
                    if exact:
                        ui.warn(_('not marking %s as it is driver-resolved\n')
                                % f)
                elif unmark:
                    if exact:
                        ui.warn(_('not unmarking %s as it is driver-resolved\n')
                                % f)
                else:
                    runconclude = True
                continue

            if mark:
                ms.mark(f, "r")
            elif unmark:
                ms.mark(f, "u")
            else:
                # backup pre-resolve (merge uses .orig for its own purposes)
                a = repo.wjoin(f)
                try:
                    util.copyfile(a, a + ".resolve")
                except (IOError, OSError) as inst:
                    if inst.errno != errno.ENOENT:
                        raise

                try:
                    # preresolve file
                    ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                                 'resolve')
                    complete, r = ms.preresolve(f, wctx)
                    if not complete:
                        tocomplete.append(f)
                    elif r:
                        ret = 1
                finally:
                    ui.setconfig('ui', 'forcemerge', '', 'resolve')
                    ms.commit()

                # replace filemerge's .orig file with our resolve file, but only
                # for merges that are complete
                if complete:
                    try:
                        util.rename(a + ".resolve",
                                    scmutil.origpath(ui, repo, a))
                    except OSError as inst:
                        if inst.errno != errno.ENOENT:
                            raise

        for f in tocomplete:
            try:
                # resolve file
                ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                             'resolve')
                r = ms.resolve(f, wctx)
                if r:
                    ret = 1
            finally:
                ui.setconfig('ui', 'forcemerge', '', 'resolve')
                ms.commit()

            # replace filemerge's .orig file with our resolve file
            a = repo.wjoin(f)
            try:
                util.rename(a + ".resolve", scmutil.origpath(ui, repo, a))
            except OSError as inst:
                if inst.errno != errno.ENOENT:
                    raise

        ms.commit()
        ms.recordactions()

        if not didwork and pats:
            hint = None
            if not any([p for p in pats if p.find(':') >= 0]):
                pats = ['path:%s' % p for p in pats]
                m = scmutil.match(wctx, pats, opts)
                for f in ms:
                    if not m(f):
                        continue
                    flags = ''.join(['-%s ' % o[0] for o in flaglist
                                     if opts.get(o)])
                    hint = _("(try: hg resolve %s%s)\n") % (
                             flags,
                             ' '.join(pats))
                    break
            ui.warn(_("arguments do not match paths that need resolving\n"))
            if hint:
                ui.warn(hint)
        elif ms.mergedriver and ms.mdstate() != 's':
            # run conclude step when either a driver-resolved file is requested
            # or there are no driver-resolved files
            # we can't use 'ret' to determine whether any files are unresolved
            # because we might not have tried to resolve some
            if ((runconclude or not list(ms.driverresolved()))
                and not list(ms.unresolved())):
                proceed = mergemod.driverconclude(repo, ms, wctx)
                ms.commit()
                if not proceed:
                    return 1

    # Nudge users into finishing an unfinished operation
    unresolvedf = list(ms.unresolved())
    driverresolvedf = list(ms.driverresolved())
    if not unresolvedf and not driverresolvedf:
        ui.status(_('(no more unresolved files)\n'))
        cmdutil.checkafterresolved(repo)
    elif not unresolvedf:
        ui.status(_('(no more unresolved files -- '
                    'run "hg resolve --all" to conclude)\n'))

    return ret

@command('revert',
    [('a', 'all', None, _('revert all changes when no arguments given')),
    ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
    ('r', 'rev', '', _('revert to the specified revision'), _('REV')),
    ('C', 'no-backup', None, _('do not save backup copies of files')),
    ('i', 'interactive', None,
     _('interactively select the changes (EXPERIMENTAL)')),
    ] + walkopts + dryrunopts,
    _('[OPTION]... [-r REV] [NAME]...'))
def revert(ui, repo, *pats, **opts):
    """restore files to their checkout state

    .. note::

       To check out earlier revisions, you should use :hg:`update REV`.
       To cancel an uncommitted merge (and lose your changes),
       use :hg:`update --clean .`.

    With no revision specified, revert the specified files or directories
    to the contents they had in the parent of the working directory.
    This restores the contents of files to an unmodified
    state and unschedules adds, removes, copies, and renames. If the
    working directory has two parents, you must explicitly specify a
    revision.

    Using the -r/--rev or -d/--date options, revert the given files or
    directories to their states as of a specific revision. Because
    revert does not change the working directory parents, this will
    cause these files to appear modified. This can be helpful to "back
    out" some or all of an earlier change. See :hg:`backout` for a
    related method.

    Modified files are saved with a .orig suffix before reverting.
    To disable these backups, use --no-backup. It is possible to store
    the backup files in a custom directory relative to the root of the
    repository by setting the ``ui.origbackuppath`` configuration
    option.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    See :hg:`help backout` for a way to reverse the effect of an
    earlier changeset.

    Returns 0 on success.
    """

    if opts.get("date"):
        if opts.get("rev"):
            raise error.Abort(_("you can't specify a revision and a date"))
        opts["rev"] = cmdutil.finddate(ui, repo, opts["date"])

    parent, p2 = repo.dirstate.parents()
    if not opts.get('rev') and p2 != nullid:
        # revert after merge is a trap for new users (issue2915)
        raise error.Abort(_('uncommitted merge with no revision specified'),
                          hint=_("use 'hg update' or see 'hg help revert'"))

    ctx = scmutil.revsingle(repo, opts.get('rev'))

    if (not (pats or opts.get('include') or opts.get('exclude') or
             opts.get('all') or opts.get('interactive'))):
        msg = _("no files or directories specified")
        if p2 != nullid:
            hint = _("uncommitted merge, use --all to discard all changes,"
                     " or 'hg update -C .' to abort the merge")
            raise error.Abort(msg, hint=hint)
        dirty = any(repo.status())
        node = ctx.node()
        if node != parent:
            if dirty:
                hint = _("uncommitted changes, use --all to discard all"
                         " changes, or 'hg update %s' to update") % ctx.rev()
            else:
                hint = _("use --all to revert all files,"
                         " or 'hg update %s' to update") % ctx.rev()
        elif dirty:
            hint = _("uncommitted changes, use --all to discard all changes")
        else:
            hint = _("use --all to revert all files")
        raise error.Abort(msg, hint=hint)

    return cmdutil.revert(ui, repo, ctx, (parent, p2), *pats, **opts)

@command('rollback', dryrunopts +
         [('f', 'force', False, _('ignore safety measures'))])
def rollback(ui, repo, **opts):
    """roll back the last transaction (DANGEROUS) (DEPRECATED)

    Please use :hg:`commit --amend` instead of rollback to correct
    mistakes in the last commit.

    This command should be used with care. There is only one level of
    rollback, and there is no way to undo a rollback. It will also
    restore the dirstate at the time of the last transaction, losing
    any dirstate changes since that time. This command does not alter
    the working directory.

    Transactions are used to encapsulate the effects of all commands
    that create new changesets or propagate existing changesets into a
    repository.

    .. container:: verbose

      For example, the following commands are transactional, and their
      effects can be rolled back:

      - commit
      - import
      - pull
      - push (with this repository as the destination)
      - unbundle

      To avoid permanent data loss, rollback will refuse to rollback a
      commit transaction if it isn't checked out. Use --force to
      override this protection.

      The rollback command can be entirely disabled by setting the
      ``ui.rollback`` configuration setting to false. If you're here
      because you want to use rollback and it's disabled, you can
      re-enable the command by setting ``ui.rollback`` to true.

    This command is not intended for use on public repositories. Once
    changes are visible for pull by other users, rolling a transaction
    back locally is ineffective (someone else may already have pulled
    the changes). Furthermore, a race is possible with readers of the
    repository; for example an in-progress pull from the repository
    may fail if a rollback is performed.

    Returns 0 on success, 1 if no rollback data is available.
    """
    if not ui.configbool('ui', 'rollback', True):
        raise error.Abort(_('rollback is disabled because it is unsafe'),
                          hint=('see `hg help -v rollback` for information'))
    return repo.rollback(dryrun=opts.get(r'dry_run'),
                         force=opts.get(r'force'))

@command('root', [])
def root(ui, repo):
    """print the root (top) of the current working directory

    Print the root directory of the current repository.

    Returns 0 on success.
    """
    ui.write(repo.root + "\n")

4515 @command('^serve',
4515 @command('^serve',
4516 [('A', 'accesslog', '', _('name of access log file to write to'),
4516 [('A', 'accesslog', '', _('name of access log file to write to'),
4517 _('FILE')),
4517 _('FILE')),
4518 ('d', 'daemon', None, _('run server in background')),
4518 ('d', 'daemon', None, _('run server in background')),
4519 ('', 'daemon-postexec', [], _('used internally by daemon mode')),
4519 ('', 'daemon-postexec', [], _('used internally by daemon mode')),
4520 ('E', 'errorlog', '', _('name of error log file to write to'), _('FILE')),
4520 ('E', 'errorlog', '', _('name of error log file to write to'), _('FILE')),
4521 # use string type, then we can check if something was passed
4521 # use string type, then we can check if something was passed
4522 ('p', 'port', '', _('port to listen on (default: 8000)'), _('PORT')),
4522 ('p', 'port', '', _('port to listen on (default: 8000)'), _('PORT')),
4523 ('a', 'address', '', _('address to listen on (default: all interfaces)'),
4523 ('a', 'address', '', _('address to listen on (default: all interfaces)'),
4524 _('ADDR')),
4524 _('ADDR')),
4525 ('', 'prefix', '', _('prefix path to serve from (default: server root)'),
4525 ('', 'prefix', '', _('prefix path to serve from (default: server root)'),
4526 _('PREFIX')),
4526 _('PREFIX')),
4527 ('n', 'name', '',
4527 ('n', 'name', '',
4528 _('name to show in web pages (default: working directory)'), _('NAME')),
4528 _('name to show in web pages (default: working directory)'), _('NAME')),
4529 ('', 'web-conf', '',
4529 ('', 'web-conf', '',
4530 _("name of the hgweb config file (see 'hg help hgweb')"), _('FILE')),
4530 _("name of the hgweb config file (see 'hg help hgweb')"), _('FILE')),
4531 ('', 'webdir-conf', '', _('name of the hgweb config file (DEPRECATED)'),
4531 ('', 'webdir-conf', '', _('name of the hgweb config file (DEPRECATED)'),
4532 _('FILE')),
4532 _('FILE')),
4533 ('', 'pid-file', '', _('name of file to write process ID to'), _('FILE')),
4533 ('', 'pid-file', '', _('name of file to write process ID to'), _('FILE')),
4534 ('', 'stdio', None, _('for remote clients (ADVANCED)')),
4534 ('', 'stdio', None, _('for remote clients (ADVANCED)')),
4535 ('', 'cmdserver', '', _('for remote clients (ADVANCED)'), _('MODE')),
4535 ('', 'cmdserver', '', _('for remote clients (ADVANCED)'), _('MODE')),
4536 ('t', 'templates', '', _('web templates to use'), _('TEMPLATE')),
4536 ('t', 'templates', '', _('web templates to use'), _('TEMPLATE')),
4537 ('', 'style', '', _('template style to use'), _('STYLE')),
4537 ('', 'style', '', _('template style to use'), _('STYLE')),
4538 ('6', 'ipv6', None, _('use IPv6 in addition to IPv4')),
4538 ('6', 'ipv6', None, _('use IPv6 in addition to IPv4')),
4539 ('', 'certificate', '', _('SSL certificate file'), _('FILE'))]
4539 ('', 'certificate', '', _('SSL certificate file'), _('FILE'))]
4540 + subrepoopts,
4540 + subrepoopts,
4541 _('[OPTION]...'),
4541 _('[OPTION]...'),
4542 optionalrepo=True)
4542 optionalrepo=True)
4543 def serve(ui, repo, **opts):
4543 def serve(ui, repo, **opts):
4544 """start stand-alone webserver
4544 """start stand-alone webserver
4545
4545
4546 Start a local HTTP repository browser and pull server. You can use
4546 Start a local HTTP repository browser and pull server. You can use
4547 this for ad-hoc sharing and browsing of repositories. It is
4547 this for ad-hoc sharing and browsing of repositories. It is
4548 recommended to use a real web server to serve a repository for
4548 recommended to use a real web server to serve a repository for
4549 longer periods of time.
4549 longer periods of time.
4550
4550
4551 Please note that the server does not implement access control.
4551 Please note that the server does not implement access control.
4552 This means that, by default, anybody can read from the server and
4552 This means that, by default, anybody can read from the server and
4553 nobody can write to it by default. Set the ``web.allow_push``
4553 nobody can write to it by default. Set the ``web.allow_push``
4554 option to ``*`` to allow everybody to push to the server. You
4554 option to ``*`` to allow everybody to push to the server. You
4555 should use a real web server if you need to authenticate users.
4555 should use a real web server if you need to authenticate users.
4556
4556
4557 By default, the server logs accesses to stdout and errors to
4557 By default, the server logs accesses to stdout and errors to
4558 stderr. Use the -A/--accesslog and -E/--errorlog options to log to
4558 stderr. Use the -A/--accesslog and -E/--errorlog options to log to
4559 files.
4559 files.
4560
4560
4561 To have the server choose a free port number to listen on, specify
4561 To have the server choose a free port number to listen on, specify
4562 a port number of 0; in this case, the server will print the port
4562 a port number of 0; in this case, the server will print the port
4563 number it uses.
4563 number it uses.
4564
4564
4565 Returns 0 on success.
4565 Returns 0 on success.
4566 """
4566 """

    opts = pycompat.byteskwargs(opts)
    if opts["stdio"] and opts["cmdserver"]:
        raise error.Abort(_("cannot use --stdio with --cmdserver"))

    if opts["stdio"]:
        if repo is None:
            raise error.RepoError(_("there is no Mercurial repository here"
                                    " (.hg not found)"))
        s = sshserver.sshserver(ui, repo)
        s.serve_forever()

    service = server.createservice(ui, repo, opts)
    return server.runservice(opts, initfn=service.init, runfn=service.run)

@command('^status|st',
    [('A', 'all', None, _('show status of all files')),
    ('m', 'modified', None, _('show only modified files')),
    ('a', 'added', None, _('show only added files')),
    ('r', 'removed', None, _('show only removed files')),
    ('d', 'deleted', None, _('show only deleted (but tracked) files')),
    ('c', 'clean', None, _('show only files without changes')),
    ('u', 'unknown', None, _('show only unknown (not tracked) files')),
    ('i', 'ignored', None, _('show only ignored files')),
    ('n', 'no-status', None, _('hide status prefix')),
    ('C', 'copies', None, _('show source of copied files')),
    ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
    ('', 'rev', [], _('show difference from revision'), _('REV')),
    ('', 'change', '', _('list the changed files of a revision'), _('REV')),
    ] + walkopts + subrepoopts + formatteropts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def status(ui, repo, *pats, **opts):
    """show changed files in the working directory

    Show status of files in the repository. If names are given, only
    files that match are shown. Files that are clean or ignored or
    the source of a copy/move operation are not listed unless
    -c/--clean, -i/--ignored, -C/--copies or -A/--all are given.
    Unless options described with "show only ..." are given, the
    options -mardu are used.

    Option -q/--quiet hides untracked (unknown and ignored) files
    unless explicitly requested with -u/--unknown or -i/--ignored.

    .. note::

       :hg:`status` may appear to disagree with diff if permissions have
       changed or a merge has occurred. The standard diff format does
       not report permission changes and diff only reports changes
       relative to one merge parent.

    If one revision is given, it is used as the base revision.
    If two revisions are given, the differences between them are
    shown. The --change option can also be used as a shortcut to list
    the changed files of a revision from its first parent.

    The codes used to show the status of files are::

      M = modified
      A = added
      R = removed
      C = clean
      ! = missing (deleted by non-hg command, but still tracked)
      ? = not tracked
      I = ignored
        = origin of the previous file (with --copies)

    .. container:: verbose

      Examples:

      - show changes in the working directory relative to a
        changeset::

          hg status --rev 9353

      - show changes in the working directory relative to the
        current directory (see :hg:`help patterns` for more information)::

          hg status re:

      - show all changes including copies in an existing changeset::

          hg status --copies --change 9353

      - get a NUL separated list of added files, suitable for xargs::

          hg status -an0

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    revs = opts.get('rev')
    change = opts.get('change')

    if revs and change:
        msg = _('cannot specify --rev and --change at the same time')
        raise error.Abort(msg)
    elif change:
        node2 = scmutil.revsingle(repo, change, None).node()
        node1 = repo[node2].p1().node()
    else:
        node1, node2 = scmutil.revpair(repo, revs)

    if pats or ui.configbool('commands', 'status.relative'):
        cwd = repo.getcwd()
    else:
        cwd = ''

    if opts.get('print0'):
        end = '\0'
    else:
        end = '\n'
    copy = {}
    states = 'modified added removed deleted unknown ignored clean'.split()
    show = [k for k in states if opts.get(k)]
    if opts.get('all'):
        show += ui.quiet and (states[:4] + ['clean']) or states
    if not show:
        if ui.quiet:
            show = states[:4]
        else:
            show = states[:5]

    m = scmutil.match(repo[node2], pats, opts)
    stat = repo.status(node1, node2, m,
                       'ignored' in show, 'clean' in show, 'unknown' in show,
                       opts.get('subrepos'))
    changestates = zip(states, pycompat.iterbytestr('MAR!?IC'), stat)

    if (opts.get('all') or opts.get('copies')
        or ui.configbool('ui', 'statuscopies')) and not opts.get('no_status'):
        copy = copies.pathcopies(repo[node1], repo[node2], m)

    ui.pager('status')
    fm = ui.formatter('status', opts)
    fmt = '%s' + end
    showchar = not opts.get('no_status')

    for state, char, files in changestates:
        if state in show:
            label = 'status.' + state
            for f in files:
                fm.startitem()
                fm.condwrite(showchar, 'status', '%s ', char, label=label)
                fm.write('path', fmt, repo.pathto(f, cwd), label=label)
                if f in copy:
                    fm.write("copy", ' %s' + end, repo.pathto(copy[f], cwd),
                             label='status.copied')
    fm.end()

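# As a standalone illustration of the pairing trick used in status()
# above -- zip() lining up the seven state names with the one-letter
# codes 'MAR!?IC' -- the following sketch uses ordinary strings instead
# of Mercurial's pycompat byte-string helpers. It is illustrative only,
# not part of this module.

```python
# Illustrative sketch, not part of mercurial/commands.py: shows how
# zip() pairs the seven state names with the codes 'MAR!?IC',
# mirroring the changestates assignment in status() above.
states = 'modified added removed deleted unknown ignored clean'.split()
codes = 'MAR!?IC'

# Each (name, code) pair drives one column of `hg status` output.
changestates = list(zip(states, codes))

for state, char in changestates:
    print('%s = %s' % (char, state))
```

The order of `states` matters: the nth name is paired with the nth code, exactly as in the real command.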
@command('^summary|sum',
    [('', 'remote', None, _('check for push and pull'))], '[--remote]')
def summary(ui, repo, **opts):
    """summarize working directory state

    This generates a brief summary of the working directory state,
    including parents, branch, commit status, phase and available updates.

    With the --remote option, this will check the default paths for
    incoming and outgoing changes. This can be time-consuming.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    ui.pager('summary')
    ctx = repo[None]
    parents = ctx.parents()
    pnode = parents[0].node()
    marks = []

    ms = None
    try:
        ms = mergemod.mergestate.read(repo)
    except error.UnsupportedMergeRecords as e:
        s = ' '.join(e.recordtypes)
        ui.warn(
            _('warning: merge state has unsupported record types: %s\n') % s)
        unresolved = 0
    else:
        unresolved = [f for f in ms if ms[f] == 'u']

    for p in parents:
        # label with log.changeset (instead of log.parent) since this
        # shows a working directory parent *changeset*:
        # i18n: column positioning for "hg summary"
        ui.write(_('parent: %d:%s ') % (p.rev(), p),
                 label=cmdutil._changesetlabels(p))
        ui.write(' '.join(p.tags()), label='log.tag')
        if p.bookmarks():
            marks.extend(p.bookmarks())
        if p.rev() == -1:
            if not len(repo):
                ui.write(_(' (empty repository)'))
            else:
                ui.write(_(' (no revision checked out)'))
        if p.obsolete():
            ui.write(_(' (obsolete)'))
        if p.troubled():
            ui.write(' ('
                     + ', '.join(ui.label(trouble, 'trouble.%s' % trouble)
                                 for trouble in p.troubles())
                     + ')')
        ui.write('\n')
        if p.description():
            ui.status(' ' + p.description().splitlines()[0].strip() + '\n',
                      label='log.summary')

    branch = ctx.branch()
    bheads = repo.branchheads(branch)
    # i18n: column positioning for "hg summary"
    m = _('branch: %s\n') % branch
    if branch != 'default':
        ui.write(m, label='log.branch')
    else:
        ui.status(m, label='log.branch')

    if marks:
        active = repo._activebookmark
        # i18n: column positioning for "hg summary"
        ui.write(_('bookmarks:'), label='log.bookmark')
        if active is not None:
            if active in marks:
                ui.write(' *' + active, label=bookmarks.activebookmarklabel)
                marks.remove(active)
            else:
                ui.write(' [%s]' % active, label=bookmarks.activebookmarklabel)
        for m in marks:
            ui.write(' ' + m, label='log.bookmark')
        ui.write('\n', label='log.bookmark')

    status = repo.status(unknown=True)

    c = repo.dirstate.copies()
    copied, renamed = [], []
    for d, s in c.iteritems():
        if s in status.removed:
            status.removed.remove(s)
            renamed.append(d)
        else:
            copied.append(d)
        if d in status.added:
            status.added.remove(d)

    subs = [s for s in ctx.substate if ctx.sub(s).dirty()]

    labels = [(ui.label(_('%d modified'), 'status.modified'), status.modified),
              (ui.label(_('%d added'), 'status.added'), status.added),
              (ui.label(_('%d removed'), 'status.removed'), status.removed),
              (ui.label(_('%d renamed'), 'status.copied'), renamed),
              (ui.label(_('%d copied'), 'status.copied'), copied),
              (ui.label(_('%d deleted'), 'status.deleted'), status.deleted),
              (ui.label(_('%d unknown'), 'status.unknown'), status.unknown),
              (ui.label(_('%d unresolved'), 'resolve.unresolved'), unresolved),
              (ui.label(_('%d subrepos'), 'status.modified'), subs)]
    t = []
    for l, s in labels:
        if s:
            t.append(l % len(s))

    t = ', '.join(t)
    cleanworkdir = False

    if repo.vfs.exists('graftstate'):
        t += _(' (graft in progress)')
    if repo.vfs.exists('updatestate'):
        t += _(' (interrupted update)')
    elif len(parents) > 1:
        t += _(' (merge)')
    elif branch != parents[0].branch():
        t += _(' (new branch)')
    elif (parents[0].closesbranch() and
          pnode in repo.branchheads(branch, closed=True)):
        t += _(' (head closed)')
    elif not (status.modified or status.added or status.removed or renamed or
              copied or subs):
        t += _(' (clean)')
        cleanworkdir = True
    elif pnode not in bheads:
        t += _(' (new branch head)')

    if parents:
        pendingphase = max(p.phase() for p in parents)
    else:
        pendingphase = phases.public

    if pendingphase > phases.newcommitphase(ui):
        t += ' (%s)' % phases.phasenames[pendingphase]

    if cleanworkdir:
        # i18n: column positioning for "hg summary"
        ui.status(_('commit: %s\n') % t.strip())
    else:
        # i18n: column positioning for "hg summary"
        ui.write(_('commit: %s\n') % t.strip())

    # all ancestors of branch heads - all ancestors of parent = new csets
    new = len(repo.changelog.findmissing([pctx.node() for pctx in parents],
                                         bheads))

    if new == 0:
        # i18n: column positioning for "hg summary"
        ui.status(_('update: (current)\n'))
    elif pnode not in bheads:
        # i18n: column positioning for "hg summary"
        ui.write(_('update: %d new changesets (update)\n') % new)
    else:
        # i18n: column positioning for "hg summary"
        ui.write(_('update: %d new changesets, %d branch heads (merge)\n') %
                 (new, len(bheads)))

    t = []
    draft = len(repo.revs('draft()'))
    if draft:
        t.append(_('%d draft') % draft)
    secret = len(repo.revs('secret()'))
    if secret:
        t.append(_('%d secret') % secret)

    if draft or secret:
        ui.status(_('phases: %s\n') % ', '.join(t))

    if obsolete.isenabled(repo, obsolete.createmarkersopt):
        for trouble in ("unstable", "divergent", "bumped"):
            numtrouble = len(repo.revs(trouble + "()"))
            # We write all the possibilities to ease translation
            troublemsg = {
                "unstable": _("unstable: %d changesets"),
                "divergent": _("divergent: %d changesets"),
                "bumped": _("bumped: %d changesets"),
            }
            if numtrouble > 0:
                ui.status(troublemsg[trouble] % numtrouble + "\n")

    cmdutil.summaryhooks(ui, repo)

    if opts.get('remote'):
        needsincoming, needsoutgoing = True, True
    else:
        needsincoming, needsoutgoing = False, False
        for i, o in cmdutil.summaryremotehooks(ui, repo, opts, None):
            if i:
                needsincoming = True
            if o:
                needsoutgoing = True
        if not needsincoming and not needsoutgoing:
            return

    def getincoming():
        source, branches = hg.parseurl(ui.expandpath('default'))
        sbranch = branches[0]
        try:
            other = hg.peer(repo, {}, source)
        except error.RepoError:
            if opts.get('remote'):
                raise
            return source, sbranch, None, None, None
        revs, checkout = hg.addbranchrevs(repo, other, branches, None)
        if revs:
            revs = [other.lookup(rev) for rev in revs]
        ui.debug('comparing with %s\n' % util.hidepassword(source))
        repo.ui.pushbuffer()
        commoninc = discovery.findcommonincoming(repo, other, heads=revs)
        repo.ui.popbuffer()
        return source, sbranch, other, commoninc, commoninc[1]

    if needsincoming:
        source, sbranch, sother, commoninc, incoming = getincoming()
    else:
        source = sbranch = sother = commoninc = incoming = None

    def getoutgoing():
        dest, branches = hg.parseurl(ui.expandpath('default-push', 'default'))
        dbranch = branches[0]
        revs, checkout = hg.addbranchrevs(repo, repo, branches, None)
        if source != dest:
            try:
                dother = hg.peer(repo, {}, dest)
            except error.RepoError:
                if opts.get('remote'):
                    raise
                return dest, dbranch, None, None
            ui.debug('comparing with %s\n' % util.hidepassword(dest))
        elif sother is None:
            # there is no explicit destination peer, but source one is invalid
            return dest, dbranch, None, None
        else:
            dother = sother
        if (source != dest or (sbranch is not None and sbranch != dbranch)):
            common = None
        else:
            common = commoninc
        if revs:
            revs = [repo.lookup(rev) for rev in revs]
        repo.ui.pushbuffer()
        outgoing = discovery.findcommonoutgoing(repo, dother, onlyheads=revs,
                                                commoninc=common)
        repo.ui.popbuffer()
        return dest, dbranch, dother, outgoing

    if needsoutgoing:
        dest, dbranch, dother, outgoing = getoutgoing()
    else:
        dest = dbranch = dother = outgoing = None

    if opts.get('remote'):
        t = []
        if incoming:
            t.append(_('1 or more incoming'))
        o = outgoing.missing
        if o:
            t.append(_('%d outgoing') % len(o))
        other = dother or sother
        if 'bookmarks' in other.listkeys('namespaces'):
            counts = bookmarks.summary(repo, other)
            if counts[0] > 0:
                t.append(_('%d incoming bookmarks') % counts[0])
            if counts[1] > 0:
                t.append(_('%d outgoing bookmarks') % counts[1])

        if t:
            # i18n: column positioning for "hg summary"
            ui.write(_('remote: %s\n') % (', '.join(t)))
        else:
            # i18n: column positioning for "hg summary"
            ui.status(_('remote: (synced)\n'))

    cmdutil.summaryremotehooks(ui, repo, opts,
                               ((source, sbranch, sother, commoninc),
                                (dest, dbranch, dother, outgoing)))
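# The tag command that follows rejects duplicate tag names by comparing
# the length of the name list against the length of its set of distinct
# elements ('len(names) != len(set(names))'). That check can be sketched
# in isolation; the helper below is illustrative only and not part of
# this module.

```python
# Illustrative helper, not part of mercurial/commands.py: the tag
# command rejects duplicate tag names by comparing the list's length
# with the number of distinct elements in it.
def has_duplicates(names):
    """Return True if any name occurs more than once in `names`."""
    return len(names) != len(set(names))

print(has_duplicates(['v1.0', 'v1.1']))  # distinct names -> False
print(has_duplicates(['v1.0', 'v1.0']))  # repeated name -> True
```

Because set() discards repeats, a shorter set than list means at least one name occurred twice.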

@command('tag',
    [('f', 'force', None, _('force tag')),
    ('l', 'local', None, _('make the tag local')),
    ('r', 'rev', '', _('revision to tag'), _('REV')),
    ('', 'remove', None, _('remove a tag')),
    # -l/--local is already there, commitopts cannot be used
    ('e', 'edit', None, _('invoke editor on commit messages')),
    ('m', 'message', '', _('use text as commit message'), _('TEXT')),
    ] + commitopts2,
    _('[-f] [-l] [-m TEXT] [-d DATE] [-u USER] [-r REV] NAME...'))
def tag(ui, repo, name1, *names, **opts):
    """add one or more tags for the current or given revision

    Name a particular revision using <name>.

    Tags are used to name particular revisions of the repository and are
    very useful to compare different revisions, to go back to significant
    earlier versions or to mark branch points as releases, etc. Changing
    an existing tag is normally disallowed; use -f/--force to override.

    If no revision is given, the parent of the working directory is
    used.

    To facilitate version control, distribution, and merging of tags,
    they are stored as a file named ".hgtags" which is managed similarly
    to other project files and can be hand-edited if necessary. This
    also means that tagging creates a new commit. The file
    ".hg/localtags" is used for local tags (not shared among
    repositories).

    Tag commits are usually made at the head of a branch. If the parent
    of the working directory is not a branch head, :hg:`tag` aborts; use
    -f/--force to force the tag commit to be based on a non-head
    changeset.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    Since tag names have priority over branch names during revision
    lookup, using an existing branch name as a tag name is discouraged.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    wlock = lock = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        rev_ = "."
        names = [t.strip() for t in (name1,) + names]
        if len(names) != len(set(names)):
            raise error.Abort(_('tag names must be unique'))
        for n in names:
            scmutil.checknewlabel(repo, n, 'tag')
            if not n:
                raise error.Abort(_('tag names cannot consist entirely of '
                                    'whitespace'))
        if opts.get('rev') and opts.get('remove'):
            raise error.Abort(_("--rev and --remove are incompatible"))
        if opts.get('rev'):
            rev_ = opts['rev']
        message = opts.get('message')
5061 message = opts.get('message')
5062 if opts.get('remove'):
5062 if opts.get('remove'):
5063 if opts.get('local'):
5063 if opts.get('local'):
5064 expectedtype = 'local'
5064 expectedtype = 'local'
5065 else:
5065 else:
5066 expectedtype = 'global'
5066 expectedtype = 'global'
5067
5067
5068 for n in names:
5068 for n in names:
5069 if not repo.tagtype(n):
5069 if not repo.tagtype(n):
5070 raise error.Abort(_("tag '%s' does not exist") % n)
5070 raise error.Abort(_("tag '%s' does not exist") % n)
5071 if repo.tagtype(n) != expectedtype:
5071 if repo.tagtype(n) != expectedtype:
5072 if expectedtype == 'global':
5072 if expectedtype == 'global':
5073 raise error.Abort(_("tag '%s' is not a global tag") % n)
5073 raise error.Abort(_("tag '%s' is not a global tag") % n)
5074 else:
5074 else:
5075 raise error.Abort(_("tag '%s' is not a local tag") % n)
5075 raise error.Abort(_("tag '%s' is not a local tag") % n)
5076 rev_ = 'null'
5076 rev_ = 'null'
5077 if not message:
5077 if not message:
5078 # we don't translate commit messages
5078 # we don't translate commit messages
5079 message = 'Removed tag %s' % ', '.join(names)
5079 message = 'Removed tag %s' % ', '.join(names)
5080 elif not opts.get('force'):
5080 elif not opts.get('force'):
5081 for n in names:
5081 for n in names:
5082 if n in repo.tags():
5082 if n in repo.tags():
5083 raise error.Abort(_("tag '%s' already exists "
5083 raise error.Abort(_("tag '%s' already exists "
5084 "(use -f to force)") % n)
5084 "(use -f to force)") % n)
5085 if not opts.get('local'):
5085 if not opts.get('local'):
5086 p1, p2 = repo.dirstate.parents()
5086 p1, p2 = repo.dirstate.parents()
5087 if p2 != nullid:
5087 if p2 != nullid:
5088 raise error.Abort(_('uncommitted merge'))
5088 raise error.Abort(_('uncommitted merge'))
5089 bheads = repo.branchheads()
5089 bheads = repo.branchheads()
5090 if not opts.get('force') and bheads and p1 not in bheads:
5090 if not opts.get('force') and bheads and p1 not in bheads:
5091 raise error.Abort(_('working directory is not at a branch head '
5091 raise error.Abort(_('working directory is not at a branch head '
5092 '(use -f to force)'))
5092 '(use -f to force)'))
5093 r = scmutil.revsingle(repo, rev_).node()
5093 r = scmutil.revsingle(repo, rev_).node()
5094
5094
5095 if not message:
5095 if not message:
5096 # we don't translate commit messages
5096 # we don't translate commit messages
5097 message = ('Added tag %s for changeset %s' %
5097 message = ('Added tag %s for changeset %s' %
5098 (', '.join(names), short(r)))
5098 (', '.join(names), short(r)))
5099
5099
5100 date = opts.get('date')
5100 date = opts.get('date')
5101 if date:
5101 if date:
5102 date = util.parsedate(date)
5102 date = util.parsedate(date)
5103
5103
5104 if opts.get('remove'):
5104 if opts.get('remove'):
5105 editform = 'tag.remove'
5105 editform = 'tag.remove'
5106 else:
5106 else:
5107 editform = 'tag.add'
5107 editform = 'tag.add'
5108 editor = cmdutil.getcommiteditor(editform=editform,
5108 editor = cmdutil.getcommiteditor(editform=editform,
5109 **pycompat.strkwargs(opts))
5109 **pycompat.strkwargs(opts))
5110
5110
5111 # don't allow tagging the null rev
5111 # don't allow tagging the null rev
5112 if (not opts.get('remove') and
5112 if (not opts.get('remove') and
5113 scmutil.revsingle(repo, rev_).rev() == nullrev):
5113 scmutil.revsingle(repo, rev_).rev() == nullrev):
5114 raise error.Abort(_("cannot tag null revision"))
5114 raise error.Abort(_("cannot tag null revision"))
5115
5115
5116 tagsmod.tag(repo, names, r, message, opts.get('local'),
5116 tagsmod.tag(repo, names, r, message, opts.get('local'),
5117 opts.get('user'), date, editor=editor)
5117 opts.get('user'), date, editor=editor)
5118 finally:
5118 finally:
5119 release(lock, wlock)
5119 release(lock, wlock)
5120
5120
5121 @command('tags', formatteropts, '')
5121 @command('tags', formatteropts, '')
5122 def tags(ui, repo, **opts):
5122 def tags(ui, repo, **opts):
5123 """list repository tags
5123 """list repository tags
5124
5124
5125 This lists both regular and local tags. When the -v/--verbose
5125 This lists both regular and local tags. When the -v/--verbose
5126 switch is used, a third column "local" is printed for local tags.
5126 switch is used, a third column "local" is printed for local tags.
5127 When the -q/--quiet switch is used, only the tag name is printed.
5127 When the -q/--quiet switch is used, only the tag name is printed.
5128
5128
5129 Returns 0 on success.
5129 Returns 0 on success.
5130 """
5130 """
5131
5131
5132 opts = pycompat.byteskwargs(opts)
5132 opts = pycompat.byteskwargs(opts)
5133 ui.pager('tags')
5133 ui.pager('tags')
5134 fm = ui.formatter('tags', opts)
5134 fm = ui.formatter('tags', opts)
5135 hexfunc = fm.hexfunc
5135 hexfunc = fm.hexfunc
5136 tagtype = ""
5136 tagtype = ""
5137
5137
5138 for t, n in reversed(repo.tagslist()):
5138 for t, n in reversed(repo.tagslist()):
5139 hn = hexfunc(n)
5139 hn = hexfunc(n)
5140 label = 'tags.normal'
5140 label = 'tags.normal'
5141 tagtype = ''
5141 tagtype = ''
5142 if repo.tagtype(t) == 'local':
5142 if repo.tagtype(t) == 'local':
5143 label = 'tags.local'
5143 label = 'tags.local'
5144 tagtype = 'local'
5144 tagtype = 'local'
5145
5145
5146 fm.startitem()
5146 fm.startitem()
5147 fm.write('tag', '%s', t, label=label)
5147 fm.write('tag', '%s', t, label=label)
5148 fmt = " " * (30 - encoding.colwidth(t)) + ' %5d:%s'
5148 fmt = " " * (30 - encoding.colwidth(t)) + ' %5d:%s'
5149 fm.condwrite(not ui.quiet, 'rev node', fmt,
5149 fm.condwrite(not ui.quiet, 'rev node', fmt,
5150 repo.changelog.rev(n), hn, label=label)
5150 repo.changelog.rev(n), hn, label=label)
5151 fm.condwrite(ui.verbose and tagtype, 'type', ' %s',
5151 fm.condwrite(ui.verbose and tagtype, 'type', ' %s',
5152 tagtype, label=label)
5152 tagtype, label=label)
5153 fm.plain('\n')
5153 fm.plain('\n')
5154 fm.end()
5154 fm.end()
5155
5155
5156 @command('tip',
5156 @command('tip',
5157 [('p', 'patch', None, _('show patch')),
5157 [('p', 'patch', None, _('show patch')),
5158 ('g', 'git', None, _('use git extended diff format')),
5158 ('g', 'git', None, _('use git extended diff format')),
5159 ] + templateopts,
5159 ] + templateopts,
5160 _('[-p] [-g]'))
5160 _('[-p] [-g]'))
5161 def tip(ui, repo, **opts):
5161 def tip(ui, repo, **opts):
5162 """show the tip revision (DEPRECATED)
5162 """show the tip revision (DEPRECATED)
5163
5163
5164 The tip revision (usually just called the tip) is the changeset
5164 The tip revision (usually just called the tip) is the changeset
5165 most recently added to the repository (and therefore the most
5165 most recently added to the repository (and therefore the most
5166 recently changed head).
5166 recently changed head).
5167
5167
5168 If you have just made a commit, that commit will be the tip. If
5168 If you have just made a commit, that commit will be the tip. If
5169 you have just pulled changes from another repository, the tip of
5169 you have just pulled changes from another repository, the tip of
5170 that repository becomes the current tip. The "tip" tag is special
5170 that repository becomes the current tip. The "tip" tag is special
5171 and cannot be renamed or assigned to a different changeset.
5171 and cannot be renamed or assigned to a different changeset.
5172
5172
5173 This command is deprecated, please use :hg:`heads` instead.
5173 This command is deprecated, please use :hg:`heads` instead.
5174
5174
5175 Returns 0 on success.
5175 Returns 0 on success.
5176 """
5176 """
5177 opts = pycompat.byteskwargs(opts)
5177 opts = pycompat.byteskwargs(opts)
5178 displayer = cmdutil.show_changeset(ui, repo, opts)
5178 displayer = cmdutil.show_changeset(ui, repo, opts)
5179 displayer.show(repo['tip'])
5179 displayer.show(repo['tip'])
5180 displayer.close()
5180 displayer.close()
5181
5181
5182 @command('unbundle',
5182 @command('unbundle',
5183 [('u', 'update', None,
5183 [('u', 'update', None,
5184 _('update to new branch head if changesets were unbundled'))],
5184 _('update to new branch head if changesets were unbundled'))],
5185 _('[-u] FILE...'))
5185 _('[-u] FILE...'))
5186 def unbundle(ui, repo, fname1, *fnames, **opts):
5186 def unbundle(ui, repo, fname1, *fnames, **opts):
5187 """apply one or more bundle files
5187 """apply one or more bundle files
5188
5188
5189 Apply one or more bundle files generated by :hg:`bundle`.
5189 Apply one or more bundle files generated by :hg:`bundle`.
5190
5190
5191 Returns 0 on success, 1 if an update has unresolved files.
5191 Returns 0 on success, 1 if an update has unresolved files.
5192 """
5192 """
5193 fnames = (fname1,) + fnames
5193 fnames = (fname1,) + fnames
5194
5194
5195 with repo.lock():
5195 with repo.lock():
5196 for fname in fnames:
5196 for fname in fnames:
5197 f = hg.openpath(ui, fname)
5197 f = hg.openpath(ui, fname)
5198 gen = exchange.readbundle(ui, f, fname)
5198 gen = exchange.readbundle(ui, f, fname)
5199 if isinstance(gen, streamclone.streamcloneapplier):
5199 if isinstance(gen, streamclone.streamcloneapplier):
5200 raise error.Abort(
5200 raise error.Abort(
5201 _('packed bundles cannot be applied with '
5201 _('packed bundles cannot be applied with '
5202 '"hg unbundle"'),
5202 '"hg unbundle"'),
5203 hint=_('use "hg debugapplystreamclonebundle"'))
5203 hint=_('use "hg debugapplystreamclonebundle"'))
5204 url = 'bundle:' + fname
5204 url = 'bundle:' + fname
5205 if isinstance(gen, bundle2.unbundle20):
5205 if isinstance(gen, bundle2.unbundle20):
5206 with repo.transaction('unbundle') as tr:
5206 with repo.transaction('unbundle') as tr:
5207 try:
5207 try:
5208 op = bundle2.applybundle(repo, gen, tr,
5208 op = bundle2.applybundle(repo, gen, tr,
5209 source='unbundle',
5209 source='unbundle',
5210 url=url)
5210 url=url)
5211 except error.BundleUnknownFeatureError as exc:
5211 except error.BundleUnknownFeatureError as exc:
5212 raise error.Abort(
5212 raise error.Abort(
5213 _('%s: unknown bundle feature, %s') % (fname, exc),
5213 _('%s: unknown bundle feature, %s') % (fname, exc),
5214 hint=_("see https://mercurial-scm.org/"
5214 hint=_("see https://mercurial-scm.org/"
5215 "wiki/BundleFeature for more "
5215 "wiki/BundleFeature for more "
5216 "information"))
5216 "information"))
5217 changes = [r.get('return', 0)
5217 changes = [r.get('return', 0)
5218 for r in op.records['changegroup']]
5218 for r in op.records['changegroup']]
-                    modheads = changegroup.combineresults(changes)
+                    modheads = bundle2.combinechangegroupresults(changes)
            else:
                txnname = 'unbundle\n%s' % util.hidepassword(url)
                with repo.transaction(txnname) as tr:
                    modheads, addednodes = gen.apply(repo, tr, 'unbundle', url)

    return postincoming(ui, repo, modheads, opts.get(r'update'), None, None)

@command('^update|up|checkout|co',
    [('C', 'clean', None, _('discard uncommitted changes (no backup)')),
    ('c', 'check', None, _('require clean working directory')),
    ('m', 'merge', None, _('merge uncommitted changes')),
    ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
    ('r', 'rev', '', _('revision'), _('REV'))
    ] + mergetoolopts,
    _('[-C|-c|-m] [-d DATE] [[-r] REV]'))
def update(ui, repo, node=None, rev=None, clean=False, date=None, check=False,
           merge=None, tool=None):
    """update working directory (or switch revisions)

    Update the repository's working directory to the specified
    changeset. If no changeset is specified, update to the tip of the
    current named branch and move the active bookmark (see :hg:`help
    bookmarks`).

    Update sets the working directory's parent revision to the specified
    changeset (see :hg:`help parents`).

    If the changeset is not a descendant or ancestor of the working
    directory's parent and there are uncommitted changes, the update is
    aborted. With the -c/--check option, the working directory is checked
    for uncommitted changes; if none are found, the working directory is
    updated to the specified changeset.

    .. container:: verbose

      The -C/--clean, -c/--check, and -m/--merge options control what
      happens if the working directory contains uncommitted changes.
      At most one of them can be specified.

      1. If no option is specified, and if
         the requested changeset is an ancestor or descendant of
         the working directory's parent, the uncommitted changes
         are merged into the requested changeset and the merged
         result is left uncommitted. If the requested changeset is
         not an ancestor or descendant (that is, it is on another
         branch), the update is aborted and the uncommitted changes
         are preserved.

      2. With the -m/--merge option, the update is allowed even if the
         requested changeset is not an ancestor or descendant of
         the working directory's parent.

      3. With the -c/--check option, the update is aborted and the
         uncommitted changes are preserved.

      4. With the -C/--clean option, uncommitted changes are discarded and
         the working directory is updated to the requested changeset.

    To cancel an uncommitted merge (and lose your changes), use
    :hg:`update --clean .`.

    Use null as the changeset to remove the working directory (like
    :hg:`clone -U`).

    If you want to revert just one file to an older revision, use
    :hg:`revert [-r REV] NAME`.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    Returns 0 on success, 1 if there are unresolved files.
    """
    if rev and node:
        raise error.Abort(_("please specify just one revision"))

    if ui.configbool('commands', 'update.requiredest'):
        if not node and not rev and not date:
            raise error.Abort(_('you must specify a destination'),
                              hint=_('for example: hg update ".::"'))

    if rev is None or rev == '':
        rev = node

    if date and rev is not None:
        raise error.Abort(_("you can't specify a revision and a date"))

    if len([x for x in (clean, check, merge) if x]) > 1:
        raise error.Abort(_("can only specify one of -C/--clean, -c/--check, "
                            "or -m/merge"))

    updatecheck = None
    if check:
        updatecheck = 'abort'
    elif merge:
        updatecheck = 'none'

    with repo.wlock():
        cmdutil.clearunfinished(repo)

        if date:
            rev = cmdutil.finddate(ui, repo, date)

        # if we defined a bookmark, we have to remember the original name
        brev = rev
        rev = scmutil.revsingle(repo, rev, rev).rev()

        repo.ui.setconfig('ui', 'forcemerge', tool, 'update')

        return hg.updatetotally(ui, repo, rev, brev, clean=clean,
                                updatecheck=updatecheck)

@command('verify', [])
def verify(ui, repo):
    """verify the integrity of the repository

    Verify the integrity of the current repository.

    This will perform an extensive check of the repository's
    integrity, validating the hashes and checksums of each entry in
    the changelog, manifest, and tracked files, as well as the
    integrity of their crosslinks and indices.

    Please see https://mercurial-scm.org/wiki/RepositoryCorruption
    for more information about recovery from corruption of the
    repository.

    Returns 0 on success, 1 if errors are encountered.
    """
    return hg.verify(repo)

@command('version', [] + formatteropts, norepo=True)
def version_(ui, **opts):
    """output version and copyright information"""
    opts = pycompat.byteskwargs(opts)
    if ui.verbose:
        ui.pager('version')
    fm = ui.formatter("version", opts)
    fm.startitem()
    fm.write("ver", _("Mercurial Distributed SCM (version %s)\n"),
             util.version())
    license = _(
        "(see https://mercurial-scm.org for more information)\n"
        "\nCopyright (C) 2005-2017 Matt Mackall and others\n"
        "This is free software; see the source for copying conditions. "
        "There is NO\nwarranty; "
        "not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n"
    )
    if not ui.quiet:
        fm.plain(license)

    if ui.verbose:
        fm.plain(_("\nEnabled extensions:\n\n"))
        # format names and versions into columns
        names = []
        vers = []
        isinternals = []
        for name, module in extensions.extensions():
            names.append(name)
            vers.append(extensions.moduleversion(module) or None)
            isinternals.append(extensions.ismoduleinternal(module))
        fn = fm.nested("extensions")
        if names:
            namefmt = " %%-%ds " % max(len(n) for n in names)
            places = [_("external"), _("internal")]
            for n, v, p in zip(names, vers, isinternals):
                fn.startitem()
                fn.condwrite(ui.verbose, "name", namefmt, n)
                if ui.verbose:
                    fn.plain("%s " % places[p])
                fn.data(bundled=p)
                fn.condwrite(ui.verbose and v, "ver", "%s", v)
                if ui.verbose:
                    fn.plain("\n")
        fn.end()
    fm.end()

def loadcmdtable(ui, name, cmdtable):
    """Load command functions from specified cmdtable
    """
    overrides = [cmd for cmd in cmdtable if cmd in table]
    if overrides:
        ui.warn(_("extension '%s' overrides commands: %s\n")
                % (name, " ".join(overrides)))
    table.update(cmdtable)
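The change in the `unbundle` hunk above renames `changegroup.combineresults` to `bundle2.combinechangegroupresults`, which folds the per-changegroup return codes collected from `op.records['changegroup']` into the single `modheads` value passed to `postincoming`. For context, a minimal self-contained sketch of that folding logic follows; it is a paraphrase from memory of the moved helper, not the exact upstream implementation (the code-convention comments are assumptions):

```python
def combinechangegroupresults(results):
    """Fold per-changegroup return codes into one value.

    Assumed convention, mirroring hg addchangegroup results:
    1 means changesets were added with no new heads, 1+n means
    n new heads, -1-n means n heads were removed, 0 means error.
    """
    changedheads = 0
    result = 1
    for ret in results:
        # Any failed changegroup makes the combined result an error.
        if ret == 0:
            result = 0
            break
        if ret < -1:
            changedheads += ret + 1  # accumulate removed heads
        elif ret > 1:
            changedheads += ret - 1  # accumulate added heads
    if changedheads > 0:
        result = 1 + changedheads
    elif changedheads < 0:
        result = -1 + changedheads
    return result
```

Under this convention, applying two bundles that each add one new head (return code 2 each) combines to 3, i.e. two new heads overall.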
@@ -1,2013 +1,2013 b''
1 # exchange.py - utility to exchange data between repos.
1 # exchange.py - utility to exchange data between repos.
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import errno
10 import errno
11 import hashlib
11 import hashlib
12
12
13 from .i18n import _
13 from .i18n import _
14 from .node import (
14 from .node import (
15 hex,
15 hex,
16 nullid,
16 nullid,
17 )
17 )
18 from . import (
18 from . import (
19 bookmarks as bookmod,
19 bookmarks as bookmod,
20 bundle2,
20 bundle2,
21 changegroup,
21 changegroup,
22 discovery,
22 discovery,
23 error,
23 error,
24 lock as lockmod,
24 lock as lockmod,
25 obsolete,
25 obsolete,
26 phases,
26 phases,
27 pushkey,
27 pushkey,
28 pycompat,
28 pycompat,
29 scmutil,
29 scmutil,
30 sslutil,
30 sslutil,
31 streamclone,
31 streamclone,
32 url as urlmod,
32 url as urlmod,
33 util,
33 util,
34 )
34 )
35
35
36 urlerr = util.urlerr
36 urlerr = util.urlerr
37 urlreq = util.urlreq
37 urlreq = util.urlreq
38
38
39 # Maps bundle version human names to changegroup versions.
39 # Maps bundle version human names to changegroup versions.
40 _bundlespeccgversions = {'v1': '01',
40 _bundlespeccgversions = {'v1': '01',
41 'v2': '02',
41 'v2': '02',
42 'packed1': 's1',
42 'packed1': 's1',
43 'bundle2': '02', #legacy
43 'bundle2': '02', #legacy
44 }
44 }
45
45
46 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
46 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
47 _bundlespecv1compengines = {'gzip', 'bzip2', 'none'}
47 _bundlespecv1compengines = {'gzip', 'bzip2', 'none'}
48
48
49 def parsebundlespec(repo, spec, strict=True, externalnames=False):
49 def parsebundlespec(repo, spec, strict=True, externalnames=False):
50 """Parse a bundle string specification into parts.
50 """Parse a bundle string specification into parts.
51
51
52 Bundle specifications denote a well-defined bundle/exchange format.
52 Bundle specifications denote a well-defined bundle/exchange format.
53 The content of a given specification should not change over time in
53 The content of a given specification should not change over time in
54 order to ensure that bundles produced by a newer version of Mercurial are
54 order to ensure that bundles produced by a newer version of Mercurial are
55 readable from an older version.
55 readable from an older version.
56
56
57 The string currently has the form:
57 The string currently has the form:
58
58
59 <compression>-<type>[;<parameter0>[;<parameter1>]]
59 <compression>-<type>[;<parameter0>[;<parameter1>]]
60
60
61 Where <compression> is one of the supported compression formats
61 Where <compression> is one of the supported compression formats
62 and <type> is (currently) a version string. A ";" can follow the type and
62 and <type> is (currently) a version string. A ";" can follow the type and
63 all text afterwards is interpreted as URI encoded, ";" delimited key=value
63 all text afterwards is interpreted as URI encoded, ";" delimited key=value
64 pairs.
64 pairs.
65
65
66 If ``strict`` is True (the default) <compression> is required. Otherwise,
66 If ``strict`` is True (the default) <compression> is required. Otherwise,
67 it is optional.
67 it is optional.
68
68
69 If ``externalnames`` is False (the default), the human-centric names will
69 If ``externalnames`` is False (the default), the human-centric names will
70 be converted to their internal representation.
70 be converted to their internal representation.
71
71
72 Returns a 3-tuple of (compression, version, parameters). Compression will
72 Returns a 3-tuple of (compression, version, parameters). Compression will
73 be ``None`` if not in strict mode and a compression isn't defined.
73 be ``None`` if not in strict mode and a compression isn't defined.
74
74
75 An ``InvalidBundleSpecification`` is raised when the specification is
75 An ``InvalidBundleSpecification`` is raised when the specification is
76 not syntactically well formed.
76 not syntactically well formed.
77
77
78 An ``UnsupportedBundleSpecification`` is raised when the compression or
78 An ``UnsupportedBundleSpecification`` is raised when the compression or
79 bundle type/version is not recognized.
79 bundle type/version is not recognized.
80
80
81 Note: this function will likely eventually return a more complex data
81 Note: this function will likely eventually return a more complex data
82 structure, including bundle2 part information.
82 structure, including bundle2 part information.
83 """
83 """
84 def parseparams(s):
84 def parseparams(s):
85 if ';' not in s:
85 if ';' not in s:
86 return s, {}
86 return s, {}
87
87
88 params = {}
88 params = {}
89 version, paramstr = s.split(';', 1)
89 version, paramstr = s.split(';', 1)
90
90
91 for p in paramstr.split(';'):
91 for p in paramstr.split(';'):
92 if '=' not in p:
92 if '=' not in p:
93 raise error.InvalidBundleSpecification(
93 raise error.InvalidBundleSpecification(
94 _('invalid bundle specification: '
94 _('invalid bundle specification: '
95 'missing "=" in parameter: %s') % p)
95 'missing "=" in parameter: %s') % p)
96
96
97 key, value = p.split('=', 1)
97 key, value = p.split('=', 1)
98 key = urlreq.unquote(key)
98 key = urlreq.unquote(key)
99 value = urlreq.unquote(value)
99 value = urlreq.unquote(value)
100 params[key] = value
100 params[key] = value
101
101
102 return version, params
102 return version, params


    if strict and '-' not in spec:
        raise error.InvalidBundleSpecification(
            _('invalid bundle specification; '
              'must be prefixed with compression: %s') % spec)

    if '-' in spec:
        compression, version = spec.split('-', 1)

        if compression not in util.compengines.supportedbundlenames:
            raise error.UnsupportedBundleSpecification(
                _('%s compression is not supported') % compression)

        version, params = parseparams(version)

        if version not in _bundlespeccgversions:
            raise error.UnsupportedBundleSpecification(
                _('%s is not a recognized bundle version') % version)
    else:
        # Value could be just the compression or just the version, in which
        # case some defaults are assumed (but only when not in strict mode).
        assert not strict

        spec, params = parseparams(spec)

        if spec in util.compengines.supportedbundlenames:
            compression = spec
            version = 'v1'
            # Generaldelta repos require v2.
            if 'generaldelta' in repo.requirements:
                version = 'v2'
            # Modern compression engines require v2.
            if compression not in _bundlespecv1compengines:
                version = 'v2'
        elif spec in _bundlespeccgversions:
            if spec == 'packed1':
                compression = 'none'
            else:
                compression = 'bzip2'
            version = spec
        else:
            raise error.UnsupportedBundleSpecification(
                _('%s is not a recognized bundle specification') % spec)

    # Bundle version 1 only supports a known set of compression engines.
    if version == 'v1' and compression not in _bundlespecv1compengines:
        raise error.UnsupportedBundleSpecification(
            _('compression engine %s is not supported on v1 bundles') %
            compression)

    # The specification for packed1 can optionally declare the data formats
    # required to apply it. If we see this metadata, compare against what the
    # repo supports and error if the bundle isn't compatible.
    if version == 'packed1' and 'requirements' in params:
        requirements = set(params['requirements'].split(','))
        missingreqs = requirements - repo.supportedformats
        if missingreqs:
            raise error.UnsupportedBundleSpecification(
                _('missing support for repository features: %s') %
                ', '.join(sorted(missingreqs)))

    if not externalnames:
        engine = util.compengines.forbundlename(compression)
        compression = engine.bundletype()[1]
        version = _bundlespeccgversions[version]
    return compression, version, params

def readbundle(ui, fh, fname, vfs=None):
    header = changegroup.readexactly(fh, 4)

    alg = None
    if not fname:
        fname = "stream"
        if not header.startswith('HG') and header.startswith('\0'):
            fh = changegroup.headerlessfixup(fh, header)
            header = "HG10"
            alg = 'UN'
    elif vfs:
        fname = vfs.join(fname)

    magic, version = header[0:2], header[2:4]

    if magic != 'HG':
        raise error.Abort(_('%s: not a Mercurial bundle') % fname)
    if version == '10':
        if alg is None:
            alg = changegroup.readexactly(fh, 2)
        return changegroup.cg1unpacker(fh, alg)
    elif version.startswith('2'):
        return bundle2.getunbundler(ui, fh, magicstring=magic + version)
    elif version == 'S1':
        return streamclone.streamcloneapplier(fh)
    else:
        raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))
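`readbundle` dispatches on a 4-byte header: the first two bytes are the magic (`'HG'`), the next two the version. A minimal sketch of just that dispatch logic (the function and return names here are illustrative, not part of Mercurial's API):

```python
def classifyheader(header):
    """Map a 4-byte bundle header to the unpacker family readbundle selects."""
    magic, version = header[0:2], header[2:4]
    if magic != 'HG':
        raise ValueError('%r: not a Mercurial bundle' % header)
    if version == '10':
        return 'cg1unpacker'      # bundle1 changegroup
    elif version.startswith('2'):
        return 'bundle2'          # e.g. 'HG20'
    elif version == 'S1':
        return 'streamclone'
    raise ValueError('unknown bundle version %s' % version)
```

So `'HG10'` selects the bundle1 changegroup unpacker, `'HG20'` the bundle2 unbundler, and `'HGS1'` the stream clone applier.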

def getbundlespec(ui, fh):
    """Infer the bundlespec from a bundle file handle.

    The input file handle is seeked and the original seek position is not
    restored.
    """
    def speccompression(alg):
        try:
            return util.compengines.forbundletype(alg).bundletype()[0]
        except KeyError:
            return None

    b = readbundle(ui, fh, None)
    if isinstance(b, changegroup.cg1unpacker):
        alg = b._type
        if alg == '_truncatedBZ':
            alg = 'BZ'
        comp = speccompression(alg)
        if not comp:
            raise error.Abort(_('unknown compression algorithm: %s') % alg)
        return '%s-v1' % comp
    elif isinstance(b, bundle2.unbundle20):
        if 'Compression' in b.params:
            comp = speccompression(b.params['Compression'])
            if not comp:
                raise error.Abort(_('unknown compression algorithm: %s') % comp)
        else:
            comp = 'none'

        version = None
        for part in b.iterparts():
            if part.type == 'changegroup':
                version = part.params['version']
                if version in ('01', '02'):
                    version = 'v2'
                else:
                    raise error.Abort(_('changegroup version %s does not have '
                                        'a known bundlespec') % version,
                                      hint=_('try upgrading your Mercurial '
                                             'client'))

        if not version:
            raise error.Abort(_('could not identify changegroup version in '
                                'bundle'))

        return '%s-%s' % (comp, version)
    elif isinstance(b, streamclone.streamcloneapplier):
        requirements = streamclone.readbundle1header(fh)[2]
        params = 'requirements=%s' % ','.join(sorted(requirements))
        return 'none-packed1;%s' % urlreq.quote(params)
    else:
        raise error.Abort(_('unknown bundle type: %s') % b)

def _computeoutgoing(repo, heads, common):
    """Computes which revs are outgoing given a set of common
    and a set of heads.

    This is a separate function so extensions can have access to
    the logic.

    Returns a discovery.outgoing object.
    """
    cl = repo.changelog
    if common:
        hasnode = cl.hasnode
        common = [n for n in common if hasnode(n)]
    else:
        common = [nullid]
    if not heads:
        heads = cl.heads()
    return discovery.outgoing(repo, common, heads)

def _forcebundle1(op):
    """return true if a pull/push must use bundle1

    This function is used to allow testing of the older bundle version"""
    ui = op.repo.ui
    forcebundle1 = False
    # The goal of this config is to allow developers to choose the bundle
    # version used during exchange. This is especially handy during tests.
    # Value is a list of bundle versions to be picked from; the highest
    # version should be used.
    #
    # developer config: devel.legacy.exchange
    exchange = ui.configlist('devel', 'legacy.exchange')
    forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
    return forcebundle1 or not op.remote.capable('bundle2')
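The `devel.legacy.exchange` decision above reduces to a small predicate: bundle1 is forced only when the configured list names `'bundle1'` without also naming `'bundle2'`. A standalone sketch of just that predicate (the remote-capability check is omitted; the function name is illustrative):

```python
def forcedbundle1(exchange):
    """Return True if a devel.legacy.exchange value forces bundle1.

    'exchange' is the configured list of bundle versions; bundle2 wins
    whenever it is listed, matching the expression in _forcebundle1.
    """
    return 'bundle2' not in exchange and 'bundle1' in exchange
```

So `['bundle1']` forces bundle1, while `['bundle1', 'bundle2']` and an empty list do not.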

class pushoperation(object):
    """An object that represents a single push operation.

    Its purpose is to carry push related state and very common operations.

    A new pushoperation should be created at the beginning of each push and
    discarded afterward.
    """

    def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
                 bookmarks=()):
        # repo we push from
        self.repo = repo
        self.ui = repo.ui
        # repo we push to
        self.remote = remote
        # force option provided
        self.force = force
        # revs to be pushed (None is "all")
        self.revs = revs
        # bookmarks explicitly pushed
        self.bookmarks = bookmarks
        # allow push of new branch
        self.newbranch = newbranch
        # did a local lock get acquired?
        self.locallocked = None
        # steps already performed
        # (used to check what steps have been already performed through bundle2)
        self.stepsdone = set()
        # Integer version of the changegroup push result
        # - None means nothing to push
        # - 0 means HTTP error
        # - 1 means we pushed and remote head count is unchanged *or*
        #   we have outgoing changesets but refused to push
        # - other values as described by addchangegroup()
        self.cgresult = None
        # Boolean value for the bookmark push
        self.bkresult = None
        # discovery.outgoing object (contains common and outgoing data)
        self.outgoing = None
        # all remote topological heads before the push
        self.remoteheads = None
        # Details of the remote branch pre and post push
        #
        # mapping: {'branch': ([remoteheads],
        #                      [newheads],
        #                      [unsyncedheads],
        #                      [discardedheads])}
        # - branch: the branch name
        # - remoteheads: the list of remote heads known locally
        #                None if the branch is new
        # - newheads: the new remote heads (known locally) with outgoing pushed
        # - unsyncedheads: the list of remote heads unknown locally.
        # - discardedheads: the list of remote heads made obsolete by the push
        self.pushbranchmap = None
        # testable as a boolean indicating if any nodes are missing locally.
        self.incoming = None
        # phase changes that must be pushed alongside the changesets
        self.outdatedphases = None
        # phase changes that must be pushed if the changeset push fails
        self.fallbackoutdatedphases = None
        # outgoing obsmarkers
        self.outobsmarkers = set()
        # outgoing bookmarks
        self.outbookmarks = []
        # transaction manager
        self.trmanager = None
        # map {pushkey partid -> callback handling failure}
        # used to handle exceptions from mandatory pushkey part failure
        self.pkfailcb = {}

    @util.propertycache
    def futureheads(self):
        """future remote heads if the changeset push succeeds"""
        return self.outgoing.missingheads

    @util.propertycache
    def fallbackheads(self):
        """future remote heads if the changeset push fails"""
        if self.revs is None:
            # no revs were targeted for push, so all common heads are relevant
            return self.outgoing.commonheads
        unfi = self.repo.unfiltered()
        # I want cheads = heads(::missingheads and ::commonheads)
        # (missingheads is revs with secret changesets filtered out)
        #
        # This can be expressed as:
        #     cheads = ((missingheads and ::commonheads)
        #               + (commonheads and ::missingheads))
        #
        # while trying to push we already computed the following:
        #     common = (::commonheads)
        #     missing = ((commonheads::missingheads) - commonheads)
        #
        # We can pick:
        # * missingheads part of common (::commonheads)
        common = self.outgoing.common
        nm = self.repo.changelog.nodemap
        cheads = [node for node in self.revs if nm[node] in common]
        # and
        # * commonheads parents on missing
        revset = unfi.set('%ln and parents(roots(%ln))',
                          self.outgoing.commonheads,
                          self.outgoing.missing)
        cheads.extend(c.node() for c in revset)
        return cheads

    @property
    def commonheads(self):
        """set of all common heads after changeset bundle push"""
        if self.cgresult:
            return self.futureheads
        else:
            return self.fallbackheads

    # mapping of messages used when pushing bookmarks
    bookmsgmap = {'update': (_("updating bookmark %s\n"),
                             _('updating bookmark %s failed!\n')),
                  'export': (_("exporting bookmark %s\n"),
                             _('exporting bookmark %s failed!\n')),
                  'delete': (_("deleting remote bookmark %s\n"),
                             _('deleting remote bookmark %s failed!\n')),
                  }


def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
         opargs=None):
    '''Push outgoing changesets (limited by revs) from a local
    repository to remote. Return an integer:
      - None means nothing to push
      - 0 means HTTP error
      - 1 means we pushed and remote head count is unchanged *or*
        we have outgoing changesets but refused to push
      - other values as described by addchangegroup()
    '''
    if opargs is None:
        opargs = {}
    pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
                           **opargs)
    if pushop.remote.local():
        missing = (set(pushop.repo.requirements)
                   - pushop.remote.local().supported)
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    # there are two ways to push to remote repo:
    #
    # addchangegroup assumes local user can lock remote
    # repo (local filesystem, old ssh servers).
    #
    # unbundle assumes local user cannot lock remote repo (new ssh
    # servers, http servers).

    if not pushop.remote.canpush():
        raise error.Abort(_("destination does not support push"))
    # get local lock as we might write phase data
    localwlock = locallock = None
    try:
        # bundle2 push may receive a reply bundle touching bookmarks or other
        # things requiring the wlock. Take it now to ensure proper ordering.
        maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
        if (not _forcebundle1(pushop)) and maypushback:
            localwlock = pushop.repo.wlock()
        locallock = pushop.repo.lock()
        pushop.locallocked = True
    except IOError as err:
        pushop.locallocked = False
        if err.errno != errno.EACCES:
            raise
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = 'cannot lock source repository: %s\n' % err
        pushop.ui.debug(msg)
    try:
        if pushop.locallocked:
            pushop.trmanager = transactionmanager(pushop.repo,
                                                  'push-response',
                                                  pushop.remote.url())
        pushop.repo.checkpush(pushop)
        lock = None
        unbundle = pushop.remote.capable('unbundle')
        if not unbundle:
            lock = pushop.remote.lock()
        try:
            _pushdiscovery(pushop)
            if not _forcebundle1(pushop):
                _pushbundle2(pushop)
            _pushchangeset(pushop)
            _pushsyncphase(pushop)
            _pushobsolete(pushop)
            _pushbookmark(pushop)
        finally:
            if lock is not None:
                lock.release()
        if pushop.trmanager:
            pushop.trmanager.close()
    finally:
        if pushop.trmanager:
            pushop.trmanager.release()
        if locallock is not None:
            locallock.release()
        if localwlock is not None:
            localwlock.release()

    return pushop

# list of steps to perform discovery before push
pushdiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pushdiscoverymapping = {}

def pushdiscovery(stepname):
    """decorator for a function performing discovery before push

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pushdiscovery dictionary directly."""
    def dec(func):
        assert stepname not in pushdiscoverymapping
        pushdiscoverymapping[stepname] = func
        pushdiscoveryorder.append(stepname)
        return func
    return dec
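The `pushdiscovery` decorator implements an ordered registry: each step is recorded in a name -> function mapping, its name is appended to a list, and `_pushdiscovery` later runs the steps in registration order. A standalone sketch of the pattern (the step bodies here are dummies, not Mercurial's actual discovery steps):

```python
# Ordered registry, mirroring pushdiscoveryorder / pushdiscoverymapping above.
pushdiscoveryorder = []
pushdiscoverymapping = {}

def pushdiscovery(stepname):
    """Register the decorated function as a named discovery step."""
    def dec(func):
        assert stepname not in pushdiscoverymapping
        pushdiscoverymapping[stepname] = func
        pushdiscoveryorder.append(stepname)
        return func
    return dec

@pushdiscovery('changeset')
def discoverychangeset(state):
    state.append('changeset')   # stand-in for real changeset discovery

@pushdiscovery('phase')
def discoveryphase(state):
    state.append('phase')       # stand-in for real phase discovery

def rundiscovery():
    """Run all registered steps in registration order (cf. _pushdiscovery)."""
    state = []
    for stepname in pushdiscoveryorder:
        pushdiscoverymapping[stepname](state)
    return state
```

Because extensions mutate the same module-level mapping, a wrapper can replace `pushdiscoverymapping['changeset']` without touching the ordering list.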

def _pushdiscovery(pushop):
    """Run all discovery steps"""
    for stepname in pushdiscoveryorder:
        step = pushdiscoverymapping[stepname]
        step(pushop)
528 @pushdiscovery('changeset')
528 @pushdiscovery('changeset')
529 def _pushdiscoverychangeset(pushop):
529 def _pushdiscoverychangeset(pushop):
530 """discover the changeset that need to be pushed"""
530 """discover the changeset that need to be pushed"""
531 fci = discovery.findcommonincoming
531 fci = discovery.findcommonincoming
532 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
532 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
533 common, inc, remoteheads = commoninc
533 common, inc, remoteheads = commoninc
534 fco = discovery.findcommonoutgoing
534 fco = discovery.findcommonoutgoing
535 outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
535 outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
536 commoninc=commoninc, force=pushop.force)
536 commoninc=commoninc, force=pushop.force)
537 pushop.outgoing = outgoing
537 pushop.outgoing = outgoing
538 pushop.remoteheads = remoteheads
538 pushop.remoteheads = remoteheads
539 pushop.incoming = inc
539 pushop.incoming = inc

@pushdiscovery('phase')
def _pushdiscoveryphase(pushop):
    """discover the phases that need to be pushed

    (computed for both the success and failure cases of the changeset push)"""
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    remotephases = pushop.remote.listkeys('phases')
    publishing = remotephases.get('publishing', False)
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and not pushop.outgoing.missing # no changesets to be pushed
        and publishing):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changesets are to be pushed
        # - and the remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    ana = phases.analyzeremotephases(pushop.repo,
                                     pushop.fallbackheads,
                                     remotephases)
    pheads, droots = ana
    extracond = ''
    if not publishing:
        extracond = ' and public()'
    revset = 'heads((%%ln::%%ln) %s)' % extracond
    # Get the list of all revs draft on remote but public here.
    # XXX Beware that the revset breaks if droots is not strictly
    # XXX roots; we may want to ensure it is, but that is costly.
    fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
    if not outgoing.missing:
        future = fallback
    else:
        # add the changesets we are going to push as draft
        #
        # should not be necessary for publishing server, but because of an
        # issue fixed in xxxxx we have to do it anyway.
        fdroots = list(unfi.set('roots(%ln + %ln::)',
                                outgoing.missing, droots))
        fdroots = [f.node() for f in fdroots]
        future = list(unfi.set(revset, fdroots, pushop.futureheads))
    pushop.outdatedphases = future
    pushop.fallbackoutdatedphases = fallback
589
589
@pushdiscovery('obsmarker')
def _pushdiscoveryobsmarkers(pushop):
    if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
        and pushop.repo.obsstore
        and 'obsolete' in pushop.remote.listkeys('namespaces')):
        repo = pushop.repo
        # very naive computation that can be quite expensive on a big repo.
        # However, evolution is currently slow on such repos anyway.
        nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
        pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)

@pushdiscovery('bookmarks')
def _pushdiscoverybookmarks(pushop):
    ui = pushop.ui
    repo = pushop.repo.unfiltered()
    remote = pushop.remote
    ui.debug("checking for updated bookmarks\n")
    ancestors = ()
    if pushop.revs:
        revnums = map(repo.changelog.rev, pushop.revs)
        ancestors = repo.changelog.ancestors(revnums, inclusive=True)
    remotebookmark = remote.listkeys('bookmarks')

    explicit = set([repo._bookmarks.expandname(bookmark)
                    for bookmark in pushop.bookmarks])

    remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
    comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)

    def safehex(x):
        if x is None:
            return x
        return hex(x)

    def hexifycompbookmarks(bookmarks):
        for b, scid, dcid in bookmarks:
            yield b, safehex(scid), safehex(dcid)

    comp = [hexifycompbookmarks(marks) for marks in comp]
    addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp

    for b, scid, dcid in advsrc:
        if b in explicit:
            explicit.remove(b)
        if not ancestors or repo[scid].rev() in ancestors:
            pushop.outbookmarks.append((b, dcid, scid))
    # search for added bookmarks
    for b, scid, dcid in addsrc:
        if b in explicit:
            explicit.remove(b)
        pushop.outbookmarks.append((b, '', scid))
    # search for overwritten bookmarks
    for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
        if b in explicit:
            explicit.remove(b)
        pushop.outbookmarks.append((b, dcid, scid))
    # search for bookmarks to delete
    for b, scid, dcid in adddst:
        if b in explicit:
            explicit.remove(b)
        # treat as "deleted locally"
        pushop.outbookmarks.append((b, dcid, ''))
    # identical bookmarks shouldn't get reported
    for b, scid, dcid in same:
        if b in explicit:
            explicit.remove(b)

    if explicit:
        explicit = sorted(explicit)
        # we should probably list all of them
        ui.warn(_('bookmark %s does not exist on the local '
                  'or remote repository!\n') % explicit[0])
        pushop.bkresult = 2

    pushop.outbookmarks.sort()

def _pushcheckoutgoing(pushop):
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    if not outgoing.missing:
        # nothing to push
        scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
        return False
    # something to push
    if not pushop.force:
        # if repo.obsstore is empty --> no obsolete markers,
        # so we can skip the iteration
        if unfi.obsstore:
            # these messages are kept short for the 80-char limit
            mso = _("push includes obsolete changeset: %s!")
            mst = {"unstable": _("push includes unstable changeset: %s!"),
                   "bumped": _("push includes bumped changeset: %s!"),
                   "divergent": _("push includes divergent changeset: %s!")}
            # If there is at least one obsolete or unstable
            # changeset in missing, at least one of the missing
            # heads will be obsolete or unstable. So checking
            # heads only is ok.
            for node in outgoing.missingheads:
                ctx = unfi[node]
                if ctx.obsolete():
                    raise error.Abort(mso % ctx)
                elif ctx.troubled():
                    raise error.Abort(mst[ctx.troubles()[0]] % ctx)

    discovery.checkheads(pushop)
    return True

# List of names of steps to perform for an outgoing bundle2, order matters.
b2partsgenorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
b2partsgenmapping = {}

def b2partsgenerator(stepname, idx=None):
    """decorator for functions generating bundle2 parts

    The function is added to the step -> function mapping and appended to
    the list of steps. Beware that decorated functions will be added in
    order (this may matter).

    You can only use this decorator for new steps; if you want to wrap a
    step from an extension, modify the b2partsgenmapping dictionary
    directly."""
    def dec(func):
        assert stepname not in b2partsgenmapping
        b2partsgenmapping[stepname] = func
        if idx is None:
            b2partsgenorder.append(stepname)
        else:
            b2partsgenorder.insert(idx, stepname)
        return func
    return dec

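# The registration pattern used by b2partsgenerator() above can be
# sketched standalone: a decorator records each step function in a
# mapping plus an ordered list, and callers later walk the list to run
# the steps in order. Everything below (_demopartsorder and the demo
# step names) is an illustrative sketch, not part of this module's API.

def _demopartsorder():
    stepsorder = []
    stepsmapping = {}

    def partsgenerator(stepname, idx=None):
        # same shape as b2partsgenerator: register the function under its
        # step name, then append to (or insert into) the ordered list
        def dec(func):
            assert stepname not in stepsmapping
            stepsmapping[stepname] = func
            if idx is None:
                stepsorder.append(stepname)
            else:
                stepsorder.insert(idx, stepname)
            return func
        return dec

    @partsgenerator('one')
    def _one():
        return 'part-one'

    @partsgenerator('zero', idx=0)
    def _zero():
        return 'part-zero'

    # steps run in list order: 'zero' was inserted at index 0
    return [stepsmapping[name]() for name in stepsorder]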
def _pushb2ctxcheckheads(pushop, bundler):
    """Generate race condition checking parts

    Exists as an independent function to aid extensions
    """
    # * 'force' does not check for push races,
    # * if we don't push anything, there is nothing to check.
    if not pushop.force and pushop.outgoing.missingheads:
        allowunrelated = 'related' in bundler.capabilities.get('checkheads', ())
        if not allowunrelated:
            bundler.newpart('check:heads', data=iter(pushop.remoteheads))
        else:
            affected = set()
            for branch, heads in pushop.pushbranchmap.iteritems():
                remoteheads, newheads, unsyncedheads, discardedheads = heads
                if remoteheads is not None:
                    remote = set(remoteheads)
                    affected |= set(discardedheads) & remote
                    affected |= remote - set(newheads)
            if affected:
                data = iter(sorted(affected))
                bundler.newpart('check:updated-heads', data=data)

@b2partsgenerator('changeset')
def _pushb2ctx(pushop, bundler):
    """handle changegroup push through bundle2

    addchangegroup result is stored in the ``pushop.cgresult`` attribute.
    """
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    # Send known heads to the server for race detection.
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop)

    _pushb2ctxcheckheads(pushop, bundler)

    b2caps = bundle2.bundle2caps(pushop.remote)
    version = '01'
    cgversions = b2caps.get('changegroup')
    if cgversions:  # 3.1 and 3.2 ship with an empty value
        cgversions = [v for v in cgversions
                      if v in changegroup.supportedoutgoingversions(
                          pushop.repo)]
        if not cgversions:
            raise ValueError(_('no common changegroup version'))
        version = max(cgversions)
    cg = changegroup.getlocalchangegroupraw(pushop.repo, 'push',
                                            pushop.outgoing,
                                            version=version)
    cgpart = bundler.newpart('changegroup', data=cg)
    if cgversions:
        cgpart.addparam('version', version)
    if 'treemanifest' in pushop.repo.requirements:
        cgpart.addparam('treemanifest', '1')
    def handlereply(op):
        """extract addchangegroup returns from server reply"""
        cgreplies = op.records.getreplies(cgpart.id)
        assert len(cgreplies['changegroup']) == 1
        pushop.cgresult = cgreplies['changegroup'][0]['return']
    return handlereply

@b2partsgenerator('phase')
def _pushb2phases(pushop, bundler):
    """handle phase push through bundle2"""
    if 'phases' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'pushkey' not in b2caps:
        return
    pushop.stepsdone.add('phases')
    part2node = []

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, node in part2node:
            if partid == targetid:
                raise error.Abort(_('updating %s to public failed') % node)

    enc = pushkey.encode
    for newremotehead in pushop.outdatedphases:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('phases'))
        part.addparam('key', enc(newremotehead.hex()))
        part.addparam('old', enc(str(phases.draft)))
        part.addparam('new', enc(str(phases.public)))
        part2node.append((part.id, newremotehead))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        for partid, node in part2node:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            msg = None
            if not results:
                msg = _('server ignored update of %s to public!\n') % node
            elif not int(results[0]['return']):
                msg = _('updating %s to public failed!\n') % node
            if msg is not None:
                pushop.ui.warn(msg)
    return handlereply

@b2partsgenerator('obsmarkers')
def _pushb2obsmarkers(pushop, bundler):
    if 'obsmarkers' in pushop.stepsdone:
        return
    remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
    if obsolete.commonversion(remoteversions) is None:
        return
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        markers = sorted(pushop.outobsmarkers)
        bundle2.buildobsmarkerspart(bundler, markers)

@b2partsgenerator('bookmarks')
def _pushb2bookmarks(pushop, bundler):
    """handle bookmark push through bundle2"""
    if 'bookmarks' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'pushkey' not in b2caps:
        return
    pushop.stepsdone.add('bookmarks')
    part2book = []
    enc = pushkey.encode

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, book, action in part2book:
            if partid == targetid:
                raise error.Abort(bookmsgmap[action][1].rstrip() % book)
        # we should not be called for parts we did not generate
        assert False

    for book, old, new in pushop.outbookmarks:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('bookmarks'))
        part.addparam('key', enc(book))
        part.addparam('old', enc(old))
        part.addparam('new', enc(new))
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        part2book.append((part.id, book, action))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        ui = pushop.ui
        for partid, book, action in part2book:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            if not results:
                pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
            else:
                ret = int(results[0]['return'])
                if ret:
                    ui.status(bookmsgmap[action][0] % book)
                else:
                    ui.warn(bookmsgmap[action][1] % book)
        if pushop.bkresult is not None:
            pushop.bkresult = 1
    return handlereply

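# The handlereply closures built above share one pattern: while
# generating parts we record (part id, context) pairs, and the returned
# closure later matches server replies against those ids. A standalone
# sketch (all names here, including _demomakehandler and the reply dict
# shape, are illustrative only, not this module's API):

def _demomakehandler(books):
    part2book = []
    # while "generating parts", record (part id, context) pairs
    for partid, book in enumerate(books):
        part2book.append((partid, book))

    def handlereply(replies):
        # replies: part id -> list of {'return': int} results
        out = []
        for partid, book in part2book:
            results = replies.get(partid, [])
            if not results:
                out.append(('ignored', book))
            elif int(results[0]['return']):
                out.append(('ok', book))
            else:
                out.append(('failed', book))
        return out
    return handlereply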
893
893
894 def _pushbundle2(pushop):
894 def _pushbundle2(pushop):
895 """push data to the remote using bundle2
895 """push data to the remote using bundle2
896
896
897 The only currently supported type of data is changegroup but this will
897 The only currently supported type of data is changegroup but this will
898 evolve in the future."""
898 evolve in the future."""
899 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
899 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
900 pushback = (pushop.trmanager
900 pushback = (pushop.trmanager
901 and pushop.ui.configbool('experimental', 'bundle2.pushback'))
901 and pushop.ui.configbool('experimental', 'bundle2.pushback'))
902
902
903 # create reply capability
903 # create reply capability
904 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
904 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
905 allowpushback=pushback))
905 allowpushback=pushback))
906 bundler.newpart('replycaps', data=capsblob)
906 bundler.newpart('replycaps', data=capsblob)
907 replyhandlers = []
907 replyhandlers = []
908 for partgenname in b2partsgenorder:
908 for partgenname in b2partsgenorder:
909 partgen = b2partsgenmapping[partgenname]
909 partgen = b2partsgenmapping[partgenname]
910 ret = partgen(pushop, bundler)
910 ret = partgen(pushop, bundler)
911 if callable(ret):
911 if callable(ret):
912 replyhandlers.append(ret)
912 replyhandlers.append(ret)
913 # do not push if nothing to push
913 # do not push if nothing to push
914 if bundler.nbparts <= 1:
914 if bundler.nbparts <= 1:
915 return
915 return
916 stream = util.chunkbuffer(bundler.getchunks())
916 stream = util.chunkbuffer(bundler.getchunks())
917 try:
917 try:
918 try:
918 try:
919 reply = pushop.remote.unbundle(
919 reply = pushop.remote.unbundle(
920 stream, ['force'], pushop.remote.url())
920 stream, ['force'], pushop.remote.url())
921 except error.BundleValueError as exc:
921 except error.BundleValueError as exc:
922 raise error.Abort(_('missing support for %s') % exc)
922 raise error.Abort(_('missing support for %s') % exc)
923 try:
923 try:
924 trgetter = None
924 trgetter = None
925 if pushback:
925 if pushback:
926 trgetter = pushop.trmanager.transaction
926 trgetter = pushop.trmanager.transaction
927 op = bundle2.processbundle(pushop.repo, reply, trgetter)
927 op = bundle2.processbundle(pushop.repo, reply, trgetter)
928 except error.BundleValueError as exc:
928 except error.BundleValueError as exc:
929 raise error.Abort(_('missing support for %s') % exc)
929 raise error.Abort(_('missing support for %s') % exc)
930 except bundle2.AbortFromPart as exc:
930 except bundle2.AbortFromPart as exc:
931 pushop.ui.status(_('remote: %s\n') % exc)
931 pushop.ui.status(_('remote: %s\n') % exc)
932 if exc.hint is not None:
932 if exc.hint is not None:
933 pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
933 pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
934 raise error.Abort(_('push failed on remote'))
934 raise error.Abort(_('push failed on remote'))
935 except error.PushkeyFailed as exc:
935 except error.PushkeyFailed as exc:
936 partid = int(exc.partid)
936 partid = int(exc.partid)
937 if partid not in pushop.pkfailcb:
937 if partid not in pushop.pkfailcb:
938 raise
938 raise
939 pushop.pkfailcb[partid](pushop, exc)
939 pushop.pkfailcb[partid](pushop, exc)
940 for rephand in replyhandlers:
940 for rephand in replyhandlers:
941 rephand(op)
941 rephand(op)
942
942
943 def _pushchangeset(pushop):
943 def _pushchangeset(pushop):
944 """Make the actual push of changeset bundle to remote repo"""
944 """Make the actual push of changeset bundle to remote repo"""
945 if 'changesets' in pushop.stepsdone:
945 if 'changesets' in pushop.stepsdone:
946 return
946 return
947 pushop.stepsdone.add('changesets')
947 pushop.stepsdone.add('changesets')
948 if not _pushcheckoutgoing(pushop):
948 if not _pushcheckoutgoing(pushop):
949 return
949 return
950 pushop.repo.prepushoutgoinghooks(pushop)
950 pushop.repo.prepushoutgoinghooks(pushop)
951 outgoing = pushop.outgoing
951 outgoing = pushop.outgoing
952 unbundle = pushop.remote.capable('unbundle')
952 unbundle = pushop.remote.capable('unbundle')
953 # TODO: get bundlecaps from remote
953 # TODO: get bundlecaps from remote
954 bundlecaps = None
954 bundlecaps = None
955 # create a changegroup from local
955 # create a changegroup from local
956 if pushop.revs is None and not (outgoing.excluded
956 if pushop.revs is None and not (outgoing.excluded
957 or pushop.repo.changelog.filteredrevs):
957 or pushop.repo.changelog.filteredrevs):
958 # push everything,
958 # push everything,
959 # use the fast path, no race possible on push
959 # use the fast path, no race possible on push
960 bundler = changegroup.cg1packer(pushop.repo, bundlecaps)
960 bundler = changegroup.cg1packer(pushop.repo, bundlecaps)
961 cg = changegroup.getsubset(pushop.repo,
961 cg = changegroup.getsubset(pushop.repo,
962 outgoing,
962 outgoing,
963 bundler,
963 bundler,
964 'push',
964 'push',
965 fastpath=True)
965 fastpath=True)
966 else:
966 else:
967 cg = changegroup.getchangegroup(pushop.repo, 'push', outgoing,
967 cg = changegroup.getchangegroup(pushop.repo, 'push', outgoing,
968 bundlecaps=bundlecaps)
968 bundlecaps=bundlecaps)
969
969
970 # apply changegroup to remote
970 # apply changegroup to remote
971 if unbundle:
971 if unbundle:
972 # local repo finds heads on server, finds out what
972 # local repo finds heads on server, finds out what
973 # revs it must push. once revs transferred, if server
973 # revs it must push. once revs transferred, if server
974 # finds it has different heads (someone else won
974 # finds it has different heads (someone else won
975 # commit/push race), server aborts.
975 # commit/push race), server aborts.
976 if pushop.force:
976 if pushop.force:
977 remoteheads = ['force']
977 remoteheads = ['force']
978 else:
978 else:
979 remoteheads = pushop.remoteheads
979 remoteheads = pushop.remoteheads
980 # ssh: return remote's addchangegroup()
980 # ssh: return remote's addchangegroup()
981 # http: return remote's addchangegroup() or 0 for error
981 # http: return remote's addchangegroup() or 0 for error
982 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
982 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
983 pushop.repo.url())
983 pushop.repo.url())
984 else:
984 else:
985 # we return an integer indicating remote head count
985 # we return an integer indicating remote head count
986 # change
986 # change
987 pushop.cgresult = pushop.remote.addchangegroup(cg, 'push',
987 pushop.cgresult = pushop.remote.addchangegroup(cg, 'push',
988 pushop.repo.url())
988 pushop.repo.url())
989
989
990 def _pushsyncphase(pushop):
990 def _pushsyncphase(pushop):
991 """synchronise phase information locally and remotely"""
991 """synchronise phase information locally and remotely"""
992 cheads = pushop.commonheads
992 cheads = pushop.commonheads
993 # even when we don't push, exchanging phase data is useful
993 # even when we don't push, exchanging phase data is useful
994 remotephases = pushop.remote.listkeys('phases')
994 remotephases = pushop.remote.listkeys('phases')
995 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
995 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
996 and remotephases # server supports phases
996 and remotephases # server supports phases
        and pushop.cgresult is None # nothing was pushed
        and remotephases.get('publishing', False)):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changeset was pushed
        # - and the remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    if not remotephases: # old server or public only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads,
                                         remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get('publishing', False):
            _localphasemove(pushop, cheads)
        else: # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        if pushop.cgresult:
            if 'phases' in pushop.stepsdone:
                # phases already pushed through bundle2
                return
            outdated = pushop.outdatedphases
        else:
            outdated = pushop.fallbackoutdatedphases

        pushop.stepsdone.add('phases')

        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        # fallback to independent pushkey command
        for newremotehead in outdated:
            r = pushop.remote.pushkey('phases',
                                      newremotehead.hex(),
                                      str(phases.draft),
                                      str(phases.public))
            if not r:
                pushop.ui.warn(_('updating %s to public failed!\n')
                               % newremotehead)

def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.trmanager:
        phases.advanceboundary(pushop.repo,
                               pushop.trmanager.transaction(),
                               phase,
                               nodes)
    else:
        # repo is not locked, do not change any phases!
        # Inform the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(_('cannot lock source repo, skipping '
                               'local %s phase update\n') % phasestr)

def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if 'obsmarkers' in pushop.stepsdone:
        return
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        pushop.ui.debug('try to push obsolete markers to remote\n')
        rslts = []
        remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add('bookmarks')
    ui = pushop.ui
    remote = pushop.remote

    for b, old, new in pushop.outbookmarks:
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        if remote.pushkey('bookmarks', b, old, new):
            ui.status(bookmsgmap[action][0] % b)
        else:
            ui.warn(bookmsgmap[action][1] % b)
            # discovery can have set the value from an invalid entry
            if pushop.bkresult is not None:
                pushop.bkresult = 1

class pulloperation(object):
    """An object that represents a single pull operation

    Its purpose is to carry pull-related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
                 remotebookmarks=None, streamclonerequested=None):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # revision we try to pull (None is "all")
        self.heads = heads
        # bookmarks pulled explicitly
        self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
                                  for bookmark in bookmarks]
        # do we force pull?
        self.force = force
        # whether a streaming clone was requested
        self.streamclonerequested = streamclonerequested
        # transaction manager
        self.trmanager = None
        # set of changesets common to local and remote before the pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # remote bookmarks data
        self.remotebookmarks = remotebookmarks
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps already done
        self.stepsdone = set()
        # Whether we attempted a clone from pre-generated bundles.
        self.clonebundleattempted = False

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    @util.propertycache
    def canusebundle2(self):
        return not _forcebundle1(self)

    @util.propertycache
    def remotebundle2caps(self):
        return bundle2.bundle2caps(self.remote)

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()

class transactionmanager(object):
    """An object to manage the life cycle of a transaction

    It creates the transaction on demand and calls the appropriate hooks when
    closing the transaction."""
    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs['source'] = self.source
            self._tr.hookargs['url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()

def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
         streamclonerequested=None):
    """Fetch repository data from a remote.

    This is the main function used to retrieve data from a remote repository.

    ``repo`` is the local repository to clone into.
    ``remote`` is a peer instance.
    ``heads`` is an iterable of revisions we want to pull. ``None`` (the
    default) means to pull everything from the remote.
    ``bookmarks`` is an iterable of bookmarks requested to be pulled. By
    default, all remote bookmarks are pulled.
    ``opargs`` are additional keyword arguments to pass to ``pulloperation``
    initialization.
    ``streamclonerequested`` is a boolean indicating whether a "streaming
    clone" is requested. A "streaming clone" is essentially a raw file copy
    of revlogs from the server. This only works when the local repository is
    empty. The default value of ``None`` means to respect the server
    configuration for preferring stream clones.

    Returns the ``pulloperation`` created for this pull.
    """
    if opargs is None:
        opargs = {}
    pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
                           streamclonerequested=streamclonerequested, **opargs)
    if pullop.remote.local():
        missing = set(pullop.remote.requirements) - pullop.repo.supported
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    wlock = lock = None
    try:
        wlock = pullop.repo.wlock()
        lock = pullop.repo.lock()
        pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
        streamclone.maybeperformlegacystreamclone(pullop)
        # This should ideally be in _pullbundle2(). However, it needs to run
        # before discovery to avoid extra work.
        _maybeapplyclonebundle(pullop)
        _pulldiscovery(pullop)
        if pullop.canusebundle2:
            _pullbundle2(pullop)
        _pullchangeset(pullop)
        _pullphase(pullop)
        _pullbookmarks(pullop)
        _pullobsolete(pullop)
        pullop.trmanager.close()
    finally:
        lockmod.release(pullop.trmanager, lock, wlock)

    return pullop

# list of steps to perform discovery before pull
pulldiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pulldiscoverymapping = {}

def pulldiscovery(stepname):
    """decorator for functions performing discovery before pull

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pulldiscovery dictionary directly."""
    def dec(func):
        assert stepname not in pulldiscoverymapping
        pulldiscoverymapping[stepname] = func
        pulldiscoveryorder.append(stepname)
        return func
    return dec

def _pulldiscovery(pullop):
    """Run all discovery steps"""
    for stepname in pulldiscoveryorder:
        step = pulldiscoverymapping[stepname]
        step(pullop)

@pulldiscovery('b1:bookmarks')
def _pullbookmarkbundle1(pullop):
    """fetch bookmark data in the bundle1 case

    If not using bundle2, we have to fetch bookmarks before changeset
    discovery to reduce the chance and impact of race conditions."""
    if pullop.remotebookmarks is not None:
        return
    if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
        # all known bundle2 servers now support listkeys, but let's be nice
        # with new implementations.
        return
    pullop.remotebookmarks = pullop.remote.listkeys('bookmarks')


@pulldiscovery('changegroup')
def _pulldiscoverychangegroup(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; will be changed to handle all
    discovery at some point."""
    tmp = discovery.findcommonincoming(pullop.repo,
                                       pullop.remote,
                                       heads=pullop.heads,
                                       force=pullop.force)
    common, fetch, rheads = tmp
    nm = pullop.repo.unfiltered().changelog.nodemap
    if fetch and rheads:
        # If a remote head is filtered locally, let's drop it from the
        # unknown remote heads and put it back in common.
        #
        # This is a hackish solution to catch most "common but locally
        # hidden" situations. We do not perform discovery on the unfiltered
        # repository because it ends up doing a pathological amount of round
        # trips for a huge amount of changesets we do not care about.
        #
        # If a set of such "common but filtered" changesets exists on the
        # server but does not include a remote head, we'll not be able to
        # detect it.
        scommon = set(common)
        filteredrheads = []
        for n in rheads:
            if n in nm:
                if n not in scommon:
                    common.append(n)
            else:
                filteredrheads.append(n)
        if not filteredrheads:
            fetch = []
        rheads = filteredrheads
    pullop.common = common
    pullop.fetch = fetch
    pullop.rheads = rheads

def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data are changegroup."""
    kwargs = {'bundlecaps': caps20to10(pullop.repo)}

    # At the moment we don't do stream clones over bundle2. If that is
    # implemented then here's where the check for that will go.
    streaming = False

    # pulling changegroup
    pullop.stepsdone.add('changegroup')

    kwargs['common'] = pullop.common
    kwargs['heads'] = pullop.heads or pullop.rheads
    kwargs['cg'] = pullop.fetch
    if 'listkeys' in pullop.remotebundle2caps:
        kwargs['listkeys'] = ['phases']
        if pullop.remotebookmarks is None:
            # make sure to always include bookmark data when migrating
            # `hg incoming --bundle` to using this function.
            kwargs['listkeys'].append('bookmarks')

    # If this is a full pull / clone and the server supports the clone bundles
    # feature, tell the server whether we attempted a clone bundle. The
    # presence of this flag indicates the client supports clone bundles. This
    # will enable the server to treat clients that support clone bundles
    # differently from those that don't.
    if (pullop.remote.capable('clonebundles')
            and pullop.heads is None and list(pullop.common) == [nullid]):
        kwargs['cbattempted'] = pullop.clonebundleattempted

    if streaming:
        pullop.repo.ui.status(_('streaming all changes\n'))
    elif not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [nullid]:
            pullop.repo.ui.status(_("requesting all changes\n"))
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
        if obsolete.commonversion(remoteversions) is not None:
            kwargs['obsmarkers'] = True
            pullop.stepsdone.add('obsmarkers')
    _pullbundle2extraprepare(pullop, kwargs)
    bundle = pullop.remote.getbundle('pull', **pycompat.strkwargs(kwargs))
    try:
        op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
    except bundle2.AbortFromPart as exc:
        pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
        raise error.Abort(_('pull failed on remote'), hint=exc.hint)
    except error.BundleValueError as exc:
        raise error.Abort(_('missing support for %s') % exc)

    if pullop.fetch:
        results = [cg['return'] for cg in op.records['changegroup']]
        pullop.cgresult = bundle2.combinechangegroupresults(results)

    # processing phases change
    for namespace, value in op.records['listkeys']:
        if namespace == 'phases':
            _pullapplyphases(pullop, value)

    # processing bookmark update
    for namespace, value in op.records['listkeys']:
        if namespace == 'bookmarks':
            pullop.remotebookmarks = value

    # bookmark data were either already there or pulled in the bundle
    if pullop.remotebookmarks is not None:
        _pullbookmarks(pullop)

def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""
    pass

def _pullchangeset(pullop):
    """pull changesets from the remote into the local repo"""
    # We delay opening the transaction as late as possible so we
    # don't open a transaction for nothing, or break a future useful
    # rollback call
    if 'changegroup' in pullop.stepsdone:
        return
    pullop.stepsdone.add('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
        return
    tr = pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_("requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable('getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle('pull', common=pullop.common,
                                     heads=pullop.heads or pullop.rheads)
    elif pullop.heads is None:
        cg = pullop.remote.changegroup(pullop.fetch, 'pull')
    elif not pullop.remote.capable('changegroupsubset'):
        raise error.Abort(_("partial pull cannot be done because "
                            "other repository doesn't support "
                            "changegroupsubset."))
    else:
        cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
    pullop.cgresult, addednodes = cg.apply(pullop.repo, tr, 'pull',
                                           pullop.remote.url())

def _pullphase(pullop):
    # Get remote phases data from remote
    if 'phases' in pullop.stepsdone:
        return
    remotephases = pullop.remote.listkeys('phases')
    _pullapplyphases(pullop, remotephases)

1462 def _pullapplyphases(pullop, remotephases):
1462 def _pullapplyphases(pullop, remotephases):
1463 """apply phase movement from observed remote state"""
1463 """apply phase movement from observed remote state"""
1464 if 'phases' in pullop.stepsdone:
1464 if 'phases' in pullop.stepsdone:
1465 return
1465 return
1466 pullop.stepsdone.add('phases')
1466 pullop.stepsdone.add('phases')
1467 publishing = bool(remotephases.get('publishing', False))
1467 publishing = bool(remotephases.get('publishing', False))
1468 if remotephases and not publishing:
1468 if remotephases and not publishing:
1469 # remote is new and non-publishing
1469 # remote is new and non-publishing
1470 pheads, _dr = phases.analyzeremotephases(pullop.repo,
1470 pheads, _dr = phases.analyzeremotephases(pullop.repo,
1471 pullop.pulledsubset,
1471 pullop.pulledsubset,
1472 remotephases)
1472 remotephases)
1473 dheads = pullop.pulledsubset
1473 dheads = pullop.pulledsubset
1474 else:
1474 else:
1475 # Remote is old or publishing all common changesets
1475 # Remote is old or publishing all common changesets
1476 # should be seen as public
1476 # should be seen as public
1477 pheads = pullop.pulledsubset
1477 pheads = pullop.pulledsubset
1478 dheads = []
1478 dheads = []
1479 unfi = pullop.repo.unfiltered()
1479 unfi = pullop.repo.unfiltered()
1480 phase = unfi._phasecache.phase
1480 phase = unfi._phasecache.phase
1481 rev = unfi.changelog.nodemap.get
1481 rev = unfi.changelog.nodemap.get
1482 public = phases.public
1482 public = phases.public
1483 draft = phases.draft
1483 draft = phases.draft
1484
1484
1485 # exclude changesets already public locally and update the others
1485 # exclude changesets already public locally and update the others
1486 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
1486 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
1487 if pheads:
1487 if pheads:
1488 tr = pullop.gettransaction()
1488 tr = pullop.gettransaction()
1489 phases.advanceboundary(pullop.repo, tr, public, pheads)
1489 phases.advanceboundary(pullop.repo, tr, public, pheads)
1490
1490
1491 # exclude changesets already draft locally and update the others
1491 # exclude changesets already draft locally and update the others
1492 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
1492 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
1493 if dheads:
1493 if dheads:
1494 tr = pullop.gettransaction()
1494 tr = pullop.gettransaction()
1495 phases.advanceboundary(pullop.repo, tr, draft, dheads)
1495 phases.advanceboundary(pullop.repo, tr, draft, dheads)
1496
1496
1497 def _pullbookmarks(pullop):
1497 def _pullbookmarks(pullop):
1498 """process the remote bookmark information to update the local one"""
1498 """process the remote bookmark information to update the local one"""
1499 if 'bookmarks' in pullop.stepsdone:
1499 if 'bookmarks' in pullop.stepsdone:
1500 return
1500 return
1501 pullop.stepsdone.add('bookmarks')
1501 pullop.stepsdone.add('bookmarks')
1502 repo = pullop.repo
1502 repo = pullop.repo
1503 remotebookmarks = pullop.remotebookmarks
1503 remotebookmarks = pullop.remotebookmarks
1504 remotebookmarks = bookmod.unhexlifybookmarks(remotebookmarks)
1504 remotebookmarks = bookmod.unhexlifybookmarks(remotebookmarks)
1505 bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
1505 bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
1506 pullop.remote.url(),
1506 pullop.remote.url(),
1507 pullop.gettransaction,
1507 pullop.gettransaction,
1508 explicit=pullop.explicitbookmarks)
1508 explicit=pullop.explicitbookmarks)
1509
1509
def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    The `gettransaction` argument is a function that returns the pull
    transaction, creating one if necessary. We return the transaction to
    inform the calling code that a new transaction has been created (when
    applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    if 'obsmarkers' in pullop.stepsdone:
        return
    pullop.stepsdone.add('obsmarkers')
    tr = None
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        pullop.repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = pullop.remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = pullop.gettransaction()
            markers = []
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = util.b85decode(remoteobs[key])
                    version, newmarks = obsolete._readmarkers(data)
                    markers += newmarks
            if markers:
                pullop.repo.obsstore.add(tr, markers)
            pullop.repo.invalidatevolatilesets()
    return tr

def caps20to10(repo):
    """return a set with appropriate options to use bundle20 during getbundle"""
    caps = {'HG20'}
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
    caps.add('bundle2=' + urlreq.quote(capsblob))
    return caps

# List of names of steps to perform for a bundle2 for getbundle, order matters.
getbundle2partsorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
getbundle2partsmapping = {}

def getbundle2partsgenerator(stepname, idx=None):
    """decorator for function generating bundle2 part for getbundle

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, modify the getbundle2partsmapping dictionary directly."""
    def dec(func):
        assert stepname not in getbundle2partsmapping
        getbundle2partsmapping[stepname] = func
        if idx is None:
            getbundle2partsorder.append(stepname)
        else:
            getbundle2partsorder.insert(idx, stepname)
        return func
    return dec

def bundle2requested(bundlecaps):
    if bundlecaps is not None:
        return any(cap.startswith('HG2') for cap in bundlecaps)
    return False

def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
                    **kwargs):
    """Return chunks constituting a bundle's raw data.

    Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
    passed.

    Returns an iterator over raw chunks (of varying sizes).
    """
    kwargs = pycompat.byteskwargs(kwargs)
    usebundle2 = bundle2requested(bundlecaps)
    # bundle10 case
    if not usebundle2:
        if bundlecaps and not kwargs.get('cg', True):
            raise ValueError(_('request for bundle10 must include changegroup'))

        if kwargs:
            raise ValueError(_('unsupported getbundle arguments: %s')
                             % ', '.join(sorted(kwargs.keys())))
        outgoing = _computeoutgoing(repo, heads, common)
        bundler = changegroup.getbundler('01', repo, bundlecaps)
        return changegroup.getsubsetraw(repo, outgoing, bundler, source)

    # bundle20 case
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith('bundle2='):
            blob = urlreq.unquote(bcaps[len('bundle2='):])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)

    kwargs['heads'] = heads
    kwargs['common'] = common

    for name in getbundle2partsorder:
        func = getbundle2partsmapping[name]
        func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
             **pycompat.strkwargs(kwargs))

    return bundler.getchunks()

@getbundle2partsgenerator('changegroup')
def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
                              b2caps=None, heads=None, common=None, **kwargs):
    """add a changegroup part to the requested bundle"""
    cg = None
    if kwargs.get('cg', True):
        # build changegroup bundle here.
        version = '01'
        cgversions = b2caps.get('changegroup')
        if cgversions: # 3.1 and 3.2 ship with an empty value
            cgversions = [v for v in cgversions
                          if v in changegroup.supportedoutgoingversions(repo)]
            if not cgversions:
                raise ValueError(_('no common changegroup version'))
            version = max(cgversions)
        outgoing = _computeoutgoing(repo, heads, common)
        cg = changegroup.getlocalchangegroupraw(repo, source, outgoing,
                                                bundlecaps=bundlecaps,
                                                version=version)

    if cg:
        part = bundler.newpart('changegroup', data=cg)
        if cgversions:
            part.addparam('version', version)
        part.addparam('nbchanges', str(len(outgoing.missing)), mandatory=False)
        if 'treemanifest' in repo.requirements:
            part.addparam('treemanifest', '1')

@getbundle2partsgenerator('listkeys')
def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
                            b2caps=None, **kwargs):
    """add parts containing listkeys namespaces to the requested bundle"""
    listkeys = kwargs.get('listkeys', ())
    for namespace in listkeys:
        part = bundler.newpart('listkeys')
        part.addparam('namespace', namespace)
        keys = repo.listkeys(namespace).items()
        part.data = pushkey.encodekeys(keys)

@getbundle2partsgenerator('obsmarkers')
def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
                            b2caps=None, heads=None, **kwargs):
    """add an obsolescence markers part to the requested bundle"""
    if kwargs.get('obsmarkers', False):
        if heads is None:
            heads = repo.heads()
        subset = [c.node() for c in repo.set('::%ln', heads)]
        markers = repo.obsstore.relevantmarkers(subset)
        markers = sorted(markers)
        bundle2.buildobsmarkerspart(bundler, markers)

@getbundle2partsgenerator('hgtagsfnodes')
def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
                         b2caps=None, heads=None, common=None,
                         **kwargs):
    """Transfer the .hgtags filenodes mapping.

    Only values for heads in this bundle will be transferred.

    The part data consists of pairs of 20 byte changeset node and .hgtags
    filenodes raw values.
    """
    # Don't send unless:
    # - changesets are being exchanged,
    # - the client supports it.
    if not (kwargs.get('cg', True) and 'hgtagsfnodes' in b2caps):
        return

    outgoing = _computeoutgoing(repo, heads, common)
    bundle2.addparttagsfnodescache(repo, bundler, outgoing)

def _getbookmarks(repo, **kwargs):
    """Returns bookmark to node mapping.

    This function is primarily used to generate the `bookmarks` bundle2 part.
    It is a separate function in order to make it easy to wrap it
    in extensions. Passing `kwargs` to the function makes it easy to
    add new parameters in extensions.
    """

    return dict(bookmod.listbinbookmarks(repo))

def check_heads(repo, their_heads, context):
    """check if the heads of a repo have been modified

    Used by peer for unbundling.
    """
    heads = repo.heads()
    heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
    if not (their_heads == ['force'] or their_heads == heads or
            their_heads == ['hashed', heads_hash]):
        # someone else committed/pushed/unbundled while we
        # were transferring data
        raise error.PushRaced('repository changed while %s - '
                              'please try again' % context)

def unbundle(repo, cg, heads, source, url):
    """Apply a bundle to a repo.

    This function makes sure the repo is locked during the application and has
    a mechanism to check that no push race occurred between the creation of
    the bundle and its application.

    If the push was raced, a PushRaced exception is raised."""
    r = 0
    # need a transaction when processing a bundle2 stream
    # [wlock, lock, tr] - needs to be an array so nested functions can modify it
    lockandtr = [None, None, None]
    recordout = None
    # quick fix for output mismatch with bundle2 in 3.4
    captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture',
                                       False)
    if url.startswith('remote:http:') or url.startswith('remote:https:'):
        captureoutput = True
    try:
        # note: outside bundle1, 'heads' is expected to be empty and this
        # 'check_heads' call will be a no-op
        check_heads(repo, heads, 'uploading changes')
        # push can proceed
        if not isinstance(cg, bundle2.unbundle20):
            # legacy case: bundle1 (changegroup 01)
            txnname = "\n".join([source, util.hidepassword(url)])
            with repo.lock(), repo.transaction(txnname) as tr:
                r, addednodes = cg.apply(repo, tr, source, url)
        else:
            r = None
            try:
                def gettransaction():
                    if not lockandtr[2]:
                        lockandtr[0] = repo.wlock()
                        lockandtr[1] = repo.lock()
                        lockandtr[2] = repo.transaction(source)
                        lockandtr[2].hookargs['source'] = source
                        lockandtr[2].hookargs['url'] = url
                        lockandtr[2].hookargs['bundle2'] = '1'
                    return lockandtr[2]

                # Do greedy locking by default until we're satisfied with lazy
                # locking.
                if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
                    gettransaction()

                op = bundle2.bundleoperation(repo, gettransaction,
                                             captureoutput=captureoutput)
                try:
                    op = bundle2.processbundle(repo, cg, op=op)
                finally:
                    r = op.reply
                    if captureoutput and r is not None:
                        repo.ui.pushbuffer(error=True, subproc=True)
                        def recordout(output):
                            r.newpart('output', data=output, mandatory=False)
                if lockandtr[2] is not None:
                    lockandtr[2].close()
            except BaseException as exc:
                exc.duringunbundle2 = True
                if captureoutput and r is not None:
                    parts = exc._bundle2salvagedoutput = r.salvageoutput()
                    def recordout(output):
                        part = bundle2.bundlepart('output', data=output,
                                                  mandatory=False)
                        parts.append(part)
                raise
    finally:
        lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
        if recordout is not None:
            recordout(repo.ui.popbuffer())
    return r

def _maybeapplyclonebundle(pullop):
    """Apply a clone bundle from a remote, if possible."""

    repo = pullop.repo
    remote = pullop.remote

    if not repo.ui.configbool('ui', 'clonebundles', True):
        return

    # Only run if local repo is empty.
    if len(repo):
        return

    if pullop.heads:
        return

    if not remote.capable('clonebundles'):
        return

    res = remote._call('clonebundles')

    # If we call the wire protocol command, that's good enough to record the
    # attempt.
    pullop.clonebundleattempted = True

    entries = parseclonebundlesmanifest(repo, res)
    if not entries:
        repo.ui.note(_('no clone bundles available on remote; '
                       'falling back to regular clone\n'))
        return

    entries = filterclonebundleentries(repo, entries)
    if not entries:
        # There is a thundering herd concern here. However, if a server
        # operator doesn't advertise bundles appropriate for its clients,
        # they deserve what's coming. Furthermore, from a client's
        # perspective, no automatic fallback would mean not being able to
        # clone!
        repo.ui.warn(_('no compatible clone bundles available on server; '
                       'falling back to regular clone\n'))
        repo.ui.warn(_('(you may want to report this to the server '
                       'operator)\n'))
        return

    entries = sortclonebundleentries(repo.ui, entries)

    url = entries[0]['URL']
    repo.ui.status(_('applying clone bundle from %s\n') % url)
    if trypullbundlefromurl(repo.ui, repo, url):
        repo.ui.status(_('finished applying clone bundle\n'))
    # Bundle failed.
    #
    # We abort by default to avoid the thundering herd of
    # clients flooding a server that was expecting expensive
    # clone load to be offloaded.
    elif repo.ui.configbool('ui', 'clonebundlefallback', False):
        repo.ui.warn(_('falling back to normal clone\n'))
    else:
        raise error.Abort(_('error applying bundle'),
                          hint=_('if this error persists, consider contacting '
                                 'the server operator or disable clone '
                                 'bundles via '
                                 '"--config ui.clonebundles=false"'))

def parseclonebundlesmanifest(repo, s):
    """Parses the raw text of a clone bundles manifest.

    Returns a list of dicts. The dicts have a ``URL`` key corresponding
    to the URL and other keys are the attributes for the entry.
    """
    m = []
    for line in s.splitlines():
        fields = line.split()
        if not fields:
            continue
        attrs = {'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split('=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            attrs[key] = value

            # Parse BUNDLESPEC into components. This makes client-side
            # preferences easier to specify since you can prefer a single
            # component of the BUNDLESPEC.
            if key == 'BUNDLESPEC':
                try:
                    comp, version, params = parsebundlespec(repo, value,
                                                            externalnames=True)
                    attrs['COMPRESSION'] = comp
                    attrs['VERSION'] = version
                except error.InvalidBundleSpecification:
                    pass
                except error.UnsupportedBundleSpecification:
                    pass

        m.append(attrs)

    return m

def filterclonebundleentries(repo, entries):
    """Remove incompatible clone bundle manifest entries.

    Accepts a list of entries parsed with ``parseclonebundlesmanifest``
    and returns a new list consisting of only the entries that this client
    should be able to apply.

    There is no guarantee we'll be able to apply all returned entries because
    the metadata we use to filter on may be missing or wrong.
    """
    newentries = []
    for entry in entries:
        spec = entry.get('BUNDLESPEC')
        if spec:
            try:
                parsebundlespec(repo, spec, strict=True)
            except error.InvalidBundleSpecification as e:
                repo.ui.debug(str(e) + '\n')
                continue
            except error.UnsupportedBundleSpecification as e:
                repo.ui.debug('filtering %s because unsupported bundle '
                              'spec: %s\n' % (entry['URL'], str(e)))
                continue

        if 'REQUIRESNI' in entry and not sslutil.hassni:
            repo.ui.debug('filtering %s because SNI not supported\n' %
                          entry['URL'])
            continue

        newentries.append(entry)

    return newentries

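The filtering pass keeps an entry unless something about it rules the client out: a ``BUNDLESPEC`` the client cannot parse or apply, or a ``REQUIRESNI`` requirement on a client without SNI support. A compact sketch of that behavior (the ``KNOWN_SPECS`` set and ``filter_entries`` helper are hypothetical stand-ins for ``parsebundlespec`` and the SNI check):

```python
# Assumed for the sketch; the real check is parsebundlespec(strict=True).
KNOWN_SPECS = {'gzip-v1', 'gzip-v2', 'zstd-v2'}

def filter_entries(entries, has_sni=True):
    kept = []
    for entry in entries:
        spec = entry.get('BUNDLESPEC')
        if spec and spec not in KNOWN_SPECS:
            continue  # unsupported spec: drop, as the real code does
        if entry.get('REQUIRESNI') == 'true' and not has_sni:
            continue  # server requires SNI this client lacks
        kept.append(entry)
    return kept

entries = [
    {'URL': 'good', 'BUNDLESPEC': 'zstd-v2'},
    {'URL': 'bad', 'BUNDLESPEC': 'lzma-v9'},
    {'URL': 'nospec'},
]
kept = filter_entries(entries)
# Entries with an unknown spec are dropped; an entry with no
# BUNDLESPEC at all is kept, matching the "may be missing" caveat
# in the docstring above.
```

Note the asymmetry the docstring warns about: an absent ``BUNDLESPEC`` passes the filter, so a kept entry is not guaranteed applicable.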
class clonebundleentry(object):
    """Represents an item in a clone bundles manifest.

    This rich class is needed to support sorting since sorted() in Python 3
    doesn't support ``cmp`` and our comparison is complex enough that ``key=``
    won't work.
    """

    def __init__(self, value, prefers):
        self.value = value
        self.prefers = prefers

    def _cmp(self, other):
        for prefkey, prefvalue in self.prefers:
            avalue = self.value.get(prefkey)
            bvalue = other.value.get(prefkey)

            # Special case for b missing attribute and a matches exactly.
            if avalue is not None and bvalue is None and avalue == prefvalue:
                return -1

            # Special case for a missing attribute and b matches exactly.
            if bvalue is not None and avalue is None and bvalue == prefvalue:
                return 1

            # We can't compare unless attribute present on both.
            if avalue is None or bvalue is None:
                continue

            # Same values should fall back to next attribute.
            if avalue == bvalue:
                continue

            # Exact matches come first.
            if avalue == prefvalue:
                return -1
            if bvalue == prefvalue:
                return 1

            # Fall back to next attribute.
            continue

        # If we got here we couldn't sort by attributes and prefers. Fall
        # back to index order.
        return 0

    def __lt__(self, other):
        return self._cmp(other) < 0

    def __gt__(self, other):
        return self._cmp(other) > 0

    def __eq__(self, other):
        return self._cmp(other) == 0

    def __le__(self, other):
        return self._cmp(other) <= 0

    def __ge__(self, other):
        return self._cmp(other) >= 0

    def __ne__(self, other):
        return self._cmp(other) != 0

def sortclonebundleentries(ui, entries):
    prefers = ui.configlist('ui', 'clonebundleprefers')
    if not prefers:
        return list(entries)

    prefers = [p.split('=', 1) for p in prefers]

    items = sorted(clonebundleentry(v, prefers) for v in entries)
    return [i.value for i in items]

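The effect of the preference-driven comparison above: for each ``key=value`` preference in order, an entry whose attribute matches the preferred value sorts ahead of one that doesn't, and ties fall through to the next preference (or to original manifest order). A self-contained sketch of that ordering, reusing the same ``_cmp`` decision sequence outside the class (the entries and the ``COMPRESSION=zstd`` preference are made up for illustration):

```python
import functools

def _cmp(a, b, prefers):
    # Mirrors clonebundleentry._cmp over plain dicts: earlier
    # preferences win; a matching value sorts first; ties continue.
    for key, value in prefers:
        av, bv = a.get(key), b.get(key)
        if av is not None and bv is None and av == value:
            return -1
        if bv is not None and av is None and bv == value:
            return 1
        if av is None or bv is None:
            continue
        if av == bv:
            continue
        if av == value:
            return -1
        if bv == value:
            return 1
    return 0  # indistinguishable: keep manifest order (sorted() is stable)

entries = [
    {'URL': 'a', 'COMPRESSION': 'gzip'},
    {'URL': 'b', 'COMPRESSION': 'zstd'},
]
prefers = [('COMPRESSION', 'zstd')]
ordered = sorted(entries, key=functools.cmp_to_key(
    lambda a, b: _cmp(a, b, prefers)))
# The zstd entry now sorts first because it matches the preference.
```

The real code wraps each entry in ``clonebundleentry`` and relies on the rich comparison methods instead of ``functools.cmp_to_key``, since Python 3's ``sorted()`` dropped the ``cmp`` argument.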
def trypullbundlefromurl(ui, repo, url):
    """Attempt to apply a bundle from a URL."""
    with repo.lock(), repo.transaction('bundleurl') as tr:
        try:
            fh = urlmod.open(ui, url)
            cg = readbundle(ui, fh, 'stream')

            if isinstance(cg, bundle2.unbundle20):
                bundle2.applybundle(repo, cg, tr, 'clonebundles', url)
            elif isinstance(cg, streamclone.streamcloneapplier):
                cg.apply(repo)
            else:
                cg.apply(repo, tr, 'clonebundles', url)
            return True
        except urlerr.httperror as e:
            ui.warn(_('HTTP error fetching bundle: %s\n') % str(e))
        except urlerr.urlerror as e:
            ui.warn(_('error fetching bundle: %s\n') % e.reason)

    return False