bundle2: record changegroup data in 'op.records' (API)...
Martin von Zweigbergk -
r33030:3e102a8d default
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application-agnostic way. It consists of a sequence of "parts"
that are handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows:

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: int32

  The total number of bytes used by the parameters

:params value: arbitrary number of bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space-separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  Names MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage
    any crazy usage.
  - Textual data allow easy human inspection of a bundle2 header in case of
    trouble.

  Any application-level options MUST go into a bundle2 part instead.

Payload part
------------------------

The binary format is as follows:

:header size: int32

  The total number of bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

  The header defines how to interpret the part. It contains two pieces of
  data: the part type, and the part parameters.

  The part type is used to route to an application-level handler that can
  interpret the payload.

  Part parameters are passed to the application-level handler. They are
  meant to convey information that will help the application-level object
  interpret the part payload.

  The binary format of the header is as follows:

  :typesize: (one byte)

  :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

  :partid: A 32-bit integer (unique in the bundle) that can be used to refer
           to this part.

  :parameters:

    A part's parameters may have arbitrary content; the binary structure is::

        <mandatory-count><advisory-count><param-sizes><param-data>

    :mandatory-count: 1 byte, number of mandatory parameters

    :advisory-count: 1 byte, number of advisory parameters

    :param-sizes:

      N couples of bytes, where N is the total number of parameters. Each
      couple contains (<size-of-key>, <size-of-value>) for one parameter.

    :param-data:

      A blob of bytes from which each parameter key and value can be
      retrieved using the list of size couples stored in the previous
      field.

    Mandatory parameters come first, then the advisory ones.

    Each parameter's key MUST be unique within the part.

:payload:

  The payload is a series of `<chunksize><chunkdata>`.

  `chunksize` is an int32, `chunkdata` are plain bytes (as much as
  `chunksize` says). The payload is concluded by a zero-size chunk.

  The current implementation always produces either zero or one chunk.
  This is an implementation limitation that will ultimately be lifted.

  `chunksize` can be negative to trigger special case processing. No such
  processing is in place yet.

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know whether a part is mandatory or advisory. If the
part type contains any uppercase character, it is considered mandatory. When
no handler is known for a mandatory part, the process is aborted and an
exception is raised. If the part is advisory and no handler is known, the
part is ignored. When the process is aborted, the full bundle is still read
from the stream to keep the channel usable. But none of the parts read after
an abort are processed. In the future, dropping the stream may become an
option for channels we do not care to preserve.
"""
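As a quick illustration of the stream-level parameter framing described above (an int32 byte count followed by a space-separated blob of urlquoted entries), here is a minimal standalone sketch in modern Python. The function names `encodestreamparams`/`decodestreamparams` are hypothetical, and `urllib.parse` stands in for Mercurial's `util.urlreq` wrapper; this is not Mercurial's own implementation.

```python
# Standalone sketch of the stream-level parameter framing: an int32 byte
# count followed by a space-separated blob of urlquoted `<name>[=<value>]`
# entries. Names here are local to this sketch, not Mercurial APIs.
import struct
import urllib.parse

def encodestreamparams(params):
    """Serialize a dict of stream parameters into <int32 size><blob>."""
    chunks = []
    for name, value in sorted(params.items()):
        chunk = urllib.parse.quote(name)
        if value:
            chunk += '=' + urllib.parse.quote(value)
        chunks.append(chunk)
    blob = ' '.join(chunks).encode('ascii')
    return struct.pack('>i', len(blob)) + blob

def decodestreamparams(data):
    """Read the framed blob back into a dict."""
    size = struct.unpack('>i', data[:4])[0]
    params = {}
    for chunk in data[4:4 + size].decode('ascii').split(' '):
        if not chunk:
            continue
        name, _sep, value = chunk.partition('=')
        params[urllib.parse.unquote(name)] = urllib.parse.unquote(value)
    return params
```

Note that the framing itself does not distinguish mandatory from advisory parameters; that distinction lives entirely in the case of the first letter of each name.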

from __future__ import absolute_import

import errno
import re
import string
import struct
import sys

from .i18n import _
from . import (
    changegroup,
    error,
    obsolete,
    pushkey,
    pycompat,
    tags,
    url,
    util,
)

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

preferedchunksize = 4096

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-output: %s\n' % message)

def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-input: %s\n' % message)

def validateparttype(parttype):
    """raise ValueError if a parttype contains an invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>' + ('BB' * nbparams)

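The dynamically built format above pairs with the `<param-sizes>` field from the docstring: for N parameters it reads N couples of unsigned bytes. A small self-contained illustration (the lowercase helper name is local to this sketch):

```python
# Illustration of the dynamic struct format for part parameter sizes:
# N (size-of-key, size-of-value) byte couples, read with '>' + 'BB' * N.
import struct

def makefpartparamsizes(nbparams):
    # mirrors _makefpartparamsizes above
    return '>' + ('BB' * nbparams)

# Two parameters whose keys/values are 5/20 and 5/1 bytes long would
# serialize their sizes as four unsigned bytes.
fmt = makefpartparamsizes(2)
packed = struct.pack(fmt, 5, 20, 5, 1)
```
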
parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`, where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__

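The record API described in the docstring above can be exercised with a simplified standalone re-implementation (reply tracking omitted, class name local to this sketch):

```python
# Standalone sketch of the `unbundlerecords` API: records are added per
# category, indexed by category, and iterated in chronological order.
class recordsketch(object):
    def __init__(self):
        self._categories = {}
        self._sequences = []

    def add(self, category, entry):
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

r = recordsketch()
r.add('changegroup', {'return': 1})
r.add('pushkey', {'namespace': 'phases'})
```
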
class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None
        self.captureoutput = captureoutput

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def applybundle(repo, unbundler, tr, source=None, url=None, op=None):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    tr.hookargs['bundle2'] = '1'
    if source is not None and 'source' not in tr.hookargs:
        tr.hookargs['source'] = source
    if url is not None and 'url' not in tr.hookargs:
        tr.hookargs['url'] = url
    return processbundle(repo, unbundler, lambda: tr, op=op)

def processbundle(repo, unbundler, transactiongetter=None, op=None):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part, then searches for and uses the proper
    handling code to process the part. Parts are processed in order.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = ['bundle2-input-bundle:']
        if unbundler.params:
            msg.append(' %i params' % len(unbundler.params))
        if op.gettransaction is None or op.gettransaction is _notransaction:
            msg.append(' no-transaction')
        else:
            msg.append(' with-transaction')
        msg.append('\n')
        repo.ui.debug(''.join(msg))
    iterparts = enumerate(unbundler.iterparts())
    part = None
    nbpart = 0
    try:
        for nbpart, part in iterparts:
            _processpart(op, part)
    except Exception as exc:
        # Any exceptions seeking to the end of the bundle at this point are
        # almost certainly related to the underlying stream being bad.
        # And, chances are that the exception we're handling is related to
        # getting in that bad state. So, we swallow the seeking error and
        # re-raise the original error.
        seekerror = False
        try:
            for nbpart, part in iterparts:
                # consume the bundle content
                part.seek(0, 2)
        except Exception:
            seekerror = True

        # Small hack to let caller code distinguish exceptions from bundle2
        # processing from processing the old format. This is mostly needed
        # to handle different return codes to unbundle according to the
        # type of bundle. We should probably clean up or drop this return
        # code craziness in a future version.
        exc.duringunbundle2 = True
        salvaged = []
        replycaps = None
        if op.reply is not None:
            salvaged = op.reply.salvageoutput()
            replycaps = op.reply.capabilities
        exc._replycaps = replycaps
        exc._bundle2salvagedoutput = salvaged

        # Re-raising from a variable loses the original stack. So only use
        # that form if we need to.
        if seekerror:
            raise exc
        else:
            raise
    finally:
        repo.ui.debug('bundle2-input-bundle: %i parts total\n' % nbpart)

    return op

def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function
    exits (even if an exception is raised)."""
    status = 'unknown' # used by debug output
    hardabort = False
    try:
        try:
            handler = parthandlermapping.get(part.type)
            if handler is None:
                status = 'unsupported-type'
                raise error.BundleUnknownFeatureError(parttype=part.type)
            indebug(op.ui, 'found a handler for part %r' % part.type)
            unknownparams = part.mandatorykeys - handler.params
            if unknownparams:
                unknownparams = list(unknownparams)
                unknownparams.sort()
                status = 'unsupported-params (%s)' % unknownparams
                raise error.BundleUnknownFeatureError(parttype=part.type,
                                                      params=unknownparams)
            status = 'supported'
        except error.BundleUnknownFeatureError as exc:
            if part.mandatory: # mandatory parts
                raise
            indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
            return # skip to part processing
        finally:
            if op.ui.debugflag:
                msg = ['bundle2-input-part: "%s"' % part.type]
                if not part.mandatory:
                    msg.append(' (advisory)')
                nbmp = len(part.mandatorykeys)
                nbap = len(part.params) - nbmp
                if nbmp or nbap:
                    msg.append(' (params:')
                    if nbmp:
                        msg.append(' %i mandatory' % nbmp)
                    if nbap:
                        msg.append(' %i advisory' % nbap)
                    msg.append(')')
                msg.append(' %s\n' % status)
                op.ui.debug(''.join(msg))

        # handler is called outside the above try block so that we don't
        # risk catching KeyErrors from anything other than the
        # parthandlermapping lookup (any KeyError raised by handler()
        # itself represents a defect of a different variety).
        output = None
        if op.captureoutput and op.reply is not None:
            op.ui.pushbuffer(error=True, subproc=True)
            output = ''
        try:
            handler(op, part)
        finally:
            if output is not None:
                output = op.ui.popbuffer()
            if output:
                outpart = op.reply.newpart('output', data=output,
                                           mandatory=False)
                outpart.addparam('in-reply-to', str(part.id), mandatory=False)
    # If exiting or interrupted, do not attempt to seek the stream in the
    # finally block below. This makes abort faster.
    except (SystemExit, KeyboardInterrupt):
        hardabort = True
        raise
    finally:
        # consume the part content to not corrupt the stream.
        if not hardabort:
            part.seek(0, 2)


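The dispatch logic above hinges on the naming convention from the module docstring: handler lookup is done on the lower-cased part type, while any uppercase character in the received type marks the part as mandatory. A minimal sketch of that rule (helper names are local to this sketch):

```python
# Sketch of the mandatory/advisory rule applied during part processing:
# lookup is case-insensitive, mandatoriness is carried by the case of the
# received part type.
def ismandatory(parttype):
    # "If the part type contains any uppercase character, it is
    # considered mandatory."
    return any(c.isupper() for c in parttype)

def lookupkey(parttype):
    # handlers are registered and looked up in lower case
    return parttype.lower()
```
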
def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
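The caps blob format handled above (one percent-quoted capability per line, with optional comma-separated values after `=`) can be illustrated with a standalone sketch. This mirrors `decodecaps`/`encodecaps` but is not the Mercurial implementation; the function names here are illustrative:

```python
from urllib.parse import quote, unquote

def encode_caps(caps):
    # one capability per line, sorted; values joined by ',' after '='
    chunks = []
    for name in sorted(caps):
        line = quote(name)
        if caps[name]:
            line = "%s=%s" % (line, ','.join(quote(v) for v in caps[name]))
        chunks.append(line)
    return '\n'.join(chunks)

def decode_caps(blob):
    # values always come back as a list, possibly empty
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        caps[unquote(key)] = [unquote(v) for v in vals]
    return caps

caps = {'HG20': [], 'changegroup': ['01', '02']}
# round-trip: encoding then decoding recovers the original dictionary
assert decode_caps(encode_caps(caps)) == caps
```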

bundletypes = {
    "": ("", 'UN'), # only when using unbundle on ssh and old http servers
                    # since the unification ssh accepts a header but there
                    # is no capability signaling it.
    "HG20": (), # special-cased below
    "HG10UN": ("HG10UN", 'UN'),
    "HG10BZ": ("HG10", 'BZ'),
    "HG10GZ": ("HG10GZ", 'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add a stream level parameter and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype('UN')
        self._compopts = None

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, 'UN'):
            return
        assert not any(n.lower() == 'compression' for n, v in self._params)
        self.addparam('Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container, so any failure to
        properly initialize the part after calling ``newpart`` should result
        in a failure of the whole bundling process.

        You can still fall back to manually creating and adding the part if
        you need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(' (%i params)' % len(self._params))
            msg.append(' %i parts total\n' % len(self._parts))
            self.ui.debug(''.join(msg))
        outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, 'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(self._getcorechunk(),
                                                     self._compopts):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)
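The stream-parameter chunk built by `_paramchunk` (and parsed back by `_processallparams` below) is a space-separated list of percent-quoted `name` or `name=value` entries. A standalone sketch of that encoding, under the assumption that order is preserved (names here are illustrative, not the Mercurial API):

```python
from urllib.parse import quote, unquote

def encode_params(params):
    # params is a list of (name, value) pairs; value may be None
    blocks = []
    for name, value in params:
        block = quote(name)
        if value is not None:
            block = '%s=%s' % (block, quote(value))
        blocks.append(block)
    return ' '.join(blocks)

def decode_params(blob):
    # split on spaces, then on the first '='; missing values become None
    out = []
    for p in blob.split(' '):
        pair = [unquote(i) for i in p.split('=', 1)]
        if len(pair) < 2:
            pair.append(None)
        out.append(tuple(pair))
    return out

params = [('Compression', 'BZ'), ('somefeature', None)]
# round-trip: decoding the encoded blob recovers the original pairs
assert decode_params(encode_params(params)) == params
```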

    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, 'start of parts')
        for part in self._parts:
            outdebug(self.ui, 'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, 'end of bundle')
        yield _pack(_fpartheadersize, 0)


    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith('output'):
                salvaged.append(part.copy())
        return salvaged


class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)

def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != 'HG':
        raise error.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, 'start processing of %s stream' % magicstring)
    return unbundler

class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    _magicstring = 'HG20'

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        self._compengine = util.compengines.forbundletype('UN')
        self._compressed = None
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, 'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            params = self._processallparams(params)
        return params

    def _processallparams(self, paramsblock):
        """process and apply all parameters in a stream-level blob"""
        params = util.sortdict()
        for p in paramsblock.split(' '):
            p = p.split('=', 1)
            p = [urlreq.unquote(i) for i in p]
            if len(p) < 2:
                p.append(None)
            self._processparam(*p)
            params[p[0]] = p[1]
        return params


    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory; this function will raise a BundleUnknownFeatureError when
        one is unknown.

        Note: no options are currently supported. Any input will be either
        ignored or cause a failure.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        try:
            handler = b2streamparamsmap[name.lower()]
        except KeyError:
            if name[0].islower():
                indebug(self.ui, "ignoring unknown parameter %r" % name)
            else:
                raise error.BundleUnknownFeatureError(params=(name,))
        else:
            handler(self, name, value)

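The advisory/mandatory convention above can be sketched in isolation: a lower-case first letter means "ignore if unknown", an upper-case one means "fail if unknown". The `handlers` dict and function names are illustrative, not part of the Mercurial API:

```python
# hypothetical handler registry; lookup is always on the lower-cased name
handlers = {'compression': lambda value: ('decompress', value)}

def process_param(name, value):
    handler = handlers.get(name.lower())
    if handler is not None:
        return handler(value)
    if name[0].islower():
        return None  # advisory parameter: silently ignored
    # mandatory parameter with no handler: abort processing
    raise ValueError('unsupported mandatory parameter: %s' % name)
```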
    def _forwardchunks(self):
        """utility to transfer a bundle2 as binary

        This is made necessary by the fact the 'getbundle' command over 'ssh'
        has no way to know when the reply ends, relying on the bundle being
        interpreted to find its end. This is terrible and we are sorry, but we
        needed to move forward to get general delta enabled.
        """
        yield self._magicstring
        assert 'params' not in vars(self)
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        yield _pack(_fstreamparamsize, paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            self._processallparams(params)
            yield params
        assert self._compengine.bundletype == 'UN'
        # From there, payload might need to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        emptycount = 0
        while emptycount < 2:
            # so we can brainlessly loop
            assert _fpartheadersize == _fpayloadsize
            size = self._unpack(_fpartheadersize)[0]
            yield _pack(_fpartheadersize, size)
            if size:
                emptycount = 0
            else:
                emptycount += 1
                continue
            if size == flaginterrupt:
                continue
            elif size < 0:
                raise error.BundleValueError('negative chunk size: %i' % size)
            yield self._readexact(size)


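The `emptycount` loop above relies on size-prefixed chunks where two consecutive empty sizes mark the end of the stream. A standalone reader sketch of that framing (using a plain `>i` format as a stand-in for `_fpartheadersize`/`_fpayloadsize`, and omitting the interrupt flag):

```python
import io
import struct

FMT = '>i'  # assumed stand-in for _fpartheadersize / _fpayloadsize

def iter_frames(fp):
    # read size-prefixed chunks until two consecutive empty sizes,
    # the end-of-stream marker the loop above looks for
    emptycount = 0
    while emptycount < 2:
        size = struct.unpack(FMT, fp.read(4))[0]
        if size == 0:
            emptycount += 1
            continue
        emptycount = 0
        if size < 0:
            raise ValueError('negative chunk size: %i' % size)
        yield fp.read(size)

data = (struct.pack(FMT, 3) + b'foo' + struct.pack(FMT, 0) +
        struct.pack(FMT, 3) + b'bar' + struct.pack(FMT, 0) +
        struct.pack(FMT, 0))
assert list(iter_frames(io.BytesIO(data))) == [b'foo', b'bar']
```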
    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        # From there, payload needs to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        indebug(self.ui, 'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            part.seek(0, 2)
            headerblock = self._readpartheader()
        indebug(self.ui, 'end of bundle2 stream')

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        self.params # load params
        return self._compressed

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

formatmap = {'20': unbundle20}

b2streamparamsmap = {}

def b2streamparamhandler(name):
    """register a handler for a stream level parameter"""
    def decorator(func):
        # guard against double registration of the same parameter name
        assert name not in b2streamparamsmap
        b2streamparamsmap[name] = func
        return func
    return decorator

@b2streamparamhandler('compression')
def processcompression(unbundler, param, value):
    """read compression parameter and install payload decompression"""
    if value not in util.compengines.supportedbundletypes:
        raise error.BundleUnknownFeatureError(params=(param,),
                                              values=(value,))
    unbundler._compengine = util.compengines.forbundletype(value)
    if value is not None:
        unbundler._compressed = True

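The registration-decorator pattern used by `b2streamparamhandler` can be shown in miniature: a module-level dict maps parameter names to their handler functions, and the decorator rejects double registration. Names here are illustrative, not the Mercurial API:

```python
# hypothetical handler registry, populated at import time by the decorator
handlers = {}

def handler(name):
    def decorator(func):
        assert name not in handlers  # no double registration
        handlers[name] = func
        return func
    return decorator

@handler('compression')
def process_compression(value):
    # stand-in body; the real handler installs a decompressor
    return 'decompress with %s' % value
```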
class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. Remote side
    should be able to safely ignore the advisory ones.

    Neither data nor parameters can be modified after the generation has
    begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data='', mandatory=True):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise error.ProgrammingError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def __repr__(self):
        cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
        return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
                % (cls, id(self), self.id, self.type, self.mandatory))

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(self.type, self._mandatoryparams,
                              self._advisoryparams, self._data, self.mandatory)

    # methods used to define the part content
    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        """add a parameter to the part

        If 'mandatory' is set to True, the remote handler must claim support
        for this parameter or the unbundling will be aborted.

        The 'name' and 'value' cannot exceed 255 bytes each.
        """
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

927 # methods used to generates the bundle2 stream
927 # methods used to generates the bundle2 stream
928 def getchunks(self, ui):
928 def getchunks(self, ui):
929 if self._generated is not None:
929 if self._generated is not None:
930 raise error.ProgrammingError('part can only be consumed once')
930 raise error.ProgrammingError('part can only be consumed once')
931 self._generated = False
931 self._generated = False
932
932
933 if ui.debugflag:
933 if ui.debugflag:
934 msg = ['bundle2-output-part: "%s"' % self.type]
934 msg = ['bundle2-output-part: "%s"' % self.type]
935 if not self.mandatory:
935 if not self.mandatory:
936 msg.append(' (advisory)')
936 msg.append(' (advisory)')
937 nbmp = len(self.mandatoryparams)
937 nbmp = len(self.mandatoryparams)
938 nbap = len(self.advisoryparams)
938 nbap = len(self.advisoryparams)
939 if nbmp or nbap:
939 if nbmp or nbap:
940 msg.append(' (params:')
940 msg.append(' (params:')
941 if nbmp:
941 if nbmp:
942 msg.append(' %i mandatory' % nbmp)
942 msg.append(' %i mandatory' % nbmp)
943 if nbap:
943 if nbap:
944 msg.append(' %i advisory' % nbmp)
944 msg.append(' %i advisory' % nbmp)
945 msg.append(')')
945 msg.append(')')
946 if not self.data:
946 if not self.data:
947 msg.append(' empty payload')
947 msg.append(' empty payload')
948 elif util.safehasattr(self.data, 'next'):
948 elif util.safehasattr(self.data, 'next'):
949 msg.append(' streamed payload')
949 msg.append(' streamed payload')
950 else:
950 else:
951 msg.append(' %i bytes payload' % len(self.data))
951 msg.append(' %i bytes payload' % len(self.data))
952 msg.append('\n')
952 msg.append('\n')
953 ui.debug(''.join(msg))
953 ui.debug(''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, 'part %s: "%s"' % (self.id, parttype))
        ## parttype
        header = [_pack(_fparttypesize, len(parttype)),
                  parttype, _pack(_fpartid, self.id),
                 ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        outdebug(ui, 'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, 'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug('bundle2-generatorexit\n')
            raise
        except BaseException as exc:
            # backup exception data for later
            ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
                     % exc)
            tb = sys.exc_info()[2]
            msg = 'unexpected error: %s' % exc
            interpart = bundlepart('error:abort', [('message', msg)],
                                   mandatory=False)
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, 'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            pycompat.raisewithtb(exc, tb)
        # end of payload
        outdebug(ui, 'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True

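The framing emitted by getchunks() above can be sketched as a small standalone round-trip. All names here are hypothetical stand-ins; it only assumes the layout visible in the method: a length-prefixed header, then length-prefixed payload chunks terminated by a zero-size chunk (with a signed size field, so -1 can flag an interrupt).

```python
import struct

# Hypothetical stand-ins for bundle2's format constants: signed
# big-endian 32-bit length prefixes for the header and each payload
# chunk (signed so -1 can flag an interrupt and 0 can terminate).
_fpartheadersize = '>i'
_fpayloadsize = '>i'

def frame_part(header, payload_chunks):
    """Yield the on-the-wire framing used by getchunks():
    header size, header, then length-prefixed chunks ending with 0."""
    yield struct.pack(_fpartheadersize, len(header))
    yield header
    for chunk in payload_chunks:
        yield struct.pack(_fpayloadsize, len(chunk))
        yield chunk
    yield struct.pack(_fpayloadsize, 0)  # closing (empty) payload chunk

def read_part(stream):
    """Parse one framed part back out of a byte string."""
    pos = 0
    def take(n):
        nonlocal pos
        data = stream[pos:pos + n]
        pos += n
        return data
    headersize = struct.unpack(_fpartheadersize, take(4))[0]
    header = take(headersize)
    chunks = []
    while True:
        size = struct.unpack(_fpayloadsize, take(4))[0]
        if size == 0:  # end-of-payload marker
            break
        chunks.append(take(size))
    return header, b''.join(chunks)

wire = b''.join(frame_part(b'CHANGEGROUP', [b'abc', b'defg']))
header, payload = read_part(wire)
```

This mirrors only the length-prefix discipline, not the full part header layout (type, id, parameters) built earlier in the method.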
    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


flaginterrupt = -1

class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows transmitting an exception raised on the producer side during
    part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):

        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' opening out of band context\n')
        indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, 'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        _processpart(op, part)
        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' closing out of band context\n')

class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise error.ProgrammingError('no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable('no repo access from stream interruption')
1101 class unbundlepart(unpackermixin):
1101 class unbundlepart(unpackermixin):
1102 """a bundle part read from a bundle"""
1102 """a bundle part read from a bundle"""
1103
1103
1104 def __init__(self, ui, header, fp):
1104 def __init__(self, ui, header, fp):
1105 super(unbundlepart, self).__init__(fp)
1105 super(unbundlepart, self).__init__(fp)
1106 self._seekable = (util.safehasattr(fp, 'seek') and
1106 self._seekable = (util.safehasattr(fp, 'seek') and
1107 util.safehasattr(fp, 'tell'))
1107 util.safehasattr(fp, 'tell'))
1108 self.ui = ui
1108 self.ui = ui
1109 # unbundle state attr
1109 # unbundle state attr
1110 self._headerdata = header
1110 self._headerdata = header
1111 self._headeroffset = 0
1111 self._headeroffset = 0
1112 self._initialized = False
1112 self._initialized = False
1113 self.consumed = False
1113 self.consumed = False
1114 # part data
1114 # part data
1115 self.id = None
1115 self.id = None
1116 self.type = None
1116 self.type = None
1117 self.mandatoryparams = None
1117 self.mandatoryparams = None
1118 self.advisoryparams = None
1118 self.advisoryparams = None
1119 self.params = None
1119 self.params = None
1120 self.mandatorykeys = ()
1120 self.mandatorykeys = ()
1121 self._payloadstream = None
1121 self._payloadstream = None
1122 self._readheader()
1122 self._readheader()
1123 self._mandatory = None
1123 self._mandatory = None
1124 self._chunkindex = [] #(payload, file) position tuples for chunk starts
1124 self._chunkindex = [] #(payload, file) position tuples for chunk starts
1125 self._pos = 0
1125 self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to set up all logic related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = util.sortdict(self.mandatoryparams)
        self.params.update(self.advisoryparams)
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, 'Must start with chunk 0'
            self._chunkindex.append((0, self._tellfp()))
        else:
            assert chunknum < len(self._chunkindex), \
                   'Unknown chunk %d' % chunknum
            self._seekfp(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]
        payloadsize = self._unpack(_fpayloadsize)[0]
        indebug(self.ui, 'payload chunk size: %i' % payloadsize)
        while payloadsize:
            if payloadsize == flaginterrupt:
                # interruption detection, the handler will now read a
                # single part and process it.
                interrupthandler(self.ui, self._fp)()
            elif payloadsize < 0:
                msg = 'negative payload chunk size: %i' % payloadsize
                raise error.BundleValueError(msg)
            else:
                result = self._readexact(payloadsize)
                chunknum += 1
                pos += payloadsize
                if chunknum == len(self._chunkindex):
                    self._chunkindex.append((pos, self._tellfp()))
                yield result
            payloadsize = self._unpack(_fpayloadsize)[0]
            indebug(self.ui, 'payload chunk size: %i' % payloadsize)

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError('Unknown chunk')

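The chunk-index lookup in _findchunk() can be exercised standalone. This is a sketch with hypothetical names, assuming only what the method shows: a sorted list of (payload_offset, file_offset) tuples marking chunk starts, mapped to a (chunk_number, offset_within_chunk) pair.

```python
def find_chunk(chunkindex, pos):
    """Map an absolute payload position to (chunk number, offset in chunk),
    given a sorted index of (payload_offset, file_offset) chunk starts."""
    for chunk, (ppos, fpos) in enumerate(chunkindex):
        if ppos == pos:
            return chunk, 0          # exactly at a chunk boundary
        elif ppos > pos:
            # pos falls inside the previous chunk
            return chunk - 1, pos - chunkindex[chunk - 1][0]
    raise ValueError('Unknown chunk')

# chunks start at payload offsets 0, 10 and 25
index = [(0, 100), (10, 114), (25, 133)]
```

Note the same limitation as the original: a position past the last recorded chunk start raises rather than resolving to the final chunk, which is why callers index chunks eagerly while reading.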
    def _readheader(self):
        """read the header and set up the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, 'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, 'part id: "%s"' % self.id)
        # extract mandatory bit from type
        self.mandatory = (self.type != self.type.lower())
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param value
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # we read the data, tell it
        self._initialized = True
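The parameter decoding in _readheader() re-pairs a flat size list and splits it by the mandatory count. A standalone sketch (hypothetical names; note that on Python 3 the `zip` result must be materialized before slicing, unlike the Python 2 code above):

```python
def pair_param_sizes(sizes, mancount):
    """Rebuild (key_size, value_size) pairs from the flat on-the-wire
    list [klen0, vlen0, klen1, vlen1, ...], then split them into
    mandatory and advisory groups by count."""
    pairs = list(zip(sizes[::2], sizes[1::2]))  # list() needed on Python 3
    return pairs[:mancount], pairs[mancount:]

# one mandatory param (3-byte key, 5-byte value),
# one advisory param (4-byte key, empty value)
mansizes, advsizes = pair_param_sizes([3, 5, 4, 0], mancount=1)
```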

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug('bundle2-input-part: total payload size %i\n'
                              % self._pos)
            self.consumed = True
        return data

    def tell(self):
        return self._pos

    def seek(self, offset, whence=0):
        if whence == 0:
            newpos = offset
        elif whence == 1:
            newpos = self._pos + offset
        elif whence == 2:
            if not self.consumed:
                self.read()
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError('Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            self.read()
        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError('Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_('Seek failed\n'))
            self._pos = newpos

    def _seekfp(self, offset, whence=0):
        """move the underlying file pointer

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def _tellfp(self):
        """return the file offset, or None if file is not seekable

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError as e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
            return None

# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {'HG20': (),
                'error': ('abort', 'unsupportedcontent', 'pushraced',
                          'pushkey'),
                'listkeys': (),
                'pushkey': (),
                'digests': tuple(sorted(util.DIGESTS.keys())),
                'remote-changegroup': ('http', 'https'),
                'hgtagsfnodes': (),
               }

def getrepocaps(repo, allowpushback=False):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.
    """
    caps = capabilities.copy()
    caps['changegroup'] = tuple(sorted(
        changegroup.supportedincomingversions(repo)))
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple('V%i' % v for v in obsolete.formats)
        caps['obsmarkers'] = supportedformat
    if allowpushback:
        caps['pushback'] = ()
    cpmode = repo.ui.config('server', 'concurrent-push-mode', 'strict')
    if cpmode == 'check-related':
        caps['checkheads'] = ('related',)
    return caps

def bundle2caps(remote):
    """return the bundle capabilities of a peer as dict"""
    raw = remote.capable('bundle2')
    if not raw and raw != '':
        return {}
    capsblob = urlreq.unquote(remote.capable('bundle2'))
    return decodecaps(capsblob)

def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict
    """
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

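The version extraction in obsmarkersversion() is small enough to illustrate with made-up capability data: entries such as 'V0'/'V1' under the 'obsmarkers' key become a list of integers, and a missing key yields an empty list.

```python
# Standalone sketch of obsmarkersversion(); the caps dict content here
# is invented for illustration.
def obsmarkers_versions(caps):
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

caps = {'obsmarkers': ('V0', 'V1'), 'pushkey': ()}
```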
def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
                   vfs=None, compression=None, compopts=None):
    if bundletype.startswith('HG10'):
        cg = changegroup.getchangegroup(repo, source, outgoing, version='01')
        return writebundle(ui, cg, filename, bundletype, vfs=vfs,
                           compression=compression, compopts=compopts)
    elif not bundletype.startswith('HG20'):
        raise error.ProgrammingError('unknown bundle type: %s' % bundletype)

    caps = {}
    if 'obsolescence' in opts:
        caps['obsmarkers'] = ('V1',)
    bundle = bundle20(ui, caps)
    bundle.setcompression(compression, compopts)
    _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
    chunkiter = bundle.getchunks()

    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)

def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
    # We should eventually reconcile this logic with the one behind
    # 'exchange.getbundle2partsgenerator'.
    #
    # The types of input from 'getbundle' and 'writenewbundle' are a bit
    # different right now. So we keep them separated for now for the sake of
    # simplicity.

    # we always want a changegroup in such bundle
    cgversion = opts.get('cg.version')
    if cgversion is None:
        cgversion = changegroup.safeversion(repo)
    cg = changegroup.getchangegroup(repo, source, outgoing,
                                    version=cgversion)
    part = bundler.newpart('changegroup', data=cg.getchunks())
    part.addparam('version', cg.version)
    if 'clcount' in cg.extras:
        part.addparam('nbchanges', str(cg.extras['clcount']),
                      mandatory=False)

    addparttagsfnodescache(repo, bundler, outgoing)

    if opts.get('obsolescence', False):
        obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
        buildobsmarkerspart(bundler, obsmarkers)

def addparttagsfnodescache(repo, bundler, outgoing):
    # we include the tags fnode cache for the bundle changeset
    # (as an optional part)
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.missingheads:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode is not None:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart('hgtagsfnodes', data=''.join(chunks))

def buildobsmarkerspart(bundler, markers):
    """add an obsmarker part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker format.
    """
    if not markers:
        return None

    remoteversions = obsmarkersversion(bundler.capabilities)
    version = obsolete.commonversion(remoteversions)
    if version is None:
        raise ValueError('bundler does not support common obsmarker format')
    stream = obsolete.encodemarkers(markers, True, version=version)
    return bundler.newpart('obsmarkers', data=stream)

def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
                compopts=None):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
    """

    if bundletype == "HG20":
        bundle = bundle20(ui)
        bundle.setcompression(compression, compopts)
        part = bundle.newpart('changegroup', data=cg.getchunks())
        part.addparam('version', cg.version)
        if 'clcount' in cg.extras:
            part.addparam('nbchanges', str(cg.extras['clcount']),
                          mandatory=False)
        chunkiter = bundle.getchunks()
    else:
        # compression argument is only for the bundle2 case
        assert compression is None
        if cg.version != '01':
            raise error.Abort(_('old bundle types only support v1 '
                                'changegroups'))
        header, comp = bundletypes[bundletype]
        if comp not in util.compengines.supportedbundletypes:
            raise error.Abort(_('unknown stream compression type: %s')
                              % comp)
        compengine = util.compengines.forbundletype(comp)
        def chunkiter():
            yield header
            for chunk in compengine.compressstream(cg.getchunks(), compopts):
                yield chunk
        chunkiter = chunkiter()

    # parse the changegroup data, otherwise we will block
    # in case of sshrepo because we don't know the end of the stream
    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
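For the legacy (non-HG20) branch above, the on-disk layout is simply the raw bundle header followed by the compressed changegroup stream. A minimal sketch of that framing, using zlib from the standard library in place of Mercurial's compression engines (the `HG10GZ` header value and the chunk contents here are illustrative):

```python
import zlib

def frame_legacy_bundle(header, chunks):
    """Yield a legacy-style bundle: raw header, then compressed payload."""
    yield header
    comp = zlib.compressobj()
    for chunk in chunks:
        data = comp.compress(chunk)
        if data:
            yield data
    yield comp.flush()

payload = [b'chunk-one', b'chunk-two']
stream = b''.join(frame_legacy_bundle(b'HG10GZ', payload))

# Reading it back: strip the 6-byte header, then decompress the rest.
assert stream[:6] == b'HG10GZ'
assert zlib.decompress(stream[6:]) == b''.join(payload)
```

This mirrors why `chunkiter` above first yields `header` uncompressed and only then the compressed stream.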

@parthandler('changegroup', ('version', 'nbchanges', 'treemanifest'))
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will see massive rework before
    being inflicted on any end-user.
    """
    tr = op.gettransaction()
    unpackerversion = inpart.params.get('version', '01')
    # We should raise an appropriate exception here
    cg = changegroup.getunbundler(unpackerversion, inpart, None)
    # the source and url passed here are overwritten by the ones contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    nbchangesets = None
    if 'nbchanges' in inpart.params:
        nbchangesets = int(inpart.params.get('nbchanges'))
    if ('treemanifest' in inpart.params and
        'treemanifest' not in op.repo.requirements):
        if len(op.repo.changelog) != 0:
            raise error.Abort(_(
                "bundle contains tree manifests, but local repo is "
                "non-empty and does not use tree manifests"))
        op.repo.requirements.add('treemanifest')
        op.repo._applyopenerreqs()
        op.repo._writerequirements()
    ret, addednodes = cg.apply(op.repo, tr, 'bundle2', 'bundle2',
                               expectedtotal=nbchangesets)
    op.records.add('changegroup', {
        'return': ret,
        'addednodes': addednodes,
    })
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup', mandatory=False)
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

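The point of this commit is that callers can now find the nodes added by each changegroup part in `op.records` under the `'changegroup'` category, alongside the existing `'return'` value. A simplified stand-in for the records container, showing the access pattern a caller might use (this class is an illustrative sketch, not Mercurial's actual `unbundlerecords` implementation):

```python
class recordscontainer(object):
    """Minimal sketch of a bundle2-style records container."""
    def __init__(self):
        self._categories = {}

    def add(self, category, entry, inreplyto=None):
        # inreplyto mirrors the optional part id used by reply parts.
        self._categories.setdefault(category, []).append(entry)

    def __getitem__(self, category):
        return self._categories.get(category, [])

records = recordscontainer()
records.add('changegroup', {'return': 1, 'addednodes': ['deadbeef']})

# A caller aggregating the nodes added by all changegroup parts:
addednodes = set()
for entry in records['changegroup']:
    addednodes.update(entry.get('addednodes', []))
assert addednodes == {'deadbeef'}
```

Using `entry.get('addednodes', [])` keeps such callers compatible with records written by older handlers that only stored `'return'`.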

_remotechangegroupparams = tuple(['url', 'size', 'digests'] +
                                 ['digest:%s' % k for k in util.DIGESTS.keys()])
@parthandler('remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given an url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
      - url: the url to the bundle10.
      - size: the bundle10 file size. It is used to validate that what was
        retrieved by the client matches the server knowledge about the bundle.
      - digests: a space separated list of the digest types provided as
        parameters.
      - digest:<digest-type>: the hexadecimal representation of the digest with
        that name. Like the size, it is used to validate that what was
        retrieved by the client matches what the server knows about the bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params['url']
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
    parsed_url = util.url(raw_url)
    if parsed_url.scheme not in capabilities['remote-changegroup']:
        raise error.Abort(_('remote-changegroup does not support %s urls') %
                          parsed_url.scheme)

    try:
        size = int(inpart.params['size'])
    except ValueError:
        raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
                          % 'size')
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')

    digests = {}
    for typ in inpart.params.get('digests', '').split():
        param = 'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise error.Abort(_('remote-changegroup: missing "%s" param') %
                              param)
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    tr = op.gettransaction()
    from . import exchange
    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise error.Abort(_('%s: not a bundle version 1.0') %
                          util.hidepassword(raw_url))
    ret, addednodes = cg.apply(op.repo, tr, 'bundle2', 'bundle2')
    op.records.add('changegroup', {
        'return': ret,
        'addednodes': addednodes,
    })
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup')
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except error.Abort as e:
        raise error.Abort(_('bundle at %s is corrupted:\n%s') %
                          (util.hidepassword(raw_url), str(e)))
    assert not inpart.read()

@parthandler('reply:changegroup', ('return', 'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    if sorted(heads) != sorted(op.repo.heads()):
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')
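The payload of `check:heads` (and of `check:updated-heads` below) is just a concatenation of fixed-width 20-byte binary nodes with no length prefix; the read loop stops when a short or empty read signals the end of the part. The same parsing pattern in isolation, over an in-memory stream:

```python
import io

NODELEN = 20  # a binary SHA-1 node

def readheads(stream):
    """Collect 20-byte node records until the stream is exhausted."""
    heads = []
    h = stream.read(NODELEN)
    while len(h) == NODELEN:
        heads.append(h)
        h = stream.read(NODELEN)
    # A trailing partial record would indicate a truncated part.
    assert not h
    return heads

payload = io.BytesIO(b'a' * 20 + b'b' * 20)
assert readheads(payload) == [b'a' * 20, b'b' * 20]
```

The trailing `assert not h` matches the handler's own sanity check that the part length is a multiple of 20.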

@parthandler('check:updated-heads')
def handlecheckupdatedheads(op, inpart):
    """check for races on the heads touched by a push

    This is similar to 'check:heads' but focuses on the heads actually updated
    during the push. Activity on unrelated heads is ignored.

    This allows servers with high traffic to avoid push contention as long as
    only unrelated parts of the graph are involved."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()

    currentheads = set()
    for ls in op.repo.branchmap().itervalues():
        currentheads.update(ls)

    for h in heads:
        if h not in currentheads:
            raise error.PushRaced('repository changed while pushing - '
                                  'please try again')

@parthandler('output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(_('remote: %s\n') % line)

@parthandler('replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

class AbortFromPart(error.Abort):
    """Sub-class of Abort that denotes an error from a bundle2 part."""

@parthandler('error:abort', ('message', 'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise AbortFromPart(inpart.params['message'],
                        hint=inpart.params.get('hint'))

@parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
                               'in-reply-to'))
def handleerrorpushkey(op, inpart):
    """Used to transmit failure of a mandatory pushkey over the wire"""
    kwargs = {}
    for name in ('namespace', 'key', 'new', 'old', 'ret'):
        value = inpart.params.get(name)
        if value is not None:
            kwargs[name] = value
    raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)

@parthandler('error:unsupportedcontent', ('parttype', 'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get('parttype')
    if parttype is not None:
        kwargs['parttype'] = parttype
    params = inpart.params.get('params')
    if params is not None:
        kwargs['params'] = params.split('\0')

    raise error.BundleUnknownFeatureError(**kwargs)

@parthandler('error:pushraced', ('message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_('push failed:'), inpart.params['message'])

@parthandler('listkeys', ('namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params['namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add('listkeys', (namespace, r))

@parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params['namespace'])
    key = dec(inpart.params['key'])
    old = dec(inpart.params['old'])
    new = dec(inpart.params['new'])
    # Grab the transaction to ensure that we have the lock before performing
    # the pushkey.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {'namespace': namespace,
              'key': key,
              'old': old,
              'new': new}
    op.records.add('pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart('reply:pushkey')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('return', '%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in ('namespace', 'key', 'new', 'old', 'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)

@parthandler('reply:pushkey', ('return', 'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params['return'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('pushkey', {'return': ret}, partid)

@parthandler('obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
        op.ui.write(('obsmarker-exchange: %i bytes received\n')
                    % len(markerdata))
    # The mergemarkers call will crash if marker creation is not enabled.
    # We want to avoid this if the part is advisory.
    if not inpart.mandatory and op.repo.obsstore.readonly:
        op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
        return
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    op.repo.invalidatevolatilesets()
    if new:
        op.repo.ui.status(_('%i new obsolescence markers\n') % new)
    op.records.add('obsmarkers', {'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart('reply:obsmarkers')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('new', '%i' % new, mandatory=False)


@parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers part"""
    ret = int(inpart.params['new'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('obsmarkers', {'new': ret}, partid)

@parthandler('hgtagsfnodes')
def handlehgtagsfnodes(op, inpart):
    """Applies .hgtags fnodes cache entries to the local repo.

    Payload is pairs of 20-byte changeset nodes and filenodes.
    """
    # Grab the transaction so we ensure that we have the lock at this point.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    cache = tags.hgtagsfnodescache(op.repo.unfiltered())

    count = 0
    while True:
        node = inpart.read(20)
        fnode = inpart.read(20)
        if len(node) < 20 or len(fnode) < 20:
            op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
            break
        cache.setfnode(node, fnode)
        count += 1

    cache.write()
    op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
@@ -1,1025 +1,1026 b''
# changegroup.py - Mercurial changegroup manipulation functions
#
# Copyright 2006 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import os
import struct
import tempfile
import weakref

from .i18n import _
from .node import (
    hex,
    nullrev,
    short,
)

from . import (
    dagutil,
    discovery,
    error,
    mdiff,
    phases,
    pycompat,
    util,
)

_CHANGEGROUPV1_DELTA_HEADER = "20s20s20s20s"
_CHANGEGROUPV2_DELTA_HEADER = "20s20s20s20s20s"
_CHANGEGROUPV3_DELTA_HEADER = ">20s20s20s20s20sH"

def readexactly(stream, n):
    '''read n bytes from stream.read and abort if less was available'''
    s = stream.read(n)
    if len(s) < n:
        raise error.Abort(_("stream ended unexpectedly"
                            " (got %d bytes, expected %d)")
                          % (len(s), n))
    return s

def getchunk(stream):
    """return the next chunk from stream as a string"""
    d = readexactly(stream, 4)
    l = struct.unpack(">l", d)[0]
    if l <= 4:
        if l:
            raise error.Abort(_("invalid chunk length %d") % l)
        return ""
    return readexactly(stream, l - 4)

def chunkheader(length):
    """return a changegroup chunk header (string)"""
    return struct.pack(">l", length + 4)

def closechunk():
    """return a changegroup chunk header (string) for a zero-length chunk"""
    return struct.pack(">l", 0)

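The chunk framing defined by these helpers is length-prefixed: each chunk is a 4-byte big-endian length that counts itself, so a payload of n bytes is encoded with length n + 4 and the `closechunk` terminator encodes as length 0. A self-contained round-trip of that logic (returning bytes throughout for clarity):

```python
import io
import struct

def chunkheader(length):
    # Length field counts itself (4 bytes) plus the payload.
    return struct.pack(">l", length + 4)

def getchunk(stream):
    d = stream.read(4)
    l = struct.unpack(">l", d)[0]
    if l <= 4:
        return b""  # zero (terminator) or degenerate length ends the stream
    return stream.read(l - 4)

payload = b'hello'
framed = chunkheader(len(payload)) + payload + struct.pack(">l", 0)
stream = io.BytesIO(framed)
assert getchunk(stream) == payload  # length 9 -> 5 payload bytes
assert getchunk(stream) == b""      # closechunk terminator
```

This is why `getchunk` above treats any length of 4 or less as end-of-stream: a valid non-empty chunk can never encode a length smaller than 5.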
def combineresults(results):
    """logic to combine 0 or more addchangegroup results into one"""
    changedheads = 0
    result = 1
    for ret in results:
        # If any changegroup result is 0, return 0
        if ret == 0:
            result = 0
            break
        if ret < -1:
            changedheads += ret + 1
        elif ret > 1:
            changedheads += ret - 1
    if changedheads > 0:
        result = 1 + changedheads
    elif changedheads < 0:
        result = -1 + changedheads
    return result

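# A worked illustration of combineresults() (an added note, not in the
# original source): a result of 2 means one head was added, so two such
# results combine to 3 (1 + 2 added heads), while any result of 0 makes
# the combination 0.
#
#   combineresults([2, 2]) -> 3
#   combineresults([0, 2]) -> 0    # any failure wins
#   combineresults([-2])   -> -2   # one head removed
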
def writechunks(ui, chunks, filename, vfs=None):
    """Write chunks to a file and return its filename.

    The stream is assumed to be a bundle file.
    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    """
    fh = None
    cleanup = None
    try:
        if filename:
            if vfs:
                fh = vfs.open(filename, "wb")
            else:
                # Increase default buffer size because default is usually
                # small (4k is common on Linux).
                fh = open(filename, "wb", 131072)
        else:
            fd, filename = tempfile.mkstemp(prefix="hg-bundle-", suffix=".hg")
            fh = os.fdopen(fd, pycompat.sysstr("wb"))
        cleanup = filename
        for c in chunks:
            fh.write(c)
        cleanup = None
        return filename
    finally:
        if fh is not None:
            fh.close()
        if cleanup is not None:
            if filename and vfs:
                vfs.unlink(cleanup)
            else:
                os.unlink(cleanup)

class cg1unpacker(object):
    """Unpacker for cg1 changegroup streams.

    A changegroup unpacker handles the framing of the revision data in
    the wire format. Most consumers will want to use the apply()
    method to add the changes from the changegroup to a repository.

    If you're forwarding a changegroup unmodified to another consumer,
    use getchunks(), which returns an iterator of changegroup
    chunks. This is mostly useful for cases where you need to know the
    data stream has ended by observing the end of the changegroup.

    deltachunk() is useful only if you're applying delta data. Most
    consumers should prefer apply() instead.

    A few other public methods exist. Those are used only for
    bundlerepo and some debug commands - their use is discouraged.
    """
    deltaheader = _CHANGEGROUPV1_DELTA_HEADER
    deltaheadersize = struct.calcsize(deltaheader)
    version = '01'
    _grouplistcount = 1 # One list of files after the manifests

    def __init__(self, fh, alg, extras=None):
        if alg is None:
            alg = 'UN'
        if alg not in util.compengines.supportedbundletypes:
            raise error.Abort(_('unknown stream compression type: %s')
                              % alg)
        if alg == 'BZ':
            alg = '_truncatedBZ'

        compengine = util.compengines.forbundletype(alg)
        self._stream = compengine.decompressorreader(fh)
        self._type = alg
        self.extras = extras or {}
        self.callback = None

    # These methods (compressed, read, seek, tell) all appear to only
    # be used by bundlerepo, but it's a little hard to tell.
    def compressed(self):
        return self._type is not None and self._type != 'UN'
    def read(self, l):
        return self._stream.read(l)
    def seek(self, pos):
        return self._stream.seek(pos)
    def tell(self):
        return self._stream.tell()
    def close(self):
        return self._stream.close()

    def _chunklength(self):
        d = readexactly(self._stream, 4)
        l = struct.unpack(">l", d)[0]
        if l <= 4:
            if l:
                raise error.Abort(_("invalid chunk length %d") % l)
            return 0
        if self.callback:
            self.callback()
        return l - 4

    def changelogheader(self):
        """v10 does not have a changelog header chunk"""
        return {}

    def manifestheader(self):
        """v10 does not have a manifest header chunk"""
        return {}

    def filelogheader(self):
        """return the header of the filelogs chunk, v10 only has the filename"""
        l = self._chunklength()
        if not l:
            return {}
        fname = readexactly(self._stream, l)
        return {'filename': fname}

    def _deltaheader(self, headertuple, prevnode):
        node, p1, p2, cs = headertuple
        if prevnode is None:
            deltabase = p1
        else:
            deltabase = prevnode
        flags = 0
        return node, p1, p2, deltabase, cs, flags

    def deltachunk(self, prevnode):
        l = self._chunklength()
        if not l:
            return {}
        headerdata = readexactly(self._stream, self.deltaheadersize)
        header = struct.unpack(self.deltaheader, headerdata)
        delta = readexactly(self._stream, l - self.deltaheadersize)
        node, p1, p2, deltabase, cs, flags = self._deltaheader(header, prevnode)
        return {'node': node, 'p1': p1, 'p2': p2, 'cs': cs,
                'deltabase': deltabase, 'delta': delta, 'flags': flags}

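    # Sizes of the fixed delta headers unpacked above, spelled out for
    # reference (an added note, not in the original source): cg1 packs four
    # 20-byte nodes, cg2 adds a deltabase node, and cg3 adds a big-endian
    # flags short, so:
    #
    #   struct.calcsize("20s20s20s20s")      -> 80    (cg1)
    #   struct.calcsize("20s20s20s20s20s")   -> 100   (cg2)
    #   struct.calcsize(">20s20s20s20s20sH") -> 102   (cg3)
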
    def getchunks(self):
        """returns all the chunks contained in the bundle

        Used when you need to forward the binary stream to a file or another
        network API. To do so, it parses the changegroup data; otherwise it
        would block in the case of sshrepo because it doesn't know the end
        of the stream.
        """
        # an empty chunkgroup is the end of the changegroup
        # a changegroup has at least 2 chunkgroups (changelog and manifest).
        # after that, changegroup versions 1 and 2 have a series of groups
        # with one group per file. changegroup 3 has a series of directory
        # manifests before the files.
        count = 0
        emptycount = 0
        while emptycount < self._grouplistcount:
            empty = True
            count += 1
            while True:
                chunk = getchunk(self)
                if not chunk:
                    if empty and count > 2:
                        emptycount += 1
                    break
                empty = False
                yield chunkheader(len(chunk))
                pos = 0
                while pos < len(chunk):
                    next = pos + 2**20
                    yield chunk[pos:next]
                    pos = next
            yield closechunk()

    def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
        # We know that we'll never have more manifests than we had
        # changesets.
        self.callback = prog(_('manifests'), numchanges)
        # no need to check for empty manifest group here:
        # if the result of the merge of 1 and 2 is the same in 3 and 4,
        # no new manifest will be created and the manifest group will
        # be empty during the pull
        self.manifestheader()
        repo.manifestlog._revlog.addgroup(self, revmap, trp)
        repo.ui.progress(_('manifests'), None)
        self.callback = None

    def apply(self, repo, tr, srctype, url, emptyok=False,
              targetphase=phases.draft, expectedtotal=None):
        """Add the changegroup returned by source.read() to this repo.
        srctype is a string like 'push', 'pull', or 'unbundle'. url is
        the URL of the repo where this changegroup is coming from.

        Return an integer summarizing the change to this repo:
        - nothing changed or no source: 0
        - more heads than before: 1+added heads (2..n)
        - fewer heads than before: -1-removed heads (-2..-n)
        - number of heads stays the same: 1
        """
        repo = repo.unfiltered()
        def csmap(x):
            repo.ui.debug("add changeset %s\n" % short(x))
            return len(cl)

        def revmap(x):
            return cl.rev(x)

        changesets = files = revisions = 0

        try:
            # The transaction may already carry source information. In this
            # case we use the top level data. We overwrite the argument
            # because we need to use the top level value (if they exist)
            # in this function.
            srctype = tr.hookargs.setdefault('source', srctype)
            url = tr.hookargs.setdefault('url', url)
            repo.hook('prechangegroup', throw=True, **tr.hookargs)

            # write changelog data to temp files so concurrent readers
            # will not see an inconsistent view
            cl = repo.changelog
            cl.delayupdate(tr)
            oldheads = set(cl.heads())

            trp = weakref.proxy(tr)
            # pull off the changeset group
            repo.ui.status(_("adding changesets\n"))
            clstart = len(cl)
            class prog(object):
                def __init__(self, step, total):
                    self._step = step
                    self._total = total
                    self._count = 1
                def __call__(self):
                    repo.ui.progress(self._step, self._count, unit=_('chunks'),
                                     total=self._total)
                    self._count += 1
            self.callback = prog(_('changesets'), expectedtotal)

            efiles = set()
            def onchangelog(cl, node):
                efiles.update(cl.readfiles(node))

            self.changelogheader()
            cgnodes = cl.addgroup(self, csmap, trp, addrevisioncb=onchangelog)
            efiles = len(efiles)

            if not (cgnodes or emptyok):
                raise error.Abort(_("received changelog group is empty"))
            clend = len(cl)
            changesets = clend - clstart
            repo.ui.progress(_('changesets'), None)
            self.callback = None

            # pull off the manifest group
            repo.ui.status(_("adding manifests\n"))
            self._unpackmanifests(repo, revmap, trp, prog, changesets)

            needfiles = {}
            if repo.ui.configbool('server', 'validate', default=False):
                cl = repo.changelog
                ml = repo.manifestlog
                # validate incoming csets have their manifests
                for cset in xrange(clstart, clend):
                    mfnode = cl.changelogrevision(cset).manifest
                    mfest = ml[mfnode].readdelta()
                    # store file cgnodes we must see
                    for f, n in mfest.iteritems():
                        needfiles.setdefault(f, set()).add(n)

            # process the files
            repo.ui.status(_("adding file changes\n"))
            newrevs, newfiles = _addchangegroupfiles(
                repo, self, revmap, trp, efiles, needfiles)
            revisions += newrevs
            files += newfiles

            deltaheads = 0
            if oldheads:
                heads = cl.heads()
                deltaheads = len(heads) - len(oldheads)
                for h in heads:
                    if h not in oldheads and repo[h].closesbranch():
                        deltaheads -= 1
            htext = ""
            if deltaheads:
                htext = _(" (%+d heads)") % deltaheads

            repo.ui.status(_("added %d changesets"
                             " with %d changes to %d files%s\n")
                           % (changesets, revisions, files, htext))
            repo.invalidatevolatilesets()

            if changesets > 0:
                if 'node' not in tr.hookargs:
                    tr.hookargs['node'] = hex(cl.node(clstart))
                    tr.hookargs['node_last'] = hex(cl.node(clend - 1))
                    hookargs = dict(tr.hookargs)
                else:
                    hookargs = dict(tr.hookargs)
                    hookargs['node'] = hex(cl.node(clstart))
                    hookargs['node_last'] = hex(cl.node(clend - 1))
                repo.hook('pretxnchangegroup', throw=True, **hookargs)

            added = [cl.node(r) for r in xrange(clstart, clend)]
            if srctype in ('push', 'serve'):
                # Old servers can not push the boundary themselves.
                # New servers won't push the boundary if changeset already
                # exists locally as secret
                #
                # We should not use added here but the list of all changes in
                # the bundle
                if repo.publishing():
                    phases.advanceboundary(repo, tr, phases.public, cgnodes)
                else:
                    # Those changesets have been pushed from the
                    # outside, their phases are going to be pushed
                    # alongside. Therefore `targetphase` is
                    # ignored.
                    phases.advanceboundary(repo, tr, phases.draft, cgnodes)
                    phases.retractboundary(repo, tr, phases.draft, added)
            elif srctype != 'strip':
                # publishing only alters behavior during push
                #
                # strip should not touch boundary at all
                phases.retractboundary(repo, tr, targetphase, added)

            if changesets > 0:

                def runhooks():
                    # These hooks run when the lock releases, not when the
                    # transaction closes. So it's possible for the changelog
                    # to have changed since we last saw it.
                    if clstart >= len(repo):
                        return

                    repo.hook("changegroup", **hookargs)

                    for n in added:
                        args = hookargs.copy()
                        args['node'] = hex(n)
                        del args['node_last']
                        repo.hook("incoming", **args)

                    newheads = [h for h in repo.heads()
                                if h not in oldheads]
                    repo.ui.log("incoming",
                                "%s incoming changes - new heads: %s\n",
                                len(added),
                                ', '.join([hex(c[:6]) for c in newheads]))

                tr.addpostclose('changegroup-runhooks-%020i' % clstart,
                                lambda tr: repo._afterlock(runhooks))
        finally:
            repo.ui.flush()
        # never return 0 here:
        if deltaheads < 0:
            ret = deltaheads - 1
        else:
            ret = deltaheads + 1
        return ret, added

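# An illustrative reading of the code computed at the end of apply() (an
# added note restating its docstring, not in the original source): the
# result is never 0 on this path; 1 means changesets were added without
# changing the head count, 1 + n means n heads were added, and -1 - n
# means n heads were removed. These are exactly the codes that
# combineresults() above folds together.
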
class cg2unpacker(cg1unpacker):
    """Unpacker for cg2 streams.

    cg2 streams add support for generaldelta, so the delta header
    format is slightly different. All other features about the data
    remain the same.
    """
    deltaheader = _CHANGEGROUPV2_DELTA_HEADER
    deltaheadersize = struct.calcsize(deltaheader)
    version = '02'

    def _deltaheader(self, headertuple, prevnode):
        node, p1, p2, deltabase, cs = headertuple
        flags = 0
        return node, p1, p2, deltabase, cs, flags

class cg3unpacker(cg2unpacker):
    """Unpacker for cg3 streams.

    cg3 streams add support for exchanging treemanifests and revlog
    flags. It adds the revlog flags to the delta header and an empty chunk
    separating manifests and files.
    """
    deltaheader = _CHANGEGROUPV3_DELTA_HEADER
    deltaheadersize = struct.calcsize(deltaheader)
    version = '03'
    _grouplistcount = 2 # One list of manifests and one list of files

    def _deltaheader(self, headertuple, prevnode):
        node, p1, p2, deltabase, cs, flags = headertuple
        return node, p1, p2, deltabase, cs, flags

    def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
        super(cg3unpacker, self)._unpackmanifests(repo, revmap, trp, prog,
                                                  numchanges)
        for chunkdata in iter(self.filelogheader, {}):
            # If we get here, there are directory manifests in the changegroup
            d = chunkdata["filename"]
            repo.ui.debug("adding %s revisions\n" % d)
            dirlog = repo.manifestlog._revlog.dirlog(d)
            if not dirlog.addgroup(self, revmap, trp):
                raise error.Abort(_("received dir revlog group is empty"))

class headerlessfixup(object):
    def __init__(self, fh, h):
        self._h = h
        self._fh = fh
    def read(self, n):
        if self._h:
            d, self._h = self._h[:n], self._h[n:]
            if len(d) < n:
                d += readexactly(self._fh, n - len(d))
            return d
        return readexactly(self._fh, n)

class cg1packer(object):
    deltaheader = _CHANGEGROUPV1_DELTA_HEADER
    version = '01'
    def __init__(self, repo, bundlecaps=None):
        """Given a source repo, construct a bundler.

        bundlecaps is optional and can be used to specify the set of
        capabilities which can be used to build the bundle. While bundlecaps is
        unused in core Mercurial, extensions rely on this feature to communicate
        capabilities to customize the changegroup packer.
        """
        # Set of capabilities we can use to build the bundle.
        if bundlecaps is None:
            bundlecaps = set()
        self._bundlecaps = bundlecaps
        # experimental config: bundle.reorder
        reorder = repo.ui.config('bundle', 'reorder', 'auto')
        if reorder == 'auto':
            reorder = None
        else:
            reorder = util.parsebool(reorder)
        self._repo = repo
        self._reorder = reorder
        self._progress = repo.ui.progress
        if self._repo.ui.verbose and not self._repo.ui.debugflag:
            self._verbosenote = self._repo.ui.note
        else:
            self._verbosenote = lambda s: None

    def close(self):
        return closechunk()

    def fileheader(self, fname):
        return chunkheader(len(fname)) + fname

    # Extracted both for clarity and for overriding in extensions.
    def _sortgroup(self, revlog, nodelist, lookup):
        """Sort nodes for changegroup and turn them into revnums."""
        # for generaldelta revlogs, we linearize the revs; this will both be
        # much quicker and generate a much smaller bundle
        if (revlog._generaldelta and self._reorder is None) or self._reorder:
            dag = dagutil.revlogdag(revlog)
            return dag.linearize(set(revlog.rev(n) for n in nodelist))
        else:
            return sorted([revlog.rev(n) for n in nodelist])

    def group(self, nodelist, revlog, lookup, units=None):
        """Calculate a delta group, yielding a sequence of changegroup chunks
        (strings).

        Given a list of changeset revs, return a set of deltas and
        metadata corresponding to nodes. The first delta is
        first parent(nodelist[0]) -> nodelist[0], the receiver is
        guaranteed to have this parent as it has all history before
        these changesets. In the case firstparent is nullrev the
        changegroup starts with a full revision.

        If units is not None, progress detail will be generated, units specifies
        the type of revlog that is touched (changelog, manifest, etc.).
        """
        # if we don't have any revisions touched by these changesets, bail
        if len(nodelist) == 0:
            yield self.close()
            return

        revs = self._sortgroup(revlog, nodelist, lookup)

        # add the parent of the first rev
        p = revlog.parentrevs(revs[0])[0]
        revs.insert(0, p)

        # build deltas
        total = len(revs) - 1
        msgbundling = _('bundling')
        for r in xrange(len(revs) - 1):
            if units is not None:
                self._progress(msgbundling, r + 1, unit=units, total=total)
            prev, curr = revs[r], revs[r + 1]
            linknode = lookup(revlog.node(curr))
            for c in self.revchunk(revlog, curr, prev, linknode):
                yield c

        if units is not None:
            self._progress(msgbundling, None)
        yield self.close()

    # filter any nodes that claim to be part of the known set
    def prune(self, revlog, missing, commonrevs):
        rr, rl = revlog.rev, revlog.linkrev
        return [n for n in missing if rl(rr(n)) not in commonrevs]
578 return [n for n in missing if rl(rr(n)) not in commonrevs]
578
579
579 def _packmanifests(self, dir, mfnodes, lookuplinknode):
580 def _packmanifests(self, dir, mfnodes, lookuplinknode):
580 """Pack flat manifests into a changegroup stream."""
581 """Pack flat manifests into a changegroup stream."""
581 assert not dir
582 assert not dir
582 for chunk in self.group(mfnodes, self._repo.manifestlog._revlog,
583 for chunk in self.group(mfnodes, self._repo.manifestlog._revlog,
583 lookuplinknode, units=_('manifests')):
584 lookuplinknode, units=_('manifests')):
584 yield chunk
585 yield chunk
585
586
586 def _manifestsdone(self):
587 def _manifestsdone(self):
587 return ''
588 return ''
588
589
589 def generate(self, commonrevs, clnodes, fastpathlinkrev, source):
590 def generate(self, commonrevs, clnodes, fastpathlinkrev, source):
590 '''yield a sequence of changegroup chunks (strings)'''
591 '''yield a sequence of changegroup chunks (strings)'''
591 repo = self._repo
592 repo = self._repo
592 cl = repo.changelog
593 cl = repo.changelog
593
594
594 clrevorder = {}
595 clrevorder = {}
595 mfs = {} # needed manifests
596 mfs = {} # needed manifests
596 fnodes = {} # needed file nodes
597 fnodes = {} # needed file nodes
597 changedfiles = set()
598 changedfiles = set()
598
599
599 # Callback for the changelog, used to collect changed files and manifest
600 # Callback for the changelog, used to collect changed files and manifest
600 # nodes.
601 # nodes.
601 # Returns the linkrev node (identity in the changelog case).
602 # Returns the linkrev node (identity in the changelog case).
602 def lookupcl(x):
603 def lookupcl(x):
603 c = cl.read(x)
604 c = cl.read(x)
604 clrevorder[x] = len(clrevorder)
605 clrevorder[x] = len(clrevorder)
605 n = c[0]
606 n = c[0]
606 # record the first changeset introducing this manifest version
607 # record the first changeset introducing this manifest version
607 mfs.setdefault(n, x)
608 mfs.setdefault(n, x)
608 # Record a complete list of potentially-changed files in
609 # Record a complete list of potentially-changed files in
609 # this manifest.
610 # this manifest.
610 changedfiles.update(c[3])
611 changedfiles.update(c[3])
611 return x
612 return x
612
613
613 self._verbosenote(_('uncompressed size of bundle content:\n'))
614 self._verbosenote(_('uncompressed size of bundle content:\n'))
614 size = 0
615 size = 0
615 for chunk in self.group(clnodes, cl, lookupcl, units=_('changesets')):
616 for chunk in self.group(clnodes, cl, lookupcl, units=_('changesets')):
616 size += len(chunk)
617 size += len(chunk)
617 yield chunk
618 yield chunk
618 self._verbosenote(_('%8.i (changelog)\n') % size)
619 self._verbosenote(_('%8.i (changelog)\n') % size)
619
620
620 # We need to make sure that the linkrev in the changegroup refers to
621 # We need to make sure that the linkrev in the changegroup refers to
621 # the first changeset that introduced the manifest or file revision.
622 # the first changeset that introduced the manifest or file revision.
622 # The fastpath is usually safer than the slowpath, because the filelogs
623 # The fastpath is usually safer than the slowpath, because the filelogs
623 # are walked in revlog order.
624 # are walked in revlog order.
624 #
625 #
625 # When taking the slowpath with reorder=None and the manifest revlog
626 # When taking the slowpath with reorder=None and the manifest revlog
626 # uses generaldelta, the manifest may be walked in the "wrong" order.
627 # uses generaldelta, the manifest may be walked in the "wrong" order.
627 # Without 'clrevorder', we would get an incorrect linkrev (see fix in
628 # Without 'clrevorder', we would get an incorrect linkrev (see fix in
628 # cc0ff93d0c0c).
629 # cc0ff93d0c0c).
629 #
630 #
630 # When taking the fastpath, we are only vulnerable to reordering
631 # When taking the fastpath, we are only vulnerable to reordering
631 # of the changelog itself. The changelog never uses generaldelta, so
632 # of the changelog itself. The changelog never uses generaldelta, so
632 # it is only reordered when reorder=True. To handle this case, we
633 # it is only reordered when reorder=True. To handle this case, we
633 # simply take the slowpath, which already has the 'clrevorder' logic.
634 # simply take the slowpath, which already has the 'clrevorder' logic.
634 # This was also fixed in cc0ff93d0c0c.
635 # This was also fixed in cc0ff93d0c0c.
635 fastpathlinkrev = fastpathlinkrev and not self._reorder
636 fastpathlinkrev = fastpathlinkrev and not self._reorder
636 # Treemanifests don't work correctly with fastpathlinkrev
637 # Treemanifests don't work correctly with fastpathlinkrev
637 # either, because we don't discover which directory nodes to
638 # either, because we don't discover which directory nodes to
638 # send along with files. This could probably be fixed.
639 # send along with files. This could probably be fixed.
639 fastpathlinkrev = fastpathlinkrev and (
640 fastpathlinkrev = fastpathlinkrev and (
640 'treemanifest' not in repo.requirements)
641 'treemanifest' not in repo.requirements)
641
642
642 for chunk in self.generatemanifests(commonrevs, clrevorder,
643 for chunk in self.generatemanifests(commonrevs, clrevorder,
643 fastpathlinkrev, mfs, fnodes):
644 fastpathlinkrev, mfs, fnodes):
644 yield chunk
645 yield chunk
645 mfs.clear()
646 mfs.clear()
646 clrevs = set(cl.rev(x) for x in clnodes)
647 clrevs = set(cl.rev(x) for x in clnodes)
647
648
648 if not fastpathlinkrev:
649 if not fastpathlinkrev:
649 def linknodes(unused, fname):
650 def linknodes(unused, fname):
650 return fnodes.get(fname, {})
651 return fnodes.get(fname, {})
651 else:
652 else:
652 cln = cl.node
653 cln = cl.node
653 def linknodes(filerevlog, fname):
654 def linknodes(filerevlog, fname):
654 llr = filerevlog.linkrev
655 llr = filerevlog.linkrev
655 fln = filerevlog.node
656 fln = filerevlog.node
656 revs = ((r, llr(r)) for r in filerevlog)
657 revs = ((r, llr(r)) for r in filerevlog)
657 return dict((fln(r), cln(lr)) for r, lr in revs if lr in clrevs)
658 return dict((fln(r), cln(lr)) for r, lr in revs if lr in clrevs)
658
659
659 for chunk in self.generatefiles(changedfiles, linknodes, commonrevs,
660 for chunk in self.generatefiles(changedfiles, linknodes, commonrevs,
660 source):
661 source):
661 yield chunk
662 yield chunk
662
663
663 yield self.close()
664 yield self.close()
664
665
665 if clnodes:
666 if clnodes:
666 repo.hook('outgoing', node=hex(clnodes[0]), source=source)
667 repo.hook('outgoing', node=hex(clnodes[0]), source=source)
667
668
668 def generatemanifests(self, commonrevs, clrevorder, fastpathlinkrev, mfs,
669 def generatemanifests(self, commonrevs, clrevorder, fastpathlinkrev, mfs,
669 fnodes):
670 fnodes):
670 repo = self._repo
671 repo = self._repo
671 mfl = repo.manifestlog
672 mfl = repo.manifestlog
672 dirlog = mfl._revlog.dirlog
673 dirlog = mfl._revlog.dirlog
673 tmfnodes = {'': mfs}
674 tmfnodes = {'': mfs}
674
675
675 # Callback for the manifest, used to collect linkrevs for filelog
676 # Callback for the manifest, used to collect linkrevs for filelog
676 # revisions.
677 # revisions.
677 # Returns the linkrev node (collected in lookupcl).
678 # Returns the linkrev node (collected in lookupcl).
678 def makelookupmflinknode(dir):
679 def makelookupmflinknode(dir):
679 if fastpathlinkrev:
680 if fastpathlinkrev:
680 assert not dir
681 assert not dir
681 return mfs.__getitem__
682 return mfs.__getitem__
682
683
683 def lookupmflinknode(x):
684 def lookupmflinknode(x):
684 """Callback for looking up the linknode for manifests.
685 """Callback for looking up the linknode for manifests.
685
686
686 Returns the linkrev node for the specified manifest.
687 Returns the linkrev node for the specified manifest.
687
688
688 SIDE EFFECT:
689 SIDE EFFECT:
689
690
690 1) fclnodes gets populated with the list of relevant
691 1) fclnodes gets populated with the list of relevant
691 file nodes if we're not using fastpathlinkrev
692 file nodes if we're not using fastpathlinkrev
692 2) When treemanifests are in use, collects treemanifest nodes
693 2) When treemanifests are in use, collects treemanifest nodes
693 to send
694 to send
694
695
695 Note that this means manifests must be completely sent to
696 Note that this means manifests must be completely sent to
696 the client before you can trust the list of files and
697 the client before you can trust the list of files and
697 treemanifests to send.
698 treemanifests to send.
698 """
699 """
699 clnode = tmfnodes[dir][x]
700 clnode = tmfnodes[dir][x]
700 mdata = mfl.get(dir, x).readfast(shallow=True)
701 mdata = mfl.get(dir, x).readfast(shallow=True)
701 for p, n, fl in mdata.iterentries():
702 for p, n, fl in mdata.iterentries():
702 if fl == 't': # subdirectory manifest
703 if fl == 't': # subdirectory manifest
703 subdir = dir + p + '/'
704 subdir = dir + p + '/'
704 tmfclnodes = tmfnodes.setdefault(subdir, {})
705 tmfclnodes = tmfnodes.setdefault(subdir, {})
705 tmfclnode = tmfclnodes.setdefault(n, clnode)
706 tmfclnode = tmfclnodes.setdefault(n, clnode)
706 if clrevorder[clnode] < clrevorder[tmfclnode]:
707 if clrevorder[clnode] < clrevorder[tmfclnode]:
707 tmfclnodes[n] = clnode
708 tmfclnodes[n] = clnode
708 else:
709 else:
709 f = dir + p
710 f = dir + p
710 fclnodes = fnodes.setdefault(f, {})
711 fclnodes = fnodes.setdefault(f, {})
711 fclnode = fclnodes.setdefault(n, clnode)
712 fclnode = fclnodes.setdefault(n, clnode)
712 if clrevorder[clnode] < clrevorder[fclnode]:
713 if clrevorder[clnode] < clrevorder[fclnode]:
713 fclnodes[n] = clnode
714 fclnodes[n] = clnode
714 return clnode
715 return clnode
715 return lookupmflinknode
716 return lookupmflinknode
716
717
717 size = 0
718 size = 0
718 while tmfnodes:
719 while tmfnodes:
719 dir = min(tmfnodes)
720 dir = min(tmfnodes)
720 nodes = tmfnodes[dir]
721 nodes = tmfnodes[dir]
721 prunednodes = self.prune(dirlog(dir), nodes, commonrevs)
722 prunednodes = self.prune(dirlog(dir), nodes, commonrevs)
722 if not dir or prunednodes:
723 if not dir or prunednodes:
723 for x in self._packmanifests(dir, prunednodes,
724 for x in self._packmanifests(dir, prunednodes,
724 makelookupmflinknode(dir)):
725 makelookupmflinknode(dir)):
725 size += len(x)
726 size += len(x)
726 yield x
727 yield x
727 del tmfnodes[dir]
728 del tmfnodes[dir]
728 self._verbosenote(_('%8.i (manifests)\n') % size)
729 self._verbosenote(_('%8.i (manifests)\n') % size)
729 yield self._manifestsdone()
730 yield self._manifestsdone()
730
731
731 # The 'source' parameter is useful for extensions
732 # The 'source' parameter is useful for extensions
732 def generatefiles(self, changedfiles, linknodes, commonrevs, source):
733 def generatefiles(self, changedfiles, linknodes, commonrevs, source):
733 repo = self._repo
734 repo = self._repo
734 progress = self._progress
735 progress = self._progress
735 msgbundling = _('bundling')
736 msgbundling = _('bundling')
736
737
737 total = len(changedfiles)
738 total = len(changedfiles)
738 # for progress output
739 # for progress output
739 msgfiles = _('files')
740 msgfiles = _('files')
740 for i, fname in enumerate(sorted(changedfiles)):
741 for i, fname in enumerate(sorted(changedfiles)):
741 filerevlog = repo.file(fname)
742 filerevlog = repo.file(fname)
742 if not filerevlog:
743 if not filerevlog:
743 raise error.Abort(_("empty or missing revlog for %s") % fname)
744 raise error.Abort(_("empty or missing revlog for %s") % fname)
744
745
745 linkrevnodes = linknodes(filerevlog, fname)
746 linkrevnodes = linknodes(filerevlog, fname)
746 # Lookup for filenodes, we collected the linkrev nodes above in the
747 # Lookup for filenodes, we collected the linkrev nodes above in the
747 # fastpath case and with lookupmf in the slowpath case.
748 # fastpath case and with lookupmf in the slowpath case.
748 def lookupfilelog(x):
749 def lookupfilelog(x):
749 return linkrevnodes[x]
750 return linkrevnodes[x]
750
751
751 filenodes = self.prune(filerevlog, linkrevnodes, commonrevs)
752 filenodes = self.prune(filerevlog, linkrevnodes, commonrevs)
752 if filenodes:
753 if filenodes:
753 progress(msgbundling, i + 1, item=fname, unit=msgfiles,
754 progress(msgbundling, i + 1, item=fname, unit=msgfiles,
754 total=total)
755 total=total)
755 h = self.fileheader(fname)
756 h = self.fileheader(fname)
756 size = len(h)
757 size = len(h)
757 yield h
758 yield h
758 for chunk in self.group(filenodes, filerevlog, lookupfilelog):
759 for chunk in self.group(filenodes, filerevlog, lookupfilelog):
759 size += len(chunk)
760 size += len(chunk)
760 yield chunk
761 yield chunk
761 self._verbosenote(_('%8.i %s\n') % (size, fname))
762 self._verbosenote(_('%8.i %s\n') % (size, fname))
762 progress(msgbundling, None)
763 progress(msgbundling, None)
763
764
764 def deltaparent(self, revlog, rev, p1, p2, prev):
765 def deltaparent(self, revlog, rev, p1, p2, prev):
765 return prev
766 return prev
766
767
767 def revchunk(self, revlog, rev, prev, linknode):
768 def revchunk(self, revlog, rev, prev, linknode):
768 node = revlog.node(rev)
769 node = revlog.node(rev)
769 p1, p2 = revlog.parentrevs(rev)
770 p1, p2 = revlog.parentrevs(rev)
770 base = self.deltaparent(revlog, rev, p1, p2, prev)
771 base = self.deltaparent(revlog, rev, p1, p2, prev)
771
772
772 prefix = ''
773 prefix = ''
773 if revlog.iscensored(base) or revlog.iscensored(rev):
774 if revlog.iscensored(base) or revlog.iscensored(rev):
774 try:
775 try:
775 delta = revlog.revision(node, raw=True)
776 delta = revlog.revision(node, raw=True)
776 except error.CensoredNodeError as e:
777 except error.CensoredNodeError as e:
777 delta = e.tombstone
778 delta = e.tombstone
778 if base == nullrev:
779 if base == nullrev:
779 prefix = mdiff.trivialdiffheader(len(delta))
780 prefix = mdiff.trivialdiffheader(len(delta))
780 else:
781 else:
781 baselen = revlog.rawsize(base)
782 baselen = revlog.rawsize(base)
782 prefix = mdiff.replacediffheader(baselen, len(delta))
783 prefix = mdiff.replacediffheader(baselen, len(delta))
783 elif base == nullrev:
784 elif base == nullrev:
784 delta = revlog.revision(node, raw=True)
785 delta = revlog.revision(node, raw=True)
785 prefix = mdiff.trivialdiffheader(len(delta))
786 prefix = mdiff.trivialdiffheader(len(delta))
786 else:
787 else:
787 delta = revlog.revdiff(base, rev)
788 delta = revlog.revdiff(base, rev)
788 p1n, p2n = revlog.parents(node)
789 p1n, p2n = revlog.parents(node)
789 basenode = revlog.node(base)
790 basenode = revlog.node(base)
790 flags = revlog.flags(rev)
791 flags = revlog.flags(rev)
791 meta = self.builddeltaheader(node, p1n, p2n, basenode, linknode, flags)
792 meta = self.builddeltaheader(node, p1n, p2n, basenode, linknode, flags)
792 meta += prefix
793 meta += prefix
793 l = len(meta) + len(delta)
794 l = len(meta) + len(delta)
794 yield chunkheader(l)
795 yield chunkheader(l)
795 yield meta
796 yield meta
796 yield delta
797 yield delta
797 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
798 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
798 # do nothing with basenode, it is implicitly the previous one in HG10
799 # do nothing with basenode, it is implicitly the previous one in HG10
799 # do nothing with flags, it is implicitly 0 for cg1 and cg2
800 # do nothing with flags, it is implicitly 0 for cg1 and cg2
800 return struct.pack(self.deltaheader, node, p1n, p2n, linknode)
801 return struct.pack(self.deltaheader, node, p1n, p2n, linknode)
801
802
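# As a standalone illustration of the length-prefixed framing that revchunk()
# emits via chunkheader(): Mercurial's chunk length field is a 4-byte
# big-endian signed integer that counts the header itself plus the payload,
# and a bare zero-length header terminates a group (what close() yields).
# The writechunks/readchunks helper names below are hypothetical, a sketch of
# the framing rather than the module's actual reader/writer:
#
#     import struct
#
#     def chunkheader(length):
#         # length field counts the 4 header bytes as well as the payload
#         return struct.pack(">l", length + 4)
#
#     def writechunks(chunks):
#         # frame each payload, then emit the empty end-of-group marker
#         out = b"".join(chunkheader(len(c)) + c for c in chunks)
#         return out + struct.pack(">l", 0)
#
#     def readchunks(data):
#         # inverse of writechunks: walk the stream to the zero-length marker
#         pos, chunks = 0, []
#         while pos < len(data):
#             (l,) = struct.unpack(">l", data[pos:pos + 4])
#             pos += 4
#             if l == 0:
#                 break
#             chunks.append(data[pos:pos + l - 4])
#             pos += l - 4
#         return chunks

```python
import struct

# Hypothetical standalone sketch of Mercurial's changegroup chunk framing
# (the chunkheader()/close() pattern used by revchunk above): the 4-byte
# big-endian signed length counts header plus payload; a zero-length header
# ends a group. writechunks/readchunks are illustrative names, not hg API.

def chunkheader(length):
    # length field counts the 4 header bytes as well as the payload
    return struct.pack(">l", length + 4)

def writechunks(chunks):
    # frame each payload, then emit the empty end-of-group marker
    out = b"".join(chunkheader(len(c)) + c for c in chunks)
    return out + struct.pack(">l", 0)

def readchunks(data):
    # inverse of writechunks: walk the stream to the zero-length marker
    pos, chunks = 0, []
    while pos < len(data):
        (l,) = struct.unpack(">l", data[pos:pos + 4])
        pos += 4
        if l == 0:
            break
        chunks.append(data[pos:pos + l - 4])
        pos += l - 4
    return chunks
```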
class cg2packer(cg1packer):
    version = '02'
    deltaheader = _CHANGEGROUPV2_DELTA_HEADER

    def __init__(self, repo, bundlecaps=None):
        super(cg2packer, self).__init__(repo, bundlecaps)
        if self._reorder is None:
            # Since generaldelta is directly supported by cg2, reordering
            # generally doesn't help, so we disable it by default (treating
            # bundle.reorder=auto just like bundle.reorder=False).
            self._reorder = False

    def deltaparent(self, revlog, rev, p1, p2, prev):
        dp = revlog.deltaparent(rev)
        if dp == nullrev and revlog.storedeltachains:
            # Avoid sending full revisions when delta parent is null. Pick prev
            # in that case. It's tempting to pick p1 in this case, as p1 will
            # be smaller in the common case. However, computing a delta against
            # p1 may require resolving the raw text of p1, which could be
            # expensive. The revlog caches should have prev cached, meaning
            # less CPU for changegroup generation. There is likely room to add
            # a flag and/or config option to control this behavior.
            return prev
        elif dp == nullrev:
            # revlog is configured to use full snapshot for a reason,
            # stick to full snapshot.
            return nullrev
        elif dp not in (p1, p2, prev):
            # Pick prev when we can't be sure remote has the base revision.
            return prev
        else:
            return dp

    def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
        # Do nothing with flags, it is implicitly 0 in cg1 and cg2
        return struct.pack(self.deltaheader, node, p1n, p2n, basenode, linknode)

class cg3packer(cg2packer):
    version = '03'
    deltaheader = _CHANGEGROUPV3_DELTA_HEADER

    def _packmanifests(self, dir, mfnodes, lookuplinknode):
        if dir:
            yield self.fileheader(dir)

        dirlog = self._repo.manifestlog._revlog.dirlog(dir)
        for chunk in self.group(mfnodes, dirlog, lookuplinknode,
                                units=_('manifests')):
            yield chunk

    def _manifestsdone(self):
        return self.close()

    def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
        return struct.pack(
            self.deltaheader, node, p1n, p2n, basenode, linknode, flags)

_packermap = {'01': (cg1packer, cg1unpacker),
              # cg2 adds support for exchanging generaldelta
              '02': (cg2packer, cg2unpacker),
              # cg3 adds support for exchanging revlog flags and treemanifests
              '03': (cg3packer, cg3unpacker),
}

def allsupportedversions(repo):
    versions = set(_packermap.keys())
    if not (repo.ui.configbool('experimental', 'changegroup3') or
            repo.ui.configbool('experimental', 'treemanifest') or
            'treemanifest' in repo.requirements):
        versions.discard('03')
    return versions

# Changegroup versions that can be applied to the repo
def supportedincomingversions(repo):
    return allsupportedversions(repo)

# Changegroup versions that can be created from the repo
def supportedoutgoingversions(repo):
    versions = allsupportedversions(repo)
    if 'treemanifest' in repo.requirements:
        # Versions 01 and 02 support only flat manifests and it's just too
        # expensive to convert between the flat manifest and tree manifest on
        # the fly. Since tree manifests are hashed differently, all of history
        # would have to be converted. Instead, we simply don't even pretend to
        # support versions 01 and 02.
        versions.discard('01')
        versions.discard('02')
    return versions

def safeversion(repo):
    # Finds the smallest version that it's safe to assume clients of the repo
    # will support. For example, all hg versions that support generaldelta also
    # support changegroup 02.
    versions = supportedoutgoingversions(repo)
    if 'generaldelta' in repo.requirements:
        versions.discard('01')
    assert versions
    return min(versions)

def getbundler(version, repo, bundlecaps=None):
    assert version in supportedoutgoingversions(repo)
    return _packermap[version][0](repo, bundlecaps)

def getunbundler(version, fh, alg, extras=None):
    return _packermap[version][1](fh, alg, extras=extras)

def _changegroupinfo(repo, nodes, source):
    if repo.ui.verbose or source == 'bundle':
        repo.ui.status(_("%d changesets found\n") % len(nodes))
        if repo.ui.debugflag:
            repo.ui.debug("list of changesets:\n")
            for node in nodes:
                repo.ui.debug("%s\n" % hex(node))

def getsubsetraw(repo, outgoing, bundler, source, fastpath=False):
    repo = repo.unfiltered()
    commonrevs = outgoing.common
    csets = outgoing.missing
    heads = outgoing.missingheads
    # We go through the fast path if we get told to, or if all (unfiltered)
    # heads have been requested (since we then know that all linkrevs will
    # be pulled by the client).
    heads.sort()
    fastpathlinkrev = fastpath or (
        repo.filtername is None and heads == sorted(repo.heads()))

    repo.hook('preoutgoing', throw=True, source=source)
    _changegroupinfo(repo, csets, source)
    return bundler.generate(commonrevs, csets, fastpathlinkrev, source)

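# safeversion() and getbundler() above pick a version out of the supported
# set; when two peers negotiate, callers typically intersect the local
# supportedoutgoingversions() with the versions the peer advertises and take
# the newest common one. A hedged standalone sketch of that selection pattern
# (pickversion is a hypothetical helper name, not part of this module):

```python
def pickversion(localversions, remoteversions):
    # hypothetical helper: choose the newest changegroup version both sides
    # speak; version strings like '01' < '02' < '03' compare correctly as
    # strings, so max() picks the most capable common format
    common = set(localversions) & set(remoteversions)
    if not common:
        raise ValueError("no common changegroup version")
    return max(common)
```

For example, a '03'-capable server talking to a client that only advertises '01' and '02' would settle on '02'.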
def getsubset(repo, outgoing, bundler, source, fastpath=False):
    gengroup = getsubsetraw(repo, outgoing, bundler, source, fastpath)
    return getunbundler(bundler.version, util.chunkbuffer(gengroup), None,
                        {'clcount': len(outgoing.missing)})

def changegroupsubset(repo, roots, heads, source, version='01'):
    """Compute a changegroup consisting of all the nodes that are
    descendants of any of the roots and ancestors of any of the heads.
    Return a chunkbuffer object whose read() method will return
    successive changegroup chunks.

    It is fairly complex as determining which filenodes and which
    manifest nodes need to be included for the changeset to be complete
    is non-trivial.

    Another wrinkle is doing the reverse, figuring out which changeset in
    the changegroup a particular filenode or manifestnode belongs to.
    """
    outgoing = discovery.outgoing(repo, missingroots=roots, missingheads=heads)
    bundler = getbundler(version, repo)
    return getsubset(repo, outgoing, bundler, source)

def getlocalchangegroupraw(repo, source, outgoing, bundlecaps=None,
                           version='01'):
    """Like getbundle, but taking a discovery.outgoing as an argument.

    This is only implemented for local repos and reuses potentially
    precomputed sets in outgoing. Returns a raw changegroup generator."""
    if not outgoing.missing:
        return None
    bundler = getbundler(version, repo, bundlecaps)
    return getsubsetraw(repo, outgoing, bundler, source)

def getchangegroup(repo, source, outgoing, bundlecaps=None,
                   version='01'):
    """Like getbundle, but taking a discovery.outgoing as an argument.

    This is only implemented for local repos and reuses potentially
    precomputed sets in outgoing."""
    if not outgoing.missing:
        return None
    bundler = getbundler(version, repo, bundlecaps)
    return getsubset(repo, outgoing, bundler, source)

def getlocalchangegroup(repo, *args, **kwargs):
    repo.ui.deprecwarn('getlocalchangegroup is deprecated, use getchangegroup',
                       '4.3')
    return getchangegroup(repo, *args, **kwargs)

def changegroup(repo, basenodes, source):
    # to avoid a race we use changegroupsubset() (issue1320)
    return changegroupsubset(repo, basenodes, repo.heads(), source)

def _addchangegroupfiles(repo, source, revmap, trp, expectedfiles, needfiles):
    revisions = 0
    files = 0
    for chunkdata in iter(source.filelogheader, {}):
        files += 1
        f = chunkdata["filename"]
        repo.ui.debug("adding %s revisions\n" % f)
        repo.ui.progress(_('files'), files, unit=_('files'),
                         total=expectedfiles)
        fl = repo.file(f)
        o = len(fl)
        try:
            if not fl.addgroup(source, revmap, trp):
                raise error.Abort(_("received file revlog group is empty"))
        except error.CensoredBaseError as e:
            raise error.Abort(_("received delta base is censored: %s") % e)
        revisions += len(fl) - o
        if f in needfiles:
            needs = needfiles[f]
            for new in xrange(o, len(fl)):
                n = fl.node(new)
                if n in needs:
                    needs.remove(n)
                else:
                    raise error.Abort(
                        _("received spurious file revlog entry"))
1011 if not needs:
1012 if not needs:
1012 del needfiles[f]
1013 del needfiles[f]
1013 repo.ui.progress(_('files'), None)
1014 repo.ui.progress(_('files'), None)
1014
1015
1015 for f, needs in needfiles.iteritems():
1016 for f, needs in needfiles.iteritems():
1016 fl = repo.file(f)
1017 fl = repo.file(f)
1017 for n in needs:
1018 for n in needs:
1018 try:
1019 try:
1019 fl.rev(n)
1020 fl.rev(n)
1020 except error.LookupError:
1021 except error.LookupError:
1021 raise error.Abort(
1022 raise error.Abort(
1022 _('missing file data for %s:%s - run hg verify') %
1023 _('missing file data for %s:%s - run hg verify') %
1023 (f, hex(n)))
1024 (f, hex(n)))
1024
1025
1025 return revisions, files
1026 return revisions, files
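The `getlocalchangegroup` wrapper above illustrates a common deprecation pattern: emit a versioned warning through `ui.deprecwarn` and then forward all arguments unchanged to the replacement API. A minimal standalone sketch of that pattern follows; the `FakeUI` and `FakeRepo` classes are hypothetical stand-ins for Mercurial's real objects, for illustration only.

```python
# Sketch of the deprecation-shim pattern used by getlocalchangegroup:
# warn through a ui-like object, then delegate to the new API.
# FakeUI/FakeRepo are hypothetical stand-ins, not Mercurial's classes.

class FakeUI(object):
    """Records deprecation warnings instead of printing them."""
    def __init__(self):
        self.warnings = []

    def deprecwarn(self, msg, version):
        # The real ui.deprecwarn also consults devel.* config options;
        # here we only record the message and the removal version.
        self.warnings.append((msg, version))

class FakeRepo(object):
    def __init__(self):
        self.ui = FakeUI()

def getchangegroup(repo, source, outgoing, bundlecaps=None, version='01'):
    # Stand-in for the real implementation: just echo its arguments.
    return (repo, source, outgoing, bundlecaps, version)

def getlocalchangegroup(repo, *args, **kwargs):
    # Deprecated alias: warn once, then forward everything unchanged.
    repo.ui.deprecwarn(
        'getlocalchangegroup is deprecated, use getchangegroup', '4.3')
    return getchangegroup(repo, *args, **kwargs)

repo = FakeRepo()
outgoing = object()
result = getlocalchangegroup(repo, 'pull', outgoing)
```

Because the shim uses `*args, **kwargs`, callers of the old name keep working with any signature the new function grows, while the warning tells them which release to migrate by.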
@@ -1,5400 +1,5400 b''
# commands.py - command processing for mercurial
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import difflib
import errno
import os
import re
import sys

from .i18n import _
from .node import (
    hex,
    nullid,
    nullrev,
    short,
)
from . import (
    archival,
    bookmarks,
    bundle2,
    changegroup,
    cmdutil,
    copies,
    debugcommands as debugcommandsmod,
    destutil,
    dirstateguard,
    discovery,
    encoding,
    error,
    exchange,
    extensions,
    formatter,
    graphmod,
    hbisect,
    help,
    hg,
    lock as lockmod,
    merge as mergemod,
    obsolete,
    patch,
    phases,
    pycompat,
    rcutil,
    registrar,
    revsetlang,
    scmutil,
    server,
    sshserver,
    streamclone,
    tags as tagsmod,
    templatekw,
    ui as uimod,
    util,
)

release = lockmod.release

table = {}
table.update(debugcommandsmod.command._table)

command = registrar.command(table)

# common command options

globalopts = [
    ('R', 'repository', '',
     _('repository root directory or name of overlay bundle file'),
     _('REPO')),
    ('', 'cwd', '',
     _('change working directory'), _('DIR')),
    ('y', 'noninteractive', None,
     _('do not prompt, automatically pick the first choice for all prompts')),
    ('q', 'quiet', None, _('suppress output')),
    ('v', 'verbose', None, _('enable additional output')),
    ('', 'color', '',
     # i18n: 'always', 'auto', 'never', and 'debug' are keywords
     # and should not be translated
     _("when to colorize (boolean, always, auto, never, or debug)"),
     _('TYPE')),
    ('', 'config', [],
     _('set/override config option (use \'section.name=value\')'),
     _('CONFIG')),
    ('', 'debug', None, _('enable debugging output')),
    ('', 'debugger', None, _('start debugger')),
    ('', 'encoding', encoding.encoding, _('set the charset encoding'),
     _('ENCODE')),
    ('', 'encodingmode', encoding.encodingmode,
     _('set the charset encoding mode'), _('MODE')),
    ('', 'traceback', None, _('always print a traceback on exception')),
    ('', 'time', None, _('time how long the command takes')),
    ('', 'profile', None, _('print command execution profile')),
    ('', 'version', None, _('output version information and exit')),
    ('h', 'help', None, _('display help and exit')),
    ('', 'hidden', False, _('consider hidden changesets')),
    ('', 'pager', 'auto',
     _("when to paginate (boolean, always, auto, or never)"), _('TYPE')),
]

dryrunopts = cmdutil.dryrunopts
remoteopts = cmdutil.remoteopts
walkopts = cmdutil.walkopts
commitopts = cmdutil.commitopts
commitopts2 = cmdutil.commitopts2
formatteropts = cmdutil.formatteropts
templateopts = cmdutil.templateopts
logopts = cmdutil.logopts
diffopts = cmdutil.diffopts
diffwsopts = cmdutil.diffwsopts
diffopts2 = cmdutil.diffopts2
mergetoolopts = cmdutil.mergetoolopts
similarityopts = cmdutil.similarityopts
subrepoopts = cmdutil.subrepoopts
debugrevlogopts = cmdutil.debugrevlogopts

# Commands start here, listed alphabetically

@command('^add',
    walkopts + subrepoopts + dryrunopts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def add(ui, repo, *pats, **opts):
    """add the specified files on the next commit

    Schedule files to be version controlled and added to the
    repository.

    The files will be added to the repository at the next commit. To
    undo an add before that, see :hg:`forget`.

    If no names are given, add all files to the repository (except
    files matching ``.hgignore``).

    .. container:: verbose

       Examples:

         - New (unknown) files are added
           automatically by :hg:`add`::

             $ ls
             foo.c
             $ hg status
             ? foo.c
             $ hg add
             adding foo.c
             $ hg status
             A foo.c

         - Specific files to be added can be specified::

             $ ls
             bar.c  foo.c
             $ hg status
             ? bar.c
             ? foo.c
             $ hg add bar.c
             $ hg status
             A bar.c
             ? foo.c

    Returns 0 if all files are successfully added.
    """

    m = scmutil.match(repo[None], pats, pycompat.byteskwargs(opts))
    rejected = cmdutil.add(ui, repo, m, "", False, **opts)
    return rejected and 1 or 0

@command('addremove',
    similarityopts + subrepoopts + walkopts + dryrunopts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def addremove(ui, repo, *pats, **opts):
    """add all new files, delete all missing files

    Add all new files and remove all missing files from the
    repository.

    Unless names are given, new files are ignored if they match any of
    the patterns in ``.hgignore``. As with add, these changes take
    effect at the next commit.

    Use the -s/--similarity option to detect renamed files. This
    option takes a percentage between 0 (disabled) and 100 (files must
    be identical) as its parameter. With a parameter greater than 0,
    this compares every removed file with every added file and records
    those similar enough as renames. Detecting renamed files this way
    can be expensive. After using this option, :hg:`status -C` can be
    used to check which files were identified as moved or renamed. If
    not specified, -s/--similarity defaults to 100 and only renames of
    identical files are detected.

    .. container:: verbose

       Examples:

         - A number of files (bar.c and foo.c) are new,
           while foobar.c has been removed (without using :hg:`remove`)
           from the repository::

             $ ls
             bar.c foo.c
             $ hg status
             ! foobar.c
             ? bar.c
             ? foo.c
             $ hg addremove
             adding bar.c
             adding foo.c
             removing foobar.c
             $ hg status
             A bar.c
             A foo.c
             R foobar.c

         - A file foobar.c was moved to foo.c without using :hg:`rename`.
           Afterwards, it was edited slightly::

             $ ls
             foo.c
             $ hg status
             ! foobar.c
             ? foo.c
             $ hg addremove --similarity 90
             removing foobar.c
             adding foo.c
             recording removal of foobar.c as rename to foo.c (94% similar)
             $ hg status -C
             A foo.c
               foobar.c
             R foobar.c

    Returns 0 if all files are successfully added.
    """
    opts = pycompat.byteskwargs(opts)
    try:
        sim = float(opts.get('similarity') or 100)
    except ValueError:
        raise error.Abort(_('similarity must be a number'))
    if sim < 0 or sim > 100:
        raise error.Abort(_('similarity must be between 0 and 100'))
    matcher = scmutil.match(repo[None], pats, opts)
    return scmutil.addremove(repo, matcher, "", opts, similarity=sim / 100.0)

@command('^annotate|blame',
    [('r', 'rev', '', _('annotate the specified revision'), _('REV')),
    ('', 'follow', None,
     _('follow copies/renames and list the filename (DEPRECATED)')),
    ('', 'no-follow', None, _("don't follow copies and renames")),
    ('a', 'text', None, _('treat all files as text')),
    ('u', 'user', None, _('list the author (long with -v)')),
    ('f', 'file', None, _('list the filename')),
    ('d', 'date', None, _('list the date (short with -q)')),
    ('n', 'number', None, _('list the revision number (default)')),
    ('c', 'changeset', None, _('list the changeset')),
    ('l', 'line-number', None, _('show line number at the first appearance')),
    ('', 'skip', [], _('revision to not display (EXPERIMENTAL)'), _('REV')),
    ] + diffwsopts + walkopts + formatteropts,
    _('[-r REV] [-f] [-a] [-u] [-d] [-n] [-c] [-l] FILE...'),
    inferrepo=True)
def annotate(ui, repo, *pats, **opts):
    """show changeset information by line for each file

    List changes in files, showing the revision id responsible for
    each line.

    This command is useful for discovering when a change was made and
    by whom.

    If you include --file, --user, or --date, the revision number is
    suppressed unless you also include --number.

    Without the -a/--text option, annotate will avoid processing files
    it detects as binary. With -a, annotate will annotate the file
    anyway, although the results will probably be neither useful
    nor desirable.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    if not pats:
        raise error.Abort(_('at least one filename or pattern is required'))

    if opts.get('follow'):
        # --follow is deprecated and now just an alias for -f/--file
        # to mimic the behavior of Mercurial before version 1.5
        opts['file'] = True

    ctx = scmutil.revsingle(repo, opts.get('rev'))

    rootfm = ui.formatter('annotate', opts)
    if ui.quiet:
        datefunc = util.shortdate
    else:
        datefunc = util.datestr
    if ctx.rev() is None:
        def hexfn(node):
            if node is None:
                return None
            else:
                return rootfm.hexfunc(node)
        if opts.get('changeset'):
            # omit "+" suffix which is appended to node hex
            def formatrev(rev):
                if rev is None:
                    return '%d' % ctx.p1().rev()
                else:
                    return '%d' % rev
        else:
            def formatrev(rev):
                if rev is None:
                    return '%d+' % ctx.p1().rev()
                else:
                    return '%d ' % rev
        def formathex(hex):
            if hex is None:
                return '%s+' % rootfm.hexfunc(ctx.p1().node())
            else:
                return '%s ' % hex
    else:
        hexfn = rootfm.hexfunc
        formatrev = formathex = str

    opmap = [('user', ' ', lambda x: x[0].user(), ui.shortuser),
             ('number', ' ', lambda x: x[0].rev(), formatrev),
             ('changeset', ' ', lambda x: hexfn(x[0].node()), formathex),
             ('date', ' ', lambda x: x[0].date(), util.cachefunc(datefunc)),
             ('file', ' ', lambda x: x[0].path(), str),
             ('line_number', ':', lambda x: x[1], str),
            ]
    fieldnamemap = {'number': 'rev', 'changeset': 'node'}

    if (not opts.get('user') and not opts.get('changeset')
        and not opts.get('date') and not opts.get('file')):
        opts['number'] = True

    linenumber = opts.get('line_number') is not None
    if linenumber and (not opts.get('changeset')) and (not opts.get('number')):
        raise error.Abort(_('at least one of -n/-c is required for -l'))

    ui.pager('annotate')

    if rootfm.isplain():
        def makefunc(get, fmt):
            return lambda x: fmt(get(x))
    else:
        def makefunc(get, fmt):
            return get
    funcmap = [(makefunc(get, fmt), sep) for op, sep, get, fmt in opmap
               if opts.get(op)]
    funcmap[0] = (funcmap[0][0], '')  # no separator in front of first column
    fields = ' '.join(fieldnamemap.get(op, op) for op, sep, get, fmt in opmap
                      if opts.get(op))

    def bad(x, y):
        raise error.Abort("%s: %s" % (x, y))

    m = scmutil.match(ctx, pats, opts, badfn=bad)

    follow = not opts.get('no_follow')
    diffopts = patch.difffeatureopts(ui, opts, section='annotate',
                                     whitespace=True)
    skiprevs = opts.get('skip')
    if skiprevs:
        skiprevs = scmutil.revrange(repo, skiprevs)

    for abs in ctx.walk(m):
        fctx = ctx[abs]
        rootfm.startitem()
        rootfm.data(abspath=abs, path=m.rel(abs))
        if not opts.get('text') and fctx.isbinary():
            rootfm.plain(_("%s: binary file\n")
                         % ((pats and m.rel(abs)) or abs))
            continue

        fm = rootfm.nested('lines')
        lines = fctx.annotate(follow=follow, linenumber=linenumber,
                              skiprevs=skiprevs, diffopts=diffopts)
        if not lines:
            fm.end()
            continue
        formats = []
        pieces = []

        for f, sep in funcmap:
            l = [f(n) for n, dummy in lines]
            if fm.isplain():
                sizes = [encoding.colwidth(x) for x in l]
                ml = max(sizes)
                formats.append([sep + ' ' * (ml - w) + '%s' for w in sizes])
            else:
                formats.append(['%s' for x in l])
            pieces.append(l)

        for f, p, l in zip(zip(*formats), zip(*pieces), lines):
            fm.startitem()
            fm.write(fields, "".join(f), *p)
            fm.write('line', ": %s", l[1])

        if not lines[-1][1].endswith('\n'):
            fm.plain('\n')
        fm.end()

    rootfm.end()

@command('archive',
    [('', 'no-decode', None, _('do not pass files through decoders')),
    ('p', 'prefix', '', _('directory prefix for files in archive'),
     _('PREFIX')),
    ('r', 'rev', '', _('revision to distribute'), _('REV')),
    ('t', 'type', '', _('type of distribution to create'), _('TYPE')),
    ] + subrepoopts + walkopts,
    _('[OPTION]... DEST'))
def archive(ui, repo, dest, **opts):
    '''create an unversioned archive of a repository revision

    By default, the revision used is the parent of the working
    directory; use -r/--rev to specify a different revision.

    The archive type is automatically detected based on file
    extension (to override, use -t/--type).

    .. container:: verbose

      Examples:

      - create a zip file containing the 1.0 release::

          hg archive -r 1.0 project-1.0.zip

      - create a tarball excluding .hg files::

          hg archive project.tar.gz -X ".hg*"

    Valid types are:

    :``files``: a directory full of files (default)
    :``tar``:   tar archive, uncompressed
    :``tbz2``:  tar archive, compressed using bzip2
    :``tgz``:   tar archive, compressed using gzip
    :``uzip``:  zip archive, uncompressed
    :``zip``:   zip archive, compressed using deflate

    The exact name of the destination archive or directory is given
    using a format string; see :hg:`help export` for details.

    Each member added to an archive file has a directory prefix
    prepended. Use -p/--prefix to specify a format string for the
    prefix. The default is the basename of the archive, with suffixes
    removed.

    Returns 0 on success.
    '''

    opts = pycompat.byteskwargs(opts)
    ctx = scmutil.revsingle(repo, opts.get('rev'))
    if not ctx:
        raise error.Abort(_('no working directory: please specify a revision'))
    node = ctx.node()
    dest = cmdutil.makefilename(repo, dest, node)
    if os.path.realpath(dest) == repo.root:
        raise error.Abort(_('repository root cannot be destination'))

    kind = opts.get('type') or archival.guesskind(dest) or 'files'
    prefix = opts.get('prefix')

    if dest == '-':
        if kind == 'files':
            raise error.Abort(_('cannot archive plain files to stdout'))
        dest = cmdutil.makefileobj(repo, dest)
        if not prefix:
            prefix = os.path.basename(repo.root) + '-%h'

    prefix = cmdutil.makefilename(repo, prefix, node)
    matchfn = scmutil.match(ctx, [], opts)
    archival.archive(repo, dest, node, kind, not opts.get('no_decode'),
                     matchfn, prefix, subrepos=opts.get('subrepos'))

@command('backout',
    [('', 'merge', None, _('merge with old dirstate parent after backout')),
    ('', 'commit', None,
     _('commit if no conflicts were encountered (DEPRECATED)')),
    ('', 'no-commit', None, _('do not commit')),
    ('', 'parent', '',
     _('parent to choose when backing out merge (DEPRECATED)'), _('REV')),
    ('r', 'rev', '', _('revision to backout'), _('REV')),
    ('e', 'edit', False, _('invoke editor on commit messages')),
    ] + mergetoolopts + walkopts + commitopts + commitopts2,
    _('[OPTION]... [-r] REV'))
def backout(ui, repo, node=None, rev=None, **opts):
    '''reverse effect of earlier changeset

    Prepare a new changeset with the effect of REV undone in the
    current working directory. If no conflicts were encountered,
    it will be committed immediately.

    If REV is the parent of the working directory, then this new changeset
    is committed automatically (unless --no-commit is specified).

    .. note::

       :hg:`backout` cannot be used to fix either an unwanted or
508 incorrect merge.
508 incorrect merge.
509
509
510 .. container:: verbose
510 .. container:: verbose
511
511
512 Examples:
512 Examples:
513
513
514 - Reverse the effect of the parent of the working directory.
514 - Reverse the effect of the parent of the working directory.
515 This backout will be committed immediately::
515 This backout will be committed immediately::
516
516
517 hg backout -r .
517 hg backout -r .
518
518
519 - Reverse the effect of previous bad revision 23::
519 - Reverse the effect of previous bad revision 23::
520
520
521 hg backout -r 23
521 hg backout -r 23
522
522
523 - Reverse the effect of previous bad revision 23 and
523 - Reverse the effect of previous bad revision 23 and
524 leave changes uncommitted::
524 leave changes uncommitted::
525
525
526 hg backout -r 23 --no-commit
526 hg backout -r 23 --no-commit
527 hg commit -m "Backout revision 23"
527 hg commit -m "Backout revision 23"
528
528
529 By default, the pending changeset will have one parent,
529 By default, the pending changeset will have one parent,
530 maintaining a linear history. With --merge, the pending
530 maintaining a linear history. With --merge, the pending
531 changeset will instead have two parents: the old parent of the
531 changeset will instead have two parents: the old parent of the
532 working directory and a new child of REV that simply undoes REV.
532 working directory and a new child of REV that simply undoes REV.
533
533
534 Before version 1.7, the behavior without --merge was equivalent
534 Before version 1.7, the behavior without --merge was equivalent
535 to specifying --merge followed by :hg:`update --clean .` to
535 to specifying --merge followed by :hg:`update --clean .` to
536 cancel the merge and leave the child of REV as a head to be
536 cancel the merge and leave the child of REV as a head to be
537 merged separately.
537 merged separately.
538
538
539 See :hg:`help dates` for a list of formats valid for -d/--date.
539 See :hg:`help dates` for a list of formats valid for -d/--date.
540
540
541 See :hg:`help revert` for a way to restore files to the state
541 See :hg:`help revert` for a way to restore files to the state
542 of another revision.
542 of another revision.
543
543
544 Returns 0 on success, 1 if nothing to backout or there are unresolved
544 Returns 0 on success, 1 if nothing to backout or there are unresolved
545 files.
545 files.
546 '''
546 '''
547 wlock = lock = None
547 wlock = lock = None
548 try:
548 try:
549 wlock = repo.wlock()
549 wlock = repo.wlock()
550 lock = repo.lock()
550 lock = repo.lock()
551 return _dobackout(ui, repo, node, rev, **opts)
551 return _dobackout(ui, repo, node, rev, **opts)
552 finally:
552 finally:
553 release(lock, wlock)
553 release(lock, wlock)
554
554
555 def _dobackout(ui, repo, node=None, rev=None, **opts):
555 def _dobackout(ui, repo, node=None, rev=None, **opts):
556 opts = pycompat.byteskwargs(opts)
556 opts = pycompat.byteskwargs(opts)
557 if opts.get('commit') and opts.get('no_commit'):
557 if opts.get('commit') and opts.get('no_commit'):
558 raise error.Abort(_("cannot use --commit with --no-commit"))
558 raise error.Abort(_("cannot use --commit with --no-commit"))
559 if opts.get('merge') and opts.get('no_commit'):
559 if opts.get('merge') and opts.get('no_commit'):
560 raise error.Abort(_("cannot use --merge with --no-commit"))
560 raise error.Abort(_("cannot use --merge with --no-commit"))
561
561
562 if rev and node:
562 if rev and node:
563 raise error.Abort(_("please specify just one revision"))
563 raise error.Abort(_("please specify just one revision"))
564
564
565 if not rev:
565 if not rev:
566 rev = node
566 rev = node
567
567
568 if not rev:
568 if not rev:
569 raise error.Abort(_("please specify a revision to backout"))
569 raise error.Abort(_("please specify a revision to backout"))
570
570
571 date = opts.get('date')
571 date = opts.get('date')
572 if date:
572 if date:
573 opts['date'] = util.parsedate(date)
573 opts['date'] = util.parsedate(date)
574
574
575 cmdutil.checkunfinished(repo)
575 cmdutil.checkunfinished(repo)
576 cmdutil.bailifchanged(repo)
576 cmdutil.bailifchanged(repo)
577 node = scmutil.revsingle(repo, rev).node()
577 node = scmutil.revsingle(repo, rev).node()
578
578
579 op1, op2 = repo.dirstate.parents()
579 op1, op2 = repo.dirstate.parents()
580 if not repo.changelog.isancestor(node, op1):
580 if not repo.changelog.isancestor(node, op1):
581 raise error.Abort(_('cannot backout change that is not an ancestor'))
581 raise error.Abort(_('cannot backout change that is not an ancestor'))
582
582
583 p1, p2 = repo.changelog.parents(node)
583 p1, p2 = repo.changelog.parents(node)
584 if p1 == nullid:
584 if p1 == nullid:
585 raise error.Abort(_('cannot backout a change with no parents'))
585 raise error.Abort(_('cannot backout a change with no parents'))
586 if p2 != nullid:
586 if p2 != nullid:
587 if not opts.get('parent'):
587 if not opts.get('parent'):
588 raise error.Abort(_('cannot backout a merge changeset'))
588 raise error.Abort(_('cannot backout a merge changeset'))
589 p = repo.lookup(opts['parent'])
589 p = repo.lookup(opts['parent'])
590 if p not in (p1, p2):
590 if p not in (p1, p2):
591 raise error.Abort(_('%s is not a parent of %s') %
591 raise error.Abort(_('%s is not a parent of %s') %
592 (short(p), short(node)))
592 (short(p), short(node)))
593 parent = p
593 parent = p
594 else:
594 else:
595 if opts.get('parent'):
595 if opts.get('parent'):
596 raise error.Abort(_('cannot use --parent on non-merge changeset'))
596 raise error.Abort(_('cannot use --parent on non-merge changeset'))
597 parent = p1
597 parent = p1
598
598
599 # the backout should appear on the same branch
599 # the backout should appear on the same branch
600 branch = repo.dirstate.branch()
600 branch = repo.dirstate.branch()
601 bheads = repo.branchheads(branch)
601 bheads = repo.branchheads(branch)
602 rctx = scmutil.revsingle(repo, hex(parent))
602 rctx = scmutil.revsingle(repo, hex(parent))
603 if not opts.get('merge') and op1 != node:
603 if not opts.get('merge') and op1 != node:
604 dsguard = dirstateguard.dirstateguard(repo, 'backout')
604 dsguard = dirstateguard.dirstateguard(repo, 'backout')
605 try:
605 try:
606 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
606 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
607 'backout')
607 'backout')
608 stats = mergemod.update(repo, parent, True, True, node, False)
608 stats = mergemod.update(repo, parent, True, True, node, False)
609 repo.setparents(op1, op2)
609 repo.setparents(op1, op2)
610 dsguard.close()
610 dsguard.close()
611 hg._showstats(repo, stats)
611 hg._showstats(repo, stats)
612 if stats[3]:
612 if stats[3]:
613 repo.ui.status(_("use 'hg resolve' to retry unresolved "
613 repo.ui.status(_("use 'hg resolve' to retry unresolved "
614 "file merges\n"))
614 "file merges\n"))
615 return 1
615 return 1
616 finally:
616 finally:
617 ui.setconfig('ui', 'forcemerge', '', '')
617 ui.setconfig('ui', 'forcemerge', '', '')
618 lockmod.release(dsguard)
618 lockmod.release(dsguard)
619 else:
619 else:
620 hg.clean(repo, node, show_stats=False)
620 hg.clean(repo, node, show_stats=False)
621 repo.dirstate.setbranch(branch)
621 repo.dirstate.setbranch(branch)
622 cmdutil.revert(ui, repo, rctx, repo.dirstate.parents())
622 cmdutil.revert(ui, repo, rctx, repo.dirstate.parents())
623
623
624 if opts.get('no_commit'):
624 if opts.get('no_commit'):
625 msg = _("changeset %s backed out, "
625 msg = _("changeset %s backed out, "
626 "don't forget to commit.\n")
626 "don't forget to commit.\n")
627 ui.status(msg % short(node))
627 ui.status(msg % short(node))
628 return 0
628 return 0
629
629
630 def commitfunc(ui, repo, message, match, opts):
630 def commitfunc(ui, repo, message, match, opts):
631 editform = 'backout'
631 editform = 'backout'
632 e = cmdutil.getcommiteditor(editform=editform,
632 e = cmdutil.getcommiteditor(editform=editform,
633 **pycompat.strkwargs(opts))
633 **pycompat.strkwargs(opts))
634 if not message:
634 if not message:
635 # we don't translate commit messages
635 # we don't translate commit messages
636 message = "Backed out changeset %s" % short(node)
636 message = "Backed out changeset %s" % short(node)
637 e = cmdutil.getcommiteditor(edit=True, editform=editform)
637 e = cmdutil.getcommiteditor(edit=True, editform=editform)
638 return repo.commit(message, opts.get('user'), opts.get('date'),
638 return repo.commit(message, opts.get('user'), opts.get('date'),
639 match, editor=e)
639 match, editor=e)
640 newnode = cmdutil.commit(ui, repo, commitfunc, [], opts)
640 newnode = cmdutil.commit(ui, repo, commitfunc, [], opts)
641 if not newnode:
641 if not newnode:
642 ui.status(_("nothing changed\n"))
642 ui.status(_("nothing changed\n"))
643 return 1
643 return 1
644 cmdutil.commitstatus(repo, newnode, branch, bheads)
644 cmdutil.commitstatus(repo, newnode, branch, bheads)
645
645
646 def nice(node):
646 def nice(node):
647 return '%d:%s' % (repo.changelog.rev(node), short(node))
647 return '%d:%s' % (repo.changelog.rev(node), short(node))
648 ui.status(_('changeset %s backs out changeset %s\n') %
648 ui.status(_('changeset %s backs out changeset %s\n') %
649 (nice(repo.changelog.tip()), nice(node)))
649 (nice(repo.changelog.tip()), nice(node)))
650 if opts.get('merge') and op1 != node:
650 if opts.get('merge') and op1 != node:
651 hg.clean(repo, op1, show_stats=False)
651 hg.clean(repo, op1, show_stats=False)
652 ui.status(_('merging with changeset %s\n')
652 ui.status(_('merging with changeset %s\n')
653 % nice(repo.changelog.tip()))
653 % nice(repo.changelog.tip()))
654 try:
654 try:
655 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
655 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
656 'backout')
656 'backout')
657 return hg.merge(repo, hex(repo.changelog.tip()))
657 return hg.merge(repo, hex(repo.changelog.tip()))
658 finally:
658 finally:
659 ui.setconfig('ui', 'forcemerge', '', '')
659 ui.setconfig('ui', 'forcemerge', '', '')
660 return 0
660 return 0
661
661
662 @command('bisect',
662 @command('bisect',
663 [('r', 'reset', False, _('reset bisect state')),
663 [('r', 'reset', False, _('reset bisect state')),
664 ('g', 'good', False, _('mark changeset good')),
664 ('g', 'good', False, _('mark changeset good')),
665 ('b', 'bad', False, _('mark changeset bad')),
665 ('b', 'bad', False, _('mark changeset bad')),
666 ('s', 'skip', False, _('skip testing changeset')),
666 ('s', 'skip', False, _('skip testing changeset')),
667 ('e', 'extend', False, _('extend the bisect range')),
667 ('e', 'extend', False, _('extend the bisect range')),
668 ('c', 'command', '', _('use command to check changeset state'), _('CMD')),
668 ('c', 'command', '', _('use command to check changeset state'), _('CMD')),
669 ('U', 'noupdate', False, _('do not update to target'))],
669 ('U', 'noupdate', False, _('do not update to target'))],
670 _("[-gbsr] [-U] [-c CMD] [REV]"))
670 _("[-gbsr] [-U] [-c CMD] [REV]"))
671 def bisect(ui, repo, rev=None, extra=None, command=None,
671 def bisect(ui, repo, rev=None, extra=None, command=None,
672 reset=None, good=None, bad=None, skip=None, extend=None,
672 reset=None, good=None, bad=None, skip=None, extend=None,
673 noupdate=None):
673 noupdate=None):
674 """subdivision search of changesets
674 """subdivision search of changesets
675
675
676 This command helps to find changesets which introduce problems. To
676 This command helps to find changesets which introduce problems. To
677 use, mark the earliest changeset you know exhibits the problem as
677 use, mark the earliest changeset you know exhibits the problem as
678 bad, then mark the latest changeset which is free from the problem
678 bad, then mark the latest changeset which is free from the problem
679 as good. Bisect will update your working directory to a revision
679 as good. Bisect will update your working directory to a revision
680 for testing (unless the -U/--noupdate option is specified). Once
680 for testing (unless the -U/--noupdate option is specified). Once
681 you have performed tests, mark the working directory as good or
681 you have performed tests, mark the working directory as good or
682 bad, and bisect will either update to another candidate changeset
682 bad, and bisect will either update to another candidate changeset
683 or announce that it has found the bad revision.
683 or announce that it has found the bad revision.
684
684
685 As a shortcut, you can also use the revision argument to mark a
685 As a shortcut, you can also use the revision argument to mark a
686 revision as good or bad without checking it out first.
686 revision as good or bad without checking it out first.
687
687
688 If you supply a command, it will be used for automatic bisection.
688 If you supply a command, it will be used for automatic bisection.
689 The environment variable HG_NODE will contain the ID of the
689 The environment variable HG_NODE will contain the ID of the
690 changeset being tested. The exit status of the command will be
690 changeset being tested. The exit status of the command will be
691 used to mark revisions as good or bad: status 0 means good, 125
691 used to mark revisions as good or bad: status 0 means good, 125
692 means to skip the revision, 127 (command not found) will abort the
692 means to skip the revision, 127 (command not found) will abort the
693 bisection, and any other non-zero exit status means the revision
693 bisection, and any other non-zero exit status means the revision
694 is bad.
694 is bad.
695
695
696 .. container:: verbose
696 .. container:: verbose
697
697
698 Some examples:
698 Some examples:
699
699
700 - start a bisection with known bad revision 34, and good revision 12::
700 - start a bisection with known bad revision 34, and good revision 12::
701
701
702 hg bisect --bad 34
702 hg bisect --bad 34
703 hg bisect --good 12
703 hg bisect --good 12
704
704
705 - advance the current bisection by marking current revision as good or
705 - advance the current bisection by marking current revision as good or
706 bad::
706 bad::
707
707
708 hg bisect --good
708 hg bisect --good
709 hg bisect --bad
709 hg bisect --bad
710
710
711 - mark the current revision, or a known revision, to be skipped (e.g. if
711 - mark the current revision, or a known revision, to be skipped (e.g. if
712 that revision is not usable because of another issue)::
712 that revision is not usable because of another issue)::
713
713
714 hg bisect --skip
714 hg bisect --skip
715 hg bisect --skip 23
715 hg bisect --skip 23
716
716
717 - skip all revisions that do not touch directories ``foo`` or ``bar``::
717 - skip all revisions that do not touch directories ``foo`` or ``bar``::
718
718
719 hg bisect --skip "!( file('path:foo') & file('path:bar') )"
719 hg bisect --skip "!( file('path:foo') & file('path:bar') )"
720
720
721 - forget the current bisection::
721 - forget the current bisection::
722
722
723 hg bisect --reset
723 hg bisect --reset
724
724
725 - use 'make && make tests' to automatically find the first broken
725 - use 'make && make tests' to automatically find the first broken
726 revision::
726 revision::
727
727
728 hg bisect --reset
728 hg bisect --reset
729 hg bisect --bad 34
729 hg bisect --bad 34
730 hg bisect --good 12
730 hg bisect --good 12
731 hg bisect --command "make && make tests"
731 hg bisect --command "make && make tests"
732
732
733 - see all changesets whose states are already known in the current
733 - see all changesets whose states are already known in the current
734 bisection::
734 bisection::
735
735
736 hg log -r "bisect(pruned)"
736 hg log -r "bisect(pruned)"
737
737
738 - see the changeset currently being bisected (especially useful
738 - see the changeset currently being bisected (especially useful
739 if running with -U/--noupdate)::
739 if running with -U/--noupdate)::
740
740
741 hg log -r "bisect(current)"
741 hg log -r "bisect(current)"
742
742
743 - see all changesets that took part in the current bisection::
743 - see all changesets that took part in the current bisection::
744
744
745 hg log -r "bisect(range)"
745 hg log -r "bisect(range)"
746
746
747 - you can even get a nice graph::
747 - you can even get a nice graph::
748
748
749 hg log --graph -r "bisect(range)"
749 hg log --graph -r "bisect(range)"
750
750
751 See :hg:`help revisions.bisect` for more about the `bisect()` predicate.
751 See :hg:`help revisions.bisect` for more about the `bisect()` predicate.
752
752
753 Returns 0 on success.
753 Returns 0 on success.
754 """
754 """
755 # backward compatibility
755 # backward compatibility
756 if rev in "good bad reset init".split():
756 if rev in "good bad reset init".split():
757 ui.warn(_("(use of 'hg bisect <cmd>' is deprecated)\n"))
757 ui.warn(_("(use of 'hg bisect <cmd>' is deprecated)\n"))
758 cmd, rev, extra = rev, extra, None
758 cmd, rev, extra = rev, extra, None
759 if cmd == "good":
759 if cmd == "good":
760 good = True
760 good = True
761 elif cmd == "bad":
761 elif cmd == "bad":
762 bad = True
762 bad = True
763 else:
763 else:
764 reset = True
764 reset = True
765 elif extra:
765 elif extra:
766 raise error.Abort(_('incompatible arguments'))
766 raise error.Abort(_('incompatible arguments'))
767
767
768 incompatibles = {
768 incompatibles = {
769 '--bad': bad,
769 '--bad': bad,
770 '--command': bool(command),
770 '--command': bool(command),
771 '--extend': extend,
771 '--extend': extend,
772 '--good': good,
772 '--good': good,
773 '--reset': reset,
773 '--reset': reset,
774 '--skip': skip,
774 '--skip': skip,
775 }
775 }
776
776
777 enabled = [x for x in incompatibles if incompatibles[x]]
777 enabled = [x for x in incompatibles if incompatibles[x]]
778
778
779 if len(enabled) > 1:
779 if len(enabled) > 1:
780 raise error.Abort(_('%s and %s are incompatible') %
780 raise error.Abort(_('%s and %s are incompatible') %
781 tuple(sorted(enabled)[0:2]))
781 tuple(sorted(enabled)[0:2]))
782
782
783 if reset:
783 if reset:
784 hbisect.resetstate(repo)
784 hbisect.resetstate(repo)
785 return
785 return
786
786
787 state = hbisect.load_state(repo)
787 state = hbisect.load_state(repo)
788
788
789 # update state
789 # update state
790 if good or bad or skip:
790 if good or bad or skip:
791 if rev:
791 if rev:
792 nodes = [repo.lookup(i) for i in scmutil.revrange(repo, [rev])]
792 nodes = [repo.lookup(i) for i in scmutil.revrange(repo, [rev])]
793 else:
793 else:
794 nodes = [repo.lookup('.')]
794 nodes = [repo.lookup('.')]
795 if good:
795 if good:
796 state['good'] += nodes
796 state['good'] += nodes
797 elif bad:
797 elif bad:
798 state['bad'] += nodes
798 state['bad'] += nodes
799 elif skip:
799 elif skip:
800 state['skip'] += nodes
800 state['skip'] += nodes
801 hbisect.save_state(repo, state)
801 hbisect.save_state(repo, state)
802 if not (state['good'] and state['bad']):
802 if not (state['good'] and state['bad']):
803 return
803 return
804
804
805 def mayupdate(repo, node, show_stats=True):
805 def mayupdate(repo, node, show_stats=True):
806 """common used update sequence"""
806 """common used update sequence"""
807 if noupdate:
807 if noupdate:
808 return
808 return
809 cmdutil.checkunfinished(repo)
809 cmdutil.checkunfinished(repo)
810 cmdutil.bailifchanged(repo)
810 cmdutil.bailifchanged(repo)
811 return hg.clean(repo, node, show_stats=show_stats)
811 return hg.clean(repo, node, show_stats=show_stats)
812
812
813 displayer = cmdutil.show_changeset(ui, repo, {})
813 displayer = cmdutil.show_changeset(ui, repo, {})
814
814
815 if command:
815 if command:
816 changesets = 1
816 changesets = 1
817 if noupdate:
817 if noupdate:
818 try:
818 try:
819 node = state['current'][0]
819 node = state['current'][0]
820 except LookupError:
820 except LookupError:
821 raise error.Abort(_('current bisect revision is unknown - '
821 raise error.Abort(_('current bisect revision is unknown - '
822 'start a new bisect to fix'))
822 'start a new bisect to fix'))
823 else:
823 else:
824 node, p2 = repo.dirstate.parents()
824 node, p2 = repo.dirstate.parents()
825 if p2 != nullid:
825 if p2 != nullid:
826 raise error.Abort(_('current bisect revision is a merge'))
826 raise error.Abort(_('current bisect revision is a merge'))
827 if rev:
827 if rev:
828 node = repo[scmutil.revsingle(repo, rev, node)].node()
828 node = repo[scmutil.revsingle(repo, rev, node)].node()
829 try:
829 try:
830 while changesets:
830 while changesets:
831 # update state
831 # update state
832 state['current'] = [node]
832 state['current'] = [node]
833 hbisect.save_state(repo, state)
833 hbisect.save_state(repo, state)
834 status = ui.system(command, environ={'HG_NODE': hex(node)},
834 status = ui.system(command, environ={'HG_NODE': hex(node)},
835 blockedtag='bisect_check')
835 blockedtag='bisect_check')
836 if status == 125:
836 if status == 125:
837 transition = "skip"
837 transition = "skip"
838 elif status == 0:
838 elif status == 0:
839 transition = "good"
839 transition = "good"
840 # status < 0 means process was killed
840 # status < 0 means process was killed
841 elif status == 127:
841 elif status == 127:
842 raise error.Abort(_("failed to execute %s") % command)
842 raise error.Abort(_("failed to execute %s") % command)
843 elif status < 0:
843 elif status < 0:
844 raise error.Abort(_("%s killed") % command)
844 raise error.Abort(_("%s killed") % command)
845 else:
845 else:
846 transition = "bad"
846 transition = "bad"
847 state[transition].append(node)
847 state[transition].append(node)
848 ctx = repo[node]
848 ctx = repo[node]
849 ui.status(_('changeset %d:%s: %s\n') % (ctx, ctx, transition))
849 ui.status(_('changeset %d:%s: %s\n') % (ctx, ctx, transition))
850 hbisect.checkstate(state)
850 hbisect.checkstate(state)
851 # bisect
851 # bisect
852 nodes, changesets, bgood = hbisect.bisect(repo.changelog, state)
852 nodes, changesets, bgood = hbisect.bisect(repo.changelog, state)
853 # update to next check
853 # update to next check
854 node = nodes[0]
854 node = nodes[0]
855 mayupdate(repo, node, show_stats=False)
855 mayupdate(repo, node, show_stats=False)
856 finally:
856 finally:
857 state['current'] = [node]
857 state['current'] = [node]
858 hbisect.save_state(repo, state)
858 hbisect.save_state(repo, state)
859 hbisect.printresult(ui, repo, state, displayer, nodes, bgood)
859 hbisect.printresult(ui, repo, state, displayer, nodes, bgood)
860 return
860 return
861
861
862 hbisect.checkstate(state)
862 hbisect.checkstate(state)
863
863
864 # actually bisect
864 # actually bisect
865 nodes, changesets, good = hbisect.bisect(repo.changelog, state)
865 nodes, changesets, good = hbisect.bisect(repo.changelog, state)
866 if extend:
866 if extend:
867 if not changesets:
867 if not changesets:
868 extendnode = hbisect.extendrange(repo, state, nodes, good)
868 extendnode = hbisect.extendrange(repo, state, nodes, good)
869 if extendnode is not None:
869 if extendnode is not None:
870 ui.write(_("Extending search to changeset %d:%s\n")
870 ui.write(_("Extending search to changeset %d:%s\n")
871 % (extendnode.rev(), extendnode))
871 % (extendnode.rev(), extendnode))
872 state['current'] = [extendnode.node()]
872 state['current'] = [extendnode.node()]
873 hbisect.save_state(repo, state)
873 hbisect.save_state(repo, state)
874 return mayupdate(repo, extendnode.node())
874 return mayupdate(repo, extendnode.node())
875 raise error.Abort(_("nothing to extend"))
875 raise error.Abort(_("nothing to extend"))
876
876
877 if changesets == 0:
877 if changesets == 0:
878 hbisect.printresult(ui, repo, state, displayer, nodes, good)
878 hbisect.printresult(ui, repo, state, displayer, nodes, good)
879 else:
879 else:
880 assert len(nodes) == 1 # only a single node can be tested next
880 assert len(nodes) == 1 # only a single node can be tested next
881 node = nodes[0]
881 node = nodes[0]
882 # compute the approximate number of remaining tests
882 # compute the approximate number of remaining tests
883 tests, size = 0, 2
883 tests, size = 0, 2
884 while size <= changesets:
884 while size <= changesets:
885 tests, size = tests + 1, size * 2
885 tests, size = tests + 1, size * 2
886 rev = repo.changelog.rev(node)
886 rev = repo.changelog.rev(node)
887 ui.write(_("Testing changeset %d:%s "
887 ui.write(_("Testing changeset %d:%s "
888 "(%d changesets remaining, ~%d tests)\n")
888 "(%d changesets remaining, ~%d tests)\n")
889 % (rev, short(node), changesets, tests))
889 % (rev, short(node), changesets, tests))
890 state['current'] = [node]
890 state['current'] = [node]
891 hbisect.save_state(repo, state)
891 hbisect.save_state(repo, state)
892 return mayupdate(repo, node)
892 return mayupdate(repo, node)
893
893
894 @command('bookmarks|bookmark',
894 @command('bookmarks|bookmark',
895 [('f', 'force', False, _('force')),
895 [('f', 'force', False, _('force')),
896 ('r', 'rev', '', _('revision for bookmark action'), _('REV')),
896 ('r', 'rev', '', _('revision for bookmark action'), _('REV')),
897 ('d', 'delete', False, _('delete a given bookmark')),
897 ('d', 'delete', False, _('delete a given bookmark')),
898 ('m', 'rename', '', _('rename a given bookmark'), _('OLD')),
898 ('m', 'rename', '', _('rename a given bookmark'), _('OLD')),
899 ('i', 'inactive', False, _('mark a bookmark inactive')),
899 ('i', 'inactive', False, _('mark a bookmark inactive')),
900 ] + formatteropts,
900 ] + formatteropts,
901 _('hg bookmarks [OPTIONS]... [NAME]...'))
901 _('hg bookmarks [OPTIONS]... [NAME]...'))
902 def bookmark(ui, repo, *names, **opts):
902 def bookmark(ui, repo, *names, **opts):
903 '''create a new bookmark or list existing bookmarks
903 '''create a new bookmark or list existing bookmarks
904
904
905 Bookmarks are labels on changesets to help track lines of development.
905 Bookmarks are labels on changesets to help track lines of development.
906 Bookmarks are unversioned and can be moved, renamed and deleted.
906 Bookmarks are unversioned and can be moved, renamed and deleted.
907 Deleting or moving a bookmark has no effect on the associated changesets.
907 Deleting or moving a bookmark has no effect on the associated changesets.
908
908
909 Creating or updating to a bookmark causes it to be marked as 'active'.
909 Creating or updating to a bookmark causes it to be marked as 'active'.
910 The active bookmark is indicated with a '*'.
910 The active bookmark is indicated with a '*'.
911 When a commit is made, the active bookmark will advance to the new commit.
911 When a commit is made, the active bookmark will advance to the new commit.
912 A plain :hg:`update` will also advance an active bookmark, if possible.
912 A plain :hg:`update` will also advance an active bookmark, if possible.
913 Updating away from a bookmark will cause it to be deactivated.
913 Updating away from a bookmark will cause it to be deactivated.
914
914
915 Bookmarks can be pushed and pulled between repositories (see
915 Bookmarks can be pushed and pulled between repositories (see
916 :hg:`help push` and :hg:`help pull`). If a shared bookmark has
916 :hg:`help push` and :hg:`help pull`). If a shared bookmark has
917 diverged, a new 'divergent bookmark' of the form 'name@path' will
917 diverged, a new 'divergent bookmark' of the form 'name@path' will
918 be created. Using :hg:`merge` will resolve the divergence.
918 be created. Using :hg:`merge` will resolve the divergence.
919
919
920 A bookmark named '@' has the special property that :hg:`clone` will
920 A bookmark named '@' has the special property that :hg:`clone` will
921 check it out by default if it exists.
921 check it out by default if it exists.
922
922
923 .. container:: verbose
923 .. container:: verbose
924
924
925 Examples:
925 Examples:
926
926
927 - create an active bookmark for a new line of development::
927 - create an active bookmark for a new line of development::
928
928
929 hg book new-feature
929 hg book new-feature
930
930
931 - create an inactive bookmark as a place marker::
    - create an inactive bookmark as a place marker::

        hg book -i reviewed

    - create an inactive bookmark on another changeset::

        hg book -r .^ tested

    - rename bookmark turkey to dinner::

        hg book -m turkey dinner

    - move the '@' bookmark from another branch::

        hg book -f @
    '''
    opts = pycompat.byteskwargs(opts)
    force = opts.get('force')
    rev = opts.get('rev')
    delete = opts.get('delete')
    rename = opts.get('rename')
    inactive = opts.get('inactive')

    if delete and rename:
        raise error.Abort(_("--delete and --rename are incompatible"))
    if delete and rev:
        raise error.Abort(_("--rev is incompatible with --delete"))
    if rename and rev:
        raise error.Abort(_("--rev is incompatible with --rename"))
    if not names and (delete or rev):
        raise error.Abort(_("bookmark name required"))

    if delete or rename or names or inactive:
        with repo.wlock(), repo.lock(), repo.transaction('bookmark') as tr:
            if delete:
                bookmarks.delete(repo, tr, names)
            elif rename:
                if not names:
                    raise error.Abort(_("new bookmark name required"))
                elif len(names) > 1:
                    raise error.Abort(_("only one new bookmark name allowed"))
                bookmarks.rename(repo, tr, rename, names[0], force, inactive)
            elif names:
                bookmarks.addbookmarks(repo, tr, names, rev, force, inactive)
            elif inactive:
                if len(repo._bookmarks) == 0:
                    ui.status(_("no bookmarks set\n"))
                elif not repo._activebookmark:
                    ui.status(_("no active bookmark\n"))
                else:
                    bookmarks.deactivate(repo)
    else: # show bookmarks
        bookmarks.printbookmarks(ui, repo, **opts)

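The option checks at the top of `bookmark` follow a common pattern in this file: reject incompatible flag combinations up front, before any lock or transaction is taken. A minimal standalone sketch of that pattern (the helper name `check_bookmark_opts` is invented for illustration and is not part of Mercurial; `ValueError` stands in for `error.Abort`):

```python
def check_bookmark_opts(names, delete=False, rename=None, rev=None):
    """Reject incompatible bookmark option combinations up front,
    mirroring the checks in the command above."""
    if delete and rename:
        raise ValueError("--delete and --rename are incompatible")
    if delete and rev:
        raise ValueError("--rev is incompatible with --delete")
    if rename and rev:
        raise ValueError("--rev is incompatible with --rename")
    if not names and (delete or rev):
        raise ValueError("bookmark name required")
```

Validating before locking keeps user errors cheap: a bad flag combination never touches repository state.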
@command('branch',
    [('f', 'force', None,
      _('set branch name even if it shadows an existing branch')),
     ('C', 'clean', None, _('reset branch name to parent branch name'))],
    _('[-fC] [NAME]'))
def branch(ui, repo, label=None, **opts):
    """set or show the current branch name

    .. note::

       Branch names are permanent and global. Use :hg:`bookmark` to create a
       light-weight bookmark instead. See :hg:`help glossary` for more
       information about named branches and bookmarks.

    With no argument, show the current branch name. With one argument,
    set the working directory branch name (the branch will not exist
    in the repository until the next commit). Standard practice
    recommends that primary development take place on the 'default'
    branch.

    Unless -f/--force is specified, branch will not let you set a
    branch name that already exists.

    Use -C/--clean to reset the working directory branch to that of
    the parent of the working directory, negating a previous branch
    change.

    Use the command :hg:`update` to switch to an existing branch. Use
    :hg:`commit --close-branch` to mark this branch head as closed.
    When all heads of a branch are closed, the branch will be
    considered closed.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    if label:
        label = label.strip()

    if not opts.get('clean') and not label:
        ui.write("%s\n" % repo.dirstate.branch())
        return

    with repo.wlock():
        if opts.get('clean'):
            label = repo[None].p1().branch()
            repo.dirstate.setbranch(label)
            ui.status(_('reset working directory to branch %s\n') % label)
        elif label:
            if not opts.get('force') and label in repo.branchmap():
                if label not in [p.branch() for p in repo[None].parents()]:
                    raise error.Abort(_('a branch of the same name already'
                                        ' exists'),
                                      # i18n: "it" refers to an existing branch
                                      hint=_("use 'hg update' to switch to it"))
            scmutil.checknewlabel(repo, label, 'branch')
            repo.dirstate.setbranch(label)
            ui.status(_('marked working directory as branch %s\n') % label)

            # find any open named branches aside from default
            others = [n for n, h, t, c in repo.branchmap().iterbranches()
                      if n != "default" and not c]
            if not others:
                ui.status(_('(branches are permanent and global, '
                            'did you want a bookmark?)\n'))

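`scmutil.checknewlabel`, called above, guards new branch names against identifiers that would be ambiguous with revision syntax. As a rough illustration of that kind of guard (the actual rule set lives in `scmutil` and is broader than this sketch; the reserved set shown here is an assumption for illustration):

```python
def looks_like_valid_label(label):
    """Rough sketch of label validation: reserved revision identifiers
    such as 'tip', '.', and 'null' are refused, as are empty names and
    names containing ':' or a newline, which would break revset and
    branch syntax. Illustration only; scmutil.checknewlabel is the
    authoritative implementation."""
    if not label or label in ('tip', '.', 'null'):
        return False
    if ':' in label or '\n' in label:
        return False
    return True
```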
@command('branches',
    [('a', 'active', False,
      _('show only branches that have unmerged heads (DEPRECATED)')),
     ('c', 'closed', False, _('show normal and closed branches')),
    ] + formatteropts,
    _('[-c]'))
def branches(ui, repo, active=False, closed=False, **opts):
    """list repository named branches

    List the repository's named branches, indicating which ones are
    inactive. If -c/--closed is specified, also list branches which have
    been marked closed (see :hg:`commit --close-branch`).

    Use the command :hg:`update` to switch to an existing branch.

    Returns 0.
    """

    opts = pycompat.byteskwargs(opts)
    ui.pager('branches')
    fm = ui.formatter('branches', opts)
    hexfunc = fm.hexfunc

    allheads = set(repo.heads())
    branches = []
    for tag, heads, tip, isclosed in repo.branchmap().iterbranches():
        isactive = not isclosed and bool(set(heads) & allheads)
        branches.append((tag, repo[tip], isactive, not isclosed))
    branches.sort(key=lambda i: (i[2], i[1].rev(), i[0], i[3]),
                  reverse=True)

    for tag, ctx, isactive, isopen in branches:
        if active and not isactive:
            continue
        if isactive:
            label = 'branches.active'
            notice = ''
        elif not isopen:
            if not closed:
                continue
            label = 'branches.closed'
            notice = _(' (closed)')
        else:
            label = 'branches.inactive'
            notice = _(' (inactive)')
        current = (tag == repo.dirstate.branch())
        if current:
            label = 'branches.current'

        fm.startitem()
        fm.write('branch', '%s', tag, label=label)
        rev = ctx.rev()
        padsize = max(31 - len(str(rev)) - encoding.colwidth(tag), 0)
        fmt = ' ' * padsize + ' %d:%s'
        fm.condwrite(not ui.quiet, 'rev node', fmt, rev, hexfunc(ctx.node()),
                     label='log.changeset changeset.%s' % ctx.phasestr())
        fm.context(ctx=ctx)
        fm.data(active=isactive, closed=not isopen, current=current)
        if not ui.quiet:
            fm.plain(notice)
        fm.plain('\n')
    fm.end()

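The `branches.sort(...)` call above orders the listing with a composite key and `reverse=True`, so active branches sort before inactive ones and, within each group, higher (more recent) tip revisions come first. A self-contained sketch with plain tuples standing in for `(tag, ctx, isactive, isopen)` (in the real code the second element is a changectx and the key uses `i[1].rev()`):

```python
# Each entry mimics (tag, tip-rev, isactive, isopen).
branches = [
    ('old', 3, False, False),
    ('default', 10, True, True),
    ('stable', 12, True, True),
    ('dormant', 7, False, True),
]
# reverse=True flips every component: True (active) before False,
# higher revs first, and names in reverse lexicographic order on ties.
branches.sort(key=lambda i: (i[2], i[1], i[0], i[3]), reverse=True)
print([name for name, rev, isactive, isopen in branches])
# prints ['stable', 'default', 'dormant', 'old']
```

Encoding the ordering as one tuple key keeps the sort stable and avoids a custom comparison function.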
@command('bundle',
    [('f', 'force', None, _('run even when the destination is unrelated')),
     ('r', 'rev', [], _('a changeset intended to be added to the destination'),
      _('REV')),
     ('b', 'branch', [], _('a specific branch you would like to bundle'),
      _('BRANCH')),
     ('', 'base', [],
      _('a base changeset assumed to be available at the destination'),
      _('REV')),
     ('a', 'all', None, _('bundle all changesets in the repository')),
     ('t', 'type', 'bzip2', _('bundle compression type to use'), _('TYPE')),
    ] + remoteopts,
    _('[-f] [-t BUNDLESPEC] [-a] [-r REV]... [--base REV]... FILE [DEST]'))
def bundle(ui, repo, fname, dest=None, **opts):
    """create a bundle file

    Generate a bundle file containing data to be added to a repository.

    To create a bundle containing all changesets, use -a/--all
    (or --base null). Otherwise, hg assumes the destination will have
    all the nodes you specify with --base parameters. If neither is
    given, hg assumes the destination repository has all its own nodes
    and compares against it (default-push/default when no destination
    is specified) to decide what to bundle.

    You can change bundle format with the -t/--type option. See
    :hg:`help bundlespec` for documentation on this format. By default,
    the most appropriate format is used and compression defaults to
    bzip2.

    The bundle file can then be transferred using conventional means
    and applied to another repository with the unbundle or pull
    command. This is useful when direct push and pull are not
    available or when exporting an entire repository is undesirable.

    Applying bundles preserves all changeset contents including
    permissions, copy/rename information, and revision history.

    Returns 0 on success, 1 if no changes found.
    """
    opts = pycompat.byteskwargs(opts)
    revs = None
    if 'rev' in opts:
        revstrings = opts['rev']
        revs = scmutil.revrange(repo, revstrings)
        if revstrings and not revs:
            raise error.Abort(_('no commits to bundle'))

    bundletype = opts.get('type', 'bzip2').lower()
    try:
        bcompression, cgversion, params = exchange.parsebundlespec(
            repo, bundletype, strict=False)
    except error.UnsupportedBundleSpecification as e:
        raise error.Abort(str(e),
                          hint=_("see 'hg help bundlespec' for supported "
                                 "values for --type"))

    # Packed bundles are a pseudo bundle format for now.
    if cgversion == 's1':
        raise error.Abort(_('packed bundles cannot be produced by "hg bundle"'),
                          hint=_("use 'hg debugcreatestreamclonebundle'"))

    if opts.get('all'):
        if dest:
            raise error.Abort(_("--all is incompatible with specifying "
                                "a destination"))
        if opts.get('base'):
            ui.warn(_("ignoring --base because --all was specified\n"))
        base = ['null']
    else:
        base = scmutil.revrange(repo, opts.get('base'))
    if cgversion not in changegroup.supportedoutgoingversions(repo):
        raise error.Abort(_("repository does not support bundle version %s") %
                          cgversion)

    if base:
        if dest:
            raise error.Abort(_("--base is incompatible with specifying "
                                "a destination"))
        common = [repo.lookup(rev) for rev in base]
        heads = revs and map(repo.lookup, revs) or None
        outgoing = discovery.outgoing(repo, common, heads)
    else:
        dest = ui.expandpath(dest or 'default-push', dest or 'default')
        dest, branches = hg.parseurl(dest, opts.get('branch'))
        other = hg.peer(repo, opts, dest)
        revs, checkout = hg.addbranchrevs(repo, repo, branches, revs)
        heads = revs and map(repo.lookup, revs) or revs
        outgoing = discovery.findcommonoutgoing(repo, other,
                                                onlyheads=heads,
                                                force=opts.get('force'),
                                                portable=True)

    if not outgoing.missing:
        scmutil.nochangesfound(ui, repo, not base and outgoing.excluded)
        return 1

    if cgversion == '01': #bundle1
        if bcompression is None:
            bcompression = 'UN'
        bversion = 'HG10' + bcompression
        bcompression = None
    elif cgversion in ('02', '03'):
        bversion = 'HG20'
    else:
        raise error.ProgrammingError(
            'bundle: unexpected changegroup version %s' % cgversion)

    # TODO compression options should be derived from bundlespec parsing.
    # This is a temporary hack to allow adjusting bundle compression
    # level without a) formalizing the bundlespec changes to declare it
    # b) introducing a command flag.
    compopts = {}
    complevel = ui.configint('experimental', 'bundlecomplevel')
    if complevel is not None:
        compopts['level'] = complevel

    contentopts = {'cg.version': cgversion}
    if repo.ui.configbool('experimental', 'evolution.bundle-obsmarker', False):
        contentopts['obsolescence'] = True
    bundle2.writenewbundle(ui, repo, 'bundle', fname, bversion, outgoing,
                           contentopts, compression=bcompression,
                           compopts=compopts)

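The version handling near the end of `bundle` maps the parsed changegroup version to an on-disk container header: '01' produces a bundle1 'HG10' magic with the compression name baked into the header (defaulting to 'UN' for uncompressed), while '02' and '03' go into a bundle2 'HG20' container, where compression is handled as a stream parameter instead. A simplified sketch of just that mapping (`bundle_magic` is an invented helper, not Mercurial's API):

```python
def bundle_magic(cgversion, compression=None):
    """Map a changegroup version to its bundle container magic,
    mirroring the version branch above (simplified illustration)."""
    if cgversion == '01':          # bundle1: compression baked into magic
        return 'HG10' + (compression or 'UN')
    if cgversion in ('02', '03'):  # bundle2: compression is a stream param
        return 'HG20'
    raise ValueError('bundle: unexpected changegroup version %s' % cgversion)
```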
@command('cat',
    [('o', 'output', '',
      _('print output to file with formatted name'), _('FORMAT')),
     ('r', 'rev', '', _('print the given revision'), _('REV')),
     ('', 'decode', None, _('apply any matching decode filter')),
    ] + walkopts + formatteropts,
    _('[OPTION]... FILE...'),
    inferrepo=True)
def cat(ui, repo, file1, *pats, **opts):
    """output the current or given revision of files

    Print the specified files as they were at the given revision. If
    no revision is given, the parent of the working directory is used.

    Output may be to a file, in which case the name of the file is
    given using a format string. The formatting rules are as follows:

    :``%%``: literal "%" character
    :``%s``: basename of file being printed
    :``%d``: dirname of file being printed, or '.' if in repository root
    :``%p``: root-relative path name of file being printed
    :``%H``: changeset hash (40 hexadecimal digits)
    :``%R``: changeset revision number
    :``%h``: short-form changeset hash (12 hexadecimal digits)
    :``%r``: zero-padded changeset revision number
    :``%b``: basename of the exporting repository

    Returns 0 on success.
    """
    ctx = scmutil.revsingle(repo, opts.get('rev'))
    m = scmutil.match(ctx, (file1,) + pats, opts)
    fntemplate = opts.pop('output', '')
    if cmdutil.isstdiofilename(fntemplate):
        fntemplate = ''

    if fntemplate:
        fm = formatter.nullformatter(ui, 'cat')
    else:
        ui.pager('cat')
        fm = ui.formatter('cat', opts)
    with fm:
        return cmdutil.cat(ui, repo, ctx, m, fm, fntemplate, '', **opts)

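The %-keys documented in the `cat` docstring are expanded inside `cmdutil` in the real code; the sketch below reimplements a subset of that expansion to show the intent. It omits `%r` and `%b`, which need repository context, and the function name `expand_output_name` is invented for illustration:

```python
import os

def expand_output_name(fmt, path, node, rev):
    """Expand a subset of the cat/export %-keys documented above
    (illustration only; Mercurial's own expansion handles more keys)."""
    replacements = {
        '%%': '%',                              # literal percent
        '%s': os.path.basename(path),           # basename of file
        '%d': os.path.dirname(path) or '.',     # dirname, '.' at repo root
        '%p': path,                             # root-relative path
        '%H': node,                             # full 40-digit hash
        '%h': node[:12],                        # short 12-digit hash
        '%R': str(rev),                         # revision number
    }
    out = []
    i = 0
    while i < len(fmt):
        token = fmt[i:i + 2]
        if token in replacements:
            out.append(replacements[token])
            i += 2
        else:
            out.append(fmt[i])
            i += 1
    return ''.join(out)
```

For example, a template like `%s-%R.txt` applied to `dir/file.c` at revision 5 yields `file.c-5.txt`.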
@command('^clone',
    [('U', 'noupdate', None, _('the clone will include an empty working '
                               'directory (only a repository)')),
     ('u', 'updaterev', '', _('revision, tag, or branch to check out'),
      _('REV')),
     ('r', 'rev', [], _('include the specified changeset'), _('REV')),
     ('b', 'branch', [], _('clone only the specified branch'), _('BRANCH')),
     ('', 'pull', None, _('use pull protocol to copy metadata')),
     ('', 'uncompressed', None, _('use uncompressed transfer (fast over LAN)')),
    ] + remoteopts,
    _('[OPTION]... SOURCE [DEST]'),
    norepo=True)
def clone(ui, source, dest=None, **opts):
    """make a copy of an existing repository

    Create a copy of an existing repository in a new directory.

    If no destination directory name is specified, it defaults to the
    basename of the source.

    The location of the source is added to the new repository's
    ``.hg/hgrc`` file, as the default to be used for future pulls.

    Only local paths and ``ssh://`` URLs are supported as
    destinations. For ``ssh://`` destinations, no working directory or
    ``.hg/hgrc`` will be created on the remote side.

    If the source repository has a bookmark called '@' set, that
    revision will be checked out in the new repository by default.

    To check out a particular version, use -u/--update, or
    -U/--noupdate to create a clone with no working directory.

    To pull only a subset of changesets, specify one or more revision
    identifiers with -r/--rev or branches with -b/--branch. The
    resulting clone will contain only the specified changesets and
    their ancestors. These options (or 'clone src#rev dest') imply
    --pull, even for local source repositories.

    .. note::

       Specifying a tag will include the tagged changeset but not the
       changeset containing the tag.

    .. container:: verbose

      For efficiency, hardlinks are used for cloning whenever the
      source and destination are on the same filesystem (note this
      applies only to the repository data, not to the working
      directory). Some filesystems, such as AFS, implement hardlinking
      incorrectly, but do not report errors. In these cases, use the
      --pull option to avoid hardlinking.

      In some cases, you can clone repositories and the working
      directory using full hardlinks with ::

        $ cp -al REPO REPOCLONE

      This is the fastest way to clone, but it is not always safe. The
      operation is not atomic (making sure REPO is not modified during
      the operation is up to you) and you have to make sure your
      editor breaks hardlinks (Emacs and most Linux Kernel tools do
      so). Also, this is not compatible with certain extensions that
      place their metadata under the .hg directory, such as mq.

      Mercurial will update the working directory to the first applicable
      revision from this list:

      a) null if -U or the source repository has no changesets
      b) if -u . and the source repository is local, the first parent of
         the source repository's working directory
      c) the changeset specified with -u (if a branch name, this means the
         latest head of that branch)
      d) the changeset specified with -r
      e) the tipmost head specified with -b
      f) the tipmost head specified with the url#branch source syntax
      g) the revision marked with the '@' bookmark, if present
      h) the tipmost head of the default branch
      i) tip

      When cloning from servers that support it, Mercurial may fetch
      pre-generated data from a server-advertised URL. When this is done,
      hooks operating on incoming changesets and changegroups may fire twice,
      once for the bundle fetched from the URL and another for any additional
      data not fetched from this URL. In addition, if an error occurs, the
      repository may be rolled back to a partial clone. This behavior may
      change in future releases. See :hg:`help -e clonebundles` for more.

    Examples:

    - clone a remote repository to a new directory named hg/::

        hg clone https://www.mercurial-scm.org/repo/hg/

    - create a lightweight local clone::

        hg clone project/ project-feature/

    - clone from an absolute path on an ssh server (note double-slash)::

        hg clone ssh://user@server//home/projects/alpha/

    - do a high-speed clone over a LAN while checking out a
      specified version::

        hg clone --uncompressed http://server/repo -u 1.5

    - create a repository without changesets after a particular revision::

        hg clone -r 04e544 experimental/ good/

    - clone (and track) a particular named branch::

        hg clone https://www.mercurial-scm.org/repo/hg/#stable

    See :hg:`help urls` for details on specifying URLs.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    if opts.get('noupdate') and opts.get('updaterev'):
        raise error.Abort(_("cannot specify both --noupdate and --updaterev"))

    r = hg.clone(ui, opts, source, dest,
1404 pull=opts.get('pull'),
1404 pull=opts.get('pull'),
1405 stream=opts.get('uncompressed'),
1405 stream=opts.get('uncompressed'),
1406 rev=opts.get('rev'),
1406 rev=opts.get('rev'),
1407 update=opts.get('updaterev') or not opts.get('noupdate'),
1407 update=opts.get('updaterev') or not opts.get('noupdate'),
1408 branch=opts.get('branch'),
1408 branch=opts.get('branch'),
1409 shareopts=opts.get('shareopts'))
1409 shareopts=opts.get('shareopts'))
1410
1410
1411 return r is None
1411 return r is None
1412
1412
@command('^commit|ci',
    [('A', 'addremove', None,
     _('mark new/missing files as added/removed before committing')),
    ('', 'close-branch', None,
     _('mark a branch head as closed')),
    ('', 'amend', None, _('amend the parent of the working directory')),
    ('s', 'secret', None, _('use the secret phase for committing')),
    ('e', 'edit', None, _('invoke editor on commit messages')),
    ('i', 'interactive', None, _('use interactive mode')),
    ] + walkopts + commitopts + commitopts2 + subrepoopts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def commit(ui, repo, *pats, **opts):
    """commit the specified files or all outstanding changes

    Commit changes to the given files into the repository. Unlike a
    centralized SCM, this operation is a local operation. See
    :hg:`push` for a way to actively distribute your changes.

    If a list of files is omitted, all changes reported by :hg:`status`
    will be committed.

    If you are committing the result of a merge, do not provide any
    filenames or -I/-X filters.

    If no commit message is specified, Mercurial starts your
    configured editor where you can enter a message. In case your
    commit fails, you will find a backup of your message in
    ``.hg/last-message.txt``.

    The --close-branch flag can be used to mark the current branch
    head closed. When all heads of a branch are closed, the branch
    will be considered closed and no longer listed.

    The --amend flag can be used to amend the parent of the
    working directory with a new commit that contains the changes
    in the parent in addition to those currently reported by :hg:`status`,
    if there are any. The old commit is stored in a backup bundle in
    ``.hg/strip-backup`` (see :hg:`help bundle` and :hg:`help unbundle`
    on how to restore it).

    Message, user and date are taken from the amended commit unless
    specified. When a message isn't specified on the command line,
    the editor will open with the message of the amended commit.

    It is not possible to amend public changesets (see :hg:`help phases`)
    or changesets that have children.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    Returns 0 on success, 1 if nothing changed.

    .. container:: verbose

      Examples:

      - commit all files ending in .py::

          hg commit --include "set:**.py"

      - commit all non-binary files::

          hg commit --exclude "set:binary()"

      - amend the current commit and set the date to now::

          hg commit --amend --date now
    """
    wlock = lock = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        return _docommit(ui, repo, *pats, **opts)
    finally:
        release(lock, wlock)

def _docommit(ui, repo, *pats, **opts):
    if opts.get(r'interactive'):
        opts.pop(r'interactive')
        ret = cmdutil.dorecord(ui, repo, commit, None, False,
                               cmdutil.recordfilter, *pats,
                               **opts)
        # ret can be 0 (no changes to record) or the value returned by
        # commit(), 1 if nothing changed or None on success.
        return 1 if ret == 0 else ret

    opts = pycompat.byteskwargs(opts)
    if opts.get('subrepos'):
        if opts.get('amend'):
            raise error.Abort(_('cannot amend with --subrepos'))
        # Let --subrepos on the command line override config setting.
        ui.setconfig('ui', 'commitsubrepos', True, 'commit')

    cmdutil.checkunfinished(repo, commit=True)

    branch = repo[None].branch()
    bheads = repo.branchheads(branch)

    extra = {}
    if opts.get('close_branch'):
        extra['close'] = 1

        if not bheads:
            raise error.Abort(_('can only close branch heads'))
        elif opts.get('amend'):
            if repo[None].parents()[0].p1().branch() != branch and \
                    repo[None].parents()[0].p2().branch() != branch:
                raise error.Abort(_('can only close branch heads'))

    if opts.get('amend'):
        if ui.configbool('ui', 'commitsubrepos'):
            raise error.Abort(_('cannot amend with ui.commitsubrepos enabled'))

        old = repo['.']
        if not old.mutable():
            raise error.Abort(_('cannot amend public changesets'))
        if len(repo[None].parents()) > 1:
            raise error.Abort(_('cannot amend while merging'))
        allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
        if not allowunstable and old.children():
            raise error.Abort(_('cannot amend changeset with children'))

        # Currently histedit gets confused if an amend happens while histedit
        # is in progress. Since we have a checkunfinished command, we are
        # temporarily honoring it.
        #
        # Note: eventually this guard will be removed. Please do not expect
        # this behavior to remain.
        if not obsolete.isenabled(repo, obsolete.createmarkersopt):
            cmdutil.checkunfinished(repo)

        # commitfunc is used only for temporary amend commit by cmdutil.amend
        def commitfunc(ui, repo, message, match, opts):
            return repo.commit(message,
                               opts.get('user') or old.user(),
                               opts.get('date') or old.date(),
                               match,
                               extra=extra)

        node = cmdutil.amend(ui, repo, commitfunc, old, extra, pats, opts)
        if node == old.node():
            ui.status(_("nothing changed\n"))
            return 1
    else:
        def commitfunc(ui, repo, message, match, opts):
            overrides = {}
            if opts.get('secret'):
                overrides[('phases', 'new-commit')] = 'secret'

            baseui = repo.baseui
            with baseui.configoverride(overrides, 'commit'):
                with ui.configoverride(overrides, 'commit'):
                    editform = cmdutil.mergeeditform(repo[None],
                                                     'commit.normal')
                    editor = cmdutil.getcommiteditor(
                        editform=editform, **pycompat.strkwargs(opts))
                    return repo.commit(message,
                                       opts.get('user'),
                                       opts.get('date'),
                                       match,
                                       editor=editor,
                                       extra=extra)

        node = cmdutil.commit(ui, repo, commitfunc, pats, opts)

        if not node:
            stat = cmdutil.postcommitstatus(repo, pats, opts)
            if stat[3]:
                ui.status(_("nothing changed (%d missing files, see "
                            "'hg status')\n") % len(stat[3]))
            else:
                ui.status(_("nothing changed\n"))
            return 1

    cmdutil.commitstatus(repo, node, branch, bheads, opts)

@command('config|showconfig|debugconfig',
    [('u', 'untrusted', None, _('show untrusted configuration options')),
     ('e', 'edit', None, _('edit user config')),
     ('l', 'local', None, _('edit repository config')),
     ('g', 'global', None, _('edit global config'))] + formatteropts,
    _('[-u] [NAME]...'),
    optionalrepo=True)
def config(ui, repo, *values, **opts):
    """show combined config settings from all hgrc files

    With no arguments, print names and values of all config items.

    With one argument of the form section.name, print just the value
    of that config item.

    With multiple arguments, print names and values of all config
    items with matching section names.

    With --edit, start an editor on the user-level config file. With
    --global, edit the system-wide config file. With --local, edit the
    repository-level config file.

    With --debug, the source (filename and line number) is printed
    for each config item.

    See :hg:`help config` for more information about config files.

    Returns 0 on success, 1 if NAME does not exist.

    """

    opts = pycompat.byteskwargs(opts)
    if opts.get('edit') or opts.get('local') or opts.get('global'):
        if opts.get('local') and opts.get('global'):
            raise error.Abort(_("can't use --local and --global together"))

        if opts.get('local'):
            if not repo:
                raise error.Abort(_("can't use --local outside a repository"))
            paths = [repo.vfs.join('hgrc')]
        elif opts.get('global'):
            paths = rcutil.systemrcpath()
        else:
            paths = rcutil.userrcpath()

        for f in paths:
            if os.path.exists(f):
                break
        else:
            if opts.get('global'):
                samplehgrc = uimod.samplehgrcs['global']
            elif opts.get('local'):
                samplehgrc = uimod.samplehgrcs['local']
            else:
                samplehgrc = uimod.samplehgrcs['user']

            f = paths[0]
            fp = open(f, "w")
            fp.write(samplehgrc)
            fp.close()

        editor = ui.geteditor()
        ui.system("%s \"%s\"" % (editor, f),
                  onerr=error.Abort, errprefix=_("edit failed"),
                  blockedtag='config_edit')
        return
    ui.pager('config')
    fm = ui.formatter('config', opts)
    for t, f in rcutil.rccomponents():
        if t == 'path':
            ui.debug('read config from: %s\n' % f)
        elif t == 'items':
            for section, name, value, source in f:
                ui.debug('set config by: %s\n' % source)
        else:
            raise error.ProgrammingError('unknown rctype: %s' % t)
    untrusted = bool(opts.get('untrusted'))
    if values:
        sections = [v for v in values if '.' not in v]
        items = [v for v in values if '.' in v]
        if len(items) > 1 or items and sections:
            raise error.Abort(_('only one config item permitted'))
    matched = False
    for section, name, value in ui.walkconfig(untrusted=untrusted):
        source = ui.configsource(section, name, untrusted)
        value = pycompat.bytestr(value)
        if fm.isplain():
            source = source or 'none'
            value = value.replace('\n', '\\n')
        entryname = section + '.' + name
        if values:
            for v in values:
                if v == section:
                    fm.startitem()
                    fm.condwrite(ui.debugflag, 'source', '%s: ', source)
                    fm.write('name value', '%s=%s\n', entryname, value)
                    matched = True
                elif v == entryname:
                    fm.startitem()
                    fm.condwrite(ui.debugflag, 'source', '%s: ', source)
                    fm.write('value', '%s\n', value)
                    fm.data(name=entryname)
                    matched = True
        else:
            fm.startitem()
            fm.condwrite(ui.debugflag, 'source', '%s: ', source)
            fm.write('name value', '%s=%s\n', entryname, value)
            matched = True
    fm.end()
    if matched:
        return 0
    return 1

@command('copy|cp',
    [('A', 'after', None, _('record a copy that has already occurred')),
    ('f', 'force', None, _('forcibly copy over an existing managed file')),
    ] + walkopts + dryrunopts,
    _('[OPTION]... [SOURCE]... DEST'))
def copy(ui, repo, *pats, **opts):
    """mark files as copied for the next commit

    Mark dest as having copies of source files. If dest is a
    directory, copies are put in that directory. If dest is a file,
    the source must be a single file.

    By default, this command copies the contents of files as they
    exist in the working directory. If invoked with -A/--after, the
    operation is recorded, but no copying is performed.

    This command takes effect with the next commit. To undo a copy
    before that, see :hg:`revert`.

    Returns 0 on success, 1 if errors are encountered.
    """
    opts = pycompat.byteskwargs(opts)
    with repo.wlock(False):
        return cmdutil.copy(ui, repo, pats, opts)

@command('debugcommands', [], _('[COMMAND]'), norepo=True)
def debugcommands(ui, cmd='', *args):
    """list all available commands and options"""
    for cmd, vals in sorted(table.iteritems()):
        cmd = cmd.split('|')[0].strip('^')
        opts = ', '.join([i[1] for i in vals[1]])
        ui.write('%s: %s\n' % (cmd, opts))

@command('debugcomplete',
    [('o', 'options', None, _('show the command options'))],
    _('[-o] CMD'),
    norepo=True)
def debugcomplete(ui, cmd='', **opts):
    """returns the completion list associated with the given command"""

    if opts.get('options'):
        options = []
        otables = [globalopts]
        if cmd:
            aliases, entry = cmdutil.findcmd(cmd, table, False)
            otables.append(entry[1])
        for t in otables:
            for o in t:
                if "(DEPRECATED)" in o[3]:
                    continue
                if o[0]:
                    options.append('-%s' % o[0])
                options.append('--%s' % o[1])
        ui.write("%s\n" % "\n".join(options))
        return

    cmdlist, unused_allcmds = cmdutil.findpossible(cmd, table)
    if ui.verbose:
        cmdlist = [' '.join(c[0]) for c in cmdlist.values()]
    ui.write("%s\n" % "\n".join(sorted(cmdlist)))

@command('^diff',
    [('r', 'rev', [], _('revision'), _('REV')),
    ('c', 'change', '', _('change made by revision'), _('REV'))
    ] + diffopts + diffopts2 + walkopts + subrepoopts,
    _('[OPTION]... ([-c REV] | [-r REV1 [-r REV2]]) [FILE]...'),
    inferrepo=True)
def diff(ui, repo, *pats, **opts):
    """diff repository (or selected files)

    Show differences between revisions for the specified files.

    Differences between files are shown using the unified diff format.

    .. note::

       :hg:`diff` may generate unexpected results for merges, as it will
       default to comparing against the working directory's first
       parent changeset if no revisions are specified.

    When two revision arguments are given, then changes are shown
    between those revisions. If only one revision is specified then
    that revision is compared to the working directory, and, when no
    revisions are specified, the working directory files are compared
    to its first parent.

    Alternatively you can specify -c/--change with a revision to see
    the changes in that changeset relative to its first parent.

    Without the -a/--text option, diff will avoid generating diffs of
    files it detects as binary. With -a, diff will generate a diff
    anyway, probably with undesirable results.

    Use the -g/--git option to generate diffs in the git extended diff
    format. For more information, read :hg:`help diffs`.

    .. container:: verbose

      Examples:

      - compare a file in the current working directory to its parent::

          hg diff foo.c

      - compare two historical versions of a directory, with rename info::

          hg diff --git -r 1.0:1.2 lib/

      - get change stats relative to the last change on some date::

          hg diff --stat -r "date('may 2')"

      - diff all newly-added files that contain a keyword::

          hg diff "set:added() and grep(GNU)"

      - compare a revision and its parents::

          hg diff -c 9353         # compare against first parent
          hg diff -r 9353^:9353   # same using revset syntax
          hg diff -r 9353^2:9353  # compare against the second parent

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    revs = opts.get('rev')
    change = opts.get('change')
    stat = opts.get('stat')
    reverse = opts.get('reverse')

    if revs and change:
        msg = _('cannot specify --rev and --change at the same time')
        raise error.Abort(msg)
    elif change:
        node2 = scmutil.revsingle(repo, change, None).node()
        node1 = repo[node2].p1().node()
    else:
        node1, node2 = scmutil.revpair(repo, revs)

    if reverse:
        node1, node2 = node2, node1

    diffopts = patch.diffallopts(ui, opts)
    m = scmutil.match(repo[node2], pats, opts)
    ui.pager('diff')
    cmdutil.diffordiffstat(ui, repo, diffopts, node1, node2, m, stat=stat,
                           listsubrepos=opts.get('subrepos'),
                           root=opts.get('root'))

@command('^export',
    [('o', 'output', '',
     _('print output to file with formatted name'), _('FORMAT')),
    ('', 'switch-parent', None, _('diff against the second parent')),
    ('r', 'rev', [], _('revisions to export'), _('REV')),
    ] + diffopts,
    _('[OPTION]... [-o OUTFILESPEC] [-r] [REV]...'))
def export(ui, repo, *changesets, **opts):
    """dump the header and diffs for one or more changesets

    Print the changeset header and diffs for one or more revisions.
    If no revision is given, the parent of the working directory is used.

    The information shown in the changeset header is: author, date,
    branch name (if non-default), changeset hash, parent(s) and commit
    comment.

    .. note::

       :hg:`export` may generate unexpected diff output for merge
       changesets, as it will compare the merge changeset against its
       first parent only.

    Output may be to a file, in which case the name of the file is
    given using a format string. The formatting rules are as follows:

    :``%%``: literal "%" character
    :``%H``: changeset hash (40 hexadecimal digits)
    :``%N``: number of patches being generated
    :``%R``: changeset revision number
    :``%b``: basename of the exporting repository
    :``%h``: short-form changeset hash (12 hexadecimal digits)
    :``%m``: first line of the commit message (only alphanumeric characters)
    :``%n``: zero-padded sequence number, starting at 1
    :``%r``: zero-padded changeset revision number

    Without the -a/--text option, export will avoid generating diffs
    of files it detects as binary. With -a, export will generate a
    diff anyway, probably with undesirable results.

    Use the -g/--git option to generate diffs in the git extended diff
    format. See :hg:`help diffs` for more information.

    With the --switch-parent option, the diff will be against the
    second parent. This can be useful for reviewing a merge.

    .. container:: verbose

      Examples:

      - use export and import to transplant a bugfix to the current
        branch::

          hg export -r 9353 | hg import -

      - export all the changesets between two revisions to a file with
        rename information::

          hg export --git -r 123:150 > changes.txt

      - split outgoing changes into a series of patches with
        descriptive names::

          hg export -r "outgoing()" -o "%n-%m.patch"

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    changesets += tuple(opts.get('rev', []))
    if not changesets:
        changesets = ['.']
    revs = scmutil.revrange(repo, changesets)
    if not revs:
        raise error.Abort(_("export requires at least one changeset"))
    if len(revs) > 1:
        ui.note(_('exporting patches:\n'))
    else:
        ui.note(_('exporting patch:\n'))
    ui.pager('export')
    cmdutil.export(repo, revs, fntemplate=opts.get('output'),
                   switch_parent=opts.get('switch_parent'),
                   opts=patch.diffallopts(ui, opts))

@command('files',
    [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
     ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
    ] + walkopts + formatteropts + subrepoopts,
    _('[OPTION]... [FILE]...'))
def files(ui, repo, *pats, **opts):
    """list tracked files

    Print files under Mercurial control in the working directory or
    specified revision for given files (excluding removed files).
    Files can be specified as filenames or filesets.

    If no files are given to match, this command prints the names
    of all files under Mercurial control.

    .. container:: verbose

      Examples:

      - list all files under the current directory::

          hg files .

      - show sizes and flags for the current revision::

          hg files -vr .

      - list all files named README::

          hg files -I "**/README"

      - list all binary files::

          hg files "set:binary()"

      - find files containing a regular expression::

          hg files "set:grep('bob')"

      - search tracked file contents with xargs and grep::

          hg files -0 | xargs -0 grep foo

    See :hg:`help patterns` and :hg:`help filesets` for more information
    on specifying file patterns.

    Returns 0 if a match is found, 1 otherwise.

    """

    opts = pycompat.byteskwargs(opts)
    ctx = scmutil.revsingle(repo, opts.get('rev'), None)

    end = '\n'
    if opts.get('print0'):
        end = '\0'
    fmt = '%s' + end

    m = scmutil.match(ctx, pats, opts)
    ui.pager('files')
    with ui.formatter('files', opts) as fm:
        return cmdutil.files(ui, ctx, m, fm, fmt, opts.get('subrepos'))

@command('^forget', walkopts, _('[OPTION]... FILE...'), inferrepo=True)
def forget(ui, repo, *pats, **opts):
    """forget the specified files on the next commit

    Mark the specified files so they will no longer be tracked
    after the next commit.

    This only removes files from the current branch, not from the
    entire project history, and it does not delete them from the
    working directory.

    To delete the file from the working directory, see :hg:`remove`.

    To undo a forget before the next commit, see :hg:`add`.

    .. container:: verbose

      Examples:

      - forget newly-added binary files::

          hg forget "set:added() and binary()"

      - forget files that would be excluded by .hgignore::

          hg forget "set:hgignore()"

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    if not pats:
        raise error.Abort(_('no files specified'))

    m = scmutil.match(repo[None], pats, opts)
    rejected = cmdutil.forget(ui, repo, m, prefix="", explicitonly=False)[0]
    return 1 if rejected else 0

@command(
    'graft',
    [('r', 'rev', [], _('revisions to graft'), _('REV')),
     ('c', 'continue', False, _('resume interrupted graft')),
     ('e', 'edit', False, _('invoke editor on commit messages')),
     ('', 'log', None, _('append graft info to log message')),
     ('f', 'force', False, _('force graft')),
     ('D', 'currentdate', False,
      _('record the current date as commit date')),
     ('U', 'currentuser', False,
      _('record the current user as committer'))]
    + commitopts2 + mergetoolopts + dryrunopts,
    _('[OPTION]... [-r REV]... REV...'))
def graft(ui, repo, *revs, **opts):
    '''copy changes from other branches onto the current branch

    This command uses Mercurial's merge logic to copy individual
    changes from other branches without merging branches in the
    history graph. This is sometimes known as 'backporting' or
    'cherry-picking'. By default, graft will copy user, date, and
    description from the source changesets.

    Changesets that are ancestors of the current revision, that have
    already been grafted, or that are merges will be skipped.

    If --log is specified, log messages will have a comment appended
    of the form::

      (grafted from CHANGESETHASH)

    If --force is specified, revisions will be grafted even if they
    are already ancestors of, or have been grafted to, the destination.
    This is useful when the revisions have since been backed out.

    If a graft merge results in conflicts, the graft process is
    interrupted so that the current merge can be manually resolved.
    Once all conflicts are addressed, the graft process can be
    continued with the -c/--continue option.

    .. note::

       The -c/--continue option does not reapply earlier options, except
       for --force.

    .. container:: verbose

      Examples:

      - copy a single change to the stable branch and edit its description::

          hg update stable
          hg graft --edit 9393

      - graft a range of changesets with one exception, updating dates::

          hg graft -D "2085::2093 and not 2091"

      - continue a graft after resolving conflicts::

          hg graft -c

      - show the source of a grafted changeset::

          hg log --debug -r .

      - show revisions sorted by date::

          hg log -r "sort(all(), date)"

    See :hg:`help revisions` for more about specifying revisions.

    Returns 0 on successful completion.
    '''
    with repo.wlock():
        return _dograft(ui, repo, *revs, **opts)

def _dograft(ui, repo, *revs, **opts):
    opts = pycompat.byteskwargs(opts)
    if revs and opts.get('rev'):
        ui.warn(_('warning: inconsistent use of --rev might give unexpected '
                  'revision ordering!\n'))

    revs = list(revs)
    revs.extend(opts.get('rev'))

    if not opts.get('user') and opts.get('currentuser'):
        opts['user'] = ui.username()
    if not opts.get('date') and opts.get('currentdate'):
        opts['date'] = "%d %d" % util.makedate()

    editor = cmdutil.getcommiteditor(editform='graft',
                                     **pycompat.strkwargs(opts))

    cont = False
    if opts.get('continue'):
        cont = True
        if revs:
            raise error.Abort(_("can't specify --continue and revisions"))
        # read in unfinished revisions
        try:
            nodes = repo.vfs.read('graftstate').splitlines()
            revs = [repo[node].rev() for node in nodes]
        except IOError as inst:
            if inst.errno != errno.ENOENT:
                raise
            cmdutil.wrongtooltocontinue(repo, _('graft'))
    else:
        cmdutil.checkunfinished(repo)
        cmdutil.bailifchanged(repo)
        if not revs:
            raise error.Abort(_('no revisions specified'))
        revs = scmutil.revrange(repo, revs)

    skipped = set()
    # check for merges
    for rev in repo.revs('%ld and merge()', revs):
        ui.warn(_('skipping ungraftable merge revision %s\n') % rev)
        skipped.add(rev)
    revs = [r for r in revs if r not in skipped]
    if not revs:
        return -1

    # Don't check in the --continue case, in effect retaining --force across
    # --continues. That's because without --force, any revisions we decided to
    # skip would have been filtered out here, so they wouldn't have made their
    # way to the graftstate. With --force, any revisions we would have otherwise
    # skipped would not have been filtered out, and if they hadn't been applied
    # already, they'd have been in the graftstate.
    if not (cont or opts.get('force')):
        # check for ancestors of dest branch
        crev = repo['.'].rev()
        ancestors = repo.changelog.ancestors([crev], inclusive=True)
        # XXX make this lazy in the future
        # don't mutate while iterating, create a copy
        for rev in list(revs):
            if rev in ancestors:
                ui.warn(_('skipping ancestor revision %d:%s\n') %
                        (rev, repo[rev]))
                # XXX remove on list is slow
                revs.remove(rev)
        if not revs:
            return -1

        # analyze revs for earlier grafts
        ids = {}
        for ctx in repo.set("%ld", revs):
            ids[ctx.hex()] = ctx.rev()
            n = ctx.extra().get('source')
            if n:
                ids[n] = ctx.rev()

        # check ancestors for earlier grafts
        ui.debug('scanning for duplicate grafts\n')

        # The only changesets we can be sure don't contain grafts of any
        # revs are the ones that are common ancestors of *all* revs:
        for rev in repo.revs('only(%d,ancestor(%ld))', crev, revs):
            ctx = repo[rev]
            n = ctx.extra().get('source')
            if n in ids:
                try:
                    r = repo[n].rev()
                except error.RepoLookupError:
                    r = None
                if r in revs:
                    ui.warn(_('skipping revision %d:%s '
                              '(already grafted to %d:%s)\n')
                            % (r, repo[r], rev, ctx))
                    revs.remove(r)
                elif ids[n] in revs:
                    if r is None:
                        ui.warn(_('skipping already grafted revision %d:%s '
                                  '(%d:%s also has unknown origin %s)\n')
                                % (ids[n], repo[ids[n]], rev, ctx, n[:12]))
                    else:
                        ui.warn(_('skipping already grafted revision %d:%s '
                                  '(%d:%s also has origin %d:%s)\n')
                                % (ids[n], repo[ids[n]], rev, ctx, r, n[:12]))
                    revs.remove(ids[n])
            elif ctx.hex() in ids:
                r = ids[ctx.hex()]
                ui.warn(_('skipping already grafted revision %d:%s '
                          '(was grafted from %d:%s)\n') %
                        (r, repo[r], rev, ctx))
                revs.remove(r)
        if not revs:
            return -1

    for pos, ctx in enumerate(repo.set("%ld", revs)):
        desc = '%d:%s "%s"' % (ctx.rev(), ctx,
                               ctx.description().split('\n', 1)[0])
        names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
        if names:
            desc += ' (%s)' % ' '.join(names)
        ui.status(_('grafting %s\n') % desc)
        if opts.get('dry_run'):
            continue

        source = ctx.extra().get('source')
        extra = {}
        if source:
            extra['source'] = source
            extra['intermediate-source'] = ctx.hex()
        else:
            extra['source'] = ctx.hex()
        user = ctx.user()
        if opts.get('user'):
            user = opts['user']
        date = ctx.date()
        if opts.get('date'):
            date = opts['date']
        message = ctx.description()
        if opts.get('log'):
            message += '\n(grafted from %s)' % ctx.hex()

        # we don't merge the first commit when continuing
        if not cont:
            # perform the graft merge with p1(rev) as 'ancestor'
            try:
                # ui.forcemerge is an internal variable, do not document
                repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                                  'graft')
                stats = mergemod.graft(repo, ctx, ctx.p1(),
                                       ['local', 'graft'])
            finally:
                repo.ui.setconfig('ui', 'forcemerge', '', 'graft')
            # report any conflicts
            if stats and stats[3] > 0:
                # write out state for --continue
                nodelines = [repo[rev].hex() + "\n" for rev in revs[pos:]]
                repo.vfs.write('graftstate', ''.join(nodelines))
                extra = ''
                if opts.get('user'):
                    extra += ' --user %s' % util.shellquote(opts['user'])
                if opts.get('date'):
                    extra += ' --date %s' % util.shellquote(opts['date'])
                if opts.get('log'):
                    extra += ' --log'
                hint = _("use 'hg resolve' and 'hg graft --continue%s'") % extra
                raise error.Abort(
                    _("unresolved conflicts, can't continue"),
                    hint=hint)
        else:
            cont = False

        # commit
        node = repo.commit(text=message, user=user,
                           date=date, extra=extra, editor=editor)
        if node is None:
            ui.warn(
                _('note: graft of %d:%s created no changes to commit\n') %
                (ctx.rev(), ctx))

    # remove state when we complete successfully
    if not opts.get('dry_run'):
        repo.vfs.unlinkpath('graftstate', ignoremissing=True)

    return 0

@command('grep',
    [('0', 'print0', None, _('end fields with NUL')),
    ('', 'all', None, _('print all revisions that match')),
    ('a', 'text', None, _('treat all files as text')),
    ('f', 'follow', None,
     _('follow changeset history,'
       ' or file history across copies and renames')),
    ('i', 'ignore-case', None, _('ignore case when matching')),
    ('l', 'files-with-matches', None,
     _('print only filenames and revisions that match')),
    ('n', 'line-number', None, _('print matching line numbers')),
    ('r', 'rev', [],
     _('only search files changed within revision range'), _('REV')),
    ('u', 'user', None, _('list the author (long with -v)')),
    ('d', 'date', None, _('list the date (short with -q)')),
    ] + formatteropts + walkopts,
    _('[OPTION]... PATTERN [FILE]...'),
    inferrepo=True)
def grep(ui, repo, pattern, *pats, **opts):
    """search revision history for a pattern in specified files

    Search revision history for a regular expression in the specified
    files or the entire project.

    By default, grep prints the most recent revision number for each
    file in which it finds a match. To get it to print every revision
    that contains a change in match status ("-" for a match that becomes
    a non-match, or "+" for a non-match that becomes a match), use the
    --all flag.

    PATTERN can be any Python (roughly Perl-compatible) regular
    expression.

    If no FILEs are specified (and -f/--follow isn't set), all files in
    the repository are searched, including those that don't exist in the
    current branch or have been deleted in a prior changeset.
2331
2331
2332 Returns 0 if a match is found, 1 otherwise.
2332 Returns 0 if a match is found, 1 otherwise.
2333 """
2333 """
2334 opts = pycompat.byteskwargs(opts)
2334 opts = pycompat.byteskwargs(opts)
2335 reflags = re.M
2335 reflags = re.M
2336 if opts.get('ignore_case'):
2336 if opts.get('ignore_case'):
2337 reflags |= re.I
2337 reflags |= re.I
2338 try:
2338 try:
2339 regexp = util.re.compile(pattern, reflags)
2339 regexp = util.re.compile(pattern, reflags)
2340 except re.error as inst:
2340 except re.error as inst:
2341 ui.warn(_("grep: invalid match pattern: %s\n") % inst)
2341 ui.warn(_("grep: invalid match pattern: %s\n") % inst)
2342 return 1
2342 return 1
2343 sep, eol = ':', '\n'
2343 sep, eol = ':', '\n'
2344 if opts.get('print0'):
2344 if opts.get('print0'):
2345 sep = eol = '\0'
2345 sep = eol = '\0'
2346
2346
2347 getfile = util.lrucachefunc(repo.file)
2347 getfile = util.lrucachefunc(repo.file)
2348
2348
2349 def matchlines(body):
2349 def matchlines(body):
2350 begin = 0
2350 begin = 0
2351 linenum = 0
2351 linenum = 0
2352 while begin < len(body):
2352 while begin < len(body):
2353 match = regexp.search(body, begin)
2353 match = regexp.search(body, begin)
2354 if not match:
2354 if not match:
2355 break
2355 break
2356 mstart, mend = match.span()
2356 mstart, mend = match.span()
2357 linenum += body.count('\n', begin, mstart) + 1
2357 linenum += body.count('\n', begin, mstart) + 1
2358 lstart = body.rfind('\n', begin, mstart) + 1 or begin
2358 lstart = body.rfind('\n', begin, mstart) + 1 or begin
2359 begin = body.find('\n', mend) + 1 or len(body) + 1
2359 begin = body.find('\n', mend) + 1 or len(body) + 1
2360 lend = begin - 1
2360 lend = begin - 1
2361 yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]
2361 yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]
2362
2362
2363 class linestate(object):
2363 class linestate(object):
2364 def __init__(self, line, linenum, colstart, colend):
2364 def __init__(self, line, linenum, colstart, colend):
2365 self.line = line
2365 self.line = line
2366 self.linenum = linenum
2366 self.linenum = linenum
2367 self.colstart = colstart
2367 self.colstart = colstart
2368 self.colend = colend
2368 self.colend = colend
2369
2369
2370 def __hash__(self):
2370 def __hash__(self):
2371 return hash((self.linenum, self.line))
2371 return hash((self.linenum, self.line))
2372
2372
2373 def __eq__(self, other):
2373 def __eq__(self, other):
2374 return self.line == other.line
2374 return self.line == other.line
2375
2375
2376 def findpos(self):
2376 def findpos(self):
2377 """Iterate all (start, end) indices of matches"""
2377 """Iterate all (start, end) indices of matches"""
2378 yield self.colstart, self.colend
2378 yield self.colstart, self.colend
2379 p = self.colend
2379 p = self.colend
2380 while p < len(self.line):
2380 while p < len(self.line):
2381 m = regexp.search(self.line, p)
2381 m = regexp.search(self.line, p)
2382 if not m:
2382 if not m:
2383 break
2383 break
2384 yield m.span()
2384 yield m.span()
2385 p = m.end()
2385 p = m.end()
2386
2386
2387 matches = {}
2387 matches = {}
2388 copies = {}
2388 copies = {}
2389 def grepbody(fn, rev, body):
2389 def grepbody(fn, rev, body):
2390 matches[rev].setdefault(fn, [])
2390 matches[rev].setdefault(fn, [])
2391 m = matches[rev][fn]
2391 m = matches[rev][fn]
2392 for lnum, cstart, cend, line in matchlines(body):
2392 for lnum, cstart, cend, line in matchlines(body):
2393 s = linestate(line, lnum, cstart, cend)
2393 s = linestate(line, lnum, cstart, cend)
2394 m.append(s)
2394 m.append(s)
2395
2395
2396 def difflinestates(a, b):
2396 def difflinestates(a, b):
2397 sm = difflib.SequenceMatcher(None, a, b)
2397 sm = difflib.SequenceMatcher(None, a, b)
2398 for tag, alo, ahi, blo, bhi in sm.get_opcodes():
2398 for tag, alo, ahi, blo, bhi in sm.get_opcodes():
2399 if tag == 'insert':
2399 if tag == 'insert':
2400 for i in xrange(blo, bhi):
2400 for i in xrange(blo, bhi):
2401 yield ('+', b[i])
2401 yield ('+', b[i])
2402 elif tag == 'delete':
2402 elif tag == 'delete':
2403 for i in xrange(alo, ahi):
2403 for i in xrange(alo, ahi):
2404 yield ('-', a[i])
2404 yield ('-', a[i])
2405 elif tag == 'replace':
2405 elif tag == 'replace':
2406 for i in xrange(alo, ahi):
2406 for i in xrange(alo, ahi):
2407 yield ('-', a[i])
2407 yield ('-', a[i])
2408 for i in xrange(blo, bhi):
2408 for i in xrange(blo, bhi):
2409 yield ('+', b[i])
2409 yield ('+', b[i])
2410
2410
2411 def display(fm, fn, ctx, pstates, states):
2411 def display(fm, fn, ctx, pstates, states):
2412 rev = ctx.rev()
2412 rev = ctx.rev()
2413 if fm.isplain():
2413 if fm.isplain():
2414 formatuser = ui.shortuser
2414 formatuser = ui.shortuser
2415 else:
2415 else:
2416 formatuser = str
2416 formatuser = str
2417 if ui.quiet:
2417 if ui.quiet:
2418 datefmt = '%Y-%m-%d'
2418 datefmt = '%Y-%m-%d'
2419 else:
2419 else:
2420 datefmt = '%a %b %d %H:%M:%S %Y %1%2'
2420 datefmt = '%a %b %d %H:%M:%S %Y %1%2'
2421 found = False
2421 found = False
2422 @util.cachefunc
2422 @util.cachefunc
2423 def binary():
2423 def binary():
2424 flog = getfile(fn)
2424 flog = getfile(fn)
2425 return util.binary(flog.read(ctx.filenode(fn)))
2425 return util.binary(flog.read(ctx.filenode(fn)))
2426
2426
2427 fieldnamemap = {'filename': 'file', 'linenumber': 'line_number'}
2427 fieldnamemap = {'filename': 'file', 'linenumber': 'line_number'}
2428 if opts.get('all'):
2428 if opts.get('all'):
2429 iter = difflinestates(pstates, states)
2429 iter = difflinestates(pstates, states)
2430 else:
2430 else:
2431 iter = [('', l) for l in states]
2431 iter = [('', l) for l in states]
2432 for change, l in iter:
2432 for change, l in iter:
2433 fm.startitem()
2433 fm.startitem()
2434 fm.data(node=fm.hexfunc(ctx.node()))
2434 fm.data(node=fm.hexfunc(ctx.node()))
2435 cols = [
2435 cols = [
2436 ('filename', fn, True),
2436 ('filename', fn, True),
2437 ('rev', rev, True),
2437 ('rev', rev, True),
2438 ('linenumber', l.linenum, opts.get('line_number')),
2438 ('linenumber', l.linenum, opts.get('line_number')),
2439 ]
2439 ]
2440 if opts.get('all'):
2440 if opts.get('all'):
2441 cols.append(('change', change, True))
2441 cols.append(('change', change, True))
2442 cols.extend([
2442 cols.extend([
2443 ('user', formatuser(ctx.user()), opts.get('user')),
2443 ('user', formatuser(ctx.user()), opts.get('user')),
2444 ('date', fm.formatdate(ctx.date(), datefmt), opts.get('date')),
2444 ('date', fm.formatdate(ctx.date(), datefmt), opts.get('date')),
2445 ])
2445 ])
2446 lastcol = next(name for name, data, cond in reversed(cols) if cond)
2446 lastcol = next(name for name, data, cond in reversed(cols) if cond)
2447 for name, data, cond in cols:
2447 for name, data, cond in cols:
2448 field = fieldnamemap.get(name, name)
2448 field = fieldnamemap.get(name, name)
2449 fm.condwrite(cond, field, '%s', data, label='grep.%s' % name)
2449 fm.condwrite(cond, field, '%s', data, label='grep.%s' % name)
2450 if cond and name != lastcol:
2450 if cond and name != lastcol:
2451 fm.plain(sep, label='grep.sep')
2451 fm.plain(sep, label='grep.sep')
2452 if not opts.get('files_with_matches'):
2452 if not opts.get('files_with_matches'):
2453 fm.plain(sep, label='grep.sep')
2453 fm.plain(sep, label='grep.sep')
2454 if not opts.get('text') and binary():
2454 if not opts.get('text') and binary():
2455 fm.plain(_(" Binary file matches"))
2455 fm.plain(_(" Binary file matches"))
2456 else:
2456 else:
2457 displaymatches(fm.nested('texts'), l)
2457 displaymatches(fm.nested('texts'), l)
2458 fm.plain(eol)
2458 fm.plain(eol)
2459 found = True
2459 found = True
2460 if opts.get('files_with_matches'):
2460 if opts.get('files_with_matches'):
2461 break
2461 break
2462 return found
2462 return found
2463
2463
2464 def displaymatches(fm, l):
2464 def displaymatches(fm, l):
2465 p = 0
2465 p = 0
2466 for s, e in l.findpos():
2466 for s, e in l.findpos():
2467 if p < s:
2467 if p < s:
2468 fm.startitem()
2468 fm.startitem()
2469 fm.write('text', '%s', l.line[p:s])
2469 fm.write('text', '%s', l.line[p:s])
2470 fm.data(matched=False)
2470 fm.data(matched=False)
2471 fm.startitem()
2471 fm.startitem()
2472 fm.write('text', '%s', l.line[s:e], label='grep.match')
2472 fm.write('text', '%s', l.line[s:e], label='grep.match')
2473 fm.data(matched=True)
2473 fm.data(matched=True)
2474 p = e
2474 p = e
2475 if p < len(l.line):
2475 if p < len(l.line):
2476 fm.startitem()
2476 fm.startitem()
2477 fm.write('text', '%s', l.line[p:])
2477 fm.write('text', '%s', l.line[p:])
2478 fm.data(matched=False)
2478 fm.data(matched=False)
2479 fm.end()
2479 fm.end()
2480
2480
2481 skip = {}
2481 skip = {}
2482 revfiles = {}
2482 revfiles = {}
2483 matchfn = scmutil.match(repo[None], pats, opts)
2483 matchfn = scmutil.match(repo[None], pats, opts)
2484 found = False
2484 found = False
2485 follow = opts.get('follow')
2485 follow = opts.get('follow')
2486
2486
2487 def prep(ctx, fns):
2487 def prep(ctx, fns):
2488 rev = ctx.rev()
2488 rev = ctx.rev()
2489 pctx = ctx.p1()
2489 pctx = ctx.p1()
2490 parent = pctx.rev()
2490 parent = pctx.rev()
2491 matches.setdefault(rev, {})
2491 matches.setdefault(rev, {})
2492 matches.setdefault(parent, {})
2492 matches.setdefault(parent, {})
2493 files = revfiles.setdefault(rev, [])
2493 files = revfiles.setdefault(rev, [])
2494 for fn in fns:
2494 for fn in fns:
2495 flog = getfile(fn)
2495 flog = getfile(fn)
2496 try:
2496 try:
2497 fnode = ctx.filenode(fn)
2497 fnode = ctx.filenode(fn)
2498 except error.LookupError:
2498 except error.LookupError:
2499 continue
2499 continue
2500
2500
2501 copied = flog.renamed(fnode)
2501 copied = flog.renamed(fnode)
2502 copy = follow and copied and copied[0]
2502 copy = follow and copied and copied[0]
2503 if copy:
2503 if copy:
2504 copies.setdefault(rev, {})[fn] = copy
2504 copies.setdefault(rev, {})[fn] = copy
2505 if fn in skip:
2505 if fn in skip:
2506 if copy:
2506 if copy:
2507 skip[copy] = True
2507 skip[copy] = True
2508 continue
2508 continue
2509 files.append(fn)
2509 files.append(fn)
2510
2510
2511 if fn not in matches[rev]:
2511 if fn not in matches[rev]:
2512 grepbody(fn, rev, flog.read(fnode))
2512 grepbody(fn, rev, flog.read(fnode))
2513
2513
2514 pfn = copy or fn
2514 pfn = copy or fn
2515 if pfn not in matches[parent]:
2515 if pfn not in matches[parent]:
2516 try:
2516 try:
2517 fnode = pctx.filenode(pfn)
2517 fnode = pctx.filenode(pfn)
2518 grepbody(pfn, parent, flog.read(fnode))
2518 grepbody(pfn, parent, flog.read(fnode))
2519 except error.LookupError:
2519 except error.LookupError:
2520 pass
2520 pass
2521
2521
2522 ui.pager('grep')
2522 ui.pager('grep')
2523 fm = ui.formatter('grep', opts)
2523 fm = ui.formatter('grep', opts)
2524 for ctx in cmdutil.walkchangerevs(repo, matchfn, opts, prep):
2524 for ctx in cmdutil.walkchangerevs(repo, matchfn, opts, prep):
2525 rev = ctx.rev()
2525 rev = ctx.rev()
2526 parent = ctx.p1().rev()
2526 parent = ctx.p1().rev()
2527 for fn in sorted(revfiles.get(rev, [])):
2527 for fn in sorted(revfiles.get(rev, [])):
2528 states = matches[rev][fn]
2528 states = matches[rev][fn]
2529 copy = copies.get(rev, {}).get(fn)
2529 copy = copies.get(rev, {}).get(fn)
2530 if fn in skip:
2530 if fn in skip:
2531 if copy:
2531 if copy:
2532 skip[copy] = True
2532 skip[copy] = True
2533 continue
2533 continue
2534 pstates = matches.get(parent, {}).get(copy or fn, [])
2534 pstates = matches.get(parent, {}).get(copy or fn, [])
2535 if pstates or states:
2535 if pstates or states:
2536 r = display(fm, fn, ctx, pstates, states)
2536 r = display(fm, fn, ctx, pstates, states)
2537 found = found or r
2537 found = found or r
2538 if r and not opts.get('all'):
2538 if r and not opts.get('all'):
2539 skip[fn] = True
2539 skip[fn] = True
2540 if copy:
2540 if copy:
2541 skip[copy] = True
2541 skip[copy] = True
2542 del matches[rev]
2542 del matches[rev]
2543 del revfiles[rev]
2543 del revfiles[rev]
2544 fm.end()
2544 fm.end()
2545
2545
2546 return not found
2546 return not found
2547
2547
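# Sidebar sketch (hypothetical names, not part of Mercurial's API): the
# nested matchlines() helper in grep() above scans the whole file body with
# a single compiled regex and recovers line numbers by counting the newlines
# skipped between matches, instead of splitting the body into lines first.
# A self-contained version of that technique, assuming one match per line
# is enough (grep's linestate.findpos() handles repeats within a line):
def _scan_matches(pattern, body):
    """Yield (linenum, colstart, colend, line) for each matching line."""
    import re
    regexp = re.compile(pattern)
    begin = 0
    linenum = 0
    while begin < len(body):
        match = regexp.search(body, begin)
        if not match:
            break
        mstart, mend = match.span()
        # count newlines since the previous match to derive the line number
        linenum += body.count('\n', begin, mstart) + 1
        lstart = body.rfind('\n', begin, mstart) + 1 or begin
        begin = body.find('\n', mend) + 1 or len(body) + 1
        lend = begin - 1
        yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]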
@command('heads',
    [('r', 'rev', '',
     _('show only heads which are descendants of STARTREV'), _('STARTREV')),
    ('t', 'topo', False, _('show topological heads only')),
    ('a', 'active', False, _('show active branchheads only (DEPRECATED)')),
    ('c', 'closed', False, _('show normal and closed branch heads')),
    ] + templateopts,
    _('[-ct] [-r STARTREV] [REV]...'))
def heads(ui, repo, *branchrevs, **opts):
    """show branch heads

    With no arguments, show all open branch heads in the repository.
    Branch heads are changesets that have no descendants on the
    same branch. They are where development generally takes place and
    are the usual targets for update and merge operations.

    If one or more REVs are given, only open branch heads on the
    branches associated with the specified changesets are shown. This
    means that you can use :hg:`heads .` to see the heads on the
    currently checked-out branch.

    If -c/--closed is specified, also show branch heads marked closed
    (see :hg:`commit --close-branch`).

    If STARTREV is specified, only those heads that are descendants of
    STARTREV will be displayed.

    If -t/--topo is specified, named branch mechanics will be ignored and only
    topological heads (changesets with no children) will be shown.

    Returns 0 if matching heads are found, 1 if not.
    """

    opts = pycompat.byteskwargs(opts)
    start = None
    if 'rev' in opts:
        start = scmutil.revsingle(repo, opts['rev'], None).node()

    if opts.get('topo'):
        heads = [repo[h] for h in repo.heads(start)]
    else:
        heads = []
        for branch in repo.branchmap():
            heads += repo.branchheads(branch, start, opts.get('closed'))
        heads = [repo[h] for h in heads]

    if branchrevs:
        branches = set(repo[br].branch() for br in branchrevs)
        heads = [h for h in heads if h.branch() in branches]

    if opts.get('active') and branchrevs:
        dagheads = repo.heads(start)
        heads = [h for h in heads if h.node() in dagheads]

    if branchrevs:
        haveheads = set(h.branch() for h in heads)
        if branches - haveheads:
            headless = ', '.join(b for b in branches - haveheads)
            msg = _('no open branch heads found on branches %s')
            if opts.get('rev'):
                msg += _(' (started at %s)') % opts['rev']
            ui.warn((msg + '\n') % headless)

    if not heads:
        return 1

    ui.pager('heads')
    heads = sorted(heads, key=lambda x: -x.rev())
    displayer = cmdutil.show_changeset(ui, repo, opts)
    for ctx in heads:
        displayer.show(ctx)
    displayer.close()

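# Sidebar sketch (hypothetical names, not part of Mercurial's API): heads()
# above narrows the candidate heads to the branches of the requested
# revisions, then sorts what remains by descending revision number for
# display. The same filter-then-sort step in isolation, modeling each head
# as a (rev, branchname) pair:
def _filter_and_sort_heads(heads, branches):
    """heads: iterable of (rev, branchname) pairs; branches: set of names."""
    kept = [h for h in heads if h[1] in branches]
    # newest revision first, mirroring sorted(heads, key=lambda x: -x.rev())
    return sorted(kept, key=lambda h: -h[0])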
2621 @command('help',
2621 @command('help',
2622 [('e', 'extension', None, _('show only help for extensions')),
2622 [('e', 'extension', None, _('show only help for extensions')),
2623 ('c', 'command', None, _('show only help for commands')),
2623 ('c', 'command', None, _('show only help for commands')),
2624 ('k', 'keyword', None, _('show topics matching keyword')),
2624 ('k', 'keyword', None, _('show topics matching keyword')),
2625 ('s', 'system', [], _('show help for specific platform(s)')),
2625 ('s', 'system', [], _('show help for specific platform(s)')),
2626 ],
2626 ],
2627 _('[-ecks] [TOPIC]'),
2627 _('[-ecks] [TOPIC]'),
2628 norepo=True)
2628 norepo=True)
2629 def help_(ui, name=None, **opts):
2629 def help_(ui, name=None, **opts):
2630 """show help for a given topic or a help overview
2630 """show help for a given topic or a help overview
2631
2631
2632 With no arguments, print a list of commands with short help messages.
2632 With no arguments, print a list of commands with short help messages.
2633
2633
2634 Given a topic, extension, or command name, print help for that
2634 Given a topic, extension, or command name, print help for that
2635 topic.
2635 topic.
2636
2636
2637 Returns 0 if successful.
2637 Returns 0 if successful.
2638 """
2638 """
2639
2639
2640 keep = opts.get(r'system') or []
2640 keep = opts.get(r'system') or []
2641 if len(keep) == 0:
2641 if len(keep) == 0:
2642 if pycompat.sysplatform.startswith('win'):
2642 if pycompat.sysplatform.startswith('win'):
2643 keep.append('windows')
2643 keep.append('windows')
2644 elif pycompat.sysplatform == 'OpenVMS':
2644 elif pycompat.sysplatform == 'OpenVMS':
2645 keep.append('vms')
2645 keep.append('vms')
2646 elif pycompat.sysplatform == 'plan9':
2646 elif pycompat.sysplatform == 'plan9':
2647 keep.append('plan9')
2647 keep.append('plan9')
2648 else:
2648 else:
2649 keep.append('unix')
2649 keep.append('unix')
2650 keep.append(pycompat.sysplatform.lower())
2650 keep.append(pycompat.sysplatform.lower())
2651 if ui.verbose:
2651 if ui.verbose:
2652 keep.append('verbose')
2652 keep.append('verbose')
2653
2653
2654 commands = sys.modules[__name__]
2654 commands = sys.modules[__name__]
2655 formatted = help.formattedhelp(ui, commands, name, keep=keep, **opts)
2655 formatted = help.formattedhelp(ui, commands, name, keep=keep, **opts)
2656 ui.pager('help')
2656 ui.pager('help')
2657 ui.write(formatted)
2657 ui.write(formatted)
2658
2658
2659
2659
2660 @command('identify|id',
2660 @command('identify|id',
2661 [('r', 'rev', '',
2661 [('r', 'rev', '',
2662 _('identify the specified revision'), _('REV')),
2662 _('identify the specified revision'), _('REV')),
2663 ('n', 'num', None, _('show local revision number')),
2663 ('n', 'num', None, _('show local revision number')),
2664 ('i', 'id', None, _('show global revision id')),
2664 ('i', 'id', None, _('show global revision id')),
2665 ('b', 'branch', None, _('show branch')),
2665 ('b', 'branch', None, _('show branch')),
2666 ('t', 'tags', None, _('show tags')),
2666 ('t', 'tags', None, _('show tags')),
2667 ('B', 'bookmarks', None, _('show bookmarks')),
2667 ('B', 'bookmarks', None, _('show bookmarks')),
2668 ] + remoteopts,
2668 ] + remoteopts,
2669 _('[-nibtB] [-r REV] [SOURCE]'),
2669 _('[-nibtB] [-r REV] [SOURCE]'),
2670 optionalrepo=True)
2670 optionalrepo=True)
2671 def identify(ui, repo, source=None, rev=None,
2671 def identify(ui, repo, source=None, rev=None,
2672 num=None, id=None, branch=None, tags=None, bookmarks=None, **opts):
2672 num=None, id=None, branch=None, tags=None, bookmarks=None, **opts):
2673 """identify the working directory or specified revision
2673 """identify the working directory or specified revision
2674
2674
2675 Print a summary identifying the repository state at REV using one or
2675 Print a summary identifying the repository state at REV using one or
2676 two parent hash identifiers, followed by a "+" if the working
2676 two parent hash identifiers, followed by a "+" if the working
2677 directory has uncommitted changes, the branch name (if not default),
2677 directory has uncommitted changes, the branch name (if not default),
2678 a list of tags, and a list of bookmarks.
2678 a list of tags, and a list of bookmarks.
2679
2679
2680 When REV is not given, print a summary of the current state of the
2680 When REV is not given, print a summary of the current state of the
2681 repository.
2681 repository.
2682
2682
2683 Specifying a path to a repository root or Mercurial bundle will
2683 Specifying a path to a repository root or Mercurial bundle will
2684 cause lookup to operate on that repository/bundle.
2684 cause lookup to operate on that repository/bundle.
2685
2685
2686 .. container:: verbose
2686 .. container:: verbose
2687
2687
2688 Examples:
2688 Examples:
2689
2689
2690 - generate a build identifier for the working directory::
2690 - generate a build identifier for the working directory::
2691
2691
2692 hg id --id > build-id.dat
2692 hg id --id > build-id.dat
2693
2693
2694 - find the revision corresponding to a tag::
2694 - find the revision corresponding to a tag::
2695
2695
2696 hg id -n -r 1.3
2696 hg id -n -r 1.3
2697
2697
2698 - check the most recent revision of a remote repository::
2698 - check the most recent revision of a remote repository::
2699
2699
2700 hg id -r tip https://www.mercurial-scm.org/repo/hg/
2700 hg id -r tip https://www.mercurial-scm.org/repo/hg/
2701
2701
2702 See :hg:`log` for generating more information about specific revisions,
2702 See :hg:`log` for generating more information about specific revisions,
2703 including full hash identifiers.
2703 including full hash identifiers.
2704
2704
2705 Returns 0 if successful.
2705 Returns 0 if successful.
2706 """
2706 """
2707
2707
2708 opts = pycompat.byteskwargs(opts)
2708 opts = pycompat.byteskwargs(opts)
2709 if not repo and not source:
2709 if not repo and not source:
2710 raise error.Abort(_("there is no Mercurial repository here "
2710 raise error.Abort(_("there is no Mercurial repository here "
2711 "(.hg not found)"))
2711 "(.hg not found)"))
2712
2712
2713 if ui.debugflag:
2713 if ui.debugflag:
2714 hexfunc = hex
2714 hexfunc = hex
2715 else:
2715 else:
2716 hexfunc = short
2716 hexfunc = short
2717 default = not (num or id or branch or tags or bookmarks)
2717 default = not (num or id or branch or tags or bookmarks)
2718 output = []
2718 output = []
2719 revs = []
2719 revs = []
2720
2720
2721 if source:
2721 if source:
2722 source, branches = hg.parseurl(ui.expandpath(source))
2722 source, branches = hg.parseurl(ui.expandpath(source))
2723 peer = hg.peer(repo or ui, opts, source) # only pass ui when no repo
2723 peer = hg.peer(repo or ui, opts, source) # only pass ui when no repo
2724 repo = peer.local()
2724 repo = peer.local()
2725 revs, checkout = hg.addbranchrevs(repo, peer, branches, None)
2725 revs, checkout = hg.addbranchrevs(repo, peer, branches, None)
2726
2726
2727 if not repo:
2727 if not repo:
2728 if num or branch or tags:
2728 if num or branch or tags:
2729 raise error.Abort(
2729 raise error.Abort(
2730 _("can't query remote revision number, branch, or tags"))
2730 _("can't query remote revision number, branch, or tags"))
2731 if not rev and revs:
2731 if not rev and revs:
2732 rev = revs[0]
2732 rev = revs[0]
2733 if not rev:
2733 if not rev:
2734 rev = "tip"
2734 rev = "tip"
2735
2735
2736 remoterev = peer.lookup(rev)
2736 remoterev = peer.lookup(rev)
2737 if default or id:
2737 if default or id:
2738 output = [hexfunc(remoterev)]
2738 output = [hexfunc(remoterev)]
2739
2739
2740 def getbms():
2740 def getbms():
2741 bms = []
2741 bms = []
2742
2742
2743 if 'bookmarks' in peer.listkeys('namespaces'):
2743 if 'bookmarks' in peer.listkeys('namespaces'):
2744 hexremoterev = hex(remoterev)
2744 hexremoterev = hex(remoterev)
2745 bms = [bm for bm, bmr in peer.listkeys('bookmarks').iteritems()
2745 bms = [bm for bm, bmr in peer.listkeys('bookmarks').iteritems()
2746 if bmr == hexremoterev]
2746 if bmr == hexremoterev]
2747
2747
2748 return sorted(bms)
2748 return sorted(bms)
2749
2749
2750 if bookmarks:
2750 if bookmarks:
2751 output.extend(getbms())
2751 output.extend(getbms())
2752 elif default and not ui.quiet:
2752 elif default and not ui.quiet:
2753 # multiple bookmarks for a single parent separated by '/'
2753 # multiple bookmarks for a single parent separated by '/'
2754 bm = '/'.join(getbms())
2754 bm = '/'.join(getbms())
2755 if bm:
2755 if bm:
2756 output.append(bm)
2756 output.append(bm)
2757 else:
2757 else:
2758 ctx = scmutil.revsingle(repo, rev, None)
2758 ctx = scmutil.revsingle(repo, rev, None)
2759
2759
2760 if ctx.rev() is None:
2760 if ctx.rev() is None:
2761 ctx = repo[None]
2761 ctx = repo[None]
2762 parents = ctx.parents()
2762 parents = ctx.parents()
2763 taglist = []
2763 taglist = []
2764 for p in parents:
2764 for p in parents:
2765 taglist.extend(p.tags())
2765 taglist.extend(p.tags())
2766
2766
2767 changed = ""
2767 changed = ""
2768 if default or id or num:
2768 if default or id or num:
2769 if (any(repo.status())
2769 if (any(repo.status())
2770 or any(ctx.sub(s).dirty() for s in ctx.substate)):
2770 or any(ctx.sub(s).dirty() for s in ctx.substate)):
2771 changed = '+'
2771 changed = '+'
2772 if default or id:
2772 if default or id:
2773 output = ["%s%s" %
2773 output = ["%s%s" %
2774 ('+'.join([hexfunc(p.node()) for p in parents]), changed)]
2774 ('+'.join([hexfunc(p.node()) for p in parents]), changed)]
2775 if num:
2775 if num:
2776 output.append("%s%s" %
2776 output.append("%s%s" %
2777 ('+'.join(["%d" % p.rev() for p in parents]), changed))
2777 ('+'.join(["%d" % p.rev() for p in parents]), changed))
2778 else:
2778 else:
2779 if default or id:
2779 if default or id:
2780 output = [hexfunc(ctx.node())]
2780 output = [hexfunc(ctx.node())]
2781 if num:
2781 if num:
2782 output.append(pycompat.bytestr(ctx.rev()))
2782 output.append(pycompat.bytestr(ctx.rev()))
2783 taglist = ctx.tags()
2783 taglist = ctx.tags()
2784
2784
2785 if default and not ui.quiet:
2785 if default and not ui.quiet:
2786 b = ctx.branch()
2786 b = ctx.branch()
2787 if b != 'default':
2787 if b != 'default':
2788 output.append("(%s)" % b)
2788 output.append("(%s)" % b)
2789
2789
2790 # multiple tags for a single parent separated by '/'
2790 # multiple tags for a single parent separated by '/'
2791 t = '/'.join(taglist)
2791 t = '/'.join(taglist)
2792 if t:
2792 if t:
2793 output.append(t)
2793 output.append(t)
2794
2794
2795 # multiple bookmarks for a single parent separated by '/'
2795 # multiple bookmarks for a single parent separated by '/'
2796 bm = '/'.join(ctx.bookmarks())
2796 bm = '/'.join(ctx.bookmarks())
2797 if bm:
2797 if bm:
2798 output.append(bm)
2798 output.append(bm)
2799 else:
2799 else:
2800 if branch:
2800 if branch:
2801 output.append(ctx.branch())
2801 output.append(ctx.branch())
2802
2802
2803 if tags:
2803 if tags:
2804 output.extend(taglist)
2804 output.extend(taglist)
2805
2805
2806 if bookmarks:
2806 if bookmarks:
2807 output.extend(ctx.bookmarks())
2807 output.extend(ctx.bookmarks())
2808
2808
2809 ui.write("%s\n" % ' '.join(output))
2809 ui.write("%s\n" % ' '.join(output))
2810
2810
@command('import|patch',
    [('p', 'strip', 1,
      _('directory strip option for patch. This has the same '
        'meaning as the corresponding patch option'), _('NUM')),
     ('b', 'base', '', _('base path (DEPRECATED)'), _('PATH')),
     ('e', 'edit', False, _('invoke editor on commit messages')),
     ('f', 'force', None,
      _('skip check for outstanding uncommitted changes (DEPRECATED)')),
     ('', 'no-commit', None,
      _("don't commit, just update the working directory")),
     ('', 'bypass', None,
      _("apply patch without touching the working directory")),
     ('', 'partial', None,
      _('commit even if some hunks fail')),
     ('', 'exact', None,
      _('abort if patch would apply lossily')),
     ('', 'prefix', '',
      _('apply patch to subdirectory'), _('DIR')),
     ('', 'import-branch', None,
      _('use any branch information in patch (implied by --exact)'))] +
    commitopts + commitopts2 + similarityopts,
    _('[OPTION]... PATCH...'))
def import_(ui, repo, patch1=None, *patches, **opts):
    """import an ordered set of patches

    Import a list of patches and commit them individually (unless
    --no-commit is specified).

    To read a patch from standard input (stdin), use "-" as the patch
    name. If a URL is specified, the patch will be downloaded from
    there.

    Import first applies changes to the working directory (unless
    --bypass is specified); import will abort if there are outstanding
    changes.

    Use --bypass to apply and commit patches directly to the
    repository, without affecting the working directory. Without
    --exact, patches will be applied on top of the working directory
    parent revision.

    You can import a patch straight from a mail message. Even patches
    as attachments work (to use the body part, it must have type
    text/plain or text/x-patch). The From and Subject headers of the
    email message are used as the default committer and commit
    message. All text/plain body parts before the first diff are
    added to the commit message.

    If the imported patch was generated by :hg:`export`, user and
    description from the patch override values from the message
    headers and body. Values given on the command line with
    -m/--message and -u/--user override these.

    If --exact is specified, import will set the working directory to
    the parent of each patch before applying it, and will abort if the
    resulting changeset has a different ID than the one recorded in
    the patch. This will guard against various ways that portable
    patch formats and mail systems might fail to transfer Mercurial
    data or metadata. See :hg:`bundle` for lossless transmission.

    Use --partial to ensure a changeset will be created from the patch
    even if some hunks fail to apply. Hunks that fail to apply will be
    written to a <target-file>.rej file. Conflicts can then be resolved
    by hand before :hg:`commit --amend` is run to update the created
    changeset. This flag exists to let people import patches that
    partially apply without losing the associated metadata (author,
    date, description, ...).

    .. note::

       When no hunks apply cleanly, :hg:`import --partial` will create
       an empty changeset, importing only the patch metadata.

    With -s/--similarity, hg will attempt to discover renames and
    copies in the patch in the same way as :hg:`addremove`.

    It is possible to use external patch programs to perform the patch
    by setting the ``ui.patch`` configuration option. For the default
    internal tool, the fuzz can also be configured via ``patch.fuzz``.
    See :hg:`help config` for more information about configuration
    files and how to use these options.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    .. container:: verbose

      Examples:

      - import a traditional patch from a website and detect renames::

          hg import -s 80 http://example.com/bugfix.patch

      - import a changeset from an hgweb server::

          hg import https://www.mercurial-scm.org/repo/hg/rev/5ca8c111e9aa

      - import all the patches in a Unix-style mbox::

          hg import incoming-patches.mbox

      - import patches from stdin::

          hg import -

      - attempt to exactly restore an exported changeset (not always
        possible)::

          hg import --exact proposed-fix.patch

      - use an external tool to apply a patch which is too fuzzy for
        the default internal tool::

          hg import --config ui.patch="patch --merge" fuzzy.patch

      - change the default fuzz from 2 to a less strict 7::

          hg import --config ui.fuzz=7 fuzz.patch

    Returns 0 on success, 1 on partial success (see --partial).
    """

    opts = pycompat.byteskwargs(opts)
    if not patch1:
        raise error.Abort(_('need at least one patch to import'))

    patches = (patch1,) + patches

    date = opts.get('date')
    if date:
        opts['date'] = util.parsedate(date)

    exact = opts.get('exact')
    update = not opts.get('bypass')
    if not update and opts.get('no_commit'):
        raise error.Abort(_('cannot use --no-commit with --bypass'))
    try:
        sim = float(opts.get('similarity') or 0)
    except ValueError:
        raise error.Abort(_('similarity must be a number'))
    if sim < 0 or sim > 100:
        raise error.Abort(_('similarity must be between 0 and 100'))
    if sim and not update:
        raise error.Abort(_('cannot use --similarity with --bypass'))
    if exact:
        if opts.get('edit'):
            raise error.Abort(_('cannot use --exact with --edit'))
        if opts.get('prefix'):
            raise error.Abort(_('cannot use --exact with --prefix'))

    base = opts["base"]
    wlock = dsguard = lock = tr = None
    msgs = []
    ret = 0

    try:
        wlock = repo.wlock()

        if update:
            cmdutil.checkunfinished(repo)
            if (exact or not opts.get('force')):
                cmdutil.bailifchanged(repo)

        if not opts.get('no_commit'):
            lock = repo.lock()
            tr = repo.transaction('import')
        else:
            dsguard = dirstateguard.dirstateguard(repo, 'import')
        parents = repo[None].parents()
        for patchurl in patches:
            if patchurl == '-':
                ui.status(_('applying patch from stdin\n'))
                patchfile = ui.fin
                patchurl = 'stdin'  # for error message
            else:
                patchurl = os.path.join(base, patchurl)
                ui.status(_('applying %s\n') % patchurl)
                patchfile = hg.openpath(ui, patchurl)

            haspatch = False
            for hunk in patch.split(patchfile):
                (msg, node, rej) = cmdutil.tryimportone(ui, repo, hunk,
                                                        parents, opts,
                                                        msgs, hg.clean)
                if msg:
                    haspatch = True
                    ui.note(msg + '\n')
                if update or exact:
                    parents = repo[None].parents()
                else:
                    parents = [repo[node]]
                if rej:
                    ui.write_err(_("patch applied partially\n"))
                    ui.write_err(_("(fix the .rej files and run "
                                   "`hg commit --amend`)\n"))
                    ret = 1
                    break

            if not haspatch:
                raise error.Abort(_('%s: no diffs found') % patchurl)

        if tr:
            tr.close()
        if msgs:
            repo.savecommitmessage('\n* * *\n'.join(msgs))
        if dsguard:
            dsguard.close()
        return ret
    finally:
        if tr:
            tr.release()
        release(lock, dsguard, wlock)

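# Illustrative sketch (not part of Mercurial): the -s/--similarity
# validation performed inline by import_ above, factored into a
# standalone helper. The name parse_similarity is hypothetical and for
# illustration only; it raises ValueError where import_ raises
# error.Abort, but applies the same rules: empty input means 0, the
# value must be numeric, and it must fall within [0, 100].

```python
def parse_similarity(value):
    """Return a similarity score as a float in [0, 100].

    Mirrors import_'s checks on opts.get('similarity'): an empty or
    missing value means 0, non-numeric input is rejected, and values
    outside 0..100 are rejected.
    """
    try:
        sim = float(value or 0)
    except ValueError:
        raise ValueError('similarity must be a number')
    if sim < 0 or sim > 100:
        raise ValueError('similarity must be between 0 and 100')
    return sim
```
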
@command('incoming|in',
    [('f', 'force', None,
      _('run even if remote repository is unrelated')),
     ('n', 'newest-first', None, _('show newest record first')),
     ('', 'bundle', '',
      _('file to store the bundles into'), _('FILE')),
     ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
     ('B', 'bookmarks', False, _("compare bookmarks")),
     ('b', 'branch', [],
      _('a specific branch you would like to pull'), _('BRANCH')),
    ] + logopts + remoteopts + subrepoopts,
    _('[-p] [-n] [-M] [-f] [-r REV]... [--bundle FILENAME] [SOURCE]'))
def incoming(ui, repo, source="default", **opts):
    """show new changesets found in source

    Show new changesets found in the specified path/URL or the default
    pull location. These are the changesets that would have been pulled
    had a pull been run at the time you issued this command.

    See pull for valid source format details.

    .. container:: verbose

      With -B/--bookmarks, the result of bookmark comparison between
      local and remote repositories is displayed. With -v/--verbose,
      status is also displayed for each bookmark like below::

        BM1               01234567890a added
        BM2               1234567890ab advanced
        BM3               234567890abc diverged
        BM4               34567890abcd changed

      The action taken locally when pulling depends on the
      status of each bookmark:

      :``added``: pull will create it
      :``advanced``: pull will update it
      :``diverged``: pull will create a divergent bookmark
      :``changed``: result depends on remote changesets

      From the point of view of pulling behavior, bookmarks
      existing only in the remote repository are treated as ``added``,
      even if they are in fact locally deleted.

    .. container:: verbose

      For a remote repository, using --bundle avoids downloading the
      changesets twice if the incoming is followed by a pull.

      Examples:

      - show incoming changes with patches and full description::

          hg incoming -vp

      - show incoming changes excluding merges, store a bundle::

          hg in -vpM --bundle incoming.hg
          hg pull incoming.hg

      - briefly list changes inside a bundle::

          hg in changes.hg -T "{desc|firstline}\\n"

    Returns 0 if there are incoming changes, 1 otherwise.
    """
    opts = pycompat.byteskwargs(opts)
    if opts.get('graph'):
        cmdutil.checkunsupportedgraphflags([], opts)
        def display(other, chlist, displayer):
            revdag = cmdutil.graphrevs(other, chlist, opts)
            cmdutil.displaygraph(ui, repo, revdag, displayer,
                                 graphmod.asciiedges)

        hg._incoming(display, lambda: 1, ui, repo, source, opts, buffered=True)
        return 0

    if opts.get('bundle') and opts.get('subrepos'):
        raise error.Abort(_('cannot combine --bundle and --subrepos'))

    if opts.get('bookmarks'):
        source, branches = hg.parseurl(ui.expandpath(source),
                                       opts.get('branch'))
        other = hg.peer(repo, opts, source)
        if 'bookmarks' not in other.listkeys('namespaces'):
            ui.warn(_("remote doesn't support bookmarks\n"))
            return 0
        ui.pager('incoming')
        ui.status(_('comparing with %s\n') % util.hidepassword(source))
        return bookmarks.incoming(ui, repo, other)

    repo._subtoppath = ui.expandpath(source)
    try:
        return hg.incoming(ui, repo, source, opts)
    finally:
        del repo._subtoppath


@command('^init', remoteopts, _('[-e CMD] [--remotecmd CMD] [DEST]'),
         norepo=True)
def init(ui, dest=".", **opts):
    """create a new repository in the given directory

    Initialize a new repository in the given directory. If the given
    directory does not exist, it will be created.

    If no directory is given, the current directory is used.

    It is possible to specify an ``ssh://`` URL as the destination.
    See :hg:`help urls` for more information.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    hg.peer(ui, opts, ui.expandpath(dest), create=True)

@command('locate',
    [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
     ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
     ('f', 'fullpath', None, _('print complete paths from the filesystem root')),
    ] + walkopts,
    _('[OPTION]... [PATTERN]...'))
def locate(ui, repo, *pats, **opts):
    """locate files matching specific patterns (DEPRECATED)

    Print files under Mercurial control in the working directory whose
    names match the given patterns.

    By default, this command searches all directories in the working
    directory. To search just the current directory and its
    subdirectories, use "--include .".

    If no patterns are given to match, this command prints the names
    of all files under Mercurial control in the working directory.

    If you want to feed the output of this command into the "xargs"
    command, use the -0 option to both this command and "xargs". This
    will avoid the problem of "xargs" treating single filenames that
    contain whitespace as multiple filenames.

    See :hg:`help files` for a more versatile command.

    Returns 0 if a match is found, 1 otherwise.
    """
    opts = pycompat.byteskwargs(opts)
    if opts.get('print0'):
        end = '\0'
    else:
        end = '\n'
    rev = scmutil.revsingle(repo, opts.get('rev'), None).node()

    ret = 1
    ctx = repo[rev]
    m = scmutil.match(ctx, pats, opts, default='relglob',
                      badfn=lambda x, y: False)

    ui.pager('locate')
    for abs in ctx.matches(m):
        if opts.get('fullpath'):
            ui.write(repo.wjoin(abs), end)
        else:
            ui.write(((pats and m.rel(abs)) or abs), end)
        ret = 0

    return ret

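# Illustrative sketch (not part of Mercurial): why locate's --print0
# option selects '\0' as the filename terminator above. Joining names
# with NUL instead of newline lets "xargs -0" consume filenames that
# contain whitespace safely. The helper name format_names is
# hypothetical, for illustration only.

```python
def format_names(names, print0=False):
    """Join filenames with the terminator locate would use."""
    # NUL for "xargs -0", newline for human-readable output
    end = '\0' if print0 else '\n'
    return ''.join(name + end for name in names)
```
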
@command('^log|history',
    [('f', 'follow', None,
     _('follow changeset history, or file history across copies and renames')),
     ('', 'follow-first', None,
      _('only follow the first parent of merge changesets (DEPRECATED)')),
     ('d', 'date', '', _('show revisions matching date spec'), _('DATE')),
     ('C', 'copies', None, _('show copied files')),
     ('k', 'keyword', [],
      _('do case-insensitive search for a given text'), _('TEXT')),
     ('r', 'rev', [], _('show the specified revision or revset'), _('REV')),
     ('', 'removed', None, _('include revisions where files were removed')),
     ('m', 'only-merges', None, _('show only merges (DEPRECATED)')),
     ('u', 'user', [], _('revisions committed by user'), _('USER')),
     ('', 'only-branch', [],
      _('show only changesets within the given named branch (DEPRECATED)'),
      _('BRANCH')),
     ('b', 'branch', [],
      _('show changesets within the given named branch'), _('BRANCH')),
     ('P', 'prune', [],
      _('do not display revision or any of its ancestors'), _('REV')),
    ] + logopts + walkopts,
    _('[OPTION]... [FILE]'),
    inferrepo=True)
def log(ui, repo, *pats, **opts):
    """show revision history of entire repository or files

    Print the revision history of the specified files or the entire
    project.

    If no revision range is specified, the default is ``tip:0`` unless
    --follow is set, in which case the working directory parent is
    used as the starting revision.

    File history is shown without following rename or copy history of
    files. Use -f/--follow with a filename to follow history across
    renames and copies. --follow without a filename will only show
    ancestors or descendants of the starting revision.

    By default this command prints revision number and changeset id,
    tags, non-trivial parents, user, date and time, and a summary for
    each commit. When the -v/--verbose switch is used, the list of
    changed files and full commit message are shown.

    With --graph the revisions are shown as an ASCII art DAG with the most
    recent changeset at the top.
    'o' is a changeset, '@' is a working directory parent, 'x' is obsolete,
    and '+' represents a fork where the changeset from the lines below is a
    parent of the 'o' merge on the same line.
    Paths in the DAG are represented with '|', '/' and so forth. ':' in place
    of a '|' indicates one or more revisions in a path are omitted.

    .. note::

       :hg:`log --patch` may generate unexpected diff output for merge
       changesets, as it will only compare the merge changeset against
       its first parent. Also, only files different from BOTH parents
       will appear in files:.

    .. note::

       For performance reasons, :hg:`log FILE` may omit duplicate changes
       made on branches and will not show removals or mode changes. To
       see all such changes, use the --removed switch.

    .. container:: verbose

      Some examples:

      - changesets with full descriptions and file lists::

          hg log -v

      - changesets ancestral to the working directory::

          hg log -f

      - last 10 commits on the current branch::

          hg log -l 10 -b .

      - changesets showing all modifications of a file, including removals::

          hg log --removed file.c

      - all changesets that touch a directory, with diffs, excluding merges::

          hg log -Mp lib/

      - all revision numbers that match a keyword::

          hg log -k bug --template "{rev}\\n"

      - the full hash identifier of the working directory parent::

          hg log -r . --template "{node}\\n"

      - list available log templates::

          hg log -T list

      - check if a given changeset is included in a tagged release::

          hg log -r "a21ccf and ancestor(1.9)"

      - find all changesets by some user in a date range::

          hg log -k alice -d "may 2008 to jul 2008"

      - summary of all changesets after the last tag::

          hg log -r "last(tagged())::" --template "{desc|firstline}\\n"

    See :hg:`help dates` for a list of formats valid for -d/--date.

    See :hg:`help revisions` for more about specifying and ordering
    revisions.

    See :hg:`help templates` for more about pre-packaged styles and
3308 specifying custom templates.
3308 specifying custom templates.
3309
3309
3310 Returns 0 on success.
3310 Returns 0 on success.
3311
3311
3312 """
3312 """
    opts = pycompat.byteskwargs(opts)
    if opts.get('follow') and opts.get('rev'):
        opts['rev'] = [revsetlang.formatspec('reverse(::%lr)', opts.get('rev'))]
        del opts['follow']

    if opts.get('graph'):
        return cmdutil.graphlog(ui, repo, pats, opts)

    revs, expr, filematcher = cmdutil.getlogrevs(repo, pats, opts)
    limit = cmdutil.loglimit(opts)
    count = 0

    getrenamed = None
    if opts.get('copies'):
        endrev = None
        if opts.get('rev'):
            endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
        getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)

    ui.pager('log')
    displayer = cmdutil.show_changeset(ui, repo, opts, buffered=True)
    for rev in revs:
        if count == limit:
            break
        ctx = repo[rev]
        copies = None
        if getrenamed is not None and rev:
            copies = []
            for fn in ctx.files():
                rename = getrenamed(fn, rev)
                if rename:
                    copies.append((fn, rename[0]))
        if filematcher:
            revmatchfn = filematcher(ctx.rev())
        else:
            revmatchfn = None
        displayer.show(ctx, copies=copies, matchfn=revmatchfn)
        if displayer.flush(ctx):
            count += 1

    displayer.close()

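The display loop above advances `count` only when `displayer.flush(ctx)` reports that a changeset was actually shown, so `--limit` bounds displayed changesets rather than iterated revisions. A minimal standalone sketch of that pattern (toy names, not Mercurial code):

```python
def show_limited(items, limit, shown):
    """Collect up to `limit` items, counting only those actually shown."""
    displayed = []
    for item in items:
        if len(displayed) == limit:
            break
        if shown(item):  # stands in for displayer.flush() returning True
            displayed.append(item)
    return displayed

# Only even numbers are "shown"; the limit counts shown items, not iterations.
print(show_limited(range(10), 3, lambda x: x % 2 == 0))  # [0, 2, 4]
```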
@command('manifest',
    [('r', 'rev', '', _('revision to display'), _('REV')),
     ('', 'all', False, _("list files from all revisions"))]
    + formatteropts,
    _('[-r REV]'))
def manifest(ui, repo, node=None, rev=None, **opts):
    """output the current or given revision of the project manifest

    Print a list of version controlled files for the given revision.
    If no revision is given, the first parent of the working directory
    is used, or the null revision if no revision is checked out.

    With -v, print file permissions, symlink and executable bits.
    With --debug, print file revision hashes.

    If option --all is specified, the list of all files from all revisions
    is printed. This includes deleted and renamed files.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    fm = ui.formatter('manifest', opts)

    if opts.get('all'):
        if rev or node:
            raise error.Abort(_("can't specify a revision with --all"))

        res = []
        prefix = "data/"
        suffix = ".i"
        plen = len(prefix)
        slen = len(suffix)
        with repo.lock():
            for fn, b, size in repo.store.datafiles():
                if size != 0 and fn[-slen:] == suffix and fn[:plen] == prefix:
                    res.append(fn[plen:-slen])
        ui.pager('manifest')
        for f in res:
            fm.startitem()
            fm.write("path", '%s\n', f)
        fm.end()
        return

    if rev and node:
        raise error.Abort(_("please specify just one revision"))

    if not node:
        node = rev

    char = {'l': '@', 'x': '*', '': ''}
    mode = {'l': '644', 'x': '755', '': '644'}
    ctx = scmutil.revsingle(repo, node)
    mf = ctx.manifest()
    ui.pager('manifest')
    for f in ctx:
        fm.startitem()
        fl = ctx[f].flags()
        fm.condwrite(ui.debugflag, 'hash', '%s ', hex(mf[f]))
        fm.condwrite(ui.verbose, 'mode type', '%s %1s ', mode[fl], char[fl])
        fm.write('path', '%s\n', f)
    fm.end()

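The `--all` branch above recovers tracked file names from store paths by slicing off the `data/` prefix and the `.i` revlog-index suffix. A standalone sketch of that slicing (illustrative only; real store paths also undergo fncache encoding, which is ignored here):

```python
prefix, suffix = 'data/', '.i'
plen, slen = len(prefix), len(suffix)

def trim(fn):
    """Return the tracked file name for a revlog index path, else None."""
    if fn.startswith(prefix) and fn.endswith(suffix):
        return fn[plen:-slen]
    return None

print(trim('data/foo.c.i'))   # foo.c
print(trim('00changelog.i'))  # None (not under data/)
```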
@command('^merge',
    [('f', 'force', None,
      _('force a merge including outstanding changes (DEPRECATED)')),
     ('r', 'rev', '', _('revision to merge'), _('REV')),
     ('P', 'preview', None,
      _('review revisions to merge (no merge is performed)'))
     ] + mergetoolopts,
    _('[-P] [[-r] REV]'))
def merge(ui, repo, node=None, **opts):
    """merge another revision into working directory

    The current working directory is updated with all changes made in
    the requested revision since the last common predecessor revision.

    Files that changed between either parent are marked as changed for
    the next commit and a commit must be performed before any further
    updates to the repository are allowed. The next commit will have
    two parents.

    ``--tool`` can be used to specify the merge tool used for file
    merges. It overrides the HGMERGE environment variable and your
    configuration files. See :hg:`help merge-tools` for options.

    If no revision is specified, the working directory's parent is a
    head revision, and the current branch contains exactly one other
    head, the other head is merged with by default. Otherwise, an
    explicit revision with which to merge must be provided.

    See :hg:`help resolve` for information on handling file conflicts.

    To undo an uncommitted merge, use :hg:`update --clean .` which
    will check out a clean copy of the original merge parent, losing
    all changes.

    Returns 0 on success, 1 if there are unresolved files.
    """

    opts = pycompat.byteskwargs(opts)
    if opts.get('rev') and node:
        raise error.Abort(_("please specify just one revision"))
    if not node:
        node = opts.get('rev')

    if node:
        node = scmutil.revsingle(repo, node).node()

    if not node:
        node = repo[destutil.destmerge(repo)].node()

    if opts.get('preview'):
        # find nodes that are ancestors of p2 but not of p1
        p1 = repo.lookup('.')
        p2 = repo.lookup(node)
        nodes = repo.changelog.findmissing(common=[p1], heads=[p2])

        displayer = cmdutil.show_changeset(ui, repo, opts)
        for node in nodes:
            displayer.show(repo[node])
        displayer.close()
        return 0

    try:
        # ui.forcemerge is an internal variable, do not document
        repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''), 'merge')
        force = opts.get('force')
        labels = ['working copy', 'merge rev']
        return hg.merge(repo, node, force=force, mergeforce=force,
                        labels=labels)
    finally:
        ui.setconfig('ui', 'forcemerge', '', 'merge')

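The `--preview` branch computes the nodes that are ancestors of p2 but not of p1, i.e. what the merge would pull in. A toy-DAG sketch of that set difference (hypothetical graph and helper names, not Mercurial's `findmissing`):

```python
# Toy parent map: a is the root; d descends from c, which descends from a.
parents = {'a': [], 'b': ['a'], 'c': ['a'], 'd': ['c']}

def ancestors(node):
    """Return node plus all its ancestors, via an explicit stack."""
    seen = set()
    stack = [node]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(parents[n])
    return seen

# Merging d into b would preview everything reachable from d but not from b.
print(sorted(ancestors('d') - ancestors('b')))  # ['c', 'd']
```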
@command('outgoing|out',
    [('f', 'force', None, _('run even when the destination is unrelated')),
     ('r', 'rev', [],
      _('a changeset intended to be included in the destination'), _('REV')),
     ('n', 'newest-first', None, _('show newest record first')),
     ('B', 'bookmarks', False, _('compare bookmarks')),
     ('b', 'branch', [], _('a specific branch you would like to push'),
      _('BRANCH')),
    ] + logopts + remoteopts + subrepoopts,
    _('[-M] [-p] [-n] [-f] [-r REV]... [DEST]'))
def outgoing(ui, repo, dest=None, **opts):
    """show changesets not found in the destination

    Show changesets not found in the specified destination repository
    or the default push location. These are the changesets that would
    be pushed if a push was requested.

    See pull for details of valid destination formats.

    .. container:: verbose

      With -B/--bookmarks, the result of bookmark comparison between
      local and remote repositories is displayed. With -v/--verbose,
      status is also displayed for each bookmark like below::

        BM1               01234567890a added
        BM2                            deleted
        BM3               234567890abc advanced
        BM4               34567890abcd diverged
        BM5               4567890abcde changed

      The action taken when pushing depends on the
      status of each bookmark:

      :``added``: push with ``-B`` will create it
      :``deleted``: push with ``-B`` will delete it
      :``advanced``: push will update it
      :``diverged``: push with ``-B`` will update it
      :``changed``: push with ``-B`` will update it

      From the point of view of pushing behavior, bookmarks
      existing only in the remote repository are treated as
      ``deleted``, even if they were in fact added remotely.

    Returns 0 if there are outgoing changes, 1 otherwise.
    """
    opts = pycompat.byteskwargs(opts)
    if opts.get('graph'):
        cmdutil.checkunsupportedgraphflags([], opts)
        o, other = hg._outgoing(ui, repo, dest, opts)
        if not o:
            cmdutil.outgoinghooks(ui, repo, other, opts, o)
            return

        revdag = cmdutil.graphrevs(repo, o, opts)
        ui.pager('outgoing')
        displayer = cmdutil.show_changeset(ui, repo, opts, buffered=True)
        cmdutil.displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges)
        cmdutil.outgoinghooks(ui, repo, other, opts, o)
        return 0

    if opts.get('bookmarks'):
        dest = ui.expandpath(dest or 'default-push', dest or 'default')
        dest, branches = hg.parseurl(dest, opts.get('branch'))
        other = hg.peer(repo, opts, dest)
        if 'bookmarks' not in other.listkeys('namespaces'):
            ui.warn(_("remote doesn't support bookmarks\n"))
            return 0
        ui.status(_('comparing with %s\n') % util.hidepassword(dest))
        ui.pager('outgoing')
        return bookmarks.outgoing(ui, repo, other)

    repo._subtoppath = ui.expandpath(dest or 'default-push', dest or 'default')
    try:
        return hg.outgoing(ui, repo, dest, opts)
    finally:
        del repo._subtoppath

@command('parents',
    [('r', 'rev', '', _('show parents of the specified revision'), _('REV')),
    ] + templateopts,
    _('[-r REV] [FILE]'),
    inferrepo=True)
def parents(ui, repo, file_=None, **opts):
    """show the parents of the working directory or revision (DEPRECATED)

    Print the working directory's parent revisions. If a revision is
    given via -r/--rev, the parent of that revision will be printed.
    If a file argument is given, the revision in which the file was
    last changed (before the working directory revision or the
    argument to --rev if given) is printed.

    This command is equivalent to::

        hg log -r "p1()+p2()" or
        hg log -r "p1(REV)+p2(REV)" or
        hg log -r "max(::p1() and file(FILE))+max(::p2() and file(FILE))" or
        hg log -r "max(::p1(REV) and file(FILE))+max(::p2(REV) and file(FILE))"

    See :hg:`summary` and :hg:`help revsets` for related information.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    ctx = scmutil.revsingle(repo, opts.get('rev'), None)

    if file_:
        m = scmutil.match(ctx, (file_,), opts)
        if m.anypats() or len(m.files()) != 1:
            raise error.Abort(_('can only specify an explicit filename'))
        file_ = m.files()[0]
        filenodes = []
        for cp in ctx.parents():
            if not cp:
                continue
            try:
                filenodes.append(cp.filenode(file_))
            except error.LookupError:
                pass
        if not filenodes:
            raise error.Abort(_("'%s' not found in manifest!") % file_)
        p = []
        for fn in filenodes:
            fctx = repo.filectx(file_, fileid=fn)
            p.append(fctx.node())
    else:
        p = [cp.node() for cp in ctx.parents()]

    displayer = cmdutil.show_changeset(ui, repo, opts)
    for n in p:
        if n != nullid:
            displayer.show(repo[n])
    displayer.close()

@command('paths', formatteropts, _('[NAME]'), optionalrepo=True)
def paths(ui, repo, search=None, **opts):
    """show aliases for remote repositories

    Show definition of symbolic path name NAME. If no name is given,
    show definition of all available names.

    Option -q/--quiet suppresses all output when searching for NAME
    and shows only the path names when listing all definitions.

    Path names are defined in the [paths] section of your
    configuration file and in ``/etc/mercurial/hgrc``. If run inside a
    repository, ``.hg/hgrc`` is used, too.

    The path names ``default`` and ``default-push`` have a special
    meaning. When performing a push or pull operation, they are used
    as fallbacks if no location is specified on the command-line.
    When ``default-push`` is set, it will be used for push and
    ``default`` will be used for pull; otherwise ``default`` is used
    as the fallback for both. When cloning a repository, the clone
    source is written as ``default`` in ``.hg/hgrc``.

    .. note::

       ``default`` and ``default-push`` apply to all inbound (e.g.
       :hg:`incoming`) and outbound (e.g. :hg:`outgoing`, :hg:`email`
       and :hg:`bundle`) operations.

    See :hg:`help urls` for more information.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    ui.pager('paths')
    if search:
        pathitems = [(name, path) for name, path in ui.paths.iteritems()
                     if name == search]
    else:
        pathitems = sorted(ui.paths.iteritems())

    fm = ui.formatter('paths', opts)
    if fm.isplain():
        hidepassword = util.hidepassword
    else:
        hidepassword = str
    if ui.quiet:
        namefmt = '%s\n'
    else:
        namefmt = '%s = '
    showsubopts = not search and not ui.quiet

    for name, path in pathitems:
        fm.startitem()
        fm.condwrite(not search, 'name', namefmt, name)
        fm.condwrite(not ui.quiet, 'url', '%s\n', hidepassword(path.rawloc))
        for subopt, value in sorted(path.suboptions.items()):
            assert subopt not in ('name', 'url')
            if showsubopts:
                fm.plain('%s:%s = ' % (name, subopt))
            fm.condwrite(showsubopts, subopt, '%s\n', value)

    fm.end()

    if search and not pathitems:
        if not ui.quiet:
            ui.warn(_("not found!\n"))
        return 1
    else:
        return 0

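`fm.condwrite` above lets one loop serve the quiet, default, and search modes by gating each field on a boolean. A minimal sketch of that gating (assumed names; Mercurial's formatter API is richer than this):

```python
def condwrite(cond, fmt, value, out):
    """Append fmt % value to out only when cond holds."""
    if cond:
        out.append(fmt % value)

out = []
quiet, search = False, None
# In -q mode the name line would end with '\n' and the URL line be skipped.
condwrite(not search, '%s = ', 'default', out)
condwrite(not quiet, '%s\n', 'https://example.com/repo', out)
print(''.join(out))  # default = https://example.com/repo
```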
@command('phase',
    [('p', 'public', False, _('set changeset phase to public')),
     ('d', 'draft', False, _('set changeset phase to draft')),
     ('s', 'secret', False, _('set changeset phase to secret')),
     ('f', 'force', False, _('allow to move boundary backward')),
     ('r', 'rev', [], _('target revision'), _('REV')),
    ],
    _('[-p|-d|-s] [-f] [-r] [REV...]'))
def phase(ui, repo, *revs, **opts):
    """set or show the current phase name

    With no argument, show the phase name of the current revision(s).

    With one of -p/--public, -d/--draft or -s/--secret, change the
    phase value of the specified revisions.

    Unless -f/--force is specified, :hg:`phase` won't move changesets from a
    lower phase to a higher phase. Phases are ordered as follows::

        public < draft < secret

    Returns 0 on success, 1 if some phases could not be changed.

    (For more information about the phases concept, see :hg:`help phases`.)
    """
    opts = pycompat.byteskwargs(opts)
    # search for a unique phase argument
    targetphase = None
    for idx, name in enumerate(phases.phasenames):
        if opts[name]:
            if targetphase is not None:
                raise error.Abort(_('only one phase can be specified'))
            targetphase = idx

    # look for specified revision
    revs = list(revs)
    revs.extend(opts['rev'])
    if not revs:
        # display both parents as the second parent phase can influence
        # the phase of a merge commit
        revs = [c.rev() for c in repo[None].parents()]

    revs = scmutil.revrange(repo, revs)

    lock = None
    ret = 0
    if targetphase is None:
        # display
        for r in revs:
            ctx = repo[r]
            ui.write('%i: %s\n' % (ctx.rev(), ctx.phasestr()))
    else:
        tr = None
        lock = repo.lock()
        try:
            tr = repo.transaction("phase")
            # set phase
            if not revs:
                raise error.Abort(_('empty revision set'))
            nodes = [repo[r].node() for r in revs]
            # moving revisions from public to draft may hide them;
            # we have to check the result on an unfiltered repository
            unfi = repo.unfiltered()
            getphase = unfi._phasecache.phase
            olddata = [getphase(unfi, r) for r in unfi]
            phases.advanceboundary(repo, tr, targetphase, nodes)
            if opts['force']:
                phases.retractboundary(repo, tr, targetphase, nodes)
            tr.close()
        finally:
            if tr is not None:
                tr.release()
            lock.release()
        getphase = unfi._phasecache.phase
        newdata = [getphase(unfi, r) for r in unfi]
        changes = sum(newdata[r] != olddata[r] for r in unfi)
        cl = unfi.changelog
        rejected = [n for n in nodes
                    if newdata[cl.rev(n)] < targetphase]
3773 if rejected:
3773 if rejected:
3774 ui.warn(_('cannot move %i changesets to a higher '
3774 ui.warn(_('cannot move %i changesets to a higher '
3775 'phase, use --force\n') % len(rejected))
3775 'phase, use --force\n') % len(rejected))
3776 ret = 1
3776 ret = 1
3777 if changes:
3777 if changes:
3778 msg = _('phase changed for %i changesets\n') % changes
3778 msg = _('phase changed for %i changesets\n') % changes
3779 if ret:
3779 if ret:
3780 ui.status(msg)
3780 ui.status(msg)
3781 else:
3781 else:
3782 ui.note(msg)
3782 ui.note(msg)
3783 else:
3783 else:
3784 ui.warn(_('no phases changed\n'))
3784 ui.warn(_('no phases changed\n'))
3785 return ret
3785 return ret
3786
3786
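The non-forced path above only ever calls `phases.advanceboundary`, which can move changesets toward ``public`` but never toward ``secret``; `--force` adds a `retractboundary` call for the opposite direction, and the ``rejected`` computation catches nodes the advance could not move. A minimal pure-Python sketch of that asymmetry (the dict-based helpers here are illustrative, not Mercurial's API):

```python
# Illustrative sketch, not Mercurial's API: phases as ordered integers.
PHASES = ('public', 'draft', 'secret')  # index doubles as the ordering

def advanceboundary(phase_of, targetphase, nodes):
    """Move nodes toward 'public' only (lower index), like the non-forced path."""
    for n in nodes:
        if phase_of[n] > targetphase:
            phase_of[n] = targetphase

def retractboundary(phase_of, targetphase, nodes):
    """Force nodes toward 'secret' (higher index), like --force allows."""
    for n in nodes:
        if phase_of[n] < targetphase:
            phase_of[n] = targetphase

phase_of = {'n1': PHASES.index('draft')}
advanceboundary(phase_of, PHASES.index('secret'), ['n1'])
# Without --force a draft changeset cannot reach secret, so it is "rejected":
rejected = [n for n in phase_of if phase_of[n] < PHASES.index('secret')]
retractboundary(phase_of, PHASES.index('secret'), ['n1'])  # the --force case
```

This mirrors why `hg phase --secret -r REV` on a draft changeset fails without `--force` while `hg phase --public` succeeds.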
def postincoming(ui, repo, modheads, optupdate, checkout, brev):
    """Run after a changegroup has been added via pull/unbundle

    It takes the following arguments:

    :modheads: change in the number of heads by pull/unbundle
    :optupdate: whether the working directory should be updated
    :checkout: update destination revision (or None to default destination)
    :brev: a name, which might be a bookmark to be activated after updating
    """
    if modheads == 0:
        return
    if optupdate:
        try:
            return hg.updatetotally(ui, repo, checkout, brev)
        except error.UpdateAbort as inst:
            msg = _("not updating: %s") % str(inst)
            hint = inst.hint
            raise error.UpdateAbort(msg, hint=hint)
    if modheads > 1:
        currentbranchheads = len(repo.branchheads())
        if currentbranchheads == modheads:
            ui.status(_("(run 'hg heads' to see heads, 'hg merge' to merge)\n"))
        elif currentbranchheads > 1:
            ui.status(_("(run 'hg heads .' to see heads, 'hg merge' to "
                        "merge)\n"))
        else:
            ui.status(_("(run 'hg heads' to see heads)\n"))
    else:
        ui.status(_("(run 'hg update' to get a working copy)\n"))

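The hint messages above depend on how the pulled heads relate to the current branch heads. A condensed sketch of that selection as a pure function (the helper name is mine, not Mercurial's):

```python
# Condensed sketch of postincoming's hint selection; the helper name is
# illustrative, not part of Mercurial's API.
def incoming_hint(modheads, currentbranchheads):
    """Return the follow-up suggestion printed after changesets arrive."""
    if modheads == 0:
        return None  # nothing was added; postincoming returns early
    if modheads == 1:
        return "run 'hg update' to get a working copy"
    # more than one head arrived
    if currentbranchheads == modheads:
        return "run 'hg heads' to see heads, 'hg merge' to merge"
    if currentbranchheads > 1:
        return "run 'hg heads .' to see heads, 'hg merge' to merge"
    return "run 'hg heads' to see heads"
```

For example, a pull that adds two heads to a branch that now has exactly two heads suggests a plain `hg heads` plus `hg merge`.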
@command('^pull',
    [('u', 'update', None,
     _('update to new branch head if changesets were pulled')),
    ('f', 'force', None, _('run even when remote repository is unrelated')),
    ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
    ('B', 'bookmark', [], _("bookmark to pull"), _('BOOKMARK')),
    ('b', 'branch', [], _('a specific branch you would like to pull'),
     _('BRANCH')),
    ] + remoteopts,
    _('[-u] [-f] [-r REV]... [-e CMD] [--remotecmd CMD] [SOURCE]'))
def pull(ui, repo, source="default", **opts):
    """pull changes from the specified source

    Pull changes from a remote repository to a local one.

    This finds all changes from the repository at the specified path
    or URL and adds them to a local repository (the current one unless
    -R is specified). By default, this does not update the copy of the
    project in the working directory.

    Use :hg:`incoming` if you want to see what would have been added
    by a pull at the time you issued this command. If you then decide
    to add those changes to the repository, you should use :hg:`pull
    -r X` where ``X`` is the last changeset listed by :hg:`incoming`.

    If SOURCE is omitted, the 'default' path will be used.
    See :hg:`help urls` for more information.

    Specifying bookmark as ``.`` is equivalent to specifying the active
    bookmark's name.

    Returns 0 on success, 1 if an update had unresolved files.
    """

    opts = pycompat.byteskwargs(opts)
    if ui.configbool('commands', 'update.requiredest') and opts.get('update'):
        msg = _('update destination required by configuration')
        hint = _('use hg pull followed by hg update DEST')
        raise error.Abort(msg, hint=hint)

    source, branches = hg.parseurl(ui.expandpath(source), opts.get('branch'))
    ui.status(_('pulling from %s\n') % util.hidepassword(source))
    other = hg.peer(repo, opts, source)
    try:
        revs, checkout = hg.addbranchrevs(repo, other, branches,
                                          opts.get('rev'))


        pullopargs = {}
        if opts.get('bookmark'):
            if not revs:
                revs = []
            # The list of bookmarks used here is not the one used to actually
            # update the bookmark name. This can result in the revision pulled
            # not ending up with the name of the bookmark because of a race
            # condition on the server. (See issue 4689 for details)
            remotebookmarks = other.listkeys('bookmarks')
            pullopargs['remotebookmarks'] = remotebookmarks
            for b in opts['bookmark']:
                b = repo._bookmarks.expandname(b)
                if b not in remotebookmarks:
                    raise error.Abort(_('remote bookmark %s not found!') % b)
                revs.append(remotebookmarks[b])

        if revs:
            try:
                # When 'rev' is a bookmark name, we cannot guarantee that it
                # will be updated with that name because of a race condition
                # server side. (See issue 4689 for details)
                oldrevs = revs
                revs = []  # actually, nodes
                for r in oldrevs:
                    node = other.lookup(r)
                    revs.append(node)
                    if r == checkout:
                        checkout = node
            except error.CapabilityError:
                err = _("other repository doesn't support revision lookup, "
                        "so a rev cannot be specified.")
                raise error.Abort(err)

        pullopargs.update(opts.get('opargs', {}))
        modheads = exchange.pull(repo, other, heads=revs,
                                 force=opts.get('force'),
                                 bookmarks=opts.get('bookmark', ()),
                                 opargs=pullopargs).cgresult

        # brev is a name, which might be a bookmark to be activated at
        # the end of the update. In other words, it is an explicit
        # destination of the update
        brev = None

        if checkout:
            checkout = str(repo.changelog.rev(checkout))

            # order below depends on implementation of
            # hg.addbranchrevs(). opts['bookmark'] is ignored,
            # because 'checkout' is determined without it.
            if opts.get('rev'):
                brev = opts['rev'][0]
            elif opts.get('branch'):
                brev = opts['branch'][0]
            else:
                brev = branches[0]
        repo._subtoppath = source
        try:
            ret = postincoming(ui, repo, modheads, opts.get('update'),
                               checkout, brev)

        finally:
            del repo._subtoppath

    finally:
        other.close()
    return ret

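The tail of `pull` picks the update destination name ``brev`` only when there is a ``checkout`` node, with precedence ``--rev``, then ``--branch``, then the branch parsed from the source URL. A small sketch of that precedence (the function name is mine, not Mercurial's):

```python
# Sketch of brev selection at the end of pull; the helper name is
# illustrative, not part of Mercurial's API.
def choose_brev(checkout, revopts, branchopts, urlbranches):
    """Pick the name (possibly a bookmark) activated after the update."""
    if not checkout:
        return None
    if revopts:          # -r/--rev wins
        return revopts[0]
    if branchopts:       # then -b/--branch
        return branchopts[0]
    return urlbranches[0]  # then the branch from the source URL, if any
```

So `hg pull -u -r stable -b default src#foo` would activate ``stable``, because the first `--rev` value shadows both the `--branch` value and the URL fragment.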
@command('^push',
    [('f', 'force', None, _('force push')),
    ('r', 'rev', [],
     _('a changeset intended to be included in the destination'),
     _('REV')),
    ('B', 'bookmark', [], _("bookmark to push"), _('BOOKMARK')),
    ('b', 'branch', [],
     _('a specific branch you would like to push'), _('BRANCH')),
    ('', 'new-branch', False, _('allow pushing a new branch')),
    ] + remoteopts,
    _('[-f] [-r REV]... [-e CMD] [--remotecmd CMD] [DEST]'))
def push(ui, repo, dest=None, **opts):
    """push changes to the specified destination

    Push changesets from the local repository to the specified
    destination.

    This operation is symmetrical to pull: it is identical to a pull
    in the destination repository from the current one.

    By default, push will not allow creation of new heads at the
    destination, since multiple heads would make it unclear which head
    to use. In this situation, it is recommended to pull and merge
    before pushing.

    Use --new-branch if you want to allow push to create a new named
    branch that is not present at the destination. This allows you to
    only create a new branch without forcing other changes.

    .. note::

       Extra care should be taken with the -f/--force option,
       which will push all new heads on all branches, an action which will
       almost always cause confusion for collaborators.

    If -r/--rev is used, the specified revision and all its ancestors
    will be pushed to the remote repository.

    If -B/--bookmark is used, the specified bookmarked revision, its
    ancestors, and the bookmark will be pushed to the remote
    repository. Specifying ``.`` is equivalent to specifying the active
    bookmark's name.

    Please see :hg:`help urls` for important details about ``ssh://``
    URLs. If DESTINATION is omitted, a default path will be used.

    Returns 0 if push was successful, 1 if nothing to push.
    """

    opts = pycompat.byteskwargs(opts)
    if opts.get('bookmark'):
        ui.setconfig('bookmarks', 'pushing', opts['bookmark'], 'push')
        for b in opts['bookmark']:
            # translate -B options to -r so changesets get pushed
            b = repo._bookmarks.expandname(b)
            if b in repo._bookmarks:
                opts.setdefault('rev', []).append(b)
            else:
                # if we try to push a deleted bookmark, translate it to null
                # this lets simultaneous -r, -b options continue working
                opts.setdefault('rev', []).append("null")

    path = ui.paths.getpath(dest, default=('default-push', 'default'))
    if not path:
        raise error.Abort(_('default repository not configured!'),
                          hint=_("see 'hg help config.paths'"))
    dest = path.pushloc or path.loc
    branches = (path.branch, opts.get('branch') or [])
    ui.status(_('pushing to %s\n') % util.hidepassword(dest))
    revs, checkout = hg.addbranchrevs(repo, repo, branches, opts.get('rev'))
    other = hg.peer(repo, opts, dest)

    if revs:
        revs = [repo.lookup(r) for r in scmutil.revrange(repo, revs)]
        if not revs:
            raise error.Abort(_("specified revisions evaluate to an empty set"),
                              hint=_("use different revision arguments"))
    elif path.pushrev:
        # It doesn't make any sense to specify ancestor revisions. So limit
        # to DAG heads to make discovery simpler.
        expr = revsetlang.formatspec('heads(%r)', path.pushrev)
        revs = scmutil.revrange(repo, [expr])
        revs = [repo[rev].node() for rev in revs]
        if not revs:
            raise error.Abort(_('default push revset for path evaluates to an '
                                'empty set'))

    repo._subtoppath = dest
    try:
        # push subrepos depth-first for coherent ordering
        c = repo['']
        subs = c.substate  # only repos that are committed
        for s in sorted(subs):
            result = c.sub(s).push(opts)
            if result == 0:
                return not result
    finally:
        del repo._subtoppath
    pushop = exchange.push(repo, other, opts.get('force'), revs=revs,
                           newbranch=opts.get('new_branch'),
                           bookmarks=opts.get('bookmark', ()),
                           opargs=opts.get('opargs'))

    result = not pushop.cgresult

    if pushop.bkresult is not None:
        if pushop.bkresult == 2:
            result = 2
        elif not result and pushop.bkresult:
            result = 2

    return result

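The last few lines of `push` fold the changegroup result and the bookmark result into one exit code. A sketch of that merging as a pure function, using integers in place of the truthy/falsy values (the function name is mine; the exact ``bkresult`` encoding comes from `exchange.push`):

```python
# Sketch of how push combines changegroup and bookmark results; the helper
# name is illustrative, not part of Mercurial's API.
def combined_result(cgresult, bkresult):
    """Mirror the result merging at the end of push.

    cgresult is truthy when changesets were pushed; bkresult is None when
    no bookmark push was attempted.
    """
    result = 0 if cgresult else 1  # 'not pushop.cgresult' in the original
    if bkresult is not None:
        if bkresult == 2:
            result = 2
        elif result == 0 and bkresult:
            result = 2
    return result
```

Note that a bookmark problem (``bkresult == 2``) forces exit code 2 even when the changeset push itself succeeded.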
@command('recover', [])
def recover(ui, repo):
    """roll back an interrupted transaction

    Recover from an interrupted commit or pull.

    This command tries to fix the repository status after an
    interrupted operation. It should only be necessary when Mercurial
    suggests it.

    Returns 0 if successful, 1 if nothing to recover or verify fails.
    """
    if repo.recover():
        return hg.verify(repo)
    return 1

@command('^remove|rm',
    [('A', 'after', None, _('record delete for missing files')),
    ('f', 'force', None,
     _('forget added files, delete modified files')),
    ] + subrepoopts + walkopts,
    _('[OPTION]... FILE...'),
    inferrepo=True)
def remove(ui, repo, *pats, **opts):
    """remove the specified files on the next commit

    Schedule the indicated files for removal from the current branch.

    This command schedules the files to be removed at the next commit.
    To undo a remove before that, see :hg:`revert`. To undo added
    files, see :hg:`forget`.

    .. container:: verbose

      -A/--after can be used to remove only files that have already
      been deleted, -f/--force can be used to force deletion, and -Af
      can be used to remove files from the next revision without
      deleting them from the working directory.

      The following table details the behavior of remove for different
      file states (columns) and option combinations (rows). The file
      states are Added [A], Clean [C], Modified [M] and Missing [!]
      (as reported by :hg:`status`). The actions are Warn, Remove
      (from branch) and Delete (from disk):

      ========= == == == ==
      opt/state A  C  M  !
      ========= == == == ==
      none      W  RD W  R
      -f        R  RD RD R
      -A        W  W  W  R
      -Af       R  R  R  R
      ========= == == == ==

      .. note::

         :hg:`remove` never deletes files in Added [A] state from the
         working directory, not even if ``--force`` is specified.

    Returns 0 on success, 1 if any warnings encountered.
    """

    opts = pycompat.byteskwargs(opts)
    after, force = opts.get('after'), opts.get('force')
    if not pats and not after:
        raise error.Abort(_('no files specified'))

    m = scmutil.match(repo[None], pats, opts)
    subrepos = opts.get('subrepos')
    return cmdutil.remove(ui, repo, m, "", after, force, subrepos)

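The option/state table in the docstring above can be restated as a plain lookup, which makes it easy to check a combination programmatically. The data below is copied directly from the table; the constant name is mine:

```python
# The remove behavior table restated as a lookup (data taken verbatim from
# the docstring table above; the constant name is illustrative).
# Keys: (after, force) option pair.  Values: action per file state, where
# W = warn, R = remove from branch, RD = remove from branch and delete
# from disk; states are A(dded), C(lean), M(odified), !(missing).
REMOVE_ACTIONS = {
    (False, False): {'A': 'W', 'C': 'RD', 'M': 'W', '!': 'R'},   # no options
    (False, True):  {'A': 'R', 'C': 'RD', 'M': 'RD', '!': 'R'},  # -f
    (True, False):  {'A': 'W', 'C': 'W', 'M': 'W', '!': 'R'},    # -A
    (True, True):   {'A': 'R', 'C': 'R', 'M': 'R', '!': 'R'},    # -Af
}
```

For example, `REMOVE_ACTIONS[(False, True)]['M']` is ``'RD'``: with `-f`, a modified file is removed from the branch and deleted from disk.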
@command('rename|move|mv',
    [('A', 'after', None, _('record a rename that has already occurred')),
    ('f', 'force', None, _('forcibly copy over an existing managed file')),
    ] + walkopts + dryrunopts,
    _('[OPTION]... SOURCE... DEST'))
def rename(ui, repo, *pats, **opts):
    """rename files; equivalent of copy + remove

    Mark dest as copies of sources; mark sources for deletion. If dest
    is a directory, copies are put in that directory. If dest is a
    file, there can only be one source.

    By default, this command copies the contents of files as they
    exist in the working directory. If invoked with -A/--after, the
    operation is recorded, but no copying is performed.

    This command takes effect at the next commit. To undo a rename
    before that, see :hg:`revert`.

    Returns 0 on success, 1 if errors are encountered.
    """
    opts = pycompat.byteskwargs(opts)
    with repo.wlock(False):
        return cmdutil.copy(ui, repo, pats, opts, rename=True)

@command('resolve',
    [('a', 'all', None, _('select all unresolved files')),
    ('l', 'list', None, _('list state of files needing merge')),
    ('m', 'mark', None, _('mark files as resolved')),
    ('u', 'unmark', None, _('mark files as unresolved')),
    ('n', 'no-status', None, _('hide status prefix'))]
    + mergetoolopts + walkopts + formatteropts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def resolve(ui, repo, *pats, **opts):
    """redo merges or set/view the merge status of files

    Merges with unresolved conflicts are often the result of
    non-interactive merging using the ``internal:merge`` configuration
    setting, or a command-line merge tool like ``diff3``. The resolve
    command is used to manage the files involved in a merge, after
    :hg:`merge` has been run, and before :hg:`commit` is run (i.e. the
    working directory must have two parents). See :hg:`help
    merge-tools` for information on configuring merge tools.

    The resolve command can be used in the following ways:

    - :hg:`resolve [--tool TOOL] FILE...`: attempt to re-merge the specified
      files, discarding any previous merge attempts. Re-merging is not
      performed for files already marked as resolved. Use ``--all/-a``
      to select all unresolved files. ``--tool`` can be used to specify
      the merge tool used for the given files. It overrides the HGMERGE
      environment variable and your configuration files. Previous file
      contents are saved with a ``.orig`` suffix.

    - :hg:`resolve -m [FILE]`: mark a file as having been resolved
      (e.g. after having manually fixed-up the files). The default is
      to mark all unresolved files.

    - :hg:`resolve -u [FILE]...`: mark a file as unresolved. The
      default is to mark all resolved files.

    - :hg:`resolve -l`: list files which had or still have conflicts.
      In the printed list, ``U`` = unresolved and ``R`` = resolved.
      You can use ``set:unresolved()`` or ``set:resolved()`` to filter
      the list. See :hg:`help filesets` for details.

    .. note::

       Mercurial will not let you commit files with unresolved merge
       conflicts. You must use :hg:`resolve -m ...` before you can
       commit after a conflicting merge.

    Returns 0 on success, 1 if any files fail a resolve attempt.
4192 """
4192 """
4193
4193
4194 opts = pycompat.byteskwargs(opts)
4194 opts = pycompat.byteskwargs(opts)
4195 flaglist = 'all mark unmark list no_status'.split()
4195 flaglist = 'all mark unmark list no_status'.split()
4196 all, mark, unmark, show, nostatus = \
4196 all, mark, unmark, show, nostatus = \
4197 [opts.get(o) for o in flaglist]
4197 [opts.get(o) for o in flaglist]
4198
4198
4199 if (show and (mark or unmark)) or (mark and unmark):
4199 if (show and (mark or unmark)) or (mark and unmark):
4200 raise error.Abort(_("too many options specified"))
4200 raise error.Abort(_("too many options specified"))
4201 if pats and all:
4201 if pats and all:
4202 raise error.Abort(_("can't specify --all and patterns"))
4202 raise error.Abort(_("can't specify --all and patterns"))
4203 if not (all or pats or show or mark or unmark):
4203 if not (all or pats or show or mark or unmark):
4204 raise error.Abort(_('no files or directories specified'),
4204 raise error.Abort(_('no files or directories specified'),
4205 hint=('use --all to re-merge all unresolved files'))
4205 hint=('use --all to re-merge all unresolved files'))
4206
4206
4207 if show:
4207 if show:
4208 ui.pager('resolve')
4208 ui.pager('resolve')
4209 fm = ui.formatter('resolve', opts)
4209 fm = ui.formatter('resolve', opts)
4210 ms = mergemod.mergestate.read(repo)
4210 ms = mergemod.mergestate.read(repo)
4211 m = scmutil.match(repo[None], pats, opts)
4211 m = scmutil.match(repo[None], pats, opts)
4212 for f in ms:
4212 for f in ms:
4213 if not m(f):
4213 if not m(f):
4214 continue
4214 continue
4215 l = 'resolve.' + {'u': 'unresolved', 'r': 'resolved',
4215 l = 'resolve.' + {'u': 'unresolved', 'r': 'resolved',
4216 'd': 'driverresolved'}[ms[f]]
4216 'd': 'driverresolved'}[ms[f]]
4217 fm.startitem()
4217 fm.startitem()
4218 fm.condwrite(not nostatus, 'status', '%s ', ms[f].upper(), label=l)
4218 fm.condwrite(not nostatus, 'status', '%s ', ms[f].upper(), label=l)
4219 fm.write('path', '%s\n', f, label=l)
4219 fm.write('path', '%s\n', f, label=l)
4220 fm.end()
4220 fm.end()
4221 return 0
4221 return 0
4222
4222
4223 with repo.wlock():
4223 with repo.wlock():
4224 ms = mergemod.mergestate.read(repo)
4224 ms = mergemod.mergestate.read(repo)
4225
4225
4226 if not (ms.active() or repo.dirstate.p2() != nullid):
4226 if not (ms.active() or repo.dirstate.p2() != nullid):
4227 raise error.Abort(
4227 raise error.Abort(
4228 _('resolve command not applicable when not merging'))
4228 _('resolve command not applicable when not merging'))
4229
4229
4230 wctx = repo[None]
4230 wctx = repo[None]
4231
4231
4232 if ms.mergedriver and ms.mdstate() == 'u':
4232 if ms.mergedriver and ms.mdstate() == 'u':
4233 proceed = mergemod.driverpreprocess(repo, ms, wctx)
4233 proceed = mergemod.driverpreprocess(repo, ms, wctx)
4234 ms.commit()
4234 ms.commit()
4235 # allow mark and unmark to go through
4235 # allow mark and unmark to go through
4236 if not mark and not unmark and not proceed:
4236 if not mark and not unmark and not proceed:
4237 return 1
4237 return 1
4238
4238
4239 m = scmutil.match(wctx, pats, opts)
4239 m = scmutil.match(wctx, pats, opts)
4240 ret = 0
4240 ret = 0
4241 didwork = False
4241 didwork = False
4242 runconclude = False
4242 runconclude = False
4243
4243
4244 tocomplete = []
4244 tocomplete = []
4245 for f in ms:
4245 for f in ms:
4246 if not m(f):
4246 if not m(f):
4247 continue
4247 continue
4248
4248
4249 didwork = True
4249 didwork = True
4250
4250
4251 # don't let driver-resolved files be marked, and run the conclude
4251 # don't let driver-resolved files be marked, and run the conclude
4252 # step if asked to resolve
4252 # step if asked to resolve
4253 if ms[f] == "d":
4253 if ms[f] == "d":
4254 exact = m.exact(f)
4254 exact = m.exact(f)
4255 if mark:
4255 if mark:
4256 if exact:
4256 if exact:
4257 ui.warn(_('not marking %s as it is driver-resolved\n')
4257 ui.warn(_('not marking %s as it is driver-resolved\n')
4258 % f)
4258 % f)
4259 elif unmark:
4259 elif unmark:
4260 if exact:
4260 if exact:
4261 ui.warn(_('not unmarking %s as it is driver-resolved\n')
4261 ui.warn(_('not unmarking %s as it is driver-resolved\n')
4262 % f)
4262 % f)
4263 else:
4263 else:
4264 runconclude = True
4264 runconclude = True
4265 continue
4265 continue
4266
4266
4267 if mark:
4267 if mark:
4268 ms.mark(f, "r")
4268 ms.mark(f, "r")
4269 elif unmark:
4269 elif unmark:
4270 ms.mark(f, "u")
4270 ms.mark(f, "u")
4271 else:
4271 else:
4272 # backup pre-resolve (merge uses .orig for its own purposes)
4272 # backup pre-resolve (merge uses .orig for its own purposes)
4273 a = repo.wjoin(f)
4273 a = repo.wjoin(f)
4274 try:
4274 try:
4275 util.copyfile(a, a + ".resolve")
4275 util.copyfile(a, a + ".resolve")
4276 except (IOError, OSError) as inst:
4276 except (IOError, OSError) as inst:
4277 if inst.errno != errno.ENOENT:
4277 if inst.errno != errno.ENOENT:
4278 raise
4278 raise
4279
4279
4280 try:
4280 try:
4281 # preresolve file
4281 # preresolve file
4282 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
4282 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
4283 'resolve')
4283 'resolve')
4284 complete, r = ms.preresolve(f, wctx)
4284 complete, r = ms.preresolve(f, wctx)
4285 if not complete:
4285 if not complete:
4286 tocomplete.append(f)
4286 tocomplete.append(f)
4287 elif r:
4287 elif r:
4288 ret = 1
4288 ret = 1
4289 finally:
4289 finally:
4290 ui.setconfig('ui', 'forcemerge', '', 'resolve')
4290 ui.setconfig('ui', 'forcemerge', '', 'resolve')
4291 ms.commit()
4291 ms.commit()
4292
4292
4293 # replace filemerge's .orig file with our resolve file, but only
4293 # replace filemerge's .orig file with our resolve file, but only
4294 # for merges that are complete
4294 # for merges that are complete
4295 if complete:
4295 if complete:
4296 try:
4296 try:
4297 util.rename(a + ".resolve",
4297 util.rename(a + ".resolve",
4298 scmutil.origpath(ui, repo, a))
4298 scmutil.origpath(ui, repo, a))
4299 except OSError as inst:
4299 except OSError as inst:
4300 if inst.errno != errno.ENOENT:
4300 if inst.errno != errno.ENOENT:
4301 raise
4301 raise
4302
4302
4303 for f in tocomplete:
4303 for f in tocomplete:
4304 try:
4304 try:
4305 # resolve file
4305 # resolve file
4306 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
4306 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
4307 'resolve')
4307 'resolve')
4308 r = ms.resolve(f, wctx)
4308 r = ms.resolve(f, wctx)
4309 if r:
4309 if r:
4310 ret = 1
4310 ret = 1
4311 finally:
4311 finally:
4312 ui.setconfig('ui', 'forcemerge', '', 'resolve')
4312 ui.setconfig('ui', 'forcemerge', '', 'resolve')
4313 ms.commit()
4313 ms.commit()
4314
4314
4315 # replace filemerge's .orig file with our resolve file
4315 # replace filemerge's .orig file with our resolve file
4316 a = repo.wjoin(f)
4316 a = repo.wjoin(f)
4317 try:
4317 try:
4318 util.rename(a + ".resolve", scmutil.origpath(ui, repo, a))
4318 util.rename(a + ".resolve", scmutil.origpath(ui, repo, a))
4319 except OSError as inst:
4319 except OSError as inst:
4320 if inst.errno != errno.ENOENT:
4320 if inst.errno != errno.ENOENT:
4321 raise
4321 raise
4322
4322
4323 ms.commit()
4323 ms.commit()
4324 ms.recordactions()
4324 ms.recordactions()
4325
4325
4326 if not didwork and pats:
4326 if not didwork and pats:
4327 hint = None
4327 hint = None
4328 if not any([p for p in pats if p.find(':') >= 0]):
4328 if not any([p for p in pats if p.find(':') >= 0]):
4329 pats = ['path:%s' % p for p in pats]
4329 pats = ['path:%s' % p for p in pats]
4330 m = scmutil.match(wctx, pats, opts)
4330 m = scmutil.match(wctx, pats, opts)
4331 for f in ms:
4331 for f in ms:
4332 if not m(f):
4332 if not m(f):
4333 continue
4333 continue
4334 flags = ''.join(['-%s ' % o[0] for o in flaglist
4334 flags = ''.join(['-%s ' % o[0] for o in flaglist
4335 if opts.get(o)])
4335 if opts.get(o)])
4336 hint = _("(try: hg resolve %s%s)\n") % (
4336 hint = _("(try: hg resolve %s%s)\n") % (
4337 flags,
4337 flags,
4338 ' '.join(pats))
4338 ' '.join(pats))
4339 break
4339 break
4340 ui.warn(_("arguments do not match paths that need resolving\n"))
4340 ui.warn(_("arguments do not match paths that need resolving\n"))
4341 if hint:
4341 if hint:
4342 ui.warn(hint)
4342 ui.warn(hint)
4343 elif ms.mergedriver and ms.mdstate() != 's':
4343 elif ms.mergedriver and ms.mdstate() != 's':
4344 # run conclude step when either a driver-resolved file is requested
4344 # run conclude step when either a driver-resolved file is requested
4345 # or there are no driver-resolved files
4345 # or there are no driver-resolved files
4346 # we can't use 'ret' to determine whether any files are unresolved
4346 # we can't use 'ret' to determine whether any files are unresolved
4347 # because we might not have tried to resolve some
4347 # because we might not have tried to resolve some
4348 if ((runconclude or not list(ms.driverresolved()))
4348 if ((runconclude or not list(ms.driverresolved()))
4349 and not list(ms.unresolved())):
4349 and not list(ms.unresolved())):
4350 proceed = mergemod.driverconclude(repo, ms, wctx)
4350 proceed = mergemod.driverconclude(repo, ms, wctx)
4351 ms.commit()
4351 ms.commit()
4352 if not proceed:
4352 if not proceed:
4353 return 1
4353 return 1
4354
4354
4355 # Nudge users into finishing an unfinished operation
4355 # Nudge users into finishing an unfinished operation
4356 unresolvedf = list(ms.unresolved())
4356 unresolvedf = list(ms.unresolved())
4357 driverresolvedf = list(ms.driverresolved())
4357 driverresolvedf = list(ms.driverresolved())
4358 if not unresolvedf and not driverresolvedf:
4358 if not unresolvedf and not driverresolvedf:
4359 ui.status(_('(no more unresolved files)\n'))
4359 ui.status(_('(no more unresolved files)\n'))
4360 cmdutil.checkafterresolved(repo)
4360 cmdutil.checkafterresolved(repo)
4361 elif not unresolvedf:
4361 elif not unresolvedf:
4362 ui.status(_('(no more unresolved files -- '
4362 ui.status(_('(no more unresolved files -- '
4363 'run "hg resolve --all" to conclude)\n'))
4363 'run "hg resolve --all" to conclude)\n'))
4364
4364
4365 return ret
4365 return ret
4366
4366
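The backup dance in resolve() above (copy the working file to ``f + ".resolve"`` before re-merging, tolerate a missing file via ``errno.ENOENT``, then promote the pre-merge copy into the ``.orig`` slot once the merge completes) can be sketched standalone. ``backup_then_restore`` and its paths are illustrative stand-ins, not Mercurial APIs:

```python
import errno
import os
import shutil
import tempfile

def backup_then_restore(workfile, origpath):
    # Back the file up before re-merging; a missing file is not an error
    # (mirrors the errno.ENOENT tolerance in resolve()).
    try:
        shutil.copyfile(workfile, workfile + ".resolve")
    except (IOError, OSError) as inst:
        if inst.errno != errno.ENOENT:
            raise
    # ... the re-merge would run here and may clobber workfile ...
    # Afterwards, move the pre-merge copy into the user-visible .orig slot.
    try:
        os.rename(workfile + ".resolve", origpath)
    except OSError as inst:
        if inst.errno != errno.ENOENT:
            raise

d = tempfile.mkdtemp()
f = os.path.join(d, "a.txt")
with open(f, "w") as fp:
    fp.write("conflicted contents\n")
backup_then_restore(f, f + ".orig")
print(os.path.exists(f + ".orig"))
```

The ENOENT check matters on both sides: the file may have been deleted by the conflict, and the ``.resolve`` copy may never have been made.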
@command('revert',
    [('a', 'all', None, _('revert all changes when no arguments given')),
    ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
    ('r', 'rev', '', _('revert to the specified revision'), _('REV')),
    ('C', 'no-backup', None, _('do not save backup copies of files')),
    ('i', 'interactive', None,
            _('interactively select the changes (EXPERIMENTAL)')),
    ] + walkopts + dryrunopts,
    _('[OPTION]... [-r REV] [NAME]...'))
def revert(ui, repo, *pats, **opts):
    """restore files to their checkout state

    .. note::

       To check out earlier revisions, you should use :hg:`update REV`.
       To cancel an uncommitted merge (and lose your changes),
       use :hg:`update --clean .`.

    With no revision specified, revert the specified files or directories
    to the contents they had in the parent of the working directory.
    This restores the contents of files to an unmodified
    state and unschedules adds, removes, copies, and renames. If the
    working directory has two parents, you must explicitly specify a
    revision.

    Using the -r/--rev or -d/--date options, revert the given files or
    directories to their states as of a specific revision. Because
    revert does not change the working directory parents, this will
    cause these files to appear modified. This can be helpful to "back
    out" some or all of an earlier change. See :hg:`backout` for a
    related method.

    Modified files are saved with a .orig suffix before reverting.
    To disable these backups, use --no-backup. It is possible to store
    the backup files in a custom directory relative to the root of the
    repository by setting the ``ui.origbackuppath`` configuration
    option.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    See :hg:`help backout` for a way to reverse the effect of an
    earlier changeset.

    Returns 0 on success.
    """

    if opts.get("date"):
        if opts.get("rev"):
            raise error.Abort(_("you can't specify a revision and a date"))
        opts["rev"] = cmdutil.finddate(ui, repo, opts["date"])

    parent, p2 = repo.dirstate.parents()
    if not opts.get('rev') and p2 != nullid:
        # revert after merge is a trap for new users (issue2915)
        raise error.Abort(_('uncommitted merge with no revision specified'),
                          hint=_("use 'hg update' or see 'hg help revert'"))

    ctx = scmutil.revsingle(repo, opts.get('rev'))

    if (not (pats or opts.get('include') or opts.get('exclude') or
             opts.get('all') or opts.get('interactive'))):
        msg = _("no files or directories specified")
        if p2 != nullid:
            hint = _("uncommitted merge, use --all to discard all changes,"
                     " or 'hg update -C .' to abort the merge")
            raise error.Abort(msg, hint=hint)
        dirty = any(repo.status())
        node = ctx.node()
        if node != parent:
            if dirty:
                hint = _("uncommitted changes, use --all to discard all"
                         " changes, or 'hg update %s' to update") % ctx.rev()
            else:
                hint = _("use --all to revert all files,"
                         " or 'hg update %s' to update") % ctx.rev()
        elif dirty:
            hint = _("uncommitted changes, use --all to discard all changes")
        else:
            hint = _("use --all to revert all files")
        raise error.Abort(msg, hint=hint)

    return cmdutil.revert(ui, repo, ctx, (parent, p2), *pats, **opts)

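A rough sketch of how a ``.orig`` backup location can be derived from ``ui.origbackuppath``, as the revert docstring above describes: with no value configured the backup sits next to the file; otherwise the file's repo-relative path is mirrored under the configured directory. This helper is a hypothetical simplification for illustration only; the real logic lives in ``scmutil.origpath`` and handles more cases:

```python
import os

def origpath(repo_root, origbackuppath, filepath):
    # No ui.origbackuppath configured: back up next to the file itself.
    if not origbackuppath:
        return filepath + ".orig"
    # Otherwise mirror the repo-relative path under the backup directory,
    # which is itself relative to the repository root.
    rel = os.path.relpath(filepath, repo_root)
    return os.path.join(repo_root, origbackuppath, rel) + ".orig"

print(origpath("/repo", None, "/repo/src/a.c"))
print(origpath("/repo", ".hg/origbackups", "/repo/src/a.c"))
```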
@command('rollback', dryrunopts +
         [('f', 'force', False, _('ignore safety measures'))])
def rollback(ui, repo, **opts):
    """roll back the last transaction (DANGEROUS) (DEPRECATED)

    Please use :hg:`commit --amend` instead of rollback to correct
    mistakes in the last commit.

    This command should be used with care. There is only one level of
    rollback, and there is no way to undo a rollback. It will also
    restore the dirstate at the time of the last transaction, losing
    any dirstate changes since that time. This command does not alter
    the working directory.

    Transactions are used to encapsulate the effects of all commands
    that create new changesets or propagate existing changesets into a
    repository.

    .. container:: verbose

      For example, the following commands are transactional, and their
      effects can be rolled back:

      - commit
      - import
      - pull
      - push (with this repository as the destination)
      - unbundle

      To avoid permanent data loss, rollback will refuse to roll back a
      commit transaction if it isn't checked out. Use --force to
      override this protection.

      The rollback command can be entirely disabled by setting the
      ``ui.rollback`` configuration setting to false. If you're here
      because you want to use rollback and it's disabled, you can
      re-enable the command by setting ``ui.rollback`` to true.

    This command is not intended for use on public repositories. Once
    changes are visible for pull by other users, rolling a transaction
    back locally is ineffective (someone else may already have pulled
    the changes). Furthermore, a race is possible with readers of the
    repository; for example an in-progress pull from the repository
    may fail if a rollback is performed.

    Returns 0 on success, 1 if no rollback data is available.
    """
    if not ui.configbool('ui', 'rollback', True):
        raise error.Abort(_('rollback is disabled because it is unsafe'),
                          hint=('see `hg help -v rollback` for information'))
    return repo.rollback(dryrun=opts.get(r'dry_run'),
                         force=opts.get(r'force'))

@command('root', [])
def root(ui, repo):
    """print the root (top) of the current working directory

    Print the root directory of the current repository.

    Returns 0 on success.
    """
    ui.write(repo.root + "\n")

@command('^serve',
    [('A', 'accesslog', '', _('name of access log file to write to'),
     _('FILE')),
    ('d', 'daemon', None, _('run server in background')),
    ('', 'daemon-postexec', [], _('used internally by daemon mode')),
    ('E', 'errorlog', '', _('name of error log file to write to'), _('FILE')),
    # use string type, then we can check if something was passed
    ('p', 'port', '', _('port to listen on (default: 8000)'), _('PORT')),
    ('a', 'address', '', _('address to listen on (default: all interfaces)'),
     _('ADDR')),
    ('', 'prefix', '', _('prefix path to serve from (default: server root)'),
     _('PREFIX')),
    ('n', 'name', '',
     _('name to show in web pages (default: working directory)'), _('NAME')),
    ('', 'web-conf', '',
     _("name of the hgweb config file (see 'hg help hgweb')"), _('FILE')),
    ('', 'webdir-conf', '', _('name of the hgweb config file (DEPRECATED)'),
     _('FILE')),
    ('', 'pid-file', '', _('name of file to write process ID to'), _('FILE')),
    ('', 'stdio', None, _('for remote clients (ADVANCED)')),
    ('', 'cmdserver', '', _('for remote clients (ADVANCED)'), _('MODE')),
    ('t', 'templates', '', _('web templates to use'), _('TEMPLATE')),
    ('', 'style', '', _('template style to use'), _('STYLE')),
    ('6', 'ipv6', None, _('use IPv6 in addition to IPv4')),
    ('', 'certificate', '', _('SSL certificate file'), _('FILE'))]
    + subrepoopts,
    _('[OPTION]...'),
    optionalrepo=True)
def serve(ui, repo, **opts):
    """start stand-alone webserver

    Start a local HTTP repository browser and pull server. You can use
    this for ad-hoc sharing and browsing of repositories. It is
    recommended to use a real web server to serve a repository for
    longer periods of time.

    Please note that the server does not implement access control.
    This means that, by default, anybody can read from the server and
    nobody can write to it. Set the ``web.allow_push`` option to ``*``
    to allow everybody to push to the server. You should use a real
    web server if you need to authenticate users.

    By default, the server logs accesses to stdout and errors to
    stderr. Use the -A/--accesslog and -E/--errorlog options to log to
    files.

    To have the server choose a free port number to listen on, specify
    a port number of 0; in this case, the server will print the port
    number it uses.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    if opts["stdio"] and opts["cmdserver"]:
        raise error.Abort(_("cannot use --stdio with --cmdserver"))

    if opts["stdio"]:
        if repo is None:
            raise error.RepoError(_("there is no Mercurial repository here"
                                    " (.hg not found)"))
        s = sshserver.sshserver(ui, repo)
        s.serve_forever()

    service = server.createservice(ui, repo, opts)
    return server.runservice(opts, initfn=service.init, runfn=service.run)

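The port-0 behavior described in serve()'s docstring comes from ordinary socket semantics: binding to port 0 asks the operating system for any free port, which the caller can then look up and report. A minimal standalone sketch:

```python
import socket

# Bind to port 0 so the OS assigns a free port; getsockname() reveals
# which one was chosen -- the same mechanism behind 'hg serve -p 0'.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('127.0.0.1', 0))
port = s.getsockname()[1]
print('listening on port %d' % port)
s.close()
```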
4580 @command('^status|st',
4580 @command('^status|st',
4581 [('A', 'all', None, _('show status of all files')),
4581 [('A', 'all', None, _('show status of all files')),
4582 ('m', 'modified', None, _('show only modified files')),
4582 ('m', 'modified', None, _('show only modified files')),
4583 ('a', 'added', None, _('show only added files')),
4583 ('a', 'added', None, _('show only added files')),
4584 ('r', 'removed', None, _('show only removed files')),
4584 ('r', 'removed', None, _('show only removed files')),
4585 ('d', 'deleted', None, _('show only deleted (but tracked) files')),
4585 ('d', 'deleted', None, _('show only deleted (but tracked) files')),
4586 ('c', 'clean', None, _('show only files without changes')),
4586 ('c', 'clean', None, _('show only files without changes')),
4587 ('u', 'unknown', None, _('show only unknown (not tracked) files')),
4587 ('u', 'unknown', None, _('show only unknown (not tracked) files')),
4588 ('i', 'ignored', None, _('show only ignored files')),
4588 ('i', 'ignored', None, _('show only ignored files')),
4589 ('n', 'no-status', None, _('hide status prefix')),
4589 ('n', 'no-status', None, _('hide status prefix')),
4590 ('C', 'copies', None, _('show source of copied files')),
4590 ('C', 'copies', None, _('show source of copied files')),
4591 ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
4591 ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
4592 ('', 'rev', [], _('show difference from revision'), _('REV')),
4592 ('', 'rev', [], _('show difference from revision'), _('REV')),
4593 ('', 'change', '', _('list the changed files of a revision'), _('REV')),
4593 ('', 'change', '', _('list the changed files of a revision'), _('REV')),
4594 ] + walkopts + subrepoopts + formatteropts,
4594 ] + walkopts + subrepoopts + formatteropts,
4595 _('[OPTION]... [FILE]...'),
4595 _('[OPTION]... [FILE]...'),
4596 inferrepo=True)
4596 inferrepo=True)
def status(ui, repo, *pats, **opts):
    """show changed files in the working directory

    Show status of files in the repository. If names are given, only
    files that match are shown. Files that are clean or ignored or
    the source of a copy/move operation, are not listed unless
    -c/--clean, -i/--ignored, -C/--copies or -A/--all are given.
    Unless options described with "show only ..." are given, the
    options -mardu are used.

    Option -q/--quiet hides untracked (unknown and ignored) files
    unless explicitly requested with -u/--unknown or -i/--ignored.

    .. note::

       :hg:`status` may appear to disagree with diff if permissions have
       changed or a merge has occurred. The standard diff format does
       not report permission changes and diff only reports changes
       relative to one merge parent.

    If one revision is given, it is used as the base revision.
    If two revisions are given, the differences between them are
    shown. The --change option can also be used as a shortcut to list
    the changed files of a revision from its first parent.

    The codes used to show the status of files are::

      M = modified
      A = added
      R = removed
      C = clean
      ! = missing (deleted by non-hg command, but still tracked)
      ? = not tracked
      I = ignored
        = origin of the previous file (with --copies)

    .. container:: verbose

      Examples:

      - show changes in the working directory relative to a
        changeset::

          hg status --rev 9353

      - show changes in the working directory relative to the
        current directory (see :hg:`help patterns` for more information)::

          hg status re:

      - show all changes including copies in an existing changeset::

          hg status --copies --change 9353

      - get a NUL separated list of added files, suitable for xargs::

          hg status -an0

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    revs = opts.get('rev')
    change = opts.get('change')

    if revs and change:
        msg = _('cannot specify --rev and --change at the same time')
        raise error.Abort(msg)
    elif change:
        node2 = scmutil.revsingle(repo, change, None).node()
        node1 = repo[node2].p1().node()
    else:
        node1, node2 = scmutil.revpair(repo, revs)

    if pats or ui.configbool('commands', 'status.relative'):
        cwd = repo.getcwd()
    else:
        cwd = ''

    if opts.get('print0'):
        end = '\0'
    else:
        end = '\n'
    copy = {}
    states = 'modified added removed deleted unknown ignored clean'.split()
    show = [k for k in states if opts.get(k)]
    if opts.get('all'):
        show += ui.quiet and (states[:4] + ['clean']) or states
    if not show:
        if ui.quiet:
            show = states[:4]
        else:
            show = states[:5]

    m = scmutil.match(repo[node2], pats, opts)
    stat = repo.status(node1, node2, m,
                       'ignored' in show, 'clean' in show, 'unknown' in show,
                       opts.get('subrepos'))
    changestates = zip(states, pycompat.iterbytestr('MAR!?IC'), stat)

    if (opts.get('all') or opts.get('copies')
        or ui.configbool('ui', 'statuscopies')) and not opts.get('no_status'):
        copy = copies.pathcopies(repo[node1], repo[node2], m)

    ui.pager('status')
    fm = ui.formatter('status', opts)
    fmt = '%s' + end
    showchar = not opts.get('no_status')

    for state, char, files in changestates:
        if state in show:
            label = 'status.' + state
            for f in files:
                fm.startitem()
                fm.condwrite(showchar, 'status', '%s ', char, label=label)
                fm.write('path', fmt, repo.pathto(f, cwd), label=label)
                if f in copy:
                    fm.write("copy", '  %s' + end, repo.pathto(copy[f], cwd),
                             label='status.copied')
    fm.end()

@command('^summary|sum',
    [('', 'remote', None, _('check for push and pull'))], '[--remote]')
def summary(ui, repo, **opts):
    """summarize working directory state

    This generates a brief summary of the working directory state,
    including parents, branch, commit status, phase and available updates.

    With the --remote option, this will check the default paths for
    incoming and outgoing changes. This can be time-consuming.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    ui.pager('summary')
    ctx = repo[None]
    parents = ctx.parents()
    pnode = parents[0].node()
    marks = []

    ms = None
    try:
        ms = mergemod.mergestate.read(repo)
    except error.UnsupportedMergeRecords as e:
        s = ' '.join(e.recordtypes)
        ui.warn(
            _('warning: merge state has unsupported record types: %s\n') % s)
        unresolved = 0
    else:
        unresolved = [f for f in ms if ms[f] == 'u']

    for p in parents:
        # label with log.changeset (instead of log.parent) since this
        # shows a working directory parent *changeset*:
        # i18n: column positioning for "hg summary"
        ui.write(_('parent: %d:%s ') % (p.rev(), p),
                 label=cmdutil._changesetlabels(p))
        ui.write(' '.join(p.tags()), label='log.tag')
        if p.bookmarks():
            marks.extend(p.bookmarks())
        if p.rev() == -1:
            if not len(repo):
                ui.write(_(' (empty repository)'))
            else:
                ui.write(_(' (no revision checked out)'))
        if p.obsolete():
            ui.write(_(' (obsolete)'))
        if p.troubled():
            ui.write(' ('
                     + ', '.join(ui.label(trouble, 'trouble.%s' % trouble)
                                 for trouble in p.troubles())
                     + ')')
        ui.write('\n')
        if p.description():
            ui.status(' ' + p.description().splitlines()[0].strip() + '\n',
                      label='log.summary')

    branch = ctx.branch()
    bheads = repo.branchheads(branch)
    # i18n: column positioning for "hg summary"
    m = _('branch: %s\n') % branch
    if branch != 'default':
        ui.write(m, label='log.branch')
    else:
        ui.status(m, label='log.branch')

    if marks:
        active = repo._activebookmark
        # i18n: column positioning for "hg summary"
        ui.write(_('bookmarks:'), label='log.bookmark')
        if active is not None:
            if active in marks:
                ui.write(' *' + active, label=bookmarks.activebookmarklabel)
                marks.remove(active)
            else:
                ui.write(' [%s]' % active, label=bookmarks.activebookmarklabel)
        for m in marks:
            ui.write(' ' + m, label='log.bookmark')
        ui.write('\n', label='log.bookmark')

    status = repo.status(unknown=True)

    c = repo.dirstate.copies()
    copied, renamed = [], []
    for d, s in c.iteritems():
        if s in status.removed:
            status.removed.remove(s)
            renamed.append(d)
        else:
            copied.append(d)
        if d in status.added:
            status.added.remove(d)

    subs = [s for s in ctx.substate if ctx.sub(s).dirty()]

    labels = [(ui.label(_('%d modified'), 'status.modified'), status.modified),
              (ui.label(_('%d added'), 'status.added'), status.added),
              (ui.label(_('%d removed'), 'status.removed'), status.removed),
              (ui.label(_('%d renamed'), 'status.copied'), renamed),
              (ui.label(_('%d copied'), 'status.copied'), copied),
              (ui.label(_('%d deleted'), 'status.deleted'), status.deleted),
              (ui.label(_('%d unknown'), 'status.unknown'), status.unknown),
              (ui.label(_('%d unresolved'), 'resolve.unresolved'), unresolved),
              (ui.label(_('%d subrepos'), 'status.modified'), subs)]
    t = []
    for l, s in labels:
        if s:
            t.append(l % len(s))

    t = ', '.join(t)
    cleanworkdir = False

    if repo.vfs.exists('graftstate'):
        t += _(' (graft in progress)')
    if repo.vfs.exists('updatestate'):
        t += _(' (interrupted update)')
    elif len(parents) > 1:
        t += _(' (merge)')
    elif branch != parents[0].branch():
        t += _(' (new branch)')
    elif (parents[0].closesbranch() and
          pnode in repo.branchheads(branch, closed=True)):
        t += _(' (head closed)')
    elif not (status.modified or status.added or status.removed or renamed or
              copied or subs):
        t += _(' (clean)')
        cleanworkdir = True
    elif pnode not in bheads:
        t += _(' (new branch head)')

    if parents:
        pendingphase = max(p.phase() for p in parents)
    else:
        pendingphase = phases.public

    if pendingphase > phases.newcommitphase(ui):
        t += ' (%s)' % phases.phasenames[pendingphase]

    if cleanworkdir:
        # i18n: column positioning for "hg summary"
        ui.status(_('commit: %s\n') % t.strip())
    else:
        # i18n: column positioning for "hg summary"
        ui.write(_('commit: %s\n') % t.strip())

    # all ancestors of branch heads - all ancestors of parent = new csets
    new = len(repo.changelog.findmissing([pctx.node() for pctx in parents],
                                         bheads))

    if new == 0:
        # i18n: column positioning for "hg summary"
        ui.status(_('update: (current)\n'))
    elif pnode not in bheads:
        # i18n: column positioning for "hg summary"
        ui.write(_('update: %d new changesets (update)\n') % new)
    else:
        # i18n: column positioning for "hg summary"
        ui.write(_('update: %d new changesets, %d branch heads (merge)\n') %
                 (new, len(bheads)))

    t = []
    draft = len(repo.revs('draft()'))
    if draft:
        t.append(_('%d draft') % draft)
    secret = len(repo.revs('secret()'))
    if secret:
        t.append(_('%d secret') % secret)

    if draft or secret:
        ui.status(_('phases: %s\n') % ', '.join(t))

    if obsolete.isenabled(repo, obsolete.createmarkersopt):
        for trouble in ("unstable", "divergent", "bumped"):
            numtrouble = len(repo.revs(trouble + "()"))
            # We write all the possibilities to ease translation
            troublemsg = {
                "unstable": _("unstable: %d changesets"),
                "divergent": _("divergent: %d changesets"),
                "bumped": _("bumped: %d changesets"),
            }
            if numtrouble > 0:
                ui.status(troublemsg[trouble] % numtrouble + "\n")

    cmdutil.summaryhooks(ui, repo)

    if opts.get('remote'):
        needsincoming, needsoutgoing = True, True
    else:
        needsincoming, needsoutgoing = False, False
        for i, o in cmdutil.summaryremotehooks(ui, repo, opts, None):
            if i:
                needsincoming = True
            if o:
                needsoutgoing = True
        if not needsincoming and not needsoutgoing:
            return

    def getincoming():
        source, branches = hg.parseurl(ui.expandpath('default'))
        sbranch = branches[0]
        try:
            other = hg.peer(repo, {}, source)
        except error.RepoError:
            if opts.get('remote'):
                raise
            return source, sbranch, None, None, None
        revs, checkout = hg.addbranchrevs(repo, other, branches, None)
        if revs:
            revs = [other.lookup(rev) for rev in revs]
        ui.debug('comparing with %s\n' % util.hidepassword(source))
        repo.ui.pushbuffer()
        commoninc = discovery.findcommonincoming(repo, other, heads=revs)
        repo.ui.popbuffer()
        return source, sbranch, other, commoninc, commoninc[1]

    if needsincoming:
        source, sbranch, sother, commoninc, incoming = getincoming()
    else:
        source = sbranch = sother = commoninc = incoming = None

    def getoutgoing():
        dest, branches = hg.parseurl(ui.expandpath('default-push', 'default'))
        dbranch = branches[0]
        revs, checkout = hg.addbranchrevs(repo, repo, branches, None)
        if source != dest:
            try:
                dother = hg.peer(repo, {}, dest)
            except error.RepoError:
                if opts.get('remote'):
                    raise
                return dest, dbranch, None, None
            ui.debug('comparing with %s\n' % util.hidepassword(dest))
        elif sother is None:
            # there is no explicit destination peer, but source one is invalid
            return dest, dbranch, None, None
        else:
            dother = sother
        if (source != dest or (sbranch is not None and sbranch != dbranch)):
            common = None
        else:
            common = commoninc
        if revs:
            revs = [repo.lookup(rev) for rev in revs]
        repo.ui.pushbuffer()
        outgoing = discovery.findcommonoutgoing(repo, dother, onlyheads=revs,
                                                commoninc=common)
        repo.ui.popbuffer()
        return dest, dbranch, dother, outgoing

    if needsoutgoing:
        dest, dbranch, dother, outgoing = getoutgoing()
    else:
        dest = dbranch = dother = outgoing = None

    if opts.get('remote'):
        t = []
        if incoming:
            t.append(_('1 or more incoming'))
        o = outgoing.missing
        if o:
            t.append(_('%d outgoing') % len(o))
        other = dother or sother
        if 'bookmarks' in other.listkeys('namespaces'):
            counts = bookmarks.summary(repo, other)
            if counts[0] > 0:
                t.append(_('%d incoming bookmarks') % counts[0])
            if counts[1] > 0:
                t.append(_('%d outgoing bookmarks') % counts[1])

        if t:
            # i18n: column positioning for "hg summary"
            ui.write(_('remote: %s\n') % (', '.join(t)))
        else:
            # i18n: column positioning for "hg summary"
            ui.status(_('remote: (synced)\n'))

    cmdutil.summaryremotehooks(ui, repo, opts,
                               ((source, sbranch, sother, commoninc),
                                (dest, dbranch, dother, outgoing)))

@command('tag',
    [('f', 'force', None, _('force tag')),
    ('l', 'local', None, _('make the tag local')),
    ('r', 'rev', '', _('revision to tag'), _('REV')),
    ('', 'remove', None, _('remove a tag')),
    # -l/--local is already there, commitopts cannot be used
    ('e', 'edit', None, _('invoke editor on commit messages')),
    ('m', 'message', '', _('use text as commit message'), _('TEXT')),
    ] + commitopts2,
    _('[-f] [-l] [-m TEXT] [-d DATE] [-u USER] [-r REV] NAME...'))
def tag(ui, repo, name1, *names, **opts):
    """add one or more tags for the current or given revision

    Name a particular revision using <name>.

    Tags are used to name particular revisions of the repository and are
    very useful to compare different revisions, to go back to significant
    earlier versions or to mark branch points as releases, etc. Changing
    an existing tag is normally disallowed; use -f/--force to override.

    If no revision is given, the parent of the working directory is
    used.

    To facilitate version control, distribution, and merging of tags,
    they are stored as a file named ".hgtags" which is managed similarly
    to other project files and can be hand-edited if necessary. This
    also means that tagging creates a new commit. The file
    ".hg/localtags" is used for local tags (not shared among
    repositories).

    Tag commits are usually made at the head of a branch. If the parent
    of the working directory is not a branch head, :hg:`tag` aborts; use
    -f/--force to force the tag commit to be based on a non-head
    changeset.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    Since tag names have priority over branch names during revision
    lookup, using an existing branch name as a tag name is discouraged.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    wlock = lock = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        rev_ = "."
        names = [t.strip() for t in (name1,) + names]
        if len(names) != len(set(names)):
            raise error.Abort(_('tag names must be unique'))
        for n in names:
            scmutil.checknewlabel(repo, n, 'tag')
            if not n:
                raise error.Abort(_('tag names cannot consist entirely of '
                                    'whitespace'))
        if opts.get('rev') and opts.get('remove'):
            raise error.Abort(_("--rev and --remove are incompatible"))
        if opts.get('rev'):
            rev_ = opts['rev']
        message = opts.get('message')
        if opts.get('remove'):
            if opts.get('local'):
                expectedtype = 'local'
            else:
                expectedtype = 'global'

            for n in names:
                if not repo.tagtype(n):
                    raise error.Abort(_("tag '%s' does not exist") % n)
                if repo.tagtype(n) != expectedtype:
                    if expectedtype == 'global':
                        raise error.Abort(_("tag '%s' is not a global tag") % n)
                    else:
                        raise error.Abort(_("tag '%s' is not a local tag") % n)
            rev_ = 'null'
            if not message:
                # we don't translate commit messages
                message = 'Removed tag %s' % ', '.join(names)
        elif not opts.get('force'):
            for n in names:
                if n in repo.tags():
                    raise error.Abort(_("tag '%s' already exists "
                                        "(use -f to force)") % n)
        if not opts.get('local'):
            p1, p2 = repo.dirstate.parents()
            if p2 != nullid:
                raise error.Abort(_('uncommitted merge'))
            bheads = repo.branchheads()
            if not opts.get('force') and bheads and p1 not in bheads:
                raise error.Abort(_('working directory is not at a branch head '
                                    '(use -f to force)'))
        r = scmutil.revsingle(repo, rev_).node()

        if not message:
            # we don't translate commit messages
            message = ('Added tag %s for changeset %s' %
                       (', '.join(names), short(r)))
5097
5097
5098 date = opts.get('date')
5098 date = opts.get('date')
5099 if date:
5099 if date:
5100 date = util.parsedate(date)
5100 date = util.parsedate(date)
5101
5101
5102 if opts.get('remove'):
5102 if opts.get('remove'):
5103 editform = 'tag.remove'
5103 editform = 'tag.remove'
5104 else:
5104 else:
5105 editform = 'tag.add'
5105 editform = 'tag.add'
5106 editor = cmdutil.getcommiteditor(editform=editform,
5106 editor = cmdutil.getcommiteditor(editform=editform,
5107 **pycompat.strkwargs(opts))
5107 **pycompat.strkwargs(opts))
5108
5108
5109 # don't allow tagging the null rev
5109 # don't allow tagging the null rev
5110 if (not opts.get('remove') and
5110 if (not opts.get('remove') and
5111 scmutil.revsingle(repo, rev_).rev() == nullrev):
5111 scmutil.revsingle(repo, rev_).rev() == nullrev):
5112 raise error.Abort(_("cannot tag null revision"))
5112 raise error.Abort(_("cannot tag null revision"))
5113
5113
5114 tagsmod.tag(repo, names, r, message, opts.get('local'),
5114 tagsmod.tag(repo, names, r, message, opts.get('local'),
5115 opts.get('user'), date, editor=editor)
5115 opts.get('user'), date, editor=editor)
5116 finally:
5116 finally:
5117 release(lock, wlock)
5117 release(lock, wlock)
5118
5118
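The `tag` command above validates its name arguments before doing any repository work: names are stripped, must be unique, and must not be entirely whitespace. A minimal standalone sketch of just that validation (the helper name `checktagnames` is hypothetical, not part of Mercurial's API):

```python
# Hypothetical, simplified sketch (not Mercurial API): the same two
# validations `hg tag` performs above -- stripped names must be unique,
# and must not be entirely whitespace.
def checktagnames(name1, *names):
    """Return the stripped tag names, or raise ValueError."""
    allnames = [t.strip() for t in (name1,) + names]
    if len(allnames) != len(set(allnames)):
        raise ValueError('tag names must be unique')
    for n in allnames:
        if not n:
            raise ValueError('tag names cannot consist entirely of '
                             'whitespace')
    return allnames
```

Note that, as in the real command, stripping happens before the uniqueness check, so `'a'` and `' a '` count as duplicates.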
@command('tags', formatteropts, '')
def tags(ui, repo, **opts):
    """list repository tags

    This lists both regular and local tags. When the -v/--verbose
    switch is used, a third column "local" is printed for local tags.
    When the -q/--quiet switch is used, only the tag name is printed.

    Returns 0 on success.
    """

    opts = pycompat.byteskwargs(opts)
    ui.pager('tags')
    fm = ui.formatter('tags', opts)
    hexfunc = fm.hexfunc
    tagtype = ""

    for t, n in reversed(repo.tagslist()):
        hn = hexfunc(n)
        label = 'tags.normal'
        tagtype = ''
        if repo.tagtype(t) == 'local':
            label = 'tags.local'
            tagtype = 'local'

        fm.startitem()
        fm.write('tag', '%s', t, label=label)
        fmt = " " * (30 - encoding.colwidth(t)) + ' %5d:%s'
        fm.condwrite(not ui.quiet, 'rev node', fmt,
                     repo.changelog.rev(n), hn, label=label)
        fm.condwrite(ui.verbose and tagtype, 'type', ' %s',
                     tagtype, label=label)
        fm.plain('\n')
    fm.end()

@command('tip',
    [('p', 'patch', None, _('show patch')),
    ('g', 'git', None, _('use git extended diff format')),
    ] + templateopts,
    _('[-p] [-g]'))
def tip(ui, repo, **opts):
    """show the tip revision (DEPRECATED)

    The tip revision (usually just called the tip) is the changeset
    most recently added to the repository (and therefore the most
    recently changed head).

    If you have just made a commit, that commit will be the tip. If
    you have just pulled changes from another repository, the tip of
    that repository becomes the current tip. The "tip" tag is special
    and cannot be renamed or assigned to a different changeset.

    This command is deprecated, please use :hg:`heads` instead.

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    displayer = cmdutil.show_changeset(ui, repo, opts)
    displayer.show(repo['tip'])
    displayer.close()

@command('unbundle',
    [('u', 'update', None,
     _('update to new branch head if changesets were unbundled'))],
    _('[-u] FILE...'))
def unbundle(ui, repo, fname1, *fnames, **opts):
    """apply one or more bundle files

    Apply one or more bundle files generated by :hg:`bundle`.

    Returns 0 on success, 1 if an update has unresolved files.
    """
    fnames = (fname1,) + fnames

    with repo.lock():
        for fname in fnames:
            f = hg.openpath(ui, fname)
            gen = exchange.readbundle(ui, f, fname)
            if isinstance(gen, streamclone.streamcloneapplier):
                raise error.Abort(
                        _('packed bundles cannot be applied with '
                          '"hg unbundle"'),
                        hint=_('use "hg debugapplystreamclonebundle"'))
            url = 'bundle:' + fname
            if isinstance(gen, bundle2.unbundle20):
                with repo.transaction('unbundle') as tr:
                    try:
                        op = bundle2.applybundle(repo, gen, tr,
                                                 source='unbundle',
                                                 url=url)
                    except error.BundleUnknownFeatureError as exc:
                        raise error.Abort(
                            _('%s: unknown bundle feature, %s') % (fname, exc),
                            hint=_("see https://mercurial-scm.org/"
                                   "wiki/BundleFeature for more "
                                   "information"))
                changes = [r.get('return', 0)
                           for r in op.records['changegroup']]
                modheads = changegroup.combineresults(changes)
            else:
                txnname = 'unbundle\n%s' % util.hidepassword(url)
                with repo.transaction(txnname) as tr:
                    modheads, addednodes = gen.apply(repo, tr, 'unbundle', url)

    return postincoming(ui, repo, modheads, opts.get(r'update'), None, None)

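The bundle2 branch of `unbundle` above no longer takes a return value directly from `applybundle`; instead it reads each changegroup part's result out of `op.records`, keyed by record category. A minimal sketch of that records pattern (the `records` class below is an assumed simplification for illustration, not Mercurial's real `bundleoperation`/records implementation):

```python
# Assumed, simplified sketch of the 'op.records' pattern used above:
# records map a category name to a list of dicts, one per processed part,
# and the caller collects each changegroup part's 'return' value.
from collections import defaultdict

class records(object):
    def __init__(self):
        self._records = defaultdict(list)

    def add(self, category, entry):
        self._records[category].append(entry)

    def __getitem__(self, category):
        return self._records[category]

op = records()
op.add('changegroup', {'return': 1})
op.add('changegroup', {})  # a part that recorded no return value
# mirror the collection done in unbundle() above
changes = [r.get('return', 0) for r in op['changegroup']]
```

Here `changes` ends up as `[1, 0]`: missing return values default to 0, matching the `r.get('return', 0)` call in the command.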
@command('^update|up|checkout|co',
    [('C', 'clean', None, _('discard uncommitted changes (no backup)')),
    ('c', 'check', None, _('require clean working directory')),
    ('m', 'merge', None, _('merge uncommitted changes')),
    ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
    ('r', 'rev', '', _('revision'), _('REV'))
    ] + mergetoolopts,
    _('[-C|-c|-m] [-d DATE] [[-r] REV]'))
def update(ui, repo, node=None, rev=None, clean=False, date=None, check=False,
           merge=None, tool=None):
    """update working directory (or switch revisions)

    Update the repository's working directory to the specified
    changeset. If no changeset is specified, update to the tip of the
    current named branch and move the active bookmark (see :hg:`help
    bookmarks`).

    Update sets the working directory's parent revision to the specified
    changeset (see :hg:`help parents`).

    If the changeset is not a descendant or ancestor of the working
    directory's parent and there are uncommitted changes, the update is
    aborted. With the -c/--check option, the working directory is checked
    for uncommitted changes; if none are found, the working directory is
    updated to the specified changeset.

    .. container:: verbose

      The -C/--clean, -c/--check, and -m/--merge options control what
      happens if the working directory contains uncommitted changes.
      At most one of them can be specified.

      1. If no option is specified, and if
         the requested changeset is an ancestor or descendant of
         the working directory's parent, the uncommitted changes
         are merged into the requested changeset and the merged
         result is left uncommitted. If the requested changeset is
         not an ancestor or descendant (that is, it is on another
         branch), the update is aborted and the uncommitted changes
         are preserved.

      2. With the -m/--merge option, the update is allowed even if the
         requested changeset is not an ancestor or descendant of
         the working directory's parent.

      3. With the -c/--check option, the update is aborted and the
         uncommitted changes are preserved.

      4. With the -C/--clean option, uncommitted changes are discarded and
         the working directory is updated to the requested changeset.

    To cancel an uncommitted merge (and lose your changes), use
    :hg:`update --clean .`.

    Use null as the changeset to remove the working directory (like
    :hg:`clone -U`).

    If you want to revert just one file to an older revision, use
    :hg:`revert [-r REV] NAME`.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    Returns 0 on success, 1 if there are unresolved files.
    """
    if rev and node:
        raise error.Abort(_("please specify just one revision"))

    if ui.configbool('commands', 'update.requiredest'):
        if not node and not rev and not date:
            raise error.Abort(_('you must specify a destination'),
                              hint=_('for example: hg update ".::"'))

    if rev is None or rev == '':
        rev = node

    if date and rev is not None:
        raise error.Abort(_("you can't specify a revision and a date"))

    if len([x for x in (clean, check, merge) if x]) > 1:
        raise error.Abort(_("can only specify one of -C/--clean, -c/--check, "
                            "or -m/--merge"))

    updatecheck = None
    if check:
        updatecheck = 'abort'
    elif merge:
        updatecheck = 'none'

    with repo.wlock():
        cmdutil.clearunfinished(repo)

        if date:
            rev = cmdutil.finddate(ui, repo, date)

        # if we defined a bookmark, we have to remember the original name
        brev = rev
        rev = scmutil.revsingle(repo, rev, rev).rev()

        repo.ui.setconfig('ui', 'forcemerge', tool, 'update')

        return hg.updatetotally(ui, repo, rev, brev, clean=clean,
                                updatecheck=updatecheck)

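The option handling in `update` above boils down to a small decision: `--clean`, `--check`, and `--merge` are mutually exclusive, and the latter two select an update-check mode. A standalone sketch of that logic (the helper name `pickupdatecheck` is hypothetical, not part of Mercurial):

```python
# Hypothetical helper mirroring update()'s option handling above:
# at most one of clean/check/merge may be set; check selects the
# 'abort' updatecheck, merge selects 'none', otherwise no override.
def pickupdatecheck(clean=False, check=False, merge=False):
    if len([x for x in (clean, check, merge) if x]) > 1:
        raise ValueError('can only specify one of -C/--clean, -c/--check, '
                         'or -m/--merge')
    if check:
        return 'abort'
    if merge:
        return 'none'
    return None
```

Using `len([...]) > 1` rather than chained booleans keeps the check readable and, as in the original, treats any truthy combination of two or more flags as an error.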
@command('verify', [])
def verify(ui, repo):
    """verify the integrity of the repository

    Verify the integrity of the current repository.

    This will perform an extensive check of the repository's
    integrity, validating the hashes and checksums of each entry in
    the changelog, manifest, and tracked files, as well as the
    integrity of their crosslinks and indices.

    Please see https://mercurial-scm.org/wiki/RepositoryCorruption
    for more information about recovery from corruption of the
    repository.

    Returns 0 on success, 1 if errors are encountered.
    """
    return hg.verify(repo)

@command('version', [] + formatteropts, norepo=True)
def version_(ui, **opts):
    """output version and copyright information"""
    opts = pycompat.byteskwargs(opts)
    if ui.verbose:
        ui.pager('version')
    fm = ui.formatter("version", opts)
    fm.startitem()
    fm.write("ver", _("Mercurial Distributed SCM (version %s)\n"),
             util.version())
    license = _(
        "(see https://mercurial-scm.org for more information)\n"
        "\nCopyright (C) 2005-2017 Matt Mackall and others\n"
        "This is free software; see the source for copying conditions. "
        "There is NO\nwarranty; "
        "not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n"
    )
    if not ui.quiet:
        fm.plain(license)

    if ui.verbose:
        fm.plain(_("\nEnabled extensions:\n\n"))
        # format names and versions into columns
        names = []
        vers = []
        isinternals = []
        for name, module in extensions.extensions():
            names.append(name)
            vers.append(extensions.moduleversion(module) or None)
            isinternals.append(extensions.ismoduleinternal(module))
        fn = fm.nested("extensions")
        if names:
            namefmt = "  %%-%ds  " % max(len(n) for n in names)
            places = [_("external"), _("internal")]
            for n, v, p in zip(names, vers, isinternals):
                fn.startitem()
                fn.condwrite(ui.verbose, "name", namefmt, n)
                if ui.verbose:
                    fn.plain("%s  " % places[p])
                fn.data(bundled=p)
                fn.condwrite(ui.verbose and v, "ver", "%s", v)
                if ui.verbose:
                    fn.plain("\n")
        fn.end()
    fm.end()

def loadcmdtable(ui, name, cmdtable):
    """Load command functions from specified cmdtable
    """
    overrides = [cmd for cmd in cmdtable if cmd in table]
    if overrides:
        ui.warn(_("extension '%s' overrides commands: %s\n")
                % (name, " ".join(overrides)))
    table.update(cmdtable)
@@ -1,2012 +1,2013 b''
# exchange.py - utility to exchange data between repos.
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import errno
import hashlib

from .i18n import _
from .node import (
    hex,
    nullid,
)
from . import (
    bookmarks as bookmod,
    bundle2,
    changegroup,
    discovery,
    error,
    lock as lockmod,
    obsolete,
    phases,
    pushkey,
    pycompat,
    scmutil,
    sslutil,
    streamclone,
    url as urlmod,
    util,
)

urlerr = util.urlerr
urlreq = util.urlreq

# Maps bundle version human names to changegroup versions.
_bundlespeccgversions = {'v1': '01',
                         'v2': '02',
                         'packed1': 's1',
                         'bundle2': '02', #legacy
                        }

# Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
_bundlespecv1compengines = {'gzip', 'bzip2', 'none'}

def parsebundlespec(repo, spec, strict=True, externalnames=False):
    """Parse a bundle string specification into parts.

    Bundle specifications denote a well-defined bundle/exchange format.
    The content of a given specification should not change over time in
    order to ensure that bundles produced by a newer version of Mercurial are
    readable from an older version.

    The string currently has the form:

       <compression>-<type>[;<parameter0>[;<parameter1>]]

    Where <compression> is one of the supported compression formats
    and <type> is (currently) a version string. A ";" can follow the type and
    all text afterwards is interpreted as URI encoded, ";" delimited key=value
    pairs.

    If ``strict`` is True (the default) <compression> is required. Otherwise,
    it is optional.

    If ``externalnames`` is False (the default), the human-centric names will
    be converted to their internal representation.

    Returns a 3-tuple of (compression, version, parameters). Compression will
    be ``None`` if not in strict mode and a compression isn't defined.

    An ``InvalidBundleSpecification`` is raised when the specification is
    not syntactically well formed.

    An ``UnsupportedBundleSpecification`` is raised when the compression or
    bundle type/version is not recognized.

    Note: this function will likely eventually return a more complex data
    structure, including bundle2 part information.
    """
    def parseparams(s):
        if ';' not in s:
            return s, {}

        params = {}
        version, paramstr = s.split(';', 1)

        for p in paramstr.split(';'):
            if '=' not in p:
                raise error.InvalidBundleSpecification(
                    _('invalid bundle specification: '
                      'missing "=" in parameter: %s') % p)

            key, value = p.split('=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            params[key] = value

        return version, params

    if strict and '-' not in spec:
        raise error.InvalidBundleSpecification(
                _('invalid bundle specification; '
                  'must be prefixed with compression: %s') % spec)

    if '-' in spec:
        compression, version = spec.split('-', 1)

        if compression not in util.compengines.supportedbundlenames:
            raise error.UnsupportedBundleSpecification(
                    _('%s compression is not supported') % compression)

        version, params = parseparams(version)

        if version not in _bundlespeccgversions:
            raise error.UnsupportedBundleSpecification(
                    _('%s is not a recognized bundle version') % version)
    else:
        # Value could be just the compression or just the version, in which
        # case some defaults are assumed (but only when not in strict mode).
        assert not strict

        spec, params = parseparams(spec)

        if spec in util.compengines.supportedbundlenames:
            compression = spec
            version = 'v1'
            # Generaldelta repos require v2.
            if 'generaldelta' in repo.requirements:
                version = 'v2'
            # Modern compression engines require v2.
            if compression not in _bundlespecv1compengines:
                version = 'v2'
        elif spec in _bundlespeccgversions:
            if spec == 'packed1':
                compression = 'none'
            else:
                compression = 'bzip2'
            version = spec
        else:
            raise error.UnsupportedBundleSpecification(
                    _('%s is not a recognized bundle specification') % spec)

    # Bundle version 1 only supports a known set of compression engines.
    if version == 'v1' and compression not in _bundlespecv1compengines:
        raise error.UnsupportedBundleSpecification(
            _('compression engine %s is not supported on v1 bundles') %
            compression)

    # The specification for packed1 can optionally declare the data formats
    # required to apply it. If we see this metadata, compare against what the
    # repo supports and error if the bundle isn't compatible.
    if version == 'packed1' and 'requirements' in params:
        requirements = set(params['requirements'].split(','))
        missingreqs = requirements - repo.supportedformats
        if missingreqs:
            raise error.UnsupportedBundleSpecification(
                    _('missing support for repository features: %s') %
                    ', '.join(sorted(missingreqs)))

    if not externalnames:
        engine = util.compengines.forbundlename(compression)
167 compression = engine.bundletype()[1]
167 compression = engine.bundletype()[1]
168 version = _bundlespeccgversions[version]
168 version = _bundlespeccgversions[version]
169 return compression, version, params
169 return compression, version, params
170
170
def readbundle(ui, fh, fname, vfs=None):
    header = changegroup.readexactly(fh, 4)

    alg = None
    if not fname:
        fname = "stream"
        if not header.startswith('HG') and header.startswith('\0'):
            fh = changegroup.headerlessfixup(fh, header)
            header = "HG10"
            alg = 'UN'
    elif vfs:
        fname = vfs.join(fname)

    magic, version = header[0:2], header[2:4]

    if magic != 'HG':
        raise error.Abort(_('%s: not a Mercurial bundle') % fname)
    if version == '10':
        if alg is None:
            alg = changegroup.readexactly(fh, 2)
        return changegroup.cg1unpacker(fh, alg)
    elif version.startswith('2'):
        return bundle2.getunbundler(ui, fh, magicstring=magic + version)
    elif version == 'S1':
        return streamclone.streamcloneapplier(fh)
    else:
        raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))

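The dispatch in `readbundle` keys entirely off the first four header bytes. A minimal, self-contained sketch of that mapping (the `classify_bundle_header` name is hypothetical, not part of Mercurial's API):

```python
def classify_bundle_header(header):
    """Map the leading 4 bytes of a bundle to a coarse format name.

    Mirrors readbundle's dispatch: 'HG10' is a changegroup-v1 bundle
    (followed on the wire by a 2-byte compression code such as 'UN',
    'GZ' or 'BZ'), 'HG2x' is a bundle2 stream, and 'HGS1' is a
    stream-clone bundle.
    """
    if not header.startswith('HG'):
        # readbundle treats a leading NUL as a headerless changegroup
        return 'headerless-or-invalid'
    version = header[2:4]
    if version == '10':
        return 'bundle1'
    elif version.startswith('2'):
        return 'bundle2'
    elif version == 'S1':
        return 'streamclone'
    return 'unknown'
```

Note how `readbundle` itself only reads the compression code for bundle1; bundle2 carries its compression as a stream-level parameter instead.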
def getbundlespec(ui, fh):
    """Infer the bundlespec from a bundle file handle.

    The input file handle is seeked and the original seek position is not
    restored.
    """
    def speccompression(alg):
        try:
            return util.compengines.forbundletype(alg).bundletype()[0]
        except KeyError:
            return None

    b = readbundle(ui, fh, None)
    if isinstance(b, changegroup.cg1unpacker):
        alg = b._type
        if alg == '_truncatedBZ':
            alg = 'BZ'
        comp = speccompression(alg)
        if not comp:
            raise error.Abort(_('unknown compression algorithm: %s') % alg)
        return '%s-v1' % comp
    elif isinstance(b, bundle2.unbundle20):
        if 'Compression' in b.params:
            comp = speccompression(b.params['Compression'])
            if not comp:
                raise error.Abort(_('unknown compression algorithm: %s') % comp)
        else:
            comp = 'none'

        version = None
        for part in b.iterparts():
            if part.type == 'changegroup':
                version = part.params['version']
                if version in ('01', '02'):
                    version = 'v2'
                else:
                    raise error.Abort(_('changegroup version %s does not have '
                                        'a known bundlespec') % version,
                                      hint=_('try upgrading your Mercurial '
                                             'client'))

        if not version:
            raise error.Abort(_('could not identify changegroup version in '
                                'bundle'))

        return '%s-%s' % (comp, version)
    elif isinstance(b, streamclone.streamcloneapplier):
        requirements = streamclone.readbundle1header(fh)[2]
        params = 'requirements=%s' % ','.join(sorted(requirements))
        return 'none-packed1;%s' % urlreq.quote(params)
    else:
        raise error.Abort(_('unknown bundle type: %s') % b)

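`getbundlespec` always emits strings of the form `<compression>-<version>`, optionally followed by `;key=value` parameters (the packed1 branch URL-quotes the parameter string, which this sketch skips). `formatbundlespec` is a hypothetical helper for illustration only:

```python
def formatbundlespec(comp, version, params=None):
    # '<compression>-<version>' plus optional ';k=v,k=v' parameters,
    # matching the 'none-packed1;requirements=...' shape above.
    # Assumption: params values need no URL quoting (the real code
    # quotes them with urlreq.quote).
    spec = '%s-%s' % (comp, version)
    if params:
        spec += ';' + ','.join('%s=%s' % (k, v)
                               for k, v in sorted(params.items()))
    return spec
```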
def _computeoutgoing(repo, heads, common):
    """Computes which revs are outgoing given a set of common
    and a set of heads.

    This is a separate function so extensions can have access to
    the logic.

    Returns a discovery.outgoing object.
    """
    cl = repo.changelog
    if common:
        hasnode = cl.hasnode
        common = [n for n in common if hasnode(n)]
    else:
        common = [nullid]
    if not heads:
        heads = cl.heads()
    return discovery.outgoing(repo, common, heads)

def _forcebundle1(op):
    """return true if a pull/push must use bundle1

    This function is used to allow testing of the older bundle version"""
    ui = op.repo.ui
    forcebundle1 = False
    # The goal of this config is to let developers choose the bundle
    # version used during exchange. This is especially handy for tests.
    # Value is a list of bundle versions to pick from; the highest
    # version should be used.
    #
    # developer config: devel.legacy.exchange
    exchange = ui.configlist('devel', 'legacy.exchange')
    forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
    return forcebundle1 or not op.remote.capable('bundle2')

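The decision in `_forcebundle1` reduces to a small boolean function of the `devel.legacy.exchange` list and the remote's capabilities; sketched standalone (the function and parameter names are illustrative, not Mercurial API):

```python
def forces_bundle1(exchange_cfg, remote_has_bundle2):
    # bundle1 is forced when the config names 'bundle1' without also
    # naming 'bundle2', or when the remote lacks bundle2 entirely.
    forced = ('bundle2' not in exchange_cfg
              and 'bundle1' in exchange_cfg)
    return forced or not remote_has_bundle2
```

Listing both versions in the config therefore behaves the same as listing neither: the highest mutually supported version wins.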
class pushoperation(object):
    """An object that represents a single push operation.

    Its purpose is to carry push related state and very common operations.

    A new pushoperation should be created at the beginning of each push and
    discarded afterward.
    """

    def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
                 bookmarks=()):
        # repo we push from
        self.repo = repo
        self.ui = repo.ui
        # repo we push to
        self.remote = remote
        # force option provided
        self.force = force
        # revs to be pushed (None is "all")
        self.revs = revs
        # bookmarks explicitly pushed
        self.bookmarks = bookmarks
        # allow push of new branch
        self.newbranch = newbranch
        # did a local lock get acquired?
        self.locallocked = None
        # steps already performed
        # (used to check what steps have been already performed through bundle2)
        self.stepsdone = set()
        # Integer version of the changegroup push result
        # - None means nothing to push
        # - 0 means HTTP error
        # - 1 means we pushed and remote head count is unchanged *or*
        #   we have outgoing changesets but refused to push
        # - other values as described by addchangegroup()
        self.cgresult = None
        # Boolean value for the bookmark push
        self.bkresult = None
        # discovery.outgoing object (contains common and outgoing data)
        self.outgoing = None
        # all remote topological heads before the push
        self.remoteheads = None
        # Details of the remote branch pre and post push
        #
        # mapping: {'branch': ([remoteheads],
        #                      [newheads],
        #                      [unsyncedheads],
        #                      [discardedheads])}
        # - branch: the branch name
        # - remoteheads: the list of remote heads known locally
        #                None if the branch is new
        # - newheads: the new remote heads (known locally) with outgoing pushed
        # - unsyncedheads: the list of remote heads unknown locally.
        # - discardedheads: the list of remote heads made obsolete by the push
        self.pushbranchmap = None
        # testable as a boolean indicating if any nodes are missing locally.
        self.incoming = None
        # phase changes that must be pushed alongside the changesets
        self.outdatedphases = None
        # phase changes that must be pushed if changeset push fails
        self.fallbackoutdatedphases = None
        # outgoing obsmarkers
        self.outobsmarkers = set()
        # outgoing bookmarks
        self.outbookmarks = []
        # transaction manager
        self.trmanager = None
        # map { pushkey partid -> callback handling failure}
        # used to handle exception from mandatory pushkey part failure
        self.pkfailcb = {}

    @util.propertycache
    def futureheads(self):
        """future remote heads if the changeset push succeeds"""
        return self.outgoing.missingheads

    @util.propertycache
    def fallbackheads(self):
        """future remote heads if the changeset push fails"""
        if self.revs is None:
            # no target revs to push; all common heads are relevant
            return self.outgoing.commonheads
        unfi = self.repo.unfiltered()
        # I want cheads = heads(::missingheads and ::commonheads)
        # (missingheads is revs with secret changeset filtered out)
        #
        # This can be expressed as:
        #     cheads = ( (missingheads and ::commonheads)
        #              + (commonheads and ::missingheads))
        #
        # while trying to push we already computed the following:
        #     common = (::commonheads)
        #     missing = ((commonheads::missingheads) - commonheads)
        #
        # We can pick:
        # * missingheads part of common (::commonheads)
        common = self.outgoing.common
        nm = self.repo.changelog.nodemap
        cheads = [node for node in self.revs if nm[node] in common]
        # and
        # * commonheads parents on missing
        revset = unfi.set('%ln and parents(roots(%ln))',
                          self.outgoing.commonheads,
                          self.outgoing.missing)
        cheads.extend(c.node() for c in revset)
        return cheads

    @property
    def commonheads(self):
        """set of all common heads after changeset bundle push"""
        if self.cgresult:
            return self.futureheads
        else:
            return self.fallbackheads

# mapping of messages used when pushing bookmarks
bookmsgmap = {'update': (_("updating bookmark %s\n"),
                         _('updating bookmark %s failed!\n')),
              'export': (_("exporting bookmark %s\n"),
                         _('exporting bookmark %s failed!\n')),
              'delete': (_("deleting remote bookmark %s\n"),
                         _('deleting remote bookmark %s failed!\n')),
              }


def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
         opargs=None):
    '''Push outgoing changesets (limited by revs) from a local
    repository to remote. Return an integer:
      - None means nothing to push
      - 0 means HTTP error
      - 1 means we pushed and remote head count is unchanged *or*
        we have outgoing changesets but refused to push
      - other values as described by addchangegroup()
    '''
    if opargs is None:
        opargs = {}
    pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
                           **opargs)
    if pushop.remote.local():
        missing = (set(pushop.repo.requirements)
                   - pushop.remote.local().supported)
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    # there are two ways to push to remote repo:
    #
    # addchangegroup assumes local user can lock remote
    # repo (local filesystem, old ssh servers).
    #
    # unbundle assumes local user cannot lock remote repo (new ssh
    # servers, http servers).

    if not pushop.remote.canpush():
        raise error.Abort(_("destination does not support push"))
    # get local lock as we might write phase data
    localwlock = locallock = None
    try:
        # bundle2 push may receive a reply bundle touching bookmarks or other
        # things requiring the wlock. Take it now to ensure proper ordering.
        maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
        if (not _forcebundle1(pushop)) and maypushback:
            localwlock = pushop.repo.wlock()
        locallock = pushop.repo.lock()
        pushop.locallocked = True
    except IOError as err:
        pushop.locallocked = False
        if err.errno != errno.EACCES:
            raise
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = 'cannot lock source repository: %s\n' % err
        pushop.ui.debug(msg)
    try:
        if pushop.locallocked:
            pushop.trmanager = transactionmanager(pushop.repo,
                                                  'push-response',
                                                  pushop.remote.url())
        pushop.repo.checkpush(pushop)
        lock = None
        unbundle = pushop.remote.capable('unbundle')
        if not unbundle:
            lock = pushop.remote.lock()
        try:
            _pushdiscovery(pushop)
            if not _forcebundle1(pushop):
                _pushbundle2(pushop)
            _pushchangeset(pushop)
            _pushsyncphase(pushop)
            _pushobsolete(pushop)
            _pushbookmark(pushop)
        finally:
            if lock is not None:
                lock.release()
        if pushop.trmanager:
            pushop.trmanager.close()
    finally:
        if pushop.trmanager:
            pushop.trmanager.release()
        if locallock is not None:
            locallock.release()
        if localwlock is not None:
            localwlock.release()

    return pushop

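The integer contract documented on `push()` and `pushoperation.cgresult` can be restated as a small interpreter (`describepushresult` is hypothetical, for illustration only):

```python
def describepushresult(cgresult):
    # Interpret the integer documented on push()/pushoperation.cgresult.
    if cgresult is None:
        return 'nothing to push'
    if cgresult == 0:
        return 'HTTP error'
    if cgresult == 1:
        return 'pushed; remote head count unchanged, or push refused'
    # other values are defined by addchangegroup()
    return 'pushed; addchangegroup() result %d' % cgresult
```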
# list of steps to perform discovery before push
pushdiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pushdiscoverymapping = {}

def pushdiscovery(stepname):
    """decorator for functions performing discovery before push

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a
    step from an extension, change the pushdiscoverymapping dictionary
    directly."""
    def dec(func):
        assert stepname not in pushdiscoverymapping
        pushdiscoverymapping[stepname] = func
        pushdiscoveryorder.append(stepname)
        return func
    return dec

def _pushdiscovery(pushop):
    """Run all discovery steps"""
    for stepname in pushdiscoveryorder:
        step = pushdiscoverymapping[stepname]
        step(pushop)

@pushdiscovery('changeset')
def _pushdiscoverychangeset(pushop):
    """discover the changesets that need to be pushed"""
    fci = discovery.findcommonincoming
    commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
                   commoninc=commoninc, force=pushop.force)
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc

@pushdiscovery('phase')
def _pushdiscoveryphase(pushop):
    """discover the phases that need to be pushed

    (computed for both success and failure case for changesets push)"""
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    remotephases = pushop.remote.listkeys('phases')
    publishing = remotephases.get('publishing', False)
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and not pushop.outgoing.missing # no changesets to be pushed
        and publishing):
        # When:
        # - this is a subrepo push
        # - and remote supports phases
        # - and no changesets are to be pushed
        # - and remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    ana = phases.analyzeremotephases(pushop.repo,
                                     pushop.fallbackheads,
                                     remotephases)
    pheads, droots = ana
    extracond = ''
    if not publishing:
        extracond = ' and public()'
    revset = 'heads((%%ln::%%ln) %s)' % extracond
    # Get the list of all revs draft on remote but public here.
    # XXX Beware that the revset breaks if droots are not strictly
    # XXX roots; we may want to ensure they are, but it is costly.
    fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
    if not outgoing.missing:
        future = fallback
    else:
        # adds changesets we are going to push as draft
        #
        # should not be necessary for publishing server, but because of an
        # issue fixed in xxxxx we have to do it anyway.
        fdroots = list(unfi.set('roots(%ln + %ln::)',
                                outgoing.missing, droots))
        fdroots = [f.node() for f in fdroots]
        future = list(unfi.set(revset, fdroots, pushop.futureheads))
    pushop.outdatedphases = future
    pushop.fallbackoutdatedphases = fallback

@pushdiscovery('obsmarker')
def _pushdiscoveryobsmarkers(pushop):
    if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
        and pushop.repo.obsstore
        and 'obsolete' in pushop.remote.listkeys('namespaces')):
        repo = pushop.repo
        # very naive computation that can be quite expensive on big repos.
        # However: evolution is currently slow on them anyway.
        nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
        pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)

601 @pushdiscovery('bookmarks')
601 @pushdiscovery('bookmarks')
602 def _pushdiscoverybookmarks(pushop):
602 def _pushdiscoverybookmarks(pushop):
603 ui = pushop.ui
603 ui = pushop.ui
604 repo = pushop.repo.unfiltered()
604 repo = pushop.repo.unfiltered()
605 remote = pushop.remote
605 remote = pushop.remote
606 ui.debug("checking for updated bookmarks\n")
606 ui.debug("checking for updated bookmarks\n")
607 ancestors = ()
607 ancestors = ()
608 if pushop.revs:
608 if pushop.revs:
609 revnums = map(repo.changelog.rev, pushop.revs)
609 revnums = map(repo.changelog.rev, pushop.revs)
610 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
610 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
611 remotebookmark = remote.listkeys('bookmarks')
611 remotebookmark = remote.listkeys('bookmarks')
612
612
613 explicit = set([repo._bookmarks.expandname(bookmark)
613 explicit = set([repo._bookmarks.expandname(bookmark)
614 for bookmark in pushop.bookmarks])
614 for bookmark in pushop.bookmarks])
615
615
616 remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
616 remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
617 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
617 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
618
618
619 def safehex(x):
619 def safehex(x):
620 if x is None:
620 if x is None:
621 return x
621 return x
622 return hex(x)
622 return hex(x)
623
623
624 def hexifycompbookmarks(bookmarks):
624 def hexifycompbookmarks(bookmarks):
625 for b, scid, dcid in bookmarks:
625 for b, scid, dcid in bookmarks:
626 yield b, safehex(scid), safehex(dcid)
626 yield b, safehex(scid), safehex(dcid)
627
627
628 comp = [hexifycompbookmarks(marks) for marks in comp]
628 comp = [hexifycompbookmarks(marks) for marks in comp]
629 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
629 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
630
630
631 for b, scid, dcid in advsrc:
631 for b, scid, dcid in advsrc:
632 if b in explicit:
632 if b in explicit:
633 explicit.remove(b)
633 explicit.remove(b)
634 if not ancestors or repo[scid].rev() in ancestors:
634 if not ancestors or repo[scid].rev() in ancestors:
635 pushop.outbookmarks.append((b, dcid, scid))
635 pushop.outbookmarks.append((b, dcid, scid))
636 # search added bookmark
636 # search added bookmark
637 for b, scid, dcid in addsrc:
637 for b, scid, dcid in addsrc:
638 if b in explicit:
638 if b in explicit:
639 explicit.remove(b)
639 explicit.remove(b)
640 pushop.outbookmarks.append((b, '', scid))
640 pushop.outbookmarks.append((b, '', scid))
641 # search for overwritten bookmark
641 # search for overwritten bookmark
642 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
642 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
643 if b in explicit:
643 if b in explicit:
644 explicit.remove(b)
644 explicit.remove(b)
645 pushop.outbookmarks.append((b, dcid, scid))
645 pushop.outbookmarks.append((b, dcid, scid))
646 # search for bookmark to delete
646 # search for bookmark to delete
647 for b, scid, dcid in adddst:
647 for b, scid, dcid in adddst:
648 if b in explicit:
648 if b in explicit:
649 explicit.remove(b)
649 explicit.remove(b)
650 # treat as "deleted locally"
650 # treat as "deleted locally"
651 pushop.outbookmarks.append((b, dcid, ''))
651 pushop.outbookmarks.append((b, dcid, ''))
652 # identical bookmarks shouldn't get reported
652 # identical bookmarks shouldn't get reported
653 for b, scid, dcid in same:
653 for b, scid, dcid in same:
654 if b in explicit:
654 if b in explicit:
655 explicit.remove(b)
655 explicit.remove(b)
656
656
657 if explicit:
657 if explicit:
658 explicit = sorted(explicit)
658 explicit = sorted(explicit)
659 # we should probably list all of them
659 # we should probably list all of them
660 ui.warn(_('bookmark %s does not exist on the local '
660 ui.warn(_('bookmark %s does not exist on the local '
661 'or remote repository!\n') % explicit[0])
661 'or remote repository!\n') % explicit[0])
662 pushop.bkresult = 2
662 pushop.bkresult = 2
663
663
664 pushop.outbookmarks.sort()
664 pushop.outbookmarks.sort()
665
665
666 def _pushcheckoutgoing(pushop):
666 def _pushcheckoutgoing(pushop):
667 outgoing = pushop.outgoing
667 outgoing = pushop.outgoing
668 unfi = pushop.repo.unfiltered()
668 unfi = pushop.repo.unfiltered()
669 if not outgoing.missing:
669 if not outgoing.missing:
670 # nothing to push
670 # nothing to push
671 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
671 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
672 return False
672 return False
673 # something to push
673 # something to push
674 if not pushop.force:
674 if not pushop.force:
675 # if repo.obsstore == False --> no obsolete
675 # if repo.obsstore == False --> no obsolete
676 # then, save the iteration
676 # then, save the iteration
677 if unfi.obsstore:
677 if unfi.obsstore:
678 # this message are here for 80 char limit reason
678 # this message are here for 80 char limit reason
679 mso = _("push includes obsolete changeset: %s!")
679 mso = _("push includes obsolete changeset: %s!")
680 mst = {"unstable": _("push includes unstable changeset: %s!"),
680 mst = {"unstable": _("push includes unstable changeset: %s!"),
681 "bumped": _("push includes bumped changeset: %s!"),
681 "bumped": _("push includes bumped changeset: %s!"),
682 "divergent": _("push includes divergent changeset: %s!")}
682 "divergent": _("push includes divergent changeset: %s!")}
683 # If we are to push if there is at least one
683 # If we are to push if there is at least one
684 # obsolete or unstable changeset in missing, at
684 # obsolete or unstable changeset in missing, at
685 # least one of the missinghead will be obsolete or
685 # least one of the missinghead will be obsolete or
686 # unstable. So checking heads only is ok
686 # unstable. So checking heads only is ok
687 for node in outgoing.missingheads:
687 for node in outgoing.missingheads:
688 ctx = unfi[node]
688 ctx = unfi[node]
689 if ctx.obsolete():
689 if ctx.obsolete():
690 raise error.Abort(mso % ctx)
690 raise error.Abort(mso % ctx)
691 elif ctx.troubled():
691 elif ctx.troubled():
692 raise error.Abort(mst[ctx.troubles()[0]] % ctx)
692 raise error.Abort(mst[ctx.troubles()[0]] % ctx)
693
693
694 discovery.checkheads(pushop)
694 discovery.checkheads(pushop)
695 return True
695 return True
696
696
697 # List of names of steps to perform for an outgoing bundle2, order matters.
697 # List of names of steps to perform for an outgoing bundle2, order matters.
698 b2partsgenorder = []
698 b2partsgenorder = []
699
699
700 # Mapping between step name and function
700 # Mapping between step name and function
701 #
701 #
702 # This exists to help extensions wrap steps if necessary
702 # This exists to help extensions wrap steps if necessary
703 b2partsgenmapping = {}
703 b2partsgenmapping = {}
704
704
705 def b2partsgenerator(stepname, idx=None):
705 def b2partsgenerator(stepname, idx=None):
706 """decorator for function generating bundle2 part
706 """decorator for function generating bundle2 part
707
707
708 The function is added to the step -> function mapping and appended to the
708 The function is added to the step -> function mapping and appended to the
709 list of steps. Beware that decorated functions will be added in order
709 list of steps. Beware that decorated functions will be added in order
710 (this may matter).
710 (this may matter).
711
711
712 You can only use this decorator for new steps, if you want to wrap a step
712 You can only use this decorator for new steps, if you want to wrap a step
713 from an extension, attack the b2partsgenmapping dictionary directly."""
713 from an extension, attack the b2partsgenmapping dictionary directly."""
714 def dec(func):
714 def dec(func):
715 assert stepname not in b2partsgenmapping
715 assert stepname not in b2partsgenmapping
716 b2partsgenmapping[stepname] = func
716 b2partsgenmapping[stepname] = func
717 if idx is None:
717 if idx is None:
718 b2partsgenorder.append(stepname)
718 b2partsgenorder.append(stepname)
719 else:
719 else:
720 b2partsgenorder.insert(idx, stepname)
720 b2partsgenorder.insert(idx, stepname)
721 return func
721 return func
722 return dec
722 return dec
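The registration mechanism in `b2partsgenerator` can be sketched in isolation. This is a hedged miniature with standalone copies of the two registries and made-up step names ('first', 'early'), not Mercurial's actual module state; it only demonstrates how `idx` controls ordering:

```python
# Illustrative stand-ins for the module-level registries.
b2partsgenorder = []    # ordered list of step names
b2partsgenmapping = {}  # step name -> part-generating function

def b2partsgenerator(stepname, idx=None):
    """Register func under stepname; append, or insert at idx if given."""
    def dec(func):
        assert stepname not in b2partsgenmapping
        b2partsgenmapping[stepname] = func
        if idx is None:
            b2partsgenorder.append(stepname)
        else:
            b2partsgenorder.insert(idx, stepname)
        return func
    return dec

@b2partsgenerator('first')
def genfirst(pushop, bundler):
    return None

# idx=0 pushes this step to the front of the order.
@b2partsgenerator('early', idx=0)
def genearly(pushop, bundler):
    return None

print(b2partsgenorder)  # ['early', 'first']
```

As the docstring notes, an extension that wants to wrap an *existing* step would instead replace its entry in `b2partsgenmapping` rather than re-registering the name.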
723
723
724 def _pushb2ctxcheckheads(pushop, bundler):
724 def _pushb2ctxcheckheads(pushop, bundler):
725 """Generate race condition checking parts
725 """Generate race condition checking parts
726
726
727 Exists as an independent function to aid extensions
727 Exists as an independent function to aid extensions
728 """
728 """
729 # * 'force' do not check for push race,
729 # * 'force' do not check for push race,
730 # * if we don't push anything, there are nothing to check.
730 # * if we don't push anything, there are nothing to check.
731 if not pushop.force and pushop.outgoing.missingheads:
731 if not pushop.force and pushop.outgoing.missingheads:
732 allowunrelated = 'related' in bundler.capabilities.get('checkheads', ())
732 allowunrelated = 'related' in bundler.capabilities.get('checkheads', ())
733 if not allowunrelated:
733 if not allowunrelated:
734 bundler.newpart('check:heads', data=iter(pushop.remoteheads))
734 bundler.newpart('check:heads', data=iter(pushop.remoteheads))
735 else:
735 else:
736 affected = set()
736 affected = set()
737 for branch, heads in pushop.pushbranchmap.iteritems():
737 for branch, heads in pushop.pushbranchmap.iteritems():
738 remoteheads, newheads, unsyncedheads, discardedheads = heads
738 remoteheads, newheads, unsyncedheads, discardedheads = heads
739 if remoteheads is not None:
739 if remoteheads is not None:
740 remote = set(remoteheads)
740 remote = set(remoteheads)
741 affected |= set(discardedheads) & remote
741 affected |= set(discardedheads) & remote
742 affected |= remote - set(newheads)
742 affected |= remote - set(newheads)
743 if affected:
743 if affected:
744 data = iter(sorted(affected))
744 data = iter(sorted(affected))
745 bundler.newpart('check:updated-heads', data=data)
745 bundler.newpart('check:updated-heads', data=data)
746
746
747 @b2partsgenerator('changeset')
747 @b2partsgenerator('changeset')
748 def _pushb2ctx(pushop, bundler):
748 def _pushb2ctx(pushop, bundler):
749 """handle changegroup push through bundle2
749 """handle changegroup push through bundle2
750
750
751 addchangegroup result is stored in the ``pushop.cgresult`` attribute.
751 addchangegroup result is stored in the ``pushop.cgresult`` attribute.
752 """
752 """
753 if 'changesets' in pushop.stepsdone:
753 if 'changesets' in pushop.stepsdone:
754 return
754 return
755 pushop.stepsdone.add('changesets')
755 pushop.stepsdone.add('changesets')
756 # Send known heads to the server for race detection.
756 # Send known heads to the server for race detection.
757 if not _pushcheckoutgoing(pushop):
757 if not _pushcheckoutgoing(pushop):
758 return
758 return
759 pushop.repo.prepushoutgoinghooks(pushop)
759 pushop.repo.prepushoutgoinghooks(pushop)
760
760
761 _pushb2ctxcheckheads(pushop, bundler)
761 _pushb2ctxcheckheads(pushop, bundler)
762
762
763 b2caps = bundle2.bundle2caps(pushop.remote)
763 b2caps = bundle2.bundle2caps(pushop.remote)
764 version = '01'
764 version = '01'
765 cgversions = b2caps.get('changegroup')
765 cgversions = b2caps.get('changegroup')
766 if cgversions: # 3.1 and 3.2 ship with an empty value
766 if cgversions: # 3.1 and 3.2 ship with an empty value
767 cgversions = [v for v in cgversions
767 cgversions = [v for v in cgversions
768 if v in changegroup.supportedoutgoingversions(
768 if v in changegroup.supportedoutgoingversions(
769 pushop.repo)]
769 pushop.repo)]
770 if not cgversions:
770 if not cgversions:
771 raise ValueError(_('no common changegroup version'))
771 raise ValueError(_('no common changegroup version'))
772 version = max(cgversions)
772 version = max(cgversions)
773 cg = changegroup.getlocalchangegroupraw(pushop.repo, 'push',
773 cg = changegroup.getlocalchangegroupraw(pushop.repo, 'push',
774 pushop.outgoing,
774 pushop.outgoing,
775 version=version)
775 version=version)
776 cgpart = bundler.newpart('changegroup', data=cg)
776 cgpart = bundler.newpart('changegroup', data=cg)
777 if cgversions:
777 if cgversions:
778 cgpart.addparam('version', version)
778 cgpart.addparam('version', version)
779 if 'treemanifest' in pushop.repo.requirements:
779 if 'treemanifest' in pushop.repo.requirements:
780 cgpart.addparam('treemanifest', '1')
780 cgpart.addparam('treemanifest', '1')
781 def handlereply(op):
781 def handlereply(op):
782 """extract addchangegroup returns from server reply"""
782 """extract addchangegroup returns from server reply"""
783 cgreplies = op.records.getreplies(cgpart.id)
783 cgreplies = op.records.getreplies(cgpart.id)
784 assert len(cgreplies['changegroup']) == 1
784 assert len(cgreplies['changegroup']) == 1
785 pushop.cgresult = cgreplies['changegroup'][0]['return']
785 pushop.cgresult = cgreplies['changegroup'][0]['return']
786 return handlereply
786 return handlereply
787
787
788 @b2partsgenerator('phase')
788 @b2partsgenerator('phase')
789 def _pushb2phases(pushop, bundler):
789 def _pushb2phases(pushop, bundler):
790 """handle phase push through bundle2"""
790 """handle phase push through bundle2"""
791 if 'phases' in pushop.stepsdone:
791 if 'phases' in pushop.stepsdone:
792 return
792 return
793 b2caps = bundle2.bundle2caps(pushop.remote)
793 b2caps = bundle2.bundle2caps(pushop.remote)
794 if not 'pushkey' in b2caps:
794 if not 'pushkey' in b2caps:
795 return
795 return
796 pushop.stepsdone.add('phases')
796 pushop.stepsdone.add('phases')
797 part2node = []
797 part2node = []
798
798
799 def handlefailure(pushop, exc):
799 def handlefailure(pushop, exc):
800 targetid = int(exc.partid)
800 targetid = int(exc.partid)
801 for partid, node in part2node:
801 for partid, node in part2node:
802 if partid == targetid:
802 if partid == targetid:
803 raise error.Abort(_('updating %s to public failed') % node)
803 raise error.Abort(_('updating %s to public failed') % node)
804
804
805 enc = pushkey.encode
805 enc = pushkey.encode
806 for newremotehead in pushop.outdatedphases:
806 for newremotehead in pushop.outdatedphases:
807 part = bundler.newpart('pushkey')
807 part = bundler.newpart('pushkey')
808 part.addparam('namespace', enc('phases'))
808 part.addparam('namespace', enc('phases'))
809 part.addparam('key', enc(newremotehead.hex()))
809 part.addparam('key', enc(newremotehead.hex()))
810 part.addparam('old', enc(str(phases.draft)))
810 part.addparam('old', enc(str(phases.draft)))
811 part.addparam('new', enc(str(phases.public)))
811 part.addparam('new', enc(str(phases.public)))
812 part2node.append((part.id, newremotehead))
812 part2node.append((part.id, newremotehead))
813 pushop.pkfailcb[part.id] = handlefailure
813 pushop.pkfailcb[part.id] = handlefailure
814
814
815 def handlereply(op):
815 def handlereply(op):
816 for partid, node in part2node:
816 for partid, node in part2node:
817 partrep = op.records.getreplies(partid)
817 partrep = op.records.getreplies(partid)
818 results = partrep['pushkey']
818 results = partrep['pushkey']
819 assert len(results) <= 1
819 assert len(results) <= 1
820 msg = None
820 msg = None
821 if not results:
821 if not results:
822 msg = _('server ignored update of %s to public!\n') % node
822 msg = _('server ignored update of %s to public!\n') % node
823 elif not int(results[0]['return']):
823 elif not int(results[0]['return']):
824 msg = _('updating %s to public failed!\n') % node
824 msg = _('updating %s to public failed!\n') % node
825 if msg is not None:
825 if msg is not None:
826 pushop.ui.warn(msg)
826 pushop.ui.warn(msg)
827 return handlereply
827 return handlereply
828
828
829 @b2partsgenerator('obsmarkers')
829 @b2partsgenerator('obsmarkers')
830 def _pushb2obsmarkers(pushop, bundler):
830 def _pushb2obsmarkers(pushop, bundler):
831 if 'obsmarkers' in pushop.stepsdone:
831 if 'obsmarkers' in pushop.stepsdone:
832 return
832 return
833 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
833 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
834 if obsolete.commonversion(remoteversions) is None:
834 if obsolete.commonversion(remoteversions) is None:
835 return
835 return
836 pushop.stepsdone.add('obsmarkers')
836 pushop.stepsdone.add('obsmarkers')
837 if pushop.outobsmarkers:
837 if pushop.outobsmarkers:
838 markers = sorted(pushop.outobsmarkers)
838 markers = sorted(pushop.outobsmarkers)
839 bundle2.buildobsmarkerspart(bundler, markers)
839 bundle2.buildobsmarkerspart(bundler, markers)
840
840
841 @b2partsgenerator('bookmarks')
841 @b2partsgenerator('bookmarks')
842 def _pushb2bookmarks(pushop, bundler):
842 def _pushb2bookmarks(pushop, bundler):
843 """handle bookmark push through bundle2"""
843 """handle bookmark push through bundle2"""
844 if 'bookmarks' in pushop.stepsdone:
844 if 'bookmarks' in pushop.stepsdone:
845 return
845 return
846 b2caps = bundle2.bundle2caps(pushop.remote)
846 b2caps = bundle2.bundle2caps(pushop.remote)
847 if 'pushkey' not in b2caps:
847 if 'pushkey' not in b2caps:
848 return
848 return
849 pushop.stepsdone.add('bookmarks')
849 pushop.stepsdone.add('bookmarks')
850 part2book = []
850 part2book = []
851 enc = pushkey.encode
851 enc = pushkey.encode
852
852
853 def handlefailure(pushop, exc):
853 def handlefailure(pushop, exc):
854 targetid = int(exc.partid)
854 targetid = int(exc.partid)
855 for partid, book, action in part2book:
855 for partid, book, action in part2book:
856 if partid == targetid:
856 if partid == targetid:
857 raise error.Abort(bookmsgmap[action][1].rstrip() % book)
857 raise error.Abort(bookmsgmap[action][1].rstrip() % book)
858 # we should not be called for part we did not generated
858 # we should not be called for part we did not generated
859 assert False
859 assert False
860
860
861 for book, old, new in pushop.outbookmarks:
861 for book, old, new in pushop.outbookmarks:
862 part = bundler.newpart('pushkey')
862 part = bundler.newpart('pushkey')
863 part.addparam('namespace', enc('bookmarks'))
863 part.addparam('namespace', enc('bookmarks'))
864 part.addparam('key', enc(book))
864 part.addparam('key', enc(book))
865 part.addparam('old', enc(old))
865 part.addparam('old', enc(old))
866 part.addparam('new', enc(new))
866 part.addparam('new', enc(new))
867 action = 'update'
867 action = 'update'
868 if not old:
868 if not old:
869 action = 'export'
869 action = 'export'
870 elif not new:
870 elif not new:
871 action = 'delete'
871 action = 'delete'
872 part2book.append((part.id, book, action))
872 part2book.append((part.id, book, action))
873 pushop.pkfailcb[part.id] = handlefailure
873 pushop.pkfailcb[part.id] = handlefailure
874
874
875 def handlereply(op):
875 def handlereply(op):
876 ui = pushop.ui
876 ui = pushop.ui
877 for partid, book, action in part2book:
877 for partid, book, action in part2book:
878 partrep = op.records.getreplies(partid)
878 partrep = op.records.getreplies(partid)
879 results = partrep['pushkey']
879 results = partrep['pushkey']
880 assert len(results) <= 1
880 assert len(results) <= 1
881 if not results:
881 if not results:
882 pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
882 pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
883 else:
883 else:
884 ret = int(results[0]['return'])
884 ret = int(results[0]['return'])
885 if ret:
885 if ret:
886 ui.status(bookmsgmap[action][0] % book)
886 ui.status(bookmsgmap[action][0] % book)
887 else:
887 else:
888 ui.warn(bookmsgmap[action][1] % book)
888 ui.warn(bookmsgmap[action][1] % book)
889 if pushop.bkresult is not None:
889 if pushop.bkresult is not None:
890 pushop.bkresult = 1
890 pushop.bkresult = 1
891 return handlereply
891 return handlereply
892
892
893
893
894 def _pushbundle2(pushop):
894 def _pushbundle2(pushop):
895 """push data to the remote using bundle2
895 """push data to the remote using bundle2
896
896
897 The only currently supported type of data is changegroup but this will
897 The only currently supported type of data is changegroup but this will
898 evolve in the future."""
898 evolve in the future."""
899 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
899 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
900 pushback = (pushop.trmanager
900 pushback = (pushop.trmanager
901 and pushop.ui.configbool('experimental', 'bundle2.pushback'))
901 and pushop.ui.configbool('experimental', 'bundle2.pushback'))
902
902
903 # create reply capability
903 # create reply capability
904 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
904 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
905 allowpushback=pushback))
905 allowpushback=pushback))
906 bundler.newpart('replycaps', data=capsblob)
906 bundler.newpart('replycaps', data=capsblob)
907 replyhandlers = []
907 replyhandlers = []
908 for partgenname in b2partsgenorder:
908 for partgenname in b2partsgenorder:
909 partgen = b2partsgenmapping[partgenname]
909 partgen = b2partsgenmapping[partgenname]
910 ret = partgen(pushop, bundler)
910 ret = partgen(pushop, bundler)
911 if callable(ret):
911 if callable(ret):
912 replyhandlers.append(ret)
912 replyhandlers.append(ret)
913 # do not push if nothing to push
913 # do not push if nothing to push
914 if bundler.nbparts <= 1:
914 if bundler.nbparts <= 1:
915 return
915 return
916 stream = util.chunkbuffer(bundler.getchunks())
916 stream = util.chunkbuffer(bundler.getchunks())
917 try:
917 try:
918 try:
918 try:
919 reply = pushop.remote.unbundle(
919 reply = pushop.remote.unbundle(
920 stream, ['force'], pushop.remote.url())
920 stream, ['force'], pushop.remote.url())
921 except error.BundleValueError as exc:
921 except error.BundleValueError as exc:
922 raise error.Abort(_('missing support for %s') % exc)
922 raise error.Abort(_('missing support for %s') % exc)
923 try:
923 try:
924 trgetter = None
924 trgetter = None
925 if pushback:
925 if pushback:
926 trgetter = pushop.trmanager.transaction
926 trgetter = pushop.trmanager.transaction
927 op = bundle2.processbundle(pushop.repo, reply, trgetter)
927 op = bundle2.processbundle(pushop.repo, reply, trgetter)
928 except error.BundleValueError as exc:
928 except error.BundleValueError as exc:
929 raise error.Abort(_('missing support for %s') % exc)
929 raise error.Abort(_('missing support for %s') % exc)
930 except bundle2.AbortFromPart as exc:
930 except bundle2.AbortFromPart as exc:
931 pushop.ui.status(_('remote: %s\n') % exc)
931 pushop.ui.status(_('remote: %s\n') % exc)
932 if exc.hint is not None:
932 if exc.hint is not None:
933 pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
933 pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
934 raise error.Abort(_('push failed on remote'))
934 raise error.Abort(_('push failed on remote'))
935 except error.PushkeyFailed as exc:
935 except error.PushkeyFailed as exc:
936 partid = int(exc.partid)
936 partid = int(exc.partid)
937 if partid not in pushop.pkfailcb:
937 if partid not in pushop.pkfailcb:
938 raise
938 raise
939 pushop.pkfailcb[partid](pushop, exc)
939 pushop.pkfailcb[partid](pushop, exc)
940 for rephand in replyhandlers:
940 for rephand in replyhandlers:
941 rephand(op)
941 rephand(op)
942
942
943 def _pushchangeset(pushop):
943 def _pushchangeset(pushop):
944 """Make the actual push of changeset bundle to remote repo"""
944 """Make the actual push of changeset bundle to remote repo"""
945 if 'changesets' in pushop.stepsdone:
945 if 'changesets' in pushop.stepsdone:
946 return
946 return
947 pushop.stepsdone.add('changesets')
947 pushop.stepsdone.add('changesets')
948 if not _pushcheckoutgoing(pushop):
948 if not _pushcheckoutgoing(pushop):
949 return
949 return
950 pushop.repo.prepushoutgoinghooks(pushop)
950 pushop.repo.prepushoutgoinghooks(pushop)
951 outgoing = pushop.outgoing
951 outgoing = pushop.outgoing
952 unbundle = pushop.remote.capable('unbundle')
952 unbundle = pushop.remote.capable('unbundle')
953 # TODO: get bundlecaps from remote
953 # TODO: get bundlecaps from remote
954 bundlecaps = None
954 bundlecaps = None
955 # create a changegroup from local
955 # create a changegroup from local
956 if pushop.revs is None and not (outgoing.excluded
956 if pushop.revs is None and not (outgoing.excluded
957 or pushop.repo.changelog.filteredrevs):
957 or pushop.repo.changelog.filteredrevs):
958 # push everything,
958 # push everything,
959 # use the fast path, no race possible on push
959 # use the fast path, no race possible on push
960 bundler = changegroup.cg1packer(pushop.repo, bundlecaps)
960 bundler = changegroup.cg1packer(pushop.repo, bundlecaps)
961 cg = changegroup.getsubset(pushop.repo,
961 cg = changegroup.getsubset(pushop.repo,
962 outgoing,
962 outgoing,
963 bundler,
963 bundler,
964 'push',
964 'push',
965 fastpath=True)
965 fastpath=True)
966 else:
966 else:
967 cg = changegroup.getchangegroup(pushop.repo, 'push', outgoing,
967 cg = changegroup.getchangegroup(pushop.repo, 'push', outgoing,
968 bundlecaps=bundlecaps)
968 bundlecaps=bundlecaps)
969
969
970 # apply changegroup to remote
970 # apply changegroup to remote
971 if unbundle:
971 if unbundle:
972 # local repo finds heads on server, finds out what
972 # local repo finds heads on server, finds out what
973 # revs it must push. once revs transferred, if server
973 # revs it must push. once revs transferred, if server
974 # finds it has different heads (someone else won
974 # finds it has different heads (someone else won
975 # commit/push race), server aborts.
975 # commit/push race), server aborts.
976 if pushop.force:
976 if pushop.force:
977 remoteheads = ['force']
977 remoteheads = ['force']
978 else:
978 else:
979 remoteheads = pushop.remoteheads
979 remoteheads = pushop.remoteheads
980 # ssh: return remote's addchangegroup()
980 # ssh: return remote's addchangegroup()
981 # http: return remote's addchangegroup() or 0 for error
981 # http: return remote's addchangegroup() or 0 for error
982 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
982 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
983 pushop.repo.url())
983 pushop.repo.url())
984 else:
984 else:
985 # we return an integer indicating remote head count
985 # we return an integer indicating remote head count
986 # change
986 # change
987 pushop.cgresult = pushop.remote.addchangegroup(cg, 'push',
987 pushop.cgresult = pushop.remote.addchangegroup(cg, 'push',
988 pushop.repo.url())
988 pushop.repo.url())
989
989
990 def _pushsyncphase(pushop):
990 def _pushsyncphase(pushop):
991 """synchronise phase information locally and remotely"""
991 """synchronise phase information locally and remotely"""
992 cheads = pushop.commonheads
992 cheads = pushop.commonheads
993 # even when we don't push, exchanging phase data is useful
993 # even when we don't push, exchanging phase data is useful
994 remotephases = pushop.remote.listkeys('phases')
994 remotephases = pushop.remote.listkeys('phases')
995 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
995 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
996 and remotephases # server supports phases
996 and remotephases # server supports phases
997 and pushop.cgresult is None # nothing was pushed
997 and pushop.cgresult is None # nothing was pushed
998 and remotephases.get('publishing', False)):
998 and remotephases.get('publishing', False)):
999 # When:
999 # When:
1000 # - this is a subrepo push
1000 # - this is a subrepo push
1001 # - and remote support phase
1001 # - and remote support phase
1002 # - and no changeset was pushed
1002 # - and no changeset was pushed
1003 # - and remote is publishing
1003 # - and remote is publishing
1004 # We may be in issue 3871 case!
1004 # We may be in issue 3871 case!
1005 # We drop the possible phase synchronisation done by
1005 # We drop the possible phase synchronisation done by
1006 # courtesy to publish changesets possibly locally draft
1006 # courtesy to publish changesets possibly locally draft
1007 # on the remote.
1007 # on the remote.
1008 remotephases = {'publishing': 'True'}
1008 remotephases = {'publishing': 'True'}
1009 if not remotephases: # old server or public only reply from non-publishing
1009 if not remotephases: # old server or public only reply from non-publishing
1010 _localphasemove(pushop, cheads)
1010 _localphasemove(pushop, cheads)
1011 # don't push any phase data as there is nothing to push
1011 # don't push any phase data as there is nothing to push
1012 else:
1012 else:
1013 ana = phases.analyzeremotephases(pushop.repo, cheads,
1013 ana = phases.analyzeremotephases(pushop.repo, cheads,
1014 remotephases)
1014 remotephases)
1015 pheads, droots = ana
1015 pheads, droots = ana
1016 ### Apply remote phase on local
1016 ### Apply remote phase on local
1017 if remotephases.get('publishing', False):
1017 if remotephases.get('publishing', False):
1018 _localphasemove(pushop, cheads)
1018 _localphasemove(pushop, cheads)
1019 else: # publish = False
1019 else: # publish = False
1020 _localphasemove(pushop, pheads)
1020 _localphasemove(pushop, pheads)
1021 _localphasemove(pushop, cheads, phases.draft)
1021 _localphasemove(pushop, cheads, phases.draft)
1022 ### Apply local phase on remote
1022 ### Apply local phase on remote
1023
1023
1024 if pushop.cgresult:
1024 if pushop.cgresult:
1025 if 'phases' in pushop.stepsdone:
1025 if 'phases' in pushop.stepsdone:
1026 # phases already pushed though bundle2
1026 # phases already pushed though bundle2
1027 return
1027 return
1028 outdated = pushop.outdatedphases
1028 outdated = pushop.outdatedphases
1029 else:
1029 else:
1030 outdated = pushop.fallbackoutdatedphases
1030 outdated = pushop.fallbackoutdatedphases
1031
1031
1032 pushop.stepsdone.add('phases')
1032 pushop.stepsdone.add('phases')
1033
1033
1034 # filter heads already turned public by the push
        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        # fallback to independent pushkey command
        for newremotehead in outdated:
            r = pushop.remote.pushkey('phases',
                                      newremotehead.hex(),
                                      str(phases.draft),
                                      str(phases.public))
            if not r:
                pushop.ui.warn(_('updating %s to public failed!\n')
                               % newremotehead)

def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.trmanager:
        phases.advanceboundary(pushop.repo,
                               pushop.trmanager.transaction(),
                               phase,
                               nodes)
    else:
        # repo is not locked, do not change any phases!
        # Informs the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(_('cannot lock source repo, skipping '
                               'local %s phase update\n') % phasestr)

def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if 'obsmarkers' in pushop.stepsdone:
        return
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        pushop.ui.debug('try to push obsolete markers to remote\n')
        rslts = []
        remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add('bookmarks')
    ui = pushop.ui
    remote = pushop.remote

    for b, old, new in pushop.outbookmarks:
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        if remote.pushkey('bookmarks', b, old, new):
            ui.status(bookmsgmap[action][0] % b)
        else:
            ui.warn(bookmsgmap[action][1] % b)
            # discovery can have set the value from an invalid entry
            if pushop.bkresult is not None:
                pushop.bkresult = 1

class pulloperation(object):
    """An object that represents a single pull operation

    Its purpose is to carry pull-related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
                 remotebookmarks=None, streamclonerequested=None):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # revisions we try to pull (None means "all")
        self.heads = heads
        # bookmarks pulled explicitly
        self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
                                  for bookmark in bookmarks]
        # do we force pull?
        self.force = force
        # whether a streaming clone was requested
        self.streamclonerequested = streamclonerequested
        # transaction manager
        self.trmanager = None
        # set of changesets common to local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # remote bookmarks data
        self.remotebookmarks = remotebookmarks
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps already done
        self.stepsdone = set()
        # Whether we attempted a clone from pre-generated bundles.
        self.clonebundleattempted = False

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible,
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset,
            # sync on this subset
            return self.heads

    @util.propertycache
    def canusebundle2(self):
        return not _forcebundle1(self)

    @util.propertycache
    def remotebundle2caps(self):
        return bundle2.bundle2caps(self.remote)

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()

class transactionmanager(object):
    """An object to manage the life cycle of a transaction

    It creates the transaction on demand and calls the appropriate hooks when
    closing the transaction."""
    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs['source'] = self.source
            self._tr.hookargs['url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()

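
The transactionmanager above creates its transaction only on first use and exposes separate close/release steps, so callers that never touch the transaction pay nothing. A minimal standalone sketch of that on-demand pattern (the names and the dict-based "transaction" are illustrative stand-ins, not Mercurial APIs):

```python
class LazyTransaction(object):
    """Sketch of an on-demand transaction manager.

    Illustrative only: a dict with a 'state' key stands in for the real
    repo transaction object used by transactionmanager above.
    """
    def __init__(self):
        self._tr = None
        self.log = []

    def transaction(self):
        # construct the transaction only on first use; reuse afterwards
        if self._tr is None:
            self._tr = {'state': 'open'}
            self.log.append('begin')
        return self._tr

    def close(self):
        # commit only if a transaction was actually created
        if self._tr is not None:
            self._tr['state'] = 'closed'
            self.log.append('commit')

    def release(self):
        # roll back only if still open (no-op when never created or closed)
        if self._tr is not None and self._tr['state'] == 'open':
            self._tr['state'] = 'abandoned'
            self.log.append('rollback')

tm = LazyTransaction()
tm.close()         # nothing created yet: neither 'begin' nor 'commit' logged
tm.transaction()   # first use creates the transaction
tm.transaction()   # second use reuses it, no second 'begin'
tm.close()
tm.release()       # already closed, so rollback is a no-op
```

The close-then-release ordering mirrors the `try`/`finally` usage in `pull()` below, where `release()` after a successful `close()` must be harmless.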
def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
         streamclonerequested=None):
    """Fetch repository data from a remote.

    This is the main function used to retrieve data from a remote repository.

    ``repo`` is the local repository to clone into.
    ``remote`` is a peer instance.
    ``heads`` is an iterable of revisions we want to pull. ``None`` (the
    default) means to pull everything from the remote.
    ``bookmarks`` is an iterable of bookmarks requested to be pulled. By
    default, all remote bookmarks are pulled.
    ``opargs`` are additional keyword arguments to pass to ``pulloperation``
    initialization.
    ``streamclonerequested`` is a boolean indicating whether a "streaming
    clone" is requested. A "streaming clone" is essentially a raw file copy
    of revlogs from the server. This only works when the local repository is
    empty. The default value of ``None`` means to respect the server
    configuration for preferring stream clones.

    Returns the ``pulloperation`` created for this pull.
    """
    if opargs is None:
        opargs = {}
    pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
                           streamclonerequested=streamclonerequested, **opargs)
    if pullop.remote.local():
        missing = set(pullop.remote.requirements) - pullop.repo.supported
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    wlock = lock = None
    try:
        wlock = pullop.repo.wlock()
        lock = pullop.repo.lock()
        pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
        streamclone.maybeperformlegacystreamclone(pullop)
        # This should ideally be in _pullbundle2(). However, it needs to run
        # before discovery to avoid extra work.
        _maybeapplyclonebundle(pullop)
        _pulldiscovery(pullop)
        if pullop.canusebundle2:
            _pullbundle2(pullop)
        _pullchangeset(pullop)
        _pullphase(pullop)
        _pullbookmarks(pullop)
        _pullobsolete(pullop)
        pullop.trmanager.close()
    finally:
        lockmod.release(pullop.trmanager, lock, wlock)

    return pullop

# list of steps to perform discovery before pull
pulldiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pulldiscoverymapping = {}

def pulldiscovery(stepname):
    """decorator for functions performing discovery before pull

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a
    step from an extension, change the pulldiscoverymapping dictionary
    directly."""
    def dec(func):
        assert stepname not in pulldiscoverymapping
        pulldiscoverymapping[stepname] = func
        pulldiscoveryorder.append(stepname)
        return func
    return dec
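
The pulldiscovery decorator implements a small step registry: a name-to-function mapping plus an ordered list of names, so extensions can wrap individual steps by name while execution order stays fixed. A self-contained sketch of the same pattern (names here are illustrative, not the real pulldiscovery machinery):

```python
steporder = []    # step names, in registration order
stepmapping = {}  # step name -> function

def registerstep(stepname):
    """Register a discovery-style step; decoration order determines run order."""
    def dec(func):
        assert stepname not in stepmapping
        stepmapping[stepname] = func
        steporder.append(stepname)
        return func
    return dec

@registerstep('first')
def _first(state):
    state.append('first ran')

@registerstep('second')
def _second(state):
    state.append('second ran')

def runall(state):
    # run every registered step in registration order, like _pulldiscovery
    for name in steporder:
        stepmapping[name](state)

state = []
runall(state)
```

Wrapping a step then amounts to replacing its entry in the mapping, e.g. `stepmapping['first'] = wrapper`, without touching the order list.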

def _pulldiscovery(pullop):
    """Run all discovery steps"""
    for stepname in pulldiscoveryorder:
        step = pulldiscoverymapping[stepname]
        step(pullop)

@pulldiscovery('b1:bookmarks')
def _pullbookmarkbundle1(pullop):
    """fetch bookmark data in the bundle1 case

    If not using bundle2, we have to fetch bookmarks before changeset
    discovery to reduce the chance and impact of race conditions."""
    if pullop.remotebookmarks is not None:
        return
    if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
        # all known bundle2 servers now support listkeys, but let's be nice
        # with new implementations.
        return
    pullop.remotebookmarks = pullop.remote.listkeys('bookmarks')


@pulldiscovery('changegroup')
def _pulldiscoverychangegroup(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; will be changed to handle all
    discovery at some point."""
    tmp = discovery.findcommonincoming(pullop.repo,
                                       pullop.remote,
                                       heads=pullop.heads,
                                       force=pullop.force)
    common, fetch, rheads = tmp
    nm = pullop.repo.unfiltered().changelog.nodemap
    if fetch and rheads:
        # If a remote head is filtered locally, let's drop it from the
        # unknown remote heads and put it back in common.
        #
        # This is a hackish solution to catch most "common but locally
        # hidden" situations. We do not perform discovery on the unfiltered
        # repository because it ends up doing a pathological number of round
        # trips for a huge amount of changesets we do not care about.
        #
        # If a set of such "common but filtered" changesets exists on the
        # server but does not include a remote head, we'll not be able to
        # detect it.
        scommon = set(common)
        filteredrheads = []
        for n in rheads:
            if n in nm:
                if n not in scommon:
                    common.append(n)
            else:
                filteredrheads.append(n)
        if not filteredrheads:
            fetch = []
        rheads = filteredrheads
    pullop.common = common
    pullop.fetch = fetch
    pullop.rheads = rheads

def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data are changegroups."""
    kwargs = {'bundlecaps': caps20to10(pullop.repo)}

    # At the moment we don't do stream clones over bundle2. If that is
    # implemented then here's where the check for that will go.
    streaming = False

    # pulling changegroup
    pullop.stepsdone.add('changegroup')

    kwargs['common'] = pullop.common
    kwargs['heads'] = pullop.heads or pullop.rheads
    kwargs['cg'] = pullop.fetch
    if 'listkeys' in pullop.remotebundle2caps:
        kwargs['listkeys'] = ['phases']
        if pullop.remotebookmarks is None:
            # make sure to always include bookmark data when migrating
            # `hg incoming --bundle` to using this function.
            kwargs['listkeys'].append('bookmarks')

    # If this is a full pull / clone and the server supports the clone bundles
    # feature, tell the server whether we attempted a clone bundle. The
    # presence of this flag indicates the client supports clone bundles. This
    # will enable the server to treat clients that support clone bundles
    # differently from those that don't.
    if (pullop.remote.capable('clonebundles')
        and pullop.heads is None and list(pullop.common) == [nullid]):
        kwargs['cbattempted'] = pullop.clonebundleattempted

    if streaming:
        pullop.repo.ui.status(_('streaming all changes\n'))
    elif not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [nullid]:
            pullop.repo.ui.status(_("requesting all changes\n"))
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
        if obsolete.commonversion(remoteversions) is not None:
            kwargs['obsmarkers'] = True
            pullop.stepsdone.add('obsmarkers')
    _pullbundle2extraprepare(pullop, kwargs)
    bundle = pullop.remote.getbundle('pull', **pycompat.strkwargs(kwargs))
    try:
        op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
    except bundle2.AbortFromPart as exc:
        pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
        raise error.Abort(_('pull failed on remote'), hint=exc.hint)
    except error.BundleValueError as exc:
        raise error.Abort(_('missing support for %s') % exc)

    if pullop.fetch:
        results = [cg['return'] for cg in op.records['changegroup']]
        pullop.cgresult = changegroup.combineresults(results)

    # processing phases change
    for namespace, value in op.records['listkeys']:
        if namespace == 'phases':
            _pullapplyphases(pullop, value)

    # processing bookmark update
    for namespace, value in op.records['listkeys']:
        if namespace == 'bookmarks':
            pullop.remotebookmarks = value

    # bookmark data were either already there or pulled in the bundle
    if pullop.remotebookmarks is not None:
        _pullbookmarks(pullop)

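
The post-processing in _pullbundle2 scans `op.records['listkeys']` as a sequence of `(namespace, value)` pairs and handles each namespace ('phases', 'bookmarks') separately. A standalone sketch of that dispatch shape (the record layout mimics listkeys records; the handler table is illustrative, not a Mercurial API):

```python
def applylistkeys(records, handlers):
    """Dispatch (namespace, value) records to per-namespace handlers,
    mirroring the scans over op.records['listkeys'] above.
    Records with no registered handler are ignored."""
    for namespace, value in records:
        handler = handlers.get(namespace)
        if handler is not None:
            handler(value)

seen = {}
handlers = {
    'phases': lambda v: seen.setdefault('phases', v),
    'bookmarks': lambda v: seen.setdefault('bookmarks', v),
}
applylistkeys([('phases', {'publishing': 'True'}),
               ('bookmarks', {'@': 'abc123'}),
               ('unknown', None)],   # silently skipped
              handlers)
```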
def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""
    pass

def _pullchangeset(pullop):
    """pull changesets from unbundle into the local repo"""
    # We delay opening the transaction as late as possible so we don't open
    # a transaction for nothing and don't break future useful rollback calls.
    if 'changegroup' in pullop.stepsdone:
        return
    pullop.stepsdone.add('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
        return
    tr = pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_("requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable('getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle('pull', common=pullop.common,
                                     heads=pullop.heads or pullop.rheads)
    elif pullop.heads is None:
        cg = pullop.remote.changegroup(pullop.fetch, 'pull')
    elif not pullop.remote.capable('changegroupsubset'):
        raise error.Abort(_("partial pull cannot be done because "
                            "other repository doesn't support "
                            "changegroupsubset."))
    else:
        cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
    pullop.cgresult, addednodes = cg.apply(pullop.repo, tr, 'pull',
                                           pullop.remote.url())

def _pullphase(pullop):
    # Get remote phases data from remote
    if 'phases' in pullop.stepsdone:
        return
    remotephases = pullop.remote.listkeys('phases')
    _pullapplyphases(pullop, remotephases)

def _pullapplyphases(pullop, remotephases):
    """apply phase movement from observed remote state"""
    if 'phases' in pullop.stepsdone:
        return
    pullop.stepsdone.add('phases')
    publishing = bool(remotephases.get('publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(pullop.repo,
                                                 pullop.pulledsubset,
                                                 remotephases)
        dheads = pullop.pulledsubset
    else:
        # Remote is old or publishing all common changesets
        # should be seen as public
        pheads = pullop.pulledsubset
        dheads = []
    unfi = pullop.repo.unfiltered()
    phase = unfi._phasecache.phase
    rev = unfi.changelog.nodemap.get
    public = phases.public
    draft = phases.draft

    # exclude changesets already public locally and update the others
    pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
    if pheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, public, pheads)

    # exclude changesets already draft locally and update the others
    dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
    if dheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, draft, dheads)

def _pullbookmarks(pullop):
    """process the remote bookmark information to update the local one"""
    if 'bookmarks' in pullop.stepsdone:
        return
    pullop.stepsdone.add('bookmarks')
    repo = pullop.repo
    remotebookmarks = pullop.remotebookmarks
    remotebookmarks = bookmod.unhexlifybookmarks(remotebookmarks)
    bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
                             pullop.remote.url(),
                             pullop.gettransaction,
                             explicit=pullop.explicitbookmarks)

def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    The `gettransaction` callable returns the pull transaction, creating
    one if necessary. We return the transaction to inform the calling code
    that a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    if 'obsmarkers' in pullop.stepsdone:
        return
    pullop.stepsdone.add('obsmarkers')
    tr = None
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        pullop.repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = pullop.remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = pullop.gettransaction()
            markers = []
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = util.b85decode(remoteobs[key])
                    version, newmarks = obsolete._readmarkers(data)
                    markers += newmarks
            if markers:
                pullop.repo.obsstore.add(tr, markers)
        pullop.repo.invalidatevolatilesets()
    return tr

def caps20to10(repo):
    """return a set with appropriate options to use bundle20 during getbundle"""
    caps = {'HG20'}
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
    caps.add('bundle2=' + urlreq.quote(capsblob))
    return caps

# List of names of steps to perform for a bundle2 for getbundle, order matters.
getbundle2partsorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
getbundle2partsmapping = {}

def getbundle2partsgenerator(stepname, idx=None):
    """decorator for functions generating a bundle2 part for getbundle

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, modify the getbundle2partsmapping dictionary directly."""
    def dec(func):
        assert stepname not in getbundle2partsmapping
        getbundle2partsmapping[stepname] = func
        if idx is None:
            getbundle2partsorder.append(stepname)
        else:
            getbundle2partsorder.insert(idx, stepname)
        return func
    return dec

def bundle2requested(bundlecaps):
    if bundlecaps is not None:
        return any(cap.startswith('HG2') for cap in bundlecaps)
    return False

def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
                    **kwargs):
    """Return chunks constituting a bundle's raw data.

    Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
    passed.

    Returns an iterator over raw chunks (of varying sizes).
    """
    kwargs = pycompat.byteskwargs(kwargs)
    usebundle2 = bundle2requested(bundlecaps)
    # bundle10 case
    if not usebundle2:
        if bundlecaps and not kwargs.get('cg', True):
            raise ValueError(_('request for bundle10 must include changegroup'))

        if kwargs:
            raise ValueError(_('unsupported getbundle arguments: %s')
                             % ', '.join(sorted(kwargs.keys())))
        outgoing = _computeoutgoing(repo, heads, common)
        bundler = changegroup.getbundler('01', repo, bundlecaps)
        return changegroup.getsubsetraw(repo, outgoing, bundler, source)

    # bundle20 case
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith('bundle2='):
            blob = urlreq.unquote(bcaps[len('bundle2='):])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)

    kwargs['heads'] = heads
    kwargs['common'] = common

    for name in getbundle2partsorder:
        func = getbundle2partsmapping[name]
        func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
             **pycompat.strkwargs(kwargs))

    return bundler.getchunks()

@getbundle2partsgenerator('changegroup')
def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
                              b2caps=None, heads=None, common=None, **kwargs):
    """add a changegroup part to the requested bundle"""
    cg = None
    if kwargs.get('cg', True):
        # build changegroup bundle here.
        version = '01'
        cgversions = b2caps.get('changegroup')
        if cgversions:  # 3.1 and 3.2 ship with an empty value
            cgversions = [v for v in cgversions
                          if v in changegroup.supportedoutgoingversions(repo)]
            if not cgversions:
                raise ValueError(_('no common changegroup version'))
            version = max(cgversions)
        outgoing = _computeoutgoing(repo, heads, common)
        cg = changegroup.getlocalchangegroupraw(repo, source, outgoing,
                                                bundlecaps=bundlecaps,
                                                version=version)

    if cg:
        part = bundler.newpart('changegroup', data=cg)
        if cgversions:
            part.addparam('version', version)
        part.addparam('nbchanges', str(len(outgoing.missing)), mandatory=False)
        if 'treemanifest' in repo.requirements:
            part.addparam('treemanifest', '1')

@getbundle2partsgenerator('listkeys')
def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
                            b2caps=None, **kwargs):
    """add parts containing listkeys namespaces to the requested bundle"""
    listkeys = kwargs.get('listkeys', ())
    for namespace in listkeys:
        part = bundler.newpart('listkeys')
        part.addparam('namespace', namespace)
        keys = repo.listkeys(namespace).items()
        part.data = pushkey.encodekeys(keys)

@getbundle2partsgenerator('obsmarkers')
def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
                            b2caps=None, heads=None, **kwargs):
    """add an obsolescence markers part to the requested bundle"""
    if kwargs.get('obsmarkers', False):
        if heads is None:
            heads = repo.heads()
        subset = [c.node() for c in repo.set('::%ln', heads)]
        markers = repo.obsstore.relevantmarkers(subset)
        markers = sorted(markers)
        bundle2.buildobsmarkerspart(bundler, markers)

@getbundle2partsgenerator('hgtagsfnodes')
def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
                         b2caps=None, heads=None, common=None,
                         **kwargs):
    """Transfer the .hgtags filenodes mapping.

    Only values for heads in this bundle will be transferred.

    The part data consists of pairs of 20-byte changeset nodes and raw
    .hgtags filenode values.
    """
    # Don't send unless:
    # - changesets are being exchanged,
    # - the client supports it.
    if not (kwargs.get('cg', True) and 'hgtagsfnodes' in b2caps):
        return

    outgoing = _computeoutgoing(repo, heads, common)
    bundle2.addparttagsfnodescache(repo, bundler, outgoing)

def _getbookmarks(repo, **kwargs):
    """Returns the bookmark-to-node mapping.

    This function is primarily used to generate the `bookmarks` bundle2 part.
    It is a separate function in order to make it easy to wrap it
    in extensions. Passing `kwargs` to the function makes it easy to
    add new parameters in extensions.
    """

    return dict(bookmod.listbinbookmarks(repo))

def check_heads(repo, their_heads, context):
    """check if the heads of a repo have been modified

    Used by peer for unbundling.
    """
    heads = repo.heads()
    heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
    if not (their_heads == ['force'] or their_heads == heads or
            their_heads == ['hashed', heads_hash]):
        # someone else committed/pushed/unbundled while we
        # were transferring data
        raise error.PushRaced('repository changed while %s - '
                              'please try again' % context)

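The push-race check above works by hashing the server's sorted binary heads; a client sends `['hashed', digest]` and any head added in the meantime changes the digest. A standalone illustration, using made-up node values and bytes literals (exchange.py itself targets Python 2 `str`):

```python
import hashlib

# Hypothetical 20-byte binary head nodes.
heads = [b'\x11' * 20, b'\x22' * 20]
heads_hash = hashlib.sha1(b''.join(sorted(heads))).digest()

# The client-supplied form matches only while the server's heads are
# unchanged; recomputing over the same heads reproduces the digest.
their_heads = ['hashed', heads_hash]
raced = their_heads != ['hashed',
                        hashlib.sha1(b''.join(sorted(heads))).digest()]
```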
def unbundle(repo, cg, heads, source, url):
    """Apply a bundle to a repo.

    This function makes sure the repo is locked during the application and
    has a mechanism to check that no push race occurred between the creation
    of the bundle and its application.

    If the push was raced, a PushRaced exception is raised."""
    r = 0
    # need a transaction when processing a bundle2 stream
    # [wlock, lock, tr] - needs to be an array so nested functions can modify it
    lockandtr = [None, None, None]
    recordout = None
    # quick fix for output mismatch with bundle2 in 3.4
    captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture',
                                       False)
    if url.startswith('remote:http:') or url.startswith('remote:https:'):
        captureoutput = True
    try:
        # note: outside bundle1, 'heads' is expected to be empty and this
        # 'check_heads' call will be a no-op
        check_heads(repo, heads, 'uploading changes')
        # push can proceed
        if not isinstance(cg, bundle2.unbundle20):
            # legacy case: bundle1 (changegroup 01)
            txnname = "\n".join([source, util.hidepassword(url)])
            with repo.lock(), repo.transaction(txnname) as tr:
                r, addednodes = cg.apply(repo, tr, source, url)
        else:
            r = None
            try:
                def gettransaction():
                    if not lockandtr[2]:
                        lockandtr[0] = repo.wlock()
                        lockandtr[1] = repo.lock()
                        lockandtr[2] = repo.transaction(source)
                        lockandtr[2].hookargs['source'] = source
                        lockandtr[2].hookargs['url'] = url
                        lockandtr[2].hookargs['bundle2'] = '1'
                    return lockandtr[2]

                # Do greedy locking by default until we're satisfied with lazy
                # locking.
                if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
                    gettransaction()

                op = bundle2.bundleoperation(repo, gettransaction,
                                             captureoutput=captureoutput)
                try:
                    op = bundle2.processbundle(repo, cg, op=op)
                finally:
                    r = op.reply
                    if captureoutput and r is not None:
                        repo.ui.pushbuffer(error=True, subproc=True)
                        def recordout(output):
                            r.newpart('output', data=output, mandatory=False)
                if lockandtr[2] is not None:
                    lockandtr[2].close()
            except BaseException as exc:
                exc.duringunbundle2 = True
                if captureoutput and r is not None:
                    parts = exc._bundle2salvagedoutput = r.salvageoutput()
                    def recordout(output):
                        part = bundle2.bundlepart('output', data=output,
                                                  mandatory=False)
                        parts.append(part)
                raise
    finally:
        lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
        if recordout is not None:
            recordout(repo.ui.popbuffer())
    return r

def _maybeapplyclonebundle(pullop):
    """Apply a clone bundle from a remote, if possible."""

    repo = pullop.repo
    remote = pullop.remote

    if not repo.ui.configbool('ui', 'clonebundles', True):
        return

    # Only run if local repo is empty.
    if len(repo):
        return

    if pullop.heads:
        return

    if not remote.capable('clonebundles'):
        return

    res = remote._call('clonebundles')

    # If we call the wire protocol command, that's good enough to record the
    # attempt.
    pullop.clonebundleattempted = True

    entries = parseclonebundlesmanifest(repo, res)
    if not entries:
        repo.ui.note(_('no clone bundles available on remote; '
                       'falling back to regular clone\n'))
        return

    entries = filterclonebundleentries(repo, entries)
    if not entries:
        # There is a thundering herd concern here. However, if a server
        # operator doesn't advertise bundles appropriate for its clients,
        # they deserve what's coming. Furthermore, from a client's
        # perspective, no automatic fallback would mean not being able to
        # clone!
        repo.ui.warn(_('no compatible clone bundles available on server; '
                       'falling back to regular clone\n'))
        repo.ui.warn(_('(you may want to report this to the server '
                       'operator)\n'))
        return

    entries = sortclonebundleentries(repo.ui, entries)

    url = entries[0]['URL']
    repo.ui.status(_('applying clone bundle from %s\n') % url)
    if trypullbundlefromurl(repo.ui, repo, url):
        repo.ui.status(_('finished applying clone bundle\n'))
    # Bundle failed.
    #
    # We abort by default to avoid the thundering herd of
    # clients flooding a server that was expecting expensive
    # clone load to be offloaded.
    elif repo.ui.configbool('ui', 'clonebundlefallback', False):
        repo.ui.warn(_('falling back to normal clone\n'))
    else:
        raise error.Abort(_('error applying bundle'),
                          hint=_('if this error persists, consider contacting '
                                 'the server operator or disable clone '
                                 'bundles via '
                                 '"--config ui.clonebundles=false"'))

def parseclonebundlesmanifest(repo, s):
    """Parses the raw text of a clone bundles manifest.

    Returns a list of dicts. The dicts have a ``URL`` key corresponding
    to the URL and other keys are the attributes for the entry.
    """
    m = []
    for line in s.splitlines():
        fields = line.split()
        if not fields:
            continue
        attrs = {'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split('=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            attrs[key] = value

            # Parse BUNDLESPEC into components. This makes client-side
            # preferences easier to specify since you can prefer a single
            # component of the BUNDLESPEC.
            if key == 'BUNDLESPEC':
                try:
                    comp, version, params = parsebundlespec(repo, value,
                                                            externalnames=True)
                    attrs['COMPRESSION'] = comp
                    attrs['VERSION'] = version
                except error.InvalidBundleSpecification:
                    pass
                except error.UnsupportedBundleSpecification:
                    pass

        m.append(attrs)

    return m

def filterclonebundleentries(repo, entries):
    """Remove incompatible clone bundle manifest entries.

    Accepts a list of entries parsed with ``parseclonebundlesmanifest``
    and returns a new list consisting of only the entries that this client
    should be able to apply.

    There is no guarantee we'll be able to apply all returned entries because
    the metadata we use to filter on may be missing or wrong.
    """
    newentries = []
    for entry in entries:
        spec = entry.get('BUNDLESPEC')
        if spec:
            try:
                parsebundlespec(repo, spec, strict=True)
            except error.InvalidBundleSpecification as e:
                repo.ui.debug(str(e) + '\n')
                continue
            except error.UnsupportedBundleSpecification as e:
                repo.ui.debug('filtering %s because unsupported bundle '
                              'spec: %s\n' % (entry['URL'], str(e)))
                continue

        if 'REQUIRESNI' in entry and not sslutil.hassni:
            repo.ui.debug('filtering %s because SNI not supported\n' %
                          entry['URL'])
            continue

        newentries.append(entry)

    return newentries

class clonebundleentry(object):
    """Represents an item in a clone bundles manifest.

    This rich class is needed to support sorting since sorted() in Python 3
    doesn't support ``cmp`` and our comparison is complex enough that ``key=``
    won't work.
    """

    def __init__(self, value, prefers):
        self.value = value
        self.prefers = prefers

    def _cmp(self, other):
        for prefkey, prefvalue in self.prefers:
            avalue = self.value.get(prefkey)
            bvalue = other.value.get(prefkey)

            # Special case for b missing attribute and a matches exactly.
            if avalue is not None and bvalue is None and avalue == prefvalue:
                return -1

            # Special case for a missing attribute and b matches exactly.
            if bvalue is not None and avalue is None and bvalue == prefvalue:
                return 1

            # We can't compare unless attribute present on both.
            if avalue is None or bvalue is None:
                continue

            # Same values should fall back to next attribute.
            if avalue == bvalue:
                continue

            # Exact matches come first.
            if avalue == prefvalue:
                return -1
            if bvalue == prefvalue:
                return 1

            # Fall back to next attribute.
            continue

        # If we got here we couldn't sort by attributes and prefers. Fall
        # back to index order.
        return 0

    def __lt__(self, other):
        return self._cmp(other) < 0

    def __gt__(self, other):
        return self._cmp(other) > 0

    def __eq__(self, other):
        return self._cmp(other) == 0

    def __le__(self, other):
        return self._cmp(other) <= 0

    def __ge__(self, other):
        return self._cmp(other) >= 0
1979 return self._cmp(other) >= 0
1979
1980
1980 def __ne__(self, other):
1981 def __ne__(self, other):
1981 return self._cmp(other) != 0
1982 return self._cmp(other) != 0
1982
1983
def sortclonebundleentries(ui, entries):
    prefers = ui.configlist('ui', 'clonebundleprefers')
    if not prefers:
        return list(entries)

    prefers = [p.split('=', 1) for p in prefers]

    items = sorted(clonebundleentry(v, prefers) for v in entries)
    return [i.value for i in items]

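The ``ui.clonebundleprefers`` config value is a list of ``KEY=VALUE`` strings; splitting each once on ``'='`` yields the ordered (key, value) preference pairs the sort consumes. A small illustrative example (the attribute names are hypothetical sample values, not a fixed schema):

```python
# Each preference string is split at most once, so values may themselves
# contain '=' without breaking the parse.
prefers = ['COMPRESSION=GZ', 'VERSION=2']
pairs = [p.split('=', 1) for p in prefers]
# pairs == [['COMPRESSION', 'GZ'], ['VERSION', '2']]
```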
def trypullbundlefromurl(ui, repo, url):
    """Attempt to apply a bundle from a URL."""
    with repo.lock(), repo.transaction('bundleurl') as tr:
        try:
            fh = urlmod.open(ui, url)
            cg = readbundle(ui, fh, 'stream')

            if isinstance(cg, bundle2.unbundle20):
                bundle2.applybundle(repo, cg, tr, 'clonebundles', url)
            elif isinstance(cg, streamclone.streamcloneapplier):
                cg.apply(repo)
            else:
                cg.apply(repo, tr, 'clonebundles', url)
            return True
        except urlerr.httperror as e:
            ui.warn(_('HTTP error fetching bundle: %s\n') % str(e))
        except urlerr.urlerror as e:
            ui.warn(_('error fetching bundle: %s\n') % e.reason)

    return False
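``trypullbundlefromurl`` deliberately swallows HTTP and URL errors and returns ``False`` so the caller can fall back to a regular pull instead of aborting the clone. A hypothetical standalone sketch of that fetch-then-fall-back pattern using the standard library (names and the ``apply_data`` callback are illustrative, not Mercurial's API):

```python
import urllib.error
import urllib.request

def try_fetch(url, apply_data, warn=print):
    """Fetch url and hand the bytes to apply_data; return True on success.

    On HTTP or URL errors, emit a warning and return False instead of
    raising, so the caller can fall back to its slower default path.
    """
    try:
        with urllib.request.urlopen(url) as fh:
            apply_data(fh.read())
        return True
    except urllib.error.HTTPError as e:
        warn('HTTP error fetching bundle: %s' % e)
    except urllib.error.URLError as e:
        warn('error fetching bundle: %s' % e.reason)
    return False
```

The caller pattern then reads: ``if not try_fetch(...): do_normal_pull()``, mirroring how clone bundles degrade gracefully when a manifest URL is unreachable.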