bundle2: move function building obsmarker-part in the bundle2 module...
marmoute
r32515:e70d6dbd default
@@ -1,1732 +1,1748 b''
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows:

- magic string
- stream level parameters
- payload parts (any number)
- end of stream marker.

The binary format
=================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

Binary format is as follows:

:params size: int32

  The total number of bytes used by the parameters

:params value: arbitrary number of bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  A name MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage
    any crazy usage.
  - Textual data allows easy human inspection of a bundle2 header in case of
    troubles.

  Any application level options MUST go into a bundle2 part instead.
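
  As an illustration (the parameter names and values here are examples, not
  part of the specification), a blob carrying one mandatory and one advisory
  parameter could look like::

    Compression=BZ someadvisoryparam=somevalue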

Payload part
------------------------

Binary format is as follows:

:header size: int32

  The total number of bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

  The header defines how to interpret the part. It contains two pieces of
  data: the part type, and the part parameters.

  The part type is used to route the part to an application level handler
  that can interpret the payload.

  Part parameters are passed to the application level handler. They are
  meant to convey information that will help the application level object to
  interpret the part payload.

  The binary format of the header is as follows:

  :typesize: (one byte)

  :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

  :partid: A 32-bit integer (unique in the bundle) that can be used to refer
           to this part.

  :parameters:

    Part parameters may have arbitrary content; the binary structure is::

        <mandatory-count><advisory-count><param-sizes><param-data>

    :mandatory-count: 1 byte, number of mandatory parameters

    :advisory-count: 1 byte, number of advisory parameters

    :param-sizes:

      N couples of bytes, where N is the total number of parameters. Each
      couple contains (<size-of-key>, <size-of-value>) for one parameter.

    :param-data:

      A blob of bytes from which each parameter key and value can be
      retrieved using the list of size couples stored in the previous
      field.

    Mandatory parameters come first, then the advisory ones.

    Each parameter's key MUST be unique within the part.
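
    As an illustration (the parameter is hypothetical), a part carrying the
    single mandatory parameter ``key=val`` would serialize its parameter
    block as::

        \x01\x00\x03\x03keyval

    that is: one mandatory parameter, zero advisory ones, a single
    (key size, value size) couple of (3, 3), then the blob ``keyval``.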

:payload:

  payload is a series of `<chunksize><chunkdata>`.

  `chunksize` is an int32, `chunkdata` are plain bytes (as many as
  `chunksize` says). The payload part is concluded by a zero size chunk.

  The current implementation always produces either zero or one chunk.
  This is an implementation limitation that will ultimately be lifted.

  `chunksize` can be negative to trigger special case processing. No such
  processing is in place yet.
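
  For example, a payload holding the five bytes ``hello`` in a single chunk
  is emitted as ``<int32: 5>hello<int32: 0>``.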

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part
type contains any uppercase char it is considered mandatory. When no handler
is known for a mandatory part, the process is aborted and an exception is
raised. If the part is advisory and no handler is known, the part is ignored.
When the process is aborted, the full bundle is still read from the stream to
keep the channel usable. But none of the parts read after an abort are
processed. In the future, dropping the stream may become an option for
channels we do not care to preserve.
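
For example, a part of type ``changegroup`` would be advisory, while the same
part sent as ``CHANGEGROUP`` would be mandatory and would abort the unbundle
when no handler is known for it.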
"""

from __future__ import absolute_import

import errno
import re
import string
import struct
import sys

from .i18n import _
from . import (
    changegroup,
    error,
    obsolete,
    pushkey,
    pycompat,
    tags,
    url,
    util,
)

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

preferedchunksize = 4096

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')
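
# Illustrative note (not part of the original module): these formats feed
# _pack/_unpack below. For instance, the end-of-stream marker is simply an
# empty part header:
#
#   _pack(_fpartheadersize, 0)   # -> '\x00\x00\x00\x00'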

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-output: %s\n' % message)

def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-input: %s\n' % message)

def validateparttype(parttype):
    """raise ValueError if a parttype contains an invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>' + ('BB' * nbparams)
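
# Illustrative note (not part of the original module): with two parameters
# the format is '>BBBB', one (key size, value size) byte couple per
# parameter:
#
#   _makefpartparamsizes(2)      # -> '>BBBB'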

parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`, where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__
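
    # Illustrative sketch (not part of the original module): typical use of
    # this class.
    #
    #   records = unbundlerecords()
    #   records.add('changegroup', {'return': 1})
    #   records['changegroup']   # -> ({'return': 1},)
    #   list(records)            # -> [('changegroup', {'return': 1})]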

class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object currently has very little content; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None
        self.captureoutput = captureoutput

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def applybundle(repo, unbundler, tr, source=None, url=None, op=None):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    tr.hookargs['bundle2'] = '1'
    if source is not None and 'source' not in tr.hookargs:
        tr.hookargs['source'] = source
    if url is not None and 'url' not in tr.hookargs:
        tr.hookargs['url'] = url
    return processbundle(repo, unbundler, lambda: tr, op=op)

def processbundle(repo, unbundler, transactiongetter=None, op=None):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = ['bundle2-input-bundle:']
        if unbundler.params:
            msg.append(' %i params' % len(unbundler.params))
        if op.gettransaction is None:
            msg.append(' no-transaction')
        else:
            msg.append(' with-transaction')
        msg.append('\n')
        repo.ui.debug(''.join(msg))
    iterparts = enumerate(unbundler.iterparts())
    part = None
    nbpart = 0
    try:
        for nbpart, part in iterparts:
            _processpart(op, part)
    except Exception as exc:
        # Any exceptions seeking to the end of the bundle at this point are
        # almost certainly related to the underlying stream being bad.
        # And, chances are that the exception we're handling is related to
        # getting in that bad state. So, we swallow the seeking error and
        # re-raise the original error.
        seekerror = False
        try:
            for nbpart, part in iterparts:
                # consume the bundle content
                part.seek(0, 2)
        except Exception:
            seekerror = True

        # Small hack to let caller code distinguish exceptions raised during
        # bundle2 processing from those raised when processing the old
        # format. This is mostly needed to handle different return codes to
        # unbundle according to the type of bundle. We should probably clean
        # up or drop this return code craziness in a future version.
        exc.duringunbundle2 = True
        salvaged = []
        replycaps = None
        if op.reply is not None:
            salvaged = op.reply.salvageoutput()
            replycaps = op.reply.capabilities
        exc._replycaps = replycaps
        exc._bundle2salvagedoutput = salvaged

        # Re-raising from a variable loses the original stack. So only use
        # that form if we need to.
        if seekerror:
            raise exc
        else:
            raise
    finally:
        repo.ui.debug('bundle2-input-bundle: %i parts total\n' % nbpart)

    return op
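
# Illustrative sketch (not part of the original module): typical driver code,
# assuming 'repo' and an open bundle file object 'fp' already exist.
#
#   unbundler = getunbundler(repo.ui, fp)
#   op = processbundle(repo, unbundler, transactiongetter=_notransaction)
#   for category, entry in op.records:
#       repo.ui.debug('bundle2 record: %s\n' % category)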

def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    status = 'unknown' # used by debug output
    hardabort = False
    try:
        try:
            handler = parthandlermapping.get(part.type)
            if handler is None:
                status = 'unsupported-type'
                raise error.BundleUnknownFeatureError(parttype=part.type)
            indebug(op.ui, 'found a handler for part %r' % part.type)
            unknownparams = part.mandatorykeys - handler.params
            if unknownparams:
                unknownparams = list(unknownparams)
                unknownparams.sort()
                status = 'unsupported-params (%s)' % unknownparams
                raise error.BundleUnknownFeatureError(parttype=part.type,
                                                      params=unknownparams)
            status = 'supported'
        except error.BundleUnknownFeatureError as exc:
            if part.mandatory: # mandatory parts
                raise
            indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
            return # skip to part processing
        finally:
            if op.ui.debugflag:
                msg = ['bundle2-input-part: "%s"' % part.type]
                if not part.mandatory:
                    msg.append(' (advisory)')
                nbmp = len(part.mandatorykeys)
                nbap = len(part.params) - nbmp
                if nbmp or nbap:
                    msg.append(' (params:')
                    if nbmp:
                        msg.append(' %i mandatory' % nbmp)
                    if nbap:
                        msg.append(' %i advisory' % nbap)
                    msg.append(')')
                msg.append(' %s\n' % status)
                op.ui.debug(''.join(msg))

        # handler is called outside the above try block so that we don't
        # risk catching KeyErrors from anything other than the
        # parthandlermapping lookup (any KeyError raised by handler()
        # itself represents a defect of a different variety).
        output = None
        if op.captureoutput and op.reply is not None:
            op.ui.pushbuffer(error=True, subproc=True)
            output = ''
        try:
            handler(op, part)
        finally:
            if output is not None:
                output = op.ui.popbuffer()
            if output:
                outpart = op.reply.newpart('output', data=output,
                                           mandatory=False)
                outpart.addparam('in-reply-to', str(part.id), mandatory=False)
    # If exiting or interrupted, do not attempt to seek the stream in the
    # finally block below. This makes abort faster.
    except (SystemExit, KeyboardInterrupt):
        hardabort = True
        raise
    finally:
        # consume the part content to not corrupt the stream.
        if not hardabort:
            part.seek(0, 2)

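# Illustrative sketch (not part of the original module): a minimal advisory
# handler as it would be wired through parthandler() and _processpart(). The
# part type 'x-example' and its parameter are hypothetical.
#
#   @parthandler('x-example', ('key',))
#   def handlexample(op, part):
#       """record the (hypothetical) 'key' parameter of the part"""
#       op.records.add('x-example', part.params['key'])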

def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line).
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
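
# Illustrative note (not part of the original module): encodecaps and
# decodecaps round-trip a capabilities dictionary.
#
#   encodecaps({'HG20': (), 'changegroup': ['01', '02']})
#   # -> 'HG20\nchangegroup=01,02'
#   decodecaps('HG20\nchangegroup=01,02')
#   # -> {'HG20': [], 'changegroup': ['01', '02']}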

bundletypes = {
    "": ("", 'UN'),       # only when using unbundle on ssh and old http servers
                          # since the unification ssh accepts a header but there
                          # is no capability signaling it.
    "HG20": (), # special-cased below
    "HG10UN": ("HG10UN", 'UN'),
    "HG10BZ": ("HG10", 'BZ'),
    "HG10GZ": ("HG10GZ", 'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype('UN')
        self._compopts = None

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, 'UN'):
            return
        assert not any(n.lower() == 'compression' for n, v in self._params)
        self.addparam('Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the containers

        As the part is directly added to the containers, any failure to
        properly initialize the part after calling ``newpart`` should result
        in a failure of the whole bundling process.

        You can still fall back to manually creating and adding the part if
        you need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(' (%i params)' % len(self._params))
            msg.append(' %i parts total\n' % len(self._parts))
            self.ui.debug(''.join(msg))
        outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, 'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(self._getcorechunk(),
                                                     self._compopts):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, 'start of parts')
        for part in self._parts:
            outdebug(self.ui, 'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, 'end of bundle')
        yield _pack(_fpartheadersize, 0)


    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith('output'):
                salvaged.append(part.copy())
        return salvaged
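
    # Illustrative sketch (not part of the original module): assembling and
    # serializing a bundle, assuming a 'ui' object is available.
    #
    #   bundler = bundle20(ui)
    #   bundler.newpart('output', data='hello', mandatory=False)
    #   raw = ''.join(bundler.getchunks())   # magic, params, then parts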


class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)

def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != 'HG':
        raise error.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, 'start processing of %s stream' % magicstring)
    return unbundler

class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    _magicstring = 'HG20'

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        self._compengine = util.compengines.forbundletype('UN')
        self._compressed = None
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, 'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            params = self._processallparams(params)
        return params

    def _processallparams(self, paramsblock):
        """process a space-separated block of parameters, returning them as a
        dict"""
        params = util.sortdict()
        for p in paramsblock.split(' '):
            p = p.split('=', 1)
            p = [urlreq.unquote(i) for i in p]
            if len(p) < 2:
                p.append(None)
            self._processparam(*p)
            params[p[0]] = p[1]
        return params
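
    # Illustrative note (not part of the original module):
    #
    #   self._processallparams('Compression=BZ other')
    #
    # returns {'Compression': 'BZ', 'other': None} (as a util.sortdict) and
    # runs the registered 'compression' stream parameter handler.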

    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory, and this function will raise a KeyError when they are
        unknown.

        Note: no options are currently supported. Any input will be either
        ignored or will fail.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        try:
            handler = b2streamparamsmap[name.lower()]
        except KeyError:
            if name[0].islower():
                indebug(self.ui, "ignoring unknown parameter %r" % name)
            else:
                raise error.BundleUnknownFeatureError(params=(name,))
        else:
            handler(self, name, value)

    def _forwardchunks(self):
        """utility to transfer a bundle2 as binary

        This is made necessary by the fact the 'getbundle' command over 'ssh'
        has no way to know when the reply ends, relying on the bundle to be
        interpreted to know its end. This is terrible and we are sorry, but we
        needed to move forward to get general delta enabled.
        """
        yield self._magicstring
        assert 'params' not in vars(self)
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        yield _pack(_fstreamparamsize, paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            self._processallparams(params)
            yield params
        assert self._compengine.bundletype == 'UN'
        # From there, payload might need to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        emptycount = 0
        while emptycount < 2:
            # so we can brainlessly loop
            assert _fpartheadersize == _fpayloadsize
            size = self._unpack(_fpartheadersize)[0]
            yield _pack(_fpartheadersize, size)
            if size:
                emptycount = 0
            else:
                emptycount += 1
                continue
            if size == flaginterrupt:
                continue
            elif size < 0:
                raise error.BundleValueError('negative chunk size: %i' % size)
            yield self._readexact(size)


    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        # From there, payload needs to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        indebug(self.ui, 'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            part.seek(0, 2)
            headerblock = self._readpartheader()
        indebug(self.ui, 'end of bundle2 stream')

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        self.params # load params
        return self._compressed

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

formatmap = {'20': unbundle20}

b2streamparamsmap = {}

def b2streamparamhandler(name):
    """register a handler for a stream level parameter"""
    def decorator(func):
        assert name not in b2streamparamsmap
        b2streamparamsmap[name] = func
        return func
    return decorator

@b2streamparamhandler('compression')
def processcompression(unbundler, param, value):
    """read compression parameter and install payload decompression"""
    if value not in util.compengines.supportedbundletypes:
        raise error.BundleUnknownFeatureError(params=(param,),
                                              values=(value,))
    unbundler._compengine = util.compengines.forbundletype(value)
    if value is not None:
        unbundler._compressed = True
836
836
class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. The remote side
    should be able to safely ignore the advisory ones.

    Neither the data nor the parameters can be modified once generation has
    begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data='', mandatory=True):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise error.ProgrammingError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def __repr__(self):
        cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
        return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
                % (cls, id(self), self.id, self.type, self.mandatory))

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(self.type, self._mandatoryparams,
                              self._advisoryparams, self._data, self.mandatory)

    # methods used to define the part content
    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        """add a parameter to the part

        If 'mandatory' is set to True, the remote handler must claim support
        for this parameter or the unbundling will be aborted.

        The 'name' and 'value' cannot exceed 255 bytes each.
        """
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

    # methods used to generate the bundle2 stream
    def getchunks(self, ui):
        if self._generated is not None:
            raise error.ProgrammingError('part can only be consumed once')
        self._generated = False

        if ui.debugflag:
            msg = ['bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            if not self.data:
                msg.append(' empty payload')
            elif util.safehasattr(self.data, 'next'):
                msg.append(' streamed payload')
            else:
                msg.append(' %i bytes payload' % len(self.data))
            msg.append('\n')
            ui.debug(''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, 'part %s: "%s"' % (self.id, parttype))
        ## parttype
        header = [_pack(_fparttypesize, len(parttype)),
                  parttype, _pack(_fpartid, self.id),
                  ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        outdebug(ui, 'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, 'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug('bundle2-generatorexit\n')
            raise
        except BaseException as exc:
            # backup exception data for later
            ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
                     % exc)
            tb = sys.exc_info()[2]
            msg = 'unexpected error: %s' % exc
            interpart = bundlepart('error:abort', [('message', msg)],
                                   mandatory=False)
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, 'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            pycompat.raisewithtb(exc, tb)
        # end of payload
        outdebug(ui, 'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods used to provide data to a
        part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


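# Editor's note: a minimal, self-contained sketch of the part framing that
# ``bundlepart.getchunks`` emits, written with plain ``struct`` calls. It
# assumes the module's usual wire formats ('>B' for the part type size, '>I'
# for the part id, '>BB' for the parameter counts, '>i' for the header and
# payload size fields) and skips parameters for brevity.
def _demoframepart(parttype='output', partid=0, payload='hello'):
    """sketch: return the bytes getchunks would yield for a mandatory,
    parameter-less part (illustrative only)"""
    import struct
    name = parttype.upper()                           # mandatory: upper-case
    header = (struct.pack('>B', len(name)) + name     # type size + type
              + struct.pack('>I', partid)             # part id
              + struct.pack('>BB', 0, 0))             # 0 mandatory, 0 advisory
    out = struct.pack('>i', len(header)) + header     # header size + header
    out += struct.pack('>i', len(payload)) + payload  # one payload chunk
    out += struct.pack('>i', 0)                       # empty chunk ends part
    return out
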
flaginterrupt = -1

class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows transmitting an exception raised on the producer side during
    part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):

        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' opening out of band context\n')
        indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, 'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        _processpart(op, part)
        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' closing out of band context\n')

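# Editor's note: a sketch of how a consumer interprets the payload size field
# while reading a part, mirroring the loop in ``unbundlepart._payloadchunks``
# below: a positive size announces a data chunk, ``flaginterrupt`` (-1)
# announces an out-of-band part handled by ``interrupthandler``, and 0
# terminates the payload. ``fp`` stands for any file-like object positioned
# on a payload size field; '>i' is assumed to match ``_fpayloadsize``.
def _demoreadpayload(fp):
    """sketch: yield the data chunks of one part payload (illustrative)"""
    import struct
    while True:
        size = struct.unpack('>i', fp.read(4))[0]
        if size == 0:                 # end of payload
            return
        if size == flaginterrupt:     # out-of-band part follows
            # a real reader would run interrupthandler(ui, fp)() here
            raise NotImplementedError('interrupt handling elided in sketch')
        if size < 0:
            raise ValueError('negative payload chunk size: %i' % size)
        yield fp.read(size)
# e.g. ''.join(_demoreadpayload(fp)) reassembles a whole part payload.
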
class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise error.ProgrammingError('no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable('no repo access from stream interruption')

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._payloadstream = None
        self._readheader()
        self._mandatory = None
        self._chunkindex = [] # (payload, file) position tuples for chunk starts
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to setup all logic related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = util.sortdict(self.mandatoryparams)
        self.params.update(self.advisoryparams)
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, 'Must start with chunk 0'
            self._chunkindex.append((0, self._tellfp()))
        else:
            assert chunknum < len(self._chunkindex), \
                   'Unknown chunk %d' % chunknum
            self._seekfp(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]
        payloadsize = self._unpack(_fpayloadsize)[0]
        indebug(self.ui, 'payload chunk size: %i' % payloadsize)
        while payloadsize:
            if payloadsize == flaginterrupt:
                # interruption detection, the handler will now read a
                # single part and process it.
                interrupthandler(self.ui, self._fp)()
            elif payloadsize < 0:
                msg = 'negative payload chunk size: %i' % payloadsize
                raise error.BundleValueError(msg)
            else:
                result = self._readexact(payloadsize)
                chunknum += 1
                pos += payloadsize
                if chunknum == len(self._chunkindex):
                    self._chunkindex.append((pos, self._tellfp()))
                yield result
            payloadsize = self._unpack(_fpayloadsize)[0]
            indebug(self.ui, 'payload chunk size: %i' % payloadsize)

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError('Unknown chunk')

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, 'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, 'part id: "%s"' % self.id)
        # extract mandatory bit from type
        self.mandatory = (self.type != self.type.lower())
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param values
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug('bundle2-input-part: total payload size %i\n'
                              % self._pos)
            self.consumed = True
        return data

    def tell(self):
        return self._pos

    def seek(self, offset, whence=0):
        if whence == 0:
            newpos = offset
        elif whence == 1:
            newpos = self._pos + offset
        elif whence == 2:
            if not self.consumed:
                self.read()
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError('Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            self.read()
        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError('Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_('Seek failed\n'))
            self._pos = newpos

    def _seekfp(self, offset, whence=0):
        """move the underlying file pointer

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2-level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def _tellfp(self):
        """return the file offset, or None if file is not seekable

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2-level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError as e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None

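# Editor's note: the chunk index built by ``unbundlepart._payloadchunks``
# maps payload offsets to file offsets so that ``seek`` can resume reading
# mid-part. The sketch below replays ``_findchunk``'s lookup rule on a toy
# index; the (payload position, file position) pairs are invented values.
def _demofindchunk():
    """sketch: locate the chunk containing payload offset 7 (illustrative)"""
    chunkindex = [(0, 100), (5, 109), (12, 120)]  # chunks of 5 and 7 bytes
    pos = 7
    for chunk, (ppos, fpos) in enumerate(chunkindex):
        if ppos == pos:
            return chunk, 0
        elif ppos > pos:
            # offset 7 falls 2 bytes into the chunk starting at offset 5,
            # so this returns (1, 2)
            return chunk - 1, pos - chunkindex[chunk - 1][0]
    raise ValueError('Unknown chunk')
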
# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {'HG20': (),
                'error': ('abort', 'unsupportedcontent', 'pushraced',
                          'pushkey'),
                'listkeys': (),
                'pushkey': (),
                'digests': tuple(sorted(util.DIGESTS.keys())),
                'remote-changegroup': ('http', 'https'),
                'hgtagsfnodes': (),
               }

def getrepocaps(repo, allowpushback=False):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.
    """
    caps = capabilities.copy()
    caps['changegroup'] = tuple(sorted(
        changegroup.supportedincomingversions(repo)))
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple('V%i' % v for v in obsolete.formats)
        caps['obsmarkers'] = supportedformat
    if allowpushback:
        caps['pushback'] = ()
    return caps

def bundle2caps(remote):
    """return the bundle capabilities of a peer as a dict"""
    raw = remote.capable('bundle2')
    if not raw and raw != '':
        return {}
    capsblob = urlreq.unquote(remote.capable('bundle2'))
    return decodecaps(capsblob)

def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps
    dict"""
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

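# Editor's note: a tiny usage sketch for ``obsmarkersversion``. The caps dict
# mimics what ``getrepocaps`` produces when obsmarker exchange is enabled;
# the version numbers are illustrative.
def _demoobsmarkersversion():
    """sketch: extract obsmarker versions from a capabilities dict"""
    caps = {'obsmarkers': ('V0', 'V1')}  # shaped like getrepocaps output
    assert obsmarkersversion(caps) == [0, 1]
    assert obsmarkersversion({}) == []   # no obsmarkers capability at all
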
def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
                   vfs=None, compression=None, compopts=None):
    if bundletype.startswith('HG10'):
        cg = changegroup.getchangegroup(repo, source, outgoing, version='01')
        return writebundle(ui, cg, filename, bundletype, vfs=vfs,
                           compression=compression, compopts=compopts)
    elif not bundletype.startswith('HG20'):
        raise error.ProgrammingError('unknown bundle type: %s' % bundletype)

    bundle = bundle20(ui)
    bundle.setcompression(compression, compopts)
    _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
    chunkiter = bundle.getchunks()

    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)

def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
    # We should eventually reconcile this logic with the one behind
    # 'exchange.getbundle2partsgenerator'.
    #
    # The types of input from 'getbundle' and 'writenewbundle' are a bit
    # different right now. So we keep them separated for now for the sake of
    # simplicity.

    # we always want a changegroup in such a bundle
    cgversion = opts.get('cg.version')
    if cgversion is None:
        cgversion = changegroup.safeversion(repo)
    cg = changegroup.getchangegroup(repo, source, outgoing,
                                    version=cgversion)
    part = bundler.newpart('changegroup', data=cg.getchunks())
    part.addparam('version', cg.version)
    if 'clcount' in cg.extras:
        part.addparam('nbchanges', str(cg.extras['clcount']),
                      mandatory=False)

    addparttagsfnodescache(repo, bundler, outgoing)

def addparttagsfnodescache(repo, bundler, outgoing):
    # we include the tags fnode cache for the bundled changesets
    # (as an optional part)
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.missingheads:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode is not None:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart('hgtagsfnodes', data=''.join(chunks))

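# Editor's note: the 'hgtagsfnodes' payload built above is simply a
# concatenation of (changeset node, .hgtags filenode) pairs, 20 raw bytes
# each (hence the "40 bytes per head" remark). The sketch below shows how a
# consumer could split such a payload back into pairs.
def _demosplittagsfnodes(data):
    """sketch: split an 'hgtagsfnodes' payload into (node, fnode) pairs"""
    assert len(data) % 40 == 0, 'payload must be a sequence of 40-byte pairs'
    for offset in range(0, len(data), 40):
        yield data[offset:offset + 20], data[offset + 20:offset + 40]
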
def buildobsmarkerspart(bundler, markers):
    """add an obsmarker part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker
    format."""
    if not markers:
        return None

    remoteversions = obsmarkersversion(bundler.capabilities)
    version = obsolete.commonversion(remoteversions)
    if version is None:
        raise ValueError('bundler does not support common obsmarker format')
    stream = obsolete.encodemarkers(markers, True, version=version)
    return bundler.newpart('obsmarkers', data=stream)

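# Editor's note: a hedged usage sketch for ``buildobsmarkerspart``. It
# assumes ``bundler`` is a ``bundle20`` whose capabilities may or may not
# advertise an obsmarker version, and that ``markers`` came from the repo's
# obsstore; both names are stand-ins for whatever the caller has at hand.
def _demoaddobsmarkers(bundler, markers):
    """sketch: add markers to a bundle, tolerating incapable bundlers"""
    try:
        part = buildobsmarkerspart(bundler, markers)
    except ValueError:
        # the receiving side advertises no obsmarker format we know
        part = None
    return part  # None when nothing was added or no common format exists
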
def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
                compopts=None):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
    """

    if bundletype == "HG20":
        bundle = bundle20(ui)
        bundle.setcompression(compression, compopts)
        part = bundle.newpart('changegroup', data=cg.getchunks())
        part.addparam('version', cg.version)
        if 'clcount' in cg.extras:
            part.addparam('nbchanges', str(cg.extras['clcount']),
                          mandatory=False)
        chunkiter = bundle.getchunks()
    else:
        # compression argument is only for the bundle2 case
        assert compression is None
        if cg.version != '01':
            raise error.Abort(_('old bundle types only support v1 '
                                'changegroups'))
        header, comp = bundletypes[bundletype]
        if comp not in util.compengines.supportedbundletypes:
            raise error.Abort(_('unknown stream compression type: %s')
                              % comp)
        compengine = util.compengines.forbundletype(comp)
        def chunkiter():
            yield header
            for chunk in compengine.compressstream(cg.getchunks(), compopts):
                yield chunk
        chunkiter = chunkiter()

    # parse the changegroup data, otherwise we will block
    # in case of sshrepo because we don't know the end of the stream
    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)

@parthandler('changegroup', ('version', 'nbchanges', 'treemanifest'))
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will see massive rework before
    being inflicted on any end-user.
    """
    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    unpackerversion = inpart.params.get('version', '01')
    # We should raise an appropriate exception here
    cg = changegroup.getunbundler(unpackerversion, inpart, None)
    # the source and url passed here are overwritten by the one contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    nbchangesets = None
    if 'nbchanges' in inpart.params:
        nbchangesets = int(inpart.params.get('nbchanges'))
    if ('treemanifest' in inpart.params and
        'treemanifest' not in op.repo.requirements):
        if len(op.repo.changelog) != 0:
            raise error.Abort(_(
                "bundle contains tree manifests, but local repo is "
                "non-empty and does not use tree manifests"))
        op.repo.requirements.add('treemanifest')
        op.repo._applyopenerreqs()
        op.repo._writerequirements()
    ret = cg.apply(op.repo, 'bundle2', 'bundle2', expectedtotal=nbchangesets)
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup', mandatory=False)
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

_remotechangegroupparams = tuple(['url', 'size', 'digests'] +
                                 ['digest:%s' % k for k in util.DIGESTS.keys()])
@parthandler('remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given an url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
    - url: the url to the bundle10.
    - size: the bundle10 file size. It is used to validate that what was
      retrieved by the client matches the server knowledge about the bundle.
    - digests: a space separated list of the digest types provided as
      parameters.
    - digest:<digest-type>: the hexadecimal representation of the digest with
      that name. Like the size, it is used to validate that what was retrieved
      by the client matches what the server knows about the bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params['url']
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
    parsed_url = util.url(raw_url)
    if parsed_url.scheme not in capabilities['remote-changegroup']:
        raise error.Abort(_('remote-changegroup does not support %s urls') %
                          parsed_url.scheme)

    try:
        size = int(inpart.params['size'])
    except ValueError:
        raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
                          % 'size')
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')

    digests = {}
    for typ in inpart.params.get('digests', '').split():
        param = 'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise error.Abort(_('remote-changegroup: missing "%s" param') %
                              param)
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    from . import exchange
    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise error.Abort(_('%s: not a bundle version 1.0') %
                          util.hidepassword(raw_url))
    ret = cg.apply(op.repo, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup')
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except error.Abort as e:
        raise error.Abort(_('bundle at %s is corrupted:\n%s') %
                          (util.hidepassword(raw_url), str(e)))
    assert not inpart.read()

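# Editor's note: a sketch of the producer side of 'remote-changegroup'. It
# builds the parameters the handler above expects: 'url', 'size', 'digests',
# and one 'digest:<type>' per listed digest type. The URL, size and digest
# value are fabricated examples, and ``bundler`` stands for any bundle20.
def _demoremotechangegroup(bundler):
    """sketch: emit a remote-changegroup part pointing at a hosted bundle"""
    part = bundler.newpart('remote-changegroup')
    part.addparam('url', 'https://example.com/prebuilt.hg')  # made-up URL
    part.addparam('size', '12345')              # bundle10 size, in bytes
    part.addparam('digests', 'sha1')            # space separated type list
    part.addparam('digest:sha1',                # one param per listed type
                  'da39a3ee5e6b4b0d3255bfef95601890afd80709')
    return part
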
@parthandler('reply:changegroup', ('return', 'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    if sorted(heads) != sorted(op.repo.heads()):
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')

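# Editor's note: the 'check:heads' payload is just the heads the client
# observed, concatenated as raw 20-byte nodes, matching the fixed 20-byte
# reads in the handler above. A producer-side sketch, where ``bundler`` and
# ``observedheads`` are stand-ins for the caller's objects:
def _demoaddcheckheads(bundler, observedheads):
    """sketch: record the heads the client saw so the server can detect a
    concurrent push (``observedheads`` is a list of 20-byte nodes)"""
    return bundler.newpart('check:heads', data=''.join(observedheads))
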
@parthandler('output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(_('remote: %s\n') % line)

@parthandler('replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

1598 class AbortFromPart(error.Abort):
1614 class AbortFromPart(error.Abort):
1599 """Sub-class of Abort that denotes an error from a bundle2 part."""
1615 """Sub-class of Abort that denotes an error from a bundle2 part."""
1600
1616
1601 @parthandler('error:abort', ('message', 'hint'))
1617 @parthandler('error:abort', ('message', 'hint'))
1602 def handleerrorabort(op, inpart):
1618 def handleerrorabort(op, inpart):
1603 """Used to transmit abort error over the wire"""
1619 """Used to transmit abort error over the wire"""
1604 raise AbortFromPart(inpart.params['message'],
1620 raise AbortFromPart(inpart.params['message'],
1605 hint=inpart.params.get('hint'))
1621 hint=inpart.params.get('hint'))
1606
1622
1607 @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
1623 @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
1608 'in-reply-to'))
1624 'in-reply-to'))
1609 def handleerrorpushkey(op, inpart):
1625 def handleerrorpushkey(op, inpart):
1610 """Used to transmit failure of a mandatory pushkey over the wire"""
1626 """Used to transmit failure of a mandatory pushkey over the wire"""
1611 kwargs = {}
1627 kwargs = {}
1612 for name in ('namespace', 'key', 'new', 'old', 'ret'):
1628 for name in ('namespace', 'key', 'new', 'old', 'ret'):
1613 value = inpart.params.get(name)
1629 value = inpart.params.get(name)
1614 if value is not None:
1630 if value is not None:
1615 kwargs[name] = value
1631 kwargs[name] = value
1616 raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)
1632 raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)
1617
1633
@parthandler('error:unsupportedcontent', ('parttype', 'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get('parttype')
    if parttype is not None:
        kwargs['parttype'] = parttype
    params = inpart.params.get('params')
    if params is not None:
        kwargs['params'] = params.split('\0')

    raise error.BundleUnknownFeatureError(**kwargs)

@parthandler('error:pushraced', ('message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_('push failed:'), inpart.params['message'])

@parthandler('listkeys', ('namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params['namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add('listkeys', (namespace, r))

@parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params['namespace'])
    key = dec(inpart.params['key'])
    old = dec(inpart.params['old'])
    new = dec(inpart.params['new'])
    # Grab the transaction to ensure that we have the lock before performing
    # the pushkey.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {'namespace': namespace,
              'key': key,
              'old': old,
              'new': new}
    op.records.add('pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart('reply:pushkey')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('return', '%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in ('namespace', 'key', 'new', 'old', 'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)

@parthandler('reply:pushkey', ('return', 'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params['return'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('pushkey', {'return': ret}, partid)

@parthandler('obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
        op.ui.write(('obsmarker-exchange: %i bytes received\n')
                    % len(markerdata))
    # The mergemarkers call will crash if marker creation is not enabled.
    # We want to avoid this if the part is advisory.
    if not inpart.mandatory and op.repo.obsstore.readonly:
        op.repo.ui.debug('ignoring obsolescence markers, feature not '
                         'enabled\n')
        return
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    op.repo.invalidatevolatilesets()
    if new:
        op.repo.ui.status(_('%i new obsolescence markers\n') % new)
    op.records.add('obsmarkers', {'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart('reply:obsmarkers')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('new', '%i' % new, mandatory=False)


@parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers part request"""
    ret = int(inpart.params['new'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('obsmarkers', {'new': ret}, partid)

@parthandler('hgtagsfnodes')
def handlehgtagsfnodes(op, inpart):
    """Applies .hgtags fnodes cache entries to the local repo.

    Payload is pairs of 20 byte changeset nodes and filenodes.
    """
    # Grab the transaction so we ensure that we have the lock at this point.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    cache = tags.hgtagsfnodescache(op.repo.unfiltered())

    count = 0
    while True:
        node = inpart.read(20)
        fnode = inpart.read(20)
        if len(node) < 20 or len(fnode) < 20:
            op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
            break
        cache.setfnode(node, fnode)
        count += 1

    cache.write()
    op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
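
# Illustrative sketch (not part of the original code): the 'hgtagsfnodes'
# payload consumed by the loop above is just back-to-back 20-byte pairs, so
# a sender could assemble it like this (the node values are made up):
#
#   pairs = [(csnode1, fnode1), (csnode2, fnode2)]  # each item: 20 raw bytes
#   bundler.newpart('hgtagsfnodes',
#                   data=''.join(n + f for n, f in pairs))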
@@ -1,2004 +1,1989 @@
# exchange.py - utility to exchange data between repos.
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import errno
import hashlib

from .i18n import _
from .node import (
    hex,
    nullid,
)
from . import (
    bookmarks as bookmod,
    bundle2,
    changegroup,
    discovery,
    error,
    lock as lockmod,
    obsolete,
    phases,
    pushkey,
    scmutil,
    sslutil,
    streamclone,
    url as urlmod,
    util,
)

urlerr = util.urlerr
urlreq = util.urlreq

# Maps bundle version human names to changegroup versions.
_bundlespeccgversions = {'v1': '01',
                         'v2': '02',
                         'packed1': 's1',
                         'bundle2': '02', #legacy
                        }

# Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
_bundlespecv1compengines = {'gzip', 'bzip2', 'none'}

def parsebundlespec(repo, spec, strict=True, externalnames=False):
    """Parse a bundle string specification into parts.

    Bundle specifications denote a well-defined bundle/exchange format.
    The content of a given specification should not change over time in
    order to ensure that bundles produced by a newer version of Mercurial are
    readable from an older version.

    The string currently has the form:

       <compression>-<type>[;<parameter0>[;<parameter1>]]

    Where <compression> is one of the supported compression formats
    and <type> is (currently) a version string. A ";" can follow the type and
    all text afterwards is interpreted as URI encoded, ";" delimited key=value
    pairs.

    If ``strict`` is True (the default) <compression> is required. Otherwise,
    it is optional.

    If ``externalnames`` is False (the default), the human-centric names will
    be converted to their internal representation.

    Returns a 3-tuple of (compression, version, parameters). Compression will
    be ``None`` if not in strict mode and a compression isn't defined.

    An ``InvalidBundleSpecification`` is raised when the specification is
    not syntactically well formed.

    An ``UnsupportedBundleSpecification`` is raised when the compression or
    bundle type/version is not recognized.

    Note: this function will likely eventually return a more complex data
    structure, including bundle2 part information.
    """
    def parseparams(s):
        if ';' not in s:
            return s, {}

        params = {}
        version, paramstr = s.split(';', 1)

        for p in paramstr.split(';'):
            if '=' not in p:
                raise error.InvalidBundleSpecification(
                    _('invalid bundle specification: '
                      'missing "=" in parameter: %s') % p)

            key, value = p.split('=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            params[key] = value

        return version, params


    if strict and '-' not in spec:
        raise error.InvalidBundleSpecification(
            _('invalid bundle specification; '
              'must be prefixed with compression: %s') % spec)

    if '-' in spec:
        compression, version = spec.split('-', 1)

        if compression not in util.compengines.supportedbundlenames:
            raise error.UnsupportedBundleSpecification(
                _('%s compression is not supported') % compression)

        version, params = parseparams(version)

        if version not in _bundlespeccgversions:
            raise error.UnsupportedBundleSpecification(
                _('%s is not a recognized bundle version') % version)
    else:
        # Value could be just the compression or just the version, in which
        # case some defaults are assumed (but only when not in strict mode).
        assert not strict

        spec, params = parseparams(spec)

        if spec in util.compengines.supportedbundlenames:
            compression = spec
            version = 'v1'
            # Generaldelta repos require v2.
            if 'generaldelta' in repo.requirements:
                version = 'v2'
            # Modern compression engines require v2.
            if compression not in _bundlespecv1compengines:
                version = 'v2'
        elif spec in _bundlespeccgversions:
            if spec == 'packed1':
                compression = 'none'
            else:
                compression = 'bzip2'
            version = spec
        else:
            raise error.UnsupportedBundleSpecification(
                _('%s is not a recognized bundle specification') % spec)

    # Bundle version 1 only supports a known set of compression engines.
    if version == 'v1' and compression not in _bundlespecv1compengines:
        raise error.UnsupportedBundleSpecification(
            _('compression engine %s is not supported on v1 bundles') %
            compression)

    # The specification for packed1 can optionally declare the data formats
    # required to apply it. If we see this metadata, compare against what the
    # repo supports and error if the bundle isn't compatible.
    if version == 'packed1' and 'requirements' in params:
        requirements = set(params['requirements'].split(','))
        missingreqs = requirements - repo.supportedformats
        if missingreqs:
            raise error.UnsupportedBundleSpecification(
                _('missing support for repository features: %s') %
                ', '.join(sorted(missingreqs)))

    if not externalnames:
        engine = util.compengines.forbundlename(compression)
        compression = engine.bundletype()[1]
        version = _bundlespeccgversions[version]
    return compression, version, params

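# A few illustrative calls (not from the original source), assuming a 'repo'
# whose requirements include 'generaldelta':
#
#   >>> parsebundlespec(repo, 'gzip-v2')
#   ('GZ', '02', {})
#   >>> parsebundlespec(repo, 'bzip2-v2;cg.version=02')
#   ('BZ', '02', {'cg.version': '02'})
#   >>> parsebundlespec(repo, 'gzip', strict=False)  # generaldelta forces v2
#   ('GZ', '02', {})
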
def readbundle(ui, fh, fname, vfs=None):
    header = changegroup.readexactly(fh, 4)

    alg = None
    if not fname:
        fname = "stream"
        if not header.startswith('HG') and header.startswith('\0'):
            fh = changegroup.headerlessfixup(fh, header)
            header = "HG10"
            alg = 'UN'
    elif vfs:
        fname = vfs.join(fname)

    magic, version = header[0:2], header[2:4]

    if magic != 'HG':
        raise error.Abort(_('%s: not a Mercurial bundle') % fname)
    if version == '10':
        if alg is None:
            alg = changegroup.readexactly(fh, 2)
        return changegroup.cg1unpacker(fh, alg)
    elif version.startswith('2'):
        return bundle2.getunbundler(ui, fh, magicstring=magic + version)
    elif version == 'S1':
        return streamclone.streamcloneapplier(fh)
    else:
        raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))

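# For reference (summarising the dispatch above, not original text), the
# 4-byte header looks like:
#
#   'HG10' + 2-byte compression ('BZ', 'GZ' or 'UN') -> changegroup v1 bundle
#   'HG2x' (currently 'HG20')                        -> bundle2
#   'HGS1'                                           -> stream clone bundle
#   leading '\0'                                     -> headerless changegroup,
#                                                       fixed up as 'HG10'/'UN'
#
# e.g. readbundle(ui, open('dump.hg', 'rb'), 'dump.hg') returns a bundle2
# unbundler object for a file that starts with 'HG20'.
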
def getbundlespec(ui, fh):
    """Infer the bundlespec from a bundle file handle.

    The input file handle is seeked and the original seek position is not
    restored.
    """
    def speccompression(alg):
        try:
            return util.compengines.forbundletype(alg).bundletype()[0]
        except KeyError:
            return None

    b = readbundle(ui, fh, None)
    if isinstance(b, changegroup.cg1unpacker):
        alg = b._type
        if alg == '_truncatedBZ':
            alg = 'BZ'
        comp = speccompression(alg)
        if not comp:
            raise error.Abort(_('unknown compression algorithm: %s') % alg)
        return '%s-v1' % comp
    elif isinstance(b, bundle2.unbundle20):
        if 'Compression' in b.params:
            comp = speccompression(b.params['Compression'])
            if not comp:
                raise error.Abort(_('unknown compression algorithm: %s')
                                  % b.params['Compression'])
        else:
            comp = 'none'

        version = None
        for part in b.iterparts():
            if part.type == 'changegroup':
                version = part.params['version']
                if version in ('01', '02'):
                    version = 'v2'
                else:
                    raise error.Abort(_('changegroup version %s does not have '
                                        'a known bundlespec') % version,
                                      hint=_('try upgrading your Mercurial '
                                             'client'))

        if not version:
            raise error.Abort(_('could not identify changegroup version in '
                                'bundle'))

        return '%s-%s' % (comp, version)
    elif isinstance(b, streamclone.streamcloneapplier):
        requirements = streamclone.readbundle1header(fh)[2]
        params = 'requirements=%s' % ','.join(sorted(requirements))
        return 'none-packed1;%s' % urlreq.quote(params)
    else:
        raise error.Abort(_('unknown bundle type: %s') % b)

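# Illustrative usage (not from the original source): recovering the spec of
# a bundle written by 'hg bundle', keeping in mind the handle is consumed:
#
#   with open('changes.hg', 'rb') as fh:
#       spec = getbundlespec(ui, fh)   # e.g. 'bzip2-v1' or 'gzip-v2'
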
-def buildobsmarkerspart(bundler, markers):
-    """add an obsmarker part to the bundler with <markers>
-
-    No part is created if markers is empty.
-    Raises ValueError if the bundler doesn't support any known obsmarker format.
-    """
-    if markers:
-        remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
-        version = obsolete.commonversion(remoteversions)
-        if version is None:
-            raise ValueError('bundler does not support common obsmarker format')
-        stream = obsolete.encodemarkers(markers, True, version=version)
-        return bundler.newpart('obsmarkers', data=stream)
-    return None
-
def _computeoutgoing(repo, heads, common):
    """Computes which revs are outgoing given a set of common
    and a set of heads.

    This is a separate function so extensions can have access to
    the logic.

    Returns a discovery.outgoing object.
    """
    cl = repo.changelog
    if common:
        hasnode = cl.hasnode
        common = [n for n in common if hasnode(n)]
    else:
        common = [nullid]
    if not heads:
        heads = cl.heads()
    return discovery.outgoing(repo, common, heads)

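# Hypothetical illustration (names made up): computing what would be sent to
# a peer known to have 'remotehead':
#
#   outgoing = _computeoutgoing(repo, heads=[], common=[remotehead])
#   # outgoing.missing lists the nodes to bundle; an empty 'common' falls
#   # back to [nullid], i.e. everything, and empty 'heads' means all heads.
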
def _forcebundle1(op):
    """return true if a pull/push must use bundle1

    This function is used to allow testing of the older bundle version"""
    ui = op.repo.ui
    forcebundle1 = False
    # The goal of this config is to allow developers to choose the bundle
    # version used during exchange. This is especially handy during tests.
    # Value is a list of bundle versions to be picked from; the highest
    # version should be used.
    #
    # developer config: devel.legacy.exchange
    exchange = ui.configlist('devel', 'legacy.exchange')
    forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
    return forcebundle1 or not op.remote.capable('bundle2')

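# For testing, the developer knob read above can be set like this:
#
#   [devel]
#   legacy.exchange = bundle1
#
# Listing 'bundle1' without 'bundle2' forces the old format even against a
# bundle2-capable peer; listing 'bundle2' keeps the new one.
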
class pushoperation(object):
    """An object that represents a single push operation.

    Its purpose is to carry push related state and very common operations.

    A new pushoperation should be created at the beginning of each push and
    discarded afterward.
    """

    def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
                 bookmarks=()):
        # repo we push from
        self.repo = repo
        self.ui = repo.ui
        # repo we push to
        self.remote = remote
        # force option provided
        self.force = force
        # revs to be pushed (None is "all")
        self.revs = revs
        # bookmarks explicitly pushed
        self.bookmarks = bookmarks
        # allow push of new branch
        self.newbranch = newbranch
        # did a local lock get acquired?
        self.locallocked = None
        # steps already performed
        # (used to check what steps have been already performed through bundle2)
        self.stepsdone = set()
        # Integer version of the changegroup push result
        # - None means nothing to push
        # - 0 means HTTP error
        # - 1 means we pushed and remote head count is unchanged *or*
        #   we have outgoing changesets but refused to push
        # - other values as described by addchangegroup()
        self.cgresult = None
        # Boolean value for the bookmark push
        self.bkresult = None
        # discover.outgoing object (contains common and outgoing data)
        self.outgoing = None
        # all remote heads before the push
        self.remoteheads = None
        # testable as a boolean indicating if any nodes are missing locally.
        self.incoming = None
        # phases changes that must be pushed alongside the changesets
        self.outdatedphases = None
        # phases changes that must be pushed if changeset push fails
        self.fallbackoutdatedphases = None
        # outgoing obsmarkers
        self.outobsmarkers = set()
        # outgoing bookmarks
        self.outbookmarks = []
        # transaction manager
        self.trmanager = None
        # map { pushkey partid -> callback handling failure}
        # used to handle exception from mandatory pushkey part failure
        self.pkfailcb = {}

    @util.propertycache
    def futureheads(self):
        """future remote heads if the changeset push succeeds"""
        return self.outgoing.missingheads

    @util.propertycache
    def fallbackheads(self):
        """future remote heads if the changeset push fails"""
        if self.revs is None:
            # no target to push, all common are relevant
            return self.outgoing.commonheads
        unfi = self.repo.unfiltered()
        # I want cheads = heads(::missingheads and ::commonheads)
        # (missingheads is revs with secret changesets filtered out)
        #
        # This can be expressed as:
        #     cheads = ( (missingheads and ::commonheads)
        #              + (commonheads and ::missingheads))
        #
        # while trying to push we already computed the following:
        #     common = (::commonheads)
        #     missing = ((commonheads::missingheads) - commonheads)
        #
        # We can pick:
        # * missingheads part of common (::commonheads)
        common = self.outgoing.common
        nm = self.repo.changelog.nodemap
        cheads = [node for node in self.revs if nm[node] in common]
        # and
        # * commonheads parents on missing
        revset = unfi.set('%ln and parents(roots(%ln))',
                          self.outgoing.commonheads,
                          self.outgoing.missing)
        cheads.extend(c.node() for c in revset)
        return cheads

    @property
    def commonheads(self):
        """set of all common heads after changeset bundle push"""
        if self.cgresult:
            return self.futureheads
        else:
            return self.fallbackheads

# mapping of messages used when pushing bookmarks
bookmsgmap = {'update': (_("updating bookmark %s\n"),
                         _('updating bookmark %s failed!\n')),
              'export': (_("exporting bookmark %s\n"),
                         _('exporting bookmark %s failed!\n')),
              'delete': (_("deleting remote bookmark %s\n"),
                         _('deleting remote bookmark %s failed!\n')),
              }


def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
         opargs=None):
    '''Push outgoing changesets (limited by revs) from a local
    repository to remote. Return an integer:
      - None means nothing to push
      - 0 means HTTP error
      - 1 means we pushed and remote head count is unchanged *or*
        we have outgoing changesets but refused to push
      - other values as described by addchangegroup()
    '''
    if opargs is None:
        opargs = {}
    pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
                           **opargs)
    if pushop.remote.local():
        missing = (set(pushop.repo.requirements)
                   - pushop.remote.local().supported)
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    # there are two ways to push to remote repo:
    #
    # addchangegroup assumes local user can lock remote
    # repo (local filesystem, old ssh servers).
    #
    # unbundle assumes local user cannot lock remote repo (new ssh
    # servers, http servers).

    if not pushop.remote.canpush():
        raise error.Abort(_("destination does not support push"))
    # get local lock as we might write phase data
    localwlock = locallock = None
    try:
        # bundle2 push may receive a reply bundle touching bookmarks or other
        # things requiring the wlock. Take it now to ensure proper ordering.
        maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
        if (not _forcebundle1(pushop)) and maypushback:
            localwlock = pushop.repo.wlock()
        locallock = pushop.repo.lock()
        pushop.locallocked = True
    except IOError as err:
        pushop.locallocked = False
        if err.errno != errno.EACCES:
            raise
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = 'cannot lock source repository: %s\n' % err
        pushop.ui.debug(msg)
    try:
        if pushop.locallocked:
            pushop.trmanager = transactionmanager(pushop.repo,
                                                  'push-response',
                                                  pushop.remote.url())
        pushop.repo.checkpush(pushop)
        lock = None
        unbundle = pushop.remote.capable('unbundle')
        if not unbundle:
            lock = pushop.remote.lock()
        try:
            _pushdiscovery(pushop)
            if not _forcebundle1(pushop):
                _pushbundle2(pushop)
            _pushchangeset(pushop)
            _pushsyncphase(pushop)
            _pushobsolete(pushop)
            _pushbookmark(pushop)
        finally:
            if lock is not None:
                lock.release()
            if pushop.trmanager:
                pushop.trmanager.close()
    finally:
        if pushop.trmanager:
            pushop.trmanager.release()
        if locallock is not None:
            locallock.release()
        if localwlock is not None:
            localwlock.release()

    return pushop

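# Typical call (sketch, not from the original source), interpreting the
# result codes documented in the docstring above:
#
#   pushop = push(repo, remote)
#   if pushop.cgresult is None:
#       repo.ui.status('nothing to push\n')
#   elif pushop.cgresult == 0:
#       repo.ui.warn('push failed\n')
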
# list of steps to perform discovery before push
pushdiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pushdiscoverymapping = {}

def pushdiscovery(stepname):
    """decorator for a function performing discovery before push

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pushdiscoverymapping dictionary directly."""
    def dec(func):
        assert stepname not in pushdiscoverymapping
        pushdiscoverymapping[stepname] = func
        pushdiscoveryorder.append(stepname)
        return func
    return dec

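# A hypothetical extension could register an extra step like so ('mystep'
# is made up); steps run in registration order, as the docstring warns:
#
#   @pushdiscovery('mystep')
#   def _pushdiscoverymystep(pushop):
#       pushop.ui.debug('running custom discovery\n')
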
def _pushdiscovery(pushop):
    """Run all discovery steps"""
    for stepname in pushdiscoveryorder:
        step = pushdiscoverymapping[stepname]
        step(pushop)

@pushdiscovery('changeset')
def _pushdiscoverychangeset(pushop):
    """discover the changesets that need to be pushed"""
    fci = discovery.findcommonincoming
    commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
                   commoninc=commoninc, force=pushop.force)
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc

@pushdiscovery('phase')
def _pushdiscoveryphase(pushop):
    """discover the phases that need to be pushed

    (computed for both success and failure cases of the changesets push)"""
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    remotephases = pushop.remote.listkeys('phases')
    publishing = remotephases.get('publishing', False)
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and not pushop.outgoing.missing # no changesets to be pushed
        and publishing):
        # When:
        # - this is a subrepo push
        # - and remote supports phases
        # - and no changesets are to be pushed
        # - and remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    ana = phases.analyzeremotephases(pushop.repo,
                                     pushop.fallbackheads,
                                     remotephases)
    pheads, droots = ana
    extracond = ''
    if not publishing:
        extracond = ' and public()'
    revset = 'heads((%%ln::%%ln) %s)' % extracond
    # Get the list of all revs draft on remote by public here.
    # XXX Beware that the revset breaks if droots is not strictly
    # XXX roots; we may want to ensure it is, but that is costly.
    fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
    if not outgoing.missing:
        future = fallback
    else:
        # adds changesets we are going to push as draft
        #
        # should not be necessary for publishing server, but because of an
        # issue fixed in xxxxx we have to do it anyway.
        fdroots = list(unfi.set('roots(%ln + %ln::)',
                                outgoing.missing, droots))
        fdroots = [f.node() for f in fdroots]
        future = list(unfi.set(revset, fdroots, pushop.futureheads))
    pushop.outdatedphases = future
    pushop.fallbackoutdatedphases = fallback

@pushdiscovery('obsmarker')
def _pushdiscoveryobsmarkers(pushop):
    if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
        and pushop.repo.obsstore
        and 'obsolete' in pushop.remote.listkeys('namespaces')):
        repo = pushop.repo
        # very naive computation, which can be quite expensive on big repos.
        # However: evolution is currently slow on them anyway.
        nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
        pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)

@pushdiscovery('bookmarks')
def _pushdiscoverybookmarks(pushop):
    ui = pushop.ui
    repo = pushop.repo.unfiltered()
    remote = pushop.remote
    ui.debug("checking for updated bookmarks\n")
    ancestors = ()
    if pushop.revs:
        revnums = map(repo.changelog.rev, pushop.revs)
        ancestors = repo.changelog.ancestors(revnums, inclusive=True)
    remotebookmark = remote.listkeys('bookmarks')

    explicit = set([repo._bookmarks.expandname(bookmark)
                    for bookmark in pushop.bookmarks])

    remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
    comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)

    def safehex(x):
        if x is None:
            return x
        return hex(x)

    def hexifycompbookmarks(bookmarks):
        for b, scid, dcid in bookmarks:
            yield b, safehex(scid), safehex(dcid)

    comp = [hexifycompbookmarks(marks) for marks in comp]
    addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
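    # (assumed reading of the 8 categories, inferred from the loops below:
    # addsrc/adddst exist on one side only, advsrc/advdst were fast-forwarded
    # on one side, diverge/differ moved on both sides, invalid could not be
    # compared, same are identical on both sides)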

    for b, scid, dcid in advsrc:
        if b in explicit:
            explicit.remove(b)
        if not ancestors or repo[scid].rev() in ancestors:
            pushop.outbookmarks.append((b, dcid, scid))
    # search added bookmark
    for b, scid, dcid in addsrc:
        if b in explicit:
            explicit.remove(b)
        pushop.outbookmarks.append((b, '', scid))
    # search for overwritten bookmark
    for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
        if b in explicit:
            explicit.remove(b)
        pushop.outbookmarks.append((b, dcid, scid))
    # search for bookmark to delete
    for b, scid, dcid in adddst:
        if b in explicit:
            explicit.remove(b)
        # treat as "deleted locally"
        pushop.outbookmarks.append((b, dcid, ''))
    # identical bookmarks shouldn't get reported
    for b, scid, dcid in same:
        if b in explicit:
            explicit.remove(b)

    if explicit:
        explicit = sorted(explicit)
        # we should probably list all of them
        ui.warn(_('bookmark %s does not exist on the local '
                  'or remote repository!\n') % explicit[0])
        pushop.bkresult = 2

    pushop.outbookmarks.sort()

def _pushcheckoutgoing(pushop):
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    if not outgoing.missing:
        # nothing to push
        scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
        return False
    # something to push
    if not pushop.force:
        # if repo.obsstore == False --> no obsolete
        # then, save the iteration
        if unfi.obsstore:
            # these messages are kept short for the 80-char line limit
            mso = _("push includes obsolete changeset: %s!")
            mst = {"unstable": _("push includes unstable changeset: %s!"),
                   "bumped": _("push includes bumped changeset: %s!"),
                   "divergent": _("push includes divergent changeset: %s!")}
            # If we are to push, and there is at least one obsolete or
            # unstable changeset in missing, then at least one of the
            # missing heads will be obsolete or unstable. So checking
            # heads only is ok.
            for node in outgoing.missingheads:
                ctx = unfi[node]
                if ctx.obsolete():
                    raise error.Abort(mso % ctx)
                elif ctx.troubled():
                    raise error.Abort(mst[ctx.troubles()[0]] % ctx)

    discovery.checkheads(pushop)
    return True

# List of names of steps to perform for an outgoing bundle2, order matters.
b2partsgenorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
b2partsgenmapping = {}

def b2partsgenerator(stepname, idx=None):
    """decorator for a function generating a bundle2 part

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, change the b2partsgenmapping dictionary directly."""
    def dec(func):
        assert stepname not in b2partsgenmapping
        b2partsgenmapping[stepname] = func
        if idx is None:
            b2partsgenorder.append(stepname)
        else:
            b2partsgenorder.insert(idx, stepname)
        return func
    return dec

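# A hypothetical extension could contribute its own part and, via 'idx',
# force its position in b2partsgenorder (the part name below is made up):
#
#   @b2partsgenerator('mypart', idx=0)   # run before the builtin generators
#   def _pushb2mypart(pushop, bundler):
#       bundler.newpart('x-my-part', data='payload')
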
def _pushb2ctxcheckheads(pushop, bundler):
    """Generate race condition checking parts

    Exists as an independent function to aid extensions
    """
    if not pushop.force:
        bundler.newpart('check:heads', data=iter(pushop.remoteheads))

@b2partsgenerator('changeset')
def _pushb2ctx(pushop, bundler):
    """handle changegroup push through bundle2

    addchangegroup result is stored in the ``pushop.cgresult`` attribute.
    """
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    # Send known heads to the server for race detection.
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop)

    _pushb2ctxcheckheads(pushop, bundler)

    b2caps = bundle2.bundle2caps(pushop.remote)
    version = '01'
    cgversions = b2caps.get('changegroup')
    if cgversions:  # 3.1 and 3.2 ship with an empty value
        cgversions = [v for v in cgversions
                      if v in changegroup.supportedoutgoingversions(
                          pushop.repo)]
        if not cgversions:
            raise ValueError(_('no common changegroup version'))
        version = max(cgversions)
    cg = changegroup.getlocalchangegroupraw(pushop.repo, 'push',
                                            pushop.outgoing,
                                            version=version)
    cgpart = bundler.newpart('changegroup', data=cg)
    if cgversions:
        cgpart.addparam('version', version)
    if 'treemanifest' in pushop.repo.requirements:
        cgpart.addparam('treemanifest', '1')
    def handlereply(op):
        """extract addchangegroup returns from server reply"""
        cgreplies = op.records.getreplies(cgpart.id)
        assert len(cgreplies['changegroup']) == 1
        pushop.cgresult = cgreplies['changegroup'][0]['return']
    return handlereply

@b2partsgenerator('phase')
def _pushb2phases(pushop, bundler):
    """handle phase push through bundle2"""
    if 'phases' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'pushkey' not in b2caps:
        return
    pushop.stepsdone.add('phases')
    part2node = []

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, node in part2node:
            if partid == targetid:
                raise error.Abort(_('updating %s to public failed') % node)

    enc = pushkey.encode
    for newremotehead in pushop.outdatedphases:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('phases'))
        part.addparam('key', enc(newremotehead.hex()))
        part.addparam('old', enc(str(phases.draft)))
        part.addparam('new', enc(str(phases.public)))
        part2node.append((part.id, newremotehead))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        for partid, node in part2node:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            msg = None
            if not results:
                msg = _('server ignored update of %s to public!\n') % node
            elif not int(results[0]['return']):
                msg = _('updating %s to public failed!\n') % node
            if msg is not None:
                pushop.ui.warn(msg)
    return handlereply

@b2partsgenerator('obsmarkers')
def _pushb2obsmarkers(pushop, bundler):
    if 'obsmarkers' in pushop.stepsdone:
        return
    remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
    if obsolete.commonversion(remoteversions) is None:
        return
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        markers = sorted(pushop.outobsmarkers)
        bundle2.buildobsmarkerspart(bundler, markers)

@b2partsgenerator('bookmarks')
def _pushb2bookmarks(pushop, bundler):
    """handle bookmark push through bundle2"""
    if 'bookmarks' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'pushkey' not in b2caps:
        return
    pushop.stepsdone.add('bookmarks')
    part2book = []
    enc = pushkey.encode

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, book, action in part2book:
            if partid == targetid:
                raise error.Abort(bookmsgmap[action][1].rstrip() % book)
        # we should not be called for parts we did not generate
        assert False

    for book, old, new in pushop.outbookmarks:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('bookmarks'))
        part.addparam('key', enc(book))
        part.addparam('old', enc(old))
        part.addparam('new', enc(new))
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        part2book.append((part.id, book, action))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        ui = pushop.ui
        for partid, book, action in part2book:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            if not results:
                pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
            else:
                ret = int(results[0]['return'])
                if ret:
                    ui.status(bookmsgmap[action][0] % book)
                else:
                    ui.warn(bookmsgmap[action][1] % book)
                    if pushop.bkresult is not None:
                        pushop.bkresult = 1
    return handlereply


def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup but this will
    evolve in the future."""
    bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
    pushback = (pushop.trmanager
                and pushop.ui.configbool('experimental', 'bundle2.pushback'))

    # create reply capability
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
                                                      allowpushback=pushback))
    bundler.newpart('replycaps', data=capsblob)
    replyhandlers = []
    for partgenname in b2partsgenorder:
        partgen = b2partsgenmapping[partgenname]
        ret = partgen(pushop, bundler)
        if callable(ret):
            replyhandlers.append(ret)
    # do not push if nothing to push
    if bundler.nbparts <= 1:
        return
    stream = util.chunkbuffer(bundler.getchunks())
    try:
        try:
            reply = pushop.remote.unbundle(
                stream, ['force'], pushop.remote.url())
        except error.BundleValueError as exc:
            raise error.Abort(_('missing support for %s') % exc)
        try:
            trgetter = None
            if pushback:
                trgetter = pushop.trmanager.transaction
            op = bundle2.processbundle(pushop.repo, reply, trgetter)
        except error.BundleValueError as exc:
            raise error.Abort(_('missing support for %s') % exc)
        except bundle2.AbortFromPart as exc:
            pushop.ui.status(_('remote: %s\n') % exc)
            if exc.hint is not None:
                pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
            raise error.Abort(_('push failed on remote'))
    except error.PushkeyFailed as exc:
        partid = int(exc.partid)
        if partid not in pushop.pkfailcb:
            raise
        pushop.pkfailcb[partid](pushop, exc)
    for rephand in replyhandlers:
        rephand(op)

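# Note (sketch): the server-initiated "pushback" reply parts handled above
# are only processed when the client opts in; a minimal client-side
# configuration would be:
#
#   [experimental]
#   bundle2.pushback = True
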
def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop)
    outgoing = pushop.outgoing
    unbundle = pushop.remote.capable('unbundle')
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (outgoing.excluded
                                    or pushop.repo.changelog.filteredrevs):
        # push everything,
        # use the fast path, no race possible on push
        bundler = changegroup.cg1packer(pushop.repo, bundlecaps)
        cg = changegroup.getsubset(pushop.repo,
                                   outgoing,
                                   bundler,
                                   'push',
                                   fastpath=True)
    else:
        cg = changegroup.getchangegroup(pushop.repo, 'push', outgoing,
                                        bundlecaps=bundlecaps)

    # apply changegroup to remote
    if unbundle:
        # local repo finds heads on server, finds out what
        # revs it must push. once revs transferred, if server
        # finds it has different heads (someone else won
        # commit/push race), server aborts.
        if pushop.force:
            remoteheads = ['force']
        else:
            remoteheads = pushop.remoteheads
        # ssh: return remote's addchangegroup()
        # http: return remote's addchangegroup() or 0 for error
        pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
                                                 pushop.repo.url())
    else:
        # we return an integer indicating remote head count
        # change
        pushop.cgresult = pushop.remote.addchangegroup(cg, 'push',
                                                       pushop.repo.url())

def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = pushop.remote.listkeys('phases')
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and pushop.cgresult is None # nothing was pushed
        and remotephases.get('publishing', False)):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changeset was pushed
        # - and the remote is publishing
        # we may be in the issue 3871 case!
        # We drop the phase data returned by courtesy and treat the remote
        # as plainly publishing, so that changesets possibly still draft
        # locally get published.
        remotephases = {'publishing': 'True'}
    if not remotephases: # old server or public-only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads,
                                         remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get('publishing', False):
            _localphasemove(pushop, cheads)
        else: # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        if pushop.cgresult:
            if 'phases' in pushop.stepsdone:
                # phases already pushed through bundle2
                return
            outdated = pushop.outdatedphases
        else:
            outdated = pushop.fallbackoutdatedphases

        pushop.stepsdone.add('phases')

        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        # fallback to independent pushkey command
        for newremotehead in outdated:
            r = pushop.remote.pushkey('phases',
                                      newremotehead.hex(),
                                      str(phases.draft),
                                      str(phases.public))
            if not r:
                pushop.ui.warn(_('updating %s to public failed!\n')
                               % newremotehead)

def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.trmanager:
        phases.advanceboundary(pushop.repo,
                               pushop.trmanager.transaction(),
                               phase,
                               nodes)
    else:
        # repo is not locked, do not change any phases!
        # Inform the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(_('cannot lock source repo, skipping '
                               'local %s phase update\n') % phasestr)

def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if 'obsmarkers' in pushop.stepsdone:
        return
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        pushop.ui.debug('try to push obsolete markers to remote\n')
        rslts = []
        remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add('bookmarks')
    ui = pushop.ui
    remote = pushop.remote

    for b, old, new in pushop.outbookmarks:
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        if remote.pushkey('bookmarks', b, old, new):
            ui.status(bookmsgmap[action][0] % b)
        else:
            ui.warn(bookmsgmap[action][1] % b)
            # discovery may have set the value from an invalid entry
            if pushop.bkresult is not None:
                pushop.bkresult = 1

class pulloperation(object):
    """An object that represents a single pull operation

    Its purpose is to carry pull-related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
                 remotebookmarks=None, streamclonerequested=None):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # revision we try to pull (None is "all")
        self.heads = heads
        # bookmarks pulled explicitly
        self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
                                  for bookmark in bookmarks]
        # do we force pull?
        self.force = force
        # whether a streaming clone was requested
        self.streamclonerequested = streamclonerequested
        # transaction manager
        self.trmanager = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # remote bookmarks data
        self.remotebookmarks = remotebookmarks
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps already done
        self.stepsdone = set()
        # Whether we attempted a clone from pre-generated bundles.
        self.clonebundleattempted = False

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    @util.propertycache
    def canusebundle2(self):
        return not _forcebundle1(self)

    @util.propertycache
    def remotebundle2caps(self):
        return bundle2.bundle2caps(self.remote)

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()

class transactionmanager(object):
    """An object to manage the life cycle of a transaction

    It creates the transaction on demand and calls the appropriate hooks when
    closing the transaction."""
    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs['source'] = self.source
            self._tr.hookargs['url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()

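# Usage sketch: pull() below drives a transactionmanager roughly like this
# ('repo' and 'url' stand for a local repository and a remote URL string):
#
#   trmanager = transactionmanager(repo, 'pull', url)
#   try:
#       tr = trmanager.transaction()  # created lazily, reused on later calls
#       ...                           # apply incoming data under 'tr'
#       trmanager.close()             # commit, if a transaction was created
#   finally:
#       trmanager.release()           # abort if still open
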
def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
         streamclonerequested=None):
    """Fetch repository data from a remote.

    This is the main function used to retrieve data from a remote repository.

    ``repo`` is the local repository to clone into.
    ``remote`` is a peer instance.
    ``heads`` is an iterable of revisions we want to pull. ``None`` (the
    default) means to pull everything from the remote.
    ``bookmarks`` is an iterable of bookmarks requested to be pulled. By
    default, all remote bookmarks are pulled.
    ``opargs`` are additional keyword arguments to pass to ``pulloperation``
    initialization.
    ``streamclonerequested`` is a boolean indicating whether a "streaming
    clone" is requested. A "streaming clone" is essentially a raw file copy
    of revlogs from the server. This only works when the local repository is
    empty. The default value of ``None`` means to respect the server
    configuration for preferring stream clones.

    Returns the ``pulloperation`` created for this pull.
    """
    if opargs is None:
        opargs = {}
    pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
                           streamclonerequested=streamclonerequested, **opargs)
    if pullop.remote.local():
        missing = set(pullop.remote.requirements) - pullop.repo.supported
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    wlock = lock = None
    try:
        wlock = pullop.repo.wlock()
        lock = pullop.repo.lock()
        pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
        streamclone.maybeperformlegacystreamclone(pullop)
        # This should ideally be in _pullbundle2(). However, it needs to run
        # before discovery to avoid extra work.
        _maybeapplyclonebundle(pullop)
        _pulldiscovery(pullop)
        if pullop.canusebundle2:
            _pullbundle2(pullop)
        _pullchangeset(pullop)
        _pullphase(pullop)
        _pullbookmarks(pullop)
        _pullobsolete(pullop)
        pullop.trmanager.close()
    finally:
        lockmod.release(pullop.trmanager, lock, wlock)

    return pullop

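# Usage sketch (assumed names, for illustration only): calling pull() from an
# extension or script, with 'other' a peer obtained via hg.peer():
#
#   other = hg.peer(repo, {}, 'https://example.com/repo')
#   pullop = pull(repo, other)
#   if pullop.cgresult == 0:
#       repo.ui.status('no changesets were added\n')
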
# list of steps to perform discovery before pull
pulldiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pulldiscoverymapping = {}

def pulldiscovery(stepname):
    """decorator for function performing discovery before pull

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, modify the pulldiscoverymapping dictionary directly."""
    def dec(func):
        assert stepname not in pulldiscoverymapping
        pulldiscoverymapping[stepname] = func
        pulldiscoveryorder.append(stepname)
        return func
    return dec

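# Example sketch (hypothetical step name): registering an extra discovery
# step from an extension:
#
#   @pulldiscovery('x-example')
#   def _pulldiscoveryexample(pullop):
#       pullop.repo.ui.debug('running example discovery\n')
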
def _pulldiscovery(pullop):
    """Run all discovery steps"""
    for stepname in pulldiscoveryorder:
        step = pulldiscoverymapping[stepname]
        step(pullop)

@pulldiscovery('b1:bookmarks')
def _pullbookmarkbundle1(pullop):
    """fetch bookmark data in bundle1 case

    If not using bundle2, we have to fetch bookmarks before changeset
    discovery to reduce the chance and impact of race conditions."""
    if pullop.remotebookmarks is not None:
        return
    if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
        # all known bundle2 servers now support listkeys, but let's be nice
        # to new implementations.
        return
    pullop.remotebookmarks = pullop.remote.listkeys('bookmarks')


@pulldiscovery('changegroup')
def _pulldiscoverychangegroup(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; it will be changed to handle
    all discovery at some point."""
    tmp = discovery.findcommonincoming(pullop.repo,
                                       pullop.remote,
                                       heads=pullop.heads,
                                       force=pullop.force)
    common, fetch, rheads = tmp
    nm = pullop.repo.unfiltered().changelog.nodemap
    if fetch and rheads:
        # If a remote head is filtered locally, let's drop it from the
        # unknown remote heads and put it back in common.
        #
        # This is a hackish solution to catch most of the "common but
        # locally hidden" situations. We do not perform discovery on the
        # unfiltered repository because it would end up doing a pathological
        # number of round trips for a huge amount of changesets we do not
        # care about.
        #
        # If a set of such "common but filtered" changesets exists on the
        # server but does not include a remote head, we'll not be able to
        # detect it.
        scommon = set(common)
        filteredrheads = []
        for n in rheads:
            if n in nm:
                if n not in scommon:
                    common.append(n)
            else:
                filteredrheads.append(n)
        if not filteredrheads:
            fetch = []
        rheads = filteredrheads
    pullop.common = common
    pullop.fetch = fetch
    pullop.rheads = rheads

def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data are changegroups."""
    kwargs = {'bundlecaps': caps20to10(pullop.repo)}

    # At the moment we don't do stream clones over bundle2. If that is
    # implemented then here's where the check for that will go.
    streaming = False

    # pulling changegroup
    pullop.stepsdone.add('changegroup')

    kwargs['common'] = pullop.common
    kwargs['heads'] = pullop.heads or pullop.rheads
    kwargs['cg'] = pullop.fetch
    if 'listkeys' in pullop.remotebundle2caps:
        kwargs['listkeys'] = ['phases']
        if pullop.remotebookmarks is None:
            # make sure to always include bookmark data when migrating
            # `hg incoming --bundle` to using this function.
            kwargs['listkeys'].append('bookmarks')

    # If this is a full pull / clone and the server supports the clone bundles
    # feature, tell the server whether we attempted a clone bundle. The
    # presence of this flag indicates the client supports clone bundles. This
    # will enable the server to treat clients that support clone bundles
    # differently from those that don't.
    if (pullop.remote.capable('clonebundles')
        and pullop.heads is None and list(pullop.common) == [nullid]):
        kwargs['cbattempted'] = pullop.clonebundleattempted

    if streaming:
        pullop.repo.ui.status(_('streaming all changes\n'))
    elif not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [nullid]:
            pullop.repo.ui.status(_("requesting all changes\n"))
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
        if obsolete.commonversion(remoteversions) is not None:
            kwargs['obsmarkers'] = True
            pullop.stepsdone.add('obsmarkers')
    _pullbundle2extraprepare(pullop, kwargs)
    bundle = pullop.remote.getbundle('pull', **kwargs)
    try:
        op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
    except bundle2.AbortFromPart as exc:
        pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
        raise error.Abort(_('pull failed on remote'), hint=exc.hint)
    except error.BundleValueError as exc:
        raise error.Abort(_('missing support for %s') % exc)

    if pullop.fetch:
        results = [cg['return'] for cg in op.records['changegroup']]
        pullop.cgresult = changegroup.combineresults(results)

    # processing phases change
    for namespace, value in op.records['listkeys']:
        if namespace == 'phases':
            _pullapplyphases(pullop, value)

    # processing bookmark update
    for namespace, value in op.records['listkeys']:
        if namespace == 'bookmarks':
            pullop.remotebookmarks = value

    # bookmark data were either already there or pulled in the bundle
    if pullop.remotebookmarks is not None:
        _pullbookmarks(pullop)

def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""
    pass

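# Example sketch (hypothetical argument): an extension can wrap the hook
# above to add arguments to the getbundle call, e.g. with
# extensions.wrapfunction:
#
#   def extraprepare(orig, pullop, kwargs):
#       kwargs['x-example'] = True
#       return orig(pullop, kwargs)
#   extensions.wrapfunction(exchange, '_pullbundle2extraprepare',
#                           extraprepare)
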
def _pullchangeset(pullop):
    """pull changeset from unbundle into the local repo"""
    # We delay opening the transaction as late as possible so we don't open
    # a transaction for nothing and don't break a future useful rollback
    # call.
    if 'changegroup' in pullop.stepsdone:
        return
    pullop.stepsdone.add('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
        return
    pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_("requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable('getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle('pull', common=pullop.common,
                                     heads=pullop.heads or pullop.rheads)
    elif pullop.heads is None:
        cg = pullop.remote.changegroup(pullop.fetch, 'pull')
    elif not pullop.remote.capable('changegroupsubset'):
        raise error.Abort(_("partial pull cannot be done because "
                            "other repository doesn't support "
                            "changegroupsubset."))
    else:
        cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
    pullop.cgresult = cg.apply(pullop.repo, 'pull', pullop.remote.url())

def _pullphase(pullop):
    # Get phase data from the remote
    if 'phases' in pullop.stepsdone:
        return
    remotephases = pullop.remote.listkeys('phases')
    _pullapplyphases(pullop, remotephases)

def _pullapplyphases(pullop, remotephases):
    """apply phase movement from observed remote state"""
    if 'phases' in pullop.stepsdone:
        return
    pullop.stepsdone.add('phases')
    publishing = bool(remotephases.get('publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(pullop.repo,
                                                 pullop.pulledsubset,
                                                 remotephases)
        dheads = pullop.pulledsubset
    else:
        # Remote is old or publishing all common changesets
        # should be seen as public
        pheads = pullop.pulledsubset
        dheads = []
    unfi = pullop.repo.unfiltered()
    phase = unfi._phasecache.phase
    rev = unfi.changelog.nodemap.get
    public = phases.public
    draft = phases.draft

    # exclude changesets already public locally and update the others
    pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
    if pheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, public, pheads)

    # exclude changesets already draft locally and update the others
    dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
    if dheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, draft, dheads)

def _pullbookmarks(pullop):
    """process the remote bookmark information to update the local one"""
    if 'bookmarks' in pullop.stepsdone:
        return
    pullop.stepsdone.add('bookmarks')
    repo = pullop.repo
    remotebookmarks = pullop.remotebookmarks
    remotebookmarks = bookmod.unhexlifybookmarks(remotebookmarks)
    bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
                             pullop.remote.url(),
                             pullop.gettransaction,
                             explicit=pullop.explicitbookmarks)

def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    The `gettransaction` function returns the pull transaction, creating one
    if necessary. We return the transaction to inform the calling code that
    a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    if 'obsmarkers' in pullop.stepsdone:
        return
    pullop.stepsdone.add('obsmarkers')
    tr = None
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        pullop.repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = pullop.remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = pullop.gettransaction()
            markers = []
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = util.b85decode(remoteobs[key])
                    version, newmarks = obsolete._readmarkers(data)
                    markers += newmarks
            if markers:
                pullop.repo.obsstore.add(tr, markers)
        pullop.repo.invalidatevolatilesets()
    return tr

def caps20to10(repo):
    """return a set with appropriate options to use bundle20 during getbundle"""
    caps = {'HG20'}
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
    caps.add('bundle2=' + urlreq.quote(capsblob))
    return caps

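# A sketch of the shape of the result, for illustration only; the exact
# encoded blob depends on the repository's advertised bundle2 capabilities:
#
#   caps = caps20to10(repo)
#   # e.g. {'HG20', 'bundle2=HG20%0Achangegroup%3D01%2C02'}
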
# List of names of steps to perform for a bundle2 for getbundle, order matters.
getbundle2partsorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
getbundle2partsmapping = {}

def getbundle2partsgenerator(stepname, idx=None):
    """decorator for function generating bundle2 part for getbundle

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, modify the getbundle2partsmapping dictionary directly."""
    def dec(func):
        assert stepname not in getbundle2partsmapping
        getbundle2partsmapping[stepname] = func
        if idx is None:
            getbundle2partsorder.append(stepname)
        else:
            getbundle2partsorder.insert(idx, stepname)
        return func
    return dec

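# A minimal sketch of registering a new getbundle step from an extension.
# The step name 'myextradata' and its payload are hypothetical:
#
#   @getbundle2partsgenerator('myextradata')
#   def _getbundlemyextrapart(bundler, repo, source, bundlecaps=None,
#                             b2caps=None, **kwargs):
#       # advisory part: clients that do not know it can safely skip it
#       part = bundler.newpart('myextradata', mandatory=False)
#       part.data = 'hello'
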
def bundle2requested(bundlecaps):
    if bundlecaps is not None:
        return any(cap.startswith('HG2') for cap in bundlecaps)
    return False

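# For example, bundle2 is requested whenever any advertised capability
# starts with 'HG2':
#
#   bundle2requested({'HG20', 'bundle2=...'})  # -> True
#   bundle2requested(['HG10UN'])               # -> False
#   bundle2requested(None)                     # -> False
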
def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
                    **kwargs):
    """Return chunks constituting a bundle's raw data.

    Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
    passed.

    Returns an iterator over raw chunks (of varying sizes).
    """
    usebundle2 = bundle2requested(bundlecaps)
    # bundle10 case
    if not usebundle2:
        if bundlecaps and not kwargs.get('cg', True):
            raise ValueError(_('request for bundle10 must include changegroup'))

        if kwargs:
            raise ValueError(_('unsupported getbundle arguments: %s')
                             % ', '.join(sorted(kwargs.keys())))
        outgoing = _computeoutgoing(repo, heads, common)
        bundler = changegroup.getbundler('01', repo, bundlecaps)
        return changegroup.getsubsetraw(repo, outgoing, bundler, source)

    # bundle20 case
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith('bundle2='):
            blob = urlreq.unquote(bcaps[len('bundle2='):])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)

    kwargs['heads'] = heads
    kwargs['common'] = common

    for name in getbundle2partsorder:
        func = getbundle2partsmapping[name]
        func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
             **kwargs)

    return bundler.getchunks()

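# A sketch of a hypothetical caller streaming a plain bundle10 to disk
# (no bundlecaps, so the bundle10 branch above is taken):
#
#   with open('backup.hg', 'wb') as fh:
#       for chunk in getbundlechunks(repo, 'bundle', heads=repo.heads()):
#           fh.write(chunk)
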
@getbundle2partsgenerator('changegroup')
def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
                              b2caps=None, heads=None, common=None, **kwargs):
    """add a changegroup part to the requested bundle"""
    cg = None
    if kwargs.get('cg', True):
        # build changegroup bundle here.
        version = '01'
        cgversions = b2caps.get('changegroup')
        if cgversions:  # 3.1 and 3.2 ship with an empty value
            cgversions = [v for v in cgversions
                          if v in changegroup.supportedoutgoingversions(repo)]
            if not cgversions:
                raise ValueError(_('no common changegroup version'))
            version = max(cgversions)
        outgoing = _computeoutgoing(repo, heads, common)
        cg = changegroup.getlocalchangegroupraw(repo, source, outgoing,
                                                bundlecaps=bundlecaps,
                                                version=version)

    if cg:
        part = bundler.newpart('changegroup', data=cg)
        if cgversions:
            part.addparam('version', version)
        part.addparam('nbchanges', str(len(outgoing.missing)), mandatory=False)
        if 'treemanifest' in repo.requirements:
            part.addparam('treemanifest', '1')

@getbundle2partsgenerator('listkeys')
def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
                            b2caps=None, **kwargs):
    """add parts containing listkeys namespaces to the requested bundle"""
    listkeys = kwargs.get('listkeys', ())
    for namespace in listkeys:
        part = bundler.newpart('listkeys')
        part.addparam('namespace', namespace)
        keys = repo.listkeys(namespace).items()
        part.data = pushkey.encodekeys(keys)

@getbundle2partsgenerator('obsmarkers')
def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
                            b2caps=None, heads=None, **kwargs):
    """add an obsolescence markers part to the requested bundle"""
    if kwargs.get('obsmarkers', False):
        if heads is None:
            heads = repo.heads()
        subset = [c.node() for c in repo.set('::%ln', heads)]
        markers = repo.obsstore.relevantmarkers(subset)
        markers = sorted(markers)
-        buildobsmarkerspart(bundler, markers)
+        bundle2.buildobsmarkerspart(bundler, markers)

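# A sketch of requesting the obsmarkers part alongside the changegroup;
# the bundlecaps value shown is the minimal assumption needed to select
# the bundle20 path:
#
#   chunks = getbundlechunks(repo, 'pull', heads=repo.heads(),
#                            bundlecaps={'HG20'}, obsmarkers=True)
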
@getbundle2partsgenerator('hgtagsfnodes')
def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
                         b2caps=None, heads=None, common=None,
                         **kwargs):
    """Transfer the .hgtags filenodes mapping.

    Only values for heads in this bundle will be transferred.

    The part data consists of pairs of 20 byte changeset node and .hgtags
    filenodes raw values.
    """
    # Don't send unless:
    # - changesets are being exchanged,
    # - the client supports it.
    if not (kwargs.get('cg', True) and 'hgtagsfnodes' in b2caps):
        return

    outgoing = _computeoutgoing(repo, heads, common)
    bundle2.addparttagsfnodescache(repo, bundler, outgoing)

def _getbookmarks(repo, **kwargs):
    """Returns bookmark to node mapping.

    This function is primarily used to generate the `bookmarks` bundle2 part.
    It is a separate function in order to make it easy to wrap it
    in extensions. Passing `kwargs` to the function makes it easy to
    add new parameters in extensions.
    """

    return dict(bookmod.listbinbookmarks(repo))

def check_heads(repo, their_heads, context):
    """check if the heads of a repo have been modified

    Used by peer for unbundling.
    """
    heads = repo.heads()
    heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
    if not (their_heads == ['force'] or their_heads == heads or
            their_heads == ['hashed', heads_hash]):
        # someone else committed/pushed/unbundled while we
        # were transferring data
        raise error.PushRaced('repository changed while %s - '
                              'please try again' % context)

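# A sketch of how a client can build the 'hashed' form accepted above:
# a sha1 over the concatenation of the sorted binary head nodes observed
# before the transfer started:
#
#   import hashlib
#   observed = repo.heads()
#   their_heads = ['hashed',
#                  hashlib.sha1(''.join(sorted(observed))).digest()]
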
def unbundle(repo, cg, heads, source, url):
    """Apply a bundle to a repo.

    this function makes sure the repo is locked during the application and
    has a mechanism to check that no push race occurred between the creation
    of the bundle and its application.

    If the push was raced, a PushRaced exception is raised."""
    r = 0
    # need a transaction when processing a bundle2 stream
    # [wlock, lock, tr] - needs to be an array so nested functions can modify it
    lockandtr = [None, None, None]
    recordout = None
    # quick fix for output mismatch with bundle2 in 3.4
    captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture',
                                       False)
    if url.startswith('remote:http:') or url.startswith('remote:https:'):
        captureoutput = True
    try:
        # note: outside bundle1, 'heads' is expected to be empty and this
        # 'check_heads' call will be a no-op
        check_heads(repo, heads, 'uploading changes')
        # push can proceed
        if not util.safehasattr(cg, 'params'):
            # legacy case: bundle1 (changegroup 01)
            lockandtr[1] = repo.lock()
            r = cg.apply(repo, source, url)
        else:
            r = None
            try:
                def gettransaction():
                    if not lockandtr[2]:
                        lockandtr[0] = repo.wlock()
                        lockandtr[1] = repo.lock()
                        lockandtr[2] = repo.transaction(source)
                        lockandtr[2].hookargs['source'] = source
                        lockandtr[2].hookargs['url'] = url
                        lockandtr[2].hookargs['bundle2'] = '1'
                    return lockandtr[2]

                # Do greedy locking by default until we're satisfied with lazy
                # locking.
                if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
                    gettransaction()

                op = bundle2.bundleoperation(repo, gettransaction,
                                             captureoutput=captureoutput)
                try:
                    op = bundle2.processbundle(repo, cg, op=op)
                finally:
                    r = op.reply
                    if captureoutput and r is not None:
                        repo.ui.pushbuffer(error=True, subproc=True)
                        def recordout(output):
                            r.newpart('output', data=output, mandatory=False)
                if lockandtr[2] is not None:
                    lockandtr[2].close()
            except BaseException as exc:
                exc.duringunbundle2 = True
                if captureoutput and r is not None:
                    parts = exc._bundle2salvagedoutput = r.salvageoutput()
                    def recordout(output):
                        part = bundle2.bundlepart('output', data=output,
                                                  mandatory=False)
                        parts.append(part)
                raise
    finally:
        lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
        if recordout is not None:
            recordout(repo.ui.popbuffer())
    return r

def _maybeapplyclonebundle(pullop):
    """Apply a clone bundle from a remote, if possible."""

    repo = pullop.repo
    remote = pullop.remote

    if not repo.ui.configbool('ui', 'clonebundles', True):
        return

    # Only run if local repo is empty.
    if len(repo):
        return

    if pullop.heads:
        return

    if not remote.capable('clonebundles'):
        return

    res = remote._call('clonebundles')

    # If we call the wire protocol command, that's good enough to record the
    # attempt.
    pullop.clonebundleattempted = True

    entries = parseclonebundlesmanifest(repo, res)
    if not entries:
        repo.ui.note(_('no clone bundles available on remote; '
                       'falling back to regular clone\n'))
        return

    entries = filterclonebundleentries(repo, entries)
    if not entries:
        # There is a thundering herd concern here. However, if a server
        # operator doesn't advertise bundles appropriate for its clients,
        # they deserve what's coming. Furthermore, from a client's
        # perspective, no automatic fallback would mean not being able to
        # clone!
        repo.ui.warn(_('no compatible clone bundles available on server; '
                       'falling back to regular clone\n'))
        repo.ui.warn(_('(you may want to report this to the server '
                       'operator)\n'))
        return

    entries = sortclonebundleentries(repo.ui, entries)

    url = entries[0]['URL']
    repo.ui.status(_('applying clone bundle from %s\n') % url)
    if trypullbundlefromurl(repo.ui, repo, url):
        repo.ui.status(_('finished applying clone bundle\n'))
    # Bundle failed.
    #
    # We abort by default to avoid the thundering herd of
    # clients flooding a server that was expecting expensive
    # clone load to be offloaded.
    elif repo.ui.configbool('ui', 'clonebundlefallback', False):
        repo.ui.warn(_('falling back to normal clone\n'))
    else:
        raise error.Abort(_('error applying bundle'),
                          hint=_('if this error persists, consider contacting '
                                 'the server operator or disable clone '
                                 'bundles via '
                                 '"--config ui.clonebundles=false"'))

def parseclonebundlesmanifest(repo, s):
    """Parses the raw text of a clone bundles manifest.

    Returns a list of dicts. The dicts have a ``URL`` key corresponding
    to the URL; the other keys are the attributes for the entry.
    """
    m = []
    for line in s.splitlines():
        fields = line.split()
        if not fields:
            continue
        attrs = {'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split('=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            attrs[key] = value

            # Parse BUNDLESPEC into components. This makes client-side
            # preferences easier to specify since you can prefer a single
            # component of the BUNDLESPEC.
            if key == 'BUNDLESPEC':
                try:
                    comp, version, params = parsebundlespec(repo, value,
                                                            externalnames=True)
                    attrs['COMPRESSION'] = comp
                    attrs['VERSION'] = version
                except error.InvalidBundleSpecification:
                    pass
                except error.UnsupportedBundleSpecification:
                    pass

        m.append(attrs)

    return m

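# A sketch of a two-entry manifest as a hypothetical server might serve it,
# and the shape of the parsed result:
#
#   text = ('https://example.com/full.hg.gz BUNDLESPEC=gzip-v2\n'
#           'https://example.com/full.hg.zst BUNDLESPEC=zstd-v2 REQUIRESNI=true\n')
#   entries = parseclonebundlesmanifest(repo, text)
#   # entries[0] == {'URL': 'https://example.com/full.hg.gz',
#   #                'BUNDLESPEC': 'gzip-v2',
#   #                'COMPRESSION': 'gzip', 'VERSION': 'v2'}
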
def filterclonebundleentries(repo, entries):
    """Remove incompatible clone bundle manifest entries.

    Accepts a list of entries parsed with ``parseclonebundlesmanifest``
    and returns a new list consisting of only the entries that this client
    should be able to apply.

    There is no guarantee we'll be able to apply all returned entries because
    the metadata we use to filter on may be missing or wrong.
    """
    newentries = []
    for entry in entries:
        spec = entry.get('BUNDLESPEC')
        if spec:
            try:
                parsebundlespec(repo, spec, strict=True)
            except error.InvalidBundleSpecification as e:
                repo.ui.debug(str(e) + '\n')
                continue
            except error.UnsupportedBundleSpecification as e:
                repo.ui.debug('filtering %s because unsupported bundle '
                              'spec: %s\n' % (entry['URL'], str(e)))
                continue

        if 'REQUIRESNI' in entry and not sslutil.hassni:
            repo.ui.debug('filtering %s because SNI not supported\n' %
                          entry['URL'])
            continue

        newentries.append(entry)

    return newentries

class clonebundleentry(object):
    """Represents an item in a clone bundles manifest.

    This rich class is needed to support sorting since sorted() in Python 3
    doesn't support ``cmp`` and our comparison is complex enough that ``key=``
    won't work.
    """

    def __init__(self, value, prefers):
        self.value = value
        self.prefers = prefers

    def _cmp(self, other):
        for prefkey, prefvalue in self.prefers:
            avalue = self.value.get(prefkey)
            bvalue = other.value.get(prefkey)

            # Special case for b missing attribute and a matches exactly.
            if avalue is not None and bvalue is None and avalue == prefvalue:
                return -1

            # Special case for a missing attribute and b matches exactly.
            if bvalue is not None and avalue is None and bvalue == prefvalue:
                return 1

            # We can't compare unless attribute present on both.
            if avalue is None or bvalue is None:
                continue

            # Same values should fall back to next attribute.
            if avalue == bvalue:
                continue

            # Exact matches come first.
            if avalue == prefvalue:
                return -1
            if bvalue == prefvalue:
                return 1

            # Fall back to next attribute.
            continue

        # If we got here we couldn't sort by attributes and prefers. Fall
        # back to index order.
        return 0

    def __lt__(self, other):
        return self._cmp(other) < 0

    def __gt__(self, other):
        return self._cmp(other) > 0

    def __eq__(self, other):
        return self._cmp(other) == 0

    def __le__(self, other):
        return self._cmp(other) <= 0

    def __ge__(self, other):
        return self._cmp(other) >= 0

    def __ne__(self, other):
        return self._cmp(other) != 0

def sortclonebundleentries(ui, entries):
    prefers = ui.configlist('ui', 'clonebundleprefers', default=[])
    if not prefers:
        return list(entries)

    prefers = [p.split('=', 1) for p in prefers]

    items = sorted(clonebundleentry(v, prefers) for v in entries)
    return [i.value for i in items]

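# A sketch of the configuration driving this sort; with an hgrc such as
#
#   [ui]
#   clonebundleprefers = COMPRESSION=zstd, VERSION=v2
#
# entries whose COMPRESSION is 'zstd' sort first, ties are broken by
# VERSION, and everything else keeps manifest order (sorted() is stable).
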
def trypullbundlefromurl(ui, repo, url):
    """Attempt to apply a bundle from a URL."""
    lock = repo.lock()
    try:
        tr = repo.transaction('bundleurl')
        try:
            try:
                fh = urlmod.open(ui, url)
                cg = readbundle(ui, fh, 'stream')

                if isinstance(cg, bundle2.unbundle20):
                    bundle2.processbundle(repo, cg, lambda: tr)
                elif isinstance(cg, streamclone.streamcloneapplier):
                    cg.apply(repo)
                else:
                    cg.apply(repo, 'clonebundles', url)
                tr.close()
                return True
            except urlerr.httperror as e:
                ui.warn(_('HTTP error fetching bundle: %s\n') % str(e))
            except urlerr.urlerror as e:
                ui.warn(_('error fetching bundle: %s\n') % e.reason[1])

            return False
        finally:
            tr.release()
    finally:
        lock.release()