bundle2: move tagsfnodecache generation in a generic function...
marmoute
r32217:6068712c default
@@ -1,1705 +1,1729 @@
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows:

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: int32

  The total number of bytes used by the parameters

:params value: arbitrary number of bytes

  A blob of `params size` containing the serialized version of all stream level
  parameters.

  The blob contains a space separated list of parameters. Parameters with value
  are stored in the form `<name>=<value>`. Both name and value are urlquoted.

  Empty names are forbidden.

  Names MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage any
    crazy usage.
  - Textual data allows easy human inspection of a bundle2 header in case of
    troubles.

  Any applicative level options MUST go into a bundle2 part instead.

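  As an illustration only (``Compression`` is a real stream parameter used
  later in this module; the advisory name is made up), such a blob could be
  assembled by hand as::

    import struct
    import urllib

    blob = ' '.join([urllib.quote('Compression') + '=' + urllib.quote('BZ'),
                     urllib.quote('someadvisoryparam')])
    framed = struct.pack('>i', len(blob)) + blob
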
Payload part
------------------------

The binary format is as follows:

:header size: int32

  The total number of bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route to an application level handler that can
    interpret the payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object to
    interpret the part payload.

    The binary format of the header is as follows:

    :typesize: (one byte)

    :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

    :partid: A 32-bit integer (unique in the bundle) that can be used to refer
             to this part.

    :parameters:

        Part parameters may have arbitrary content; the binary structure is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count: 1 byte, number of advisory parameters

        :param-sizes:

            N couples of bytes, where N is the total number of parameters. Each
            couple contains (<size-of-key>, <size-of-value>) for one parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size couples stored in the previous
            field.

        Mandatory parameters come first, then the advisory ones.

        Each parameter's key MUST be unique within the part.

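        As an illustration only (not the parser this module actually uses),
        such a block could be decoded as::

            import struct

            def readparams(block):
                mancount, advcount = struct.unpack('>BB', block[:2])
                total = mancount + advcount
                sizes = struct.unpack('>' + 'BB' * total,
                                      block[2:2 + 2 * total])
                offset = 2 + 2 * total
                params = []
                for i in range(total):
                    keysize, valsize = sizes[2 * i], sizes[2 * i + 1]
                    key = block[offset:offset + keysize]
                    offset += keysize
                    value = block[offset:offset + valsize]
                    offset += valsize
                    params.append((key, value))
                return params[:mancount], params[mancount:]
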
:payload:

    payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32; `chunkdata` is plain bytes (as many as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.

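    A conforming chunk reader, as a sketch (negative special-case sizes are
    not handled since none are defined yet)::

        import struct

        def readpayload(fp):
            while True:
                size = struct.unpack('>i', fp.read(4))[0]
                if size == 0:
                    break  # a zero size chunk concludes the payload
                if size < 0:
                    raise ValueError('special chunk sizes not handled here')
                yield fp.read(size)
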
Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the Part type
contains any uppercase char it is considered mandatory. When no handler is
known for a Mandatory part, the process is aborted and an exception is raised.
If the part is advisory and no handler is known, the part is ignored. When the
process is aborted, the full bundle is still read from the stream to keep the
channel usable. But none of the parts read after an abort are processed. In
the future, dropping the stream may become an option for channels we do not
care to preserve.
"""

from __future__ import absolute_import

import errno
import re
import string
import struct
import sys

from .i18n import _
from . import (
    changegroup,
    error,
    obsolete,
    pushkey,
    pycompat,
    tags,
    url,
    util,
)

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

preferedchunksize = 4096

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-output: %s\n' % message)

def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-input: %s\n' % message)

def validateparttype(parttype):
    """raise ValueError if a parttype contains invalid characters"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>'+('BB'*nbparams)

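# For instance, _makefpartparamsizes(2) returns '>BBBB', unpacking the two
# (key size, value size) couples of a two-parameter part in a single call.
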
parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`. Where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
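
    A minimal usage sketch (the category name is arbitrary)::

      records = unbundlerecords()
      records.add('changegroup', {'return': 1})
      records['changegroup']   # -> ({'return': 1},)
      list(records)            # -> [('changegroup', {'return': 1})]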
241 """
241 """
242
242
243 def __init__(self):
243 def __init__(self):
244 self._categories = {}
244 self._categories = {}
245 self._sequences = []
245 self._sequences = []
246 self._replies = {}
246 self._replies = {}
247
247
248 def add(self, category, entry, inreplyto=None):
248 def add(self, category, entry, inreplyto=None):
249 """add a new record of a given category.
249 """add a new record of a given category.
250
250
251 The entry can then be retrieved in the list returned by
251 The entry can then be retrieved in the list returned by
252 self['category']."""
252 self['category']."""
253 self._categories.setdefault(category, []).append(entry)
253 self._categories.setdefault(category, []).append(entry)
254 self._sequences.append((category, entry))
254 self._sequences.append((category, entry))
255 if inreplyto is not None:
255 if inreplyto is not None:
256 self.getreplies(inreplyto).add(category, entry)
256 self.getreplies(inreplyto).add(category, entry)
257
257
258 def getreplies(self, partid):
258 def getreplies(self, partid):
259 """get the records that are replies to a specific part"""
259 """get the records that are replies to a specific part"""
260 return self._replies.setdefault(partid, unbundlerecords())
260 return self._replies.setdefault(partid, unbundlerecords())
261
261
262 def __getitem__(self, cat):
262 def __getitem__(self, cat):
263 return tuple(self._categories.get(cat, ()))
263 return tuple(self._categories.get(cat, ()))
264
264
265 def __iter__(self):
265 def __iter__(self):
266 return iter(self._sequences)
266 return iter(self._sequences)
267
267
268 def __len__(self):
268 def __len__(self):
269 return len(self._sequences)
269 return len(self._sequences)
270
270
271 def __nonzero__(self):
271 def __nonzero__(self):
272 return bool(self._sequences)
272 return bool(self._sequences)
273
273
274 __bool__ = __nonzero__
274 __bool__ = __nonzero__
275
275
class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None
        self.captureoutput = captureoutput

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def applybundle(repo, unbundler, tr, source=None, url=None, op=None):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    tr.hookargs['bundle2'] = '1'
    if source is not None and 'source' not in tr.hookargs:
        tr.hookargs['source'] = source
    if url is not None and 'url' not in tr.hookargs:
        tr.hookargs['url'] = url
    return processbundle(repo, unbundler, lambda: tr, op=op)

def processbundle(repo, unbundler, transactiongetter=None, op=None):
    """This function processes a bundle, applying effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    Unknown Mandatory parts will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = ['bundle2-input-bundle:']
        if unbundler.params:
            msg.append(' %i params' % len(unbundler.params))
        if op.gettransaction is None:
            msg.append(' no-transaction')
        else:
            msg.append(' with-transaction')
        msg.append('\n')
        repo.ui.debug(''.join(msg))
    iterparts = enumerate(unbundler.iterparts())
    part = None
    nbpart = 0
    try:
        for nbpart, part in iterparts:
            _processpart(op, part)
    except Exception as exc:
        # Any exceptions seeking to the end of the bundle at this point are
        # almost certainly related to the underlying stream being bad.
        # And, chances are that the exception we're handling is related to
        # getting in that bad state. So, we swallow the seeking error and
        # re-raise the original error.
        seekerror = False
        try:
            for nbpart, part in iterparts:
                # consume the bundle content
                part.seek(0, 2)
        except Exception:
            seekerror = True

        # Small hack to let caller code distinguish exceptions from bundle2
        # processing from those of the old format. This is mostly needed to
        # handle different return codes to unbundle according to the type of
        # bundle. We should probably clean up or drop this return code
        # craziness in a future version.
        exc.duringunbundle2 = True
        salvaged = []
        replycaps = None
        if op.reply is not None:
            salvaged = op.reply.salvageoutput()
            replycaps = op.reply.capabilities
        exc._replycaps = replycaps
        exc._bundle2salvagedoutput = salvaged

        # Re-raising from a variable loses the original stack. So only use
        # that form if we need to.
        if seekerror:
            raise exc
        else:
            raise
    finally:
        repo.ui.debug('bundle2-input-bundle: %i parts total\n' % nbpart)

    return op

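# A typical invocation, as a sketch (repo and fp come from the caller's
# context; the transaction-less default applies here):
#
#   unbundler = getunbundler(repo.ui, fp)
#   op = processbundle(repo, unbundler)
#   op.records   # inspect what part handlers recorded
#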
def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    status = 'unknown' # used by debug output
    hardabort = False
    try:
        try:
            handler = parthandlermapping.get(part.type)
            if handler is None:
                status = 'unsupported-type'
                raise error.BundleUnknownFeatureError(parttype=part.type)
            indebug(op.ui, 'found a handler for part %r' % part.type)
            unknownparams = part.mandatorykeys - handler.params
            if unknownparams:
                unknownparams = list(unknownparams)
                unknownparams.sort()
                status = 'unsupported-params (%s)' % unknownparams
                raise error.BundleUnknownFeatureError(parttype=part.type,
                                                      params=unknownparams)
            status = 'supported'
        except error.BundleUnknownFeatureError as exc:
            if part.mandatory: # mandatory parts
                raise
            indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
            return # skip to part processing
        finally:
            if op.ui.debugflag:
                msg = ['bundle2-input-part: "%s"' % part.type]
                if not part.mandatory:
                    msg.append(' (advisory)')
                nbmp = len(part.mandatorykeys)
                nbap = len(part.params) - nbmp
                if nbmp or nbap:
                    msg.append(' (params:')
                    if nbmp:
                        msg.append(' %i mandatory' % nbmp)
                    if nbap:
                        msg.append(' %i advisory' % nbap)
                    msg.append(')')
                msg.append(' %s\n' % status)
                op.ui.debug(''.join(msg))

        # handler is called outside the above try block so that we don't
        # risk catching KeyErrors from anything other than the
        # parthandlermapping lookup (any KeyError raised by handler()
        # itself represents a defect of a different variety).
        output = None
        if op.captureoutput and op.reply is not None:
            op.ui.pushbuffer(error=True, subproc=True)
            output = ''
        try:
            handler(op, part)
        finally:
            if output is not None:
                output = op.ui.popbuffer()
            if output:
                outpart = op.reply.newpart('output', data=output,
                                           mandatory=False)
                outpart.addparam('in-reply-to', str(part.id), mandatory=False)
    # If exiting or interrupted, do not attempt to seek the stream in the
    # finally block below. This makes abort faster.
    except (SystemExit, KeyboardInterrupt):
        hardabort = True
        raise
    finally:
        # consume the part content to not corrupt the stream.
        if not hardabort:
            part.seek(0, 2)


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
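
# Round-trip sketch (capability names are illustrative):
#
#   caps = {'error': ['abort', 'unsupportedcontent'], 'listkeys': []}
#   blob = encodecaps(caps)   # 'error=abort,unsupportedcontent\nlistkeys'
#   decodecaps(blob) == caps  # True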

bundletypes = {
    "": ("", 'UN'),       # only when using unbundle on ssh and old http servers
                          # since the unification ssh accepts a header but there
                          # is no capability signaling it.
    "HG20": (), # special-cased below
    "HG10UN": ("HG10UN", 'UN'),
    "HG10BZ": ("HG10", 'BZ'),
    "HG10GZ": ("HG10GZ", 'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype('UN')
        self._compopts = None

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, 'UN'):
            return
        assert not any(n.lower() == 'compression' for n, v in self._params)
        self.addparam('Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means that
        any failure to properly initialize the part after calling ``newpart``
        should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding parts if you
        need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(' (%i params)' % len(self._params))
            msg.append(' %i parts total\n' % len(self._parts))
            self.ui.debug(''.join(msg))
        outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, 'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(self._getcorechunk(),
                                                     self._compopts):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, 'start of parts')
        for part in self._parts:
            outdebug(self.ui, 'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, 'end of bundle')
        yield _pack(_fpartheadersize, 0)


    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith('output'):
                salvaged.append(part.copy())
        return salvaged


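# A minimal construction sketch (assumes a ui object is at hand; 'output'
# is a part type used elsewhere in this module):
#
#   bundler = bundle20(ui)
#   bundler.newpart('output', data='hello', mandatory=False)
#   raw = ''.join(bundler.getchunks())
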
class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)

def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != 'HG':
        raise error.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, 'start processing of %s stream' % magicstring)
    return unbundler

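# A minimal consumption sketch (fp is a file object positioned at the start
# of a bundle2 stream; handlepart is a hypothetical callback):
#
#   unbundler = getunbundler(ui, fp)
#   for part in unbundler.iterparts():
#       handlepart(part)
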
class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` methods."""

    _magicstring = 'HG20'

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        self._compengine = util.compengines.forbundletype('UN')
        self._compressed = None
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, 'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            params = self._processallparams(params)
        return params

    def _processallparams(self, paramsblock):
        """process a blob of space separated parameters, returning them as a
        dict"""
        params = util.sortdict()
        for p in paramsblock.split(' '):
            p = p.split('=', 1)
            p = [urlreq.unquote(i) for i in p]
            if len(p) < 2:
                p.append(None)
            self._processparam(*p)
            params[p[0]] = p[1]
        return params


    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory, and this function will raise a BundleUnknownFeatureError
        when they are unknown.

        Note: no options are currently supported. Any input will be either
        ignored or will fail.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        try:
            handler = b2streamparamsmap[name.lower()]
        except KeyError:
            if name[0].islower():
                indebug(self.ui, "ignoring unknown parameter %r" % name)
            else:
                raise error.BundleUnknownFeatureError(params=(name,))
        else:
            handler(self, name, value)

    def _forwardchunks(self):
        """utility to transfer a bundle2 as binary

        This is made necessary by the fact the 'getbundle' command over 'ssh'
        has no way to know when the reply ends, relying on the bundle being
        interpreted to find its end. This is terrible and we are sorry, but we
        needed to move forward to get general delta enabled.
        """
        yield self._magicstring
        assert 'params' not in vars(self)
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        yield _pack(_fstreamparamsize, paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            self._processallparams(params)
            yield params
        assert self._compengine.bundletype == 'UN'
        # From there, the payload might need to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        emptycount = 0
        while emptycount < 2:
            # so we can brainlessly loop
            assert _fpartheadersize == _fpayloadsize
            size = self._unpack(_fpartheadersize)[0]
            yield _pack(_fpartheadersize, size)
            if size:
                emptycount = 0
            else:
                emptycount += 1
                continue
            if size == flaginterrupt:
                continue
            elif size < 0:
                raise error.BundleValueError('negative chunk size: %i' % size)
            yield self._readexact(size)


    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        # From there, the payload needs to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        indebug(self.ui, 'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            part.seek(0, 2)
            headerblock = self._readpartheader()
        indebug(self.ui, 'end of bundle2 stream')

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        self.params # load params
        return self._compressed

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

formatmap = {'20': unbundle20}

b2streamparamsmap = {}

def b2streamparamhandler(name):
    """register a handler for a stream level parameter"""
    def decorator(func):
        assert name not in b2streamparamsmap
        b2streamparamsmap[name] = func
        return func
    return decorator


@b2streamparamhandler('compression')
def processcompression(unbundler, param, value):
    """read compression parameter and install payload decompression"""
    if value not in util.compengines.supportedbundletypes:
        raise error.BundleUnknownFeatureError(params=(param,),
                                              values=(value,))
    unbundler._compengine = util.compengines.forbundletype(value)
    if value is not None:
        unbundler._compressed = True

class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. The remote side
    should be able to safely ignore the advisory ones.

    Neither data nor parameters can be modified after the generation has
    begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data='', mandatory=True):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise error.ProgrammingError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def __repr__(self):
        cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
        return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
                % (cls, id(self), self.id, self.type, self.mandatory))

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(self.type, self._mandatoryparams,
                              self._advisoryparams, self._data, self.mandatory)

    # methods used to define the part content
    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        """add a parameter to the part

        If 'mandatory' is set to True, the remote handler must claim support
        for this parameter or the unbundling will be aborted.

        The 'name' and 'value' cannot exceed 255 bytes each.
        """
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
918 raise error.ReadOnlyPartError('part is being generated')
919 if name in self._seenparams:
919 if name in self._seenparams:
920 raise ValueError('duplicated params: %s' % name)
920 raise ValueError('duplicated params: %s' % name)
921 self._seenparams.add(name)
921 self._seenparams.add(name)
922 params = self._advisoryparams
922 params = self._advisoryparams
923 if mandatory:
923 if mandatory:
924 params = self._mandatoryparams
924 params = self._mandatoryparams
925 params.append((name, value))
925 params.append((name, value))
926
926
927 # methods used to generates the bundle2 stream
927 # methods used to generates the bundle2 stream
928 def getchunks(self, ui):
928 def getchunks(self, ui):
929 if self._generated is not None:
929 if self._generated is not None:
930 raise error.ProgrammingError('part can only be consumed once')
930 raise error.ProgrammingError('part can only be consumed once')
931 self._generated = False
931 self._generated = False
932
932
933 if ui.debugflag:
933 if ui.debugflag:
934 msg = ['bundle2-output-part: "%s"' % self.type]
934 msg = ['bundle2-output-part: "%s"' % self.type]
935 if not self.mandatory:
935 if not self.mandatory:
936 msg.append(' (advisory)')
936 msg.append(' (advisory)')
937 nbmp = len(self.mandatoryparams)
937 nbmp = len(self.mandatoryparams)
938 nbap = len(self.advisoryparams)
938 nbap = len(self.advisoryparams)
939 if nbmp or nbap:
939 if nbmp or nbap:
940 msg.append(' (params:')
940 msg.append(' (params:')
941 if nbmp:
941 if nbmp:
942 msg.append(' %i mandatory' % nbmp)
942 msg.append(' %i mandatory' % nbmp)
943 if nbap:
943 if nbap:
944 msg.append(' %i advisory' % nbmp)
944 msg.append(' %i advisory' % nbmp)
945 msg.append(')')
945 msg.append(')')
946 if not self.data:
946 if not self.data:
947 msg.append(' empty payload')
947 msg.append(' empty payload')
948 elif util.safehasattr(self.data, 'next'):
948 elif util.safehasattr(self.data, 'next'):
949 msg.append(' streamed payload')
949 msg.append(' streamed payload')
950 else:
950 else:
951 msg.append(' %i bytes payload' % len(self.data))
951 msg.append(' %i bytes payload' % len(self.data))
952 msg.append('\n')
952 msg.append('\n')
953 ui.debug(''.join(msg))
953 ui.debug(''.join(msg))
954
954
955 #### header
955 #### header
956 if self.mandatory:
956 if self.mandatory:
957 parttype = self.type.upper()
957 parttype = self.type.upper()
958 else:
958 else:
959 parttype = self.type.lower()
959 parttype = self.type.lower()
960 outdebug(ui, 'part %s: "%s"' % (self.id, parttype))
960 outdebug(ui, 'part %s: "%s"' % (self.id, parttype))
961 ## parttype
961 ## parttype
962 header = [_pack(_fparttypesize, len(parttype)),
962 header = [_pack(_fparttypesize, len(parttype)),
963 parttype, _pack(_fpartid, self.id),
963 parttype, _pack(_fpartid, self.id),
964 ]
964 ]
965 ## parameters
965 ## parameters
966 # count
966 # count
967 manpar = self.mandatoryparams
967 manpar = self.mandatoryparams
968 advpar = self.advisoryparams
968 advpar = self.advisoryparams
969 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
969 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
970 # size
970 # size
971 parsizes = []
971 parsizes = []
972 for key, value in manpar:
972 for key, value in manpar:
973 parsizes.append(len(key))
973 parsizes.append(len(key))
974 parsizes.append(len(value))
974 parsizes.append(len(value))
975 for key, value in advpar:
975 for key, value in advpar:
976 parsizes.append(len(key))
976 parsizes.append(len(key))
977 parsizes.append(len(value))
977 parsizes.append(len(value))
978 paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
978 paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
979 header.append(paramsizes)
979 header.append(paramsizes)
980 # key, value
980 # key, value
981 for key, value in manpar:
981 for key, value in manpar:
982 header.append(key)
982 header.append(key)
983 header.append(value)
983 header.append(value)
984 for key, value in advpar:
984 for key, value in advpar:
985 header.append(key)
985 header.append(key)
986 header.append(value)
986 header.append(value)
987 ## finalize header
987 ## finalize header
988 headerchunk = ''.join(header)
988 headerchunk = ''.join(header)
989 outdebug(ui, 'header chunk size: %i' % len(headerchunk))
989 outdebug(ui, 'header chunk size: %i' % len(headerchunk))
990 yield _pack(_fpartheadersize, len(headerchunk))
990 yield _pack(_fpartheadersize, len(headerchunk))
991 yield headerchunk
991 yield headerchunk
992 ## payload
992 ## payload
993 try:
993 try:
994 for chunk in self._payloadchunks():
994 for chunk in self._payloadchunks():
995 outdebug(ui, 'payload chunk size: %i' % len(chunk))
995 outdebug(ui, 'payload chunk size: %i' % len(chunk))
996 yield _pack(_fpayloadsize, len(chunk))
996 yield _pack(_fpayloadsize, len(chunk))
997 yield chunk
997 yield chunk
998 except GeneratorExit:
998 except GeneratorExit:
999 # GeneratorExit means that nobody is listening for our
999 # GeneratorExit means that nobody is listening for our
1000 # results anyway, so just bail quickly rather than trying
1000 # results anyway, so just bail quickly rather than trying
1001 # to produce an error part.
1001 # to produce an error part.
1002 ui.debug('bundle2-generatorexit\n')
1002 ui.debug('bundle2-generatorexit\n')
1003 raise
1003 raise
1004 except BaseException as exc:
1004 except BaseException as exc:
1005 # backup exception data for later
1005 # backup exception data for later
1006 ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
1006 ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
1007 % exc)
1007 % exc)
1008 tb = sys.exc_info()[2]
1008 tb = sys.exc_info()[2]
1009 msg = 'unexpected error: %s' % exc
1009 msg = 'unexpected error: %s' % exc
1010 interpart = bundlepart('error:abort', [('message', msg)],
1010 interpart = bundlepart('error:abort', [('message', msg)],
1011 mandatory=False)
1011 mandatory=False)
1012 interpart.id = 0
1012 interpart.id = 0
1013 yield _pack(_fpayloadsize, -1)
1013 yield _pack(_fpayloadsize, -1)
1014 for chunk in interpart.getchunks(ui=ui):
1014 for chunk in interpart.getchunks(ui=ui):
1015 yield chunk
1015 yield chunk
1016 outdebug(ui, 'closing payload chunk')
1016 outdebug(ui, 'closing payload chunk')
1017 # abort current part payload
1017 # abort current part payload
1018 yield _pack(_fpayloadsize, 0)
1018 yield _pack(_fpayloadsize, 0)
1019 pycompat.raisewithtb(exc, tb)
1019 pycompat.raisewithtb(exc, tb)
1020 # end of payload
1020 # end of payload
1021 outdebug(ui, 'closing payload chunk')
1021 outdebug(ui, 'closing payload chunk')
1022 yield _pack(_fpayloadsize, 0)
1022 yield _pack(_fpayloadsize, 0)
1023 self._generated = True
1023 self._generated = True
1024
1024
1025 def _payloadchunks(self):
1025 def _payloadchunks(self):
1026 """yield chunks of a the part payload
1026 """yield chunks of a the part payload
1027
1027
1028 Exists to handle the different methods to provide data to a part."""
1028 Exists to handle the different methods to provide data to a part."""
1029 # we only support fixed size data now.
1029 # we only support fixed size data now.
1030 # This will be improved in the future.
1030 # This will be improved in the future.
1031 if util.safehasattr(self.data, 'next'):
1031 if util.safehasattr(self.data, 'next'):
1032 buff = util.chunkbuffer(self.data)
1032 buff = util.chunkbuffer(self.data)
1033 chunk = buff.read(preferedchunksize)
1033 chunk = buff.read(preferedchunksize)
1034 while chunk:
1034 while chunk:
1035 yield chunk
1035 yield chunk
1036 chunk = buff.read(preferedchunksize)
1036 chunk = buff.read(preferedchunksize)
1037 elif len(self.data):
1037 elif len(self.data):
1038 yield self.data
1038 yield self.data
1039
1039
1040
1040
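# Illustrative usage sketch, not part of upstream bundle2. It assumes an
# existing mercurial ui object ('ui') and a changegroup packer ('cg'); parts
# are normally created through a bundler such as ``bundle20``, which assigns
# part ids before ``getchunks`` is called:
#
#     bundler = bundle20(ui)
#     part = bundler.newpart('changegroup', data=cg.getchunks())
#     part.addparam('version', cg.version)
#     raw = ''.join(bundler.getchunks())
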
flaginterrupt = -1

class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows us to transmit an exception raised on the producer side
    during part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):

        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' opening out of band context\n')
        indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, 'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        _processpart(op, part)
        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' closing out of band context\n')
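
# On the wire, an interruption shows up inside a part payload as a chunk with
# the special size ``flaginterrupt`` (-1), followed by one complete
# out-of-band part, after which the interrupted payload resumes. A minimal
# sketch of the producer side (mirroring ``bundlepart.getchunks`` above;
# 'outofbandpart' is a hypothetical name):
#
#     yield _pack(_fpayloadsize, flaginterrupt)
#     for chunk in outofbandpart.getchunks(ui=ui):  # e.g. an 'error:abort'
#         yield chunk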

class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise error.ProgrammingError('no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable('no repo access from stream interruption')

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._payloadstream = None
        self._readheader()
        self._mandatory = None
        self._chunkindex = [] #(payload, file) position tuples for chunk starts
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to setup all logic related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = util.sortdict(self.mandatoryparams)
        self.params.update(self.advisoryparams)
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, 'Must start with chunk 0'
            self._chunkindex.append((0, self._tellfp()))
        else:
            assert chunknum < len(self._chunkindex), \
                   'Unknown chunk %d' % chunknum
            self._seekfp(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]
        payloadsize = self._unpack(_fpayloadsize)[0]
        indebug(self.ui, 'payload chunk size: %i' % payloadsize)
        while payloadsize:
            if payloadsize == flaginterrupt:
                # interruption detection, the handler will now read a
                # single part and process it.
                interrupthandler(self.ui, self._fp)()
            elif payloadsize < 0:
                msg = 'negative payload chunk size: %i' % payloadsize
                raise error.BundleValueError(msg)
            else:
                result = self._readexact(payloadsize)
                chunknum += 1
                pos += payloadsize
                if chunknum == len(self._chunkindex):
                    self._chunkindex.append((pos, self._tellfp()))
                yield result
            payloadsize = self._unpack(_fpayloadsize)[0]
            indebug(self.ui, 'payload chunk size: %i' % payloadsize)

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError('Unknown chunk')

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, 'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, 'part id: "%s"' % self.id)
        # extract mandatory bit from type
        self.mandatory = (self.type != self.type.lower())
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param value
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug('bundle2-input-part: total payload size %i\n'
                              % self._pos)
            self.consumed = True
        return data

    def tell(self):
        return self._pos

    def seek(self, offset, whence=0):
        if whence == 0:
            newpos = offset
        elif whence == 1:
            newpos = self._pos + offset
        elif whence == 2:
            if not self.consumed:
                self.read()
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError('Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            self.read()
        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError('Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_('Seek failed\n'))
            self._pos = newpos

    def _seekfp(self, offset, whence=0):
        """move the underlying file pointer

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def _tellfp(self):
        """return the file offset, or None if file is not seekable

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError as e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
            return None

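# Illustrative consumer sketch, not part of upstream bundle2. It assumes an
# open bundle2 stream 'fp' and a ui object 'ui'; ``unbundle20`` and its
# ``iterparts`` method live elsewhere in this module:
#
#     unbundler = unbundle20(ui, fp)
#     for part in unbundler.iterparts():
#         ui.write('%s: %i bytes\n' % (part.type, len(part.read())))
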
# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {'HG20': (),
                'error': ('abort', 'unsupportedcontent', 'pushraced',
                          'pushkey'),
                'listkeys': (),
                'pushkey': (),
                'digests': tuple(sorted(util.DIGESTS.keys())),
                'remote-changegroup': ('http', 'https'),
                'hgtagsfnodes': (),
               }

def getrepocaps(repo, allowpushback=False):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.
    """
    caps = capabilities.copy()
    caps['changegroup'] = tuple(sorted(
        changegroup.supportedincomingversions(repo)))
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple('V%i' % v for v in obsolete.formats)
        caps['obsmarkers'] = supportedformat
    if allowpushback:
        caps['pushback'] = ()
    return caps

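# Illustrative sketch, not part of upstream bundle2: the capability dict is
# advertised to peers as a urlquoted blob. ``encodecaps`` lives elsewhere in
# this module; 'repo' is an assumed repository object:
#
#     capsblob = encodecaps(getrepocaps(repo, allowpushback=True))
#     # a peer recovers the dict with decodecaps(urlreq.unquote(capsblob)),
#     # as ``bundle2caps`` below does.
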
def bundle2caps(remote):
    """return the bundle capabilities of a peer as dict"""
    raw = remote.capable('bundle2')
    if not raw and raw != '':
        return {}
    capsblob = urlreq.unquote(remote.capable('bundle2'))
    return decodecaps(capsblob)

def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict
    """
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

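# For example, a caps dict advertising 'obsmarkers': ('V0', 'V1') yields
# [0, 1]:
#
#     obsmarkersversion({'obsmarkers': ('V0', 'V1')})  # -> [0, 1]
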
def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
                   vfs=None, compression=None, compopts=None):
    if bundletype.startswith('HG10'):
        cg = changegroup.getchangegroup(repo, source, outgoing, version='01')
        return writebundle(ui, cg, filename, bundletype, vfs=vfs,
                           compression=compression, compopts=compopts)
    elif not bundletype.startswith('HG20'):
        raise error.ProgrammingError('unknown bundle type: %s' % bundletype)

    bundle = bundle20(ui)
    bundle.setcompression(compression, compopts)
    _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
    chunkiter = bundle.getchunks()

    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)

def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
    # We should eventually reconcile this logic with the one behind
    # 'exchange.getbundle2partsgenerator'.
    #
    # The types of input from 'getbundle' and 'writenewbundle' are a bit
    # different right now. So we keep them separated for now for the sake of
    # simplicity.

    # we always want a changegroup in such a bundle
    cgversion = opts.get('cg.version')
    if cgversion is None:
        cgversion = changegroup.safeversion(repo)
    cg = changegroup.getchangegroup(repo, source, outgoing,
                                    version=cgversion)
    part = bundler.newpart('changegroup', data=cg.getchunks())
    part.addparam('version', cg.version)
    if 'clcount' in cg.extras:
        part.addparam('nbchanges', str(cg.extras['clcount']),
                      mandatory=False)

def addparttagsfnodescache(repo, bundler, outgoing):
    # we include the tags fnode cache for the bundle changesets
    # (as an optional part)
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.missingheads:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode is not None:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart('hgtagsfnodes', data=''.join(chunks))

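# The 'hgtagsfnodes' payload is a flat byte string of 40-byte records: a
# 20-byte changeset node followed by the 20-byte filenode of its .hgtags.
# A decoding sketch (assuming 'data' holds the raw part payload):
#
#     for i in range(0, len(data), 40):
#         node, fnode = data[i:i + 20], data[i + 20:i + 40]
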
def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
                compopts=None):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
    """

    if bundletype == "HG20":
        bundle = bundle20(ui)
        bundle.setcompression(compression, compopts)
        part = bundle.newpart('changegroup', data=cg.getchunks())
        part.addparam('version', cg.version)
        if 'clcount' in cg.extras:
            part.addparam('nbchanges', str(cg.extras['clcount']),
                          mandatory=False)
        chunkiter = bundle.getchunks()
    else:
        # compression argument is only for the bundle2 case
        assert compression is None
        if cg.version != '01':
            raise error.Abort(_('old bundle types only support v1 '
                                'changegroups'))
        header, comp = bundletypes[bundletype]
        if comp not in util.compengines.supportedbundletypes:
            raise error.Abort(_('unknown stream compression type: %s')
                              % comp)
        compengine = util.compengines.forbundletype(comp)
        def chunkiter():
            yield header
            for chunk in compengine.compressstream(cg.getchunks(), compopts):
                yield chunk
        chunkiter = chunkiter()

    # parse the changegroup data, otherwise we will block
    # in case of sshrepo because we don't know the end of the stream
    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)

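# Illustrative usage sketch, not part of upstream bundle2 (the filename and
# the changegroup 'cg' are assumed; 'cg' must be a v1 changegroup for the
# old bundle types): write a bzip2 compressed bundle10 file.
#
#     writebundle(ui, cg, 'example.hg', 'HG10BZ')
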
@parthandler('changegroup', ('version', 'nbchanges', 'treemanifest'))
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will be massively reworked
    before being inflicted on any end-user.
    """
    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    unpackerversion = inpart.params.get('version', '01')
    # We should raise an appropriate exception here
    cg = changegroup.getunbundler(unpackerversion, inpart, None)
    # the source and url passed here are overwritten by the ones contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    nbchangesets = None
    if 'nbchanges' in inpart.params:
        nbchangesets = int(inpart.params.get('nbchanges'))
    if ('treemanifest' in inpart.params and
        'treemanifest' not in op.repo.requirements):
        if len(op.repo.changelog) != 0:
            raise error.Abort(_(
                "bundle contains tree manifests, but local repo is "
                "non-empty and does not use tree manifests"))
        op.repo.requirements.add('treemanifest')
        op.repo._applyopenerreqs()
        op.repo._writerequirements()
    ret = cg.apply(op.repo, 'bundle2', 'bundle2', expectedtotal=nbchangesets)
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup', mandatory=False)
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

_remotechangegroupparams = tuple(['url', 'size', 'digests'] +
    ['digest:%s' % k for k in util.DIGESTS.keys()])
@parthandler('remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given a url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
      - url: the url to the bundle10.
      - size: the bundle10 file size. It is used to validate that what was
        retrieved by the client matches the server knowledge about the bundle.
      - digests: a space separated list of the digest types provided as
        parameters.
      - digest:<digest-type>: the hexadecimal representation of the digest with
        that name. Like the size, it is used to validate that what was
        retrieved by the client matches what the server knows about the bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params['url']
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
    parsed_url = util.url(raw_url)
    if parsed_url.scheme not in capabilities['remote-changegroup']:
        raise error.Abort(_('remote-changegroup does not support %s urls') %
                          parsed_url.scheme)

    try:
        size = int(inpart.params['size'])
    except ValueError:
        raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
                          % 'size')
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')

    digests = {}
    for typ in inpart.params.get('digests', '').split():
        param = 'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise error.Abort(_('remote-changegroup: missing "%s" param') %
                              param)
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    from . import exchange
    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise error.Abort(_('%s: not a bundle version 1.0') %
                          util.hidepassword(raw_url))
    ret = cg.apply(op.repo, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup')
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except error.Abort as e:
        raise error.Abort(_('bundle at %s is corrupted:\n%s') %
                          (util.hidepassword(raw_url), str(e)))
    assert not inpart.read()

@parthandler('reply:changegroup', ('return', 'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    if sorted(heads) != sorted(op.repo.heads()):
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')

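# Producer-side sketch, not part of upstream bundle2 ('bundler' and 'repo'
# are assumed): the payload is simply the concatenation of the 20-byte
# binary heads the client saw before pushing.
#
#     bundler.newpart('check:heads', data=''.join(sorted(repo.heads())))
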
@parthandler('output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(_('remote: %s\n') % line)

@parthandler('replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

1572 class AbortFromPart(error.Abort):
1596 class AbortFromPart(error.Abort):
1573 """Sub-class of Abort that denotes an error from a bundle2 part."""
1597 """Sub-class of Abort that denotes an error from a bundle2 part."""
1574
1598
1575 @parthandler('error:abort', ('message', 'hint'))
1599 @parthandler('error:abort', ('message', 'hint'))
1576 def handleerrorabort(op, inpart):
1600 def handleerrorabort(op, inpart):
1577 """Used to transmit abort error over the wire"""
1601 """Used to transmit abort error over the wire"""
1578 raise AbortFromPart(inpart.params['message'],
1602 raise AbortFromPart(inpart.params['message'],
1579 hint=inpart.params.get('hint'))
1603 hint=inpart.params.get('hint'))
1580
1604
1581 @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
1605 @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
1582 'in-reply-to'))
1606 'in-reply-to'))
1583 def handleerrorpushkey(op, inpart):
1607 def handleerrorpushkey(op, inpart):
1584 """Used to transmit failure of a mandatory pushkey over the wire"""
1608 """Used to transmit failure of a mandatory pushkey over the wire"""
1585 kwargs = {}
1609 kwargs = {}
1586 for name in ('namespace', 'key', 'new', 'old', 'ret'):
1610 for name in ('namespace', 'key', 'new', 'old', 'ret'):
1587 value = inpart.params.get(name)
1611 value = inpart.params.get(name)
1588 if value is not None:
1612 if value is not None:
1589 kwargs[name] = value
1613 kwargs[name] = value
1590 raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)
1614 raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)
1591
1615
1592 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
1616 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
1593 def handleerrorunsupportedcontent(op, inpart):
1617 def handleerrorunsupportedcontent(op, inpart):
1594 """Used to transmit unknown content error over the wire"""
1618 """Used to transmit unknown content error over the wire"""
1595 kwargs = {}
1619 kwargs = {}
1596 parttype = inpart.params.get('parttype')
1620 parttype = inpart.params.get('parttype')
1597 if parttype is not None:
1621 if parttype is not None:
1598 kwargs['parttype'] = parttype
1622 kwargs['parttype'] = parttype
1599 params = inpart.params.get('params')
1623 params = inpart.params.get('params')
1600 if params is not None:
1624 if params is not None:
1601 kwargs['params'] = params.split('\0')
1625 kwargs['params'] = params.split('\0')
1602
1626
1603 raise error.BundleUnknownFeatureError(**kwargs)
1627 raise error.BundleUnknownFeatureError(**kwargs)
1604
1628
1605 @parthandler('error:pushraced', ('message',))
1629 @parthandler('error:pushraced', ('message',))
1606 def handleerrorpushraced(op, inpart):
1630 def handleerrorpushraced(op, inpart):
1607 """Used to transmit push race error over the wire"""
1631 """Used to transmit push race error over the wire"""
1608 raise error.ResponseError(_('push failed:'), inpart.params['message'])
1632 raise error.ResponseError(_('push failed:'), inpart.params['message'])
1609
1633
1610 @parthandler('listkeys', ('namespace',))
1634 @parthandler('listkeys', ('namespace',))
1611 def handlelistkeys(op, inpart):
1635 def handlelistkeys(op, inpart):
1612 """retrieve pushkey namespace content stored in a bundle2"""
1636 """retrieve pushkey namespace content stored in a bundle2"""
1613 namespace = inpart.params['namespace']
1637 namespace = inpart.params['namespace']
1614 r = pushkey.decodekeys(inpart.read())
1638 r = pushkey.decodekeys(inpart.read())
1615 op.records.add('listkeys', (namespace, r))
1639 op.records.add('listkeys', (namespace, r))
1616
1640
1617 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
1641 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
1618 def handlepushkey(op, inpart):
1642 def handlepushkey(op, inpart):
1619 """process a pushkey request"""
1643 """process a pushkey request"""
1620 dec = pushkey.decode
1644 dec = pushkey.decode
1621 namespace = dec(inpart.params['namespace'])
1645 namespace = dec(inpart.params['namespace'])
1622 key = dec(inpart.params['key'])
1646 key = dec(inpart.params['key'])
1623 old = dec(inpart.params['old'])
1647 old = dec(inpart.params['old'])
1624 new = dec(inpart.params['new'])
1648 new = dec(inpart.params['new'])
1625 # Grab the transaction to ensure that we have the lock before performing the
1649 # Grab the transaction to ensure that we have the lock before performing the
1626 # pushkey.
1650 # pushkey.
1627 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1651 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1628 op.gettransaction()
1652 op.gettransaction()
1629 ret = op.repo.pushkey(namespace, key, old, new)
1653 ret = op.repo.pushkey(namespace, key, old, new)
1630 record = {'namespace': namespace,
1654 record = {'namespace': namespace,
1631 'key': key,
1655 'key': key,
1632 'old': old,
1656 'old': old,
1633 'new': new}
1657 'new': new}
1634 op.records.add('pushkey', record)
1658 op.records.add('pushkey', record)
1635 if op.reply is not None:
1659 if op.reply is not None:
1636 rpart = op.reply.newpart('reply:pushkey')
1660 rpart = op.reply.newpart('reply:pushkey')
1637 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1661 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1638 rpart.addparam('return', '%i' % ret, mandatory=False)
1662 rpart.addparam('return', '%i' % ret, mandatory=False)
1639 if inpart.mandatory and not ret:
1663 if inpart.mandatory and not ret:
1640 kwargs = {}
1664 kwargs = {}
1641 for key in ('namespace', 'key', 'new', 'old', 'ret'):
1665 for key in ('namespace', 'key', 'new', 'old', 'ret'):
1642 if key in inpart.params:
1666 if key in inpart.params:
1643 kwargs[key] = inpart.params[key]
1667 kwargs[key] = inpart.params[key]
1644 raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)
1668 raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)
1645
1669
1646 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
1670 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
1647 def handlepushkeyreply(op, inpart):
1671 def handlepushkeyreply(op, inpart):
1648 """retrieve the result of a pushkey request"""
1672 """retrieve the result of a pushkey request"""
1649 ret = int(inpart.params['return'])
1673 ret = int(inpart.params['return'])
1650 partid = int(inpart.params['in-reply-to'])
1674 partid = int(inpart.params['in-reply-to'])
1651 op.records.add('pushkey', {'return': ret}, partid)
1675 op.records.add('pushkey', {'return': ret}, partid)
1652
1676
1653 @parthandler('obsmarkers')
1677 @parthandler('obsmarkers')
1654 def handleobsmarker(op, inpart):
1678 def handleobsmarker(op, inpart):
1655 """add a stream of obsmarkers to the repo"""
1679 """add a stream of obsmarkers to the repo"""
1656 tr = op.gettransaction()
1680 tr = op.gettransaction()
1657 markerdata = inpart.read()
1681 markerdata = inpart.read()
1658 if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
1682 if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
1659 op.ui.write(('obsmarker-exchange: %i bytes received\n')
1683 op.ui.write(('obsmarker-exchange: %i bytes received\n')
1660 % len(markerdata))
1684 % len(markerdata))
1661 # The mergemarkers call will crash if marker creation is not enabled.
1685 # The mergemarkers call will crash if marker creation is not enabled.
1662 # We want to avoid this if the part is advisory.
1686 # We want to avoid this if the part is advisory.
1663 if not inpart.mandatory and op.repo.obsstore.readonly:
1687 if not inpart.mandatory and op.repo.obsstore.readonly:
1664 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
1688 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
1665 return
1689 return
1666 new = op.repo.obsstore.mergemarkers(tr, markerdata)
1690 new = op.repo.obsstore.mergemarkers(tr, markerdata)
1667 if new:
1691 if new:
1668 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
1692 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
1669 op.records.add('obsmarkers', {'new': new})
1693 op.records.add('obsmarkers', {'new': new})
1670 if op.reply is not None:
1694 if op.reply is not None:
1671 rpart = op.reply.newpart('reply:obsmarkers')
1695 rpart = op.reply.newpart('reply:obsmarkers')
1672 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1696 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1673 rpart.addparam('new', '%i' % new, mandatory=False)
1697 rpart.addparam('new', '%i' % new, mandatory=False)
1674
1698
1675
1699
1676 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
1700 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
1677 def handleobsmarkerreply(op, inpart):
1701 def handleobsmarkerreply(op, inpart):
1678 """retrieve the result of a pushkey request"""
1702 """retrieve the result of a pushkey request"""
1679 ret = int(inpart.params['new'])
1703 ret = int(inpart.params['new'])
1680 partid = int(inpart.params['in-reply-to'])
1704 partid = int(inpart.params['in-reply-to'])
1681 op.records.add('obsmarkers', {'new': ret}, partid)
1705 op.records.add('obsmarkers', {'new': ret}, partid)
1682
1706
1683 @parthandler('hgtagsfnodes')
1707 @parthandler('hgtagsfnodes')
1684 def handlehgtagsfnodes(op, inpart):
1708 def handlehgtagsfnodes(op, inpart):
1685 """Applies .hgtags fnodes cache entries to the local repo.
1709 """Applies .hgtags fnodes cache entries to the local repo.
1686
1710
1687 Payload is pairs of 20-byte changeset nodes and filenodes.
1711 Payload is pairs of 20-byte changeset nodes and filenodes.
1688 """
1712 """
1689 # Grab the transaction so we ensure that we have the lock at this point.
1713 # Grab the transaction so we ensure that we have the lock at this point.
1690 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1714 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1691 op.gettransaction()
1715 op.gettransaction()
1692 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
1716 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
1693
1717
1694 count = 0
1718 count = 0
1695 while True:
1719 while True:
1696 node = inpart.read(20)
1720 node = inpart.read(20)
1697 fnode = inpart.read(20)
1721 fnode = inpart.read(20)
1698 if len(node) < 20 or len(fnode) < 20:
1722 if len(node) < 20 or len(fnode) < 20:
1699 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
1723 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
1700 break
1724 break
1701 cache.setfnode(node, fnode)
1725 cache.setfnode(node, fnode)
1702 count += 1
1726 count += 1
1703
1727
1704 cache.write()
1728 cache.write()
1705 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
1729 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
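As a minimal illustration of the fixed-width payload this handler consumes (the helper names below are hypothetical, not part of Mercurial), each record is simply two 20-byte binary hashes concatenated:

    def encodefnodes(entries):
        # entries: iterable of (node, fnode) pairs, each a 20-byte hash
        chunks = []
        for node, fnode in entries:
            assert len(node) == 20 and len(fnode) == 20
            chunks.append(node + fnode)
        return b''.join(chunks)

    def decodefnodes(payload):
        # yield (node, fnode) pairs; a trailing partial record is ignored,
        # mirroring the "incomplete data" check in the handler above
        for i in range(0, len(payload) - 39, 40):
            yield payload[i:i + 20], payload[i + 20:i + 40]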
@@ -1,2022 +1,1998 b''
1 # exchange.py - utility to exchange data between repos.
1 # exchange.py - utility to exchange data between repos.
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import errno
10 import errno
11 import hashlib
11 import hashlib
12
12
13 from .i18n import _
13 from .i18n import _
14 from .node import (
14 from .node import (
15 hex,
15 hex,
16 nullid,
16 nullid,
17 )
17 )
18 from . import (
18 from . import (
19 bookmarks as bookmod,
19 bookmarks as bookmod,
20 bundle2,
20 bundle2,
21 changegroup,
21 changegroup,
22 discovery,
22 discovery,
23 error,
23 error,
24 lock as lockmod,
24 lock as lockmod,
25 obsolete,
25 obsolete,
26 phases,
26 phases,
27 pushkey,
27 pushkey,
28 scmutil,
28 scmutil,
29 sslutil,
29 sslutil,
30 streamclone,
30 streamclone,
31 tags,
32 url as urlmod,
31 url as urlmod,
33 util,
32 util,
34 )
33 )
35
34
36 urlerr = util.urlerr
35 urlerr = util.urlerr
37 urlreq = util.urlreq
36 urlreq = util.urlreq
38
37
39 # Maps bundle version human names to changegroup versions.
38 # Maps bundle version human names to changegroup versions.
40 _bundlespeccgversions = {'v1': '01',
39 _bundlespeccgversions = {'v1': '01',
41 'v2': '02',
40 'v2': '02',
42 'packed1': 's1',
41 'packed1': 's1',
43 'bundle2': '02', #legacy
42 'bundle2': '02', #legacy
44 }
43 }
45
44
46 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
45 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
47 _bundlespecv1compengines = set(['gzip', 'bzip2', 'none'])
46 _bundlespecv1compengines = set(['gzip', 'bzip2', 'none'])
48
47
49 def parsebundlespec(repo, spec, strict=True, externalnames=False):
48 def parsebundlespec(repo, spec, strict=True, externalnames=False):
50 """Parse a bundle string specification into parts.
49 """Parse a bundle string specification into parts.
51
50
52 Bundle specifications denote a well-defined bundle/exchange format.
51 Bundle specifications denote a well-defined bundle/exchange format.
53 The content of a given specification should not change over time in
52 The content of a given specification should not change over time in
54 order to ensure that bundles produced by a newer version of Mercurial are
53 order to ensure that bundles produced by a newer version of Mercurial are
55 readable from an older version.
54 readable from an older version.
56
55
57 The string currently has the form:
56 The string currently has the form:
58
57
59 <compression>-<type>[;<parameter0>[;<parameter1>]]
58 <compression>-<type>[;<parameter0>[;<parameter1>]]
60
59
61 Where <compression> is one of the supported compression formats
60 Where <compression> is one of the supported compression formats
62 and <type> is (currently) a version string. A ";" can follow the type and
61 and <type> is (currently) a version string. A ";" can follow the type and
63 all text afterwards is interpreted as URI encoded, ";" delimited key=value
62 all text afterwards is interpreted as URI encoded, ";" delimited key=value
64 pairs.
63 pairs.
65
64
66 If ``strict`` is True (the default) <compression> is required. Otherwise,
65 If ``strict`` is True (the default) <compression> is required. Otherwise,
67 it is optional.
66 it is optional.
68
67
69 If ``externalnames`` is False (the default), the human-centric names will
68 If ``externalnames`` is False (the default), the human-centric names will
70 be converted to their internal representation.
69 be converted to their internal representation.
71
70
72 Returns a 3-tuple of (compression, version, parameters). Compression will
71 Returns a 3-tuple of (compression, version, parameters). Compression will
73 be ``None`` if not in strict mode and a compression isn't defined.
72 be ``None`` if not in strict mode and a compression isn't defined.
74
73
75 An ``InvalidBundleSpecification`` is raised when the specification is
74 An ``InvalidBundleSpecification`` is raised when the specification is
76 not syntactically well formed.
75 not syntactically well formed.
77
76
78 An ``UnsupportedBundleSpecification`` is raised when the compression or
77 An ``UnsupportedBundleSpecification`` is raised when the compression or
79 bundle type/version is not recognized.
78 bundle type/version is not recognized.
80
79
81 Note: this function will likely eventually return a more complex data
80 Note: this function will likely eventually return a more complex data
82 structure, including bundle2 part information.
81 structure, including bundle2 part information.
83 """
82 """
84 def parseparams(s):
83 def parseparams(s):
85 if ';' not in s:
84 if ';' not in s:
86 return s, {}
85 return s, {}
87
86
88 params = {}
87 params = {}
89 version, paramstr = s.split(';', 1)
88 version, paramstr = s.split(';', 1)
90
89
91 for p in paramstr.split(';'):
90 for p in paramstr.split(';'):
92 if '=' not in p:
91 if '=' not in p:
93 raise error.InvalidBundleSpecification(
92 raise error.InvalidBundleSpecification(
94 _('invalid bundle specification: '
93 _('invalid bundle specification: '
95 'missing "=" in parameter: %s') % p)
94 'missing "=" in parameter: %s') % p)
96
95
97 key, value = p.split('=', 1)
96 key, value = p.split('=', 1)
98 key = urlreq.unquote(key)
97 key = urlreq.unquote(key)
99 value = urlreq.unquote(value)
98 value = urlreq.unquote(value)
100 params[key] = value
99 params[key] = value
101
100
102 return version, params
101 return version, params
103
102
104
103
105 if strict and '-' not in spec:
104 if strict and '-' not in spec:
106 raise error.InvalidBundleSpecification(
105 raise error.InvalidBundleSpecification(
107 _('invalid bundle specification; '
106 _('invalid bundle specification; '
108 'must be prefixed with compression: %s') % spec)
107 'must be prefixed with compression: %s') % spec)
109
108
110 if '-' in spec:
109 if '-' in spec:
111 compression, version = spec.split('-', 1)
110 compression, version = spec.split('-', 1)
112
111
113 if compression not in util.compengines.supportedbundlenames:
112 if compression not in util.compengines.supportedbundlenames:
114 raise error.UnsupportedBundleSpecification(
113 raise error.UnsupportedBundleSpecification(
115 _('%s compression is not supported') % compression)
114 _('%s compression is not supported') % compression)
116
115
117 version, params = parseparams(version)
116 version, params = parseparams(version)
118
117
119 if version not in _bundlespeccgversions:
118 if version not in _bundlespeccgversions:
120 raise error.UnsupportedBundleSpecification(
119 raise error.UnsupportedBundleSpecification(
121 _('%s is not a recognized bundle version') % version)
120 _('%s is not a recognized bundle version') % version)
122 else:
121 else:
123 # Value could be just the compression or just the version, in which
122 # Value could be just the compression or just the version, in which
124 # case some defaults are assumed (but only when not in strict mode).
123 # case some defaults are assumed (but only when not in strict mode).
125 assert not strict
124 assert not strict
126
125
127 spec, params = parseparams(spec)
126 spec, params = parseparams(spec)
128
127
129 if spec in util.compengines.supportedbundlenames:
128 if spec in util.compengines.supportedbundlenames:
130 compression = spec
129 compression = spec
131 version = 'v1'
130 version = 'v1'
132 # Generaldelta repos require v2.
131 # Generaldelta repos require v2.
133 if 'generaldelta' in repo.requirements:
132 if 'generaldelta' in repo.requirements:
134 version = 'v2'
133 version = 'v2'
135 # Modern compression engines require v2.
134 # Modern compression engines require v2.
136 if compression not in _bundlespecv1compengines:
135 if compression not in _bundlespecv1compengines:
137 version = 'v2'
136 version = 'v2'
138 elif spec in _bundlespeccgversions:
137 elif spec in _bundlespeccgversions:
139 if spec == 'packed1':
138 if spec == 'packed1':
140 compression = 'none'
139 compression = 'none'
141 else:
140 else:
142 compression = 'bzip2'
141 compression = 'bzip2'
143 version = spec
142 version = spec
144 else:
143 else:
145 raise error.UnsupportedBundleSpecification(
144 raise error.UnsupportedBundleSpecification(
146 _('%s is not a recognized bundle specification') % spec)
145 _('%s is not a recognized bundle specification') % spec)
147
146
148 # Bundle version 1 only supports a known set of compression engines.
147 # Bundle version 1 only supports a known set of compression engines.
149 if version == 'v1' and compression not in _bundlespecv1compengines:
148 if version == 'v1' and compression not in _bundlespecv1compengines:
150 raise error.UnsupportedBundleSpecification(
149 raise error.UnsupportedBundleSpecification(
151 _('compression engine %s is not supported on v1 bundles') %
150 _('compression engine %s is not supported on v1 bundles') %
152 compression)
151 compression)
153
152
154 # The specification for packed1 can optionally declare the data formats
153 # The specification for packed1 can optionally declare the data formats
155 # required to apply it. If we see this metadata, compare against what the
154 # required to apply it. If we see this metadata, compare against what the
156 # repo supports and error if the bundle isn't compatible.
155 # repo supports and error if the bundle isn't compatible.
157 if version == 'packed1' and 'requirements' in params:
156 if version == 'packed1' and 'requirements' in params:
158 requirements = set(params['requirements'].split(','))
157 requirements = set(params['requirements'].split(','))
159 missingreqs = requirements - repo.supportedformats
158 missingreqs = requirements - repo.supportedformats
160 if missingreqs:
159 if missingreqs:
161 raise error.UnsupportedBundleSpecification(
160 raise error.UnsupportedBundleSpecification(
162 _('missing support for repository features: %s') %
161 _('missing support for repository features: %s') %
163 ', '.join(sorted(missingreqs)))
162 ', '.join(sorted(missingreqs)))
164
163
165 if not externalnames:
164 if not externalnames:
166 engine = util.compengines.forbundlename(compression)
165 engine = util.compengines.forbundlename(compression)
167 compression = engine.bundletype()[1]
166 compression = engine.bundletype()[1]
168 version = _bundlespeccgversions[version]
167 version = _bundlespeccgversions[version]
169 return compression, version, params
168 return compression, version, params
170
169
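For illustration, assuming the stock compression engine names in util.compengines, here are some inputs and the (compression, version, params) results this function would produce:

    # parsebundlespec(repo, 'gzip-v2') -> ('GZ', '02', {})
    # parsebundlespec(repo, 'none-packed1;requirements=generaldelta')
    #     -> ('UN', 's1', {'requirements': 'generaldelta'})
    # parsebundlespec(repo, 'gzip-v2', externalnames=True)
    #     -> ('gzip', 'v2', {})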
171 def readbundle(ui, fh, fname, vfs=None):
170 def readbundle(ui, fh, fname, vfs=None):
172 header = changegroup.readexactly(fh, 4)
171 header = changegroup.readexactly(fh, 4)
173
172
174 alg = None
173 alg = None
175 if not fname:
174 if not fname:
176 fname = "stream"
175 fname = "stream"
177 if not header.startswith('HG') and header.startswith('\0'):
176 if not header.startswith('HG') and header.startswith('\0'):
178 fh = changegroup.headerlessfixup(fh, header)
177 fh = changegroup.headerlessfixup(fh, header)
179 header = "HG10"
178 header = "HG10"
180 alg = 'UN'
179 alg = 'UN'
181 elif vfs:
180 elif vfs:
182 fname = vfs.join(fname)
181 fname = vfs.join(fname)
183
182
184 magic, version = header[0:2], header[2:4]
183 magic, version = header[0:2], header[2:4]
185
184
186 if magic != 'HG':
185 if magic != 'HG':
187 raise error.Abort(_('%s: not a Mercurial bundle') % fname)
186 raise error.Abort(_('%s: not a Mercurial bundle') % fname)
188 if version == '10':
187 if version == '10':
189 if alg is None:
188 if alg is None:
190 alg = changegroup.readexactly(fh, 2)
189 alg = changegroup.readexactly(fh, 2)
191 return changegroup.cg1unpacker(fh, alg)
190 return changegroup.cg1unpacker(fh, alg)
192 elif version.startswith('2'):
191 elif version.startswith('2'):
193 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
192 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
194 elif version == 'S1':
193 elif version == 'S1':
195 return streamclone.streamcloneapplier(fh)
194 return streamclone.streamcloneapplier(fh)
196 else:
195 else:
197 raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))
196 raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))
198
197
199 def getbundlespec(ui, fh):
198 def getbundlespec(ui, fh):
200 """Infer the bundlespec from a bundle file handle.
199 """Infer the bundlespec from a bundle file handle.
201
200
202 The input file handle is seeked and the original seek position is not
201 The input file handle is seeked and the original seek position is not
203 restored.
202 restored.
204 """
203 """
205 def speccompression(alg):
204 def speccompression(alg):
206 try:
205 try:
207 return util.compengines.forbundletype(alg).bundletype()[0]
206 return util.compengines.forbundletype(alg).bundletype()[0]
208 except KeyError:
207 except KeyError:
209 return None
208 return None
210
209
211 b = readbundle(ui, fh, None)
210 b = readbundle(ui, fh, None)
212 if isinstance(b, changegroup.cg1unpacker):
211 if isinstance(b, changegroup.cg1unpacker):
213 alg = b._type
212 alg = b._type
214 if alg == '_truncatedBZ':
213 if alg == '_truncatedBZ':
215 alg = 'BZ'
214 alg = 'BZ'
216 comp = speccompression(alg)
215 comp = speccompression(alg)
217 if not comp:
216 if not comp:
218 raise error.Abort(_('unknown compression algorithm: %s') % alg)
217 raise error.Abort(_('unknown compression algorithm: %s') % alg)
219 return '%s-v1' % comp
218 return '%s-v1' % comp
220 elif isinstance(b, bundle2.unbundle20):
219 elif isinstance(b, bundle2.unbundle20):
221 if 'Compression' in b.params:
220 if 'Compression' in b.params:
222 comp = speccompression(b.params['Compression'])
221 comp = speccompression(b.params['Compression'])
223 if not comp:
222 if not comp:
224 raise error.Abort(_('unknown compression algorithm: %s') % comp)
223 raise error.Abort(_('unknown compression algorithm: %s') % comp)
225 else:
224 else:
226 comp = 'none'
225 comp = 'none'
227
226
228 version = None
227 version = None
229 for part in b.iterparts():
228 for part in b.iterparts():
230 if part.type == 'changegroup':
229 if part.type == 'changegroup':
231 version = part.params['version']
230 version = part.params['version']
232 if version in ('01', '02'):
231 if version in ('01', '02'):
233 version = 'v2'
232 version = 'v2'
234 else:
233 else:
235 raise error.Abort(_('changegroup version %s does not have '
234 raise error.Abort(_('changegroup version %s does not have '
236 'a known bundlespec') % version,
235 'a known bundlespec') % version,
237 hint=_('try upgrading your Mercurial '
236 hint=_('try upgrading your Mercurial '
238 'client'))
237 'client'))
239
238
240 if not version:
239 if not version:
241 raise error.Abort(_('could not identify changegroup version in '
240 raise error.Abort(_('could not identify changegroup version in '
242 'bundle'))
241 'bundle'))
243
242
244 return '%s-%s' % (comp, version)
243 return '%s-%s' % (comp, version)
245 elif isinstance(b, streamclone.streamcloneapplier):
244 elif isinstance(b, streamclone.streamcloneapplier):
246 requirements = streamclone.readbundle1header(fh)[2]
245 requirements = streamclone.readbundle1header(fh)[2]
247 params = 'requirements=%s' % ','.join(sorted(requirements))
246 params = 'requirements=%s' % ','.join(sorted(requirements))
248 return 'none-packed1;%s' % urlreq.quote(params)
247 return 'none-packed1;%s' % urlreq.quote(params)
249 else:
248 else:
250 raise error.Abort(_('unknown bundle type: %s') % b)
249 raise error.Abort(_('unknown bundle type: %s') % b)
251
250
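A short usage sketch (the file name and ui object are assumed to exist):

    with open('example.hg', 'rb') as fh:
        spec = getbundlespec(ui, fh)  # e.g. 'bzip2-v1' or 'gzip-v2'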
252 def buildobsmarkerspart(bundler, markers):
251 def buildobsmarkerspart(bundler, markers):
253 """add an obsmarker part to the bundler with <markers>
252 """add an obsmarker part to the bundler with <markers>
254
253
255 No part is created if markers is empty.
254 No part is created if markers is empty.
256 Raises ValueError if the bundler doesn't support any known obsmarker format.
255 Raises ValueError if the bundler doesn't support any known obsmarker format.
257 """
256 """
258 if markers:
257 if markers:
259 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
258 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
260 version = obsolete.commonversion(remoteversions)
259 version = obsolete.commonversion(remoteversions)
261 if version is None:
260 if version is None:
262 raise ValueError('bundler does not support common obsmarker format')
261 raise ValueError('bundler does not support common obsmarker format')
263 stream = obsolete.encodemarkers(markers, True, version=version)
262 stream = obsolete.encodemarkers(markers, True, version=version)
264 return bundler.newpart('obsmarkers', data=stream)
263 return bundler.newpart('obsmarkers', data=stream)
265 return None
264 return None
266
265
267 def _computeoutgoing(repo, heads, common):
266 def _computeoutgoing(repo, heads, common):
268 """Computes which revs are outgoing given a set of common
267 """Computes which revs are outgoing given a set of common
269 and a set of heads.
268 and a set of heads.
270
269
271 This is a separate function so extensions can have access to
270 This is a separate function so extensions can have access to
272 the logic.
271 the logic.
273
272
274 Returns a discovery.outgoing object.
273 Returns a discovery.outgoing object.
275 """
274 """
276 cl = repo.changelog
275 cl = repo.changelog
277 if common:
276 if common:
278 hasnode = cl.hasnode
277 hasnode = cl.hasnode
279 common = [n for n in common if hasnode(n)]
278 common = [n for n in common if hasnode(n)]
280 else:
279 else:
281 common = [nullid]
280 common = [nullid]
282 if not heads:
281 if not heads:
283 heads = cl.heads()
282 heads = cl.heads()
284 return discovery.outgoing(repo, common, heads)
283 return discovery.outgoing(repo, common, heads)
285
284
286 def _forcebundle1(op):
285 def _forcebundle1(op):
287 """return true if a pull/push must use bundle1
286 """return true if a pull/push must use bundle1
288
287
289 This function is used to allow testing of the older bundle version"""
288 This function is used to allow testing of the older bundle version"""
290 ui = op.repo.ui
289 ui = op.repo.ui
291 forcebundle1 = False
290 forcebundle1 = False
292 # The goal of this config is to allow developers to choose the bundle
291 # The goal of this config is to allow developers to choose the bundle
293 # version used during exchange. This is especially handy for tests.
292 # version used during exchange. This is especially handy for tests.
294 # Value is a list of bundle versions to pick from; the highest supported
293 # Value is a list of bundle versions to pick from; the highest supported
295 # version should be used.
294 # version should be used.
296 #
295 #
297 # developer config: devel.legacy.exchange
296 # developer config: devel.legacy.exchange
298 exchange = ui.configlist('devel', 'legacy.exchange')
297 exchange = ui.configlist('devel', 'legacy.exchange')
299 forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
298 forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
300 return forcebundle1 or not op.remote.capable('bundle2')
299 return forcebundle1 or not op.remote.capable('bundle2')
301
300
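For example, a test can pin the exchange to the old format through the developer config read above (hgrc snippet):

    [devel]
    legacy.exchange = bundle1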
302 class pushoperation(object):
301 class pushoperation(object):
303 """A object that represent a single push operation
302 """A object that represent a single push operation
304
303
305 Its purpose is to carry push-related state and very common operations.
304 Its purpose is to carry push-related state and very common operations.
306
305
307 A new pushoperation should be created at the beginning of each push and
306 A new pushoperation should be created at the beginning of each push and
308 discarded afterward.
307 discarded afterward.
309 """
308 """
310
309
311 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
310 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
312 bookmarks=()):
311 bookmarks=()):
313 # repo we push from
312 # repo we push from
314 self.repo = repo
313 self.repo = repo
315 self.ui = repo.ui
314 self.ui = repo.ui
316 # repo we push to
315 # repo we push to
317 self.remote = remote
316 self.remote = remote
318 # force option provided
317 # force option provided
319 self.force = force
318 self.force = force
320 # revs to be pushed (None is "all")
319 # revs to be pushed (None is "all")
321 self.revs = revs
320 self.revs = revs
322 # bookmark explicitly pushed
321 # bookmark explicitly pushed
323 self.bookmarks = bookmarks
322 self.bookmarks = bookmarks
324 # allow push of new branch
323 # allow push of new branch
325 self.newbranch = newbranch
324 self.newbranch = newbranch
326 # did a local lock get acquired?
325 # did a local lock get acquired?
327 self.locallocked = None
326 self.locallocked = None
328 # step already performed
327 # step already performed
329 # (used to check what steps have been already performed through bundle2)
328 # (used to check what steps have been already performed through bundle2)
330 self.stepsdone = set()
329 self.stepsdone = set()
331 # Integer version of the changegroup push result
330 # Integer version of the changegroup push result
332 # - None means nothing to push
331 # - None means nothing to push
333 # - 0 means HTTP error
332 # - 0 means HTTP error
334 # - 1 means we pushed and remote head count is unchanged *or*
333 # - 1 means we pushed and remote head count is unchanged *or*
335 # we have outgoing changesets but refused to push
334 # we have outgoing changesets but refused to push
336 # - other values as described by addchangegroup()
335 # - other values as described by addchangegroup()
337 self.cgresult = None
336 self.cgresult = None
338 # Boolean value for the bookmark push
337 # Boolean value for the bookmark push
339 self.bkresult = None
338 self.bkresult = None
340 # discovery.outgoing object (contains common and outgoing data)
339 # discovery.outgoing object (contains common and outgoing data)
341 self.outgoing = None
340 self.outgoing = None
342 # all remote heads before the push
341 # all remote heads before the push
343 self.remoteheads = None
342 self.remoteheads = None
344 # testable as a boolean indicating if any nodes are missing locally.
343 # testable as a boolean indicating if any nodes are missing locally.
345 self.incoming = None
344 self.incoming = None
346 # phase changes that must be pushed alongside the changesets
345 # phase changes that must be pushed alongside the changesets
347 self.outdatedphases = None
346 self.outdatedphases = None
348 # phase changes that must be pushed if the changeset push fails
347 # phase changes that must be pushed if the changeset push fails
349 self.fallbackoutdatedphases = None
348 self.fallbackoutdatedphases = None
350 # outgoing obsmarkers
349 # outgoing obsmarkers
351 self.outobsmarkers = set()
350 self.outobsmarkers = set()
352 # outgoing bookmarks
351 # outgoing bookmarks
353 self.outbookmarks = []
352 self.outbookmarks = []
354 # transaction manager
353 # transaction manager
355 self.trmanager = None
354 self.trmanager = None
356 # map { pushkey partid -> callback handling failure}
355 # map { pushkey partid -> callback handling failure}
357 # used to handle exception from mandatory pushkey part failure
356 # used to handle exception from mandatory pushkey part failure
358 self.pkfailcb = {}
357 self.pkfailcb = {}
359
358
360 @util.propertycache
359 @util.propertycache
361 def futureheads(self):
360 def futureheads(self):
362 """future remote heads if the changeset push succeeds"""
361 """future remote heads if the changeset push succeeds"""
363 return self.outgoing.missingheads
362 return self.outgoing.missingheads
364
363
365 @util.propertycache
364 @util.propertycache
366 def fallbackheads(self):
365 def fallbackheads(self):
367 """future remote heads if the changeset push fails"""
366 """future remote heads if the changeset push fails"""
368 if self.revs is None:
367 if self.revs is None:
369 # no target to push, all common heads are relevant
368 # no target to push, all common heads are relevant
370 return self.outgoing.commonheads
369 return self.outgoing.commonheads
371 unfi = self.repo.unfiltered()
370 unfi = self.repo.unfiltered()
372 # I want cheads = heads(::missingheads and ::commonheads)
371 # I want cheads = heads(::missingheads and ::commonheads)
373 # (missingheads is revs with secret changeset filtered out)
372 # (missingheads is revs with secret changeset filtered out)
374 #
373 #
375 # This can be expressed as:
374 # This can be expressed as:
376 # cheads = ( (missingheads and ::commonheads)
375 # cheads = ( (missingheads and ::commonheads)
377 # + (commonheads and ::missingheads))"
376 # + (commonheads and ::missingheads))"
378 # )
377 # )
379 #
378 #
380 # while trying to push we already computed the following:
379 # while trying to push we already computed the following:
381 # common = (::commonheads)
380 # common = (::commonheads)
382 # missing = ((commonheads::missingheads) - commonheads)
381 # missing = ((commonheads::missingheads) - commonheads)
383 #
382 #
384 # We can pick:
383 # We can pick:
385 # * missingheads part of common (::commonheads)
384 # * missingheads part of common (::commonheads)
386 common = self.outgoing.common
385 common = self.outgoing.common
387 nm = self.repo.changelog.nodemap
386 nm = self.repo.changelog.nodemap
388 cheads = [node for node in self.revs if nm[node] in common]
387 cheads = [node for node in self.revs if nm[node] in common]
389 # and
388 # and
390 # * commonheads parents on missing
389 # * commonheads parents on missing
391 revset = unfi.set('%ln and parents(roots(%ln))',
390 revset = unfi.set('%ln and parents(roots(%ln))',
392 self.outgoing.commonheads,
391 self.outgoing.commonheads,
393 self.outgoing.missing)
392 self.outgoing.missing)
394 cheads.extend(c.node() for c in revset)
393 cheads.extend(c.node() for c in revset)
395 return cheads
394 return cheads
396
395
397 @property
396 @property
398 def commonheads(self):
397 def commonheads(self):
399 """set of all common heads after changeset bundle push"""
398 """set of all common heads after changeset bundle push"""
400 if self.cgresult:
399 if self.cgresult:
401 return self.futureheads
400 return self.futureheads
402 else:
401 else:
403 return self.fallbackheads
402 return self.fallbackheads
404
403
405 # mapping of message used when pushing bookmark
404 # mapping of message used when pushing bookmark
406 bookmsgmap = {'update': (_("updating bookmark %s\n"),
405 bookmsgmap = {'update': (_("updating bookmark %s\n"),
407 _('updating bookmark %s failed!\n')),
406 _('updating bookmark %s failed!\n')),
408 'export': (_("exporting bookmark %s\n"),
407 'export': (_("exporting bookmark %s\n"),
409 _('exporting bookmark %s failed!\n')),
408 _('exporting bookmark %s failed!\n')),
410 'delete': (_("deleting remote bookmark %s\n"),
409 'delete': (_("deleting remote bookmark %s\n"),
411 _('deleting remote bookmark %s failed!\n')),
410 _('deleting remote bookmark %s failed!\n')),
412 }
411 }
413
412
414
413
415 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
414 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
416 opargs=None):
415 opargs=None):
417 '''Push outgoing changesets (limited by revs) from a local
416 '''Push outgoing changesets (limited by revs) from a local
418 repository to remote. Return an integer:
417 repository to remote. Return an integer:
419 - None means nothing to push
418 - None means nothing to push
420 - 0 means HTTP error
419 - 0 means HTTP error
421 - 1 means we pushed and remote head count is unchanged *or*
420 - 1 means we pushed and remote head count is unchanged *or*
422 we have outgoing changesets but refused to push
421 we have outgoing changesets but refused to push
423 - other values as described by addchangegroup()
422 - other values as described by addchangegroup()
424 '''
423 '''
425 if opargs is None:
424 if opargs is None:
426 opargs = {}
425 opargs = {}
427 pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
426 pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
428 **opargs)
427 **opargs)
429 if pushop.remote.local():
428 if pushop.remote.local():
430 missing = (set(pushop.repo.requirements)
429 missing = (set(pushop.repo.requirements)
431 - pushop.remote.local().supported)
430 - pushop.remote.local().supported)
432 if missing:
431 if missing:
433 msg = _("required features are not"
432 msg = _("required features are not"
434 " supported in the destination:"
433 " supported in the destination:"
435 " %s") % (', '.join(sorted(missing)))
434 " %s") % (', '.join(sorted(missing)))
436 raise error.Abort(msg)
435 raise error.Abort(msg)
437
436
438 # there are two ways to push to a remote repo:
437 # there are two ways to push to a remote repo:
439 #
438 #
440 # addchangegroup assumes local user can lock remote
439 # addchangegroup assumes local user can lock remote
441 # repo (local filesystem, old ssh servers).
440 # repo (local filesystem, old ssh servers).
442 #
441 #
443 # unbundle assumes local user cannot lock remote repo (new ssh
442 # unbundle assumes local user cannot lock remote repo (new ssh
444 # servers, http servers).
443 # servers, http servers).
445
444
446 if not pushop.remote.canpush():
445 if not pushop.remote.canpush():
447 raise error.Abort(_("destination does not support push"))
446 raise error.Abort(_("destination does not support push"))
448 # get local lock as we might write phase data
447 # get local lock as we might write phase data
449 localwlock = locallock = None
448 localwlock = locallock = None
450 try:
449 try:
451 # bundle2 push may receive a reply bundle touching bookmarks or other
450 # bundle2 push may receive a reply bundle touching bookmarks or other
452 # things requiring the wlock. Take it now to ensure proper ordering.
451 # things requiring the wlock. Take it now to ensure proper ordering.
453 maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
452 maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
454 if (not _forcebundle1(pushop)) and maypushback:
453 if (not _forcebundle1(pushop)) and maypushback:
455 localwlock = pushop.repo.wlock()
454 localwlock = pushop.repo.wlock()
456 locallock = pushop.repo.lock()
455 locallock = pushop.repo.lock()
457 pushop.locallocked = True
456 pushop.locallocked = True
458 except IOError as err:
457 except IOError as err:
459 pushop.locallocked = False
458 pushop.locallocked = False
460 if err.errno != errno.EACCES:
459 if err.errno != errno.EACCES:
461 raise
460 raise
462 # source repo cannot be locked.
461 # source repo cannot be locked.
463 # We do not abort the push, but just disable the local phase
462 # We do not abort the push, but just disable the local phase
464 # synchronisation.
463 # synchronisation.
465 msg = 'cannot lock source repository: %s\n' % err
464 msg = 'cannot lock source repository: %s\n' % err
466 pushop.ui.debug(msg)
465 pushop.ui.debug(msg)
467 try:
466 try:
468 if pushop.locallocked:
467 if pushop.locallocked:
469 pushop.trmanager = transactionmanager(pushop.repo,
468 pushop.trmanager = transactionmanager(pushop.repo,
470 'push-response',
469 'push-response',
471 pushop.remote.url())
470 pushop.remote.url())
472 pushop.repo.checkpush(pushop)
471 pushop.repo.checkpush(pushop)
473 lock = None
472 lock = None
474 unbundle = pushop.remote.capable('unbundle')
473 unbundle = pushop.remote.capable('unbundle')
475 if not unbundle:
474 if not unbundle:
476 lock = pushop.remote.lock()
475 lock = pushop.remote.lock()
477 try:
476 try:
478 _pushdiscovery(pushop)
477 _pushdiscovery(pushop)
479 if not _forcebundle1(pushop):
478 if not _forcebundle1(pushop):
480 _pushbundle2(pushop)
479 _pushbundle2(pushop)
481 _pushchangeset(pushop)
480 _pushchangeset(pushop)
482 _pushsyncphase(pushop)
481 _pushsyncphase(pushop)
483 _pushobsolete(pushop)
482 _pushobsolete(pushop)
484 _pushbookmark(pushop)
483 _pushbookmark(pushop)
485 finally:
484 finally:
486 if lock is not None:
485 if lock is not None:
487 lock.release()
486 lock.release()
488 if pushop.trmanager:
487 if pushop.trmanager:
489 pushop.trmanager.close()
488 pushop.trmanager.close()
490 finally:
489 finally:
491 if pushop.trmanager:
490 if pushop.trmanager:
492 pushop.trmanager.release()
491 pushop.trmanager.release()
493 if locallock is not None:
492 if locallock is not None:
494 locallock.release()
493 locallock.release()
495 if localwlock is not None:
494 if localwlock is not None:
496 localwlock.release()
495 localwlock.release()
497
496
498 return pushop
497 return pushop
499
498
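A minimal sketch of driving this entry point from a script or extension (the URL is a placeholder):

    from mercurial import exchange, hg

    def pushall(repo, url):
        # open a peer for the destination and push everything; cgresult
        # follows the integer convention documented above
        remote = hg.peer(repo, {}, url)
        pushop = exchange.push(repo, remote)
        return pushop.cgresult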
500 # list of steps to perform discovery before push
499 # list of steps to perform discovery before push
501 pushdiscoveryorder = []
500 pushdiscoveryorder = []
502
501
503 # Mapping between step name and function
502 # Mapping between step name and function
504 #
503 #
505 # This exists to help extensions wrap steps if necessary
504 # This exists to help extensions wrap steps if necessary
506 pushdiscoverymapping = {}
505 pushdiscoverymapping = {}
507
506
508 def pushdiscovery(stepname):
507 def pushdiscovery(stepname):
509 """decorator for function performing discovery before push
508 """decorator for function performing discovery before push
510
509
511 The function is added to the step -> function mapping and appended to the
510 The function is added to the step -> function mapping and appended to the
512 list of steps. Beware that decorated functions will be added in order (this
511 list of steps. Beware that decorated functions will be added in order (this
513 may matter).
512 may matter).
514
513
515 You can only use this decorator for a new step; if you want to wrap a step
514 You can only use this decorator for a new step; if you want to wrap a step
516 from an extension, change the pushdiscoverymapping dictionary directly."""
515 from an extension, change the pushdiscoverymapping dictionary directly."""
517 def dec(func):
516 def dec(func):
518 assert stepname not in pushdiscoverymapping
517 assert stepname not in pushdiscoverymapping
519 pushdiscoverymapping[stepname] = func
518 pushdiscoverymapping[stepname] = func
520 pushdiscoveryorder.append(stepname)
519 pushdiscoveryorder.append(stepname)
521 return func
520 return func
522 return dec
521 return dec
523
522
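For instance, an extension could register an additional discovery step as sketched below (the step name and body are hypothetical):

    from mercurial import exchange

    @exchange.pushdiscovery('example')
    def _pushdiscoveryexample(pushop):
        # runs after the built-in steps, since registration order is
        # definition order
        pushop.ui.debug('example discovery step ran\n')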
524 def _pushdiscovery(pushop):
523 def _pushdiscovery(pushop):
525 """Run all discovery steps"""
524 """Run all discovery steps"""
526 for stepname in pushdiscoveryorder:
525 for stepname in pushdiscoveryorder:
527 step = pushdiscoverymapping[stepname]
526 step = pushdiscoverymapping[stepname]
528 step(pushop)
527 step(pushop)
529
528
530 @pushdiscovery('changeset')
529 @pushdiscovery('changeset')
531 def _pushdiscoverychangeset(pushop):
530 def _pushdiscoverychangeset(pushop):
532 """discover the changeset that need to be pushed"""
531 """discover the changeset that need to be pushed"""
533 fci = discovery.findcommonincoming
532 fci = discovery.findcommonincoming
534 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
533 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
535 common, inc, remoteheads = commoninc
534 common, inc, remoteheads = commoninc
536 fco = discovery.findcommonoutgoing
535 fco = discovery.findcommonoutgoing
537 outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
536 outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
538 commoninc=commoninc, force=pushop.force)
537 commoninc=commoninc, force=pushop.force)
539 pushop.outgoing = outgoing
538 pushop.outgoing = outgoing
540 pushop.remoteheads = remoteheads
539 pushop.remoteheads = remoteheads
541 pushop.incoming = inc
540 pushop.incoming = inc
542
541
543 @pushdiscovery('phase')
542 @pushdiscovery('phase')
544 def _pushdiscoveryphase(pushop):
543 def _pushdiscoveryphase(pushop):
545 """discover the phase that needs to be pushed
544 """discover the phase that needs to be pushed
546
545
547 (computed for both success and failure case for changesets push)"""
546 (computed for both success and failure case for changesets push)"""
548 outgoing = pushop.outgoing
547 outgoing = pushop.outgoing
549 unfi = pushop.repo.unfiltered()
548 unfi = pushop.repo.unfiltered()
550 remotephases = pushop.remote.listkeys('phases')
549 remotephases = pushop.remote.listkeys('phases')
551 publishing = remotephases.get('publishing', False)
550 publishing = remotephases.get('publishing', False)
552 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
551 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
553 and remotephases # server supports phases
552 and remotephases # server supports phases
554 and not pushop.outgoing.missing # no changesets to be pushed
553 and not pushop.outgoing.missing # no changesets to be pushed
555 and publishing):
554 and publishing):
556 # When:
555 # When:
557 # - this is a subrepo push
556 # - this is a subrepo push
558 # - and the remote supports phases
557 # - and the remote supports phases
559 # - and no changesets are to be pushed
558 # - and no changesets are to be pushed
560 # - and remote is publishing
559 # - and remote is publishing
561 # We may be in issue 3871 case!
560 # We may be in issue 3871 case!
562 # We drop the possible phase synchronisation done by
561 # We drop the possible phase synchronisation done by
563 # courtesy to publish changesets possibly locally draft
562 # courtesy to publish changesets possibly locally draft
564 # on the remote.
563 # on the remote.
565 remotephases = {'publishing': 'True'}
564 remotephases = {'publishing': 'True'}
566 ana = phases.analyzeremotephases(pushop.repo,
565 ana = phases.analyzeremotephases(pushop.repo,
567 pushop.fallbackheads,
566 pushop.fallbackheads,
568 remotephases)
567 remotephases)
569 pheads, droots = ana
568 pheads, droots = ana
570 extracond = ''
569 extracond = ''
571 if not publishing:
570 if not publishing:
572 extracond = ' and public()'
571 extracond = ' and public()'
573 revset = 'heads((%%ln::%%ln) %s)' % extracond
572 revset = 'heads((%%ln::%%ln) %s)' % extracond
574 # Get the list of all revs that are draft on the remote but public here.
573 # Get the list of all revs that are draft on the remote but public here.
575 # XXX Beware that revset break if droots is not strictly
574 # XXX Beware that revset break if droots is not strictly
576 # XXX root we may want to ensure it is but it is costly
575 # XXX root we may want to ensure it is but it is costly
577 fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
576 fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
578 if not outgoing.missing:
577 if not outgoing.missing:
579 future = fallback
578 future = fallback
580 else:
579 else:
581 # add the changesets we are going to push as draft
580 # add the changesets we are going to push as draft
582 #
581 #
583 # should not be necessary for a publishing server, but because of an
582 # should not be necessary for a publishing server, but because of an
584 # issue fixed in xxxxx we have to do it anyway.
583 # issue fixed in xxxxx we have to do it anyway.
585 fdroots = list(unfi.set('roots(%ln + %ln::)',
584 fdroots = list(unfi.set('roots(%ln + %ln::)',
586 outgoing.missing, droots))
585 outgoing.missing, droots))
587 fdroots = [f.node() for f in fdroots]
586 fdroots = [f.node() for f in fdroots]
588 future = list(unfi.set(revset, fdroots, pushop.futureheads))
587 future = list(unfi.set(revset, fdroots, pushop.futureheads))
589 pushop.outdatedphases = future
588 pushop.outdatedphases = future
590 pushop.fallbackoutdatedphases = fallback
589 pushop.fallbackoutdatedphases = fallback
591
590
592 @pushdiscovery('obsmarker')
591 @pushdiscovery('obsmarker')
593 def _pushdiscoveryobsmarkers(pushop):
592 def _pushdiscoveryobsmarkers(pushop):
594 if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
593 if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
595 and pushop.repo.obsstore
594 and pushop.repo.obsstore
596 and 'obsolete' in pushop.remote.listkeys('namespaces')):
595 and 'obsolete' in pushop.remote.listkeys('namespaces')):
597 repo = pushop.repo
596 repo = pushop.repo
598 # very naive computation, which can be quite expensive on a big repo.
597 # very naive computation, which can be quite expensive on a big repo.
599 # However: evolution is currently slow on them anyway.
598 # However: evolution is currently slow on them anyway.
600 nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
599 nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
601 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
600 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
602
601
603 @pushdiscovery('bookmarks')
602 @pushdiscovery('bookmarks')
604 def _pushdiscoverybookmarks(pushop):
603 def _pushdiscoverybookmarks(pushop):
605 ui = pushop.ui
604 ui = pushop.ui
606 repo = pushop.repo.unfiltered()
605 repo = pushop.repo.unfiltered()
607 remote = pushop.remote
606 remote = pushop.remote
608 ui.debug("checking for updated bookmarks\n")
607 ui.debug("checking for updated bookmarks\n")
609 ancestors = ()
608 ancestors = ()
610 if pushop.revs:
609 if pushop.revs:
611 revnums = map(repo.changelog.rev, pushop.revs)
610 revnums = map(repo.changelog.rev, pushop.revs)
612 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
611 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
613 remotebookmark = remote.listkeys('bookmarks')
612 remotebookmark = remote.listkeys('bookmarks')
614
613
615 explicit = set([repo._bookmarks.expandname(bookmark)
614 explicit = set([repo._bookmarks.expandname(bookmark)
616 for bookmark in pushop.bookmarks])
615 for bookmark in pushop.bookmarks])
617
616
618 remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
617 remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
619 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
618 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
620
619
621 def safehex(x):
620 def safehex(x):
622 if x is None:
621 if x is None:
623 return x
622 return x
624 return hex(x)
623 return hex(x)
625
624
626 def hexifycompbookmarks(bookmarks):
625 def hexifycompbookmarks(bookmarks):
627 for b, scid, dcid in bookmarks:
626 for b, scid, dcid in bookmarks:
628 yield b, safehex(scid), safehex(dcid)
627 yield b, safehex(scid), safehex(dcid)
629
628
630 comp = [hexifycompbookmarks(marks) for marks in comp]
629 comp = [hexifycompbookmarks(marks) for marks in comp]
631 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
630 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
632
631
633 for b, scid, dcid in advsrc:
632 for b, scid, dcid in advsrc:
634 if b in explicit:
633 if b in explicit:
635 explicit.remove(b)
634 explicit.remove(b)
636 if not ancestors or repo[scid].rev() in ancestors:
635 if not ancestors or repo[scid].rev() in ancestors:
637 pushop.outbookmarks.append((b, dcid, scid))
636 pushop.outbookmarks.append((b, dcid, scid))
638 # search for added bookmarks
637 # search for added bookmarks
639 for b, scid, dcid in addsrc:
638 for b, scid, dcid in addsrc:
640 if b in explicit:
639 if b in explicit:
641 explicit.remove(b)
640 explicit.remove(b)
642 pushop.outbookmarks.append((b, '', scid))
641 pushop.outbookmarks.append((b, '', scid))
643 # search for overwritten bookmarks
642 # search for overwritten bookmarks
644 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
643 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
645 if b in explicit:
644 if b in explicit:
646 explicit.remove(b)
645 explicit.remove(b)
647 pushop.outbookmarks.append((b, dcid, scid))
646 pushop.outbookmarks.append((b, dcid, scid))
648 # search for bookmarks to delete
647 # search for bookmarks to delete
649 for b, scid, dcid in adddst:
648 for b, scid, dcid in adddst:
650 if b in explicit:
649 if b in explicit:
651 explicit.remove(b)
650 explicit.remove(b)
652 # treat as "deleted locally"
651 # treat as "deleted locally"
653 pushop.outbookmarks.append((b, dcid, ''))
652 pushop.outbookmarks.append((b, dcid, ''))
654 # identical bookmarks shouldn't get reported
653 # identical bookmarks shouldn't get reported
655 for b, scid, dcid in same:
654 for b, scid, dcid in same:
656 if b in explicit:
655 if b in explicit:
657 explicit.remove(b)
656 explicit.remove(b)
658
657
659 if explicit:
658 if explicit:
660 explicit = sorted(explicit)
659 explicit = sorted(explicit)
661 # we should probably list all of them
660 # we should probably list all of them
662 ui.warn(_('bookmark %s does not exist on the local '
661 ui.warn(_('bookmark %s does not exist on the local '
663 'or remote repository!\n') % explicit[0])
662 'or remote repository!\n') % explicit[0])
664 pushop.bkresult = 2
663 pushop.bkresult = 2
665
664
666 pushop.outbookmarks.sort()
665 pushop.outbookmarks.sort()
667
666
668 def _pushcheckoutgoing(pushop):
667 def _pushcheckoutgoing(pushop):
669 outgoing = pushop.outgoing
668 outgoing = pushop.outgoing
670 unfi = pushop.repo.unfiltered()
669 unfi = pushop.repo.unfiltered()
671 if not outgoing.missing:
670 if not outgoing.missing:
672 # nothing to push
671 # nothing to push
673 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
672 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
674 return False
673 return False
675 # something to push
674 # something to push
676 if not pushop.force:
675 if not pushop.force:
677 # if repo.obsstore == False --> no obsolete
676 # if repo.obsstore == False --> no obsolete
678 # then, save the iteration
677 # then, save the iteration
679 if unfi.obsstore:
678 if unfi.obsstore:
680 # these messages are here for 80 char limit reasons
679 # these messages are here for 80 char limit reasons
681 mso = _("push includes obsolete changeset: %s!")
680 mso = _("push includes obsolete changeset: %s!")
682 mst = {"unstable": _("push includes unstable changeset: %s!"),
681 mst = {"unstable": _("push includes unstable changeset: %s!"),
683 "bumped": _("push includes bumped changeset: %s!"),
682 "bumped": _("push includes bumped changeset: %s!"),
684 "divergent": _("push includes divergent changeset: %s!")}
683 "divergent": _("push includes divergent changeset: %s!")}
685 # If we are to push and there is at least one
684 # If we are to push and there is at least one
686 # obsolete or unstable changeset in missing, at
685 # obsolete or unstable changeset in missing, at
687 # least one of the missing heads will be obsolete or
686 # least one of the missing heads will be obsolete or
688 # unstable. So checking heads only is ok.
687 # unstable. So checking heads only is ok.
689 for node in outgoing.missingheads:
688 for node in outgoing.missingheads:
690 ctx = unfi[node]
689 ctx = unfi[node]
691 if ctx.obsolete():
690 if ctx.obsolete():
692 raise error.Abort(mso % ctx)
691 raise error.Abort(mso % ctx)
693 elif ctx.troubled():
692 elif ctx.troubled():
694 raise error.Abort(mst[ctx.troubles()[0]] % ctx)
693 raise error.Abort(mst[ctx.troubles()[0]] % ctx)
695
694
696 discovery.checkheads(pushop)
695 discovery.checkheads(pushop)
697 return True
696 return True
698
697
699 # List of names of steps to perform for an outgoing bundle2, order matters.
698 # List of names of steps to perform for an outgoing bundle2, order matters.
700 b2partsgenorder = []
699 b2partsgenorder = []
701
700
702 # Mapping between step name and function
701 # Mapping between step name and function
703 #
702 #
704 # This exists to help extensions wrap steps if necessary
703 # This exists to help extensions wrap steps if necessary
705 b2partsgenmapping = {}
704 b2partsgenmapping = {}
706
705
707 def b2partsgenerator(stepname, idx=None):
706 def b2partsgenerator(stepname, idx=None):
708 """decorator for function generating bundle2 part
707 """decorator for function generating bundle2 part
709
708
710 The function is added to the step -> function mapping and appended to the
709 The function is added to the step -> function mapping and appended to the
711 list of steps. Beware that decorated functions will be added in order
710 list of steps. Beware that decorated functions will be added in order
712 (this may matter).
711 (this may matter).
713
712
714 You can only use this decorator for new steps; if you want to wrap a step
713 You can only use this decorator for new steps; if you want to wrap a step
715 from an extension, change the b2partsgenmapping dictionary directly."""
714 from an extension, change the b2partsgenmapping dictionary directly."""
716 def dec(func):
715 def dec(func):
717 assert stepname not in b2partsgenmapping
716 assert stepname not in b2partsgenmapping
718 b2partsgenmapping[stepname] = func
717 b2partsgenmapping[stepname] = func
719 if idx is None:
718 if idx is None:
720 b2partsgenorder.append(stepname)
719 b2partsgenorder.append(stepname)
721 else:
720 else:
722 b2partsgenorder.insert(idx, stepname)
721 b2partsgenorder.insert(idx, stepname)
723 return func
722 return func
724 return dec
723 return dec
725
724
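Similarly, a hypothetical extension part generator registered through this decorator could look like:

    from mercurial import exchange

    @exchange.b2partsgenerator('example')
    def _pushb2example(pushop, bundler):
        # follow the stepsdone convention used by the built-in generators
        if 'example' in pushop.stepsdone:
            return
        pushop.stepsdone.add('example')
        # advisory part: receivers that do not know it may ignore it
        bundler.newpart('x-example:data', data='hello', mandatory=False)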
726 def _pushb2ctxcheckheads(pushop, bundler):
725 def _pushb2ctxcheckheads(pushop, bundler):
727 """Generate race condition checking parts
726 """Generate race condition checking parts
728
727
729 Exists as an independent function to aid extensions
728 Exists as an independent function to aid extensions
730 """
729 """
731 if not pushop.force:
730 if not pushop.force:
732 bundler.newpart('check:heads', data=iter(pushop.remoteheads))
731 bundler.newpart('check:heads', data=iter(pushop.remoteheads))

@b2partsgenerator('changeset')
def _pushb2ctx(pushop, bundler):
    """handle changegroup push through bundle2

    addchangegroup result is stored in the ``pushop.cgresult`` attribute.
    """
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    # Send known heads to the server for race detection.
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop)

    _pushb2ctxcheckheads(pushop, bundler)

    b2caps = bundle2.bundle2caps(pushop.remote)
    version = '01'
    cgversions = b2caps.get('changegroup')
    if cgversions: # 3.1 and 3.2 ship with an empty value
        cgversions = [v for v in cgversions
                      if v in changegroup.supportedoutgoingversions(
                          pushop.repo)]
        if not cgversions:
            raise ValueError(_('no common changegroup version'))
        version = max(cgversions)
    cg = changegroup.getlocalchangegroupraw(pushop.repo, 'push',
                                            pushop.outgoing,
                                            version=version)
    cgpart = bundler.newpart('changegroup', data=cg)
    if cgversions:
        cgpart.addparam('version', version)
    if 'treemanifest' in pushop.repo.requirements:
        cgpart.addparam('treemanifest', '1')
    def handlereply(op):
        """extract addchangegroup returns from server reply"""
        cgreplies = op.records.getreplies(cgpart.id)
        assert len(cgreplies['changegroup']) == 1
        pushop.cgresult = cgreplies['changegroup'][0]['return']
    return handlereply

@b2partsgenerator('phase')
def _pushb2phases(pushop, bundler):
    """handle phase push through bundle2"""
    if 'phases' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'pushkey' not in b2caps:
        return
    pushop.stepsdone.add('phases')
    part2node = []

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, node in part2node:
            if partid == targetid:
                raise error.Abort(_('updating %s to public failed') % node)

    enc = pushkey.encode
    for newremotehead in pushop.outdatedphases:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('phases'))
        part.addparam('key', enc(newremotehead.hex()))
        part.addparam('old', enc(str(phases.draft)))
        part.addparam('new', enc(str(phases.public)))
        part2node.append((part.id, newremotehead))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        for partid, node in part2node:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            msg = None
            if not results:
                msg = _('server ignored update of %s to public!\n') % node
            elif not int(results[0]['return']):
                msg = _('updating %s to public failed!\n') % node
            if msg is not None:
                pushop.ui.warn(msg)
    return handlereply

@b2partsgenerator('obsmarkers')
def _pushb2obsmarkers(pushop, bundler):
    if 'obsmarkers' in pushop.stepsdone:
        return
    remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
    if obsolete.commonversion(remoteversions) is None:
        return
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        markers = sorted(pushop.outobsmarkers)
        buildobsmarkerspart(bundler, markers)

@b2partsgenerator('bookmarks')
def _pushb2bookmarks(pushop, bundler):
    """handle bookmark push through bundle2"""
    if 'bookmarks' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'pushkey' not in b2caps:
        return
    pushop.stepsdone.add('bookmarks')
    part2book = []
    enc = pushkey.encode

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, book, action in part2book:
            if partid == targetid:
                raise error.Abort(bookmsgmap[action][1].rstrip() % book)
        # we should not be called for parts we did not generate
        assert False

    for book, old, new in pushop.outbookmarks:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('bookmarks'))
        part.addparam('key', enc(book))
        part.addparam('old', enc(old))
        part.addparam('new', enc(new))
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        part2book.append((part.id, book, action))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        ui = pushop.ui
        for partid, book, action in part2book:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            if not results:
                pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
            else:
                ret = int(results[0]['return'])
                if ret:
                    ui.status(bookmsgmap[action][0] % book)
                else:
                    ui.warn(bookmsgmap[action][1] % book)
                    if pushop.bkresult is not None:
                        pushop.bkresult = 1
    return handlereply


def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup but this will
    evolve in the future."""
    bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
    pushback = (pushop.trmanager
                and pushop.ui.configbool('experimental', 'bundle2.pushback'))

    # create reply capability
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
                                                      allowpushback=pushback))
    bundler.newpart('replycaps', data=capsblob)
    replyhandlers = []
    for partgenname in b2partsgenorder:
        partgen = b2partsgenmapping[partgenname]
        ret = partgen(pushop, bundler)
        if callable(ret):
            replyhandlers.append(ret)
    # do not push if nothing to push
    if bundler.nbparts <= 1:
        return
    stream = util.chunkbuffer(bundler.getchunks())
    try:
        try:
            reply = pushop.remote.unbundle(
                stream, ['force'], pushop.remote.url())
        except error.BundleValueError as exc:
            raise error.Abort(_('missing support for %s') % exc)
        try:
            trgetter = None
            if pushback:
                trgetter = pushop.trmanager.transaction
            op = bundle2.processbundle(pushop.repo, reply, trgetter)
        except error.BundleValueError as exc:
            raise error.Abort(_('missing support for %s') % exc)
        except bundle2.AbortFromPart as exc:
            pushop.ui.status(_('remote: %s\n') % exc)
            if exc.hint is not None:
                pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
            raise error.Abort(_('push failed on remote'))
    except error.PushkeyFailed as exc:
        partid = int(exc.partid)
        if partid not in pushop.pkfailcb:
            raise
        pushop.pkfailcb[partid](pushop, exc)
    for rephand in replyhandlers:
        rephand(op)

def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop)
    outgoing = pushop.outgoing
    unbundle = pushop.remote.capable('unbundle')
    # create a changegroup from local
    if pushop.revs is None and not (outgoing.excluded
                                    or pushop.repo.changelog.filteredrevs):
        # push everything,
        # use the fast path, no race possible on push
        bundler = changegroup.cg1packer(pushop.repo)
        cg = changegroup.getsubset(pushop.repo,
                                   outgoing,
                                   bundler,
                                   'push',
                                   fastpath=True)
    else:
        cg = changegroup.getchangegroup(pushop.repo, 'push', outgoing)

    # apply changegroup to remote
    if unbundle:
        # local repo finds heads on server, finds out what
        # revs it must push. once revs transferred, if server
        # finds it has different heads (someone else won
        # commit/push race), server aborts.
        if pushop.force:
            remoteheads = ['force']
        else:
            remoteheads = pushop.remoteheads
        # ssh: return remote's addchangegroup()
        # http: return remote's addchangegroup() or 0 for error
        pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
                                                 pushop.repo.url())
    else:
        # we return an integer indicating remote head count
        # change
        pushop.cgresult = pushop.remote.addchangegroup(cg, 'push',
                                                       pushop.repo.url())

def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = pushop.remote.listkeys('phases')
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and pushop.cgresult is None # nothing was pushed
        and remotephases.get('publishing', False)):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changeset was pushed
        # - and the remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    if not remotephases: # old server or public only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads,
                                         remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get('publishing', False):
            _localphasemove(pushop, cheads)
        else: # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        if pushop.cgresult:
            if 'phases' in pushop.stepsdone:
                # phases already pushed through bundle2
                return
            outdated = pushop.outdatedphases
        else:
            outdated = pushop.fallbackoutdatedphases

        pushop.stepsdone.add('phases')

        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        # fallback to independent pushkey command
        for newremotehead in outdated:
            r = pushop.remote.pushkey('phases',
                                      newremotehead.hex(),
                                      str(phases.draft),
                                      str(phases.public))
            if not r:
                pushop.ui.warn(_('updating %s to public failed!\n')
                               % newremotehead)

def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.trmanager:
        phases.advanceboundary(pushop.repo,
                               pushop.trmanager.transaction(),
                               phase,
                               nodes)
    else:
        # repo is not locked, do not change any phases!
        # Informs the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(_('cannot lock source repo, skipping '
                               'local %s phase update\n') % phasestr)

def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if 'obsmarkers' in pushop.stepsdone:
        return
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        pushop.ui.debug('try to push obsolete markers to remote\n')
        rslts = []
        remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add('bookmarks')
    ui = pushop.ui
    remote = pushop.remote

    for b, old, new in pushop.outbookmarks:
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        if remote.pushkey('bookmarks', b, old, new):
            ui.status(bookmsgmap[action][0] % b)
        else:
            ui.warn(bookmsgmap[action][1] % b)
            # discovery can have set the value from an invalid entry
            if pushop.bkresult is not None:
                pushop.bkresult = 1

class pulloperation(object):
    """An object that represents a single pull operation.

    Its purpose is to carry pull-related state and very common operations.

    A new instance should be created at the beginning of each pull and
    discarded afterward.
    """

    def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
                 remotebookmarks=None, streamclonerequested=None):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # revision we try to pull (None is "all")
        self.heads = heads
        # bookmarks pulled explicitly
        self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
                                  for bookmark in bookmarks]
        # do we force pull?
        self.force = force
        # whether a streaming clone was requested
        self.streamclonerequested = streamclonerequested
        # transaction manager
        self.trmanager = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # remote bookmarks data
        self.remotebookmarks = remotebookmarks
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps already done
        self.stepsdone = set()
        # Whether we attempted a clone from pre-generated bundles.
        self.clonebundleattempted = False

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    @util.propertycache
    def canusebundle2(self):
        return not _forcebundle1(self)

    @util.propertycache
    def remotebundle2caps(self):
        return bundle2.bundle2caps(self.remote)

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()
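
# Illustrative sketch (hedged; not taken from this module): callers normally
# obtain a pulloperation through pull() below rather than constructing one
# directly, e.g. from an extension or script (URL assumed for illustration):
#
#     from mercurial import hg, exchange
#     other = hg.peer(repo, {}, 'https://example.com/repo')
#     pullop = exchange.pull(repo, other)
#     if pullop.cgresult == 0:
#         repo.ui.status('no changesets were added\n')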

class transactionmanager(object):
    """An object to manage the life cycle of a transaction

    It creates the transaction on demand and calls the appropriate hooks when
    closing the transaction."""
    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs['source'] = self.source
            self._tr.hookargs['url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()
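
# A minimal lifecycle sketch (hedged; this mirrors how pull() below drives
# the manager, with names assumed for illustration):
#
#     trmanager = transactionmanager(repo, 'pull', remote.url())
#     try:
#         tr = trmanager.transaction()  # opened lazily, reused on later calls
#         # ... apply incoming data under ``tr`` ...
#         trmanager.close()             # commits the transaction, if any
#     finally:
#         trmanager.release()           # aborts it if still open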

def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
         streamclonerequested=None):
    """Fetch repository data from a remote.

    This is the main function used to retrieve data from a remote repository.

    ``repo`` is the local repository to clone into.
    ``remote`` is a peer instance.
    ``heads`` is an iterable of revisions we want to pull. ``None`` (the
    default) means to pull everything from the remote.
    ``bookmarks`` is an iterable of bookmarks requested to be pulled. By
    default, all remote bookmarks are pulled.
    ``opargs`` are additional keyword arguments to pass to ``pulloperation``
    initialization.
    ``streamclonerequested`` is a boolean indicating whether a "streaming
    clone" is requested. A "streaming clone" is essentially a raw file copy
    of revlogs from the server. This only works when the local repository is
    empty. The default value of ``None`` means to respect the server
    configuration for preferring stream clones.

    Returns the ``pulloperation`` created for this pull.
    """
    if opargs is None:
        opargs = {}
    pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
                           streamclonerequested=streamclonerequested, **opargs)
    if pullop.remote.local():
        missing = set(pullop.remote.requirements) - pullop.repo.supported
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    wlock = lock = None
    try:
        wlock = pullop.repo.wlock()
        lock = pullop.repo.lock()
        pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
        streamclone.maybeperformlegacystreamclone(pullop)
        # This should ideally be in _pullbundle2(). However, it needs to run
        # before discovery to avoid extra work.
        _maybeapplyclonebundle(pullop)
        _pulldiscovery(pullop)
        if pullop.canusebundle2:
            _pullbundle2(pullop)
        _pullchangeset(pullop)
        _pullphase(pullop)
        _pullbookmarks(pullop)
        _pullobsolete(pullop)
        pullop.trmanager.close()
    finally:
        lockmod.release(pullop.trmanager, lock, wlock)

    return pullop

# list of steps to perform discovery before pull
pulldiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pulldiscoverymapping = {}

def pulldiscovery(stepname):
    """decorator for functions performing discovery before pull

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pulldiscoverymapping dictionary directly."""
    def dec(func):
        assert stepname not in pulldiscoverymapping
        pulldiscoverymapping[stepname] = func
        pulldiscoveryorder.append(stepname)
        return func
    return dec
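
# Registration sketch (hedged: hypothetical step name, kept in a comment so
# nothing is registered at import time):
#
#     @pulldiscovery('example-discovery')
#     def _pulldiscoveryexample(pullop):
#         pullop.repo.ui.debug('running example discovery\n')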

def _pulldiscovery(pullop):
    """Run all discovery steps"""
    for stepname in pulldiscoveryorder:
        step = pulldiscoverymapping[stepname]
        step(pullop)

@pulldiscovery('b1:bookmarks')
def _pullbookmarkbundle1(pullop):
    """fetch bookmark data in bundle1 case

    If not using bundle2, we have to fetch bookmarks before changeset
    discovery to reduce the chance and impact of race conditions."""
    if pullop.remotebookmarks is not None:
        return
    if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
        # all known bundle2 servers now support listkeys, but let's be nice
        # with new implementations.
        return
    pullop.remotebookmarks = pullop.remote.listkeys('bookmarks')


@pulldiscovery('changegroup')
def _pulldiscoverychangegroup(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; will change to handle all
    discovery at some point."""
    tmp = discovery.findcommonincoming(pullop.repo,
                                       pullop.remote,
                                       heads=pullop.heads,
                                       force=pullop.force)
    common, fetch, rheads = tmp
    nm = pullop.repo.unfiltered().changelog.nodemap
    if fetch and rheads:
        # If a remote head is filtered locally, let's drop it from the
        # unknown remote heads and put it back in common.
        #
        # This is a hackish solution to catch most of the "common but locally
        # hidden" situations. We do not perform discovery on the unfiltered
        # repository because it would end up doing a pathological amount of
        # round trips for a huge amount of changesets we do not care about.
        #
        # If a set of such "common but filtered" changesets exists on the
        # server but does not include a remote head, we will not be able to
        # detect it.
        scommon = set(common)
        filteredrheads = []
        for n in rheads:
            if n in nm:
                if n not in scommon:
                    common.append(n)
            else:
                filteredrheads.append(n)
        if not filteredrheads:
            fetch = []
        rheads = filteredrheads
    pullop.common = common
    pullop.fetch = fetch
    pullop.rheads = rheads

def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data are changegroup."""
    kwargs = {'bundlecaps': caps20to10(pullop.repo)}

    streaming, streamreqs = streamclone.canperformstreamclone(pullop)

    # pulling changegroup
    pullop.stepsdone.add('changegroup')

    kwargs['common'] = pullop.common
    kwargs['heads'] = pullop.heads or pullop.rheads
    kwargs['cg'] = pullop.fetch
    if 'listkeys' in pullop.remotebundle2caps:
        kwargs['listkeys'] = ['phases']
        if pullop.remotebookmarks is None:
            # make sure to always include bookmark data when migrating
            # `hg incoming --bundle` to using this function.
            kwargs['listkeys'].append('bookmarks')

    # If this is a full pull / clone and the server supports the clone bundles
    # feature, tell the server whether we attempted a clone bundle. The
    # presence of this flag indicates the client supports clone bundles. This
    # will enable the server to treat clients that support clone bundles
    # differently from those that don't.
    if (pullop.remote.capable('clonebundles')
        and pullop.heads is None and list(pullop.common) == [nullid]):
        kwargs['cbattempted'] = pullop.clonebundleattempted

    if streaming:
        pullop.repo.ui.status(_('streaming all changes\n'))
    elif not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [nullid]:
            pullop.repo.ui.status(_("requesting all changes\n"))
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
        if obsolete.commonversion(remoteversions) is not None:
            kwargs['obsmarkers'] = True
            pullop.stepsdone.add('obsmarkers')
    _pullbundle2extraprepare(pullop, kwargs)
    bundle = pullop.remote.getbundle('pull', **kwargs)
    try:
        op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
    except bundle2.AbortFromPart as exc:
        pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
        raise error.Abort(_('pull failed on remote'), hint=exc.hint)
    except error.BundleValueError as exc:
        raise error.Abort(_('missing support for %s') % exc)

    if pullop.fetch:
        results = [cg['return'] for cg in op.records['changegroup']]
        pullop.cgresult = changegroup.combineresults(results)

    # processing phases change
    for namespace, value in op.records['listkeys']:
        if namespace == 'phases':
            _pullapplyphases(pullop, value)

    # processing bookmark update
    for namespace, value in op.records['listkeys']:
        if namespace == 'bookmarks':
            pullop.remotebookmarks = value

    # bookmark data were either already there or pulled in the bundle
    if pullop.remotebookmarks is not None:
        _pullbookmarks(pullop)

def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""
    pass
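
# Sketch of the intended extension usage (assumed extension-side code and a
# hypothetical argument name, shown as a comment):
#
#     from mercurial import exchange, extensions
#     def _extraprepare(orig, pullop, kwargs):
#         kwargs['example-arg'] = True  # extra getbundle argument
#         return orig(pullop, kwargs)
#     extensions.wrapfunction(exchange, '_pullbundle2extraprepare',
#                             _extraprepare)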

def _pullchangeset(pullop):
    """pull changesets from unbundle into the local repo"""
    # We delay opening the transaction as late as possible so we don't open
    # a transaction for nothing, which would break future useful rollback
    # calls.
    if 'changegroup' in pullop.stepsdone:
        return
    pullop.stepsdone.add('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
        return
    pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_("requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable('getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle('pull', common=pullop.common,
                                     heads=pullop.heads or pullop.rheads)
    elif pullop.heads is None:
        cg = pullop.remote.changegroup(pullop.fetch, 'pull')
    elif not pullop.remote.capable('changegroupsubset'):
        raise error.Abort(_("partial pull cannot be done because "
                            "other repository doesn't support "
                            "changegroupsubset."))
    else:
        cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
    pullop.cgresult = cg.apply(pullop.repo, 'pull', pullop.remote.url())

def _pullphase(pullop):
    # Get remote phases data from remote
    if 'phases' in pullop.stepsdone:
        return
    remotephases = pullop.remote.listkeys('phases')
    _pullapplyphases(pullop, remotephases)

def _pullapplyphases(pullop, remotephases):
    """apply phase movement from observed remote state"""
    if 'phases' in pullop.stepsdone:
        return
    pullop.stepsdone.add('phases')
    publishing = bool(remotephases.get('publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(pullop.repo,
                                                 pullop.pulledsubset,
                                                 remotephases)
        dheads = pullop.pulledsubset
    else:
        # Remote is old or publishing all common changesets
        # should be seen as public
        pheads = pullop.pulledsubset
        dheads = []
    unfi = pullop.repo.unfiltered()
    phase = unfi._phasecache.phase
    rev = unfi.changelog.nodemap.get
    public = phases.public
    draft = phases.draft

    # exclude changesets already public locally and update the others
    pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
    if pheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, public, pheads)

    # exclude changesets already draft locally and update the others
    dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
    if dheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, draft, dheads)

def _pullbookmarks(pullop):
    """process the remote bookmark information to update the local one"""
    if 'bookmarks' in pullop.stepsdone:
        return
    pullop.stepsdone.add('bookmarks')
    repo = pullop.repo
    remotebookmarks = pullop.remotebookmarks
    remotebookmarks = bookmod.unhexlifybookmarks(remotebookmarks)
    bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
                             pullop.remote.url(),
                             pullop.gettransaction,
                             explicit=pullop.explicitbookmarks)

def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    The `gettransaction` is a function that returns the pull transaction,
    creating one if necessary. We return the transaction to inform the
    calling code that a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    if 'obsmarkers' in pullop.stepsdone:
        return
    pullop.stepsdone.add('obsmarkers')
    tr = None
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        pullop.repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = pullop.remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = pullop.gettransaction()
            markers = []
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = util.b85decode(remoteobs[key])
                    version, newmarks = obsolete._readmarkers(data)
                    markers += newmarks
            if markers:
                pullop.repo.obsstore.add(tr, markers)
            pullop.repo.invalidatevolatilesets()
    return tr

def caps20to10(repo):
    """return a set with appropriate options to use bundle20 during getbundle"""
    caps = set(['HG20'])
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
    caps.add('bundle2=' + urlreq.quote(capsblob))
    return caps

# List of names of steps to perform for a bundle2 for getbundle, order matters.
getbundle2partsorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
getbundle2partsmapping = {}

def getbundle2partsgenerator(stepname, idx=None):
    """decorator for functions generating bundle2 parts for getbundle

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, modify the getbundle2partsmapping dictionary
    directly."""
    def dec(func):
        assert stepname not in getbundle2partsmapping
        getbundle2partsmapping[stepname] = func
        if idx is None:
            getbundle2partsorder.append(stepname)
        else:
            getbundle2partsorder.insert(idx, stepname)
        return func
    return dec
1552
1551
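# Editor's sketch (not in the original source): how an extension could
# register an extra part generator with the decorator above. The step name
# 'myext-data', the 'myextdata' request argument and the payload are all
# hypothetical.
#
#     @getbundle2partsgenerator('myext-data')
#     def _getbundlemyextpart(bundler, repo, source, bundlecaps=None,
#                             b2caps=None, **kwargs):
#         """add a hypothetical part carrying extension data"""
#         if kwargs.get('myextdata', False):
#             bundler.newpart('myext-data', data='payload bytes')
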
def bundle2requested(bundlecaps):
    if bundlecaps is not None:
        return any(cap.startswith('HG2') for cap in bundlecaps)
    return False

def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
                    **kwargs):
    """Return chunks constituting a bundle's raw data.

    Could be a bundle HG10 or a bundle HG20 depending on the bundlecaps
    passed.

    Returns an iterator over raw chunks (of varying sizes).
    """
    usebundle2 = bundle2requested(bundlecaps)
    # bundle10 case
    if not usebundle2:
        if bundlecaps and not kwargs.get('cg', True):
            raise ValueError(_('request for bundle10 must include changegroup'))

        if kwargs:
            raise ValueError(_('unsupported getbundle arguments: %s')
                             % ', '.join(sorted(kwargs.keys())))
        outgoing = _computeoutgoing(repo, heads, common)
        bundler = changegroup.getbundler('01', repo)
        return changegroup.getsubsetraw(repo, outgoing, bundler, source)

    # bundle20 case
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith('bundle2='):
            blob = urlreq.unquote(bcaps[len('bundle2='):])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)

    kwargs['heads'] = heads
    kwargs['common'] = common

    for name in getbundle2partsorder:
        func = getbundle2partsmapping[name]
        func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
             **kwargs)

    return bundler.getchunks()

@getbundle2partsgenerator('changegroup')
def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
                              b2caps=None, heads=None, common=None, **kwargs):
    """add a changegroup part to the requested bundle"""
    cg = None
    if kwargs.get('cg', True):
        # build changegroup bundle here.
        version = '01'
        cgversions = b2caps.get('changegroup')
        if cgversions:  # 3.1 and 3.2 ship with an empty value
            cgversions = [v for v in cgversions
                          if v in changegroup.supportedoutgoingversions(repo)]
            if not cgversions:
                raise ValueError(_('no common changegroup version'))
            version = max(cgversions)
        outgoing = _computeoutgoing(repo, heads, common)
        cg = changegroup.getlocalchangegroupraw(repo, source, outgoing,
                                                version=version)

    if cg:
        part = bundler.newpart('changegroup', data=cg)
        if cgversions:
            part.addparam('version', version)
        part.addparam('nbchanges', str(len(outgoing.missing)), mandatory=False)
        if 'treemanifest' in repo.requirements:
            part.addparam('treemanifest', '1')

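# Editor's sketch (not in the original source): the version negotiation above
# keeps the highest changegroup version both sides understand. With
# hypothetical capability sets:
#
#     server = changegroup.supportedoutgoingversions(repo)  # e.g. {'01', '02'}
#     client = ['01', '02', '03']         # from b2caps.get('changegroup')
#     common = [v for v in client if v in server]           # ['01', '02']
#     version = max(common)                                 # '02'
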
@getbundle2partsgenerator('listkeys')
def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
                            b2caps=None, **kwargs):
    """add parts containing listkeys namespaces to the requested bundle"""
    listkeys = kwargs.get('listkeys', ())
    for namespace in listkeys:
        part = bundler.newpart('listkeys')
        part.addparam('namespace', namespace)
        keys = repo.listkeys(namespace).items()
        part.data = pushkey.encodekeys(keys)

@getbundle2partsgenerator('obsmarkers')
def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
                            b2caps=None, heads=None, **kwargs):
    """add an obsolescence markers part to the requested bundle"""
    if kwargs.get('obsmarkers', False):
        if heads is None:
            heads = repo.heads()
        subset = [c.node() for c in repo.set('::%ln', heads)]
        markers = repo.obsstore.relevantmarkers(subset)
        markers = sorted(markers)
        buildobsmarkerspart(bundler, markers)

@getbundle2partsgenerator('hgtagsfnodes')
def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
                         b2caps=None, heads=None, common=None,
                         **kwargs):
    """Transfer the .hgtags filenodes mapping.

    Only values for heads in this bundle will be transferred.

    The part data consists of pairs of 20-byte changeset node and .hgtags
    filenodes raw values.
    """
    # Don't send unless:
    # - changesets are being exchanged,
    # - the client supports it.
    if not (kwargs.get('cg', True) and 'hgtagsfnodes' in b2caps):
        return

    outgoing = _computeoutgoing(repo, heads, common)
    bundle2.addparttagsfnodescache(repo, bundler, outgoing)

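# Editor's sketch (not in the original source): how a receiver could split
# the part payload described in the docstring above. Records are fixed-size:
# a 20-byte changeset node followed by a 20-byte .hgtags filenode; the helper
# name is hypothetical.
#
#     def _parsetagsfnodes(data):
#         mapping = {}
#         for offset in range(0, len(data), 40):
#             node = data[offset:offset + 20]
#             fnode = data[offset + 20:offset + 40]
#             mapping[node] = fnode
#         return mapping
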
def _getbookmarks(repo, **kwargs):
    """Returns bookmark to node mapping.

    This function is primarily used to generate the `bookmarks` bundle2 part.
    It is a separate function in order to make it easy to wrap it
    in extensions. Passing `kwargs` to the function makes it easy to
    add new parameters in extensions.
    """

    return dict(bookmod.listbinbookmarks(repo))

def check_heads(repo, their_heads, context):
    """check if the heads of a repo have been modified

    Used by peer for unbundling.
    """
    heads = repo.heads()
    heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
    if not (their_heads == ['force'] or their_heads == heads or
            their_heads == ['hashed', heads_hash]):
        # someone else committed/pushed/unbundled while we
        # were transferring data
        raise error.PushRaced('repository changed while %s - '
                              'please try again' % context)

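# Editor's sketch (not in the original source): the three accepted forms of
# 'their_heads' checked above. A client that does not want to send the full
# list of head nodes can send the hashed form, computed the same way as
# heads_hash:
#
#     their_heads = ['force']        # skip the race check entirely
#     their_heads = repo.heads()     # literal head nodes
#     their_heads = ['hashed',
#                    hashlib.sha1(''.join(sorted(repo.heads()))).digest()]
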
def unbundle(repo, cg, heads, source, url):
    """Apply a bundle to a repo.

    This function makes sure the repo is locked during the application and has
    a mechanism to check that no push race occurred between the creation of
    the bundle and its application.

    If the push was raced, a PushRaced exception is raised."""
    r = 0
    # need a transaction when processing a bundle2 stream
    # [wlock, lock, tr] - needs to be an array so nested functions can modify it
    lockandtr = [None, None, None]
    recordout = None
    # quick fix for output mismatch with bundle2 in 3.4
    captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture',
                                       False)
    if url.startswith('remote:http:') or url.startswith('remote:https:'):
        captureoutput = True
    try:
        # note: outside bundle1, 'heads' is expected to be empty and this
        # 'check_heads' call will be a no-op
        check_heads(repo, heads, 'uploading changes')
        # push can proceed
        if not util.safehasattr(cg, 'params'):
            # legacy case: bundle1 (changegroup 01)
            lockandtr[1] = repo.lock()
            r = cg.apply(repo, source, url)
        else:
            r = None
            try:
                def gettransaction():
                    if not lockandtr[2]:
                        lockandtr[0] = repo.wlock()
                        lockandtr[1] = repo.lock()
                        lockandtr[2] = repo.transaction(source)
                        lockandtr[2].hookargs['source'] = source
                        lockandtr[2].hookargs['url'] = url
                        lockandtr[2].hookargs['bundle2'] = '1'
                    return lockandtr[2]

                # Do greedy locking by default until we're satisfied with lazy
                # locking.
                if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
                    gettransaction()

                op = bundle2.bundleoperation(repo, gettransaction,
                                             captureoutput=captureoutput)
                try:
                    op = bundle2.processbundle(repo, cg, op=op)
                finally:
                    r = op.reply
                    if captureoutput and r is not None:
                        repo.ui.pushbuffer(error=True, subproc=True)
                        def recordout(output):
                            r.newpart('output', data=output, mandatory=False)
                if lockandtr[2] is not None:
                    lockandtr[2].close()
            except BaseException as exc:
                exc.duringunbundle2 = True
                if captureoutput and r is not None:
                    parts = exc._bundle2salvagedoutput = r.salvageoutput()
                    def recordout(output):
                        part = bundle2.bundlepart('output', data=output,
                                                  mandatory=False)
                        parts.append(part)
                raise
    finally:
        lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
        if recordout is not None:
            recordout(repo.ui.popbuffer())
    return r

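# Editor's note (not in the original source): the lockandtr list above is a
# Python 2 idiom. Nested functions cannot rebind an enclosing function's
# locals (there is no 'nonlocal'), but they may mutate a shared container, so
# gettransaction() writes into list slots instead of assigning names:
#
#     def outer():
#         state = [None]
#         def inner():
#             state[0] = 'set from inner'  # works; 'state = ...' would not
#         inner()
#         return state[0]                  # 'set from inner'
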
def _maybeapplyclonebundle(pullop):
    """Apply a clone bundle from a remote, if possible."""

    repo = pullop.repo
    remote = pullop.remote

    if not repo.ui.configbool('ui', 'clonebundles', True):
        return

    # Only run if local repo is empty.
    if len(repo):
        return

    if pullop.heads:
        return

    if not remote.capable('clonebundles'):
        return

    res = remote._call('clonebundles')

    # If we call the wire protocol command, that's good enough to record the
    # attempt.
    pullop.clonebundleattempted = True

    entries = parseclonebundlesmanifest(repo, res)
    if not entries:
        repo.ui.note(_('no clone bundles available on remote; '
                       'falling back to regular clone\n'))
        return

    entries = filterclonebundleentries(repo, entries)
    if not entries:
        # There is a thundering herd concern here. However, if a server
        # operator doesn't advertise bundles appropriate for its clients,
        # they deserve what's coming. Furthermore, from a client's
        # perspective, no automatic fallback would mean not being able to
        # clone!
        repo.ui.warn(_('no compatible clone bundles available on server; '
                       'falling back to regular clone\n'))
        repo.ui.warn(_('(you may want to report this to the server '
                       'operator)\n'))
        return

    entries = sortclonebundleentries(repo.ui, entries)

    url = entries[0]['URL']
    repo.ui.status(_('applying clone bundle from %s\n') % url)
    if trypullbundlefromurl(repo.ui, repo, url):
        repo.ui.status(_('finished applying clone bundle\n'))
    # Bundle failed.
    #
    # We abort by default to avoid the thundering herd of
    # clients flooding a server that was expecting expensive
    # clone load to be offloaded.
    elif repo.ui.configbool('ui', 'clonebundlefallback', False):
        repo.ui.warn(_('falling back to normal clone\n'))
    else:
        raise error.Abort(_('error applying bundle'),
                          hint=_('if this error persists, consider contacting '
                                 'the server operator or disable clone '
                                 'bundles via '
                                 '"--config ui.clonebundles=false"'))

def parseclonebundlesmanifest(repo, s):
    """Parses the raw text of a clone bundles manifest.

    Returns a list of dicts. The dicts have a ``URL`` key corresponding
    to the URL and other keys are the attributes for the entry.
    """
    m = []
    for line in s.splitlines():
        fields = line.split()
        if not fields:
            continue
        attrs = {'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split('=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            attrs[key] = value

            # Parse BUNDLESPEC into components. This makes client-side
            # preferences easier to specify since you can prefer a single
            # component of the BUNDLESPEC.
            if key == 'BUNDLESPEC':
                try:
                    comp, version, params = parsebundlespec(repo, value,
                                                            externalnames=True)
                    attrs['COMPRESSION'] = comp
                    attrs['VERSION'] = version
                except error.InvalidBundleSpecification:
                    pass
                except error.UnsupportedBundleSpecification:
                    pass

        m.append(attrs)

    return m

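# Editor's sketch (not in the original source): a hypothetical manifest line
# and the entry it parses into. Attributes are space-separated, urlquoted
# key=value pairs following the bundle URL:
#
#     https://example.com/full.hg.gz BUNDLESPEC=gzip-v1 REQUIRESNI=true
#
# parses into:
#
#     {'URL': 'https://example.com/full.hg.gz',
#      'BUNDLESPEC': 'gzip-v1',
#      'COMPRESSION': 'gzip',   # derived from BUNDLESPEC
#      'VERSION': 'v1',         # derived from BUNDLESPEC
#      'REQUIRESNI': 'true'}
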
def filterclonebundleentries(repo, entries):
    """Remove incompatible clone bundle manifest entries.

    Accepts a list of entries parsed with ``parseclonebundlesmanifest``
    and returns a new list consisting of only the entries that this client
    should be able to apply.

    There is no guarantee we'll be able to apply all returned entries because
    the metadata we use to filter on may be missing or wrong.
    """
    newentries = []
    for entry in entries:
        spec = entry.get('BUNDLESPEC')
        if spec:
            try:
                parsebundlespec(repo, spec, strict=True)
            except error.InvalidBundleSpecification as e:
                repo.ui.debug(str(e) + '\n')
                continue
            except error.UnsupportedBundleSpecification as e:
                repo.ui.debug('filtering %s because unsupported bundle '
                              'spec: %s\n' % (entry['URL'], str(e)))
                continue

        if 'REQUIRESNI' in entry and not sslutil.hassni:
            repo.ui.debug('filtering %s because SNI not supported\n' %
                          entry['URL'])
            continue

        newentries.append(entry)

    return newentries

class clonebundleentry(object):
    """Represents an item in a clone bundles manifest.

    This rich class is needed to support sorting since sorted() in Python 3
    doesn't support ``cmp`` and our comparison is complex enough that ``key=``
    won't work.
    """

    def __init__(self, value, prefers):
        self.value = value
        self.prefers = prefers

    def _cmp(self, other):
        for prefkey, prefvalue in self.prefers:
            avalue = self.value.get(prefkey)
            bvalue = other.value.get(prefkey)

            # Special case for b missing attribute and a matches exactly.
            if avalue is not None and bvalue is None and avalue == prefvalue:
                return -1

            # Special case for a missing attribute and b matches exactly.
            if bvalue is not None and avalue is None and bvalue == prefvalue:
                return 1

            # We can't compare unless attribute present on both.
            if avalue is None or bvalue is None:
                continue

            # Same values should fall back to next attribute.
            if avalue == bvalue:
                continue

            # Exact matches come first.
            if avalue == prefvalue:
                return -1
            if bvalue == prefvalue:
                return 1

            # Fall back to next attribute.
            continue

        # If we got here we couldn't sort by attributes and prefers. Fall
        # back to index order.
        return 0

    def __lt__(self, other):
        return self._cmp(other) < 0

    def __gt__(self, other):
        return self._cmp(other) > 0

    def __eq__(self, other):
        return self._cmp(other) == 0

    def __le__(self, other):
        return self._cmp(other) <= 0

    def __ge__(self, other):
        return self._cmp(other) >= 0

    def __ne__(self, other):
        return self._cmp(other) != 0

def sortclonebundleentries(ui, entries):
    prefers = ui.configlist('ui', 'clonebundleprefers', default=[])
    if not prefers:
        return list(entries)

    prefers = [p.split('=', 1) for p in prefers]

    items = sorted(clonebundleentry(v, prefers) for v in entries)
    return [i.value for i in items]

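# Editor's sketch (not in the original source): how the preference sorting
# behaves for a hypothetical configuration. With
#
#     [ui]
#     clonebundleprefers = VERSION=v2, COMPRESSION=gzip
#
# entries whose VERSION attribute equals 'v2' sort first; ties are broken by
# preferring COMPRESSION == 'gzip', and anything still tied keeps its
# original manifest order (clonebundleentry._cmp returns 0).
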
def trypullbundlefromurl(ui, repo, url):
    """Attempt to apply a bundle from a URL."""
    lock = repo.lock()
    try:
        tr = repo.transaction('bundleurl')
        try:
            try:
                fh = urlmod.open(ui, url)
                cg = readbundle(ui, fh, 'stream')

                if isinstance(cg, bundle2.unbundle20):
                    bundle2.processbundle(repo, cg, lambda: tr)
                elif isinstance(cg, streamclone.streamcloneapplier):
                    cg.apply(repo)
                else:
                    cg.apply(repo, 'clonebundles', url)
                tr.close()
                return True
            except urlerr.httperror as e:
                ui.warn(_('HTTP error fetching bundle: %s\n') % str(e))
            except urlerr.urlerror as e:
                ui.warn(_('error fetching bundle: %s\n') % e.reason[1])

            return False
        finally:
            tr.release()
    finally:
        lock.release()