bundle2: fix names for error part handler...
Pierre-Yves David
r24741:bb67e523 default
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is architectured as follows

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

Binary format is as follows

:params size: int32

  The total number of Bytes used by the parameters

:params value: arbitrary number of Bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  A name MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage any
    crazy usage.
  - Textual data allow easy human inspection of a bundle2 header in case of
    trouble.

  Any application level options MUST go into a bundle2 part instead.
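
  For illustration only (these parameter names are hypothetical, not ones
  defined by Mercurial), a stream carrying the advisory parameter ``simple``
  and the mandatory parameter ``Strict=yes`` would serialize its parameter
  blob as::

    simple Strict=yes

  preceded by the int32 value 17, the length of that blob in bytes.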

Payload part
------------------------

Binary format is as follows

:header size: int32

  The total number of Bytes used by the part headers. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

  The header defines how to interpret the part. It contains two pieces of
  data: the part type, and the part parameters.

  The part type is used to route to an application level handler that can
  interpret the payload.

  Part parameters are passed to the application level handler. They are
  meant to convey information that will help the application level object
  interpret the part payload.

  The binary format of the header is as follows

  :typesize: (one byte)

  :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

  :partid: A 32bits integer (unique in the bundle) that can be used to refer
           to this part.

  :parameters:

    Part parameters may have arbitrary content, the binary structure is::

        <mandatory-count><advisory-count><param-sizes><param-data>

    :mandatory-count: 1 byte, number of mandatory parameters

    :advisory-count: 1 byte, number of advisory parameters

    :param-sizes:

        N couples of bytes, where N is the total number of parameters. Each
        couple contains (<size-of-key>, <size-of-value>) for one parameter.

    :param-data:

        A blob of bytes from which each parameter key and value can be
        retrieved using the list of size couples stored in the previous
        field.

    Mandatory parameters come first, then the advisory ones.

    Each parameter's key MUST be unique within the part.
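
    As a purely illustrative example (the parameter names below are made up),
    a part with one mandatory parameter ``('version', '02')`` and one
    advisory parameter ``('nbchanges', '12')`` would encode its parameter
    block as::

        \x01\x01               (one mandatory, one advisory parameter)
        \x07\x02\x09\x02       (sizes of 'version'/'02' and 'nbchanges'/'12')
        version02nbchanges12   (concatenated keys and values)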

:payload:

  payload is a series of `<chunksize><chunkdata>`.

  `chunksize` is an int32, `chunkdata` are plain bytes (as many as `chunksize`
  says). The payload part is concluded by a zero size chunk.

  The current implementation always produces either zero or one chunk.
  This is an implementation limitation that will ultimately be lifted.

  `chunksize` can be negative to trigger special case processing. Currently a
  chunk size of -1 is used to signal an interruption of the payload, carrying
  an embedded part (see the interrupt handling below).
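
  For illustration, a part whose complete payload is the five bytes ``hello``
  would be framed as::

    \x00\x00\x00\x05hello\x00\x00\x00\x00

  that is, an int32 chunk size of 5, the chunk data itself, then the zero size
  chunk that concludes the payload.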

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part type
contains any uppercase char it is considered mandatory. When no handler is
known for a mandatory part, the process is aborted and an exception is raised.
If the part is advisory and no handler is known, the part is ignored. When the
process is aborted, the full bundle is still read from the stream to keep the
channel usable. But none of the parts read after an abort are processed. In
the future, dropping the stream may become an option for channels we do not
care to preserve.
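
For instance, a part received with the type ``OUTPUT`` is matched against a
handler registered as ``output``; because the received type contains upper
case characters the part is treated as mandatory, while the same part sent
as ``output`` would be advisory.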
146 """
146 """
147
147
148 import errno
148 import errno
149 import sys
149 import sys
150 import util
150 import util
151 import struct
151 import struct
152 import urllib
152 import urllib
153 import string
153 import string
154 import obsolete
154 import obsolete
155 import pushkey
155 import pushkey
156 import url
156 import url
157 import re
157 import re
158
158
159 import changegroup, error
159 import changegroup, error
160 from i18n import _
160 from i18n import _
161
161
162 _pack = struct.pack
162 _pack = struct.pack
163 _unpack = struct.unpack
163 _unpack = struct.unpack
164
164
165 _fstreamparamsize = '>i'
165 _fstreamparamsize = '>i'
166 _fpartheadersize = '>i'
166 _fpartheadersize = '>i'
167 _fparttypesize = '>B'
167 _fparttypesize = '>B'
168 _fpartid = '>I'
168 _fpartid = '>I'
169 _fpayloadsize = '>i'
169 _fpayloadsize = '>i'
170 _fpartparamcount = '>BB'
170 _fpartparamcount = '>BB'
171
171
172 preferedchunksize = 4096
172 preferedchunksize = 4096
173
173
174 _parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')
174 _parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')
175
175
176 def validateparttype(parttype):
176 def validateparttype(parttype):
177 """raise ValueError if a parttype contains invalid character"""
177 """raise ValueError if a parttype contains invalid character"""
178 if _parttypeforbidden.search(parttype):
178 if _parttypeforbidden.search(parttype):
179 raise ValueError(parttype)
179 raise ValueError(parttype)
180
180
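# For example (illustrative only, not part of the original module):
# validateparttype('changegroup') returns None, while
# validateparttype('change group') raises ValueError because the space is
# outside the allowed [a-zA-Z0-9_:-] character set.
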
def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>'+('BB'*nbparams)

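# For example, _makefpartparamsizes(2) returns '>BBBB': two
# (<size-of-key>, <size-of-value>) byte couples unpacked in a single
# struct.unpack call.
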
parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`. Where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

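# Illustrative usage sketch for unbundlerecords (not part of the original
# module); the category names below are arbitrary examples:
#
#     records = unbundlerecords()
#     records.add('changegroup', {'return': 1})
#     records.add('reply', 'ok', inreplyto=3)
#     records['changegroup']            # -> ({'return': 1},)
#     records.getreplies(3)['reply']    # -> ('ok',)
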
class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def processbundle(repo, unbundler, transactiongetter=None):
    """This function processes a bundle, applying effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    This is a very early version of this function that will be strongly
    reworked before final usage.

    An unknown mandatory part will abort the process.
    """
    if transactiongetter is None:
        transactiongetter = _notransaction
    op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    iterparts = unbundler.iterparts()
    part = None
    try:
        for part in iterparts:
            _processpart(op, part)
    except Exception, exc:
        for part in iterparts:
            # consume the bundle content
            part.seek(0, 2)
        # Small hack to let caller code distinguish exceptions from bundle2
        # processing from those raised when processing the old format. This is
        # mostly needed to handle different return codes to unbundle according
        # to the type of bundle. We should probably clean up or drop this
        # return code craziness in a future version.
        exc.duringunbundle2 = True
        raise
    return op

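# Illustrative call sketch (not part of the original module). Callers that
# expect repository changes typically pass a callable returning an open
# transaction; the names below are hypothetical:
#
#     def gettransaction():
#         return repo.transaction('unbundle')
#
#     op = processbundle(repo, unbundler, transactiongetter=gettransaction)
#     op.records['changegroup']    # records added by the part handlers
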
def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    try:
        try:
            handler = parthandlermapping.get(part.type)
            if handler is None:
                raise error.UnsupportedPartError(parttype=part.type)
            op.ui.debug('found a handler for part %r\n' % part.type)
            unknownparams = part.mandatorykeys - handler.params
            if unknownparams:
                unknownparams = list(unknownparams)
                unknownparams.sort()
                raise error.UnsupportedPartError(parttype=part.type,
                                                 params=unknownparams)
        except error.UnsupportedPartError, exc:
            if part.mandatory: # mandatory parts
                raise
            op.ui.debug('ignoring unsupported advisory part %s\n' % exc)
            return # skip to part processing

        # handler is called outside the above try block so that we don't
        # risk catching KeyErrors from anything other than the
        # parthandlermapping lookup (any KeyError raised by handler()
        # itself represents a defect of a different variety).
        output = None
        if op.reply is not None:
            op.ui.pushbuffer(error=True)
            output = ''
        try:
            handler(op, part)
        finally:
            if output is not None:
                output = op.ui.popbuffer()
            if output:
                outpart = op.reply.newpart('output', data=output,
                                           mandatory=False)
                outpart.addparam('in-reply-to', str(part.id), mandatory=False)
    finally:
        # consume the part content to not corrupt the stream.
        part.seek(0, 2)


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line).
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urllib.unquote(key)
        vals = [urllib.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urllib.quote(ca)
        vals = [urllib.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)

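# Illustrative round trip (capability names and values are examples only):
#
#     encodecaps({'HG20': (), 'changegroup': ['01', '02']})
#         -> 'HG20\nchangegroup=01,02'
#     decodecaps('HG20\nchangegroup=01,02')
#         -> {'HG20': [], 'changegroup': ['01', '02']}
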
class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual application level payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the containers

        The part is directly added to the container. For now, this means that
        any failure to properly initialize the part after calling ``newpart``
        should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding the part if
        you need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        self.ui.debug('start emission of %s stream\n' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        self.ui.debug('bundle parameter: %s\n' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param

        self.ui.debug('start of parts\n')
        for part in self._parts:
            self.ui.debug('bundle part: "%s"\n' % part.type)
            for chunk in part.getchunks():
                yield chunk
        self.ui.debug('end of bundle\n')
        yield _pack(_fpartheadersize, 0)

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urllib.quote(par)
            if value is not None:
                value = urllib.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

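# Illustrative sketch of assembling a bundle in memory (not part of the
# original module); the stream parameter and part used here are examples:
#
#     bundler = bundle20(ui)
#     bundler.addparam('simple')                      # advisory stream param
#     part = bundler.newpart('output', data='hi', mandatory=False)
#     blob = ''.join(bundler.getchunks())             # 'HG20' + params + parts
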
class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))

    def _unpack(self, format):
        """unpack this struct format from the stream"""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream"""
        return changegroup.readexactly(self._fp, size)

    def seek(self, offset, whence=0):
        """move the underlying file pointer"""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def tell(self):
        """return the file offset, or None if file is not seekable"""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError, e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

def getunbundler(ui, fp, header=None):
    """return a valid unbundler object for a given header"""
    if header is None:
        header = changegroup.readexactly(fp, 4)
    magic, version = header[0:2], header[2:4]
    if magic != 'HG':
        raise util.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise util.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    ui.debug('start processing of %s stream\n' % header)
    return unbundler

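# Illustrative sketch of reading a bundle2 stream back from a file object
# `fp` (not part of the original module; variable names are hypothetical):
#
#     unbundler = getunbundler(ui, fp)      # checks the 'HG' magic + version
#     for part in unbundler.iterparts():
#         # dispatch on part.type / part.params here
#         part.seek(0, 2)                   # consume any remaining payload
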
class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` methods."""

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        self.ui.debug('reading bundle2 stream parameters\n')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            for p in self._readexact(paramssize).split(' '):
                p = p.split('=', 1)
                p = [urllib.unquote(i) for i in p]
                if len(p) < 2:
                    p.append(None)
                self._processparam(*p)
                params[p[0]] = p[1]
        return params

    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory, and this function will raise a KeyError when they are
        unknown.

        Note: no options are currently supported. Any input will be either
        ignored or failing.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        # Some logic will be later added here to try to process the option for
        # a dict of known parameters.
        if name[0].islower():
            self.ui.debug("ignoring unknown parameter %r\n" % name)
        else:
            raise error.UnsupportedPartError(params=(name,))


    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        self.ui.debug('start extraction of bundle2 parts\n')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            part.seek(0, 2)
            headerblock = self._readpartheader()
        self.ui.debug('end of bundle2 stream\n')

    def _readpartheader(self):
        """reads a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        self.ui.debug('part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        return False

formatmap = {'20': unbundle20}

class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. Remote side
    should be able to safely ignore the advisory ones.

    Neither data nor parameters may be modified after the generation has
    begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data='', mandatory=True):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise RuntimeError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    # methods used to define the part content
    def __setdata(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data
    def __getdata(self):
        return self._data
    data = property(__getdata, __setdata)

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self._generated is not None:
            raise RuntimeError('part can only be consumed once')
        self._generated = False
        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        ## parttype
        header = [_pack(_fparttypesize, len(parttype)),
                  parttype, _pack(_fpartid, self.id),
                 ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except Exception, exc:
            # backup exception data for later
            exc_info = sys.exc_info()
            msg = 'unexpected error: %s' % exc
            interpart = bundlepart('error:abort', [('message', msg)],
                                   mandatory=False)
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks():
                yield chunk
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            raise exc_info[0], exc_info[1], exc_info[2]
        # end of payload
        yield _pack(_fpayloadsize, 0)
        self._generated = True

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


766 flaginterrupt = -1
766 flaginterrupt = -1
767
767
768 class interrupthandler(unpackermixin):
768 class interrupthandler(unpackermixin):
769 """read one part and process it with restricted capability
769 """read one part and process it with restricted capability
770
770
771 This allows to transmit exception raised on the producer size during part
771 This allows to transmit exception raised on the producer size during part
772 iteration while the consumer is reading a part.
772 iteration while the consumer is reading a part.
773
773
774 Part processed in this manner only have access to a ui object,"""
774 Part processed in this manner only have access to a ui object,"""
775
775
776 def __init__(self, ui, fp):
776 def __init__(self, ui, fp):
777 super(interrupthandler, self).__init__(fp)
777 super(interrupthandler, self).__init__(fp)
778 self.ui = ui
778 self.ui = ui
779
779
780 def _readpartheader(self):
780 def _readpartheader(self):
781 """reads a part header size and return the bytes blob
781 """reads a part header size and return the bytes blob
782
782
783 returns None if empty"""
783 returns None if empty"""
784 headersize = self._unpack(_fpartheadersize)[0]
784 headersize = self._unpack(_fpartheadersize)[0]
785 if headersize < 0:
785 if headersize < 0:
786 raise error.BundleValueError('negative part header size: %i'
786 raise error.BundleValueError('negative part header size: %i'
787 % headersize)
787 % headersize)
788 self.ui.debug('part header size: %i\n' % headersize)
788 self.ui.debug('part header size: %i\n' % headersize)
789 if headersize:
789 if headersize:
790 return self._readexact(headersize)
790 return self._readexact(headersize)
791 return None
791 return None
792
792
793 def __call__(self):
793 def __call__(self):
794 self.ui.debug('bundle2 stream interruption, looking for a part.\n')
794 self.ui.debug('bundle2 stream interruption, looking for a part.\n')
795 headerblock = self._readpartheader()
795 headerblock = self._readpartheader()
796 if headerblock is None:
796 if headerblock is None:
797 self.ui.debug('no part found during interruption.\n')
797 self.ui.debug('no part found during interruption.\n')
798 return
798 return
799 part = unbundlepart(self.ui, headerblock, self._fp)
799 part = unbundlepart(self.ui, headerblock, self._fp)
800 op = interruptoperation(self.ui)
800 op = interruptoperation(self.ui)
801 _processpart(op, part)
801 _processpart(op, part)
802
802
803 class interruptoperation(object):
803 class interruptoperation(object):
804 """A limited operation to be use by part handler during interruption
804 """A limited operation to be use by part handler during interruption
805
805
806 It only have access to an ui object.
806 It only have access to an ui object.
807 """
807 """
808
808
809 def __init__(self, ui):
809 def __init__(self, ui):
810 self.ui = ui
810 self.ui = ui
811 self.reply = None
811 self.reply = None
812
812
813 @property
813 @property
814 def repo(self):
814 def repo(self):
815 raise RuntimeError('no repo access from stream interruption')
815 raise RuntimeError('no repo access from stream interruption')
816
816
817 def gettransaction(self):
817 def gettransaction(self):
818 raise TransactionUnavailable('no repo access from stream interruption')
818 raise TransactionUnavailable('no repo access from stream interruption')
819
819
820 class unbundlepart(unpackermixin):
820 class unbundlepart(unpackermixin):
821 """a bundle part read from a bundle"""
821 """a bundle part read from a bundle"""
822
822
823 def __init__(self, ui, header, fp):
823 def __init__(self, ui, header, fp):
824 super(unbundlepart, self).__init__(fp)
824 super(unbundlepart, self).__init__(fp)
825 self.ui = ui
825 self.ui = ui
826 # unbundle state attr
826 # unbundle state attr
827 self._headerdata = header
827 self._headerdata = header
828 self._headeroffset = 0
828 self._headeroffset = 0
829 self._initialized = False
829 self._initialized = False
830 self.consumed = False
830 self.consumed = False
831 # part data
831 # part data
832 self.id = None
832 self.id = None
833 self.type = None
833 self.type = None
834 self.mandatoryparams = None
834 self.mandatoryparams = None
835 self.advisoryparams = None
835 self.advisoryparams = None
836 self.params = None
836 self.params = None
837 self.mandatorykeys = ()
837 self.mandatorykeys = ()
838 self._payloadstream = None
838 self._payloadstream = None
839 self._readheader()
839 self._readheader()
840 self._mandatory = None
840 self._mandatory = None
841 self._chunkindex = [] #(payload, file) position tuples for chunk starts
841 self._chunkindex = [] #(payload, file) position tuples for chunk starts
842 self._pos = 0
842 self._pos = 0
843
843
844 def _fromheader(self, size):
844 def _fromheader(self, size):
845 """return the next <size> byte from the header"""
845 """return the next <size> byte from the header"""
846 offset = self._headeroffset
846 offset = self._headeroffset
847 data = self._headerdata[offset:(offset + size)]
847 data = self._headerdata[offset:(offset + size)]
848 self._headeroffset = offset + size
848 self._headeroffset = offset + size
849 return data
849 return data
850
850
851 def _unpackheader(self, format):
851 def _unpackheader(self, format):
852 """read given format from header
852 """read given format from header
853
853
854 This automatically compute the size of the format to read."""
854 This automatically compute the size of the format to read."""
855 data = self._fromheader(struct.calcsize(format))
855 data = self._fromheader(struct.calcsize(format))
856 return _unpack(format, data)
856 return _unpack(format, data)
857
857
858 def _initparams(self, mandatoryparams, advisoryparams):
858 def _initparams(self, mandatoryparams, advisoryparams):
859 """internal function to setup all logic related parameters"""
859 """internal function to setup all logic related parameters"""
860 # make it read only to prevent people touching it by mistake.
860 # make it read only to prevent people touching it by mistake.
861 self.mandatoryparams = tuple(mandatoryparams)
861 self.mandatoryparams = tuple(mandatoryparams)
862 self.advisoryparams = tuple(advisoryparams)
862 self.advisoryparams = tuple(advisoryparams)
863 # user friendly UI
863 # user friendly UI
864 self.params = dict(self.mandatoryparams)
864 self.params = dict(self.mandatoryparams)
865 self.params.update(dict(self.advisoryparams))
865 self.params.update(dict(self.advisoryparams))
866 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
866 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
867
867
868     def _payloadchunks(self, chunknum=0):
869         '''seek to specified chunk and start yielding data'''
870         if len(self._chunkindex) == 0:
871             assert chunknum == 0, 'Must start with chunk 0'
872             self._chunkindex.append((0, super(unbundlepart, self).tell()))
873         else:
874             assert chunknum < len(self._chunkindex), \
875                    'Unknown chunk %d' % chunknum
876             super(unbundlepart, self).seek(self._chunkindex[chunknum][1])
877
878         pos = self._chunkindex[chunknum][0]
879         payloadsize = self._unpack(_fpayloadsize)[0]
880         self.ui.debug('payload chunk size: %i\n' % payloadsize)
881         while payloadsize:
882             if payloadsize == flaginterrupt:
883                 # interruption detection, the handler will now read a
884                 # single part and process it.
885                 interrupthandler(self.ui, self._fp)()
886             elif payloadsize < 0:
887                 msg = 'negative payload chunk size: %i' % payloadsize
888                 raise error.BundleValueError(msg)
889             else:
890                 result = self._readexact(payloadsize)
891                 chunknum += 1
892                 pos += payloadsize
893                 if chunknum == len(self._chunkindex):
894                     self._chunkindex.append((pos,
895                                              super(unbundlepart, self).tell()))
896                 yield result
897             payloadsize = self._unpack(_fpayloadsize)[0]
898             self.ui.debug('payload chunk size: %i\n' % payloadsize)
899
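_payloadchunks does two jobs at once: it yields decoded payload chunks, and as a side effect it records a (payload offset, file offset) pair for every chunk boundary it has seen in self._chunkindex, so a later seek() can jump back without rereading from the start. A stripped-down sketch of that indexing idea over a plain file of length-prefixed chunks (the 4-byte framing here is an assumption for illustration, not the bundle2 wire format):

import io
import struct

def chunks_with_index(fp, index):
    """Yield length-prefixed chunks from fp, appending (payload_pos, file_pos)
    for the start of every chunk to `index` (a list used as the chunk index)."""
    if not index:
        index.append((0, fp.tell()))
    pos = index[-1][0]
    header = fp.read(4)
    while header:
        (size,) = struct.unpack('>i', header)
        if size == 0:              # a zero-sized chunk terminates the payload
            break
        data = fp.read(size)
        pos += size
        index.append((pos, fp.tell()))
        yield data
        header = fp.read(4)

# usage with an in-memory "file"
raw = (struct.pack('>i', 3) + b'abc' +
       struct.pack('>i', 2) + b'de' +
       struct.pack('>i', 0))
idx = []
assert b''.join(chunks_with_index(io.BytesIO(raw), idx)) == b'abcde'
assert [p for p, _ in idx] == [0, 3, 5]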
900     def _findchunk(self, pos):
901         '''for a given payload position, return a chunk number and offset'''
902         for chunk, (ppos, fpos) in enumerate(self._chunkindex):
903             if ppos == pos:
904                 return chunk, 0
905             elif ppos > pos:
906                 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
907         raise ValueError('Unknown chunk')
908
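Concretely, with payload offsets 0, 3 and 5 in the chunk index (as in the toy example above), a request for payload position 4 lands in the second chunk with an internal offset of 1. Since the payload offsets are sorted, the same lookup could also be done with a binary search; a hypothetical variant using the standard bisect module, kept semantically equivalent to the linear scan above:

import bisect

def findchunk(chunkindex, pos):
    # chunkindex is a sorted list of (payload_pos, file_pos) pairs
    payload_positions = [p for p, _ in chunkindex]
    i = bisect.bisect_right(payload_positions, pos) - 1
    if i < 0 or pos > payload_positions[-1]:
        raise ValueError('Unknown chunk')
    return i, pos - payload_positions[i]

assert findchunk([(0, 0), (3, 7), (5, 13)], 4) == (1, 1)
assert findchunk([(0, 0), (3, 7), (5, 13)], 5) == (2, 0)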
909     def _readheader(self):
910         """read the header and setup the object"""
911         typesize = self._unpackheader(_fparttypesize)[0]
912         self.type = self._fromheader(typesize)
913         self.ui.debug('part type: "%s"\n' % self.type)
914         self.id = self._unpackheader(_fpartid)[0]
915         self.ui.debug('part id: "%s"\n' % self.id)
916         # extract mandatory bit from type
917         self.mandatory = (self.type != self.type.lower())
918         self.type = self.type.lower()
919         ## reading parameters
920         # param count
921         mancount, advcount = self._unpackheader(_fpartparamcount)
922         self.ui.debug('part parameters: %i\n' % (mancount + advcount))
923         # param size
924         fparamsizes = _makefpartparamsizes(mancount + advcount)
925         paramsizes = self._unpackheader(fparamsizes)
926         # make it a list of couple again
927         paramsizes = zip(paramsizes[::2], paramsizes[1::2])
928         # split mandatory from advisory
929         mansizes = paramsizes[:mancount]
930         advsizes = paramsizes[mancount:]
931         # retrieve param value
932         manparams = []
933         for key, value in mansizes:
934             manparams.append((self._fromheader(key), self._fromheader(value)))
935         advparams = []
936         for key, value in advsizes:
937             advparams.append((self._fromheader(key), self._fromheader(value)))
938         self._initparams(manparams, advparams)
939         ## part payload
940         self._payloadstream = util.chunkbuffer(self._payloadchunks())
941         # we read the data, tell it
942         self._initialized = True
943
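_readheader walks the part header in the order it was written: a sized part type (whose capitalization carries the mandatory bit), the numeric part id, the mandatory/advisory parameter counts, the per-parameter key/value sizes, and finally the key/value bytes themselves. The sketch below builds and re-parses such a header with made-up struct formats (a one-byte type length, a 32-bit id, one-byte counts and sizes); the real format strings (_fparttypesize, _fpartid, _fpartparamcount, _makefpartparamsizes) are defined elsewhere in bundle2.py and are not shown in this hunk, so treat these as assumptions for illustration:

import struct

def buildheader(parttype, partid, manparams, advparams):
    params = manparams + advparams
    header = struct.pack('>B', len(parttype)) + parttype
    header += struct.pack('>I', partid)
    header += struct.pack('>BB', len(manparams), len(advparams))
    for key, value in params:
        header += struct.pack('>BB', len(key), len(value))
    for key, value in params:
        header += key + value
    return header

def parseheader(header):
    offset = [0]
    def take(size):
        data = header[offset[0]:offset[0] + size]
        offset[0] += size
        return data
    def unpack(fmt):
        return struct.unpack(fmt, take(struct.calcsize(fmt)))
    (typesize,) = unpack('>B')
    parttype = take(typesize)
    (partid,) = unpack('>I')
    mancount, advcount = unpack('>BB')
    sizes = unpack('>' + 'BB' * (mancount + advcount))
    sizes = list(zip(sizes[::2], sizes[1::2]))
    params = [(take(k), take(v)) for k, v in sizes]
    mandatory = parttype != parttype.lower()
    return (parttype.lower(), partid, mandatory,
            params[:mancount], params[mancount:])

hdr = buildheader(b'CHECK:HEADS', 7, [(b'key', b'val')], [])
assert parseheader(hdr) == (b'check:heads', 7, True, [(b'key', b'val')], [])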
944     def read(self, size=None):
945         """read payload data"""
946         if not self._initialized:
947             self._readheader()
948         if size is None:
949             data = self._payloadstream.read()
950         else:
951             data = self._payloadstream.read(size)
952         if size is None or len(data) < size:
953             self.consumed = True
954         self._pos += len(data)
955         return data
956
957     def tell(self):
958         return self._pos
959
960     def seek(self, offset, whence=0):
961         if whence == 0:
962             newpos = offset
963         elif whence == 1:
964             newpos = self._pos + offset
965         elif whence == 2:
966             if not self.consumed:
967                 self.read()
968             newpos = self._chunkindex[-1][0] - offset
969         else:
970             raise ValueError('Unknown whence value: %r' % (whence,))
971
972         if newpos > self._chunkindex[-1][0] and not self.consumed:
973             self.read()
974         if not 0 <= newpos <= self._chunkindex[-1][0]:
975             raise ValueError('Offset out of range')
976
977         if self._pos != newpos:
978             chunk, internaloffset = self._findchunk(newpos)
979             self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
980             adjust = self.read(internaloffset)
981             if len(adjust) != internaloffset:
982                 raise util.Abort(_('Seek failed\n'))
983             self._pos = newpos
984
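read/tell/seek give the part a file-like interface over a stream that can only be consumed forward: seek first converts the (offset, whence) pair into an absolute payload position, forces the payload to be fully consumed when the target lies beyond what the chunk index already covers, then rebuilds the payload stream from the nearest indexed chunk and reads internaloffset bytes to land exactly on the target. Note that, in the code above, an end-relative offset is subtracted from the payload length, so callers are expected to pass a positive "bytes before the end" rather than the negative offset an ordinary file object would take. A small runnable sketch of just that whence arithmetic:

import os

def absoluteposition(curpos, payloadlength, offset, whence=os.SEEK_SET):
    """Mirror the whence handling in seek() above (end-relative offsets are
    subtracted from the payload length, unlike ordinary file objects)."""
    if whence == os.SEEK_SET:
        newpos = offset
    elif whence == os.SEEK_CUR:
        newpos = curpos + offset
    elif whence == os.SEEK_END:
        newpos = payloadlength - offset
    else:
        raise ValueError('Unknown whence value: %r' % (whence,))
    if not 0 <= newpos <= payloadlength:
        raise ValueError('Offset out of range')
    return newpos

assert absoluteposition(10, 100, 0, os.SEEK_END) == 100
assert absoluteposition(10, 100, 20, os.SEEK_CUR) == 30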
985 capabilities = {'HG20': (),
986                 'listkeys': (),
987                 'pushkey': (),
988                 'digests': tuple(sorted(util.DIGESTS.keys())),
989                 'remote-changegroup': ('http', 'https'),
990                }
991
992 def getrepocaps(repo, allowpushback=False):
993     """return the bundle2 capabilities for a given repo
994
995     Exists to allow extensions (like evolution) to mutate the capabilities.
996     """
997     caps = capabilities.copy()
998     caps['changegroup'] = tuple(sorted(changegroup.packermap.keys()))
999     if obsolete.isenabled(repo, obsolete.exchangeopt):
1000         supportedformat = tuple('V%i' % v for v in obsolete.formats)
1001         caps['obsmarkers'] = supportedformat
1002     if allowpushback:
1003         caps['pushback'] = ()
1004     return caps
1005
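Because getrepocaps returns a fresh copy of the module-level capabilities dict, an extension can advertise an extra capability by wrapping the function rather than mutating the shared dict. A hedged sketch of such a hook, assuming Mercurial's usual extensions.wrapfunction mechanism; the 'myfeature' capability name is made up for the example:

# sketch of a Mercurial extension advertising an extra bundle2 capability
from mercurial import bundle2, extensions

def _getrepocaps(orig, repo, *args, **kwargs):
    caps = orig(repo, *args, **kwargs)   # start from the stock capabilities
    caps['myfeature'] = ('1',)           # then advertise our own (illustrative)
    return caps

def uisetup(ui):
    extensions.wrapfunction(bundle2, 'getrepocaps', _getrepocaps)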
1006 def bundle2caps(remote):
1007     """return the bundle capabilities of a peer as dict"""
1008     raw = remote.capable('bundle2')
1009     if not raw and raw != '':
1010         return {}
1011     capsblob = urllib.unquote(remote.capable('bundle2'))
1012     return decodecaps(capsblob)
1013
1014 def obsmarkersversion(caps):
1015     """extract the list of supported obsmarkers versions from a bundle2caps dict
1016     """
1017     obscaps = caps.get('obsmarkers', ())
1018     return [int(c[1:]) for c in obscaps if c.startswith('V')]
1019
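obsmarkersversion simply keeps the numeric suffix of every advertised 'V<n>' format name. A quick usage check, importing the module this file defines:

from mercurial import bundle2

assert bundle2.obsmarkersversion({'obsmarkers': ('V0', 'V1')}) == [0, 1]
assert bundle2.obsmarkersversion({'listkeys': ()}) == []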
1020 @parthandler('changegroup', ('version',))
1021 def handlechangegroup(op, inpart):
1022     """apply a changegroup part on the repo
1023
1024     This is a very early implementation that will massive rework before being
1025     inflicted to any end-user.
1026     """
1027     # Make sure we trigger a transaction creation
1028     #
1029     # The addchangegroup function will get a transaction object by itself, but
1030     # we need to make sure we trigger the creation of a transaction object used
1031     # for the whole processing scope.
1032     op.gettransaction()
1033     unpackerversion = inpart.params.get('version', '01')
1034     # We should raise an appropriate exception here
1035     unpacker = changegroup.packermap[unpackerversion][1]
1036     cg = unpacker(inpart, 'UN')
1037     # the source and url passed here are overwritten by the one contained in
1038     # the transaction.hookargs argument. So 'bundle2' is a placeholder
1039     ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
1040     op.records.add('changegroup', {'return': ret})
1041     if op.reply is not None:
1042         # This is definitely not the final form of this
1043         # return. But one need to start somewhere.
1044         part = op.reply.newpart('reply:changegroup', mandatory=False)
1045         part.addparam('in-reply-to', str(inpart.id), mandatory=False)
1046         part.addparam('return', '%i' % ret, mandatory=False)
1047     assert not inpart.read()
1048
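Every @parthandler registration in this file follows the same contract: the decorator records the handler under a part type and declares which parameters the handler understands, and the handler receives the bundle operation (op) plus the incoming part (inpart), consumes the payload, and optionally appends a reply part. A hedged sketch of what a third-party handler could look like; the 'x-example:ping' part type, its parameter and the reply type are invented for illustration and are not part of bundle2:

# sketch of a custom bundle2 part handler an extension might register
from mercurial import bundle2

@bundle2.parthandler('x-example:ping', ('message',))
def handleping(op, inpart):
    """acknowledge a ping part with a pong reply (illustrative only)"""
    message = inpart.params.get('message', '')
    op.ui.note('ping received: %s\n' % message)
    op.records.add('x-example:ping', {'message': message})
    if op.reply is not None:
        rpart = op.reply.newpart('reply:x-example:ping', mandatory=False)
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
    # a handler is expected to consume the payload it was given
    inpart.read()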
1049 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1050     ['digest:%s' % k for k in util.DIGESTS.keys()])
1051 @parthandler('remote-changegroup', _remotechangegroupparams)
1052 def handleremotechangegroup(op, inpart):
1053     """apply a bundle10 on the repo, given an url and validation information
1054
1055     All the information about the remote bundle to import are given as
1056     parameters. The parameters include:
1057       - url: the url to the bundle10.
1058       - size: the bundle10 file size. It is used to validate what was
1059         retrieved by the client matches the server knowledge about the bundle.
1060       - digests: a space separated list of the digest types provided as
1061         parameters.
1062       - digest:<digest-type>: the hexadecimal representation of the digest with
1063         that name. Like the size, it is used to validate what was retrieved by
1064         the client matches what the server knows about the bundle.
1065
1066     When multiple digest types are given, all of them are checked.
1067     """
1068     try:
1069         raw_url = inpart.params['url']
1070     except KeyError:
1071         raise util.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1072     parsed_url = util.url(raw_url)
1073     if parsed_url.scheme not in capabilities['remote-changegroup']:
1074         raise util.Abort(_('remote-changegroup does not support %s urls') %
1075                          parsed_url.scheme)
1076
1077     try:
1078         size = int(inpart.params['size'])
1079     except ValueError:
1080         raise util.Abort(_('remote-changegroup: invalid value for param "%s"')
1081                          % 'size')
1082     except KeyError:
1083         raise util.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1084
1085     digests = {}
1086     for typ in inpart.params.get('digests', '').split():
1087         param = 'digest:%s' % typ
1088         try:
1089             value = inpart.params[param]
1090         except KeyError:
1091             raise util.Abort(_('remote-changegroup: missing "%s" param') %
1092                              param)
1093         digests[typ] = value
1094
1095     real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1096
1097     # Make sure we trigger a transaction creation
1098     #
1099     # The addchangegroup function will get a transaction object by itself, but
1100     # we need to make sure we trigger the creation of a transaction object used
1101     # for the whole processing scope.
1102     op.gettransaction()
1103     import exchange
1104     cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1105     if not isinstance(cg, changegroup.cg1unpacker):
1106         raise util.Abort(_('%s: not a bundle version 1.0') %
1107                          util.hidepassword(raw_url))
1108     ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
1109     op.records.add('changegroup', {'return': ret})
1110     if op.reply is not None:
1111         # This is definitely not the final form of this
1112         # return. But one need to start somewhere.
1113         part = op.reply.newpart('reply:changegroup')
1114         part.addparam('in-reply-to', str(inpart.id), mandatory=False)
1115         part.addparam('return', '%i' % ret, mandatory=False)
1116     try:
1117         real_part.validate()
1118     except util.Abort, e:
1119         raise util.Abort(_('bundle at %s is corrupted:\n%s') %
1120                          (util.hidepassword(raw_url), str(e)))
1121     assert not inpart.read()
1122
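The size and digest:<type> parameters exist so the receiving side can verify, via util.digestchecker, that the bytes fetched from the url are exactly the bundle the server described. Whoever emits such a part therefore has to precompute them; a minimal, standalone sketch of that computation with hashlib (the helper name and the choice of sha1 are assumptions for illustration, not part of bundle2 itself):

import hashlib

def describebundle(path, digesttypes=('sha1',)):
    """Return 'size' and 'digest:<type>' parameter values for a local bundle
    file, matching what the validation above expects to check."""
    size = 0
    hashers = dict((t, hashlib.new(t)) for t in digesttypes)
    with open(path, 'rb') as fp:
        for block in iter(lambda: fp.read(1024 * 1024), b''):
            size += len(block)
            for h in hashers.values():
                h.update(block)
    params = {'size': '%d' % size,
              'digests': ' '.join(sorted(hashers))}
    for typ, h in sorted(hashers.items()):
        params['digest:%s' % typ] = h.hexdigest()
    return params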
1123 @parthandler('reply:changegroup', ('return', 'in-reply-to'))
1124 def handlereplychangegroup(op, inpart):
1125     ret = int(inpart.params['return'])
1126     replyto = int(inpart.params['in-reply-to'])
1127     op.records.add('changegroup', {'return': ret}, replyto)
1128
1129 @parthandler('check:heads')
1130 def handlecheckheads(op, inpart):
1131     """check that head of the repo did not change
1132
1133     This is used to detect a push race when using unbundle.
1134     This replaces the "heads" argument of unbundle."""
1135     h = inpart.read(20)
1136     heads = []
1137     while len(h) == 20:
1138         heads.append(h)
1139         h = inpart.read(20)
1140     assert not h
1141     if heads != op.repo.heads():
1142         raise error.PushRaced('repository changed while pushing - '
1143                               'please try again')
1144
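check:heads carries no parameters at all: the payload is simply the concatenation of the 20-byte binary node ids the client believed were the repository heads, and the handler re-reads them in 20-byte slices. A standalone sketch of that framing (pack on the sending side, unpack on the receiving side), using io.BytesIO in place of a part's payload stream:

import io

NODELEN = 20

def packheads(heads):
    """Concatenate 20-byte binary node ids into a check:heads style payload."""
    assert all(len(h) == NODELEN for h in heads)
    return b''.join(heads)

def unpackheads(fp):
    """Read the payload back in 20-byte slices, as the handler above does."""
    heads = []
    h = fp.read(NODELEN)
    while len(h) == NODELEN:
        heads.append(h)
        h = fp.read(NODELEN)
    assert not h, 'trailing garbage in check:heads payload'
    return heads

nodes = [b'\x11' * NODELEN, b'\x22' * NODELEN]
assert unpackheads(io.BytesIO(packheads(nodes))) == nodes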
1145 @parthandler('output')
1146 def handleoutput(op, inpart):
1147     """forward output captured on the server to the client"""
1148     for line in inpart.read().splitlines():
1149         op.ui.write(('remote: %s\n' % line))
1150
1151 @parthandler('replycaps')
1152 def handlereplycaps(op, inpart):
1153     """Notify that a reply bundle should be created
1154
1155     The payload contains the capabilities information for the reply"""
1156     caps = decodecaps(inpart.read())
1157     if op.reply is None:
1158         op.reply = bundle20(op.ui, caps)
1159
1160 @parthandler('error:abort', ('message', 'hint'))
1161 -def handlereplycaps(op, inpart):
1161 +def handleerrorabort(op, inpart):
1162     """Used to transmit abort error over the wire"""
1163     raise util.Abort(inpart.params['message'], hint=inpart.params.get('hint'))
1164
1165 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
1166 -def handlereplycaps(op, inpart):
1166 +def handleerrorunsupportedcontent(op, inpart):
1167     """Used to transmit unknown content error over the wire"""
1168     kwargs = {}
1169     parttype = inpart.params.get('parttype')
1170     if parttype is not None:
1171         kwargs['parttype'] = parttype
1172     params = inpart.params.get('params')
1173     if params is not None:
1174         kwargs['params'] = params.split('\0')
1175
1176     raise error.UnsupportedPartError(**kwargs)
1177
1178 @parthandler('error:pushraced', ('message',))
1179 -def handlereplycaps(op, inpart):
1179 +def handleerrorpushraced(op, inpart):
1180     """Used to transmit push race error over the wire"""
1181     raise error.ResponseError(_('push failed:'), inpart.params['message'])
1182
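These three handlers are the subject of this changeset: each was apparently copied from handlereplycaps and kept that name, so the module ended up with several functions all called handlereplycaps. Dispatch still worked, because @parthandler stores a reference to the function object at decoration time and only the module-level name was rebound, but tracebacks and introspection showed a misleading name; the rename gives each handler a name matching the part it handles. A tiny standalone sketch of why the collision was harmless for dispatch yet confusing for debugging (the registry below is a stand-in, not bundle2's actual handler mapping):

handlers = {}

def register(parttype):
    # mimic @parthandler: remember the function object under the part type
    def decorator(func):
        handlers[parttype] = func
        return func
    return decorator

@register('error:abort')
def handler(op, inpart):
    return 'abort'

@register('error:pushraced')
def handler(op, inpart):          # rebinds the module-level name 'handler'...
    return 'pushraced'

# ...but the registry kept a reference to each function object, so dispatch
# is unaffected; only the function names (e.g. in tracebacks) are misleading.
assert handlers['error:abort'](None, None) == 'abort'
assert handlers['error:pushraced'](None, None) == 'pushraced'
assert handlers['error:abort'].__name__ == handlers['error:pushraced'].__name__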
1183 @parthandler('listkeys', ('namespace',))
1184 def handlelistkeys(op, inpart):
1185     """retrieve pushkey namespace content stored in a bundle2"""
1186     namespace = inpart.params['namespace']
1187     r = pushkey.decodekeys(inpart.read())
1188     op.records.add('listkeys', (namespace, r))
1189
1190 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
1191 def handlepushkey(op, inpart):
1192     """process a pushkey request"""
1193     dec = pushkey.decode
1194     namespace = dec(inpart.params['namespace'])
1195     key = dec(inpart.params['key'])
1196     old = dec(inpart.params['old'])
1197     new = dec(inpart.params['new'])
1198     ret = op.repo.pushkey(namespace, key, old, new)
1199     record = {'namespace': namespace,
1200               'key': key,
1201               'old': old,
1202               'new': new}
1203     op.records.add('pushkey', record)
1204     if op.reply is not None:
1205         rpart = op.reply.newpart('reply:pushkey')
1206         rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1207         rpart.addparam('return', '%i' % ret, mandatory=False)
1208
1209 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
1210 def handlepushkeyreply(op, inpart):
1211     """retrieve the result of a pushkey request"""
1212     ret = int(inpart.params['return'])
1213     partid = int(inpart.params['in-reply-to'])
1214     op.records.add('pushkey', {'return': ret}, partid)
1215
1216 @parthandler('obsmarkers')
1217 def handleobsmarker(op, inpart):
1218     """add a stream of obsmarkers to the repo"""
1219     tr = op.gettransaction()
1220     markerdata = inpart.read()
1221     if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
1222         op.ui.write(('obsmarker-exchange: %i bytes received\n')
1223                     % len(markerdata))
1224     new = op.repo.obsstore.mergemarkers(tr, markerdata)
1225     if new:
1226         op.repo.ui.status(_('%i new obsolescence markers\n') % new)
1227     op.records.add('obsmarkers', {'new': new})
1228     if op.reply is not None:
1229         rpart = op.reply.newpart('reply:obsmarkers')
1230         rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1231         rpart.addparam('new', '%i' % new, mandatory=False)
1232
1233
1234 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
1235 def handlepushkeyreply(op, inpart):
1236     """retrieve the result of a pushkey request"""
1237     ret = int(inpart.params['new'])
1238     partid = int(inpart.params['in-reply-to'])
1239     op.records.add('obsmarkers', {'new': ret}, partid)