unbundle20: allow registering handlers for stream level parameters...
Pierre-Yves David
r26395:4e7b0bf9 default
@@ -1,1422 +1,1434 @@
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows:

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: int32

  The total number of bytes used by the parameters

:params value: arbitrary number of bytes

  A blob of `params size` containing the serialized version of all stream level
  parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  A name MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is upper case, the parameter is mandatory and the bundling process
  MUST stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage any
    crazy usage.
  - Textual data allow easy human inspection of a bundle2 header in case of
    troubles.

  Any application level options MUST go into a bundle2 part instead.

Payload part
------------------------

The binary format is as follows:

:header size: int32

  The total number of bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route the part to an application level handler
    that can interpret the payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object to
    interpret the part payload.

    The binary format of the header is as follows:

    :typesize: (one byte)

    :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

    :partid: A 32 bit integer (unique in the bundle) that can be used to refer
             to this part.

    :parameters:

        Part parameters may have arbitrary content; the binary structure is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count:  1 byte, number of advisory parameters

        :param-sizes:

            N couples of bytes, where N is the total number of parameters. Each
            couple contains (<size-of-key>, <size-of-value>) for one parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size couples stored in the previous
            field.

            Mandatory parameters come first, then the advisory ones.

            Each parameter's key MUST be unique within the part.

:payload:

    The payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as many as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part type
contains any uppercase char it is considered mandatory. When no handler is
known for a mandatory part, the process is aborted and an exception is raised.
If the part is advisory and no handler is known, the part is ignored. When the
process is aborted, the full bundle is still read from the stream to keep the
channel usable, but none of the parts read after an abort are processed. In
the future, dropping the stream may become an option for channels we do not
care to preserve.
"""
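The stream level header described above is simple enough to build and parse by hand. Below is a minimal, standalone sketch of that layout, assuming only the Python standard library; the parameter name 'ev1' and its value are made up for illustration and nothing here is part of bundle2.py itself.

# Standalone sketch of the container layout: magic string, int32-prefixed
# urlquoted parameter blob, then parts, then an empty part header as the
# end of stream marker.
import struct
import urllib

def encodestreamparams(params):
    # params: list of (name, value-or-None); a lower case first letter
    # means advisory, an upper case one means mandatory.
    chunks = []
    for name, value in params:
        item = urllib.quote(name)
        if value is not None:
            item = '%s=%s' % (item, urllib.quote(value))
        chunks.append(item)
    blob = ' '.join(chunks)
    return struct.pack('>i', len(blob)) + blob

stream = 'HG20' + encodestreamparams([('ev1', 'foo bar')])
stream += struct.pack('>i', 0)     # empty part header = end of stream

# Reading it back follows the same layout.
assert stream[:4] == 'HG20'
(psize,) = struct.unpack('>i', stream[4:8])
blob = stream[8:8 + psize]
pairs = [[urllib.unquote(f) for f in p.split('=', 1)]
         for p in blob.split(' ')]
assert pairs == [['ev1', 'foo bar']]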

from __future__ import absolute_import

import errno
import re
import string
import struct
import sys
import urllib

from .i18n import _
from . import (
    changegroup,
    error,
    obsolete,
    pushkey,
    tags,
    url,
    util,
)

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

preferedchunksize = 4096

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-output: %s\n' % message)

def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-input: %s\n' % message)

def validateparttype(parttype):
    """raise ValueError if a parttype contains invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>'+('BB'*nbparams)
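To make the dynamic format concrete: for a part with one mandatory and one advisory parameter, the parameter block of the part header is packed as sketched below. The keys and values are made up and the snippet only mirrors the struct formats defined above.

# Illustration only (not part of bundle2.py): pack the
# <mandatory-count><advisory-count><param-sizes><param-data> block.
import struct

manpar = [('key', 'value')]      # mandatory parameters (made up)
advpar = [('hint', 'x')]         # advisory parameters (made up)

block = struct.pack('>BB', len(manpar), len(advpar))     # _fpartparamcount
sizes = []
for k, v in manpar + advpar:
    sizes.extend([len(k), len(v)])
# _makefpartparamsizes(2) returns the same '>BBBB' format built here.
block += struct.pack('>' + 'BB' * (len(manpar) + len(advpar)), *sizes)
for k, v in manpar + advpar:     # param-data: keys and values back to back
    block += k + v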

parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator
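As a usage note, the handler is stored under the lower-cased part type; whether a received part is mandatory is carried by the case of the type on the wire, not by the registration. A hypothetical handler (the part type 'demo:noop' and its parameter are invented for this example) would look like:

@parthandler('demo:noop', ('count',))
def handledemonoop(op, part):
    """record that the part was seen, ignoring its payload"""
    # 'demo:noop' is not a real bundle2 part type; this only illustrates
    # the registration and the (op, part) handler signature.
    op.records.add('demo:noop', {'count': part.params.get('count')})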

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`. Where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)
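A minimal illustration of the record container; the category names below are arbitrary examples, not a list of categories bundle2 actually uses.

records = unbundlerecords()
records.add('changegroup', {'return': 1})
records.add('pushkey', {'namespace': 'phases'}, inreplyto=3)

assert records['changegroup'] == ({'return': 1},)
assert list(records) == [('changegroup', {'return': 1}),
                         ('pushkey', {'namespace': 'phases'})]
assert len(records.getreplies(3)) == 1    # reply records indexed by part id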

class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None
        self.captureoutput = captureoutput

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def processbundle(repo, unbundler, transactiongetter=None, op=None):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    This is a very early version of this function that will be strongly
    reworked before final usage.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = ['bundle2-input-bundle:']
        if unbundler.params:
            msg.append(' %i params' % len(unbundler.params))
        if op.gettransaction is None:
            msg.append(' no-transaction')
        else:
            msg.append(' with-transaction')
        msg.append('\n')
        repo.ui.debug(''.join(msg))
    iterparts = enumerate(unbundler.iterparts())
    part = None
    nbpart = 0
    try:
        for nbpart, part in iterparts:
            _processpart(op, part)
    except BaseException as exc:
        for nbpart, part in iterparts:
            # consume the bundle content
            part.seek(0, 2)
        # Small hack to let caller code distinguish exceptions from bundle2
        # processing from processing the old format. This is mostly
        # needed to handle different return codes to unbundle according to the
        # type of bundle. We should probably clean up or drop this return code
        # craziness in a future version.
        exc.duringunbundle2 = True
        salvaged = []
        replycaps = None
        if op.reply is not None:
            salvaged = op.reply.salvageoutput()
            replycaps = op.reply.capabilities
        exc._replycaps = replycaps
        exc._bundle2salvagedoutput = salvaged
        raise
    finally:
        repo.ui.debug('bundle2-input-bundle: %i parts total\n' % nbpart)

    return op
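For orientation, a caller typically looks roughly like the sketch below; `repo` and `fp` are placeholders for a real repository object and a bundle stream, and the transaction handling is an assumption about the surrounding code rather than something defined in this module.

# Sketch of a typical caller; the transaction calls assume the standard
# localrepo transaction API (repo.transaction / tr.close / tr.release).
def applybundle2(repo, fp):
    tr = repo.transaction('unbundle')
    try:
        unbundler = getunbundler(repo.ui, fp)
        op = processbundle(repo, unbundler, transactiongetter=lambda: tr)
        tr.close()
    finally:
        tr.release()
    return op.records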

def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    status = 'unknown' # used by debug output
    try:
        try:
            handler = parthandlermapping.get(part.type)
            if handler is None:
                status = 'unsupported-type'
                raise error.BundleUnknownFeatureError(parttype=part.type)
            indebug(op.ui, 'found a handler for part %r' % part.type)
            unknownparams = part.mandatorykeys - handler.params
            if unknownparams:
                unknownparams = list(unknownparams)
                unknownparams.sort()
                status = 'unsupported-params (%s)' % unknownparams
                raise error.BundleUnknownFeatureError(parttype=part.type,
                                                      params=unknownparams)
            status = 'supported'
        except error.BundleUnknownFeatureError as exc:
            if part.mandatory: # mandatory parts
                raise
            indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
            return # skip to part processing
        finally:
            if op.ui.debugflag:
                msg = ['bundle2-input-part: "%s"' % part.type]
                if not part.mandatory:
                    msg.append(' (advisory)')
                nbmp = len(part.mandatorykeys)
                nbap = len(part.params) - nbmp
                if nbmp or nbap:
                    msg.append(' (params:')
                    if nbmp:
                        msg.append(' %i mandatory' % nbmp)
                    if nbap:
                        msg.append(' %i advisory' % nbap)
                    msg.append(')')
                msg.append(' %s\n' % status)
                op.ui.debug(''.join(msg))

        # handler is called outside the above try block so that we don't
        # risk catching KeyErrors from anything other than the
        # parthandlermapping lookup (any KeyError raised by handler()
        # itself represents a defect of a different variety).
        output = None
        if op.captureoutput and op.reply is not None:
            op.ui.pushbuffer(error=True, subproc=True)
            output = ''
        try:
            handler(op, part)
        finally:
            if output is not None:
                output = op.ui.popbuffer()
            if output:
                outpart = op.reply.newpart('output', data=output,
                                           mandatory=False)
                outpart.addparam('in-reply-to', str(part.id), mandatory=False)
    finally:
        # consume the part content to not corrupt the stream.
        part.seek(0, 2)


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urllib.unquote(key)
        vals = [urllib.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urllib.quote(ca)
        vals = [urllib.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
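A quick round trip through the two helpers; the capability names are arbitrary examples, not a statement about which capabilities exist.

caps = {'HG20': [], 'changegroup': ['01', '02']}
blob = encodecaps(caps)
# blob == 'HG20\nchangegroup=01,02'
assert decodecaps(blob) == {'HG20': [], 'changegroup': ['01', '02']}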

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to defines the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contains the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the containers

        As the part is directly added to the containers. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually create and add if you need better
        control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(' (%i params)' % len(self._params))
            msg.append(' %i parts total\n' % len(self._parts))
            self.ui.debug(''.join(msg))
        outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, 'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param

        outdebug(self.ui, 'start of parts')
        for part in self._parts:
            outdebug(self.ui, 'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, 'end of bundle')
        yield _pack(_fpartheadersize, 0)

    def _paramchunk(self):
        """return a encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urllib.quote(par)
            if value is not None:
                value = urllib.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we preserve
        server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith('output'):
                salvaged.append(part.copy())
        return salvaged
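Putting the bundler together, a producer might do something like the sketch below; the `ui` object has to come from Mercurial, and the part type and parameter names are made up for illustration.

# Sketch: assemble a small bundle and write it out. 'demo:noop' and 'ev1'
# are invented names; 'ui' is assumed to be a Mercurial ui instance.
def writedemobundle(ui, fileobj):
    bundler = bundle20(ui)
    bundler.addparam('ev1', 'foo')      # advisory stream level parameter
    part = bundler.newpart('demo:noop', data='hello', mandatory=False)
    part.addparam('count', '1', mandatory=False)
    for chunk in bundler.getchunks():
        fileobj.write(chunk)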


class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))

    def _unpack(self, format):
        """unpack this struct format from the stream"""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream"""
        return changegroup.readexactly(self._fp, size)

    def seek(self, offset, whence=0):
        """move the underlying file pointer"""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def tell(self):
        """return the file offset, or None if file is not seekable"""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError as e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != 'HG':
        raise util.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise util.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, 'start processing of %s stream' % magicstring)
    return unbundler
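And the matching consumer side, as a sketch; it assumes the `read()` method that `unbundlepart` provides further down in this module (the class is not part of the lines shown here).

# Sketch: read back a stream produced by bundle20.getchunks().
def readdemobundle(ui, fileobj):
    unbundler = getunbundler(ui, fileobj)    # validates the 'HG' magic
    seen = []
    for part in unbundler.iterparts():
        # part.read() comes from unbundlepart, defined later in the module
        seen.append((part.type, dict(part.params), part.read()))
    return unbundler.params, seen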

class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` methods."""

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, 'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            for p in self._readexact(paramssize).split(' '):
                p = p.split('=', 1)
                p = [urllib.unquote(i) for i in p]
                if len(p) < 2:
                    p.append(None)
                self._processparam(*p)
                params[p[0]] = p[1]
        return params

    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory and this function will raise a KeyError when unknown.

        Note: no options are currently supported. Any input will be either
        ignored or failing.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
-        # Some logic will be later added here to try to process the option for
-        # a dict of known parameter.
-        if name[0].islower():
-            indebug(self.ui, "ignoring unknown parameter %r" % name)
-        else:
-            raise error.BundleUnknownFeatureError(params=(name,))
-
+        try:
+            handler = b2streamparamsmap[name.lower()]
+        except KeyError:
+            if name[0].islower():
+                indebug(self.ui, "ignoring unknown parameter %r" % name)
+            else:
+                raise error.BundleUnknownFeatureError(params=(name,))
+        else:
+            handler(self, name, value)

    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure param have been loaded
        self.params
        indebug(self.ui, 'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            part.seek(0, 2)
            headerblock = self._readpartheader()
        indebug(self.ui, 'end of bundle2 stream')

    def _readpartheader(self):
        """reads a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        return False

formatmap = {'20': unbundle20}

+b2streamparamsmap = {}
+
+def b2streamparamhandler(name):
+    """register a handler for a stream level parameter"""
+    def decorator(func):
+        assert name not in b2streamparamsmap
+        b2streamparamsmap[name] = func
+        return func
+    return decorator
+
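This map and decorator are the hook the patch adds: `_processparam` now looks the lower-cased parameter name up in `b2streamparamsmap` before falling back to the old advisory/mandatory handling. No real handler is registered by this change; a hypothetical registration would look like this, with an invented advisory parameter name.

# Hypothetical stream level parameter handler; 'ev1' is an invented
# advisory parameter name used purely for illustration.
@b2streamparamhandler('ev1')
def handleev1(unbundler, name, value):
    # invoked from unbundle20._processparam as handler(self, name, value)
    indebug(unbundler.ui, 'stream param %s=%r acknowledged' % (name, value))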
class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. Remote side
    should be able to safely ignore the advisory ones.

    Both data and parameters cannot be modified after the generation has begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data='', mandatory=True):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise RuntimeError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(self.type, self._mandatoryparams,
                              self._advisoryparams, self._data, self.mandatory)

    # methods used to defines the part content
    def __setdata(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data
    def __getdata(self):
        return self._data
    data = property(__getdata, __setdata)

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

779 # methods used to generates the bundle2 stream
791 # methods used to generates the bundle2 stream
780 def getchunks(self, ui):
792 def getchunks(self, ui):
781 if self._generated is not None:
793 if self._generated is not None:
782 raise RuntimeError('part can only be consumed once')
794 raise RuntimeError('part can only be consumed once')
783 self._generated = False
795 self._generated = False
784
796
785 if ui.debugflag:
797 if ui.debugflag:
786 msg = ['bundle2-output-part: "%s"' % self.type]
798 msg = ['bundle2-output-part: "%s"' % self.type]
787 if not self.mandatory:
799 if not self.mandatory:
788 msg.append(' (advisory)')
800 msg.append(' (advisory)')
789 nbmp = len(self.mandatoryparams)
801 nbmp = len(self.mandatoryparams)
790 nbap = len(self.advisoryparams)
802 nbap = len(self.advisoryparams)
791 if nbmp or nbap:
803 if nbmp or nbap:
792 msg.append(' (params:')
804 msg.append(' (params:')
793 if nbmp:
805 if nbmp:
794 msg.append(' %i mandatory' % nbmp)
806 msg.append(' %i mandatory' % nbmp)
795 if nbap:
807 if nbap:
796 msg.append(' %i advisory' % nbmp)
808 msg.append(' %i advisory' % nbmp)
797 msg.append(')')
809 msg.append(')')
798 if not self.data:
810 if not self.data:
799 msg.append(' empty payload')
811 msg.append(' empty payload')
800 elif util.safehasattr(self.data, 'next'):
812 elif util.safehasattr(self.data, 'next'):
801 msg.append(' streamed payload')
813 msg.append(' streamed payload')
802 else:
814 else:
803 msg.append(' %i bytes payload' % len(self.data))
815 msg.append(' %i bytes payload' % len(self.data))
804 msg.append('\n')
816 msg.append('\n')
805 ui.debug(''.join(msg))
817 ui.debug(''.join(msg))
806
818
807 #### header
819 #### header
808 if self.mandatory:
820 if self.mandatory:
809 parttype = self.type.upper()
821 parttype = self.type.upper()
810 else:
822 else:
811 parttype = self.type.lower()
823 parttype = self.type.lower()
812 outdebug(ui, 'part %s: "%s"' % (self.id, parttype))
824 outdebug(ui, 'part %s: "%s"' % (self.id, parttype))
813 ## parttype
825 ## parttype
814 header = [_pack(_fparttypesize, len(parttype)),
826 header = [_pack(_fparttypesize, len(parttype)),
815 parttype, _pack(_fpartid, self.id),
827 parttype, _pack(_fpartid, self.id),
816 ]
828 ]
817 ## parameters
829 ## parameters
818 # count
830 # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        outdebug(ui, 'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, 'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug('bundle2-generatorexit\n')
            raise
        except BaseException as exc:
            # backup exception data for later
            ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
                     % exc)
            exc_info = sys.exc_info()
            msg = 'unexpected error: %s' % exc
            interpart = bundlepart('error:abort', [('message', msg)],
                                   mandatory=False)
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, 'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            raise exc_info[0], exc_info[1], exc_info[2]
        # end of payload
        outdebug(ui, 'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different ways of providing data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


flaginterrupt = -1

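# A minimal, standalone sketch of the framing that ``bundlepart.getchunks``
# emits above: one header chunk preceded by its int32 size, then each payload
# chunk preceded by its own int32 size, then a terminating size of zero.  The
# '>i' format is a local assumption mirroring the ``_fpartheadersize`` and
# ``_fpayloadsize`` constants defined earlier in this module.
def _sketchframepart(headerchunk, payloadchunks):
    """yield the wire representation of one part (illustration only)"""
    import struct
    yield struct.pack('>i', len(headerchunk))
    yield headerchunk
    for chunk in payloadchunks:
        yield struct.pack('>i', len(chunk))
        yield chunk
    yield struct.pack('>i', 0)  # end-of-payload marker

# e.g. ``b''.join(_sketchframepart(b'HEADER', [b'data1', b'data2']))``
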
class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows the transmission of exceptions raised on the producer side
    during part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):

        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' opening out of band context\n')
        indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, 'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        _processpart(op, part)
        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' closing out of band context\n')

class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise RuntimeError('no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable('no repo access from stream interruption')

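# A minimal sketch of how a payload reader distinguishes regular chunks from
# the out-of-band interruption marker handled by the classes above: a chunk
# size equal to ``flaginterrupt`` (-1) means "read one full part now, out of
# band", a size of zero ends the payload, and anything else is a regular
# chunk of that many bytes.  ``readexact`` and ``processinterrupt`` are
# hypothetical stand-ins for the real helpers.
def _sketchreadpayload(fp, readexact, processinterrupt):
    import struct
    while True:
        size = struct.unpack('>i', readexact(fp, 4))[0]
        if size == 0:        # end of payload
            return
        elif size == -1:     # flaginterrupt: an out-of-band part follows
            processinterrupt(fp)
        elif size < 0:
            raise ValueError('negative payload chunk size: %i' % size)
        else:
            yield readexact(fp, size)
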
class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._payloadstream = None
        self._readheader()
        self._mandatory = None
        self._chunkindex = [] #(payload, file) position tuples for chunk starts
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to set up all logic-related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = dict(self.mandatoryparams)
        self.params.update(dict(self.advisoryparams))
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, 'Must start with chunk 0'
            self._chunkindex.append((0, super(unbundlepart, self).tell()))
        else:
            assert chunknum < len(self._chunkindex), \
                   'Unknown chunk %d' % chunknum
            super(unbundlepart, self).seek(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]
        payloadsize = self._unpack(_fpayloadsize)[0]
        indebug(self.ui, 'payload chunk size: %i' % payloadsize)
        while payloadsize:
            if payloadsize == flaginterrupt:
                # interruption detection, the handler will now read a
                # single part and process it.
                interrupthandler(self.ui, self._fp)()
            elif payloadsize < 0:
                msg = 'negative payload chunk size: %i' % payloadsize
                raise error.BundleValueError(msg)
            else:
                result = self._readexact(payloadsize)
                chunknum += 1
                pos += payloadsize
                if chunknum == len(self._chunkindex):
                    self._chunkindex.append((pos,
                                             super(unbundlepart, self).tell()))
                yield result
            payloadsize = self._unpack(_fpayloadsize)[0]
            indebug(self.ui, 'payload chunk size: %i' % payloadsize)

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError('Unknown chunk')

    def _readheader(self):
        """read the header and set up the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, 'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, 'part id: "%s"' % self.id)
        # extract mandatory bit from type
        self.mandatory = (self.type != self.type.lower())
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param values
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # the header has been read, mark the part as initialized
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug('bundle2-input-part: total payload size %i\n'
                              % self._pos)
            self.consumed = True
        return data

    def tell(self):
        return self._pos

    def seek(self, offset, whence=0):
        if whence == 0:
            newpos = offset
        elif whence == 1:
            newpos = self._pos + offset
        elif whence == 2:
            if not self.consumed:
                self.read()
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError('Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            self.read()
        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError('Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise util.Abort(_('Seek failed\n'))
            self._pos = newpos

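# A minimal, standalone sketch of the chunk index that ``unbundlepart.seek``
# relies on: each entry maps a payload offset to the file offset where that
# chunk's size prefix starts, so a payload position can be translated into
# "chunk number plus offset inside that chunk" without re-reading everything.
# The sample index values below are made up for illustration.
def _sketchfindchunk(chunkindex, pos):
    for chunk, (ppos, fpos) in enumerate(chunkindex):
        if ppos == pos:
            return chunk, 0
        elif ppos > pos:
            return chunk - 1, pos - chunkindex[chunk - 1][0]
    raise ValueError('Unknown chunk')

# e.g. with chunks starting at payload offsets 0, 4096 and 8192:
#   _sketchfindchunk([(0, 24), (4096, 4128), (8192, 8232)], 5000) == (1, 904)
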
# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {'HG20': (),
                'error': ('abort', 'unsupportedcontent', 'pushraced',
                          'pushkey'),
                'listkeys': (),
                'pushkey': (),
                'digests': tuple(sorted(util.DIGESTS.keys())),
                'remote-changegroup': ('http', 'https'),
                'hgtagsfnodes': (),
               }

def getrepocaps(repo, allowpushback=False):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.
    """
    caps = capabilities.copy()
    caps['changegroup'] = tuple(sorted(changegroup.packermap.keys()))
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple('V%i' % v for v in obsolete.formats)
        caps['obsmarkers'] = supportedformat
    if allowpushback:
        caps['pushback'] = ()
    return caps

def bundle2caps(remote):
    """return the bundle capabilities of a peer as a dict"""
    raw = remote.capable('bundle2')
    if not raw and raw != '':
        return {}
    capsblob = urllib.unquote(remote.capable('bundle2'))
    return decodecaps(capsblob)

def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict
    """
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

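# A minimal sketch of the kind of mutation the ``getrepocaps`` docstring
# alludes to, as it might appear in a third-party extension's ``extsetup``.
# The capability name 'myext' is hypothetical, and the use of
# ``extensions.wrapfunction`` is an assumption about how such an extension
# would typically hook in, not something this module mandates.
def _sketchextsetup(ui):
    from mercurial import bundle2, extensions
    def wrappedcaps(orig, repo, **kwargs):
        caps = orig(repo, **kwargs)
        caps['myext'] = ('1',)  # hypothetical capability and version
        return caps
    extensions.wrapfunction(bundle2, 'getrepocaps', wrappedcaps)

# For reference, ``obsmarkersversion`` above turns the advertised strings back
# into integers, e.g. obsmarkersversion({'obsmarkers': ('V0', 'V1')}) == [0, 1]
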
@parthandler('changegroup', ('version', 'nbchanges'))
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will be massively reworked
    before being inflicted on any end-user.
    """
    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    unpackerversion = inpart.params.get('version', '01')
    # We should raise an appropriate exception here
    unpacker = changegroup.packermap[unpackerversion][1]
    cg = unpacker(inpart, None)
    # the source and url passed here are overwritten by the ones contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    nbchangesets = None
    if 'nbchanges' in inpart.params:
        nbchangesets = int(inpart.params.get('nbchanges'))
    ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2',
                                     expectedtotal=nbchangesets)
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup', mandatory=False)
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

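# A minimal sketch of registering a handler for a custom part type with the
# same ``parthandler`` decorator used throughout the rest of this module.
# The part type 'x-example:echo' and its 'message' parameter are hypothetical;
# handlers registered this way are looked up by the lower-cased part type.
@parthandler('x-example:echo', ('message',))
def _sketchhandleecho(op, inpart):
    """echo a message parameter back to the user (illustration only)"""
    op.ui.status('echo: %s\n' % inpart.params['message'])
    # always drain the payload, even when it is unused
    inpart.read()
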
_remotechangegroupparams = tuple(['url', 'size', 'digests'] +
    ['digest:%s' % k for k in util.DIGESTS.keys()])
@parthandler('remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given a url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
      - url: the url of the bundle10.
      - size: the bundle10 file size. It is used to validate that what the
        client retrieved matches what the server knows about the bundle.
      - digests: a space-separated list of the digest types provided as
        parameters.
      - digest:<digest-type>: the hexadecimal representation of the digest
        with that name. Like the size, it is used to validate that what the
        client retrieved matches what the server knows about the bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params['url']
    except KeyError:
        raise util.Abort(_('remote-changegroup: missing "%s" param') % 'url')
    parsed_url = util.url(raw_url)
    if parsed_url.scheme not in capabilities['remote-changegroup']:
        raise util.Abort(_('remote-changegroup does not support %s urls') %
            parsed_url.scheme)

    try:
        size = int(inpart.params['size'])
    except ValueError:
        raise util.Abort(_('remote-changegroup: invalid value for param "%s"')
            % 'size')
    except KeyError:
        raise util.Abort(_('remote-changegroup: missing "%s" param') % 'size')

    digests = {}
    for typ in inpart.params.get('digests', '').split():
        param = 'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise util.Abort(_('remote-changegroup: missing "%s" param') %
                param)
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    from . import exchange
    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise util.Abort(_('%s: not a bundle version 1.0') %
            util.hidepassword(raw_url))
    ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup')
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except util.Abort as e:
        raise util.Abort(_('bundle at %s is corrupted:\n%s') %
            (util.hidepassword(raw_url), str(e)))
    assert not inpart.read()

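# A minimal sketch of the producer side of 'remote-changegroup': the part
# carries no payload, only parameters naming a bundle10 url, its size and one
# or more digests, matching what ``handleremotechangegroup`` above validates.
# ``bundler`` is assumed to be a ``bundle20`` instance and the url, data and
# digest values are made up for illustration.
def _sketchaddremotechangegroup(bundler):
    import hashlib
    rawdata = b'<bundle10 bytes previously published at the url below>'
    part = bundler.newpart('remote-changegroup')
    part.addparam('url', 'https://example.com/pull.hg')
    part.addparam('size', '%d' % len(rawdata))
    part.addparam('digests', 'sha1')
    part.addparam('digest:sha1', hashlib.sha1(rawdata).hexdigest())
    return part
# Note that ``newpart`` both creates the part and adds it to the bundler.
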
@parthandler('reply:changegroup', ('return', 'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    if heads != op.repo.heads():
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')

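# A minimal sketch of the 'check:heads' payload read back above: the raw
# concatenation of the 20-byte binary nodes the pushing client believes are
# the remote heads, consumed again in 20-byte slices.
def _sketchcheckheadsdata(headnodes):
    assert all(len(n) == 20 for n in headnodes)
    return b''.join(headnodes)

# A bundler-side caller would then do something along the lines of
#   bundler.newpart('check:heads', data=_sketchcheckheadsdata(heads))
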
@parthandler('output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(('remote: %s\n' % line))

@parthandler('replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

@parthandler('error:abort', ('message', 'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit an abort error over the wire"""
    raise util.Abort(inpart.params['message'], hint=inpart.params.get('hint'))

@parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
                               'in-reply-to'))
def handleerrorpushkey(op, inpart):
    """Used to transmit the failure of a mandatory pushkey over the wire"""
    kwargs = {}
    for name in ('namespace', 'key', 'new', 'old', 'ret'):
        value = inpart.params.get(name)
        if value is not None:
            kwargs[name] = value
    raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)

@parthandler('error:unsupportedcontent', ('parttype', 'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit an unknown-content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get('parttype')
    if parttype is not None:
        kwargs['parttype'] = parttype
    params = inpart.params.get('params')
    if params is not None:
        kwargs['params'] = params.split('\0')

    raise error.BundleUnknownFeatureError(**kwargs)

@parthandler('error:pushraced', ('message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit a push race error over the wire"""
    raise error.ResponseError(_('push failed:'), inpart.params['message'])

@parthandler('listkeys', ('namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params['namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add('listkeys', (namespace, r))

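# A minimal sketch of the producer side of 'listkeys': a 'namespace' parameter
# plus a payload produced by ``pushkey.encodekeys``, the counterpart of the
# ``pushkey.decodekeys`` call in ``handlelistkeys`` above.  The namespace
# content is fabricated and ``bundler`` is assumed to be a ``bundle20``.
def _sketchlistkeyspart(bundler):
    keys = {'@': '0' * 40}  # hypothetical bookmark -> hex node mapping
    part = bundler.newpart('listkeys', mandatory=False,
                           data=pushkey.encodekeys(keys.items()))
    part.addparam('namespace', 'bookmarks')
    return part
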
@parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params['namespace'])
    key = dec(inpart.params['key'])
    old = dec(inpart.params['old'])
    new = dec(inpart.params['new'])
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {'namespace': namespace,
              'key': key,
              'old': old,
              'new': new}
    op.records.add('pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart('reply:pushkey')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('return', '%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in ('namespace', 'key', 'new', 'old', 'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)

@parthandler('reply:pushkey', ('return', 'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params['return'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('pushkey', {'return': ret}, partid)

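# A minimal sketch of the producer side of 'pushkey': the four values are
# quoted with ``pushkey.encode``, the counterpart of the ``pushkey.decode``
# calls in ``handlepushkey`` above, and shipped as mandatory parameters.  The
# bookmark name and the old/new nodes are fabricated.
def _sketchpushkeypart(bundler, oldnode, newnode):
    enc = pushkey.encode
    part = bundler.newpart('pushkey')
    part.addparam('namespace', enc('bookmarks'))
    part.addparam('key', enc('@'))
    part.addparam('old', enc(oldnode))
    part.addparam('new', enc(newnode))
    return part
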
@parthandler('obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
        op.ui.write(('obsmarker-exchange: %i bytes received\n')
                    % len(markerdata))
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    if new:
        op.repo.ui.status(_('%i new obsolescence markers\n') % new)
    op.records.add('obsmarkers', {'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart('reply:obsmarkers')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('new', '%i' % new, mandatory=False)


@parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers request"""
    ret = int(inpart.params['new'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('obsmarkers', {'new': ret}, partid)

@parthandler('hgtagsfnodes')
def handlehgtagsfnodes(op, inpart):
    """Applies .hgtags fnodes cache entries to the local repo.

    Payload is pairs of 20 byte changeset nodes and filenodes.
    """
    cache = tags.hgtagsfnodescache(op.repo.unfiltered())

    count = 0
    while True:
        node = inpart.read(20)
        fnode = inpart.read(20)
        if len(node) < 20 or len(fnode) < 20:
            op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
            break
        cache.setfnode(node, fnode)
        count += 1

    cache.write()
    op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
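
# A minimal sketch of the 'hgtagsfnodes' payload consumed above: the raw
# concatenation of (changeset node, .hgtags filenode) pairs, 20 bytes each,
# with no count or separator.  The pairs are expected to come from the
# producer's own hgtagsfnodescache.
def _sketchhgtagsfnodesdata(pairs):
    chunks = []
    for node, fnode in pairs:
        assert len(node) == 20 and len(fnode) == 20
        chunks.append(node)
        chunks.append(fnode)
    return b''.join(chunks)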