bundle2: ignore errors seeking a bundle after an exception (issue4784)...
Gregory Szorc -
r32024:ad41739c default
@@ -1,1656 +1,1672 b''
1 1 # bundle2.py - generic container format to transmit arbitrary data.
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """Handling of the new bundle2 format
8 8
9 9 The goal of bundle2 is to act as an atomic container to transmit a set of
10 10 payloads in an application agnostic way. It consists of a sequence of "parts"
11 11 that will be handed to and processed by the application layer.
12 12
13 13
14 14 General format architecture
15 15 ===========================
16 16
17 17 The format is structured as follows:
18 18
19 19 - magic string
20 20 - stream level parameters
21 21 - payload parts (any number)
22 22 - end of stream marker.
23 23
24 24 The binary format
25 25 ============================
26 26
27 27 All numbers are unsigned and big-endian.
28 28
29 29 stream level parameters
30 30 ------------------------
31 31
32 32 The binary format is as follows:
33 33
34 34 :params size: int32
35 35
36 36 The total number of bytes used by the parameters
37 37
38 38 :params value: arbitrary number of bytes
39 39
40 40 A blob of `params size` containing the serialized version of all stream level
41 41 parameters.
42 42
43 43 The blob contains a space separated list of parameters. Parameters with a value
44 44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45 45
46 46 Empty names are forbidden.
47 47
48 48 Names MUST start with a letter. If this first letter is lower case, the
49 49 parameter is advisory and can be safely ignored. However, when the first
50 50 letter is capital, the parameter is mandatory and the bundling process MUST
51 51 stop if it is unable to process it.
52 52
53 53 Stream parameters use a simple textual format for two main reasons:
54 54
55 55 - Stream level parameters should remain simple and we want to discourage
56 56 any exotic usage.
57 57 - Textual data allows easy human inspection of a bundle2 header in case of
58 58 trouble.
59 59
60 60 Any application level options MUST go into a bundle2 part instead.
61 61
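The textual stream-parameter encoding described above is small enough to sketch. This is a minimal, self-contained Python 3 illustration (the module itself is Python 2); the helper names `encodestreamparams` and `decodestreamparams` are illustrative only — bundle2.py builds the blob in `bundle20._paramchunk` and parses it in `unbundle20._processallparams`.

```python
from urllib.parse import quote, unquote

def encodestreamparams(params):
    """serialize an ordered list of (name, value) pairs; value may be None"""
    blocks = []
    for name, value in params:
        token = quote(name, safe='')
        if value is not None:
            token = '%s=%s' % (token, quote(value, safe=''))
        blocks.append(token)
    return ' '.join(blocks)

def decodestreamparams(blob):
    """parse a stream parameter blob back into a dict (value None if absent)"""
    params = {}
    for token in blob.split(' '):
        if not token:
            continue
        pieces = [unquote(p) for p in token.split('=', 1)]
        if len(pieces) < 2:
            pieces.append(None)
        params[pieces[0]] = pieces[1]
    return params
```

A round trip such as `decodestreamparams(encodestreamparams([('Compression', 'BZ')]))` recovers the original pairs; the urlquoting is what keeps spaces and `=` inside values unambiguous.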
62 62 Payload part
63 63 ------------------------
64 64
65 65 The binary format is as follows:
66 66
67 67 :header size: int32
68 68
69 69 The total number of bytes used by the part header. When the header is empty
70 70 (size = 0) this is interpreted as the end of stream marker.
71 71
72 72 :header:
73 73
74 74 The header defines how to interpret the part. It contains two pieces of
75 75 data: the part type, and the part parameters.
76 76
77 77 The part type is used to route the part to an application level handler
78 78 that can interpret the payload.
79 79
80 80 Part parameters are passed to the application level handler. They are
81 81 meant to convey information that will help the application level object to
82 82 interpret the part payload.
83 83
84 84 The binary format of the header is as follows
85 85
86 86 :typesize: (one byte)
87 87
88 88 :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)
89 89
90 90 :partid: A 32-bit integer (unique in the bundle) that can be used to refer
91 91 to this part.
92 92
93 93 :parameters:
94 94
95 95 Part parameters may have arbitrary content; the binary structure is::
96 96
97 97 <mandatory-count><advisory-count><param-sizes><param-data>
98 98
99 99 :mandatory-count: 1 byte, number of mandatory parameters
100 100
101 101 :advisory-count: 1 byte, number of advisory parameters
102 102
103 103 :param-sizes:
104 104
105 105 N couples of bytes, where N is the total number of parameters. Each
106 106 couple contains (<size-of-key>, <size-of-value>) for one parameter.
107 107
108 108 :param-data:
109 109
110 110 A blob of bytes from which each parameter key and value can be
111 111 retrieved using the list of size couples stored in the previous
112 112 field.
113 113
114 114 Mandatory parameters come first, then the advisory ones.
115 115
116 116 Each parameter's key MUST be unique within the part.
117 117
118 118 :payload:
119 119
120 120 payload is a series of `<chunksize><chunkdata>`.
121 121
122 122 `chunksize` is an int32, `chunkdata` is plain bytes (as many as
123 123 `chunksize` says). The payload part is concluded by a zero size chunk.
124 124
125 125 The current implementation always produces either zero or one chunk.
126 126 This is an implementation limitation that will ultimately be lifted.
127 127
128 128 `chunksize` can be negative to trigger special case processing. No such
129 129 processing is in place yet.
130 130
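The `<chunksize><chunkdata>` payload framing above can be read with a short loop. The sketch below is a hedged illustration, not the module's own reader (which lives in `unbundlepart`); it deliberately rejects the negative special sizes, which the spec reserves but does not yet define.

```python
import struct
from io import BytesIO

def iterchunks(fp):
    """yield payload chunks until the zero-size terminator.

    A sketch of the framing only: negative (special-case) chunk sizes
    from the spec are not handled here.
    """
    while True:
        # each chunk is prefixed by a big-endian signed int32 size
        size = struct.unpack('>i', fp.read(4))[0]
        if size == 0:
            return  # zero-size chunk concludes the payload
        if size < 0:
            raise ValueError('special chunk size not supported: %i' % size)
        yield fp.read(size)

# a payload of two chunks followed by the terminator
stream = (struct.pack('>i', 3) + b'abc'
          + struct.pack('>i', 5) + b'hello'
          + struct.pack('>i', 0))
chunks = list(iterchunks(BytesIO(stream)))
```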
131 131 Bundle processing
132 132 ============================
133 133
134 134 Each part is processed in order using a "part handler". Handlers are registered
135 135 for a certain part type.
136 136
137 137 The matching of a part to its handler is case insensitive. The case of the
138 138 part type is used to know if a part is mandatory or advisory. If the part type
139 139 contains any uppercase char it is considered mandatory. When no handler is
140 140 known for a mandatory part, the process is aborted and an exception is raised.
141 141 If the part is advisory and no handler is known, the part is ignored. When the
142 142 process is aborted, the full bundle is still read from the stream to keep the
143 143 channel usable. But none of the parts read after an abort are processed. In the
144 144 future, dropping the stream may become an option for channels we do not care to
145 145 preserve.
146 146 """
147 147
148 148 from __future__ import absolute_import
149 149
150 150 import errno
151 151 import re
152 152 import string
153 153 import struct
154 154 import sys
155 155
156 156 from .i18n import _
157 157 from . import (
158 158 changegroup,
159 159 error,
160 160 obsolete,
161 161 pushkey,
162 162 pycompat,
163 163 tags,
164 164 url,
165 165 util,
166 166 )
167 167
168 168 urlerr = util.urlerr
169 169 urlreq = util.urlreq
170 170
171 171 _pack = struct.pack
172 172 _unpack = struct.unpack
173 173
174 174 _fstreamparamsize = '>i'
175 175 _fpartheadersize = '>i'
176 176 _fparttypesize = '>B'
177 177 _fpartid = '>I'
178 178 _fpayloadsize = '>i'
179 179 _fpartparamcount = '>BB'
180 180
181 181 preferedchunksize = 4096
182 182
183 183 _parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')
184 184
185 185 def outdebug(ui, message):
186 186 """debug regarding output stream (bundling)"""
187 187 if ui.configbool('devel', 'bundle2.debug', False):
188 188 ui.debug('bundle2-output: %s\n' % message)
189 189
190 190 def indebug(ui, message):
191 191 """debug on input stream (unbundling)"""
192 192 if ui.configbool('devel', 'bundle2.debug', False):
193 193 ui.debug('bundle2-input: %s\n' % message)
194 194
195 195 def validateparttype(parttype):
196 196 """raise ValueError if a parttype contains invalid character"""
197 197 if _parttypeforbidden.search(parttype):
198 198 raise ValueError(parttype)
199 199
200 200 def _makefpartparamsizes(nbparams):
201 201 """return a struct format to read part parameter sizes
202 202
203 203 The number of parameters is variable so we need to build that format
204 204 dynamically.
205 205 """
206 206 return '>'+('BB'*nbparams)
207 207
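To make the dynamically built struct format concrete: for two parameters, `_makefpartparamsizes` yields `'>BBBB'`, i.e. two (key-size, value-size) couples packed as unsigned bytes. A standalone reconstruction (same expression as the function above):

```python
import struct

def makefpartparamsizes(nbparams):
    # same construction as _makefpartparamsizes in bundle2.py
    return '>' + ('BB' * nbparams)

# two parameters -> two (key-size, value-size) couples, one byte each
fmt = makefpartparamsizes(2)
packed = struct.pack(fmt, 3, 5, 7, 0)   # key lens 3 and 7, value lens 5 and 0
sizes = struct.unpack(fmt, packed)
```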
208 208 parthandlermapping = {}
209 209
210 210 def parthandler(parttype, params=()):
211 211 """decorator that register a function as a bundle2 part handler
212 212
213 213 eg::
214 214
215 215 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
216 216 def myparttypehandler(...):
217 217 '''process a part of type "my part".'''
218 218 ...
219 219 """
220 220 validateparttype(parttype)
221 221 def _decorator(func):
222 222 lparttype = parttype.lower() # enforce lower case matching.
223 223 assert lparttype not in parthandlermapping
224 224 parthandlermapping[lparttype] = func
225 225 func.params = frozenset(params)
226 226 return func
227 227 return _decorator
228 228
229 229 class unbundlerecords(object):
230 230 """keep record of what happens during and unbundle
231 231
232 232 New records are added using `records.add('cat', obj)`. Where 'cat' is a
233 233 category of record and obj is an arbitrary object.
234 234
235 235 `records['cat']` will return all entries of this category 'cat'.
236 236
237 237 Iterating on the object itself will yield `('category', obj)` tuples
238 238 for all entries.
239 239
240 240 All iterations happen in chronological order.
241 241 """
242 242
243 243 def __init__(self):
244 244 self._categories = {}
245 245 self._sequences = []
246 246 self._replies = {}
247 247
248 248 def add(self, category, entry, inreplyto=None):
249 249 """add a new record of a given category.
250 250
251 251 The entry can then be retrieved in the list returned by
252 252 self['category']."""
253 253 self._categories.setdefault(category, []).append(entry)
254 254 self._sequences.append((category, entry))
255 255 if inreplyto is not None:
256 256 self.getreplies(inreplyto).add(category, entry)
257 257
258 258 def getreplies(self, partid):
259 259 """get the records that are replies to a specific part"""
260 260 return self._replies.setdefault(partid, unbundlerecords())
261 261
262 262 def __getitem__(self, cat):
263 263 return tuple(self._categories.get(cat, ()))
264 264
265 265 def __iter__(self):
266 266 return iter(self._sequences)
267 267
268 268 def __len__(self):
269 269 return len(self._sequences)
270 270
271 271 def __nonzero__(self):
272 272 return bool(self._sequences)
273 273
274 274 __bool__ = __nonzero__
275 275
276 276 class bundleoperation(object):
277 277 """an object that represents a single bundling process
278 278
279 279 Its purpose is to carry unbundle-related objects and states.
280 280
281 281 A new object should be created at the beginning of each bundle processing.
282 282 The object is to be returned by the processing function.
283 283
284 284 The object has very little content now; it will ultimately contain:
285 285 * an access to the repo the bundle is applied to,
286 286 * a ui object,
287 287 * a way to retrieve a transaction to add changes to the repo,
288 288 * a way to record the result of processing each part,
289 289 * a way to construct a bundle response when applicable.
290 290 """
291 291
292 292 def __init__(self, repo, transactiongetter, captureoutput=True):
293 293 self.repo = repo
294 294 self.ui = repo.ui
295 295 self.records = unbundlerecords()
296 296 self.gettransaction = transactiongetter
297 297 self.reply = None
298 298 self.captureoutput = captureoutput
299 299
300 300 class TransactionUnavailable(RuntimeError):
301 301 pass
302 302
303 303 def _notransaction():
304 304 """default method to get a transaction while processing a bundle
305 305
306 306 Raise an exception to highlight the fact that no transaction was expected
307 307 to be created"""
308 308 raise TransactionUnavailable()
309 309
310 310 def applybundle(repo, unbundler, tr, source=None, url=None, op=None):
311 311 # transform me into unbundler.apply() as soon as the freeze is lifted
312 312 tr.hookargs['bundle2'] = '1'
313 313 if source is not None and 'source' not in tr.hookargs:
314 314 tr.hookargs['source'] = source
315 315 if url is not None and 'url' not in tr.hookargs:
316 316 tr.hookargs['url'] = url
317 317 return processbundle(repo, unbundler, lambda: tr, op=op)
318 318
319 319 def processbundle(repo, unbundler, transactiongetter=None, op=None):
320 320 """This function process a bundle, apply effect to/from a repo
321 321
322 322 It iterates over each part then searches for and uses the proper handling
323 323 code to process the part. Parts are processed in order.
324 324
325 325 An unknown mandatory part will abort the process.
326 326
327 327 It is temporarily possible to provide a prebuilt bundleoperation to the
328 328 function. This is used to ensure output is properly propagated in case of
329 329 an error during the unbundling. This output capturing part will likely be
330 330 reworked and this ability will probably go away in the process.
331 331 """
332 332 if op is None:
333 333 if transactiongetter is None:
334 334 transactiongetter = _notransaction
335 335 op = bundleoperation(repo, transactiongetter)
336 336 # todo:
337 337 # - replace this with an init function soon.
338 338 # - exception catching
339 339 unbundler.params
340 340 if repo.ui.debugflag:
341 341 msg = ['bundle2-input-bundle:']
342 342 if unbundler.params:
343 343 msg.append(' %i params' % len(unbundler.params))
344 344 if op.gettransaction is None:
345 345 msg.append(' no-transaction')
346 346 else:
347 347 msg.append(' with-transaction')
348 348 msg.append('\n')
349 349 repo.ui.debug(''.join(msg))
350 350 iterparts = enumerate(unbundler.iterparts())
351 351 part = None
352 352 nbpart = 0
353 353 try:
354 354 for nbpart, part in iterparts:
355 355 _processpart(op, part)
356 356 except Exception as exc:
357 for nbpart, part in iterparts:
358 # consume the bundle content
359 part.seek(0, 2)
357 # Any exceptions seeking to the end of the bundle at this point are
358 # almost certainly related to the underlying stream being bad.
359 # And, chances are that the exception we're handling is related to
360 # getting in that bad state. So, we swallow the seeking error and
361 # re-raise the original error.
362 seekerror = False
363 try:
364 for nbpart, part in iterparts:
365 # consume the bundle content
366 part.seek(0, 2)
367 except Exception:
368 seekerror = True
369
360 370 # Small hack to let caller code distinguish exceptions from bundle2
361 371 # processing from processing the old format. This is mostly
362 372 # needed to handle different return codes to unbundle according to the
363 373 # type of bundle. We should probably clean up or drop this return code
364 374 # craziness in a future version.
365 375 exc.duringunbundle2 = True
366 376 salvaged = []
367 377 replycaps = None
368 378 if op.reply is not None:
369 379 salvaged = op.reply.salvageoutput()
370 380 replycaps = op.reply.capabilities
371 381 exc._replycaps = replycaps
372 382 exc._bundle2salvagedoutput = salvaged
373 raise
383
384 # Re-raising from a variable loses the original stack. So only use
385 # that form if we need to.
386 if seekerror:
387 raise exc
388 else:
389 raise
374 390 finally:
375 391 repo.ui.debug('bundle2-input-bundle: %i parts total\n' % nbpart)
376 392
377 393 return op
378 394
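The exception-handling change above (the issue4784 fix) boils down to one pattern: drain the rest of the bundle to keep the channel usable, but never let a failure while draining mask the original error. A minimal sketch of that control flow, with hypothetical `process`/`drain` callables standing in for `_processpart` and `part.seek(0, 2)`:

```python
def processall(parts, process, drain):
    """process parts in order; on failure, try to drain the remainder to
    keep the channel usable, but always surface the original exception
    even if draining fails too (the issue4784 behaviour sketched above)"""
    it = iter(parts)
    try:
        for part in it:
            process(part)
    except Exception as exc:
        drainerror = False
        try:
            for part in it:
                drain(part)  # consume remaining content
        except Exception:
            # the stream is probably already bad; swallow this and
            # re-raise the original error instead
            drainerror = True
        if drainerror:
            # re-raising from a variable loses the original traceback
            # (on Python 2), so only use this form when we must
            raise exc
        else:
            raise
```

Even when `drain` itself blows up, the caller sees the first exception, which is the one that actually describes what went wrong.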
379 395 def _processpart(op, part):
380 396 """process a single part from a bundle
381 397
382 398 The part is guaranteed to have been fully consumed when the function exits
383 399 (even if an exception is raised)."""
384 400 status = 'unknown' # used by debug output
385 401 hardabort = False
386 402 try:
387 403 try:
388 404 handler = parthandlermapping.get(part.type)
389 405 if handler is None:
390 406 status = 'unsupported-type'
391 407 raise error.BundleUnknownFeatureError(parttype=part.type)
392 408 indebug(op.ui, 'found a handler for part %r' % part.type)
393 409 unknownparams = part.mandatorykeys - handler.params
394 410 if unknownparams:
395 411 unknownparams = list(unknownparams)
396 412 unknownparams.sort()
397 413 status = 'unsupported-params (%s)' % unknownparams
398 414 raise error.BundleUnknownFeatureError(parttype=part.type,
399 415 params=unknownparams)
400 416 status = 'supported'
401 417 except error.BundleUnknownFeatureError as exc:
402 418 if part.mandatory: # mandatory parts
403 419 raise
404 420 indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
405 421 return # skip to part processing
406 422 finally:
407 423 if op.ui.debugflag:
408 424 msg = ['bundle2-input-part: "%s"' % part.type]
409 425 if not part.mandatory:
410 426 msg.append(' (advisory)')
411 427 nbmp = len(part.mandatorykeys)
412 428 nbap = len(part.params) - nbmp
413 429 if nbmp or nbap:
414 430 msg.append(' (params:')
415 431 if nbmp:
416 432 msg.append(' %i mandatory' % nbmp)
417 433 if nbap:
418 434 msg.append(' %i advisory' % nbap)
419 435 msg.append(')')
420 436 msg.append(' %s\n' % status)
421 437 op.ui.debug(''.join(msg))
422 438
423 439 # handler is called outside the above try block so that we don't
424 440 # risk catching KeyErrors from anything other than the
425 441 # parthandlermapping lookup (any KeyError raised by handler()
426 442 # itself represents a defect of a different variety).
427 443 output = None
428 444 if op.captureoutput and op.reply is not None:
429 445 op.ui.pushbuffer(error=True, subproc=True)
430 446 output = ''
431 447 try:
432 448 handler(op, part)
433 449 finally:
434 450 if output is not None:
435 451 output = op.ui.popbuffer()
436 452 if output:
437 453 outpart = op.reply.newpart('output', data=output,
438 454 mandatory=False)
439 455 outpart.addparam('in-reply-to', str(part.id), mandatory=False)
440 456 # If exiting or interrupted, do not attempt to seek the stream in the
441 457 # finally block below. This makes abort faster.
442 458 except (SystemExit, KeyboardInterrupt):
443 459 hardabort = True
444 460 raise
445 461 finally:
446 462 # consume the part content to not corrupt the stream.
447 463 if not hardabort:
448 464 part.seek(0, 2)
449 465
450 466
451 467 def decodecaps(blob):
452 468 """decode a bundle2 caps bytes blob into a dictionary
453 469
454 470 The blob is a list of capabilities (one per line)
455 471 Capabilities may have values using a line of the form::
456 472
457 473 capability=value1,value2,value3
458 474
459 475 The values are always a list."""
460 476 caps = {}
461 477 for line in blob.splitlines():
462 478 if not line:
463 479 continue
464 480 if '=' not in line:
465 481 key, vals = line, ()
466 482 else:
467 483 key, vals = line.split('=', 1)
468 484 vals = vals.split(',')
469 485 key = urlreq.unquote(key)
470 486 vals = [urlreq.unquote(v) for v in vals]
471 487 caps[key] = vals
472 488 return caps
473 489
474 490 def encodecaps(caps):
475 491 """encode a bundle2 caps dictionary into a bytes blob"""
476 492 chunks = []
477 493 for ca in sorted(caps):
478 494 vals = caps[ca]
479 495 ca = urlreq.quote(ca)
480 496 vals = [urlreq.quote(v) for v in vals]
481 497 if vals:
482 498 ca = "%s=%s" % (ca, ','.join(vals))
483 499 chunks.append(ca)
484 500 return '\n'.join(chunks)
485 501
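The `decodecaps`/`encodecaps` pair above round-trips cleanly. A self-contained Python 3 re-statement of the same logic (bundle2.py uses `urlreq.quote`/`urlreq.unquote`; the standalone function names here are illustrative):

```python
from urllib.parse import quote, unquote

def encodecaps(caps):
    """encode a caps dict {name: [values]} into a newline-separated blob"""
    chunks = []
    for ca in sorted(caps):
        vals = [quote(v, safe='') for v in caps[ca]]
        ca = quote(ca, safe='')
        if vals:
            ca = '%s=%s' % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)

def decodecaps(blob):
    """decode a caps blob back into a dict; values are always a list"""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, []
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        caps[unquote(key)] = [unquote(v) for v in vals]
    return caps
```

For example, `{'HG20': [], 'changegroup': ['01', '02']}` encodes to the two-line blob `HG20` / `changegroup=01,02` and decodes back unchanged.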
486 502 bundletypes = {
487 503 "": ("", 'UN'), # only when using unbundle on ssh and old http servers
488 504 # since the unification ssh accepts a header but there
489 505 # is no capability signaling it.
490 506 "HG20": (), # special-cased below
491 507 "HG10UN": ("HG10UN", 'UN'),
492 508 "HG10BZ": ("HG10", 'BZ'),
493 509 "HG10GZ": ("HG10GZ", 'GZ'),
494 510 }
495 511
496 512 # hgweb uses this list to communicate its preferred type
497 513 bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']
498 514
499 515 class bundle20(object):
500 516 """represent an outgoing bundle2 container
501 517
502 518 Use the `addparam` method to add stream level parameters, and `newpart` to
503 519 populate it. Then call `getchunks` to retrieve all the binary chunks of
504 520 data that compose the bundle2 container."""
505 521
506 522 _magicstring = 'HG20'
507 523
508 524 def __init__(self, ui, capabilities=()):
509 525 self.ui = ui
510 526 self._params = []
511 527 self._parts = []
512 528 self.capabilities = dict(capabilities)
513 529 self._compengine = util.compengines.forbundletype('UN')
514 530 self._compopts = None
515 531
516 532 def setcompression(self, alg, compopts=None):
517 533 """setup core part compression to <alg>"""
518 534 if alg in (None, 'UN'):
519 535 return
520 536 assert not any(n.lower() == 'compression' for n, v in self._params)
521 537 self.addparam('Compression', alg)
522 538 self._compengine = util.compengines.forbundletype(alg)
523 539 self._compopts = compopts
524 540
525 541 @property
526 542 def nbparts(self):
527 543 """total number of parts added to the bundler"""
528 544 return len(self._parts)
529 545
530 546 # methods used to define the bundle2 content
531 547 def addparam(self, name, value=None):
532 548 """add a stream level parameter"""
533 549 if not name:
534 550 raise ValueError('empty parameter name')
535 551 if name[0] not in string.letters:
536 552 raise ValueError('non letter first character: %r' % name)
537 553 self._params.append((name, value))
538 554
539 555 def addpart(self, part):
540 556 """add a new part to the bundle2 container
541 557
542 558 Parts contains the actual applicative payload."""
543 559 assert part.id is None
544 560 part.id = len(self._parts) # very cheap counter
545 561 self._parts.append(part)
546 562
547 563 def newpart(self, typeid, *args, **kwargs):
548 564 """create a new part and add it to the containers
549 565
550 566 As the part is directly added to the containers. For now, this means
551 567 that any failure to properly initialize the part after calling
552 568 ``newpart`` should result in a failure of the whole bundling process.
553 569
554 570 You can still fall back to manually create and add if you need better
555 571 control."""
556 572 part = bundlepart(typeid, *args, **kwargs)
557 573 self.addpart(part)
558 574 return part
559 575
560 576 # methods used to generate the bundle2 stream
561 577 def getchunks(self):
562 578 if self.ui.debugflag:
563 579 msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
564 580 if self._params:
565 581 msg.append(' (%i params)' % len(self._params))
566 582 msg.append(' %i parts total\n' % len(self._parts))
567 583 self.ui.debug(''.join(msg))
568 584 outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
569 585 yield self._magicstring
570 586 param = self._paramchunk()
571 587 outdebug(self.ui, 'bundle parameter: %s' % param)
572 588 yield _pack(_fstreamparamsize, len(param))
573 589 if param:
574 590 yield param
575 591 for chunk in self._compengine.compressstream(self._getcorechunk(),
576 592 self._compopts):
577 593 yield chunk
578 594
579 595 def _paramchunk(self):
580 596 """return a encoded version of all stream parameters"""
581 597 blocks = []
582 598 for par, value in self._params:
583 599 par = urlreq.quote(par)
584 600 if value is not None:
585 601 value = urlreq.quote(value)
586 602 par = '%s=%s' % (par, value)
587 603 blocks.append(par)
588 604 return ' '.join(blocks)
589 605
590 606 def _getcorechunk(self):
591 607 """yield chunk for the core part of the bundle
592 608
593 609 (all but headers and parameters)"""
594 610 outdebug(self.ui, 'start of parts')
595 611 for part in self._parts:
596 612 outdebug(self.ui, 'bundle part: "%s"' % part.type)
597 613 for chunk in part.getchunks(ui=self.ui):
598 614 yield chunk
599 615 outdebug(self.ui, 'end of bundle')
600 616 yield _pack(_fpartheadersize, 0)
601 617
602 618
603 619 def salvageoutput(self):
604 620 """return a list with a copy of all output parts in the bundle
605 621
606 622 This is meant to be used during error handling to make sure we preserve
607 623 server output"""
608 624 salvaged = []
609 625 for part in self._parts:
610 626 if part.type.startswith('output'):
611 627 salvaged.append(part.copy())
612 628 return salvaged
613 629
614 630
615 631 class unpackermixin(object):
616 632 """A mixin to extract bytes and struct data from a stream"""
617 633
618 634 def __init__(self, fp):
619 635 self._fp = fp
620 636
621 637 def _unpack(self, format):
622 638 """unpack this struct format from the stream
623 639
624 640 This method is meant for internal usage by the bundle2 protocol only.
625 641 It directly manipulates the low level stream, including bundle2 level
626 642 instructions.
627 643
628 644 Do not use it to implement higher-level logic or methods."""
629 645 data = self._readexact(struct.calcsize(format))
630 646 return _unpack(format, data)
631 647
632 648 def _readexact(self, size):
633 649 """read exactly <size> bytes from the stream
634 650
635 651 This method is meant for internal usage by the bundle2 protocol only.
636 652 It directly manipulates the low level stream, including bundle2 level
637 653 instructions.
638 654
639 655 Do not use it to implement higher-level logic or methods."""
640 656 return changegroup.readexactly(self._fp, size)
641 657
642 658 def getunbundler(ui, fp, magicstring=None):
643 659 """return a valid unbundler object for a given magicstring"""
644 660 if magicstring is None:
645 661 magicstring = changegroup.readexactly(fp, 4)
646 662 magic, version = magicstring[0:2], magicstring[2:4]
647 663 if magic != 'HG':
648 664 raise error.Abort(_('not a Mercurial bundle'))
649 665 unbundlerclass = formatmap.get(version)
650 666 if unbundlerclass is None:
651 667 raise error.Abort(_('unknown bundle version %s') % version)
652 668 unbundler = unbundlerclass(ui, fp)
653 669 indebug(ui, 'start processing of %s stream' % magicstring)
654 670 return unbundler
655 671
656 672 class unbundle20(unpackermixin):
657 673 """interpret a bundle2 stream
658 674
659 675 This class is fed with a binary stream and yields parts through its
660 676 `iterparts` method."""
661 677
662 678 _magicstring = 'HG20'
663 679
664 680 def __init__(self, ui, fp):
665 681 """If header is specified, we do not read it out of the stream."""
666 682 self.ui = ui
667 683 self._compengine = util.compengines.forbundletype('UN')
668 684 self._compressed = None
669 685 super(unbundle20, self).__init__(fp)
670 686
671 687 @util.propertycache
672 688 def params(self):
673 689 """dictionary of stream level parameters"""
674 690 indebug(self.ui, 'reading bundle2 stream parameters')
675 691 params = {}
676 692 paramssize = self._unpack(_fstreamparamsize)[0]
677 693 if paramssize < 0:
678 694 raise error.BundleValueError('negative bundle param size: %i'
679 695 % paramssize)
680 696 if paramssize:
681 697 params = self._readexact(paramssize)
682 698 params = self._processallparams(params)
683 699 return params
684 700
685 701 def _processallparams(self, paramsblock):
686 702 """"""
687 703 params = util.sortdict()
688 704 for p in paramsblock.split(' '):
689 705 p = p.split('=', 1)
690 706 p = [urlreq.unquote(i) for i in p]
691 707 if len(p) < 2:
692 708 p.append(None)
693 709 self._processparam(*p)
694 710 params[p[0]] = p[1]
695 711 return params
696 712
697 713
698 714 def _processparam(self, name, value):
699 715 """process a parameter, applying its effect if needed
700 716
701 717 Parameters starting with a lower case letter are advisory and will be
702 718 ignored when unknown. Those starting with an upper case letter are
703 719 mandatory, and this function will raise a KeyError when unknown.
704 720
705 721 Note: no options are currently supported. Any input will either be
706 722 ignored or fail.
707 723 """
708 724 if not name:
709 725 raise ValueError('empty parameter name')
710 726 if name[0] not in string.letters:
711 727 raise ValueError('non letter first character: %r' % name)
712 728 try:
713 729 handler = b2streamparamsmap[name.lower()]
714 730 except KeyError:
715 731 if name[0].islower():
716 732 indebug(self.ui, "ignoring unknown parameter %r" % name)
717 733 else:
718 734 raise error.BundleUnknownFeatureError(params=(name,))
719 735 else:
720 736 handler(self, name, value)
721 737
722 738 def _forwardchunks(self):
723 739 """utility to transfer a bundle2 as binary
724 740
725 741 This is made necessary by the fact that the 'getbundle' command over 'ssh'
726 742 has no way to know when the reply ends, relying on the bundle being
727 743 interpreted to know its end. This is terrible and we are sorry, but we
728 744 needed to move forward to get general delta enabled.
729 745 """
730 746 yield self._magicstring
731 747 assert 'params' not in vars(self)
732 748 paramssize = self._unpack(_fstreamparamsize)[0]
733 749 if paramssize < 0:
734 750 raise error.BundleValueError('negative bundle param size: %i'
735 751 % paramssize)
736 752 yield _pack(_fstreamparamsize, paramssize)
737 753 if paramssize:
738 754 params = self._readexact(paramssize)
739 755 self._processallparams(params)
740 756 yield params
741 757 assert self._compengine.bundletype == 'UN'
742 758 # From there, payload might need to be decompressed
743 759 self._fp = self._compengine.decompressorreader(self._fp)
744 760 emptycount = 0
745 761 while emptycount < 2:
746 762 # so we can brainlessly loop
747 763 assert _fpartheadersize == _fpayloadsize
748 764 size = self._unpack(_fpartheadersize)[0]
749 765 yield _pack(_fpartheadersize, size)
750 766 if size:
751 767 emptycount = 0
752 768 else:
753 769 emptycount += 1
754 770 continue
755 771 if size == flaginterrupt:
756 772 continue
757 773 elif size < 0:
758 774 raise error.BundleValueError('negative chunk size: %i' % size)
759 775 yield self._readexact(size)
760 776
761 777
762 778 def iterparts(self):
763 779 """yield all parts contained in the stream"""
764 780 # make sure params have been loaded
765 781 self.params
766 782 # From there, the payload needs to be decompressed
767 783 self._fp = self._compengine.decompressorreader(self._fp)
768 784 indebug(self.ui, 'start extraction of bundle2 parts')
769 785 headerblock = self._readpartheader()
770 786 while headerblock is not None:
771 787 part = unbundlepart(self.ui, headerblock, self._fp)
772 788 yield part
773 789 part.seek(0, 2)
774 790 headerblock = self._readpartheader()
775 791 indebug(self.ui, 'end of bundle2 stream')
776 792
777 793 def _readpartheader(self):
778 794 """reads a part header size and return the bytes blob
779 795
780 796 returns None if empty"""
781 797 headersize = self._unpack(_fpartheadersize)[0]
782 798 if headersize < 0:
783 799 raise error.BundleValueError('negative part header size: %i'
784 800 % headersize)
785 801 indebug(self.ui, 'part header size: %i' % headersize)
786 802 if headersize:
787 803 return self._readexact(headersize)
788 804 return None
789 805
790 806 def compressed(self):
791 807 self.params # load params
792 808 return self._compressed
793 809
794 810 def close(self):
795 811 """close underlying file"""
796 812 if util.safehasattr(self._fp, 'close'):
797 813 return self._fp.close()
798 814
799 815 formatmap = {'20': unbundle20}
800 816
801 817 b2streamparamsmap = {}
802 818
803 819 def b2streamparamhandler(name):
804 820 """register a handler for a stream level parameter"""
805 821 def decorator(func):
806 822 assert name not in formatmap
807 823 b2streamparamsmap[name] = func
808 824 return func
809 825 return decorator
810 826
811 827 @b2streamparamhandler('compression')
812 828 def processcompression(unbundler, param, value):
813 829 """read compression parameter and install payload decompression"""
814 830 if value not in util.compengines.supportedbundletypes:
815 831 raise error.BundleUnknownFeatureError(params=(param,),
816 832 values=(value,))
817 833 unbundler._compengine = util.compengines.forbundletype(value)
818 834 if value is not None:
819 835 unbundler._compressed = True
820 836
821 837 class bundlepart(object):
822 838 """A bundle2 part contains application level payload
823 839
824 840 The part `type` is used to route the part to the application level
825 841 handler.
826 842
827 843 The part payload is contained in ``part.data``. It could be raw bytes or a
828 844 generator of byte chunks.
829 845
830 846 You can add parameters to the part using the ``addparam`` method.
831 847 Parameters can be either mandatory (default) or advisory. Remote side
832 848 should be able to safely ignore the advisory ones.
833 849
834 850 Neither data nor parameters can be modified after generation has begun.
835 851 """
836 852
837 853 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
838 854 data='', mandatory=True):
839 855 validateparttype(parttype)
840 856 self.id = None
841 857 self.type = parttype
842 858 self._data = data
843 859 self._mandatoryparams = list(mandatoryparams)
844 860 self._advisoryparams = list(advisoryparams)
845 861 # checking for duplicated entries
846 862 self._seenparams = set()
847 863 for pname, __ in self._mandatoryparams + self._advisoryparams:
848 864 if pname in self._seenparams:
849 865 raise error.ProgrammingError('duplicated params: %s' % pname)
850 866 self._seenparams.add(pname)
851 867 # status of the part's generation:
852 868 # - None: not started,
853 869 # - False: currently generated,
854 870 # - True: generation done.
855 871 self._generated = None
856 872 self.mandatory = mandatory
857 873
858 874 def __repr__(self):
859 875 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
860 876 return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
861 877 % (cls, id(self), self.id, self.type, self.mandatory))
862 878
863 879 def copy(self):
864 880 """return a copy of the part
865 881
866 882 The new part have the very same content but no partid assigned yet.
867 883 Parts with generated data cannot be copied."""
868 884 assert not util.safehasattr(self.data, 'next')
869 885 return self.__class__(self.type, self._mandatoryparams,
870 886 self._advisoryparams, self._data, self.mandatory)
871 887
872 888 # methods used to defines the part content
873 889 @property
874 890 def data(self):
875 891 return self._data
876 892
877 893 @data.setter
878 894 def data(self, data):
879 895 if self._generated is not None:
880 896 raise error.ReadOnlyPartError('part is being generated')
881 897 self._data = data
882 898
883 899 @property
884 900 def mandatoryparams(self):
885 901 # make it an immutable tuple to force people through ``addparam``
886 902 return tuple(self._mandatoryparams)
887 903
888 904 @property
889 905 def advisoryparams(self):
890 906 # make it an immutable tuple to force people through ``addparam``
891 907 return tuple(self._advisoryparams)
892 908
893 909 def addparam(self, name, value='', mandatory=True):
894 910 """add a parameter to the part
895 911
896 912 If 'mandatory' is set to True, the remote handler must claim support
897 913 for this parameter or the unbundling will be aborted.
898 914
899 915 The 'name' and 'value' cannot exceed 255 bytes each.
900 916 """
901 917 if self._generated is not None:
902 918 raise error.ReadOnlyPartError('part is being generated')
903 919 if name in self._seenparams:
904 920 raise ValueError('duplicated params: %s' % name)
905 921 self._seenparams.add(name)
906 922 params = self._advisoryparams
907 923 if mandatory:
908 924 params = self._mandatoryparams
909 925 params.append((name, value))
910 926
    # methods used to generate the bundle2 stream
    def getchunks(self, ui):
        if self._generated is not None:
            raise error.ProgrammingError('part can only be consumed once')
        self._generated = False

        if ui.debugflag:
            msg = ['bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            if not self.data:
                msg.append(' empty payload')
            elif util.safehasattr(self.data, 'next'):
                msg.append(' streamed payload')
            else:
                msg.append(' %i bytes payload' % len(self.data))
            msg.append('\n')
            ui.debug(''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, 'part %s: "%s"' % (self.id, parttype))
        ## parttype
        header = [_pack(_fparttypesize, len(parttype)),
                  parttype, _pack(_fpartid, self.id),
                  ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        outdebug(ui, 'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, 'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug('bundle2-generatorexit\n')
            raise
        except BaseException as exc:
            # backup exception data for later
            ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
                     % exc)
            exc_info = sys.exc_info()
            msg = 'unexpected error: %s' % exc
            interpart = bundlepart('error:abort', [('message', msg)],
                                   mandatory=False)
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, 'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            if pycompat.ispy3:
                raise exc_info[0](exc_info[1]).with_traceback(exc_info[2])
            else:
                exec("""raise exc_info[0], exc_info[1], exc_info[2]""")
        # end of payload
        outdebug(ui, 'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


flaginterrupt = -1

class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows transmitting an exception raised on the producer side during
    part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """reads a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):

        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' opening out of band context\n')
        indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, 'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        _processpart(op, part)
        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' closing out of band context\n')

class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise error.ProgrammingError('no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable('no repo access from stream interruption')

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._payloadstream = None
        self._readheader()
        self._mandatory = None
        self._chunkindex = [] #(payload, file) position tuples for chunk starts
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to setup all logic related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = util.sortdict(self.mandatoryparams)
        self.params.update(self.advisoryparams)
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, 'Must start with chunk 0'
            self._chunkindex.append((0, self._tellfp()))
        else:
            assert chunknum < len(self._chunkindex), \
                'Unknown chunk %d' % chunknum
            self._seekfp(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]
        payloadsize = self._unpack(_fpayloadsize)[0]
        indebug(self.ui, 'payload chunk size: %i' % payloadsize)
        while payloadsize:
            if payloadsize == flaginterrupt:
                # interruption detection, the handler will now read a
                # single part and process it.
                interrupthandler(self.ui, self._fp)()
            elif payloadsize < 0:
                msg = 'negative payload chunk size: %i' % payloadsize
                raise error.BundleValueError(msg)
            else:
                result = self._readexact(payloadsize)
                chunknum += 1
                pos += payloadsize
                if chunknum == len(self._chunkindex):
                    self._chunkindex.append((pos, self._tellfp()))
                yield result
            payloadsize = self._unpack(_fpayloadsize)[0]
            indebug(self.ui, 'payload chunk size: %i' % payloadsize)

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError('Unknown chunk')

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, 'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, 'part id: "%s"' % self.id)
        # extract mandatory bit from type
        self.mandatory = (self.type != self.type.lower())
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param value
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug('bundle2-input-part: total payload size %i\n'
                              % self._pos)
            self.consumed = True
        return data

    def tell(self):
        return self._pos

    def seek(self, offset, whence=0):
        if whence == 0:
            newpos = offset
        elif whence == 1:
            newpos = self._pos + offset
        elif whence == 2:
            if not self.consumed:
                self.read()
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError('Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            self.read()
        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError('Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_('Seek failed\n'))
            self._pos = newpos

    def _seekfp(self, offset, whence=0):
        """move the underlying file pointer

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def _tellfp(self):
        """return the file offset, or None if file is not seekable

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError as e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None

# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {'HG20': (),
                'error': ('abort', 'unsupportedcontent', 'pushraced',
                          'pushkey'),
                'listkeys': (),
                'pushkey': (),
                'digests': tuple(sorted(util.DIGESTS.keys())),
                'remote-changegroup': ('http', 'https'),
                'hgtagsfnodes': (),
               }

def getrepocaps(repo, allowpushback=False):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.
    """
    caps = capabilities.copy()
    caps['changegroup'] = tuple(sorted(
        changegroup.supportedincomingversions(repo)))
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple('V%i' % v for v in obsolete.formats)
        caps['obsmarkers'] = supportedformat
    if allowpushback:
        caps['pushback'] = ()
    return caps

def bundle2caps(remote):
    """return the bundle capabilities of a peer as dict"""
    raw = remote.capable('bundle2')
    if not raw and raw != '':
        return {}
    capsblob = urlreq.unquote(remote.capable('bundle2'))
    return decodecaps(capsblob)

def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict
    """
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
                compopts=None):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
    """

    if bundletype == "HG20":
        bundle = bundle20(ui)
        bundle.setcompression(compression, compopts)
        part = bundle.newpart('changegroup', data=cg.getchunks())
        part.addparam('version', cg.version)
        if 'clcount' in cg.extras:
            part.addparam('nbchanges', str(cg.extras['clcount']),
                          mandatory=False)
        chunkiter = bundle.getchunks()
    else:
        # compression argument is only for the bundle2 case
        assert compression is None
        if cg.version != '01':
            raise error.Abort(_('old bundle types only support v1 '
                                'changegroups'))
        header, comp = bundletypes[bundletype]
        if comp not in util.compengines.supportedbundletypes:
            raise error.Abort(_('unknown stream compression type: %s')
                              % comp)
        compengine = util.compengines.forbundletype(comp)
        def chunkiter():
            yield header
            for chunk in compengine.compressstream(cg.getchunks(), compopts):
                yield chunk
        chunkiter = chunkiter()

    # parse the changegroup data, otherwise we will block
    # in case of sshrepo because we don't know the end of the stream
    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)

@parthandler('changegroup', ('version', 'nbchanges', 'treemanifest'))
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will be massively reworked
    before being inflicted on any end-user.
    """
    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    unpackerversion = inpart.params.get('version', '01')
    # We should raise an appropriate exception here
    cg = changegroup.getunbundler(unpackerversion, inpart, None)
    # the source and url passed here are overwritten by the one contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    nbchangesets = None
    if 'nbchanges' in inpart.params:
        nbchangesets = int(inpart.params.get('nbchanges'))
    if ('treemanifest' in inpart.params and
        'treemanifest' not in op.repo.requirements):
        if len(op.repo.changelog) != 0:
            raise error.Abort(_(
                "bundle contains tree manifests, but local repo is "
                "non-empty and does not use tree manifests"))
        op.repo.requirements.add('treemanifest')
        op.repo._applyopenerreqs()
        op.repo._writerequirements()
    ret = cg.apply(op.repo, 'bundle2', 'bundle2', expectedtotal=nbchangesets)
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup', mandatory=False)
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

_remotechangegroupparams = tuple(['url', 'size', 'digests'] +
                                 ['digest:%s' % k for k in util.DIGESTS.keys()])
@parthandler('remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given an url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
      - url: the url to the bundle10.
      - size: the bundle10 file size. It is used to validate what was
        retrieved by the client matches the server knowledge about the bundle.
      - digests: a space separated list of the digest types provided as
        parameters.
      - digest:<digest-type>: the hexadecimal representation of the digest with
        that name. Like the size, it is used to validate what was retrieved by
        the client matches what the server knows about the bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params['url']
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
    parsed_url = util.url(raw_url)
    if parsed_url.scheme not in capabilities['remote-changegroup']:
        raise error.Abort(_('remote-changegroup does not support %s urls') %
                          parsed_url.scheme)

    try:
        size = int(inpart.params['size'])
    except ValueError:
        raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
                          % 'size')
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')

    digests = {}
    for typ in inpart.params.get('digests', '').split():
        param = 'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise error.Abort(_('remote-changegroup: missing "%s" param') %
                              param)
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    from . import exchange
    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise error.Abort(_('%s: not a bundle version 1.0') %
                          util.hidepassword(raw_url))
    ret = cg.apply(op.repo, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup')
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except error.Abort as e:
        raise error.Abort(_('bundle at %s is corrupted:\n%s') %
                          (util.hidepassword(raw_url), str(e)))
    assert not inpart.read()

@parthandler('reply:changegroup', ('return', 'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    if sorted(heads) != sorted(op.repo.heads()):
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')

@parthandler('output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(_('remote: %s\n') % line)

@parthandler('replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

class AbortFromPart(error.Abort):
    """Sub-class of Abort that denotes an error from a bundle2 part."""

@parthandler('error:abort', ('message', 'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise AbortFromPart(inpart.params['message'],
                        hint=inpart.params.get('hint'))

@parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
                               'in-reply-to'))
def handleerrorpushkey(op, inpart):
    """Used to transmit failure of a mandatory pushkey over the wire"""
    kwargs = {}
    for name in ('namespace', 'key', 'new', 'old', 'ret'):
        value = inpart.params.get(name)
        if value is not None:
            kwargs[name] = value
    raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)

@parthandler('error:unsupportedcontent', ('parttype', 'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get('parttype')
    if parttype is not None:
        kwargs['parttype'] = parttype
    params = inpart.params.get('params')
    if params is not None:
        kwargs['params'] = params.split('\0')

    raise error.BundleUnknownFeatureError(**kwargs)

@parthandler('error:pushraced', ('message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_('push failed:'), inpart.params['message'])

@parthandler('listkeys', ('namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params['namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add('listkeys', (namespace, r))

@parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params['namespace'])
    key = dec(inpart.params['key'])
    old = dec(inpart.params['old'])
    new = dec(inpart.params['new'])
    # Grab the transaction to ensure that we have the lock before performing
    # the pushkey.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {'namespace': namespace,
              'key': key,
              'old': old,
              'new': new}
    op.records.add('pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart('reply:pushkey')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('return', '%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in ('namespace', 'key', 'new', 'old', 'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)

@parthandler('reply:pushkey', ('return', 'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params['return'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('pushkey', {'return': ret}, partid)

1604 1620 @parthandler('obsmarkers')
1605 1621 def handleobsmarker(op, inpart):
1606 1622 """add a stream of obsmarkers to the repo"""
1607 1623 tr = op.gettransaction()
1608 1624 markerdata = inpart.read()
1609 1625 if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
1610 1626 op.ui.write(('obsmarker-exchange: %i bytes received\n')
1611 1627 % len(markerdata))
1612 1628 # The mergemarkers call will crash if marker creation is not enabled.
1613 1629 # we want to avoid this if the part is advisory.
1614 1630 if not inpart.mandatory and op.repo.obsstore.readonly:
1615 1631 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled')
1616 1632 return
1617 1633 new = op.repo.obsstore.mergemarkers(tr, markerdata)
1618 1634 if new:
1619 1635 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
1620 1636 op.records.add('obsmarkers', {'new': new})
1621 1637 if op.reply is not None:
1622 1638 rpart = op.reply.newpart('reply:obsmarkers')
1623 1639 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1624 1640 rpart.addparam('new', '%i' % new, mandatory=False)
1625 1641
1626 1642
1627 1643 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
1628 1644 def handleobsmarkerreply(op, inpart):
1629 1645 """retrieve the result of an obsmarkers request"""
1630 1646 ret = int(inpart.params['new'])
1631 1647 partid = int(inpart.params['in-reply-to'])
1632 1648 op.records.add('obsmarkers', {'new': ret}, partid)
1633 1649
1634 1650 @parthandler('hgtagsfnodes')
1635 1651 def handlehgtagsfnodes(op, inpart):
1636 1652 """Applies .hgtags fnodes cache entries to the local repo.
1637 1653
1638 1654 Payload is pairs of 20 byte changeset nodes and filenodes.
1639 1655 """
1640 1656 # Grab the transaction so we ensure that we have the lock at this point.
1641 1657 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1642 1658 op.gettransaction()
1643 1659 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
1644 1660
1645 1661 count = 0
1646 1662 while True:
1647 1663 node = inpart.read(20)
1648 1664 fnode = inpart.read(20)
1649 1665 if len(node) < 20 or len(fnode) < 20:
1650 1666 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
1651 1667 break
1652 1668 cache.setfnode(node, fnode)
1653 1669 count += 1
1654 1670
1655 1671 cache.write()
1656 1672 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
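The fixed-width record format described in the handler's docstring (pairs of 20-byte changeset nodes and filenodes, with a short read signalling a truncated payload) can be parsed independently of Mercurial. A minimal sketch; the function name is illustrative and not part of the bundle2 API:

```python
import io

def read_node_pairs(fh):
    """Yield (node, fnode) pairs from a stream of fixed-width records.

    Mirrors the loop in handlehgtagsfnodes: each record is a 20-byte
    changeset node followed by a 20-byte filenode. A short read means
    the payload was truncated; the incomplete record is discarded.
    """
    while True:
        node = fh.read(20)
        fnode = fh.read(20)
        if len(node) < 20 or len(fnode) < 20:
            break  # incomplete trailing record; stop here
        yield node, fnode

# Two complete pairs followed by a truncated third record.
payload = b'a' * 20 + b'b' * 20 + b'c' * 20 + b'd' * 20 + b'e' * 5
pairs = list(read_node_pairs(io.BytesIO(payload)))
assert pairs == [(b'a' * 20, b'b' * 20), (b'c' * 20, b'd' * 20)]
```

Treating a short read as end-of-data (rather than an error) is what lets the real handler apply as many complete cache entries as arrived before a connection dropped.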
@@ -1,893 +1,896 b''
1 1 #require killdaemons serve zstd
2 2
3 3 Client version is embedded in HTTP request and is effectively dynamic. Pin the
4 4 version so behavior is deterministic.
5 5
6 6 $ cat > fakeversion.py << EOF
7 7 > from mercurial import util
8 8 > util.version = lambda: '4.2'
9 9 > EOF
10 10
11 11 $ cat >> $HGRCPATH << EOF
12 12 > [extensions]
13 13 > fakeversion = `pwd`/fakeversion.py
14 14 > EOF
15 15
16 16 $ hg init server0
17 17 $ cd server0
18 18 $ touch foo
19 19 $ hg -q commit -A -m initial
20 20
21 21 Also disable compression because zstd is optional and causes output to vary
22 22 and because debugging partial responses is hard when compression is involved
23 23
24 24 $ cat > .hg/hgrc << EOF
25 25 > [extensions]
26 26 > badserver = $TESTDIR/badserverext.py
27 27 > [server]
28 28 > compressionengines = none
29 29 > EOF
30 30
31 31 Failure to accept() socket should result in connection related error message
32 32
33 33 $ hg --config badserver.closebeforeaccept=true serve -p $HGPORT -d --pid-file=hg.pid
34 34 $ cat hg.pid > $DAEMON_PIDS
35 35
36 36 $ hg clone http://localhost:$HGPORT/ clone
37 37 abort: error: Connection reset by peer (no-windows !)
38 38 abort: error: An existing connection was forcibly closed by the remote host (windows !)
39 39 [255]
40 40
41 41 (The server exits on its own, but there is a race between that and starting a new server.
42 42 So ensure the process is dead.)
43 43
44 44 $ killdaemons.py $DAEMON_PIDS
45 45
46 46 Failure immediately after accept() should yield connection related error message
47 47
48 48 $ hg --config badserver.closeafteraccept=true serve -p $HGPORT -d --pid-file=hg.pid
49 49 $ cat hg.pid > $DAEMON_PIDS
50 50
51 51 $ hg clone http://localhost:$HGPORT/ clone
52 52 abort: error: Connection reset by peer (no-windows !)
53 53 abort: error: An existing connection was forcibly closed by the remote host (windows !)
54 54 [255]
55 55
56 56 $ killdaemons.py $DAEMON_PIDS
57 57
58 58 Failure to read all bytes in initial HTTP request should yield connection related error message
59 59
60 60 $ hg --config badserver.closeafterrecvbytes=1 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
61 61 $ cat hg.pid > $DAEMON_PIDS
62 62
63 63 TODO this error message is not very good
64 64
65 65 $ hg clone http://localhost:$HGPORT/ clone
66 66 abort: error: ''
67 67 [255]
68 68
69 69 $ killdaemons.py $DAEMON_PIDS
70 70
71 71 $ cat error.log
72 72 readline(1 from 65537) -> (1) G
73 73 read limit reached; closing socket
74 74
75 75 $ rm -f error.log
76 76
77 77 Same failure, but server reads full HTTP request line
78 78
79 79 $ hg --config badserver.closeafterrecvbytes=40 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
80 80 $ cat hg.pid > $DAEMON_PIDS
81 81 $ hg clone http://localhost:$HGPORT/ clone
82 82 abort: error: ''
83 83 [255]
84 84
85 85 $ killdaemons.py $DAEMON_PIDS
86 86
87 87 $ cat error.log
88 88 readline(40 from 65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
89 89 readline(7 from -1) -> (7) Accept-
90 90 read limit reached; closing socket
91 91
92 92 $ rm -f error.log
93 93
94 94 Failure on subsequent HTTP request on the same socket (cmd?batch)
95 95
96 96 $ hg --config badserver.closeafterrecvbytes=210 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
97 97 $ cat hg.pid > $DAEMON_PIDS
98 98 $ hg clone http://localhost:$HGPORT/ clone
99 99 abort: error: ''
100 100 [255]
101 101
102 102 $ killdaemons.py $DAEMON_PIDS
103 103
104 104 $ cat error.log
105 105 readline(210 from 65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
106 106 readline(177 from -1) -> (27) Accept-Encoding: identity\r\n
107 107 readline(150 from -1) -> (35) accept: application/mercurial-0.1\r\n
108 108 readline(115 from -1) -> (23) host: localhost:$HGPORT\r\n
109 109 readline(92 from -1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
110 110 readline(43 from -1) -> (2) \r\n
111 111 write(36) -> HTTP/1.1 200 Script output follows\r\n
112 112 write(23) -> Server: badhttpserver\r\n
113 113 write(37) -> Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
114 114 write(41) -> Content-Type: application/mercurial-0.1\r\n
115 115 write(21) -> Content-Length: 405\r\n
116 116 write(2) -> \r\n
117 117 write(405) -> lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
118 118 readline(41 from 65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
119 119 readline(15 from -1) -> (15) Accept-Encoding
120 120 read limit reached; closing socket
121 121 readline(210 from 65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
122 122 readline(184 from -1) -> (27) Accept-Encoding: identity\r\n
123 123 readline(157 from -1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
124 124 readline(128 from -1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
125 125 readline(87 from -1) -> (48) x-hgproto-1: 0.1 0.2 comp=zstd,zlib,none,bzip2\r\n
126 126 readline(39 from -1) -> (35) accept: application/mercurial-0.1\r\n
127 127 readline(4 from -1) -> (4) host
128 128 read limit reached; closing socket
129 129
130 130 $ rm -f error.log
131 131
132 132 Failure to read getbundle HTTP request
133 133
134 134 $ hg --config badserver.closeafterrecvbytes=292 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
135 135 $ cat hg.pid > $DAEMON_PIDS
136 136 $ hg clone http://localhost:$HGPORT/ clone
137 137 requesting all changes
138 138 abort: error: ''
139 139 [255]
140 140
141 141 $ killdaemons.py $DAEMON_PIDS
142 142
143 143 $ cat error.log
144 144 readline(292 from 65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
145 145 readline(259 from -1) -> (27) Accept-Encoding: identity\r\n
146 146 readline(232 from -1) -> (35) accept: application/mercurial-0.1\r\n
147 147 readline(197 from -1) -> (23) host: localhost:$HGPORT\r\n
148 148 readline(174 from -1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
149 149 readline(125 from -1) -> (2) \r\n
150 150 write(36) -> HTTP/1.1 200 Script output follows\r\n
151 151 write(23) -> Server: badhttpserver\r\n
152 152 write(37) -> Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
153 153 write(41) -> Content-Type: application/mercurial-0.1\r\n
154 154 write(21) -> Content-Length: 405\r\n
155 155 write(2) -> \r\n
156 156 write(405) -> lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
157 157 readline(123 from 65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
158 158 readline(97 from -1) -> (27) Accept-Encoding: identity\r\n
159 159 readline(70 from -1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
160 160 readline(41 from -1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
161 161 read limit reached; closing socket
162 162 readline(292 from 65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
163 163 readline(266 from -1) -> (27) Accept-Encoding: identity\r\n
164 164 readline(239 from -1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
165 165 readline(210 from -1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
166 166 readline(169 from -1) -> (48) x-hgproto-1: 0.1 0.2 comp=zstd,zlib,none,bzip2\r\n
167 167 readline(121 from -1) -> (35) accept: application/mercurial-0.1\r\n
168 168 readline(86 from -1) -> (23) host: localhost:$HGPORT\r\n
169 169 readline(63 from -1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
170 170 readline(14 from -1) -> (2) \r\n
171 171 write(36) -> HTTP/1.1 200 Script output follows\r\n
172 172 write(23) -> Server: badhttpserver\r\n
173 173 write(37) -> Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
174 174 write(41) -> Content-Type: application/mercurial-0.1\r\n
175 175 write(20) -> Content-Length: 42\r\n
176 176 write(2) -> \r\n
177 177 write(42) -> 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n;
178 178 readline(12 from 65537) -> (12) GET /?cmd=ge
179 179 read limit reached; closing socket
180 180 readline(292 from 65537) -> (30) GET /?cmd=getbundle HTTP/1.1\r\n
181 181 readline(262 from -1) -> (27) Accept-Encoding: identity\r\n
182 182 readline(235 from -1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
183 183 readline(206 from -1) -> (206) x-hgarg-1: bundlecaps=HG20%2Cbundle2%3DHG20%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Ali
184 184 read limit reached; closing socket
185 185
186 186 $ rm -f error.log
187 187
188 188 Now do a variation using POST to send arguments
189 189
190 190 $ hg --config experimental.httppostargs=true --config badserver.closeafterrecvbytes=315 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
191 191 $ cat hg.pid > $DAEMON_PIDS
192 192
193 193 $ hg clone http://localhost:$HGPORT/ clone
194 194 abort: error: ''
195 195 [255]
196 196
197 197 $ killdaemons.py $DAEMON_PIDS
198 198
199 199 $ cat error.log
200 200 readline(315 from 65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
201 201 readline(282 from -1) -> (27) Accept-Encoding: identity\r\n
202 202 readline(255 from -1) -> (35) accept: application/mercurial-0.1\r\n
203 203 readline(220 from -1) -> (23) host: localhost:$HGPORT\r\n
204 204 readline(197 from -1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
205 205 readline(148 from -1) -> (2) \r\n
206 206 write(36) -> HTTP/1.1 200 Script output follows\r\n
207 207 write(23) -> Server: badhttpserver\r\n
208 208 write(37) -> Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
209 209 write(41) -> Content-Type: application/mercurial-0.1\r\n
210 210 write(21) -> Content-Length: 418\r\n
211 211 write(2) -> \r\n
212 212 write(418) -> lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httppostargs httpmediatype=0.1rx,0.1tx,0.2tx compression=none
213 213 readline(146 from 65537) -> (27) POST /?cmd=batch HTTP/1.1\r\n
214 214 readline(119 from -1) -> (27) Accept-Encoding: identity\r\n
215 215 readline(92 from -1) -> (41) content-type: application/mercurial-0.1\r\n
216 216 readline(51 from -1) -> (19) vary: X-HgProto-1\r\n
217 217 readline(32 from -1) -> (19) x-hgargs-post: 28\r\n
218 218 readline(13 from -1) -> (13) x-hgproto-1:
219 219 read limit reached; closing socket
220 220 readline(315 from 65537) -> (27) POST /?cmd=batch HTTP/1.1\r\n
221 221 readline(288 from -1) -> (27) Accept-Encoding: identity\r\n
222 222 readline(261 from -1) -> (41) content-type: application/mercurial-0.1\r\n
223 223 readline(220 from -1) -> (19) vary: X-HgProto-1\r\n
224 224 readline(201 from -1) -> (19) x-hgargs-post: 28\r\n
225 225 readline(182 from -1) -> (48) x-hgproto-1: 0.1 0.2 comp=zstd,zlib,none,bzip2\r\n
226 226 readline(134 from -1) -> (35) accept: application/mercurial-0.1\r\n
227 227 readline(99 from -1) -> (20) content-length: 28\r\n
228 228 readline(79 from -1) -> (23) host: localhost:$HGPORT\r\n
229 229 readline(56 from -1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
230 230 readline(7 from -1) -> (2) \r\n
231 231 read(5 from 28) -> (5) cmds=
232 232 read limit reached, closing socket
233 233 write(36) -> HTTP/1.1 500 Internal Server Error\r\n
234 234
235 235 $ rm -f error.log
236 236
237 237 Now move on to partial server responses
238 238
239 239 Server sends a single character from the HTTP response line
240 240
241 241 $ hg --config badserver.closeaftersendbytes=1 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
242 242 $ cat hg.pid > $DAEMON_PIDS
243 243
244 244 $ hg clone http://localhost:$HGPORT/ clone
245 245 abort: error: H
246 246 [255]
247 247
248 248 $ killdaemons.py $DAEMON_PIDS
249 249
250 250 $ cat error.log
251 251 readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
252 252 readline(-1) -> (27) Accept-Encoding: identity\r\n
253 253 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
254 254 readline(-1) -> (23) host: localhost:$HGPORT\r\n
255 255 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
256 256 readline(-1) -> (2) \r\n
257 257 write(1 from 36) -> (0) H
258 258 write limit reached; closing socket
259 259 write(36) -> HTTP/1.1 500 Internal Server Error\r\n
260 260
261 261 $ rm -f error.log
262 262
263 263 Server sends an incomplete capabilities response body
264 264
265 265 $ hg --config badserver.closeaftersendbytes=180 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
266 266 $ cat hg.pid > $DAEMON_PIDS
267 267
268 268 $ hg clone http://localhost:$HGPORT/ clone
269 269 abort: HTTP request error (incomplete response; expected 385 bytes got 20)
270 270 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
271 271 [255]
272 272
273 273 $ killdaemons.py $DAEMON_PIDS
274 274
275 275 $ cat error.log
276 276 readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
277 277 readline(-1) -> (27) Accept-Encoding: identity\r\n
278 278 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
279 279 readline(-1) -> (23) host: localhost:$HGPORT\r\n
280 280 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
281 281 readline(-1) -> (2) \r\n
282 282 write(36 from 36) -> (144) HTTP/1.1 200 Script output follows\r\n
283 283 write(23 from 23) -> (121) Server: badhttpserver\r\n
284 284 write(37 from 37) -> (84) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
285 285 write(41 from 41) -> (43) Content-Type: application/mercurial-0.1\r\n
286 286 write(21 from 21) -> (22) Content-Length: 405\r\n
287 287 write(2 from 2) -> (20) \r\n
288 288 write(20 from 405) -> (0) lookup changegroupsu
289 289 write limit reached; closing socket
290 290
291 291 $ rm -f error.log
292 292
293 293 Server sends incomplete headers for batch request
294 294
295 295 $ hg --config badserver.closeaftersendbytes=695 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
296 296 $ cat hg.pid > $DAEMON_PIDS
297 297
298 298 TODO this output is horrible
299 299
300 300 $ hg clone http://localhost:$HGPORT/ clone
301 301 abort: 'http://localhost:$HGPORT/' does not appear to be an hg repository:
302 302 ---%<--- (application/mercuria)
303 303
304 304 ---%<---
305 305 !
306 306 [255]
307 307
308 308 $ killdaemons.py $DAEMON_PIDS
309 309
310 310 $ cat error.log
311 311 readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
312 312 readline(-1) -> (27) Accept-Encoding: identity\r\n
313 313 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
314 314 readline(-1) -> (23) host: localhost:$HGPORT\r\n
315 315 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
316 316 readline(-1) -> (2) \r\n
317 317 write(36 from 36) -> (659) HTTP/1.1 200 Script output follows\r\n
318 318 write(23 from 23) -> (636) Server: badhttpserver\r\n
319 319 write(37 from 37) -> (599) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
320 320 write(41 from 41) -> (558) Content-Type: application/mercurial-0.1\r\n
321 321 write(21 from 21) -> (537) Content-Length: 405\r\n
322 322 write(2 from 2) -> (535) \r\n
323 323 write(405 from 405) -> (130) lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
324 324 readline(65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
325 325 readline(-1) -> (27) Accept-Encoding: identity\r\n
326 326 readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
327 327 readline(-1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
328 328 readline(-1) -> (48) x-hgproto-1: 0.1 0.2 comp=zstd,zlib,none,bzip2\r\n
329 329 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
330 330 readline(-1) -> (23) host: localhost:$HGPORT\r\n
331 331 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
332 332 readline(-1) -> (2) \r\n
333 333 write(36 from 36) -> (94) HTTP/1.1 200 Script output follows\r\n
334 334 write(23 from 23) -> (71) Server: badhttpserver\r\n
335 335 write(37 from 37) -> (34) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
336 336 write(34 from 41) -> (0) Content-Type: application/mercuria
337 337 write limit reached; closing socket
338 338 write(36) -> HTTP/1.1 500 Internal Server Error\r\n
339 339
340 340 $ rm -f error.log
341 341
342 342 Server sends an incomplete HTTP response body to batch request
343 343
344 344 $ hg --config badserver.closeaftersendbytes=760 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
345 345 $ cat hg.pid > $DAEMON_PIDS
346 346
347 347 TODO client spews a stack due to uncaught ValueError in batch.results()
348 348 $ hg clone http://localhost:$HGPORT/ clone 2> /dev/null
349 349 [1]
350 350
351 351 $ killdaemons.py $DAEMON_PIDS
352 352
353 353 $ cat error.log
354 354 readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
355 355 readline(-1) -> (27) Accept-Encoding: identity\r\n
356 356 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
357 357 readline(-1) -> (23) host: localhost:$HGPORT\r\n
358 358 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
359 359 readline(-1) -> (2) \r\n
360 360 write(36 from 36) -> (724) HTTP/1.1 200 Script output follows\r\n
361 361 write(23 from 23) -> (701) Server: badhttpserver\r\n
362 362 write(37 from 37) -> (664) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
363 363 write(41 from 41) -> (623) Content-Type: application/mercurial-0.1\r\n
364 364 write(21 from 21) -> (602) Content-Length: 405\r\n
365 365 write(2 from 2) -> (600) \r\n
366 366 write(405 from 405) -> (195) lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
367 367 readline(65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
368 368 readline(-1) -> (27) Accept-Encoding: identity\r\n
369 369 readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
370 370 readline(-1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
371 371 readline(-1) -> (48) x-hgproto-1: 0.1 0.2 comp=zstd,zlib,none,bzip2\r\n
372 372 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
373 373 readline(-1) -> (23) host: localhost:$HGPORT\r\n
374 374 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
375 375 readline(-1) -> (2) \r\n
376 376 write(36 from 36) -> (159) HTTP/1.1 200 Script output follows\r\n
377 377 write(23 from 23) -> (136) Server: badhttpserver\r\n
378 378 write(37 from 37) -> (99) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
379 379 write(41 from 41) -> (58) Content-Type: application/mercurial-0.1\r\n
380 380 write(20 from 20) -> (38) Content-Length: 42\r\n
381 381 write(2 from 2) -> (36) \r\n
382 382 write(36 from 42) -> (0) 96ee1d7354c4ad7372047672c36a1f561e3a
383 383 write limit reached; closing socket
384 384
385 385 $ rm -f error.log
386 386
387 387 Server sends incomplete headers for getbundle response
388 388
389 389 $ hg --config badserver.closeaftersendbytes=895 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
390 390 $ cat hg.pid > $DAEMON_PIDS
391 391
392 392 TODO this output is terrible
393 393
394 394 $ hg clone http://localhost:$HGPORT/ clone
395 395 requesting all changes
396 396 abort: 'http://localhost:$HGPORT/' does not appear to be an hg repository:
397 397 ---%<--- (application/mercuri)
398 398
399 399 ---%<---
400 400 !
401 401 [255]
402 402
403 403 $ killdaemons.py $DAEMON_PIDS
404 404
405 405 $ cat error.log
406 406 readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
407 407 readline(-1) -> (27) Accept-Encoding: identity\r\n
408 408 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
409 409 readline(-1) -> (23) host: localhost:$HGPORT\r\n
410 410 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
411 411 readline(-1) -> (2) \r\n
412 412 write(36 from 36) -> (859) HTTP/1.1 200 Script output follows\r\n
413 413 write(23 from 23) -> (836) Server: badhttpserver\r\n
414 414 write(37 from 37) -> (799) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
415 415 write(41 from 41) -> (758) Content-Type: application/mercurial-0.1\r\n
416 416 write(21 from 21) -> (737) Content-Length: 405\r\n
417 417 write(2 from 2) -> (735) \r\n
418 418 write(405 from 405) -> (330) lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
419 419 readline(65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
420 420 readline(-1) -> (27) Accept-Encoding: identity\r\n
421 421 readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
422 422 readline(-1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
423 423 readline(-1) -> (48) x-hgproto-1: 0.1 0.2 comp=zstd,zlib,none,bzip2\r\n
424 424 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
425 425 readline(-1) -> (23) host: localhost:$HGPORT\r\n
426 426 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
427 427 readline(-1) -> (2) \r\n
428 428 write(36 from 36) -> (294) HTTP/1.1 200 Script output follows\r\n
429 429 write(23 from 23) -> (271) Server: badhttpserver\r\n
430 430 write(37 from 37) -> (234) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
431 431 write(41 from 41) -> (193) Content-Type: application/mercurial-0.1\r\n
432 432 write(20 from 20) -> (173) Content-Length: 42\r\n
433 433 write(2 from 2) -> (171) \r\n
434 434 write(42 from 42) -> (129) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n;
435 435 readline(65537) -> (30) GET /?cmd=getbundle HTTP/1.1\r\n
436 436 readline(-1) -> (27) Accept-Encoding: identity\r\n
437 437 readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
438 438 readline(-1) -> (396) x-hgarg-1: bundlecaps=HG20%2Cbundle2%3DHG20%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n
439 439 readline(-1) -> (48) x-hgproto-1: 0.1 0.2 comp=zstd,zlib,none,bzip2\r\n
440 440 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
441 441 readline(-1) -> (23) host: localhost:$HGPORT\r\n
442 442 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
443 443 readline(-1) -> (2) \r\n
444 444 write(36 from 36) -> (93) HTTP/1.1 200 Script output follows\r\n
445 445 write(23 from 23) -> (70) Server: badhttpserver\r\n
446 446 write(37 from 37) -> (33) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
447 447 write(33 from 41) -> (0) Content-Type: application/mercuri
448 448 write limit reached; closing socket
449 449 write(36) -> HTTP/1.1 500 Internal Server Error\r\n
450 450
451 451 $ rm -f error.log
452 452
453 453 Server sends empty HTTP body for getbundle
454 454
455 455 $ hg --config badserver.closeaftersendbytes=933 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
456 456 $ cat hg.pid > $DAEMON_PIDS
457 457
458 458 $ hg clone http://localhost:$HGPORT/ clone
459 459 requesting all changes
460 460 abort: HTTP request error (incomplete response)
461 461 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
462 462 [255]
463 463
464 464 $ killdaemons.py $DAEMON_PIDS
465 465
466 466 $ cat error.log
467 467 readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
468 468 readline(-1) -> (27) Accept-Encoding: identity\r\n
469 469 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
470 470 readline(-1) -> (23) host: localhost:$HGPORT\r\n
471 471 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
472 472 readline(-1) -> (2) \r\n
473 473 write(36 from 36) -> (897) HTTP/1.1 200 Script output follows\r\n
474 474 write(23 from 23) -> (874) Server: badhttpserver\r\n
475 475 write(37 from 37) -> (837) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
476 476 write(41 from 41) -> (796) Content-Type: application/mercurial-0.1\r\n
477 477 write(21 from 21) -> (775) Content-Length: 405\r\n
478 478 write(2 from 2) -> (773) \r\n
479 479 write(405 from 405) -> (368) lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
480 480 readline(65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
481 481 readline(-1) -> (27) Accept-Encoding: identity\r\n
482 482 readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
483 483 readline(-1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
484 484 readline(-1) -> (48) x-hgproto-1: 0.1 0.2 comp=zstd,zlib,none,bzip2\r\n
485 485 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
486 486 readline(-1) -> (23) host: localhost:$HGPORT\r\n
487 487 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
488 488 readline(-1) -> (2) \r\n
489 489 write(36 from 36) -> (332) HTTP/1.1 200 Script output follows\r\n
490 490 write(23 from 23) -> (309) Server: badhttpserver\r\n
491 491 write(37 from 37) -> (272) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
492 492 write(41 from 41) -> (231) Content-Type: application/mercurial-0.1\r\n
493 493 write(20 from 20) -> (211) Content-Length: 42\r\n
494 494 write(2 from 2) -> (209) \r\n
495 495 write(42 from 42) -> (167) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n;
496 496 readline(65537) -> (30) GET /?cmd=getbundle HTTP/1.1\r\n
497 497 readline(-1) -> (27) Accept-Encoding: identity\r\n
498 498 readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
499 499 readline(-1) -> (396) x-hgarg-1: bundlecaps=HG20%2Cbundle2%3DHG20%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n
500 500 readline(-1) -> (48) x-hgproto-1: 0.1 0.2 comp=zstd,zlib,none,bzip2\r\n
501 501 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
502 502 readline(-1) -> (23) host: localhost:$HGPORT\r\n
503 503 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
504 504 readline(-1) -> (2) \r\n
505 505 write(36 from 36) -> (131) HTTP/1.1 200 Script output follows\r\n
506 506 write(23 from 23) -> (108) Server: badhttpserver\r\n
507 507 write(37 from 37) -> (71) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
508 508 write(41 from 41) -> (30) Content-Type: application/mercurial-0.2\r\n
509 509 write(28 from 28) -> (2) Transfer-Encoding: chunked\r\n
510 510 write(2 from 2) -> (0) \r\n
511 511 write limit reached; closing socket
512 512 write(36) -> HTTP/1.1 500 Internal Server Error\r\n
513 513
514 514 $ rm -f error.log
515 515
516 516 Server sends partial compression string
517 517
518 518 $ hg --config badserver.closeaftersendbytes=945 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
519 519 $ cat hg.pid > $DAEMON_PIDS
520 520
521 521 $ hg clone http://localhost:$HGPORT/ clone
522 522 requesting all changes
523 523 abort: HTTP request error (incomplete response; expected 1 bytes got 3)
524 524 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
525 525 [255]
526 526
527 527 $ killdaemons.py $DAEMON_PIDS
528 528
529 529 $ cat error.log
530 530 readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
531 531 readline(-1) -> (27) Accept-Encoding: identity\r\n
532 532 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
533 533 readline(-1) -> (23) host: localhost:$HGPORT\r\n
534 534 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
535 535 readline(-1) -> (2) \r\n
536 536 write(36 from 36) -> (909) HTTP/1.1 200 Script output follows\r\n
537 537 write(23 from 23) -> (886) Server: badhttpserver\r\n
538 538 write(37 from 37) -> (849) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
539 539 write(41 from 41) -> (808) Content-Type: application/mercurial-0.1\r\n
540 540 write(21 from 21) -> (787) Content-Length: 405\r\n
541 541 write(2 from 2) -> (785) \r\n
542 542 write(405 from 405) -> (380) lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
543 543 readline(65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
544 544 readline(-1) -> (27) Accept-Encoding: identity\r\n
545 545 readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
546 546 readline(-1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
547 547 readline(-1) -> (48) x-hgproto-1: 0.1 0.2 comp=zstd,zlib,none,bzip2\r\n
548 548 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
549 549 readline(-1) -> (23) host: localhost:$HGPORT\r\n
550 550 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
551 551 readline(-1) -> (2) \r\n
552 552 write(36 from 36) -> (344) HTTP/1.1 200 Script output follows\r\n
553 553 write(23 from 23) -> (321) Server: badhttpserver\r\n
554 554 write(37 from 37) -> (284) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
555 555 write(41 from 41) -> (243) Content-Type: application/mercurial-0.1\r\n
556 556 write(20 from 20) -> (223) Content-Length: 42\r\n
557 557 write(2 from 2) -> (221) \r\n
558 558 write(42 from 42) -> (179) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n;
559 559 readline(65537) -> (30) GET /?cmd=getbundle HTTP/1.1\r\n
560 560 readline(-1) -> (27) Accept-Encoding: identity\r\n
561 561 readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
562 562 readline(-1) -> (396) x-hgarg-1: bundlecaps=HG20%2Cbundle2%3DHG20%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n
563 563 readline(-1) -> (48) x-hgproto-1: 0.1 0.2 comp=zstd,zlib,none,bzip2\r\n
564 564 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
565 565 readline(-1) -> (23) host: localhost:$HGPORT\r\n
566 566 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
567 567 readline(-1) -> (2) \r\n
568 568 write(36 from 36) -> (143) HTTP/1.1 200 Script output follows\r\n
569 569 write(23 from 23) -> (120) Server: badhttpserver\r\n
570 570 write(37 from 37) -> (83) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
571 571 write(41 from 41) -> (42) Content-Type: application/mercurial-0.2\r\n
572 572 write(28 from 28) -> (14) Transfer-Encoding: chunked\r\n
573 573 write(2 from 2) -> (12) \r\n
574 574 write(6 from 6) -> (6) 1\\r\\n\x04\\r\\n (esc)
575 575 write(6 from 9) -> (0) 4\r\nnon
576 576 write limit reached; closing socket
577 577 write(27) -> 15\r\nInternal Server Error\r\n
578 578
579 579 $ rm -f error.log
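The write lines above show HTTP/1.1 chunked transfer framing: each chunk is a hex size, CRLF, the payload, CRLF, so `4\r\nnone\r\n` carries the 4-byte body `none`, and the test cuts the socket mid-chunk. A minimal sketch of that framing (helper names are illustrative, not part of the test harness):

```python
def encode_chunk(data):
    # HTTP/1.1 chunked transfer framing: hex size, CRLF, payload, CRLF.
    return b'%x\r\n%s\r\n' % (len(data), data)

def encode_last_chunk():
    # A zero-size chunk ("0\r\n\r\n") terminates the response body.
    return b'0\r\n\r\n'
```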
580 580
581 581 Server sends partial bundle2 header magic
582 582
583 583 $ hg --config badserver.closeaftersendbytes=954 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
584 584 $ cat hg.pid > $DAEMON_PIDS
585 585
586 586 $ hg clone http://localhost:$HGPORT/ clone
587 587 requesting all changes
588 588 abort: HTTP request error (incomplete response; expected 1 bytes got 3)
589 589 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
590 590 [255]
591 591
592 592 $ killdaemons.py $DAEMON_PIDS
593 593
594 594 $ tail -7 error.log
595 595 write(28 from 28) -> (23) Transfer-Encoding: chunked\r\n
596 596 write(2 from 2) -> (21) \r\n
597 597 write(6 from 6) -> (15) 1\\r\\n\x04\\r\\n (esc)
598 598 write(9 from 9) -> (6) 4\r\nnone\r\n
599 599 write(6 from 9) -> (0) 4\r\nHG2
600 600 write limit reached; closing socket
601 601 write(27) -> 15\r\nInternal Server Error\r\n
602 602
603 603 $ rm -f error.log
604 604
605 605 Server sends incomplete bundle2 stream params length
606 606
607 607 $ hg --config badserver.closeaftersendbytes=963 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
608 608 $ cat hg.pid > $DAEMON_PIDS
609 609
610 610 $ hg clone http://localhost:$HGPORT/ clone
611 611 requesting all changes
612 612 abort: HTTP request error (incomplete response; expected 1 bytes got 3)
613 613 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
614 614 [255]
615 615
616 616 $ killdaemons.py $DAEMON_PIDS
617 617
618 618 $ tail -8 error.log
619 619 write(28 from 28) -> (32) Transfer-Encoding: chunked\r\n
620 620 write(2 from 2) -> (30) \r\n
621 621 write(6 from 6) -> (24) 1\\r\\n\x04\\r\\n (esc)
622 622 write(9 from 9) -> (15) 4\r\nnone\r\n
623 623 write(9 from 9) -> (6) 4\r\nHG20\r\n
624 624 write(6 from 9) -> (0) 4\\r\\n\x00\x00\x00 (esc)
625 625 write limit reached; closing socket
626 626 write(27) -> 15\r\nInternal Server Error\r\n
627 627
628 628 $ rm -f error.log
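These two tests cut the stream inside the bundle2 preamble: the 4-byte magic `HG20` followed by a big-endian uint32 giving the stream-level parameters size. A hedged sketch of the read side (function name is illustrative):

```python
import io
import struct

def read_bundle2_preamble(fh):
    # A bundle2 stream opens with the magic b'HG20', then a big-endian
    # uint32 giving the size of the stream-level parameters blob.
    magic = fh.read(4)
    if magic != b'HG20':
        raise ValueError('not a bundle2 stream: %r' % magic)
    raw = fh.read(4)
    if len(raw) < 4:
        raise EOFError('stream ended while reading params size')
    (paramssize,) = struct.unpack('>I', raw)
    params = fh.read(paramssize)
    if len(params) < paramssize:
        raise EOFError('stream ended inside stream params')
    return params
```

With `closeaftersendbytes=963` the client sees only three of the four size bytes, which is the truncated-read case above.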
629 629
630 630 Server stops after bundle2 stream params header
631 631
632 632 $ hg --config badserver.closeaftersendbytes=966 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
633 633 $ cat hg.pid > $DAEMON_PIDS
634 634
635 635 $ hg clone http://localhost:$HGPORT/ clone
636 636 requesting all changes
637 637 abort: HTTP request error (incomplete response)
638 638 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
639 639 [255]
640 640
641 641 $ killdaemons.py $DAEMON_PIDS
642 642
643 643 $ tail -8 error.log
644 644 write(28 from 28) -> (35) Transfer-Encoding: chunked\r\n
645 645 write(2 from 2) -> (33) \r\n
646 646 write(6 from 6) -> (27) 1\\r\\n\x04\\r\\n (esc)
647 647 write(9 from 9) -> (18) 4\r\nnone\r\n
648 648 write(9 from 9) -> (9) 4\r\nHG20\r\n
649 649 write(9 from 9) -> (0) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
650 650 write limit reached; closing socket
651 651 write(27) -> 15\r\nInternal Server Error\r\n
652 652
653 653 $ rm -f error.log
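In these tests the params size is `\x00\x00\x00\x00`, so the stream params blob is empty. When it is nonzero, the blob is a space-separated list of urlquoted `name` or `name=value` items (per the bundle2 format documentation). A sketch of decoding such a blob:

```python
import urllib.parse

def parse_stream_params(blob):
    # Stream-level parameters are space-separated; each item is <name>
    # or <name>=<value>, with both halves urlquoted.
    params = {}
    for item in blob.decode('utf-8').split(' '):
        if not item:
            continue
        if '=' in item:
            name, value = item.split('=', 1)
            value = urllib.parse.unquote(value)
        else:
            name, value = item, None
        params[urllib.parse.unquote(name)] = value
    return params
```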
654 654
655 655 Server stops sending after bundle2 part header length
656 656
657 657 $ hg --config badserver.closeaftersendbytes=975 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
658 658 $ cat hg.pid > $DAEMON_PIDS
659 659
660 660 $ hg clone http://localhost:$HGPORT/ clone
661 661 requesting all changes
662 662 abort: HTTP request error (incomplete response)
663 663 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
664 664 [255]
665 665
666 666 $ killdaemons.py $DAEMON_PIDS
667 667
668 668 $ tail -9 error.log
669 669 write(28 from 28) -> (44) Transfer-Encoding: chunked\r\n
670 670 write(2 from 2) -> (42) \r\n
671 671 write(6 from 6) -> (36) 1\\r\\n\x04\\r\\n (esc)
672 672 write(9 from 9) -> (27) 4\r\nnone\r\n
673 673 write(9 from 9) -> (18) 4\r\nHG20\r\n
674 674 write(9 from 9) -> (9) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
675 675 write(9 from 9) -> (0) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
676 676 write limit reached; closing socket
677 677 write(27) -> 15\r\nInternal Server Error\r\n
678 678
679 679 $ rm -f error.log
680 680
681 681 Server stops sending after bundle2 part header
682 682
683 683 $ hg --config badserver.closeaftersendbytes=1022 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
684 684 $ cat hg.pid > $DAEMON_PIDS
685 685
686 686 $ hg clone http://localhost:$HGPORT/ clone
687 687 requesting all changes
688 688 adding changesets
689 689 transaction abort!
690 690 rollback completed
691 abort: stream ended unexpectedly (got 0 bytes, expected 4)
691 abort: HTTP request error (incomplete response)
692 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
692 693 [255]
693 694
694 695 $ killdaemons.py $DAEMON_PIDS
695 696
696 697 $ tail -10 error.log
697 698 write(28 from 28) -> (91) Transfer-Encoding: chunked\r\n
698 699 write(2 from 2) -> (89) \r\n
699 700 write(6 from 6) -> (83) 1\\r\\n\x04\\r\\n (esc)
700 701 write(9 from 9) -> (74) 4\r\nnone\r\n
701 702 write(9 from 9) -> (65) 4\r\nHG20\r\n
702 703 write(9 from 9) -> (56) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
703 704 write(9 from 9) -> (47) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
704 705 write(47 from 47) -> (0) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc)
705 706 write limit reached; closing socket
706 707 write(27) -> 15\r\nInternal Server Error\r\n
707 708
708 709 $ rm -f error.log
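The 0x29 (41) byte blob written above is a bundle2 part header: a uint8 part-type length, the part type (`CHANGEGROUP`), a big-endian uint32 part id, then parameter counts, sizes, and data. A simplified parser, assuming that layout (parameter parsing omitted for brevity):

```python
import io
import struct

def read_part_header(fh):
    # Each part starts with a big-endian uint32 header size; a size of
    # zero marks the end of the bundle2 payload. Inside the header: a
    # uint8 part-type length, the part type, and a uint32 part id
    # (parameter counts and data follow, not decoded here).
    raw = fh.read(4)
    if len(raw) < 4:
        raise EOFError('stream ended while reading part header size')
    (headersize,) = struct.unpack('>I', raw)
    if headersize == 0:
        return None
    header = fh.read(headersize)
    if len(header) < headersize:
        raise EOFError('truncated part header')
    typelen = header[0]
    parttype = header[1:1 + typelen]
    (partid,) = struct.unpack('>I', header[1 + typelen:5 + typelen])
    return parttype, partid
```

Cutting the socket mid-header, as `closeaftersendbytes=1022` does, surfaces as the truncated-read error above on the client side.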
709 710
710 711 Server stops after bundle2 part payload chunk size
711 712
712 713 $ hg --config badserver.closeaftersendbytes=1031 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
713 714 $ cat hg.pid > $DAEMON_PIDS
714 715
715 716 $ hg clone http://localhost:$HGPORT/ clone
716 717 requesting all changes
717 718 adding changesets
718 719 transaction abort!
719 720 rollback completed
720 abort: stream ended unexpectedly (got 0 bytes, expected 4)
721 abort: HTTP request error (incomplete response)
722 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
721 723 [255]
722 724
723 725 $ killdaemons.py $DAEMON_PIDS
724 726
725 727 $ tail -11 error.log
726 728 write(28 from 28) -> (100) Transfer-Encoding: chunked\r\n
727 729 write(2 from 2) -> (98) \r\n
728 730 write(6 from 6) -> (92) 1\\r\\n\x04\\r\\n (esc)
729 731 write(9 from 9) -> (83) 4\r\nnone\r\n
730 732 write(9 from 9) -> (74) 4\r\nHG20\r\n
731 733 write(9 from 9) -> (65) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
732 734 write(9 from 9) -> (56) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
733 735 write(47 from 47) -> (9) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc)
734 736 write(9 from 9) -> (0) 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc)
735 737 write limit reached; closing socket
736 738 write(27) -> 15\r\nInternal Server Error\r\n
737 739
738 740 $ rm -f error.log
739 741
740 742 Server stops sending in middle of bundle2 payload chunk
741 743
742 744 $ hg --config badserver.closeaftersendbytes=1504 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
743 745 $ cat hg.pid > $DAEMON_PIDS
744 746
745 747 $ hg clone http://localhost:$HGPORT/ clone
746 748 requesting all changes
747 749 adding changesets
748 750 transaction abort!
749 751 rollback completed
750 abort: stream ended unexpectedly (got 0 bytes, expected 4)
752 abort: HTTP request error (incomplete response)
753 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
751 754 [255]
752 755
753 756 $ killdaemons.py $DAEMON_PIDS
754 757
755 758 $ tail -12 error.log
756 759 write(28 from 28) -> (573) Transfer-Encoding: chunked\r\n
757 760 write(2 from 2) -> (571) \r\n
758 761 write(6 from 6) -> (565) 1\\r\\n\x04\\r\\n (esc)
759 762 write(9 from 9) -> (556) 4\r\nnone\r\n
760 763 write(9 from 9) -> (547) 4\r\nHG20\r\n
761 764 write(9 from 9) -> (538) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
762 765 write(9 from 9) -> (529) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
763 766 write(47 from 47) -> (482) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc)
764 767 write(9 from 9) -> (473) 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc)
765 768 write(473 from 473) -> (0) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f<O\x8e(\xf4\xf9\xa8\x14)\x9a<\xbb_P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00-foo\x00b80de5d138758541c5f05265ad144ab9fa86d1db\\n\x00\x00\x00\x00\x00\x00\x00\x07foo\x00\x00\x00h\xb8\\r\xe5\xd18u\x85A\xc5\xf0Re\xad\x14J\xb9\xfa\x86\xd1\xdb\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\\r\\n (esc)
766 769 write limit reached; closing socket
767 770 write(27) -> 15\r\nInternal Server Error\r\n
768 771
769 772 $ rm -f error.log
770 773
771 774 Server stops sending after 0 length payload chunk size
772 775
773 776 $ hg --config badserver.closeaftersendbytes=1513 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
774 777 $ cat hg.pid > $DAEMON_PIDS
775 778
776 779 $ hg clone http://localhost:$HGPORT/ clone
777 780 requesting all changes
778 781 adding changesets
779 782 adding manifests
780 783 adding file changes
781 784 added 1 changesets with 1 changes to 1 files
782 785 transaction abort!
783 786 rollback completed
784 787 abort: HTTP request error (incomplete response)
785 788 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
786 789 [255]
787 790
788 791 $ killdaemons.py $DAEMON_PIDS
789 792
790 793 $ tail -13 error.log
791 794 write(28 from 28) -> (582) Transfer-Encoding: chunked\r\n
792 795 write(2 from 2) -> (580) \r\n
793 796 write(6 from 6) -> (574) 1\\r\\n\x04\\r\\n (esc)
794 797 write(9 from 9) -> (565) 4\r\nnone\r\n
795 798 write(9 from 9) -> (556) 4\r\nHG20\r\n
796 799 write(9 from 9) -> (547) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
797 800 write(9 from 9) -> (538) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
798 801 write(47 from 47) -> (491) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc)
799 802 write(9 from 9) -> (482) 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc)
800 803 write(473 from 473) -> (9) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f<O\x8e(\xf4\xf9\xa8\x14)\x9a<\xbb_P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00-foo\x00b80de5d138758541c5f05265ad144ab9fa86d1db\\n\x00\x00\x00\x00\x00\x00\x00\x07foo\x00\x00\x00h\xb8\\r\xe5\xd18u\x85A\xc5\xf0Re\xad\x14J\xb9\xfa\x86\xd1\xdb\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\\r\\n (esc)
801 804 write(9 from 9) -> (0) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
802 805 write limit reached; closing socket
803 806 write(27) -> 15\r\nInternal Server Error\r\n
804 807
805 808 $ rm -f error.log
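The preceding failures cut the stream at different points in a part payload, which is a sequence of chunks each prefixed by a big-endian uint32 size (the `\x00\x00\x01\xd2` frame) and ended by a zero-size chunk (the `\x00\x00\x00\x00` frame). A sketch of the reader, mirroring the "stream ended unexpectedly (got N bytes, expected M)" behaviour seen in the aborts:

```python
import io
import struct

def read_part_payload(fh):
    # A part payload is a sequence of chunks, each prefixed by a
    # big-endian uint32 size; a zero-size chunk ends the payload.
    chunks = []
    while True:
        raw = fh.read(4)
        if len(raw) < 4:
            raise EOFError('got %d bytes, expected 4' % len(raw))
        (size,) = struct.unpack('>I', raw)
        if size == 0:
            return b''.join(chunks)
        chunk = fh.read(size)
        if len(chunk) < size:
            raise EOFError('got %d bytes, expected %d' % (len(chunk), size))
        chunks.append(chunk)
```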
806 809
807 810 Server stops sending after 0-size bundle2 part header (indicating end of bundle2 payload)
808 811 This is before the 0 size chunked transfer part that signals end of HTTP response.
809 812
810 813 $ hg --config badserver.closeaftersendbytes=1710 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
811 814 $ cat hg.pid > $DAEMON_PIDS
812 815
813 816 $ hg clone http://localhost:$HGPORT/ clone
814 817 requesting all changes
815 818 adding changesets
816 819 adding manifests
817 820 adding file changes
818 821 added 1 changesets with 1 changes to 1 files
819 822 updating to branch default
820 823 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
821 824
822 825 $ killdaemons.py $DAEMON_PIDS
823 826
824 827 $ tail -22 error.log
825 828 write(28 from 28) -> (779) Transfer-Encoding: chunked\r\n
826 829 write(2 from 2) -> (777) \r\n
827 830 write(6 from 6) -> (771) 1\\r\\n\x04\\r\\n (esc)
828 831 write(9 from 9) -> (762) 4\r\nnone\r\n
829 832 write(9 from 9) -> (753) 4\r\nHG20\r\n
830 833 write(9 from 9) -> (744) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
831 834 write(9 from 9) -> (735) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
832 835 write(47 from 47) -> (688) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc)
833 836 write(9 from 9) -> (679) 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc)
834 837 write(473 from 473) -> (206) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f<O\x8e(\xf4\xf9\xa8\x14)\x9a<\xbb_P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00-foo\x00b80de5d138758541c5f05265ad144ab9fa86d1db\\n\x00\x00\x00\x00\x00\x00\x00\x07foo\x00\x00\x00h\xb8\\r\xe5\xd18u\x85A\xc5\xf0Re\xad\x14J\xb9\xfa\x86\xd1\xdb\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\\r\\n (esc)
835 838 write(9 from 9) -> (197) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
836 839 write(9 from 9) -> (188) 4\\r\\n\x00\x00\x00 \\r\\n (esc)
837 840 write(38 from 38) -> (150) 20\\r\\n\x08LISTKEYS\x00\x00\x00\x01\x01\x00 \x06namespacephases\\r\\n (esc)
838 841 write(9 from 9) -> (141) 4\\r\\n\x00\x00\x00:\\r\\n (esc)
839 842 write(64 from 64) -> (77) 3a\r\n96ee1d7354c4ad7372047672c36a1f561e3a6a4c 1\npublishing True\r\n
840 843 write(9 from 9) -> (68) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
841 844 write(9 from 9) -> (59) 4\\r\\n\x00\x00\x00#\\r\\n (esc)
842 845 write(41 from 41) -> (18) 23\\r\\n\x08LISTKEYS\x00\x00\x00\x02\x01\x00 namespacebookmarks\\r\\n (esc)
843 846 write(9 from 9) -> (9) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
844 847 write(9 from 9) -> (0) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
845 848 write limit reached; closing socket
846 849 write(27) -> 15\r\nInternal Server Error\r\n
847 850
848 851 $ rm -f error.log
849 852 $ rm -rf clone
850 853
851 854 Server sends the size-0 chunked-transfer chunk without its terminating \r\n
852 855
853 856 $ hg --config badserver.closeaftersendbytes=1713 serve -p $HGPORT -d --pid-file=hg.pid -E error.log
854 857 $ cat hg.pid > $DAEMON_PIDS
855 858
856 859 $ hg clone http://localhost:$HGPORT/ clone
857 860 requesting all changes
858 861 adding changesets
859 862 adding manifests
860 863 adding file changes
861 864 added 1 changesets with 1 changes to 1 files
862 865 updating to branch default
863 866 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
864 867
865 868 $ killdaemons.py $DAEMON_PIDS
866 869
867 870 $ tail -23 error.log
868 871 write(28 from 28) -> (782) Transfer-Encoding: chunked\r\n
869 872 write(2 from 2) -> (780) \r\n
870 873 write(6 from 6) -> (774) 1\\r\\n\x04\\r\\n (esc)
871 874 write(9 from 9) -> (765) 4\r\nnone\r\n
872 875 write(9 from 9) -> (756) 4\r\nHG20\r\n
873 876 write(9 from 9) -> (747) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
874 877 write(9 from 9) -> (738) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
875 878 write(47 from 47) -> (691) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc)
876 879 write(9 from 9) -> (682) 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc)
877 880 write(473 from 473) -> (209) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f<O\x8e(\xf4\xf9\xa8\x14)\x9a<\xbb_P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00-foo\x00b80de5d138758541c5f05265ad144ab9fa86d1db\\n\x00\x00\x00\x00\x00\x00\x00\x07foo\x00\x00\x00h\xb8\\r\xe5\xd18u\x85A\xc5\xf0Re\xad\x14J\xb9\xfa\x86\xd1\xdb\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\\r\\n (esc)
878 881 write(9 from 9) -> (200) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
879 882 write(9 from 9) -> (191) 4\\r\\n\x00\x00\x00 \\r\\n (esc)
880 883 write(38 from 38) -> (153) 20\\r\\n\x08LISTKEYS\x00\x00\x00\x01\x01\x00 \x06namespacephases\\r\\n (esc)
881 884 write(9 from 9) -> (144) 4\\r\\n\x00\x00\x00:\\r\\n (esc)
882 885 write(64 from 64) -> (80) 3a\r\n96ee1d7354c4ad7372047672c36a1f561e3a6a4c 1\npublishing True\r\n
883 886 write(9 from 9) -> (71) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
884 887 write(9 from 9) -> (62) 4\\r\\n\x00\x00\x00#\\r\\n (esc)
885 888 write(41 from 41) -> (21) 23\\r\\n\x08LISTKEYS\x00\x00\x00\x02\x01\x00 namespacebookmarks\\r\\n (esc)
886 889 write(9 from 9) -> (12) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
887 890 write(9 from 9) -> (3) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
888 891 write(3 from 5) -> (0) 0\r\n
889 892 write limit reached; closing socket
890 893 write(27) -> 15\r\nInternal Server Error\r\n
891 894
892 895 $ rm -f error.log
893 896 $ rm -rf clone