stream-clone: also filter the requirement we put in the bundle 2...
marmoute
r49884:a3cf460a default
@@ -1,2588 +1,2589 b''
1 1 # bundle2.py - generic container format to transmit arbitrary data.
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """Handling of the new bundle2 format
8 8
9 9 The goal of bundle2 is to act as an atomic packet to transmit a set of
10 10 payloads in an application agnostic way. It consists of a sequence of "parts"
11 11 that will be handed to and processed by the application layer.
12 12
13 13
14 14 General format architecture
15 15 ===========================
16 16
17 17 The format is structured as follows
18 18
19 19 - magic string
20 20 - stream level parameters
21 21 - payload parts (any number)
22 22 - end of stream marker.
23 23
24 24 The binary format
25 25 ============================
26 26
27 27 All numbers are unsigned and big-endian.
28 28
29 29 stream level parameters
30 30 ------------------------
31 31
32 32 The binary format is as follows
33 33
34 34 :params size: int32
35 35
36 36 The total number of bytes used by the parameters
37 37
38 38 :params value: arbitrary number of bytes
39 39
40 40 A blob of `params size` containing the serialized version of all stream level
41 41 parameters.
42 42
43 43 The blob contains a space separated list of parameters. Parameters with a value
44 44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45 45
46 46 Empty names are forbidden.
47 47
48 48 A name MUST start with a letter. If this first letter is lower case, the
49 49 parameter is advisory and can be safely ignored. However, when the first
50 50 letter is capital, the parameter is mandatory and the bundling process MUST
51 51 stop if it is not able to process it.
52 52
53 53 Stream parameters use a simple textual format for two main reasons:
54 54
55 55 - Stream level parameters should remain simple and we want to discourage any
56 56 crazy usage.
57 57 - Textual data allows easy human inspection of a bundle2 header in case of
58 58 trouble.
59 59
60 60 Any application level options MUST go into a bundle2 part instead.
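
For example, a stream declaring mandatory bzip2 compression would carry the
single (urlquoted, space separated) parameter blob::

    Compression=BZ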
61 61
62 62 Payload part
63 63 ------------------------
64 64
65 65 The binary format is as follows
66 66
67 67 :header size: int32
68 68
69 69 The total number of bytes used by the part header. When the header is empty
70 70 (size = 0) this is interpreted as the end of stream marker.
71 71
72 72 :header:
73 73
74 74 The header defines how to interpret the part. It contains two pieces of
75 75 data: the part type, and the part parameters.
76 76
77 77 The part type is used to route to an application level handler that can
78 78 interpret the payload.
79 79
80 80 Part parameters are passed to the application level handler. They are
81 81 meant to convey information that will help the application level object to
82 82 interpret the part payload.
83 83
84 84 The binary format of the header is as follows
85 85
86 86 :typesize: (one byte)
87 87
88 88 :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)
89 89
90 90 :partid: A 32-bit integer (unique in the bundle) that can be used to refer
91 91 to this part.
92 92
93 93 :parameters:
94 94
95 95 A part's parameters may have arbitrary content; the binary structure is::
96 96
97 97 <mandatory-count><advisory-count><param-sizes><param-data>
98 98
99 99 :mandatory-count: 1 byte, number of mandatory parameters
100 100
101 101 :advisory-count: 1 byte, number of advisory parameters
102 102
103 103 :param-sizes:
104 104
105 105 N pairs of bytes, where N is the total number of parameters. Each
106 106 pair contains (<size-of-key>, <size-of-value>) for one parameter.
107 107
108 108 :param-data:
109 109
110 110 A blob of bytes from which each parameter key and value can be
111 111 retrieved using the list of size couples stored in the previous
112 112 field.
113 113
114 114 Mandatory parameters come first, then the advisory ones.
115 115
116 116 Each parameter's key MUST be unique within the part.
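
For example (a schematic sketch; spaces added for readability), two
mandatory parameters ``k1=v`` and ``key=val`` with no advisory parameter
would be encoded as::

    \x02\x00  \x02\x01 \x03\x03  k1vkeyval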
117 117
118 118 :payload:
119 119
120 120 payload is a series of `<chunksize><chunkdata>`.
121 121
122 122 `chunksize` is an int32, `chunkdata` is plain bytes (as many as
123 123 `chunksize` says). The payload part is concluded by a zero size chunk.
124 124
125 125 The current implementation always produces either zero or one chunk.
126 126 This is an implementation limitation that will ultimately be lifted.
127 127
128 128 `chunksize` can be negative to trigger special case processing. No such
129 129 processing is in place yet.
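
For instance, a payload made of the five bytes ``hello`` would be framed as
a single chunk followed by the end-of-payload marker (angle-bracketed sizes
are int32 on the wire)::

    <5>hello<0>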
130 130
131 131 Bundle processing
132 132 ============================
133 133
134 134 Each part is processed in order using a "part handler". Handlers are registered
135 135 for a certain part type.
136 136
137 137 The matching of a part to its handler is case insensitive. The case of the
138 138 part type is used to know if a part is mandatory or advisory. If the part type
139 139 contains any uppercase char it is considered mandatory. When no handler is
140 140 known for a mandatory part, the process is aborted and an exception is raised.
141 141 If the part is advisory and no handler is known, the part is ignored. When the
142 142 process is aborted, the full bundle is still read from the stream to keep the
143 143 channel usable. But none of the parts read after an abort are processed. In the
144 144 future, dropping the stream may become an option for channels we do not care to
145 145 preserve.
146 146 """
147 147
148 148 from __future__ import absolute_import, division
149 149
150 150 import collections
151 151 import errno
152 152 import os
153 153 import re
154 154 import string
155 155 import struct
156 156 import sys
157 157
158 158 from .i18n import _
159 159 from .node import (
160 160 hex,
161 161 short,
162 162 )
163 163 from . import (
164 164 bookmarks,
165 165 changegroup,
166 166 encoding,
167 167 error,
168 168 obsolete,
169 169 phases,
170 170 pushkey,
171 171 pycompat,
172 172 requirements,
173 173 scmutil,
174 174 streamclone,
175 175 tags,
176 176 url,
177 177 util,
178 178 )
179 179 from .utils import (
180 180 stringutil,
181 181 urlutil,
182 182 )
183 183 from .interfaces import repository
184 184
185 185 urlerr = util.urlerr
186 186 urlreq = util.urlreq
187 187
188 188 _pack = struct.pack
189 189 _unpack = struct.unpack
190 190
191 191 _fstreamparamsize = b'>i'
192 192 _fpartheadersize = b'>i'
193 193 _fparttypesize = b'>B'
194 194 _fpartid = b'>I'
195 195 _fpayloadsize = b'>i'
196 196 _fpartparamcount = b'>BB'
197 197
198 198 preferedchunksize = 32768
199 199
200 200 _parttypeforbidden = re.compile(b'[^a-zA-Z0-9_:-]')
201 201
202 202
203 203 def outdebug(ui, message):
204 204 """debug regarding output stream (bundling)"""
205 205 if ui.configbool(b'devel', b'bundle2.debug'):
206 206 ui.debug(b'bundle2-output: %s\n' % message)
207 207
208 208
209 209 def indebug(ui, message):
210 210 """debug on input stream (unbundling)"""
211 211 if ui.configbool(b'devel', b'bundle2.debug'):
212 212 ui.debug(b'bundle2-input: %s\n' % message)
213 213
214 214
215 215 def validateparttype(parttype):
216 216 """raise ValueError if a parttype contains invalid character"""
217 217 if _parttypeforbidden.search(parttype):
218 218 raise ValueError(parttype)
219 219
220 220
221 221 def _makefpartparamsizes(nbparams):
222 222 """return a struct format to read part parameter sizes
223 223
224 224 The number of parameters is variable so we need to build that format
225 225 dynamically.
226 226 """
227 227 return b'>' + (b'BB' * nbparams)
228 228
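# For example, _makefpartparamsizes(2) == b'>BBBB': one (key size,
# value size) pair of bytes per parameter, in big-endian order.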
229 229
230 230 parthandlermapping = {}
231 231
232 232
233 233 def parthandler(parttype, params=()):
234 234 """decorator that register a function as a bundle2 part handler
235 235
236 236 eg::
237 237
238 238 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
239 239 def myparttypehandler(...):
240 240 '''process a part of type "my part".'''
241 241 ...
242 242 """
243 243 validateparttype(parttype)
244 244
245 245 def _decorator(func):
246 246 lparttype = parttype.lower() # enforce lower case matching.
247 247 assert lparttype not in parthandlermapping
248 248 parthandlermapping[lparttype] = func
249 249 func.params = frozenset(params)
250 250 return func
251 251
252 252 return _decorator
253 253
254 254
255 255 class unbundlerecords(object):
256 256 """keep record of what happens during and unbundle
257 257
258 258 New records are added using `records.add('cat', obj)`, where 'cat' is a
259 259 category of record and obj is an arbitrary object.
260 260
261 261 `records['cat']` will return all entries of this category 'cat'.
262 262
263 263 Iterating on the object itself will yield `('category', obj)` tuples
264 264 for all entries.
265 265
266 266 All iterations happen in chronological order.
267 267 """
268 268
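# A small usage sketch (illustrative only):
#
#   records = unbundlerecords()
#   records.add(b'changegroup', {b'return': 1})
#   records[b'changegroup']  # -> ({b'return': 1},)
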
269 269 def __init__(self):
270 270 self._categories = {}
271 271 self._sequences = []
272 272 self._replies = {}
273 273
274 274 def add(self, category, entry, inreplyto=None):
275 275 """add a new record of a given category.
276 276
277 277 The entry can then be retrieved in the list returned by
278 278 self['category']."""
279 279 self._categories.setdefault(category, []).append(entry)
280 280 self._sequences.append((category, entry))
281 281 if inreplyto is not None:
282 282 self.getreplies(inreplyto).add(category, entry)
283 283
284 284 def getreplies(self, partid):
285 285 """get the records that are replies to a specific part"""
286 286 return self._replies.setdefault(partid, unbundlerecords())
287 287
288 288 def __getitem__(self, cat):
289 289 return tuple(self._categories.get(cat, ()))
290 290
291 291 def __iter__(self):
292 292 return iter(self._sequences)
293 293
294 294 def __len__(self):
295 295 return len(self._sequences)
296 296
297 297 def __nonzero__(self):
298 298 return bool(self._sequences)
299 299
300 300 __bool__ = __nonzero__
301 301
302 302
303 303 class bundleoperation(object):
304 304 """an object that represents a single bundling process
305 305
306 306 Its purpose is to carry unbundle-related objects and states.
307 307
308 308 A new object should be created at the beginning of each bundle processing.
309 309 The object is to be returned by the processing function.
310 310
311 311 The object currently has very little content; it will ultimately contain:
312 312 * an access to the repo the bundle is applied to,
313 313 * a ui object,
314 314 * a way to retrieve a transaction to add changes to the repo,
315 315 * a way to record the result of processing each part,
316 316 * a way to construct a bundle response when applicable.
317 317 """
318 318
319 319 def __init__(self, repo, transactiongetter, captureoutput=True, source=b''):
320 320 self.repo = repo
321 321 self.ui = repo.ui
322 322 self.records = unbundlerecords()
323 323 self.reply = None
324 324 self.captureoutput = captureoutput
325 325 self.hookargs = {}
326 326 self._gettransaction = transactiongetter
327 327 # carries value that can modify part behavior
328 328 self.modes = {}
329 329 self.source = source
330 330
331 331 def gettransaction(self):
332 332 transaction = self._gettransaction()
333 333
334 334 if self.hookargs:
335 335 # the ones added to the transaction supersede those added
336 336 # to the operation.
337 337 self.hookargs.update(transaction.hookargs)
338 338 transaction.hookargs = self.hookargs
339 339
340 340 # mark the hookargs as flushed. further attempts to add to
341 341 # hookargs will result in an abort.
342 342 self.hookargs = None
343 343
344 344 return transaction
345 345
346 346 def addhookargs(self, hookargs):
347 347 if self.hookargs is None:
348 348 raise error.ProgrammingError(
349 349 b'attempted to add hookargs to '
350 350 b'operation after transaction started'
351 351 )
352 352 self.hookargs.update(hookargs)
353 353
354 354
355 355 class TransactionUnavailable(RuntimeError):
356 356 pass
357 357
358 358
359 359 def _notransaction():
360 360 """default method to get a transaction while processing a bundle
361 361
362 362 Raise an exception to highlight the fact that no transaction was expected
363 363 to be created"""
364 364 raise TransactionUnavailable()
365 365
366 366
367 367 def applybundle(repo, unbundler, tr, source, url=None, **kwargs):
368 368 # transform me into unbundler.apply() as soon as the freeze is lifted
369 369 if isinstance(unbundler, unbundle20):
370 370 tr.hookargs[b'bundle2'] = b'1'
371 371 if source is not None and b'source' not in tr.hookargs:
372 372 tr.hookargs[b'source'] = source
373 373 if url is not None and b'url' not in tr.hookargs:
374 374 tr.hookargs[b'url'] = url
375 375 return processbundle(repo, unbundler, lambda: tr, source=source)
376 376 else:
377 377 # the transactiongetter won't be used, but we might as well set it
378 378 op = bundleoperation(repo, lambda: tr, source=source)
379 379 _processchangegroup(op, unbundler, tr, source, url, **kwargs)
380 380 return op
381 381
382 382
383 383 class partiterator(object):
384 384 def __init__(self, repo, op, unbundler):
385 385 self.repo = repo
386 386 self.op = op
387 387 self.unbundler = unbundler
388 388 self.iterator = None
389 389 self.count = 0
390 390 self.current = None
391 391
392 392 def __enter__(self):
393 393 def func():
394 394 itr = enumerate(self.unbundler.iterparts(), 1)
395 395 for count, p in itr:
396 396 self.count = count
397 397 self.current = p
398 398 yield p
399 399 p.consume()
400 400 self.current = None
401 401
402 402 self.iterator = func()
403 403 return self.iterator
404 404
405 405 def __exit__(self, type, exc, tb):
406 406 if not self.iterator:
407 407 return
408 408
409 409 # Only gracefully abort in a normal exception situation. User aborts
410 410 like Ctrl+C throw a KeyboardInterrupt, which does not subclass Exception
411 411 and should not be gracefully cleaned up.
412 412 if isinstance(exc, Exception):
413 413 # Any exceptions seeking to the end of the bundle at this point are
414 414 # almost certainly related to the underlying stream being bad.
415 415 # And, chances are that the exception we're handling is related to
416 416 # getting in that bad state. So, we swallow the seeking error and
417 417 # re-raise the original error.
418 418 seekerror = False
419 419 try:
420 420 if self.current:
421 421 # consume the part content to not corrupt the stream.
422 422 self.current.consume()
423 423
424 424 for part in self.iterator:
425 425 # consume the bundle content
426 426 part.consume()
427 427 except Exception:
428 428 seekerror = True
429 429
430 430 # Small hack to let caller code distinguish exceptions raised during
431 431 # bundle2 processing from those of the old format. This is mostly needed
432 432 # to handle different return codes to unbundle according to the type
433 433 # of bundle. We should probably clean up or drop this return code
434 434 # craziness in a future version.
435 435 exc.duringunbundle2 = True
436 436 salvaged = []
437 437 replycaps = None
438 438 if self.op.reply is not None:
439 439 salvaged = self.op.reply.salvageoutput()
440 440 replycaps = self.op.reply.capabilities
441 441 exc._replycaps = replycaps
442 442 exc._bundle2salvagedoutput = salvaged
443 443
444 444 # Re-raising from a variable loses the original stack. So only use
445 445 # that form if we need to.
446 446 if seekerror:
447 447 raise exc
448 448
449 449 self.repo.ui.debug(
450 450 b'bundle2-input-bundle: %i parts total\n' % self.count
451 451 )
452 452
453 453
454 454 def processbundle(repo, unbundler, transactiongetter=None, op=None, source=b''):
455 455 """This function process a bundle, apply effect to/from a repo
456 456
457 457 It iterates over each part then searches for and uses the proper handling
458 458 code to process the part. Parts are processed in order.
459 459
460 460 An unknown mandatory part will abort the process.
461 461
462 462 It is temporarily possible to provide a prebuilt bundleoperation to the
463 463 function. This is used to ensure output is properly propagated in case of
464 464 an error during the unbundling. This output capturing part will likely be
465 465 reworked and this ability will probably go away in the process.
466 466 """
467 467 if op is None:
468 468 if transactiongetter is None:
469 469 transactiongetter = _notransaction
470 470 op = bundleoperation(repo, transactiongetter, source=source)
471 471 # todo:
472 472 # - replace this with an init function soon.
473 473 # - exception catching
474 474 unbundler.params
475 475 if repo.ui.debugflag:
476 476 msg = [b'bundle2-input-bundle:']
477 477 if unbundler.params:
478 478 msg.append(b' %i params' % len(unbundler.params))
479 479 if op._gettransaction is None or op._gettransaction is _notransaction:
480 480 msg.append(b' no-transaction')
481 481 else:
482 482 msg.append(b' with-transaction')
483 483 msg.append(b'\n')
484 484 repo.ui.debug(b''.join(msg))
485 485
486 486 processparts(repo, op, unbundler)
487 487
488 488 return op
489 489
490 490
491 491 def processparts(repo, op, unbundler):
492 492 with partiterator(repo, op, unbundler) as parts:
493 493 for part in parts:
494 494 _processpart(op, part)
495 495
496 496
497 497 def _processchangegroup(op, cg, tr, source, url, **kwargs):
498 498 ret = cg.apply(op.repo, tr, source, url, **kwargs)
499 499 op.records.add(
500 500 b'changegroup',
501 501 {
502 502 b'return': ret,
503 503 },
504 504 )
505 505 return ret
506 506
507 507
508 508 def _gethandler(op, part):
509 509 status = b'unknown' # used by debug output
510 510 try:
511 511 handler = parthandlermapping.get(part.type)
512 512 if handler is None:
513 513 status = b'unsupported-type'
514 514 raise error.BundleUnknownFeatureError(parttype=part.type)
515 515 indebug(op.ui, b'found a handler for part %s' % part.type)
516 516 unknownparams = part.mandatorykeys - handler.params
517 517 if unknownparams:
518 518 unknownparams = list(unknownparams)
519 519 unknownparams.sort()
520 520 status = b'unsupported-params (%s)' % b', '.join(unknownparams)
521 521 raise error.BundleUnknownFeatureError(
522 522 parttype=part.type, params=unknownparams
523 523 )
524 524 status = b'supported'
525 525 except error.BundleUnknownFeatureError as exc:
526 526 if part.mandatory: # mandatory parts
527 527 raise
528 528 indebug(op.ui, b'ignoring unsupported advisory part %s' % exc)
529 529 return # skip to part processing
530 530 finally:
531 531 if op.ui.debugflag:
532 532 msg = [b'bundle2-input-part: "%s"' % part.type]
533 533 if not part.mandatory:
534 534 msg.append(b' (advisory)')
535 535 nbmp = len(part.mandatorykeys)
536 536 nbap = len(part.params) - nbmp
537 537 if nbmp or nbap:
538 538 msg.append(b' (params:')
539 539 if nbmp:
540 540 msg.append(b' %i mandatory' % nbmp)
541 541 if nbap:
542 542 msg.append(b' %i advisory' % nbap)
543 543 msg.append(b')')
544 544 msg.append(b' %s\n' % status)
545 545 op.ui.debug(b''.join(msg))
546 546
547 547 return handler
548 548
549 549
550 550 def _processpart(op, part):
551 551 """process a single part from a bundle
552 552
553 553 The part is guaranteed to have been fully consumed when the function exits
554 554 (even if an exception is raised)."""
555 555 handler = _gethandler(op, part)
556 556 if handler is None:
557 557 return
558 558
559 559 # handler is called outside the above try block so that we don't
560 560 # risk catching KeyErrors from anything other than the
561 561 # parthandlermapping lookup (any KeyError raised by handler()
562 562 # itself represents a defect of a different variety).
563 563 output = None
564 564 if op.captureoutput and op.reply is not None:
565 565 op.ui.pushbuffer(error=True, subproc=True)
566 566 output = b''
567 567 try:
568 568 handler(op, part)
569 569 finally:
570 570 if output is not None:
571 571 output = op.ui.popbuffer()
572 572 if output:
573 573 outpart = op.reply.newpart(b'output', data=output, mandatory=False)
574 574 outpart.addparam(
575 575 b'in-reply-to', pycompat.bytestr(part.id), mandatory=False
576 576 )
577 577
578 578
579 579 def decodecaps(blob):
580 580 """decode a bundle2 caps bytes blob into a dictionary
581 581
582 582 The blob is a list of capabilities (one per line)
583 583 Capabilities may have values using a line of the form::
584 584
585 585 capability=value1,value2,value3
586 586
587 587 The values are always a list."""
588 588 caps = {}
589 589 for line in blob.splitlines():
590 590 if not line:
591 591 continue
592 592 if b'=' not in line:
593 593 key, vals = line, ()
594 594 else:
595 595 key, vals = line.split(b'=', 1)
596 596 vals = vals.split(b',')
597 597 key = urlreq.unquote(key)
598 598 vals = [urlreq.unquote(v) for v in vals]
599 599 caps[key] = vals
600 600 return caps
601 601
602 602
603 603 def encodecaps(caps):
604 604 """encode a bundle2 caps dictionary into a bytes blob"""
605 605 chunks = []
606 606 for ca in sorted(caps):
607 607 vals = caps[ca]
608 608 ca = urlreq.quote(ca)
609 609 vals = [urlreq.quote(v) for v in vals]
610 610 if vals:
611 611 ca = b"%s=%s" % (ca, b','.join(vals))
612 612 chunks.append(ca)
613 613 return b'\n'.join(chunks)
614 614
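# Round-trip sketch for the two helpers above (illustrative values):
#
#   caps = decodecaps(b'HG20\nchangegroup=01,02')
#   caps[b'changegroup']  # -> [b'01', b'02']
#   encodecaps(caps)      # -> b'HG20\nchangegroup=01,02'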
615 615
616 616 bundletypes = {
617 617 b"": (b"", b'UN'), # only when using unbundle on ssh and old http servers
618 618 # since the unification ssh accepts a header but there
619 619 # is no capability signaling it.
620 620 b"HG20": (), # special-cased below
621 621 b"HG10UN": (b"HG10UN", b'UN'),
622 622 b"HG10BZ": (b"HG10", b'BZ'),
623 623 b"HG10GZ": (b"HG10GZ", b'GZ'),
624 624 }
625 625
626 626 # hgweb uses this list to communicate its preferred type
627 627 bundlepriority = [b'HG10GZ', b'HG10BZ', b'HG10UN']
628 628
629 629
630 630 class bundle20(object):
631 631 """represent an outgoing bundle2 container
632 632
633 633 Use the `addparam` method to add a stream level parameter, and `newpart` to
634 634 populate it. Then call `getchunks` to retrieve all the binary chunks of
635 635 data that compose the bundle2 container."""
636 636
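# Minimal usage sketch (illustrative; assumes a `ui` object at hand):
#
#   bundler = bundle20(ui)
#   bundler.setcompression(b'GZ')
#   bundler.newpart(b'output', data=b'hello', mandatory=False)
#   raw = b''.join(bundler.getchunks())
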
637 637 _magicstring = b'HG20'
638 638
639 639 def __init__(self, ui, capabilities=()):
640 640 self.ui = ui
641 641 self._params = []
642 642 self._parts = []
643 643 self.capabilities = dict(capabilities)
644 644 self._compengine = util.compengines.forbundletype(b'UN')
645 645 self._compopts = None
646 646 # If compression is being handled by a consumer of the raw
647 647 # data (e.g. the wire protocol), unsetting this flag tells
648 648 # consumers that the bundle is best left uncompressed.
649 649 self.prefercompressed = True
650 650
651 651 def setcompression(self, alg, compopts=None):
652 652 """setup core part compression to <alg>"""
653 653 if alg in (None, b'UN'):
654 654 return
655 655 assert not any(n.lower() == b'compression' for n, v in self._params)
656 656 self.addparam(b'Compression', alg)
657 657 self._compengine = util.compengines.forbundletype(alg)
658 658 self._compopts = compopts
659 659
660 660 @property
661 661 def nbparts(self):
662 662 """total number of parts added to the bundler"""
663 663 return len(self._parts)
664 664
665 665 # methods used to define the bundle2 content
666 666 def addparam(self, name, value=None):
667 667 """add a stream level parameter"""
668 668 if not name:
669 669 raise error.ProgrammingError(b'empty parameter name')
670 670 if name[0:1] not in pycompat.bytestr(
671 671 string.ascii_letters # pytype: disable=wrong-arg-types
672 672 ):
673 673 raise error.ProgrammingError(
674 674 b'non letter first character: %s' % name
675 675 )
676 676 self._params.append((name, value))
677 677
678 678 def addpart(self, part):
679 679 """add a new part to the bundle2 container
680 680
681 681 Parts contain the actual applicative payload."""
682 682 assert part.id is None
683 683 part.id = len(self._parts) # very cheap counter
684 684 self._parts.append(part)
685 685
686 686 def newpart(self, typeid, *args, **kwargs):
687 687 """create a new part and add it to the containers
688 688
689 689 The part is directly added to the container. For now, this means
690 690 that any failure to properly initialize the part after calling
691 691 ``newpart`` should result in a failure of the whole bundling process.
692 692
693 693 You can still fall back to manually creating and adding a part if you
694 694 need better control."""
695 695 part = bundlepart(typeid, *args, **kwargs)
696 696 self.addpart(part)
697 697 return part
698 698
699 699 # methods used to generate the bundle2 stream
700 700 def getchunks(self):
701 701 if self.ui.debugflag:
702 702 msg = [b'bundle2-output-bundle: "%s",' % self._magicstring]
703 703 if self._params:
704 704 msg.append(b' (%i params)' % len(self._params))
705 705 msg.append(b' %i parts total\n' % len(self._parts))
706 706 self.ui.debug(b''.join(msg))
707 707 outdebug(self.ui, b'start emission of %s stream' % self._magicstring)
708 708 yield self._magicstring
709 709 param = self._paramchunk()
710 710 outdebug(self.ui, b'bundle parameter: %s' % param)
711 711 yield _pack(_fstreamparamsize, len(param))
712 712 if param:
713 713 yield param
714 714 for chunk in self._compengine.compressstream(
715 715 self._getcorechunk(), self._compopts
716 716 ):
717 717 yield chunk
718 718
719 719 def _paramchunk(self):
720 720 """return a encoded version of all stream parameters"""
721 721 blocks = []
722 722 for par, value in self._params:
723 723 par = urlreq.quote(par)
724 724 if value is not None:
725 725 value = urlreq.quote(value)
726 726 par = b'%s=%s' % (par, value)
727 727 blocks.append(par)
728 728 return b' '.join(blocks)
729 729
730 730 def _getcorechunk(self):
731 731 """yield chunk for the core part of the bundle
732 732
733 733 (all but headers and parameters)"""
734 734 outdebug(self.ui, b'start of parts')
735 735 for part in self._parts:
736 736 outdebug(self.ui, b'bundle part: "%s"' % part.type)
737 737 for chunk in part.getchunks(ui=self.ui):
738 738 yield chunk
739 739 outdebug(self.ui, b'end of bundle')
740 740 yield _pack(_fpartheadersize, 0)
741 741
742 742 def salvageoutput(self):
743 743 """return a list with a copy of all output parts in the bundle
744 744
745 745 This is meant to be used during error handling to make sure we preserve
746 746 server output"""
747 747 salvaged = []
748 748 for part in self._parts:
749 749 if part.type.startswith(b'output'):
750 750 salvaged.append(part.copy())
751 751 return salvaged
752 752
753 753
754 754 class unpackermixin(object):
755 755 """A mixin to extract bytes and struct data from a stream"""
756 756
757 757 def __init__(self, fp):
758 758 self._fp = fp
759 759
760 760 def _unpack(self, format):
761 761 """unpack this struct format from the stream
762 762
763 763 This method is meant for internal usage by the bundle2 protocol only.
764 764 It directly manipulates the low level stream, including bundle2 level
765 765 instructions.
766 766
767 767 Do not use it to implement higher-level logic or methods."""
768 768 data = self._readexact(struct.calcsize(format))
769 769 return _unpack(format, data)
770 770
771 771 def _readexact(self, size):
772 772 """read exactly <size> bytes from the stream
773 773
774 774 This method is meant for internal usage by the bundle2 protocol only.
775 775 It directly manipulates the low level stream, including bundle2 level
776 776 instructions.
777 777
778 778 Do not use it to implement higher-level logic or methods."""
779 779 return changegroup.readexactly(self._fp, size)
780 780
781 781
782 782 def getunbundler(ui, fp, magicstring=None):
783 783 """return a valid unbundler object for a given magicstring"""
784 784 if magicstring is None:
785 785 magicstring = changegroup.readexactly(fp, 4)
786 786 magic, version = magicstring[0:2], magicstring[2:4]
787 787 if magic != b'HG':
788 788 ui.debug(
789 789 b"error: invalid magic: %r (version %r), should be 'HG'\n"
790 790 % (magic, version)
791 791 )
792 792 raise error.Abort(_(b'not a Mercurial bundle'))
793 793 unbundlerclass = formatmap.get(version)
794 794 if unbundlerclass is None:
795 795 raise error.Abort(_(b'unknown bundle version %s') % version)
796 796 unbundler = unbundlerclass(ui, fp)
797 797 indebug(ui, b'start processing of %s stream' % magicstring)
798 798 return unbundler
799 799
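# Typical consumption, sketched (part dispatch omitted):
#
#   unbundler = getunbundler(ui, fp)
#   for part in unbundler.iterparts():
#       ...  # inspect part.type / part.params; iterparts() consumes parts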
800 800
801 801 class unbundle20(unpackermixin):
802 802 """interpret a bundle2 stream
803 803
804 804 This class is fed with a binary stream and yields parts through its
805 805 `iterparts` method."""
806 806
807 807 _magicstring = b'HG20'
808 808
809 809 def __init__(self, ui, fp):
810 810 """If header is specified, we do not read it out of the stream."""
811 811 self.ui = ui
812 812 self._compengine = util.compengines.forbundletype(b'UN')
813 813 self._compressed = None
814 814 super(unbundle20, self).__init__(fp)
815 815
816 816 @util.propertycache
817 817 def params(self):
818 818 """dictionary of stream level parameters"""
819 819 indebug(self.ui, b'reading bundle2 stream parameters')
820 820 params = {}
821 821 paramssize = self._unpack(_fstreamparamsize)[0]
822 822 if paramssize < 0:
823 823 raise error.BundleValueError(
824 824 b'negative bundle param size: %i' % paramssize
825 825 )
826 826 if paramssize:
827 827 params = self._readexact(paramssize)
828 828 params = self._processallparams(params)
829 829 return params
830 830
831 831 def _processallparams(self, paramsblock):
832 832 """ """
833 833 params = util.sortdict()
834 834 for p in paramsblock.split(b' '):
835 835 p = p.split(b'=', 1)
836 836 p = [urlreq.unquote(i) for i in p]
837 837 if len(p) < 2:
838 838 p.append(None)
839 839 self._processparam(*p)
840 840 params[p[0]] = p[1]
841 841 return params
842 842
843 843 def _processparam(self, name, value):
844 844 """process a parameter, applying its effect if needed
845 845
846 846 Parameters starting with a lower case letter are advisory and will be
847 847 ignored when unknown. For those starting with an upper case letter, this
848 848 function will raise a KeyError when unknown.
849 849 
850 850 Note: no options are currently supported. Any input will either be
851 851 ignored or fail.
852 852 """
853 853 if not name:
854 854 raise ValueError('empty parameter name')
855 855 if name[0:1] not in pycompat.bytestr(
856 856 string.ascii_letters # pytype: disable=wrong-arg-types
857 857 ):
858 858 raise ValueError('non letter first character: %s' % name)
859 859 try:
860 860 handler = b2streamparamsmap[name.lower()]
861 861 except KeyError:
862 862 if name[0:1].islower():
863 863 indebug(self.ui, b"ignoring unknown parameter %s" % name)
864 864 else:
865 865 raise error.BundleUnknownFeatureError(params=(name,))
866 866 else:
867 867 handler(self, name, value)
868 868
869 869 def _forwardchunks(self):
870 870 """utility to transfer a bundle2 as binary
871 871
872 872 This is made necessary by the fact the 'getbundle' command over 'ssh'
873 873 has no way to know when the reply ends, relying on the bundle to be
874 874 interpreted to find its end. This is terrible and we are sorry, but we
875 875 needed to move forward to get general delta enabled.
876 876 """
877 877 yield self._magicstring
878 878 assert 'params' not in vars(self)
879 879 paramssize = self._unpack(_fstreamparamsize)[0]
880 880 if paramssize < 0:
881 881 raise error.BundleValueError(
882 882 b'negative bundle param size: %i' % paramssize
883 883 )
884 884 if paramssize:
885 885 params = self._readexact(paramssize)
886 886 self._processallparams(params)
887 887 # The payload itself is decompressed below, so drop
888 888 # the compression parameter passed down to compensate.
889 889 outparams = []
890 890 for p in params.split(b' '):
891 891 k, v = p.split(b'=', 1)
892 892 if k.lower() != b'compression':
893 893 outparams.append(p)
894 894 outparams = b' '.join(outparams)
895 895 yield _pack(_fstreamparamsize, len(outparams))
896 896 yield outparams
897 897 else:
898 898 yield _pack(_fstreamparamsize, paramssize)
899 899 # From there, payload might need to be decompressed
900 900 self._fp = self._compengine.decompressorreader(self._fp)
901 901 emptycount = 0
902 902 while emptycount < 2:
903 903 # so we can brainlessly loop
904 904 assert _fpartheadersize == _fpayloadsize
905 905 size = self._unpack(_fpartheadersize)[0]
906 906 yield _pack(_fpartheadersize, size)
907 907 if size:
908 908 emptycount = 0
909 909 else:
910 910 emptycount += 1
911 911 continue
912 912 if size == flaginterrupt:
913 913 continue
914 914 elif size < 0:
915 915 raise error.BundleValueError(b'negative chunk size: %i' % size)
916 916 yield self._readexact(size)
917 917
918 918 def iterparts(self, seekable=False):
919 919 """yield all parts contained in the stream"""
920 920 cls = seekableunbundlepart if seekable else unbundlepart
921 921 # make sure params have been loaded
922 922 self.params
923 923 # From there, the payload needs to be decompressed
924 924 self._fp = self._compengine.decompressorreader(self._fp)
925 925 indebug(self.ui, b'start extraction of bundle2 parts')
926 926 headerblock = self._readpartheader()
927 927 while headerblock is not None:
928 928 part = cls(self.ui, headerblock, self._fp)
929 929 yield part
930 930 # Ensure part is fully consumed so we can start reading the next
931 931 # part.
932 932 part.consume()
933 933
934 934 headerblock = self._readpartheader()
935 935 indebug(self.ui, b'end of bundle2 stream')
936 936
937 937 def _readpartheader(self):
938 938 """reads a part header size and return the bytes blob
939 939
940 940 returns None if empty"""
941 941 headersize = self._unpack(_fpartheadersize)[0]
942 942 if headersize < 0:
943 943 raise error.BundleValueError(
944 944 b'negative part header size: %i' % headersize
945 945 )
946 946 indebug(self.ui, b'part header size: %i' % headersize)
947 947 if headersize:
948 948 return self._readexact(headersize)
949 949 return None
950 950
951 951 def compressed(self):
952 952 self.params # load params
953 953 return self._compressed
954 954
955 955 def close(self):
956 956 """close underlying file"""
957 957 if util.safehasattr(self._fp, 'close'):
958 958 return self._fp.close()
959 959
960 960
961 961 formatmap = {b'20': unbundle20}
962 962
963 963 b2streamparamsmap = {}
964 964
965 965
966 966 def b2streamparamhandler(name):
967 967 """register a handler for a stream level parameter"""
968 968
969 969 def decorator(func):
970 970 assert name not in b2streamparamsmap
971 971 b2streamparamsmap[name] = func
972 972 return func
973 973
974 974 return decorator
975 975
976 976
977 977 @b2streamparamhandler(b'compression')
978 978 def processcompression(unbundler, param, value):
979 979 """read compression parameter and install payload decompression"""
980 980 if value not in util.compengines.supportedbundletypes:
981 981 raise error.BundleUnknownFeatureError(params=(param,), values=(value,))
982 982 unbundler._compengine = util.compengines.forbundletype(value)
983 983 if value is not None:
984 984 unbundler._compressed = True
985 985
986 986
987 987 class bundlepart(object):
988 988 """A bundle2 part contains application level payload
989 989
990 990 The part `type` is used to route the part to the application level
991 991 handler.
992 992
993 993 The part payload is contained in ``part.data``. It could be raw bytes or a
994 994 generator of byte chunks.
995 995
996 996 You can add parameters to the part using the ``addparam`` method.
997 997 Parameters can be either mandatory (default) or advisory. Remote side
998 998 should be able to safely ignore the advisory ones.
999 999
1000 1000 Neither data nor parameters can be modified after generation has begun.
1001 1001 """
1002 1002
1003 1003 def __init__(
1004 1004 self,
1005 1005 parttype,
1006 1006 mandatoryparams=(),
1007 1007 advisoryparams=(),
1008 1008 data=b'',
1009 1009 mandatory=True,
1010 1010 ):
1011 1011 validateparttype(parttype)
1012 1012 self.id = None
1013 1013 self.type = parttype
1014 1014 self._data = data
1015 1015 self._mandatoryparams = list(mandatoryparams)
1016 1016 self._advisoryparams = list(advisoryparams)
1017 1017 # checking for duplicated entries
1018 1018 self._seenparams = set()
1019 1019 for pname, __ in self._mandatoryparams + self._advisoryparams:
1020 1020 if pname in self._seenparams:
1021 1021 raise error.ProgrammingError(b'duplicated params: %s' % pname)
1022 1022 self._seenparams.add(pname)
1023 1023 # status of the part's generation:
1024 1024 # - None: not started,
1025 1025 # - False: currently generated,
1026 1026 # - True: generation done.
1027 1027 self._generated = None
1028 1028 self.mandatory = mandatory
1029 1029
1030 1030 def __repr__(self):
1031 1031 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
1032 1032 return '<%s object at %x; id: %s; type: %s; mandatory: %s>' % (
1033 1033 cls,
1034 1034 id(self),
1035 1035 self.id,
1036 1036 self.type,
1037 1037 self.mandatory,
1038 1038 )
1039 1039
1040 1040 def copy(self):
1041 1041 """return a copy of the part
1042 1042
1043 1043 The new part has the very same content but no partid assigned yet.
1044 1044 Parts with generated data cannot be copied."""
1045 1045 assert not util.safehasattr(self.data, 'next')
1046 1046 return self.__class__(
1047 1047 self.type,
1048 1048 self._mandatoryparams,
1049 1049 self._advisoryparams,
1050 1050 self._data,
1051 1051 self.mandatory,
1052 1052 )
1053 1053
1054 1054 # methods used to define the part content
1055 1055 @property
1056 1056 def data(self):
1057 1057 return self._data
1058 1058
1059 1059 @data.setter
1060 1060 def data(self, data):
1061 1061 if self._generated is not None:
1062 1062 raise error.ReadOnlyPartError(b'part is being generated')
1063 1063 self._data = data
1064 1064
1065 1065 @property
1066 1066 def mandatoryparams(self):
1067 1067 # make it an immutable tuple to force people through ``addparam``
1068 1068 return tuple(self._mandatoryparams)
1069 1069
1070 1070 @property
1071 1071 def advisoryparams(self):
1072 1072 # make it an immutable tuple to force people through ``addparam``
1073 1073 return tuple(self._advisoryparams)
1074 1074
1075 1075 def addparam(self, name, value=b'', mandatory=True):
1076 1076 """add a parameter to the part
1077 1077
1078 1078 If 'mandatory' is set to True, the remote handler must claim support
1079 1079 for this parameter or the unbundling will be aborted.
1080 1080
1081 1081 The 'name' and 'value' cannot exceed 255 bytes each.
1082 1082 """
1083 1083 if self._generated is not None:
1084 1084 raise error.ReadOnlyPartError(b'part is being generated')
1085 1085 if name in self._seenparams:
1086 1086 raise ValueError(b'duplicated params: %s' % name)
1087 1087 self._seenparams.add(name)
1088 1088 params = self._advisoryparams
1089 1089 if mandatory:
1090 1090 params = self._mandatoryparams
1091 1091 params.append((name, value))
1092 1092
1093 1093 # methods used to generate the bundle2 stream
1094 1094 def getchunks(self, ui):
1095 1095 if self._generated is not None:
1096 1096 raise error.ProgrammingError(b'part can only be consumed once')
1097 1097 self._generated = False
1098 1098
1099 1099 if ui.debugflag:
1100 1100 msg = [b'bundle2-output-part: "%s"' % self.type]
1101 1101 if not self.mandatory:
1102 1102 msg.append(b' (advisory)')
1103 1103 nbmp = len(self.mandatoryparams)
1104 1104 nbap = len(self.advisoryparams)
1105 1105 if nbmp or nbap:
1106 1106 msg.append(b' (params:')
1107 1107 if nbmp:
1108 1108 msg.append(b' %i mandatory' % nbmp)
1109 1109 if nbap:
1110 1110 msg.append(b' %i advisory' % nbap)
1111 1111 msg.append(b')')
1112 1112 if not self.data:
1113 1113 msg.append(b' empty payload')
1114 1114 elif util.safehasattr(self.data, 'next') or util.safehasattr(
1115 1115 self.data, b'__next__'
1116 1116 ):
1117 1117 msg.append(b' streamed payload')
1118 1118 else:
1119 1119 msg.append(b' %i bytes payload' % len(self.data))
1120 1120 msg.append(b'\n')
1121 1121 ui.debug(b''.join(msg))
1122 1122
1123 1123 #### header
1124 1124 if self.mandatory:
1125 1125 parttype = self.type.upper()
1126 1126 else:
1127 1127 parttype = self.type.lower()
1128 1128 outdebug(ui, b'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1129 1129 ## parttype
1130 1130 header = [
1131 1131 _pack(_fparttypesize, len(parttype)),
1132 1132 parttype,
1133 1133 _pack(_fpartid, self.id),
1134 1134 ]
1135 1135 ## parameters
1136 1136 # count
1137 1137 manpar = self.mandatoryparams
1138 1138 advpar = self.advisoryparams
1139 1139 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1140 1140 # size
1141 1141 parsizes = []
1142 1142 for key, value in manpar:
1143 1143 parsizes.append(len(key))
1144 1144 parsizes.append(len(value))
1145 1145 for key, value in advpar:
1146 1146 parsizes.append(len(key))
1147 1147 parsizes.append(len(value))
1148 1148 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1149 1149 header.append(paramsizes)
1150 1150 # key, value
1151 1151 for key, value in manpar:
1152 1152 header.append(key)
1153 1153 header.append(value)
1154 1154 for key, value in advpar:
1155 1155 header.append(key)
1156 1156 header.append(value)
1157 1157 ## finalize header
1158 1158 try:
1159 1159 headerchunk = b''.join(header)
1160 1160 except TypeError:
1161 1161 raise TypeError(
1162 1162 'Found a non-bytes trying to '
1163 1163 'build bundle part header: %r' % header
1164 1164 )
1165 1165 outdebug(ui, b'header chunk size: %i' % len(headerchunk))
1166 1166 yield _pack(_fpartheadersize, len(headerchunk))
1167 1167 yield headerchunk
1168 1168 ## payload
1169 1169 try:
1170 1170 for chunk in self._payloadchunks():
1171 1171 outdebug(ui, b'payload chunk size: %i' % len(chunk))
1172 1172 yield _pack(_fpayloadsize, len(chunk))
1173 1173 yield chunk
1174 1174 except GeneratorExit:
1175 1175 # GeneratorExit means that nobody is listening for our
1176 1176 # results anyway, so just bail quickly rather than trying
1177 1177 # to produce an error part.
1178 1178 ui.debug(b'bundle2-generatorexit\n')
1179 1179 raise
1180 1180 except BaseException as exc:
1181 1181 bexc = stringutil.forcebytestr(exc)
1182 1182 # backup exception data for later
1183 1183 ui.debug(
1184 1184 b'bundle2-input-stream-interrupt: encoding exception %s' % bexc
1185 1185 )
1186 1186 tb = sys.exc_info()[2]
1187 1187 msg = b'unexpected error: %s' % bexc
1188 1188 interpart = bundlepart(
1189 1189 b'error:abort', [(b'message', msg)], mandatory=False
1190 1190 )
1191 1191 interpart.id = 0
1192 1192 yield _pack(_fpayloadsize, -1)
1193 1193 for chunk in interpart.getchunks(ui=ui):
1194 1194 yield chunk
1195 1195 outdebug(ui, b'closing payload chunk')
1196 1196 # abort current part payload
1197 1197 yield _pack(_fpayloadsize, 0)
1198 1198 pycompat.raisewithtb(exc, tb)
1199 1199 # end of payload
1200 1200 outdebug(ui, b'closing payload chunk')
1201 1201 yield _pack(_fpayloadsize, 0)
1202 1202 self._generated = True
1203 1203
1204 1204 def _payloadchunks(self):
1205 1205 """yield chunks of a the part payload
1206 1206
1207 1207 Exists to handle the different methods to provide data to a part."""
1208 1208 # we only support fixed size data now.
1209 1209 # This will be improved in the future.
1210 1210 if util.safehasattr(self.data, 'next') or util.safehasattr(
1211 1211 self.data, b'__next__'
1212 1212 ):
1213 1213 buff = util.chunkbuffer(self.data)
1214 1214 chunk = buff.read(preferedchunksize)
1215 1215 while chunk:
1216 1216 yield chunk
1217 1217 chunk = buff.read(preferedchunksize)
1218 1218 elif len(self.data):
1219 1219 yield self.data
1220 1220
1221 1221
1222 1222 flaginterrupt = -1
1223 1223
1224 1224
1225 1225 class interrupthandler(unpackermixin):
1226 1226 """read one part and process it with restricted capability
1227 1227
1228 1228 This allows transmitting exceptions raised on the producer side during part
1229 1229 iteration while the consumer is reading a part.
1230 1230 
1231 1231 Parts processed in this manner only have access to a ui object."""
1232 1232
1233 1233 def __init__(self, ui, fp):
1234 1234 super(interrupthandler, self).__init__(fp)
1235 1235 self.ui = ui
1236 1236
1237 1237 def _readpartheader(self):
1238 1238 """reads a part header size and return the bytes blob
1239 1239
1240 1240 returns None if empty"""
1241 1241 headersize = self._unpack(_fpartheadersize)[0]
1242 1242 if headersize < 0:
1243 1243 raise error.BundleValueError(
1244 1244 b'negative part header size: %i' % headersize
1245 1245 )
1246 1246 indebug(self.ui, b'part header size: %i' % headersize)
1247 1247 if headersize:
1248 1248 return self._readexact(headersize)
1249 1249 return None
1250 1250
1251 1251 def __call__(self):
1252 1252
1253 1253 self.ui.debug(
1254 1254 b'bundle2-input-stream-interrupt: opening out of band context\n'
1255 1255 )
1256 1256 indebug(self.ui, b'bundle2 stream interruption, looking for a part.')
1257 1257 headerblock = self._readpartheader()
1258 1258 if headerblock is None:
1259 1259 indebug(self.ui, b'no part found during interruption.')
1260 1260 return
1261 1261 part = unbundlepart(self.ui, headerblock, self._fp)
1262 1262 op = interruptoperation(self.ui)
1263 1263 hardabort = False
1264 1264 try:
1265 1265 _processpart(op, part)
1266 1266 except (SystemExit, KeyboardInterrupt):
1267 1267 hardabort = True
1268 1268 raise
1269 1269 finally:
1270 1270 if not hardabort:
1271 1271 part.consume()
1272 1272 self.ui.debug(
1273 1273 b'bundle2-input-stream-interrupt: closing out of band context\n'
1274 1274 )
1275 1275
1276 1276
1277 1277 class interruptoperation(object):
1278 1278 """A limited operation to be use by part handler during interruption
1279 1279
1280 1280 It only have access to an ui object.
1281 1281 """
1282 1282
1283 1283 def __init__(self, ui):
1284 1284 self.ui = ui
1285 1285 self.reply = None
1286 1286 self.captureoutput = False
1287 1287
1288 1288 @property
1289 1289 def repo(self):
1290 1290 raise error.ProgrammingError(b'no repo access from stream interruption')
1291 1291
1292 1292 def gettransaction(self):
1293 1293 raise TransactionUnavailable(b'no repo access from stream interruption')
1294 1294
1295 1295
1296 1296 def decodepayloadchunks(ui, fh):
1297 1297 """Reads bundle2 part payload data into chunks.
1298 1298
1299 1299 Part payload data consists of framed chunks. This function takes
1300 1300 a file handle and emits those chunks.
1301 1301 """
1302 1302 dolog = ui.configbool(b'devel', b'bundle2.debug')
1303 1303 debug = ui.debug
1304 1304
1305 1305 headerstruct = struct.Struct(_fpayloadsize)
1306 1306 headersize = headerstruct.size
1307 1307 unpack = headerstruct.unpack
1308 1308
1309 1309 readexactly = changegroup.readexactly
1310 1310 read = fh.read
1311 1311
1312 1312 chunksize = unpack(readexactly(fh, headersize))[0]
1313 1313 indebug(ui, b'payload chunk size: %i' % chunksize)
1314 1314
1315 1315 # changegroup.readexactly() is inlined below for performance.
1316 1316 while chunksize:
1317 1317 if chunksize >= 0:
1318 1318 s = read(chunksize)
1319 1319 if len(s) < chunksize:
1320 1320 raise error.Abort(
1321 1321 _(
1322 1322 b'stream ended unexpectedly '
1323 1323 b'(got %d bytes, expected %d)'
1324 1324 )
1325 1325 % (len(s), chunksize)
1326 1326 )
1327 1327
1328 1328 yield s
1329 1329 elif chunksize == flaginterrupt:
1330 1330 # Interrupt "signal" detected. The regular stream is interrupted
1331 1331 # and a bundle2 part follows. Consume it.
1332 1332 interrupthandler(ui, fh)()
1333 1333 else:
1334 1334 raise error.BundleValueError(
1335 1335 b'negative payload chunk size: %s' % chunksize
1336 1336 )
1337 1337
1338 1338 s = read(headersize)
1339 1339 if len(s) < headersize:
1340 1340 raise error.Abort(
1341 1341 _(b'stream ended unexpectedly (got %d bytes, expected %d)')
1342 1342 % (len(s), headersize)
1343 1343 )
1344 1344
1345 1345 chunksize = unpack(s)[0]
1346 1346
1347 1347 # indebug() inlined for performance.
1348 1348 if dolog:
1349 1349 debug(b'bundle2-input: payload chunk size: %i\n' % chunksize)
1350 1350
1351 1351
1352 1352 class unbundlepart(unpackermixin):
1353 1353 """a bundle part read from a bundle"""
1354 1354
1355 1355 def __init__(self, ui, header, fp):
1356 1356 super(unbundlepart, self).__init__(fp)
1357 1357 self._seekable = util.safehasattr(fp, 'seek') and util.safehasattr(
1358 1358 fp, b'tell'
1359 1359 )
1360 1360 self.ui = ui
1361 1361 # unbundle state attr
1362 1362 self._headerdata = header
1363 1363 self._headeroffset = 0
1364 1364 self._initialized = False
1365 1365 self.consumed = False
1366 1366 # part data
1367 1367 self.id = None
1368 1368 self.type = None
1369 1369 self.mandatoryparams = None
1370 1370 self.advisoryparams = None
1371 1371 self.params = None
1372 1372 self.mandatorykeys = ()
1373 1373 self._readheader()
1374 1374 self._mandatory = None
1375 1375 self._pos = 0
1376 1376
1377 1377 def _fromheader(self, size):
1378 1378 """return the next <size> byte from the header"""
1379 1379 offset = self._headeroffset
1380 1380 data = self._headerdata[offset : (offset + size)]
1381 1381 self._headeroffset = offset + size
1382 1382 return data
1383 1383
1384 1384 def _unpackheader(self, format):
1385 1385 """read given format from header
1386 1386
1387 1387 This automatically computes the size of the format to read."""
1388 1388 data = self._fromheader(struct.calcsize(format))
1389 1389 return _unpack(format, data)
1390 1390
1391 1391 def _initparams(self, mandatoryparams, advisoryparams):
1392 1392 """internal function to setup all logic related parameters"""
1393 1393 # make it read only to prevent people touching it by mistake.
1394 1394 self.mandatoryparams = tuple(mandatoryparams)
1395 1395 self.advisoryparams = tuple(advisoryparams)
1396 1396 # user friendly UI
1397 1397 self.params = util.sortdict(self.mandatoryparams)
1398 1398 self.params.update(self.advisoryparams)
1399 1399 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1400 1400
1401 1401 def _readheader(self):
1402 1402 """read the header and setup the object"""
1403 1403 typesize = self._unpackheader(_fparttypesize)[0]
1404 1404 self.type = self._fromheader(typesize)
1405 1405 indebug(self.ui, b'part type: "%s"' % self.type)
1406 1406 self.id = self._unpackheader(_fpartid)[0]
1407 1407 indebug(self.ui, b'part id: "%s"' % pycompat.bytestr(self.id))
1408 1408 # extract mandatory bit from type
1409 1409 self.mandatory = self.type != self.type.lower()
1410 1410 self.type = self.type.lower()
1411 1411 ## reading parameters
1412 1412 # param count
1413 1413 mancount, advcount = self._unpackheader(_fpartparamcount)
1414 1414 indebug(self.ui, b'part parameters: %i' % (mancount + advcount))
1415 1415 # param size
1416 1416 fparamsizes = _makefpartparamsizes(mancount + advcount)
1417 1417 paramsizes = self._unpackheader(fparamsizes)
1418 1418 # make it a list of pairs again
1419 1419 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1420 1420 # split mandatory from advisory
1421 1421 mansizes = paramsizes[:mancount]
1422 1422 advsizes = paramsizes[mancount:]
1423 1423 # retrieve param value
1424 1424 manparams = []
1425 1425 for key, value in mansizes:
1426 1426 manparams.append((self._fromheader(key), self._fromheader(value)))
1427 1427 advparams = []
1428 1428 for key, value in advsizes:
1429 1429 advparams.append((self._fromheader(key), self._fromheader(value)))
1430 1430 self._initparams(manparams, advparams)
1431 1431 ## part payload
1432 1432 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1433 1433 # the header has been fully read; record it
1434 1434 self._initialized = True
1435 1435
1436 1436 def _payloadchunks(self):
1437 1437 """Generator of decoded chunks in the payload."""
1438 1438 return decodepayloadchunks(self.ui, self._fp)
1439 1439
1440 1440 def consume(self):
1441 1441 """Read the part payload until completion.
1442 1442
1443 1443 By consuming the part data, the underlying stream read offset will
1444 1444 be advanced to the next part (or end of stream).
1445 1445 """
1446 1446 if self.consumed:
1447 1447 return
1448 1448
1449 1449 chunk = self.read(32768)
1450 1450 while chunk:
1451 1451 self._pos += len(chunk)
1452 1452 chunk = self.read(32768)
1453 1453
1454 1454 def read(self, size=None):
1455 1455 """read payload data"""
1456 1456 if not self._initialized:
1457 1457 self._readheader()
1458 1458 if size is None:
1459 1459 data = self._payloadstream.read()
1460 1460 else:
1461 1461 data = self._payloadstream.read(size)
1462 1462 self._pos += len(data)
1463 1463 if size is None or len(data) < size:
1464 1464 if not self.consumed and self._pos:
1465 1465 self.ui.debug(
1466 1466 b'bundle2-input-part: total payload size %i\n' % self._pos
1467 1467 )
1468 1468 self.consumed = True
1469 1469 return data
1470 1470
1471 1471
1472 1472 class seekableunbundlepart(unbundlepart):
1473 1473 """A bundle2 part in a bundle that is seekable.
1474 1474
1475 1475 Regular ``unbundlepart`` instances can only be read once. This class
1476 1476 extends ``unbundlepart`` to enable bi-directional seeking within the
1477 1477 part.
1478 1478
1479 1479 Bundle2 part data consists of framed chunks. Offsets when seeking
1480 1480 refer to the decoded data, not the offsets in the underlying bundle2
1481 1481 stream.
1482 1482
1483 1483 To facilitate quickly seeking within the decoded data, instances of this
1484 1484 class maintain a mapping between offsets in the underlying stream and
1485 1485 the decoded payload. This mapping will consume memory in proportion
1486 1486 to the number of chunks within the payload (which almost certainly
1487 1487 increases in proportion with the size of the part).
1488 1488 """
1489 1489
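# For example, a _chunkindex of [(0, 42), (32768, 32814)] records that
# decoded payload offset 32768 starts at underlying file offset 32814
# (illustrative numbers).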
1490 1490 def __init__(self, ui, header, fp):
1491 1491 # (payload, file) offsets for chunk starts.
1492 1492 self._chunkindex = []
1493 1493
1494 1494 super(seekableunbundlepart, self).__init__(ui, header, fp)
1495 1495
1496 1496 def _payloadchunks(self, chunknum=0):
1497 1497 '''seek to specified chunk and start yielding data'''
1498 1498 if len(self._chunkindex) == 0:
1499 1499 assert chunknum == 0, b'Must start with chunk 0'
1500 1500 self._chunkindex.append((0, self._tellfp()))
1501 1501 else:
1502 1502 assert chunknum < len(self._chunkindex), (
1503 1503 b'Unknown chunk %d' % chunknum
1504 1504 )
1505 1505 self._seekfp(self._chunkindex[chunknum][1])
1506 1506
1507 1507 pos = self._chunkindex[chunknum][0]
1508 1508
1509 1509 for chunk in decodepayloadchunks(self.ui, self._fp):
1510 1510 chunknum += 1
1511 1511 pos += len(chunk)
1512 1512 if chunknum == len(self._chunkindex):
1513 1513 self._chunkindex.append((pos, self._tellfp()))
1514 1514
1515 1515 yield chunk
1516 1516
1517 1517 def _findchunk(self, pos):
1518 1518 '''for a given payload position, return a chunk number and offset'''
1519 1519 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1520 1520 if ppos == pos:
1521 1521 return chunk, 0
1522 1522 elif ppos > pos:
1523 1523 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1524 1524 raise ValueError(b'Unknown chunk')
1525 1525
1526 1526 def tell(self):
1527 1527 return self._pos
1528 1528
1529 1529 def seek(self, offset, whence=os.SEEK_SET):
1530 1530 if whence == os.SEEK_SET:
1531 1531 newpos = offset
1532 1532 elif whence == os.SEEK_CUR:
1533 1533 newpos = self._pos + offset
1534 1534 elif whence == os.SEEK_END:
1535 1535 if not self.consumed:
1536 1536 # Can't use self.consume() here because it advances self._pos.
1537 1537 chunk = self.read(32768)
1538 1538 while chunk:
1539 1539 chunk = self.read(32768)
1540 1540 newpos = self._chunkindex[-1][0] - offset
1541 1541 else:
1542 1542 raise ValueError(b'Unknown whence value: %r' % (whence,))
1543 1543
1544 1544 if newpos > self._chunkindex[-1][0] and not self.consumed:
1545 1545 # Can't use self.consume() here because it advances self._pos.
1546 1546 chunk = self.read(32768)
1547 1547 while chunk:
1548 1548 chunk = self.read(32768)
1549 1549
1550 1550 if not 0 <= newpos <= self._chunkindex[-1][0]:
1551 1551 raise ValueError(b'Offset out of range')
1552 1552
1553 1553 if self._pos != newpos:
1554 1554 chunk, internaloffset = self._findchunk(newpos)
1555 1555 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1556 1556 adjust = self.read(internaloffset)
1557 1557 if len(adjust) != internaloffset:
1558 1558 raise error.Abort(_(b'Seek failed\n'))
1559 1559 self._pos = newpos
1560 1560
1561 1561 def _seekfp(self, offset, whence=0):
1562 1562 """move the underlying file pointer
1563 1563
1564 1564 This method is meant for internal usage by the bundle2 protocol only.
1565 1565 It directly manipulates the low level stream, including bundle2 level
1566 1566 instructions.
1567 1567
1568 1568 Do not use it to implement higher-level logic or methods."""
1569 1569 if self._seekable:
1570 1570 return self._fp.seek(offset, whence)
1571 1571 else:
1572 1572 raise NotImplementedError(_(b'File pointer is not seekable'))
1573 1573
1574 1574 def _tellfp(self):
1575 1575 """return the file offset, or None if file is not seekable
1576 1576
1577 1577 This method is meant for internal usage by the bundle2 protocol only.
1578 1578 It directly manipulates the low level stream, including bundle2 level
1579 1579 instructions.
1580 1580
1581 1581 Do not use it to implement higher-level logic or methods."""
1582 1582 if self._seekable:
1583 1583 try:
1584 1584 return self._fp.tell()
1585 1585 except IOError as e:
1586 1586 if e.errno == errno.ESPIPE:
1587 1587 self._seekable = False
1588 1588 else:
1589 1589 raise
1590 1590 return None
1591 1591
1592 1592
1593 1593 # These are only the static capabilities.
1594 1594 # Check the 'getrepocaps' function for the rest.
1595 1595 capabilities = {
1596 1596 b'HG20': (),
1597 1597 b'bookmarks': (),
1598 1598 b'error': (b'abort', b'unsupportedcontent', b'pushraced', b'pushkey'),
1599 1599 b'listkeys': (),
1600 1600 b'pushkey': (),
1601 1601 b'digests': tuple(sorted(util.DIGESTS.keys())),
1602 1602 b'remote-changegroup': (b'http', b'https'),
1603 1603 b'hgtagsfnodes': (),
1604 1604 b'phases': (b'heads',),
1605 1605 b'stream': (b'v2',),
1606 1606 }
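# A sketch of how this dictionary travels over the wire, assuming the
# encodecaps()/decodecaps() helpers defined earlier in this file: each
# entry becomes a urlquoted `name=value1,value2` line (bare `name` when
# the value tuple is empty) and the lines are joined with newlines.
#
#   encodecaps({b'HG20': (), b'error': (b'abort',)})
#   -> b'HG20\nerror=abort'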
1607 1607
1608 1608
1609 1609 def getrepocaps(repo, allowpushback=False, role=None):
1610 1610 """return the bundle2 capabilities for a given repo
1611 1611
1612 1612 Exists to allow extensions (like evolution) to mutate the capabilities.
1613 1613
1614 1614 The returned value is used for servers advertising their capabilities as
1615 1615 well as clients advertising their capabilities to servers as part of
1616 1616 bundle2 requests. The ``role`` argument specifies which is which.
1617 1617 """
1618 1618 if role not in (b'client', b'server'):
1619 1619 raise error.ProgrammingError(b'role argument must be client or server')
1620 1620
1621 1621 caps = capabilities.copy()
1622 1622 caps[b'changegroup'] = tuple(
1623 1623 sorted(changegroup.supportedincomingversions(repo))
1624 1624 )
1625 1625 if obsolete.isenabled(repo, obsolete.exchangeopt):
1626 1626 supportedformat = tuple(b'V%i' % v for v in obsolete.formats)
1627 1627 caps[b'obsmarkers'] = supportedformat
1628 1628 if allowpushback:
1629 1629 caps[b'pushback'] = ()
1630 1630 cpmode = repo.ui.config(b'server', b'concurrent-push-mode')
1631 1631 if cpmode == b'check-related':
1632 1632 caps[b'checkheads'] = (b'related',)
1633 1633 if b'phases' in repo.ui.configlist(b'devel', b'legacy.exchange'):
1634 1634 caps.pop(b'phases')
1635 1635
1636 1636 # Don't advertise stream clone support in server mode if not configured.
1637 1637 if role == b'server':
1638 1638 streamsupported = repo.ui.configbool(
1639 1639 b'server', b'uncompressed', untrusted=True
1640 1640 )
1641 1641 featuresupported = repo.ui.configbool(b'server', b'bundle2.stream')
1642 1642
1643 1643 if not streamsupported or not featuresupported:
1644 1644 caps.pop(b'stream')
1645 1645 # Else always advertise support on client, because payload support
1646 1646 # should always be advertised.
1647 1647
1648 1648 # 'rev-branch-cache' is no longer advertised, but still supported
1649 1649 # for legacy clients.
1650 1650
1651 1651 return caps
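# Illustrative use of getrepocaps() (`repo` is a hypothetical repository
# object): a server with `server.uncompressed` or `server.bundle2.stream`
# disabled drops the b'stream' entry computed above; the client role never
# does, since payload support must always be advertised.
#
#   server_caps = getrepocaps(repo, role=b'server')
#   client_caps = getrepocaps(repo, role=b'client')  # keeps b'stream'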
1652 1652
1653 1653
1654 1654 def bundle2caps(remote):
1655 1655 """return the bundle capabilities of a peer as dict"""
1656 1656 raw = remote.capable(b'bundle2')
1657 1657 if not raw and raw != b'':
1658 1658 return {}
1659 1659 capsblob = urlreq.unquote(remote.capable(b'bundle2'))
1660 1660 return decodecaps(capsblob)
1661 1661
1662 1662
1663 1663 def obsmarkersversion(caps):
1664 1664 """extract the list of supported obsmarkers versions from a bundle2caps dict"""
1665 1665 obscaps = caps.get(b'obsmarkers', ())
1666 1666 return [int(c[1:]) for c in obscaps if c.startswith(b'V')]
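# Worked example for the helper above:
#
#   obsmarkersversion({b'obsmarkers': (b'V0', b'V1')})  -> [0, 1]
#   obsmarkersversion({})                               -> []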
1667 1667
1668 1668
1669 1669 def writenewbundle(
1670 1670 ui,
1671 1671 repo,
1672 1672 source,
1673 1673 filename,
1674 1674 bundletype,
1675 1675 outgoing,
1676 1676 opts,
1677 1677 vfs=None,
1678 1678 compression=None,
1679 1679 compopts=None,
1680 1680 ):
1681 1681 if bundletype.startswith(b'HG10'):
1682 1682 cg = changegroup.makechangegroup(repo, outgoing, b'01', source)
1683 1683 return writebundle(
1684 1684 ui,
1685 1685 cg,
1686 1686 filename,
1687 1687 bundletype,
1688 1688 vfs=vfs,
1689 1689 compression=compression,
1690 1690 compopts=compopts,
1691 1691 )
1692 1692 elif not bundletype.startswith(b'HG20'):
1693 1693 raise error.ProgrammingError(b'unknown bundle type: %s' % bundletype)
1694 1694
1695 1695 caps = {}
1696 1696 if b'obsolescence' in opts:
1697 1697 caps[b'obsmarkers'] = (b'V1',)
1698 1698 bundle = bundle20(ui, caps)
1699 1699 bundle.setcompression(compression, compopts)
1700 1700 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1701 1701 chunkiter = bundle.getchunks()
1702 1702
1703 1703 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1704 1704
1705 1705
1706 1706 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1707 1707 # We should eventually reconcile this logic with the one behind
1708 1708 # 'exchange.getbundle2partsgenerator'.
1709 1709 #
1710 1710 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1711 1711 # different right now. So we keep them separated for now for the sake of
1712 1712 # simplicity.
1713 1713
1714 1714 # we might not always want a changegroup in such a bundle, for example in
1715 1715 # stream bundles
1716 1716 if opts.get(b'changegroup', True):
1717 1717 cgversion = opts.get(b'cg.version')
1718 1718 if cgversion is None:
1719 1719 cgversion = changegroup.safeversion(repo)
1720 1720 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1721 1721 part = bundler.newpart(b'changegroup', data=cg.getchunks())
1722 1722 part.addparam(b'version', cg.version)
1723 1723 if b'clcount' in cg.extras:
1724 1724 part.addparam(
1725 1725 b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
1726 1726 )
1727 1727 if opts.get(b'phases') and repo.revs(
1728 1728 b'%ln and secret()', outgoing.ancestorsof
1729 1729 ):
1730 1730 part.addparam(
1731 1731 b'targetphase', b'%d' % phases.secret, mandatory=False
1732 1732 )
1733 1733 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
1734 1734 part.addparam(b'exp-sidedata', b'1')
1735 1735
1736 1736 if opts.get(b'streamv2', False):
1737 1737 addpartbundlestream2(bundler, repo, stream=True)
1738 1738
1739 1739 if opts.get(b'tagsfnodescache', True):
1740 1740 addparttagsfnodescache(repo, bundler, outgoing)
1741 1741
1742 1742 if opts.get(b'revbranchcache', True):
1743 1743 addpartrevbranchcache(repo, bundler, outgoing)
1744 1744
1745 1745 if opts.get(b'obsolescence', False):
1746 1746 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1747 1747 buildobsmarkerspart(
1748 1748 bundler,
1749 1749 obsmarkers,
1750 1750 mandatory=opts.get(b'obsolescence-mandatory', True),
1751 1751 )
1752 1752
1753 1753 if opts.get(b'phases', False):
1754 1754 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1755 1755 phasedata = phases.binaryencode(headsbyphase)
1756 1756 bundler.newpart(b'phase-heads', data=phasedata)
1757 1757
1758 1758
1759 1759 def addparttagsfnodescache(repo, bundler, outgoing):
1760 1760 # we include the tags fnode cache for the bundle changeset
1761 1761 # (as an optional part)
1762 1762 cache = tags.hgtagsfnodescache(repo.unfiltered())
1763 1763 chunks = []
1764 1764
1765 1765 # .hgtags fnodes are only relevant for head changesets. While we could
1766 1766 # transfer values for all known nodes, there will likely be little to
1767 1767 # no benefit.
1768 1768 #
1769 1769 # We don't bother using a generator to produce output data because
1770 1770 # a) we only have 40 bytes per head and even esoteric numbers of heads
1771 1771 # consume little memory (1M heads is 40MB) b) we don't want to send the
1772 1772 # part if we don't have entries and knowing if we have entries requires
1773 1773 # cache lookups.
1774 1774 for node in outgoing.ancestorsof:
1775 1775 # Don't compute missing, as this may slow down serving.
1776 1776 fnode = cache.getfnode(node, computemissing=False)
1777 1777 if fnode:
1778 1778 chunks.extend([node, fnode])
1779 1779
1780 1780 if chunks:
1781 1781 bundler.newpart(b'hgtagsfnodes', data=b''.join(chunks))
1782 1782
1783 1783
1784 1784 def addpartrevbranchcache(repo, bundler, outgoing):
1785 1785 # we include the rev branch cache for the bundle changeset
1786 1786 # (as an optional part)
1787 1787 cache = repo.revbranchcache()
1788 1788 cl = repo.unfiltered().changelog
1789 1789 branchesdata = collections.defaultdict(lambda: (set(), set()))
1790 1790 for node in outgoing.missing:
1791 1791 branch, close = cache.branchinfo(cl.rev(node))
1792 1792 branchesdata[branch][close].add(node)
1793 1793
1794 1794 def generate():
1795 1795 for branch, (nodes, closed) in sorted(branchesdata.items()):
1796 1796 utf8branch = encoding.fromlocal(branch)
1797 1797 yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
1798 1798 yield utf8branch
1799 1799 for n in sorted(nodes):
1800 1800 yield n
1801 1801 for n in sorted(closed):
1802 1802 yield n
1803 1803
1804 1804 bundler.newpart(b'cache:rev-branch-cache', data=generate(), mandatory=False)
1805 1805
1806 1806
1807 1807 def _formatrequirementsspec(requirements):
1808 1808 requirements = [req for req in requirements if req != b"shared"]
1809 1809 return urlreq.quote(b','.join(sorted(requirements)))
1810 1810
1811 1811
1812 1812 def _formatrequirementsparams(requirements):
1813 1813 requirements = _formatrequirementsspec(requirements)
1814 1814 params = b"%s%s" % (urlreq.quote(b"requirements="), requirements)
1815 1815 return params
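# Worked example for the two helpers above (requirement names are just
# illustrative): b'shared' is filtered out, the rest is sorted, joined
# with commas, and urlquoted (b',' becomes b'%2C', b'=' becomes b'%3D'):
#
#   _formatrequirementsspec([b'fncache', b'shared', b'dotencode'])
#   -> b'dotencode%2Cfncache'
#   _formatrequirementsparams([b'fncache', b'dotencode'])
#   -> b'requirements%3Ddotencode%2Cfncache'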
1816 1816
1817 1817
1818 1818 def format_remote_wanted_sidedata(repo):
1819 1819 """Formats a repo's wanted sidedata categories into a bytestring for
1820 1820 capabilities exchange."""
1821 1821 wanted = b""
1822 1822 if repo._wanted_sidedata:
1823 1823 wanted = b','.join(
1824 1824 pycompat.bytestr(c) for c in sorted(repo._wanted_sidedata)
1825 1825 )
1826 1826 return wanted
1827 1827
1828 1828
1829 1829 def read_remote_wanted_sidedata(remote):
1830 1830 sidedata_categories = remote.capable(b'exp-wanted-sidedata')
1831 1831 return read_wanted_sidedata(sidedata_categories)
1832 1832
1833 1833
1834 1834 def read_wanted_sidedata(formatted):
1835 1835 if formatted:
1836 1836 return set(formatted.split(b','))
1837 1837 return set()
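# Round-trip sketch for the sidedata helpers above (category names are
# invented for the example):
#
#   read_wanted_sidedata(b'copies,test')  -> {b'copies', b'test'}
#   read_wanted_sidedata(b'')             -> set()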
1838 1838
1839 1839
1840 1840 def addpartbundlestream2(bundler, repo, **kwargs):
1841 1841 if not kwargs.get('stream', False):
1842 1842 return
1843 1843
1844 1844 if not streamclone.allowservergeneration(repo):
1845 1845 raise error.Abort(
1846 1846 _(
1847 1847 b'stream data requested but server does not allow '
1848 1848 b'this feature'
1849 1849 ),
1850 1850 hint=_(
1851 1851 b'well-behaved clients should not be '
1852 1852 b'requesting stream data from servers not '
1853 1853 b'advertising it; the client may be buggy'
1854 1854 ),
1855 1855 )
1856 1856
1857 1857 # Stream clones don't compress well. And compression undermines a
1858 1858 # goal of stream clones, which is to be fast. Communicate the desire
1859 1859 # to avoid compression to consumers of the bundle.
1860 1860 bundler.prefercompressed = False
1861 1861
1862 1862 # get the includes and excludes
1863 1863 includepats = kwargs.get('includepats')
1864 1864 excludepats = kwargs.get('excludepats')
1865 1865
1866 1866 narrowstream = repo.ui.configbool(
1867 1867 b'experimental', b'server.stream-narrow-clones'
1868 1868 )
1869 1869
1870 1870 if (includepats or excludepats) and not narrowstream:
1871 1871 raise error.Abort(_(b'server does not support narrow stream clones'))
1872 1872
1873 1873 includeobsmarkers = False
1874 1874 if repo.obsstore:
1875 1875 remoteversions = obsmarkersversion(bundler.capabilities)
1876 1876 if not remoteversions:
1877 1877 raise error.Abort(
1878 1878 _(
1879 1879 b'server has obsolescence markers, but client '
1880 1880 b'cannot receive them via stream clone'
1881 1881 )
1882 1882 )
1883 1883 elif repo.obsstore._version in remoteversions:
1884 1884 includeobsmarkers = True
1885 1885
1886 1886 filecount, bytecount, it = streamclone.generatev2(
1887 1887 repo, includepats, excludepats, includeobsmarkers
1888 1888 )
1889 requirements = _formatrequirementsspec(repo.requirements)
1889 requirements = streamclone.streamed_requirements(repo)
1890 requirements = _formatrequirementsspec(requirements)
1890 1891 part = bundler.newpart(b'stream2', data=it)
1891 1892 part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True)
1892 1893 part.addparam(b'filecount', b'%d' % filecount, mandatory=True)
1893 1894 part.addparam(b'requirements', requirements, mandatory=True)
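# On the wire this produces a mandatory b'stream2' part whose three
# parameters are also all mandatory; with the test repository used later
# in this document, the part header reads roughly:
#
#   stream2: bytecount=104115 filecount=1093 requirements=dotencode%2Cfncache%2C...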
1894 1895
1895 1896
1896 1897 def buildobsmarkerspart(bundler, markers, mandatory=True):
1897 1898 """add an obsmarker part to the bundler with <markers>
1898 1899
1899 1900 No part is created if markers is empty.
1900 1901 Raises ValueError if the bundler doesn't support any known obsmarker format.
1901 1902 """
1902 1903 if not markers:
1903 1904 return None
1904 1905
1905 1906 remoteversions = obsmarkersversion(bundler.capabilities)
1906 1907 version = obsolete.commonversion(remoteversions)
1907 1908 if version is None:
1908 1909 raise ValueError(b'bundler does not support common obsmarker format')
1909 1910 stream = obsolete.encodemarkers(markers, True, version=version)
1910 1911 return bundler.newpart(b'obsmarkers', data=stream, mandatory=mandatory)
1911 1912
1912 1913
1913 1914 def writebundle(
1914 1915 ui, cg, filename, bundletype, vfs=None, compression=None, compopts=None
1915 1916 ):
1916 1917 """Write a bundle file and return its filename.
1917 1918
1918 1919 Existing files will not be overwritten.
1919 1920 If no filename is specified, a temporary file is created.
1920 1921 bz2 compression can be turned off.
1921 1922 The bundle file will be deleted in case of errors.
1922 1923 """
1923 1924
1924 1925 if bundletype == b"HG20":
1925 1926 bundle = bundle20(ui)
1926 1927 bundle.setcompression(compression, compopts)
1927 1928 part = bundle.newpart(b'changegroup', data=cg.getchunks())
1928 1929 part.addparam(b'version', cg.version)
1929 1930 if b'clcount' in cg.extras:
1930 1931 part.addparam(
1931 1932 b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
1932 1933 )
1933 1934 chunkiter = bundle.getchunks()
1934 1935 else:
1935 1936 # compression argument is only for the bundle2 case
1936 1937 assert compression is None
1937 1938 if cg.version != b'01':
1938 1939 raise error.Abort(
1939 1940 _(b'old bundle types only support v1 changegroups')
1940 1941 )
1941 1942 header, comp = bundletypes[bundletype]
1942 1943 if comp not in util.compengines.supportedbundletypes:
1943 1944 raise error.Abort(_(b'unknown stream compression type: %s') % comp)
1944 1945 compengine = util.compengines.forbundletype(comp)
1945 1946
1946 1947 def chunkiter():
1947 1948 yield header
1948 1949 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1949 1950 yield chunk
1950 1951
1951 1952 chunkiter = chunkiter()
1952 1953
1953 1954 # parse the changegroup data, otherwise we will block
1954 1955 # in case of sshrepo because we don't know the end of the stream
1955 1956 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1956 1957
1957 1958
1958 1959 def combinechangegroupresults(op):
1959 1960 """logic to combine 0 or more addchangegroup results into one"""
1960 1961 results = [r.get(b'return', 0) for r in op.records[b'changegroup']]
1961 1962 changedheads = 0
1962 1963 result = 1
1963 1964 for ret in results:
1964 1965 # If any changegroup result is 0, return 0
1965 1966 if ret == 0:
1966 1967 result = 0
1967 1968 break
1968 1969 if ret < -1:
1969 1970 changedheads += ret + 1
1970 1971 elif ret > 1:
1971 1972 changedheads += ret - 1
1972 1973 if changedheads > 0:
1973 1974 result = 1 + changedheads
1974 1975 elif changedheads < 0:
1975 1976 result = -1 + changedheads
1976 1977 return result
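# Worked example for the folding above: results [3, -2] mean one
# changegroup added two heads (ret=3 -> changedheads += 2) and another
# removed one (ret=-2 -> changedheads += -1), so changedheads == 1 and
# the combined result is 1 + 1 == 2.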
1977 1978
1978 1979
1979 1980 @parthandler(
1980 1981 b'changegroup',
1981 1982 (
1982 1983 b'version',
1983 1984 b'nbchanges',
1984 1985 b'exp-sidedata',
1985 1986 b'exp-wanted-sidedata',
1986 1987 b'treemanifest',
1987 1988 b'targetphase',
1988 1989 ),
1989 1990 )
1990 1991 def handlechangegroup(op, inpart):
1991 1992 """apply a changegroup part on the repo"""
1992 1993 from . import localrepo
1993 1994
1994 1995 tr = op.gettransaction()
1995 1996 unpackerversion = inpart.params.get(b'version', b'01')
1996 1997 # We should raise an appropriate exception here
1997 1998 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1998 1999 # the source and url passed here are overwritten by the one contained in
1999 2000 # the transaction.hookargs argument. So 'bundle2' is a placeholder
2000 2001 nbchangesets = None
2001 2002 if b'nbchanges' in inpart.params:
2002 2003 nbchangesets = int(inpart.params.get(b'nbchanges'))
2003 2004 if b'treemanifest' in inpart.params and not scmutil.istreemanifest(op.repo):
2004 2005 if len(op.repo.changelog) != 0:
2005 2006 raise error.Abort(
2006 2007 _(
2007 2008 b"bundle contains tree manifests, but local repo is "
2008 2009 b"non-empty and does not use tree manifests"
2009 2010 )
2010 2011 )
2011 2012 op.repo.requirements.add(requirements.TREEMANIFEST_REQUIREMENT)
2012 2013 op.repo.svfs.options = localrepo.resolvestorevfsoptions(
2013 2014 op.repo.ui, op.repo.requirements, op.repo.features
2014 2015 )
2015 2016 scmutil.writereporequirements(op.repo)
2016 2017
2017 2018 extrakwargs = {}
2018 2019 targetphase = inpart.params.get(b'targetphase')
2019 2020 if targetphase is not None:
2020 2021 extrakwargs['targetphase'] = int(targetphase)
2021 2022
2022 2023 remote_sidedata = inpart.params.get(b'exp-wanted-sidedata')
2023 2024 extrakwargs['sidedata_categories'] = read_wanted_sidedata(remote_sidedata)
2024 2025
2025 2026 ret = _processchangegroup(
2026 2027 op,
2027 2028 cg,
2028 2029 tr,
2029 2030 op.source,
2030 2031 b'bundle2',
2031 2032 expectedtotal=nbchangesets,
2032 2033 **extrakwargs
2033 2034 )
2034 2035 if op.reply is not None:
2035 2036 # This is definitely not the final form of this
2036 2037 # return. But one needs to start somewhere.
2037 2038 part = op.reply.newpart(b'reply:changegroup', mandatory=False)
2038 2039 part.addparam(
2039 2040 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2040 2041 )
2041 2042 part.addparam(b'return', b'%i' % ret, mandatory=False)
2042 2043 assert not inpart.read()
2043 2044
2044 2045
2045 2046 _remotechangegroupparams = tuple(
2046 2047 [b'url', b'size', b'digests']
2047 2048 + [b'digest:%s' % k for k in util.DIGESTS.keys()]
2048 2049 )
2049 2050
2050 2051
2051 2052 @parthandler(b'remote-changegroup', _remotechangegroupparams)
2052 2053 def handleremotechangegroup(op, inpart):
2053 2054 """apply a bundle10 on the repo, given a url and validation information
2054 2055
2055 2056 All the information about the remote bundle to import is given as
2056 2057 parameters. The parameters include:
2057 2058 - url: the url to the bundle10.
2058 2059 - size: the bundle10 file size. It is used to validate what was
2059 2060 retrieved by the client matches the server knowledge about the bundle.
2060 2061 - digests: a space separated list of the digest types provided as
2061 2062 parameters.
2062 2063 - digest:<digest-type>: the hexadecimal representation of the digest with
2063 2064 that name. Like the size, it is used to validate what was retrieved by
2064 2065 the client matches what the server knows about the bundle.
2065 2066
2066 2067 When multiple digest types are given, all of them are checked.
2067 2068 """
2068 2069 try:
2069 2070 raw_url = inpart.params[b'url']
2070 2071 except KeyError:
2071 2072 raise error.Abort(_(b'remote-changegroup: missing "%s" param') % b'url')
2072 2073 parsed_url = urlutil.url(raw_url)
2073 2074 if parsed_url.scheme not in capabilities[b'remote-changegroup']:
2074 2075 raise error.Abort(
2075 2076 _(b'remote-changegroup does not support %s urls')
2076 2077 % parsed_url.scheme
2077 2078 )
2078 2079
2079 2080 try:
2080 2081 size = int(inpart.params[b'size'])
2081 2082 except ValueError:
2082 2083 raise error.Abort(
2083 2084 _(b'remote-changegroup: invalid value for param "%s"') % b'size'
2084 2085 )
2085 2086 except KeyError:
2086 2087 raise error.Abort(
2087 2088 _(b'remote-changegroup: missing "%s" param') % b'size'
2088 2089 )
2089 2090
2090 2091 digests = {}
2091 2092 for typ in inpart.params.get(b'digests', b'').split():
2092 2093 param = b'digest:%s' % typ
2093 2094 try:
2094 2095 value = inpart.params[param]
2095 2096 except KeyError:
2096 2097 raise error.Abort(
2097 2098 _(b'remote-changegroup: missing "%s" param') % param
2098 2099 )
2099 2100 digests[typ] = value
2100 2101
2101 2102 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
2102 2103
2103 2104 tr = op.gettransaction()
2104 2105 from . import exchange
2105 2106
2106 2107 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
2107 2108 if not isinstance(cg, changegroup.cg1unpacker):
2108 2109 raise error.Abort(
2109 2110 _(b'%s: not a bundle version 1.0') % urlutil.hidepassword(raw_url)
2110 2111 )
2111 2112 ret = _processchangegroup(op, cg, tr, op.source, b'bundle2')
2112 2113 if op.reply is not None:
2113 2114 # This is definitely not the final form of this
2114 2115 # return. But one needs to start somewhere.
2115 2116 part = op.reply.newpart(b'reply:changegroup')
2116 2117 part.addparam(
2117 2118 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2118 2119 )
2119 2120 part.addparam(b'return', b'%i' % ret, mandatory=False)
2120 2121 try:
2121 2122 real_part.validate()
2122 2123 except error.Abort as e:
2123 2124 raise error.Abort(
2124 2125 _(b'bundle at %s is corrupted:\n%s')
2125 2126 % (urlutil.hidepassword(raw_url), e.message)
2126 2127 )
2127 2128 assert not inpart.read()
2128 2129
2129 2130
2130 2131 @parthandler(b'reply:changegroup', (b'return', b'in-reply-to'))
2131 2132 def handlereplychangegroup(op, inpart):
2132 2133 ret = int(inpart.params[b'return'])
2133 2134 replyto = int(inpart.params[b'in-reply-to'])
2134 2135 op.records.add(b'changegroup', {b'return': ret}, replyto)
2135 2136
2136 2137
2137 2138 @parthandler(b'check:bookmarks')
2138 2139 def handlecheckbookmarks(op, inpart):
2139 2140 """check location of bookmarks
2140 2141
2141 2142 This part is used to detect push races on bookmarks. It contains
2142 2143 binary encoded (bookmark, node) tuples. If the local state does not
2143 2144 match the one in the part, a PushRaced exception is raised
2144 2145 """
2145 2146 bookdata = bookmarks.binarydecode(op.repo, inpart)
2146 2147
2147 2148 msgstandard = (
2148 2149 b'remote repository changed while pushing - please try again '
2149 2150 b'(bookmark "%s" move from %s to %s)'
2150 2151 )
2151 2152 msgmissing = (
2152 2153 b'remote repository changed while pushing - please try again '
2153 2154 b'(bookmark "%s" is missing, expected %s)'
2154 2155 )
2155 2156 msgexist = (
2156 2157 b'remote repository changed while pushing - please try again '
2157 2158 b'(bookmark "%s" set on %s, expected missing)'
2158 2159 )
2159 2160 for book, node in bookdata:
2160 2161 currentnode = op.repo._bookmarks.get(book)
2161 2162 if currentnode != node:
2162 2163 if node is None:
2163 2164 finalmsg = msgexist % (book, short(currentnode))
2164 2165 elif currentnode is None:
2165 2166 finalmsg = msgmissing % (book, short(node))
2166 2167 else:
2167 2168 finalmsg = msgstandard % (
2168 2169 book,
2169 2170 short(node),
2170 2171 short(currentnode),
2171 2172 )
2172 2173 raise error.PushRaced(finalmsg)
2173 2174
2174 2175
2175 2176 @parthandler(b'check:heads')
2176 2177 def handlecheckheads(op, inpart):
2177 2178 """check that the heads of the repo did not change
2178 2179
2179 2180 This is used to detect a push race when using unbundle.
2180 2181 This replaces the "heads" argument of unbundle."""
2181 2182 h = inpart.read(20)
2182 2183 heads = []
2183 2184 while len(h) == 20:
2184 2185 heads.append(h)
2185 2186 h = inpart.read(20)
2186 2187 assert not h
2187 2188 # Trigger a transaction so that we are guaranteed to have the lock now.
2188 2189 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2189 2190 op.gettransaction()
2190 2191 if sorted(heads) != sorted(op.repo.heads()):
2191 2192 raise error.PushRaced(
2192 2193 b'remote repository changed while pushing - please try again'
2193 2194 )
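# A hedged sketch of the sending side for this part (`bundler` and `repo`
# are hypothetical objects): the payload is just the expected heads
# concatenated as raw 20-byte nodes with no separator; order is irrelevant
# because the handler above sorts both sides before comparing.
#
#   bundler.newpart(b'check:heads', data=b''.join(repo.heads()))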
2194 2195
2195 2196
2196 2197 @parthandler(b'check:updated-heads')
2197 2198 def handlecheckupdatedheads(op, inpart):
2198 2199 """check for race on the heads touched by a push
2199 2200
2200 2201 This is similar to 'check:heads' but focuses on the heads actually
2201 2202 updated during the push. If other activity happens on unrelated heads,
2202 2203 it is ignored.
2203 2204
2204 2205 This allows servers with high traffic to avoid push contention as long
2205 2206 as only unrelated parts of the graph are involved.
2206 2207 h = inpart.read(20)
2207 2208 heads = []
2208 2209 while len(h) == 20:
2209 2210 heads.append(h)
2210 2211 h = inpart.read(20)
2211 2212 assert not h
2212 2213 # trigger a transaction so that we are guaranteed to have the lock now.
2213 2214 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2214 2215 op.gettransaction()
2215 2216
2216 2217 currentheads = set()
2217 2218 for ls in op.repo.branchmap().iterheads():
2218 2219 currentheads.update(ls)
2219 2220
2220 2221 for h in heads:
2221 2222 if h not in currentheads:
2222 2223 raise error.PushRaced(
2223 2224 b'remote repository changed while pushing - '
2224 2225 b'please try again'
2225 2226 )
2226 2227
2227 2228
2228 2229 @parthandler(b'check:phases')
2229 2230 def handlecheckphases(op, inpart):
2230 2231 """check that phase boundaries of the repository did not change
2231 2232
2232 2233 This is used to detect a push race.
2233 2234 """
2234 2235 phasetonodes = phases.binarydecode(inpart)
2235 2236 unfi = op.repo.unfiltered()
2236 2237 cl = unfi.changelog
2237 2238 phasecache = unfi._phasecache
2238 2239 msg = (
2239 2240 b'remote repository changed while pushing - please try again '
2240 2241 b'(%s is %s expected %s)'
2241 2242 )
2242 2243 for expectedphase, nodes in pycompat.iteritems(phasetonodes):
2243 2244 for n in nodes:
2244 2245 actualphase = phasecache.phase(unfi, cl.rev(n))
2245 2246 if actualphase != expectedphase:
2246 2247 finalmsg = msg % (
2247 2248 short(n),
2248 2249 phases.phasenames[actualphase],
2249 2250 phases.phasenames[expectedphase],
2250 2251 )
2251 2252 raise error.PushRaced(finalmsg)
2252 2253
2253 2254
2254 2255 @parthandler(b'output')
2255 2256 def handleoutput(op, inpart):
2256 2257 """forward output captured on the server to the client"""
2257 2258 for line in inpart.read().splitlines():
2258 2259 op.ui.status(_(b'remote: %s\n') % line)
2259 2260
2260 2261
2261 2262 @parthandler(b'replycaps')
2262 2263 def handlereplycaps(op, inpart):
2263 2264 """Notify that a reply bundle should be created
2264 2265
2265 2266 The payload contains the capabilities information for the reply"""
2266 2267 caps = decodecaps(inpart.read())
2267 2268 if op.reply is None:
2268 2269 op.reply = bundle20(op.ui, caps)
2269 2270
2270 2271
2271 2272 class AbortFromPart(error.Abort):
2272 2273 """Sub-class of Abort that denotes an error from a bundle2 part."""
2273 2274
2274 2275
2275 2276 @parthandler(b'error:abort', (b'message', b'hint'))
2276 2277 def handleerrorabort(op, inpart):
2277 2278 """Used to transmit abort error over the wire"""
2278 2279 raise AbortFromPart(
2279 2280 inpart.params[b'message'], hint=inpart.params.get(b'hint')
2280 2281 )
2281 2282
2282 2283
2283 2284 @parthandler(
2284 2285 b'error:pushkey',
2285 2286 (b'namespace', b'key', b'new', b'old', b'ret', b'in-reply-to'),
2286 2287 )
2287 2288 def handleerrorpushkey(op, inpart):
2288 2289 """Used to transmit failure of a mandatory pushkey over the wire"""
2289 2290 kwargs = {}
2290 2291 for name in (b'namespace', b'key', b'new', b'old', b'ret'):
2291 2292 value = inpart.params.get(name)
2292 2293 if value is not None:
2293 2294 kwargs[name] = value
2294 2295 raise error.PushkeyFailed(
2295 2296 inpart.params[b'in-reply-to'], **pycompat.strkwargs(kwargs)
2296 2297 )
2297 2298
2298 2299
2299 2300 @parthandler(b'error:unsupportedcontent', (b'parttype', b'params'))
2300 2301 def handleerrorunsupportedcontent(op, inpart):
2301 2302 """Used to transmit unknown content error over the wire"""
2302 2303 kwargs = {}
2303 2304 parttype = inpart.params.get(b'parttype')
2304 2305 if parttype is not None:
2305 2306 kwargs[b'parttype'] = parttype
2306 2307 params = inpart.params.get(b'params')
2307 2308 if params is not None:
2308 2309 kwargs[b'params'] = params.split(b'\0')
2309 2310
2310 2311 raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))
2311 2312
2312 2313
2313 2314 @parthandler(b'error:pushraced', (b'message',))
2314 2315 def handleerrorpushraced(op, inpart):
2315 2316 """Used to transmit push race error over the wire"""
2316 2317 raise error.ResponseError(_(b'push failed:'), inpart.params[b'message'])
2317 2318
2318 2319
2319 2320 @parthandler(b'listkeys', (b'namespace',))
2320 2321 def handlelistkeys(op, inpart):
2321 2322 """retrieve pushkey namespace content stored in a bundle2"""
2322 2323 namespace = inpart.params[b'namespace']
2323 2324 r = pushkey.decodekeys(inpart.read())
2324 2325 op.records.add(b'listkeys', (namespace, r))
2325 2326
2326 2327
2327 2328 @parthandler(b'pushkey', (b'namespace', b'key', b'old', b'new'))
2328 2329 def handlepushkey(op, inpart):
2329 2330 """process a pushkey request"""
2330 2331 dec = pushkey.decode
2331 2332 namespace = dec(inpart.params[b'namespace'])
2332 2333 key = dec(inpart.params[b'key'])
2333 2334 old = dec(inpart.params[b'old'])
2334 2335 new = dec(inpart.params[b'new'])
2335 2336 # Grab the transaction to ensure that we have the lock before performing the
2336 2337 # pushkey.
2337 2338 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2338 2339 op.gettransaction()
2339 2340 ret = op.repo.pushkey(namespace, key, old, new)
2340 2341 record = {b'namespace': namespace, b'key': key, b'old': old, b'new': new}
2341 2342 op.records.add(b'pushkey', record)
2342 2343 if op.reply is not None:
2343 2344 rpart = op.reply.newpart(b'reply:pushkey')
2344 2345 rpart.addparam(
2345 2346 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2346 2347 )
2347 2348 rpart.addparam(b'return', b'%i' % ret, mandatory=False)
2348 2349 if inpart.mandatory and not ret:
2349 2350 kwargs = {}
2350 2351 for key in (b'namespace', b'key', b'new', b'old', b'ret'):
2351 2352 if key in inpart.params:
2352 2353 kwargs[key] = inpart.params[key]
2353 2354 raise error.PushkeyFailed(
2354 2355 partid=b'%d' % inpart.id, **pycompat.strkwargs(kwargs)
2355 2356 )
2356 2357
2357 2358
2358 2359 @parthandler(b'bookmarks')
2359 2360 def handlebookmark(op, inpart):
2360 2361 """transmit bookmark information
2361 2362
2362 2363 The part contains binary encoded bookmark information.
2363 2364
2364 2365 The exact behavior of this part can be controlled by the 'bookmarks' mode
2365 2366 on the bundle operation.
2366 2367
2367 2368 When mode is 'apply' (the default) the bookmark information is applied as
2368 2369 is to the unbundling repository. Make sure a 'check:bookmarks' part is
2369 2370 issued earlier to check for push races in such an update. This behavior is
2370 2371 suitable for pushing.
2371 2372
2372 2373 When mode is 'records', the information is recorded into the 'bookmarks'
2373 2374 records of the bundle operation. This behavior is suitable for pulling.
2374 2375 """
2375 2376 changes = bookmarks.binarydecode(op.repo, inpart)
2376 2377
2377 2378 pushkeycompat = op.repo.ui.configbool(
2378 2379 b'server', b'bookmarks-pushkey-compat'
2379 2380 )
2380 2381 bookmarksmode = op.modes.get(b'bookmarks', b'apply')
2381 2382
2382 2383 if bookmarksmode == b'apply':
2383 2384 tr = op.gettransaction()
2384 2385 bookstore = op.repo._bookmarks
2385 2386 if pushkeycompat:
2386 2387 allhooks = []
2387 2388 for book, node in changes:
2388 2389 hookargs = tr.hookargs.copy()
2389 2390 hookargs[b'pushkeycompat'] = b'1'
2390 2391 hookargs[b'namespace'] = b'bookmarks'
2391 2392 hookargs[b'key'] = book
2392 2393 hookargs[b'old'] = hex(bookstore.get(book, b''))
2393 2394 hookargs[b'new'] = hex(node if node is not None else b'')
2394 2395 allhooks.append(hookargs)
2395 2396
2396 2397 for hookargs in allhooks:
2397 2398 op.repo.hook(
2398 2399 b'prepushkey', throw=True, **pycompat.strkwargs(hookargs)
2399 2400 )
2400 2401
2401 2402 for book, node in changes:
2402 2403 if bookmarks.isdivergent(book):
2403 2404 msg = _(b'cannot accept divergent bookmark %s!') % book
2404 2405 raise error.Abort(msg)
2405 2406
2406 2407 bookstore.applychanges(op.repo, op.gettransaction(), changes)
2407 2408
2408 2409 if pushkeycompat:
2409 2410
2410 2411 def runhook(unused_success):
2411 2412 for hookargs in allhooks:
2412 2413 op.repo.hook(b'pushkey', **pycompat.strkwargs(hookargs))
2413 2414
2414 2415 op.repo._afterlock(runhook)
2415 2416
2416 2417 elif bookmarksmode == b'records':
2417 2418 for book, node in changes:
2418 2419 record = {b'bookmark': book, b'node': node}
2419 2420 op.records.add(b'bookmarks', record)
2420 2421 else:
2421 2422 raise error.ProgrammingError(
2422 2423 b'unknown bookmark mode: %s' % bookmarksmode
2423 2424 )
2424 2425
2425 2426
2426 2427 @parthandler(b'phase-heads')
2427 2428 def handlephases(op, inpart):
2428 2429 """apply phases from bundle part to repo"""
2429 2430 headsbyphase = phases.binarydecode(inpart)
2430 2431 phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)
2431 2432
2432 2433
2433 2434 @parthandler(b'reply:pushkey', (b'return', b'in-reply-to'))
2434 2435 def handlepushkeyreply(op, inpart):
2435 2436 """retrieve the result of a pushkey request"""
2436 2437 ret = int(inpart.params[b'return'])
2437 2438 partid = int(inpart.params[b'in-reply-to'])
2438 2439 op.records.add(b'pushkey', {b'return': ret}, partid)
2439 2440
2440 2441
2441 2442 @parthandler(b'obsmarkers')
2442 2443 def handleobsmarker(op, inpart):
2443 2444 """add a stream of obsmarkers to the repo"""
2444 2445 tr = op.gettransaction()
2445 2446 markerdata = inpart.read()
2446 2447 if op.ui.config(b'experimental', b'obsmarkers-exchange-debug'):
2447 2448 op.ui.writenoi18n(
2448 2449 b'obsmarker-exchange: %i bytes received\n' % len(markerdata)
2449 2450 )
2450 2451 # The mergemarkers call will crash if marker creation is not enabled.
2451 2452 # we want to avoid this if the part is advisory.
2452 2453 if not inpart.mandatory and op.repo.obsstore.readonly:
2453 2454 op.repo.ui.debug(
2454 2455 b'ignoring obsolescence markers, feature not enabled\n'
2455 2456 )
2456 2457 return
2457 2458 new = op.repo.obsstore.mergemarkers(tr, markerdata)
2458 2459 op.repo.invalidatevolatilesets()
2459 2460 op.records.add(b'obsmarkers', {b'new': new})
2460 2461 if op.reply is not None:
2461 2462 rpart = op.reply.newpart(b'reply:obsmarkers')
2462 2463 rpart.addparam(
2463 2464 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2464 2465 )
2465 2466 rpart.addparam(b'new', b'%i' % new, mandatory=False)
2466 2467
2467 2468
2468 2469 @parthandler(b'reply:obsmarkers', (b'new', b'in-reply-to'))
2469 2470 def handleobsmarkerreply(op, inpart):
2470 2471 """retrieve the result of an obsmarkers request"""
2471 2472 ret = int(inpart.params[b'new'])
2472 2473 partid = int(inpart.params[b'in-reply-to'])
2473 2474 op.records.add(b'obsmarkers', {b'new': ret}, partid)
2474 2475
2475 2476
2476 2477 @parthandler(b'hgtagsfnodes')
2477 2478 def handlehgtagsfnodes(op, inpart):
2478 2479 """Applies .hgtags fnodes cache entries to the local repo.
2479 2480
2480 2481 Payload is pairs of 20 byte changeset nodes and filenodes.
2481 2482 """
2482 2483 # Grab the transaction so we ensure that we have the lock at this point.
2483 2484 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2484 2485 op.gettransaction()
2485 2486 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
2486 2487
2487 2488 count = 0
2488 2489 while True:
2489 2490 node = inpart.read(20)
2490 2491 fnode = inpart.read(20)
2491 2492 if len(node) < 20 or len(fnode) < 20:
2492 2493 op.ui.debug(b'ignoring incomplete received .hgtags fnodes data\n')
2493 2494 break
2494 2495 cache.setfnode(node, fnode)
2495 2496 count += 1
2496 2497
2497 2498 cache.write()
2498 2499 op.ui.debug(b'applied %i hgtags fnodes cache entries\n' % count)
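# Payload layout for the part above, mirroring the chunks built by
# addparttagsfnodescache() earlier in this file: a flat sequence of
# 20-byte (changeset node, .hgtags filenode) pairs with no framing.
#
#   payload = node1 + fnode1 + node2 + fnode2 + ...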
2499 2500
2500 2501
2501 2502 rbcstruct = struct.Struct(b'>III')
2502 2503
2503 2504
2504 2505 @parthandler(b'cache:rev-branch-cache')
2505 2506 def handlerbc(op, inpart):
2506 2507 """Legacy part, ignored for compatibility with bundles from or
2507 2508 for Mercurial before 5.7. Newer Mercurial computes the cache
2508 2509 efficiently enough during unbundling that the additional transfer
2509 2510 is unnecessary."""
2510 2511
2511 2512
2512 2513 @parthandler(b'pushvars')
2513 2514 def bundle2getvars(op, part):
2514 2515 '''unbundle a bundle2 containing shellvars on the server'''
2515 2516 # An option to disable unbundling on server-side for security reasons
2516 2517 if op.ui.configbool(b'push', b'pushvars.server'):
2517 2518 hookargs = {}
2518 2519 for key, value in part.advisoryparams:
2519 2520 key = key.upper()
2520 2521 # We want pushed variables to have USERVAR_ prepended so we know
2521 2522 # they came from the --pushvar flag.
2522 2523 key = b"USERVAR_" + key
2523 2524 hookargs[key] = value
2524 2525 op.addhookargs(hookargs)
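# Example of the transformation above (values invented): a client pushing
# `--pushvar DEBUG=1` sends an advisory param (b'debug', b'1'), which the
# handler exposes to server-side hooks, assuming `push.pushvars.server`
# is enabled, as:
#
#   hookargs == {b'USERVAR_DEBUG': b'1'}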
2525 2526
2526 2527
2527 2528 @parthandler(b'stream2', (b'requirements', b'filecount', b'bytecount'))
2528 2529 def handlestreamv2bundle(op, part):
2529 2530
2530 2531 requirements = urlreq.unquote(part.params[b'requirements']).split(b',')
2531 2532 filecount = int(part.params[b'filecount'])
2532 2533 bytecount = int(part.params[b'bytecount'])
2533 2534
2534 2535 repo = op.repo
2535 2536 if len(repo):
2536 2537 msg = _(b'cannot apply stream clone to non-empty repository')
2537 2538 raise error.Abort(msg)
2538 2539
2539 2540 repo.ui.debug(b'applying stream bundle\n')
2540 2541 streamclone.applybundlev2(repo, part, filecount, bytecount, requirements)
2541 2542
2542 2543
2543 2544 def widen_bundle(
2544 2545 bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses
2545 2546 ):
2546 2547 """generates a bundle2 for widening a narrow clone
2547 2548
2548 2549 bundler is the bundle to which data should be added
2549 2550 repo is the localrepository instance
2550 2551 oldmatcher matches what the client already has
2551 2552 newmatcher matches what the client needs (including what it already has)
2552 2553 common is set of common heads between server and client
2553 2554 known is a set of revs known on the client side (used in ellipses)
2554 2555 cgversion is the changegroup version to send
2555 2556 ellipses is a boolean telling whether to send ellipses data or not
2556 2557
2557 2558 returns a bundle2 with the data required for widening
2558 2559 """
2559 2560 commonnodes = set()
2560 2561 cl = repo.changelog
2561 2562 for r in repo.revs(b"::%ln", common):
2562 2563 commonnodes.add(cl.node(r))
2563 2564 if commonnodes:
2564 2565 packer = changegroup.getbundler(
2565 2566 cgversion,
2566 2567 repo,
2567 2568 oldmatcher=oldmatcher,
2568 2569 matcher=newmatcher,
2569 2570 fullnodes=commonnodes,
2570 2571 )
2571 2572 cgdata = packer.generate(
2572 2573 {repo.nullid},
2573 2574 list(commonnodes),
2574 2575 False,
2575 2576 b'narrow_widen',
2576 2577 changelog=False,
2577 2578 )
2578 2579
2579 2580 part = bundler.newpart(b'changegroup', data=cgdata)
2580 2581 part.addparam(b'version', cgversion)
2581 2582 if scmutil.istreemanifest(repo):
2582 2583 part.addparam(b'treemanifest', b'1')
2583 2584 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2584 2585 part.addparam(b'exp-sidedata', b'1')
2585 2586 wanted = format_remote_wanted_sidedata(repo)
2586 2587 part.addparam(b'exp-wanted-sidedata', wanted)
2587 2588
2588 2589 return bundler
@@ -1,285 +1,283 b''
1 1 This file contains test cases that deal with format changes across stream clones
2 2
3 3 #require serve no-reposimplestore no-chg
4 4
5 #testcases stream-legacy
6
7 (the #stream-bundle2 variant is actually buggy for the moment)
5 #testcases stream-legacy stream-bundle2
8 6
9 7 #if stream-legacy
10 8 $ cat << EOF >> $HGRCPATH
11 9 > [server]
12 10 > bundle2.stream = no
13 11 > EOF
14 12 #endif
15 13
16 14 Initialize repository
17 15
18 16 $ hg init server
19 17 $ cd server
20 18 $ sh $TESTDIR/testlib/stream_clone_setup.sh
21 19 adding 00changelog-ab349180a0405010.nd
22 20 adding 00changelog.d
23 21 adding 00changelog.i
24 22 adding 00changelog.n
25 23 adding 00manifest.d
26 24 adding 00manifest.i
27 25 adding container/isam-build-centos7/bazel-coverage-generator-sandboxfs-compatibility-0758e3e4f6057904d44399bd666faba9e7f40686.patch
28 26 adding data/foo.d
29 27 adding data/foo.i
30 28 adding data/foo.n
31 29 adding data/undo.babar
32 30 adding data/undo.d
33 31 adding data/undo.foo.d
34 32 adding data/undo.foo.i
35 33 adding data/undo.foo.n
36 34 adding data/undo.i
37 35 adding data/undo.n
38 36 adding data/undo.py
39 37 adding foo.d
40 38 adding foo.i
41 39 adding foo.n
42 40 adding meta/foo.d
43 41 adding meta/foo.i
44 42 adding meta/foo.n
45 43 adding meta/undo.babar
46 44 adding meta/undo.d
47 45 adding meta/undo.foo.d
48 46 adding meta/undo.foo.i
49 47 adding meta/undo.foo.n
50 48 adding meta/undo.i
51 49 adding meta/undo.n
52 50 adding meta/undo.py
53 51 adding savanah/foo.d
54 52 adding savanah/foo.i
55 53 adding savanah/foo.n
56 54 adding savanah/undo.babar
57 55 adding savanah/undo.d
58 56 adding savanah/undo.foo.d
59 57 adding savanah/undo.foo.i
60 58 adding savanah/undo.foo.n
61 59 adding savanah/undo.i
62 60 adding savanah/undo.n
63 61 adding savanah/undo.py
64 62 adding store/C\xc3\xa9lesteVille_is_a_Capital_City (esc)
65 63 adding store/foo.d
66 64 adding store/foo.i
67 65 adding store/foo.n
68 66 adding store/undo.babar
69 67 adding store/undo.d
70 68 adding store/undo.foo.d
71 69 adding store/undo.foo.i
72 70 adding store/undo.foo.n
73 71 adding store/undo.i
74 72 adding store/undo.n
75 73 adding store/undo.py
76 74 adding undo.babar
77 75 adding undo.d
78 76 adding undo.foo.d
79 77 adding undo.foo.i
80 78 adding undo.foo.n
81 79 adding undo.i
82 80 adding undo.n
83 81 adding undo.py
84 82 $ cd ..
85 83
86 84
87 85 Test streaming from/to repository without a store:
88 86 ==================================================
89 87
90 88 $ hg clone --pull --config format.usestore=no server server-no-store
91 89 requesting all changes
92 90 adding changesets
93 91 adding manifests
94 92 adding file changes
95 93 added 3 changesets with 1088 changes to 1088 files
96 94 new changesets 96ee1d7354c4:5223b5e3265f
97 95 updating to branch default
98 96 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
99 97 $ hg verify -R server-no-store
100 98 checking changesets
101 99 checking manifests
102 100 crosschecking files in changesets and manifests
103 101 checking files
104 102 checked 3 changesets with 1088 changes to 1088 files
105 103 $ hg -R server serve -p $HGPORT -d --pid-file=hg-1.pid --error errors-1.txt
106 104 $ cat hg-1.pid > $DAEMON_PIDS
107 105 $ hg -R server-no-store serve -p $HGPORT2 -d --pid-file=hg-2.pid --error errors-2.txt
108 106 $ cat hg-2.pid >> $DAEMON_PIDS
109 107 $ hg debugrequires -R server | grep store
110 108 store
111 109 $ hg debugrequires -R server-no-store | grep store
112 110 [1]
113 111
114 112 store β†’ no-store cloning
115 113
116 114 $ hg clone --quiet --stream -U http://localhost:$HGPORT clone-remove-store --config format.usestore=no
117 115 $ cat errors-1.txt
118 116 $ hg -R clone-remove-store verify
119 117 checking changesets
120 118 checking manifests
121 119 crosschecking files in changesets and manifests
122 120 checking files
123 121 checked 3 changesets with 1088 changes to 1088 files
124 122 $ hg debugrequires -R clone-remove-store | grep store
125 123 [1]
126 124
127 125
128 126 no-store β†’ store cloning
129 127
130 128 $ hg clone --quiet --stream -U http://localhost:$HGPORT2 clone-add-store --config format.usestore=yes
131 129 $ cat errors-2.txt
132 130 $ hg -R clone-add-store verify
133 131 checking changesets
134 132 checking manifests
135 133 crosschecking files in changesets and manifests
136 134 checking files
137 135 checked 3 changesets with 1088 changes to 1088 files
138 136 $ hg debugrequires -R clone-add-store | grep store
139 137 store
140 138
141 139
142 140 $ killdaemons.py
143 141
144 142
145 143 Test streaming from/to repository without a fncache
146 144 ===================================================
147 145
148 146 $ rm hg-*.pid errors-*.txt
149 147 $ hg clone --pull --config format.usefncache=no server server-no-fncache
150 148 requesting all changes
151 149 adding changesets
152 150 adding manifests
153 151 adding file changes
154 152 added 3 changesets with 1088 changes to 1088 files
155 153 new changesets 96ee1d7354c4:5223b5e3265f
156 154 updating to branch default
157 155 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
158 156 $ hg verify -R server-no-fncache
159 157 checking changesets
160 158 checking manifests
161 159 crosschecking files in changesets and manifests
162 160 checking files
163 161 checked 3 changesets with 1088 changes to 1088 files
164 162 $ hg -R server serve -p $HGPORT -d --pid-file=hg-1.pid --error errors-1.txt
165 163 $ cat hg-1.pid > $DAEMON_PIDS
166 164 $ hg -R server-no-fncache serve -p $HGPORT2 -d --pid-file=hg-2.pid --error errors-2.txt
167 165 $ cat hg-2.pid >> $DAEMON_PIDS
168 166 $ hg debugrequires -R server | grep fncache
169 167 fncache
170 168 $ hg debugrequires -R server-no-fncache | grep fncache
171 169 [1]
172 170
173 171 fncache β†’ no-fncache cloning
174 172
175 173 $ hg clone --quiet --stream -U http://localhost:$HGPORT clone-remove-fncache --config format.usefncache=no
176 174 $ cat errors-1.txt
177 175 $ hg -R clone-remove-fncache verify
178 176 checking changesets
179 177 checking manifests
180 178 crosschecking files in changesets and manifests
181 179 checking files
182 180 checked 3 changesets with 1088 changes to 1088 files
183 181 $ hg debugrequires -R clone-remove-fncache | grep fncache
184 182 [1]
185 183
186 184
187 185 no-fncache β†’ fncache cloning
188 186
189 187 $ hg clone --quiet --stream -U http://localhost:$HGPORT2 clone-add-fncache --config format.usefncache=yes
190 188 $ cat errors-2.txt
191 189 $ hg -R clone-add-fncache verify
192 190 checking changesets
193 191 checking manifests
194 192 crosschecking files in changesets and manifests
195 193 checking files
196 194 checked 3 changesets with 1088 changes to 1088 files
197 195 $ hg debugrequires -R clone-add-fncache | grep fncache
198 196 fncache
199 197
200 198
201 199 $ killdaemons.py
202 200
203 201
204 202
205 203 Test streaming from/to repository without a dotencode
206 204 ===================================================
207 205
208 206 $ rm hg-*.pid errors-*.txt
209 207 $ hg clone --pull --config format.dotencode=no server server-no-dotencode
210 208 requesting all changes
211 209 adding changesets
212 210 adding manifests
213 211 adding file changes
214 212 added 3 changesets with 1088 changes to 1088 files
215 213 new changesets 96ee1d7354c4:5223b5e3265f
216 214 updating to branch default
217 215 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
218 216 $ hg verify -R server-no-dotencode
219 217 checking changesets
220 218 checking manifests
221 219 crosschecking files in changesets and manifests
222 220 checking files
223 221 checked 3 changesets with 1088 changes to 1088 files
224 222 $ hg -R server serve -p $HGPORT -d --pid-file=hg-1.pid --error errors-1.txt
225 223 $ cat hg-1.pid > $DAEMON_PIDS
226 224 $ hg -R server-no-dotencode serve -p $HGPORT2 -d --pid-file=hg-2.pid --error errors-2.txt
227 225 $ cat hg-2.pid >> $DAEMON_PIDS
228 226 $ hg debugrequires -R server | grep dotencode
229 227 dotencode
230 228 $ hg debugrequires -R server-no-dotencode | grep dotencode
231 229 [1]
232 230
233 231 dotencode β†’ no-dotencode cloning
234 232
235 233 $ hg clone --quiet --stream -U http://localhost:$HGPORT clone-remove-dotencode --config format.dotencode=no
236 234 $ cat errors-1.txt
237 235 $ hg -R clone-remove-dotencode verify
238 236 checking changesets
239 237 checking manifests
240 238 crosschecking files in changesets and manifests
241 239 checking files
242 240 checked 3 changesets with 1088 changes to 1088 files
243 241 $ hg debugrequires -R clone-remove-dotencode | grep dotencode
244 242 [1]
245 243
246 244
247 245 no-dotencode β†’ dotencode cloning
248 246
249 247 $ hg clone --quiet --stream -U http://localhost:$HGPORT2 clone-add-dotencode --config format.dotencode=yes
250 248 $ cat errors-2.txt
251 249 $ hg -R clone-add-dotencode verify
252 250 checking changesets
253 251 checking manifests
254 252 crosschecking files in changesets and manifests
255 253 checking files
256 254 checked 3 changesets with 1088 changes to 1088 files
257 255 $ hg debugrequires -R clone-add-dotencode | grep dotencode
258 256 dotencode
259 257
260 258
261 259 $ killdaemons.py
262 260
263 261 Cloning from a share
264 262 --------------------
265 263
266 264 We should be able to clone from a "share" repository; it will use the source store for streaming.
267 265
268 266 The resulting clone should not use share.
269 267
270 268 $ rm hg-*.pid errors-*.txt
271 269 $ hg share --config extensions.share= server server-share -U
272 270 $ hg -R server-share serve -p $HGPORT -d --pid-file=hg-1.pid --error errors-1.txt
273 271 $ cat hg-1.pid > $DAEMON_PIDS
274 272
275 273 $ hg clone --quiet --stream -U http://localhost:$HGPORT clone-from-share
276 274 $ hg -R clone-from-share verify
277 275 checking changesets
278 276 checking manifests
279 277 crosschecking files in changesets and manifests
280 278 checking files
281 279 checked 3 changesets with 1088 changes to 1088 files
282 280 $ hg debugrequires -R clone-from-share | grep share
283 281 [1]
284 282
285 283 $ killdaemons.py
@@ -1,819 +1,819 b''
1 1 #require serve no-reposimplestore no-chg
2 2
3 3 #testcases stream-legacy stream-bundle2
4 4
5 5 #if stream-legacy
6 6 $ cat << EOF >> $HGRCPATH
7 7 > [server]
8 8 > bundle2.stream = no
9 9 > EOF
10 10 #endif
11 11
12 12 Initialize repository
13 13
14 14 $ hg init server
15 15 $ cd server
16 16 $ sh $TESTDIR/testlib/stream_clone_setup.sh
17 17 adding 00changelog-ab349180a0405010.nd
18 18 adding 00changelog.d
19 19 adding 00changelog.i
20 20 adding 00changelog.n
21 21 adding 00manifest.d
22 22 adding 00manifest.i
23 23 adding container/isam-build-centos7/bazel-coverage-generator-sandboxfs-compatibility-0758e3e4f6057904d44399bd666faba9e7f40686.patch
24 24 adding data/foo.d
25 25 adding data/foo.i
26 26 adding data/foo.n
27 27 adding data/undo.babar
28 28 adding data/undo.d
29 29 adding data/undo.foo.d
30 30 adding data/undo.foo.i
31 31 adding data/undo.foo.n
32 32 adding data/undo.i
33 33 adding data/undo.n
34 34 adding data/undo.py
35 35 adding foo.d
36 36 adding foo.i
37 37 adding foo.n
38 38 adding meta/foo.d
39 39 adding meta/foo.i
40 40 adding meta/foo.n
41 41 adding meta/undo.babar
42 42 adding meta/undo.d
43 43 adding meta/undo.foo.d
44 44 adding meta/undo.foo.i
45 45 adding meta/undo.foo.n
46 46 adding meta/undo.i
47 47 adding meta/undo.n
48 48 adding meta/undo.py
49 49 adding savanah/foo.d
50 50 adding savanah/foo.i
51 51 adding savanah/foo.n
52 52 adding savanah/undo.babar
53 53 adding savanah/undo.d
54 54 adding savanah/undo.foo.d
55 55 adding savanah/undo.foo.i
56 56 adding savanah/undo.foo.n
57 57 adding savanah/undo.i
58 58 adding savanah/undo.n
59 59 adding savanah/undo.py
60 60 adding store/C\xc3\xa9lesteVille_is_a_Capital_City (esc)
61 61 adding store/foo.d
62 62 adding store/foo.i
63 63 adding store/foo.n
64 64 adding store/undo.babar
65 65 adding store/undo.d
66 66 adding store/undo.foo.d
67 67 adding store/undo.foo.i
68 68 adding store/undo.foo.n
69 69 adding store/undo.i
70 70 adding store/undo.n
71 71 adding store/undo.py
72 72 adding undo.babar
73 73 adding undo.d
74 74 adding undo.foo.d
75 75 adding undo.foo.i
76 76 adding undo.foo.n
77 77 adding undo.i
78 78 adding undo.n
79 79 adding undo.py
80 80
81 81 $ hg --config server.uncompressed=false serve -p $HGPORT -d --pid-file=hg.pid
82 82 $ cat hg.pid > $DAEMON_PIDS
83 83 $ cd ..
84 84
85 85 Check local clone
86 86 ==================
87 87
88 88 The logic is close enough to the uncompressed case.
89 89 This is present here to reuse the testing around files with "special" names.
90 90
91 91 $ hg clone server local-clone
92 92 updating to branch default
93 93 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
94 94
95 95 Check that the clone went well
96 96
97 97 $ hg verify -R local-clone
98 98 checking changesets
99 99 checking manifests
100 100 crosschecking files in changesets and manifests
101 101 checking files
102 102 checked 3 changesets with 1088 changes to 1088 files
103 103
104 104 Check uncompressed
105 105 ==================
106 106
107 107 Cannot stream clone when server.uncompressed is set to false
108 108
109 109 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=stream_out'
110 110 200 Script output follows
111 111
112 112 1
113 113
114 114 #if stream-legacy
115 115 $ hg debugcapabilities http://localhost:$HGPORT
116 116 Main capabilities:
117 117 batch
118 118 branchmap
119 119 $USUAL_BUNDLE2_CAPS_SERVER$
120 120 changegroupsubset
121 121 compression=$BUNDLE2_COMPRESSIONS$
122 122 getbundle
123 123 httpheader=1024
124 124 httpmediatype=0.1rx,0.1tx,0.2tx
125 125 known
126 126 lookup
127 127 pushkey
128 128 unbundle=HG10GZ,HG10BZ,HG10UN
129 129 unbundlehash
130 130 Bundle2 capabilities:
131 131 HG20
132 132 bookmarks
133 133 changegroup
134 134 01
135 135 02
136 136 checkheads
137 137 related
138 138 digests
139 139 md5
140 140 sha1
141 141 sha512
142 142 error
143 143 abort
144 144 unsupportedcontent
145 145 pushraced
146 146 pushkey
147 147 hgtagsfnodes
148 148 listkeys
149 149 phases
150 150 heads
151 151 pushkey
152 152 remote-changegroup
153 153 http
154 154 https
155 155
156 156 $ hg clone --stream -U http://localhost:$HGPORT server-disabled
157 157 warning: stream clone requested but server has them disabled
158 158 requesting all changes
159 159 adding changesets
160 160 adding manifests
161 161 adding file changes
162 162 added 3 changesets with 1088 changes to 1088 files
163 163 new changesets 96ee1d7354c4:5223b5e3265f
164 164
165 165 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
166 166 200 Script output follows
167 167 content-type: application/mercurial-0.2
168 168
169 169
170 170 $ f --size body --hexdump --bytes 100
171 171 body: size=232
172 172 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
173 173 0010: cf 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |..ERROR:ABORT...|
174 174 0020: 00 01 01 07 3c 04 72 6d 65 73 73 61 67 65 73 74 |....<.rmessagest|
175 175 0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques|
176 176 0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d|
177 177 0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th|
178 178 0060: 69 73 20 66 |is f|
179 179
180 180 #endif
181 181 #if stream-bundle2
182 182 $ hg debugcapabilities http://localhost:$HGPORT
183 183 Main capabilities:
184 184 batch
185 185 branchmap
186 186 $USUAL_BUNDLE2_CAPS_SERVER$
187 187 changegroupsubset
188 188 compression=$BUNDLE2_COMPRESSIONS$
189 189 getbundle
190 190 httpheader=1024
191 191 httpmediatype=0.1rx,0.1tx,0.2tx
192 192 known
193 193 lookup
194 194 pushkey
195 195 unbundle=HG10GZ,HG10BZ,HG10UN
196 196 unbundlehash
197 197 Bundle2 capabilities:
198 198 HG20
199 199 bookmarks
200 200 changegroup
201 201 01
202 202 02
203 203 checkheads
204 204 related
205 205 digests
206 206 md5
207 207 sha1
208 208 sha512
209 209 error
210 210 abort
211 211 unsupportedcontent
212 212 pushraced
213 213 pushkey
214 214 hgtagsfnodes
215 215 listkeys
216 216 phases
217 217 heads
218 218 pushkey
219 219 remote-changegroup
220 220 http
221 221 https
222 222
223 223 $ hg clone --stream -U http://localhost:$HGPORT server-disabled
224 224 warning: stream clone requested but server has them disabled
225 225 requesting all changes
226 226 adding changesets
227 227 adding manifests
228 228 adding file changes
229 229 added 3 changesets with 1088 changes to 1088 files
230 230 new changesets 96ee1d7354c4:5223b5e3265f
231 231
232 232 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
233 233 200 Script output follows
234 234 content-type: application/mercurial-0.2
235 235
236 236
237 237 $ f --size body --hexdump --bytes 100
238 238 body: size=232
239 239 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
240 240 0010: cf 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |..ERROR:ABORT...|
241 241 0020: 00 01 01 07 3c 04 72 6d 65 73 73 61 67 65 73 74 |....<.rmessagest|
242 242 0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques|
243 243 0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d|
244 244 0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th|
245 245 0060: 69 73 20 66 |is f|
246 246
247 247 #endif
248 248
249 249 $ killdaemons.py
250 250 $ cd server
251 251 $ hg serve -p $HGPORT -d --pid-file=hg.pid --error errors.txt
252 252 $ cat hg.pid > $DAEMON_PIDS
253 253 $ cd ..
254 254
255 255 Basic clone
256 256
257 257 #if stream-legacy
258 258 $ hg clone --stream -U http://localhost:$HGPORT clone1
259 259 streaming all changes
260 260 1090 files to transfer, 102 KB of data (no-zstd !)
261 261 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
262 262 1090 files to transfer, 98.8 KB of data (zstd !)
263 263 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
264 264 searching for changes
265 265 no changes found
266 266 $ cat server/errors.txt
267 267 #endif
268 268 #if stream-bundle2
269 269 $ hg clone --stream -U http://localhost:$HGPORT clone1
270 270 streaming all changes
271 271 1093 files to transfer, 102 KB of data (no-zstd !)
272 272 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
273 273 1093 files to transfer, 98.9 KB of data (zstd !)
274 274 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
275 275
276 276 $ ls -1 clone1/.hg/cache
277 277 branch2-base
278 278 branch2-immutable
279 279 branch2-served
280 280 branch2-served.hidden
281 281 branch2-visible
282 282 branch2-visible-hidden
283 283 rbc-names-v1
284 284 rbc-revs-v1
285 285 tags2
286 286 tags2-served
287 287 $ cat server/errors.txt
288 288 #endif
289 289
290 290 getbundle requests with stream=1 are uncompressed
291 291
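Every hexdump below opens with 0x04, the literal bytes "none", and then the HG20 magic: the application/mercurial-0.2 media type prefixes the payload with a length-prefixed compression engine name, and "none" is what uncompressed means on the wire. A minimal sketch (plain Python, hypothetical helper, not part of the test suite) of asserting that for a saved body file:

    def assert_uncompressed(path):
        # hypothetical helper: check the 0.2 framing names engine "none"
        with open(path, 'rb') as fh:
            head = fh.read(9)
        namelen = head[0]                      # 0x04 for b"none"
        assert head[1:1 + namelen] == b'none'  # compression engine name
        assert head[1 + namelen:1 + namelen + 4] == b'HG20'  # bundle2 magic

    assert_uncompressed('body')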
292 292 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto '0.1 0.2 comp=zlib,none' --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
293 293 200 Script output follows
294 294 content-type: application/mercurial-0.2
295 295
296 296
297 297 #if no-zstd no-rust
298 298 $ f --size --hex --bytes 256 body
299 body: size=119153
299 body: size=119123
300 300 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
301 0010: 80 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |..STREAM2.......|
302 0020: 06 09 04 0c 44 62 79 74 65 63 6f 75 6e 74 31 30 |....Dbytecount10|
301 0010: 62 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |b.STREAM2.......|
302 0020: 06 09 04 0c 26 62 79 74 65 63 6f 75 6e 74 31 30 |....&bytecount10|
303 303 0030: 34 31 31 35 66 69 6c 65 63 6f 75 6e 74 31 30 39 |4115filecount109|
304 0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 64 6f 74 |3requirementsdot|
305 0050: 65 6e 63 6f 64 65 25 32 43 66 6e 63 61 63 68 65 |encode%2Cfncache|
306 0060: 25 32 43 67 65 6e 65 72 61 6c 64 65 6c 74 61 25 |%2Cgeneraldelta%|
307 0070: 32 43 72 65 76 6c 6f 67 76 31 25 32 43 73 70 61 |2Crevlogv1%2Cspa|
308 0080: 72 73 65 72 65 76 6c 6f 67 25 32 43 73 74 6f 72 |rserevlog%2Cstor|
309 0090: 65 00 00 80 00 73 08 42 64 61 74 61 2f 30 2e 69 |e....s.Bdata/0.i|
310 00a0: 00 03 00 01 00 00 00 00 00 00 00 02 00 00 00 01 |................|
311 00b0: 00 00 00 00 00 00 00 01 ff ff ff ff ff ff ff ff |................|
312 00c0: 80 29 63 a0 49 d3 23 87 bf ce fe 56 67 92 67 2c |.)c.I.#....Vg.g,|
313 00d0: 69 d1 ec 39 00 00 00 00 00 00 00 00 00 00 00 00 |i..9............|
314 00e0: 75 30 73 26 45 64 61 74 61 2f 30 30 63 68 61 6e |u0s&Edata/00chan|
315 00f0: 67 65 6c 6f 67 2d 61 62 33 34 39 31 38 30 61 30 |gelog-ab349180a0|
304 0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |3requirementsgen|
305 0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 72 65 76 6c |eraldelta%2Crevl|
306 0060: 6f 67 76 31 25 32 43 73 70 61 72 73 65 72 65 76 |ogv1%2Csparserev|
307 0070: 6c 6f 67 00 00 80 00 73 08 42 64 61 74 61 2f 30 |log....s.Bdata/0|
308 0080: 2e 69 00 03 00 01 00 00 00 00 00 00 00 02 00 00 |.i..............|
309 0090: 00 01 00 00 00 00 00 00 00 01 ff ff ff ff ff ff |................|
310 00a0: ff ff 80 29 63 a0 49 d3 23 87 bf ce fe 56 67 92 |...)c.I.#....Vg.|
311 00b0: 67 2c 69 d1 ec 39 00 00 00 00 00 00 00 00 00 00 |g,i..9..........|
312 00c0: 00 00 75 30 73 26 45 64 61 74 61 2f 30 30 63 68 |..u0s&Edata/00ch|
313 00d0: 61 6e 67 65 6c 6f 67 2d 61 62 33 34 39 31 38 30 |angelog-ab349180|
314 00e0: 61 30 34 30 35 30 31 30 2e 6e 64 2e 69 00 03 00 |a0405010.nd.i...|
315 00f0: 01 00 00 00 00 00 00 00 05 00 00 00 04 00 00 00 |................|
316 316 #endif
317 317 #if zstd no-rust
318 318 $ f --size --hex --bytes 256 body
319 body: size=116340 (no-bigendian !)
319 body: size=116310
320 320 body: size=116335 (bigendian !)
321 321 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
322 0010: 9a 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |..STREAM2.......|
323 0020: 06 09 04 0c 5e 62 79 74 65 63 6f 75 6e 74 31 30 |....^bytecount10|
322 0010: 7c 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 ||.STREAM2.......|
323 0020: 06 09 04 0c 40 62 79 74 65 63 6f 75 6e 74 31 30 |....@bytecount10|
324 324 0030: 31 32 37 36 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1276filecount109| (no-bigendian !)
325 325 0030: 31 32 37 31 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1271filecount109| (bigendian !)
326 0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 64 6f 74 |3requirementsdot|
327 0050: 65 6e 63 6f 64 65 25 32 43 66 6e 63 61 63 68 65 |encode%2Cfncache|
328 0060: 25 32 43 67 65 6e 65 72 61 6c 64 65 6c 74 61 25 |%2Cgeneraldelta%|
329 0070: 32 43 72 65 76 6c 6f 67 2d 63 6f 6d 70 72 65 73 |2Crevlog-compres|
330 0080: 73 69 6f 6e 2d 7a 73 74 64 25 32 43 72 65 76 6c |sion-zstd%2Crevl|
331 0090: 6f 67 76 31 25 32 43 73 70 61 72 73 65 72 65 76 |ogv1%2Csparserev|
332 00a0: 6c 6f 67 25 32 43 73 74 6f 72 65 00 00 80 00 73 |log%2Cstore....s|
333 00b0: 08 42 64 61 74 61 2f 30 2e 69 00 03 00 01 00 00 |.Bdata/0.i......|
334 00c0: 00 00 00 00 00 02 00 00 00 01 00 00 00 00 00 00 |................|
335 00d0: 00 01 ff ff ff ff ff ff ff ff 80 29 63 a0 49 d3 |...........)c.I.|
336 00e0: 23 87 bf ce fe 56 67 92 67 2c 69 d1 ec 39 00 00 |#....Vg.g,i..9..|
337 00f0: 00 00 00 00 00 00 00 00 00 00 75 30 73 26 45 64 |..........u0s&Ed|
326 0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |3requirementsgen|
327 0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 72 65 76 6c |eraldelta%2Crevl|
328 0060: 6f 67 2d 63 6f 6d 70 72 65 73 73 69 6f 6e 2d 7a |og-compression-z|
329 0070: 73 74 64 25 32 43 72 65 76 6c 6f 67 76 31 25 32 |std%2Crevlogv1%2|
330 0080: 43 73 70 61 72 73 65 72 65 76 6c 6f 67 00 00 80 |Csparserevlog...|
331 0090: 00 73 08 42 64 61 74 61 2f 30 2e 69 00 03 00 01 |.s.Bdata/0.i....|
332 00a0: 00 00 00 00 00 00 00 02 00 00 00 01 00 00 00 00 |................|
333 00b0: 00 00 00 01 ff ff ff ff ff ff ff ff 80 29 63 a0 |.............)c.|
334 00c0: 49 d3 23 87 bf ce fe 56 67 92 67 2c 69 d1 ec 39 |I.#....Vg.g,i..9|
335 00d0: 00 00 00 00 00 00 00 00 00 00 00 00 75 30 73 26 |............u0s&|
336 00e0: 45 64 61 74 61 2f 30 30 63 68 61 6e 67 65 6c 6f |Edata/00changelo|
337 00f0: 67 2d 61 62 33 34 39 31 38 30 61 30 34 30 35 30 |g-ab349180a04050|
338 338 #endif
339 339 #if zstd rust no-dirstate-v2
340 340 $ f --size --hex --bytes 256 body
341 body: size=116361
341 body: size=116331
342 342 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
343 0010: af 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |..STREAM2.......|
344 0020: 06 09 04 0c 73 62 79 74 65 63 6f 75 6e 74 31 30 |....sbytecount10|
343 0010: 91 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |..STREAM2.......|
344 0020: 06 09 04 0c 55 62 79 74 65 63 6f 75 6e 74 31 30 |....Ubytecount10|
345 345 0030: 31 32 37 36 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1276filecount109|
346 0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 64 6f 74 |3requirementsdot|
347 0050: 65 6e 63 6f 64 65 25 32 43 66 6e 63 61 63 68 65 |encode%2Cfncache|
348 0060: 25 32 43 67 65 6e 65 72 61 6c 64 65 6c 74 61 25 |%2Cgeneraldelta%|
349 0070: 32 43 70 65 72 73 69 73 74 65 6e 74 2d 6e 6f 64 |2Cpersistent-nod|
350 0080: 65 6d 61 70 25 32 43 72 65 76 6c 6f 67 2d 63 6f |emap%2Crevlog-co|
351 0090: 6d 70 72 65 73 73 69 6f 6e 2d 7a 73 74 64 25 32 |mpression-zstd%2|
352 00a0: 43 72 65 76 6c 6f 67 76 31 25 32 43 73 70 61 72 |Crevlogv1%2Cspar|
353 00b0: 73 65 72 65 76 6c 6f 67 25 32 43 73 74 6f 72 65 |serevlog%2Cstore|
354 00c0: 00 00 80 00 73 08 42 64 61 74 61 2f 30 2e 69 00 |....s.Bdata/0.i.|
355 00d0: 03 00 01 00 00 00 00 00 00 00 02 00 00 00 01 00 |................|
356 00e0: 00 00 00 00 00 00 01 ff ff ff ff ff ff ff ff 80 |................|
357 00f0: 29 63 a0 49 d3 23 87 bf ce fe 56 67 92 67 2c 69 |)c.I.#....Vg.g,i|
346 0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |3requirementsgen|
347 0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 70 65 72 73 |eraldelta%2Cpers|
348 0060: 69 73 74 65 6e 74 2d 6e 6f 64 65 6d 61 70 25 32 |istent-nodemap%2|
349 0070: 43 72 65 76 6c 6f 67 2d 63 6f 6d 70 72 65 73 73 |Crevlog-compress|
350 0080: 69 6f 6e 2d 7a 73 74 64 25 32 43 72 65 76 6c 6f |ion-zstd%2Crevlo|
351 0090: 67 76 31 25 32 43 73 70 61 72 73 65 72 65 76 6c |gv1%2Csparserevl|
352 00a0: 6f 67 00 00 80 00 73 08 42 64 61 74 61 2f 30 2e |og....s.Bdata/0.|
353 00b0: 69 00 03 00 01 00 00 00 00 00 00 00 02 00 00 00 |i...............|
354 00c0: 01 00 00 00 00 00 00 00 01 ff ff ff ff ff ff ff |................|
355 00d0: ff 80 29 63 a0 49 d3 23 87 bf ce fe 56 67 92 67 |..)c.I.#....Vg.g|
356 00e0: 2c 69 d1 ec 39 00 00 00 00 00 00 00 00 00 00 00 |,i..9...........|
357 00f0: 00 75 30 73 26 45 64 61 74 61 2f 30 30 63 68 61 |.u0s&Edata/00cha|
358 358 #endif
359 359 #if zstd dirstate-v2
360 360 $ f --size --hex --bytes 256 body
361 361 body: size=109549
362 362 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
363 363 0010: c0 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |..STREAM2.......|
364 364 0020: 05 09 04 0c 85 62 79 74 65 63 6f 75 6e 74 39 35 |.....bytecount95|
365 365 0030: 38 39 37 66 69 6c 65 63 6f 75 6e 74 31 30 33 30 |897filecount1030|
366 366 0040: 72 65 71 75 69 72 65 6d 65 6e 74 73 64 6f 74 65 |requirementsdote|
367 367 0050: 6e 63 6f 64 65 25 32 43 65 78 70 2d 64 69 72 73 |ncode%2Cexp-dirs|
368 368 0060: 74 61 74 65 2d 76 32 25 32 43 66 6e 63 61 63 68 |tate-v2%2Cfncach|
369 369 0070: 65 25 32 43 67 65 6e 65 72 61 6c 64 65 6c 74 61 |e%2Cgeneraldelta|
370 370 0080: 25 32 43 70 65 72 73 69 73 74 65 6e 74 2d 6e 6f |%2Cpersistent-no|
371 371 0090: 64 65 6d 61 70 25 32 43 72 65 76 6c 6f 67 2d 63 |demap%2Crevlog-c|
372 372 00a0: 6f 6d 70 72 65 73 73 69 6f 6e 2d 7a 73 74 64 25 |ompression-zstd%|
373 373 00b0: 32 43 72 65 76 6c 6f 67 76 31 25 32 43 73 70 61 |2Crevlogv1%2Cspa|
374 374 00c0: 72 73 65 72 65 76 6c 6f 67 25 32 43 73 74 6f 72 |rserevlog%2Cstor|
375 375 00d0: 65 00 00 80 00 73 08 42 64 61 74 61 2f 30 2e 69 |e....s.Bdata/0.i|
376 376 00e0: 00 03 00 01 00 00 00 00 00 00 00 02 00 00 00 01 |................|
377 377 00f0: 00 00 00 00 00 00 00 01 ff ff ff ff ff ff ff ff |................|
378 378 #endif
379 379
380 380 --uncompressed is an alias for --stream
381 381
382 382 #if stream-legacy
383 383 $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
384 384 streaming all changes
385 385 1090 files to transfer, 102 KB of data (no-zstd !)
386 386 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
387 387 1090 files to transfer, 98.8 KB of data (zstd !)
388 388 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
389 389 searching for changes
390 390 no changes found
391 391 #endif
392 392 #if stream-bundle2
393 393 $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
394 394 streaming all changes
395 395 1093 files to transfer, 102 KB of data (no-zstd !)
396 396 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
397 397 1093 files to transfer, 98.9 KB of data (zstd !)
398 398 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
399 399 #endif
400 400
401 401 Clone with background file closing enabled
402 402
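The --config flags used below map directly onto an hgrc section; the equivalent persistent configuration (option names copied from the command lines below) would be:

    [worker]
    backgroundclose = true
    backgroundcloseminfilecount = 1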
403 403 #if stream-legacy
404 404 $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
405 405 using http://localhost:$HGPORT/
406 406 sending capabilities command
407 407 sending branchmap command
408 408 streaming all changes
409 409 sending stream_out command
410 410 1090 files to transfer, 102 KB of data (no-zstd !)
411 411 1090 files to transfer, 98.8 KB of data (zstd !)
412 412 starting 4 threads for background file closing
413 413 updating the branch cache
414 414 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
415 415 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
416 416 query 1; heads
417 417 sending batch command
418 418 searching for changes
419 419 all remote heads known locally
420 420 no changes found
421 421 sending getbundle command
422 422 bundle2-input-bundle: with-transaction
423 423 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
424 424 bundle2-input-part: "phase-heads" supported
425 425 bundle2-input-part: total payload size 24
426 426 bundle2-input-bundle: 2 parts total
427 427 checking for updated bookmarks
428 428 updating the branch cache
429 429 (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
430 430 #endif
431 431 #if stream-bundle2
432 432 $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
433 433 using http://localhost:$HGPORT/
434 434 sending capabilities command
435 435 query 1; heads
436 436 sending batch command
437 437 streaming all changes
438 438 sending getbundle command
439 439 bundle2-input-bundle: with-transaction
440 440 bundle2-input-part: "stream2" (params: 3 mandatory) supported
441 441 applying stream bundle
442 442 1093 files to transfer, 102 KB of data (no-zstd !)
443 443 1093 files to transfer, 98.9 KB of data (zstd !)
444 444 starting 4 threads for background file closing
445 445 starting 4 threads for background file closing
446 446 updating the branch cache
447 447 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
448 448 bundle2-input-part: total payload size 118984 (no-zstd !)
449 449 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
450 450 bundle2-input-part: total payload size 116145 (zstd no-bigendian !)
451 451 bundle2-input-part: total payload size 116140 (zstd bigendian !)
452 452 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
453 453 bundle2-input-bundle: 2 parts total
454 454 checking for updated bookmarks
455 455 updating the branch cache
456 456 (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
457 457 #endif
458 458
459 459 Cannot stream clone when there are secret changesets
460 460
461 461 $ hg -R server phase --force --secret -r tip
462 462 $ hg clone --stream -U http://localhost:$HGPORT secret-denied
463 463 warning: stream clone requested but server has them disabled
464 464 requesting all changes
465 465 adding changesets
466 466 adding manifests
467 467 adding file changes
468 468 added 2 changesets with 1025 changes to 1025 files
469 469 new changesets 96ee1d7354c4:c17445101a72
470 470
471 471 $ killdaemons.py
472 472
473 473 Streaming of secrets can be overridden by server config
474 474
475 475 $ cd server
476 476 $ hg serve --config server.uncompressedallowsecret=true -p $HGPORT -d --pid-file=hg.pid
477 477 $ cat hg.pid > $DAEMON_PIDS
478 478 $ cd ..
479 479
480 480 #if stream-legacy
481 481 $ hg clone --stream -U http://localhost:$HGPORT secret-allowed
482 482 streaming all changes
483 483 1090 files to transfer, 102 KB of data (no-zstd !)
484 484 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
485 485 1090 files to transfer, 98.8 KB of data (zstd !)
486 486 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
487 487 searching for changes
488 488 no changes found
489 489 #endif
490 490 #if stream-bundle2
491 491 $ hg clone --stream -U http://localhost:$HGPORT secret-allowed
492 492 streaming all changes
493 493 1093 files to transfer, 102 KB of data (no-zstd !)
494 494 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
495 495 1093 files to transfer, 98.9 KB of data (zstd !)
496 496 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
497 497 #endif
498 498
499 499 $ killdaemons.py
500 500
501 501 Verify interaction between preferuncompressed and secret presence
502 502
503 503 $ cd server
504 504 $ hg serve --config server.preferuncompressed=true -p $HGPORT -d --pid-file=hg.pid
505 505 $ cat hg.pid > $DAEMON_PIDS
506 506 $ cd ..
507 507
508 508 $ hg clone -U http://localhost:$HGPORT preferuncompressed-secret
509 509 requesting all changes
510 510 adding changesets
511 511 adding manifests
512 512 adding file changes
513 513 added 2 changesets with 1025 changes to 1025 files
514 514 new changesets 96ee1d7354c4:c17445101a72
515 515
516 516 $ killdaemons.py
517 517
518 518 Clone not allowed when full bundles are disabled and secrets can't be served
519 519
520 520 $ cd server
521 521 $ hg serve --config server.disablefullbundle=true -p $HGPORT -d --pid-file=hg.pid
522 522 $ cat hg.pid > $DAEMON_PIDS
523 523 $ cd ..
524 524
525 525 $ hg clone --stream http://localhost:$HGPORT secret-full-disabled
526 526 warning: stream clone requested but server has them disabled
527 527 requesting all changes
528 528 remote: abort: server has pull-based clones disabled
529 529 abort: pull failed on remote
530 530 (remove --pull if specified or upgrade Mercurial)
531 531 [100]
532 532
533 533 Local stream clone with secrets involved
534 534 (This is just a test of behavior: if you have access to the repo's files,
535 535 there is no security boundary, so it isn't important to prevent a clone here.)
536 536
537 537 $ hg clone -U --stream server local-secret
538 538 warning: stream clone requested but server has them disabled
539 539 requesting all changes
540 540 adding changesets
541 541 adding manifests
542 542 adding file changes
543 543 added 2 changesets with 1025 changes to 1025 files
544 544 new changesets 96ee1d7354c4:c17445101a72
545 545
546 546 Stream clone while repo is changing:
547 547
548 548 $ mkdir changing
549 549 $ cd changing
550 550
551 551 extension for delaying the server process so we can reliably modify the repo
552 552 while cloning
553 553
554 554 $ cat > stream_steps.py <<EOF
555 555 > import os
556 556 > import sys
557 557 > from mercurial import (
558 558 > encoding,
559 559 > extensions,
560 560 > streamclone,
561 561 > testing,
562 562 > )
563 563 > WALKED_FILE_1 = encoding.environ[b'HG_TEST_STREAM_WALKED_FILE_1']
564 564 > WALKED_FILE_2 = encoding.environ[b'HG_TEST_STREAM_WALKED_FILE_2']
565 565 >
566 566 > def _test_sync_point_walk_1(orig, repo):
567 567 > testing.write_file(WALKED_FILE_1)
568 568 >
569 569 > def _test_sync_point_walk_2(orig, repo):
570 570 > assert repo._currentlock(repo._lockref) is None
571 571 > testing.wait_file(WALKED_FILE_2)
572 572 >
573 573 > extensions.wrapfunction(
574 574 > streamclone,
575 575 > '_test_sync_point_walk_1',
576 576 > _test_sync_point_walk_1
577 577 > )
578 578 > extensions.wrapfunction(
579 579 > streamclone,
580 580 > '_test_sync_point_walk_2',
581 581 > _test_sync_point_walk_2
582 582 > )
583 583 > EOF
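The testing.write_file / testing.wait_file calls above boil down to file-based signalling between the serving process and the test shell. A rough sketch of the idea in plain Python (the shape only, not the actual mercurial.testing implementation):

    import os
    import time

    def write_file(path):
        # signal: create the marker file the peer is polling for
        with open(path, 'wb'):
            pass

    def wait_file(path, timeout=10):
        # block until the marker file shows up, or give up
        deadline = time.time() + timeout
        while not os.path.exists(path):
            if time.time() > deadline:
                raise RuntimeError('timed out waiting for %r' % path)
            time.sleep(0.01)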
584 584
585 585 prepare a repo with a small file and a big file to cover both code paths in emitrevlogdata
586 586
587 587 $ hg init repo
588 588 $ touch repo/f1
589 589 $ $TESTDIR/seq.py 50000 > repo/f2
590 590 $ hg -R repo ci -Aqm "0"
591 591 $ HG_TEST_STREAM_WALKED_FILE_1="$TESTTMP/sync_file_walked_1"
592 592 $ export HG_TEST_STREAM_WALKED_FILE_1
593 593 $ HG_TEST_STREAM_WALKED_FILE_2="$TESTTMP/sync_file_walked_2"
594 594 $ export HG_TEST_STREAM_WALKED_FILE_2
595 595 $ HG_TEST_STREAM_WALKED_FILE_3="$TESTTMP/sync_file_walked_3"
596 596 $ export HG_TEST_STREAM_WALKED_FILE_3
597 597 # $ cat << EOF >> $HGRCPATH
598 598 # > [hooks]
599 599 # > pre-clone=rm -f "$TESTTMP/sync_file_walked_*"
600 600 # > EOF
601 601 $ hg serve -R repo -p $HGPORT1 -d --error errors.log --pid-file=hg.pid --config extensions.stream_steps="$RUNTESTDIR/testlib/ext-stream-clone-steps.py"
602 602 $ cat hg.pid >> $DAEMON_PIDS
603 603
604 604 clone while modifying the repo between stat'ing files with the write lock held
605 605 and actually serving file content
606 606
607 607 $ (hg clone -q --stream -U http://localhost:$HGPORT1 clone; touch "$HG_TEST_STREAM_WALKED_FILE_3") &
608 608 $ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_1
609 609 $ echo >> repo/f1
610 610 $ echo >> repo/f2
611 611 $ hg -R repo ci -m "1" --config ui.timeout.warn=-1
612 612 $ touch $HG_TEST_STREAM_WALKED_FILE_2
613 613 $ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_3
614 614 $ hg -R clone id
615 615 000000000000
616 616 $ cat errors.log
617 617 $ cd ..
618 618
619 619 Stream repository with bookmarks
620 620 --------------------------------
621 621
622 622 (revert introduction of secret changeset)
623 623
624 624 $ hg -R server phase --draft 'secret()'
625 625
626 626 add a bookmark
627 627
628 628 $ hg -R server bookmark -r tip some-bookmark
629 629
630 630 clone it
631 631
632 632 #if stream-legacy
633 633 $ hg clone --stream http://localhost:$HGPORT with-bookmarks
634 634 streaming all changes
635 635 1090 files to transfer, 102 KB of data (no-zstd !)
636 636 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
637 637 1090 files to transfer, 98.8 KB of data (zstd !)
638 638 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
639 639 searching for changes
640 640 no changes found
641 641 updating to branch default
642 642 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
643 643 #endif
644 644 #if stream-bundle2
645 645 $ hg clone --stream http://localhost:$HGPORT with-bookmarks
646 646 streaming all changes
647 647 1096 files to transfer, 102 KB of data (no-zstd !)
648 648 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
649 649 1096 files to transfer, 99.1 KB of data (zstd !)
650 650 transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !)
651 651 updating to branch default
652 652 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
653 653 #endif
654 654 $ hg verify -R with-bookmarks
655 655 checking changesets
656 656 checking manifests
657 657 crosschecking files in changesets and manifests
658 658 checking files
659 659 checked 3 changesets with 1088 changes to 1088 files
660 660 $ hg -R with-bookmarks bookmarks
661 661 some-bookmark 2:5223b5e3265f
662 662
663 663 Stream repository with phases
664 664 -----------------------------
665 665
666 666 Clone as publishing
667 667
668 668 $ hg -R server phase -r 'all()'
669 669 0: draft
670 670 1: draft
671 671 2: draft
672 672
673 673 #if stream-legacy
674 674 $ hg clone --stream http://localhost:$HGPORT phase-publish
675 675 streaming all changes
676 676 1090 files to transfer, 102 KB of data (no-zstd !)
677 677 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
678 678 1090 files to transfer, 98.8 KB of data (zstd !)
679 679 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
680 680 searching for changes
681 681 no changes found
682 682 updating to branch default
683 683 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
684 684 #endif
685 685 #if stream-bundle2
686 686 $ hg clone --stream http://localhost:$HGPORT phase-publish
687 687 streaming all changes
688 688 1096 files to transfer, 102 KB of data (no-zstd !)
689 689 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
690 690 1096 files to transfer, 99.1 KB of data (zstd !)
691 691 transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !)
692 692 updating to branch default
693 693 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
694 694 #endif
695 695 $ hg verify -R phase-publish
696 696 checking changesets
697 697 checking manifests
698 698 crosschecking files in changesets and manifests
699 699 checking files
700 700 checked 3 changesets with 1088 changes to 1088 files
701 701 $ hg -R phase-publish phase -r 'all()'
702 702 0: public
703 703 1: public
704 704 2: public
705 705
706 706 Clone as non-publishing
707 707
708 708 $ cat << EOF >> server/.hg/hgrc
709 709 > [phases]
710 710 > publish = False
711 711 > EOF
712 712 $ killdaemons.py
713 713 $ hg -R server serve -p $HGPORT -d --pid-file=hg.pid
714 714 $ cat hg.pid > $DAEMON_PIDS
715 715
716 716 #if stream-legacy
717 717
718 718 With v1 of the stream protocol, changesets are always cloned as public. This makes
719 719 stream v1 unsuitable for non-publishing repositories.
720 720
721 721 $ hg clone --stream http://localhost:$HGPORT phase-no-publish
722 722 streaming all changes
723 723 1090 files to transfer, 102 KB of data (no-zstd !)
724 724 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
725 725 1090 files to transfer, 98.8 KB of data (zstd !)
726 726 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
727 727 searching for changes
728 728 no changes found
729 729 updating to branch default
730 730 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
731 731 $ hg -R phase-no-publish phase -r 'all()'
732 732 0: public
733 733 1: public
734 734 2: public
735 735 #endif
736 736 #if stream-bundle2
737 737 $ hg clone --stream http://localhost:$HGPORT phase-no-publish
738 738 streaming all changes
739 739 1097 files to transfer, 102 KB of data (no-zstd !)
740 740 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
741 741 1097 files to transfer, 99.1 KB of data (zstd !)
742 742 transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !)
743 743 updating to branch default
744 744 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
745 745 $ hg -R phase-no-publish phase -r 'all()'
746 746 0: draft
747 747 1: draft
748 748 2: draft
749 749 #endif
750 750 $ hg verify -R phase-no-publish
751 751 checking changesets
752 752 checking manifests
753 753 crosschecking files in changesets and manifests
754 754 checking files
755 755 checked 3 changesets with 1088 changes to 1088 files
756 756
757 757 $ killdaemons.py
758 758
759 759 #if stream-legacy
760 760
761 761 With v1 of the stream protocol, changesets are always cloned as public. There is
762 762 no obsolescence marker exchange in stream v1.
763 763
764 764 #endif
765 765 #if stream-bundle2
766 766
767 767 Stream repository with obsolescence
768 768 -----------------------------------
769 769
770 770 Clone non-publishing with obsolescence
771 771
772 772 $ cat >> $HGRCPATH << EOF
773 773 > [experimental]
774 774 > evolution=all
775 775 > EOF
776 776
777 777 $ cd server
778 778 $ echo foo > foo
779 779 $ hg -q commit -m 'about to be pruned'
780 780 $ hg debugobsolete `hg log -r . -T '{node}'` -d '0 0' -u test --record-parents
781 781 1 new obsolescence markers
782 782 obsoleted 1 changesets
783 783 $ hg up null -q
784 784 $ hg log -T '{rev}: {phase}\n'
785 785 2: draft
786 786 1: draft
787 787 0: draft
788 788 $ hg serve -p $HGPORT -d --pid-file=hg.pid
789 789 $ cat hg.pid > $DAEMON_PIDS
790 790 $ cd ..
791 791
792 792 $ hg clone -U --stream http://localhost:$HGPORT with-obsolescence
793 793 streaming all changes
794 794 1098 files to transfer, 102 KB of data (no-zstd !)
795 795 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
796 796 1098 files to transfer, 99.5 KB of data (zstd !)
797 797 transferred 99.5 KB in * seconds (* */sec) (glob) (zstd !)
798 798 $ hg -R with-obsolescence log -T '{rev}: {phase}\n'
799 799 2: draft
800 800 1: draft
801 801 0: draft
802 802 $ hg debugobsolete -R with-obsolescence
803 803 8c206a663911c1f97f2f9d7382e417ae55872cfa 0 {5223b5e3265f0df40bb743da62249413d74ac70f} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
804 804 $ hg verify -R with-obsolescence
805 805 checking changesets
806 806 checking manifests
807 807 crosschecking files in changesets and manifests
808 808 checking files
809 809 checked 4 changesets with 1089 changes to 1088 files
810 810
811 811 $ hg clone -U --stream --config experimental.evolution=0 http://localhost:$HGPORT with-obsolescence-no-evolution
812 812 streaming all changes
813 813 remote: abort: server has obsolescence markers, but client cannot receive them via stream clone
814 814 abort: pull failed on remote
815 815 [100]
816 816
817 817 $ killdaemons.py
818 818
819 819 #endif
@@ -1,181 +1,181 b''
1 1 #require no-reposimplestore
2 2
3 3 Test creating and consuming a stream bundle v2
4 4
5 5 $ getmainid() {
6 6 > hg -R main log --template '{node}\n' --rev "$1"
7 7 > }
8 8
9 9 $ cp $HGRCPATH $TESTTMP/hgrc.orig
10 10
11 11 $ cat >> $HGRCPATH << EOF
12 12 > [experimental]
13 13 > evolution.createmarkers=True
14 14 > evolution.exchange=True
15 15 > bundle2-output-capture=True
16 16 > [ui]
17 17 > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline}
18 18 > [web]
19 19 > push_ssl = false
20 20 > allow_push = *
21 21 > [phases]
22 22 > publish=False
23 23 > [extensions]
24 24 > drawdag=$TESTDIR/drawdag.py
25 25 > clonebundles=
26 26 > EOF
27 27
28 28 The extension requires a repo (currently unused)
29 29
30 30 $ hg init main
31 31 $ cd main
32 32
33 33 $ hg debugdrawdag <<'EOF'
34 34 > E
35 35 > |
36 36 > D
37 37 > |
38 38 > C
39 39 > |
40 40 > B
41 41 > |
42 42 > A
43 43 > EOF
44 44
45 45 $ hg bundle -a --type="none-v2;stream=v2" bundle.hg
46 46 $ hg debugbundle bundle.hg
47 47 Stream params: {}
48 stream2 -- {bytecount: 1693, filecount: 11, requirements: dotencode%2Cfncache%2Cgeneraldelta%2Crevlogv1%2Csparserevlog%2Cstore} (mandatory: True) (no-zstd !)
49 stream2 -- {bytecount: 1693, filecount: 11, requirements: dotencode%2Cfncache%2Cgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog%2Cstore} (mandatory: True) (zstd no-rust !)
50 stream2 -- {bytecount: 1693, filecount: 11, requirements: dotencode%2Cfncache%2Cgeneraldelta%2Cpersistent-nodemap%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog%2Cstore} (mandatory: True) (rust !)
48 stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlogv1%2Csparserevlog} (mandatory: True) (no-zstd !)
49 stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (zstd no-rust !)
50 stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Cpersistent-nodemap%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (rust !)
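The requirements parameter is a percent-encoded, comma-separated list; decoding the no-zstd variant above, for instance:

    from urllib.parse import unquote

    print(unquote("generaldelta%2Crevlogv1%2Csparserevlog").split(','))
    # ['generaldelta', 'revlogv1', 'sparserevlog']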
51 51 $ hg debugbundle --spec bundle.hg
52 none-v2;stream=v2;requirements%3Ddotencode%2Cfncache%2Cgeneraldelta%2Crevlogv1%2Csparserevlog%2Cstore (no-zstd !)
53 none-v2;stream=v2;requirements%3Ddotencode%2Cfncache%2Cgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog%2Cstore (zstd no-rust !)
54 none-v2;stream=v2;requirements%3Ddotencode%2Cfncache%2Cgeneraldelta%2Cpersistent-nodemap%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog%2Cstore (rust !)
52 none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlogv1%2Csparserevlog (no-zstd !)
53 none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (zstd no-rust !)
54 none-v2;stream=v2;requirements%3Dgeneraldelta%2Cpersistent-nodemap%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (rust !)
55 55
56 56 Test that we can apply the bundle as a stream clone bundle
57 57
58 58 $ cat > .hg/clonebundles.manifest << EOF
59 59 > http://localhost:$HGPORT1/bundle.hg BUNDLESPEC=`hg debugbundle --spec bundle.hg`
60 60 > EOF
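Each manifest line is a URL followed by optional KEY=VALUE attributes; BUNDLESPEC lets a client skip entries whose spec it cannot apply. A minimal hypothetical entry (made-up URL) for reference:

    http://example.com/full.hg BUNDLESPEC=none-v2;stream=v2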
61 61
62 62 $ hg serve -d -p $HGPORT --pid-file hg.pid --accesslog access.log
63 63 $ cat hg.pid >> $DAEMON_PIDS
64 64
65 65 $ "$PYTHON" $TESTDIR/dumbhttp.py -p $HGPORT1 --pid http.pid
66 66 $ cat http.pid >> $DAEMON_PIDS
67 67
68 68 $ cd ..
69 69 $ hg clone http://localhost:$HGPORT streamv2-clone-implicit --debug
70 70 using http://localhost:$HGPORT/
71 71 sending capabilities command
72 72 sending clonebundles command
73 73 applying clone bundle from http://localhost:$HGPORT1/bundle.hg
74 74 bundle2-input-bundle: with-transaction
75 75 bundle2-input-part: "stream2" (params: 3 mandatory) supported
76 76 applying stream bundle
77 77 11 files to transfer, 1.65 KB of data
78 78 starting 4 threads for background file closing (?)
79 79 starting 4 threads for background file closing (?)
80 80 adding [s] data/A.i (66 bytes)
81 81 adding [s] data/B.i (66 bytes)
82 82 adding [s] data/C.i (66 bytes)
83 83 adding [s] data/D.i (66 bytes)
84 84 adding [s] data/E.i (66 bytes)
85 85 adding [s] 00manifest.i (584 bytes)
86 86 adding [s] 00changelog.i (595 bytes)
87 87 adding [s] phaseroots (43 bytes)
88 88 adding [c] branch2-served (94 bytes)
89 89 adding [c] rbc-names-v1 (7 bytes)
90 90 adding [c] rbc-revs-v1 (40 bytes)
91 91 transferred 1.65 KB in * seconds (* */sec) (glob)
92 92 bundle2-input-part: total payload size 1840
93 93 bundle2-input-bundle: 1 parts total
94 94 updating the branch cache
95 95 finished applying clone bundle
96 96 query 1; heads
97 97 sending batch command
98 98 searching for changes
99 99 all remote heads known locally
100 100 no changes found
101 101 sending getbundle command
102 102 bundle2-input-bundle: with-transaction
103 103 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
104 104 bundle2-input-part: "phase-heads" supported
105 105 bundle2-input-part: total payload size 24
106 106 bundle2-input-bundle: 2 parts total
107 107 checking for updated bookmarks
108 108 updating to branch default
109 109 resolving manifests
110 110 branchmerge: False, force: False, partial: False
111 111 ancestor: 000000000000, local: 000000000000+, remote: 9bc730a19041
112 112 A: remote created -> g
113 113 getting A
114 114 B: remote created -> g
115 115 getting B
116 116 C: remote created -> g
117 117 getting C
118 118 D: remote created -> g
119 119 getting D
120 120 E: remote created -> g
121 121 getting E
122 122 5 files updated, 0 files merged, 0 files removed, 0 files unresolved
123 123 updating the branch cache
124 124 (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob)
125 125
126 126 $ hg clone --stream http://localhost:$HGPORT streamv2-clone-explicit --debug
127 127 using http://localhost:$HGPORT/
128 128 sending capabilities command
129 129 sending clonebundles command
130 130 applying clone bundle from http://localhost:$HGPORT1/bundle.hg
131 131 bundle2-input-bundle: with-transaction
132 132 bundle2-input-part: "stream2" (params: 3 mandatory) supported
133 133 applying stream bundle
134 134 11 files to transfer, 1.65 KB of data
135 135 starting 4 threads for background file closing (?)
136 136 starting 4 threads for background file closing (?)
137 137 adding [s] data/A.i (66 bytes)
138 138 adding [s] data/B.i (66 bytes)
139 139 adding [s] data/C.i (66 bytes)
140 140 adding [s] data/D.i (66 bytes)
141 141 adding [s] data/E.i (66 bytes)
142 142 adding [s] 00manifest.i (584 bytes)
143 143 adding [s] 00changelog.i (595 bytes)
144 144 adding [s] phaseroots (43 bytes)
145 145 adding [c] branch2-served (94 bytes)
146 146 adding [c] rbc-names-v1 (7 bytes)
147 147 adding [c] rbc-revs-v1 (40 bytes)
148 148 transferred 1.65 KB in * seconds (* */sec) (glob)
149 149 bundle2-input-part: total payload size 1840
150 150 bundle2-input-bundle: 1 parts total
151 151 updating the branch cache
152 152 finished applying clone bundle
153 153 query 1; heads
154 154 sending batch command
155 155 searching for changes
156 156 all remote heads known locally
157 157 no changes found
158 158 sending getbundle command
159 159 bundle2-input-bundle: with-transaction
160 160 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
161 161 bundle2-input-part: "phase-heads" supported
162 162 bundle2-input-part: total payload size 24
163 163 bundle2-input-bundle: 2 parts total
164 164 checking for updated bookmarks
165 165 updating to branch default
166 166 resolving manifests
167 167 branchmerge: False, force: False, partial: False
168 168 ancestor: 000000000000, local: 000000000000+, remote: 9bc730a19041
169 169 A: remote created -> g
170 170 getting A
171 171 B: remote created -> g
172 172 getting B
173 173 C: remote created -> g
174 174 getting C
175 175 D: remote created -> g
176 176 getting D
177 177 E: remote created -> g
178 178 getting E
179 179 5 files updated, 0 files merged, 0 files removed, 0 files unresolved
180 180 updating the branch cache
181 181 (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob)