changegroup: replace getchangegroup with makechangegroup...
Durham Goode
r34103:5ede882c default
@@ -1,1899 +1,1898 b''
1 1 # bundle2.py - generic container format to transmit arbitrary data.
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """Handling of the new bundle2 format
8 8
9 9 The goal of bundle2 is to act as an atomic container to transmit a set of
10 10 payloads in an application agnostic way. It consists of a sequence of "parts"
11 11 that are handed to and processed by the application layer.
12 12
13 13
14 14 General format architecture
15 15 ===========================
16 16
17 17 The format is structured as follows
18 18
19 19 - magic string
20 20 - stream level parameters
21 21 - payload parts (any number)
22 22 - end of stream marker.
23 23
24 24 The binary format
25 25 ============================
26 26
27 27 All numbers are unsigned and big-endian.
28 28
29 29 stream level parameters
30 30 ------------------------
31 31
32 32 The binary format is as follows
33 33
34 34 :params size: int32
35 35
36 36 The total number of bytes used by the parameters
37 37
38 38 :params value: arbitrary number of bytes
39 39
40 40 A blob of `params size` bytes containing the serialized version of all stream level
41 41 parameters.
42 42
43 43 The blob contains a space separated list of parameters. Parameters with value
44 44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45 45
46 46 Empty names are obviously forbidden.
47 47
48 48 Names MUST start with a letter. If the first letter is lower case, the
49 49 parameter is advisory and can be safely ignored. However, when the first
50 50 letter is capital, the parameter is mandatory and the bundling process MUST
51 51 stop if it is not able to process it.
52 52
53 53 Stream parameters use a simple textual format for two main reasons:
54 54
55 55 - Stream level parameters should remain simple and we want to discourage any
56 56 crazy usage.
57 57 - Textual data allows easy human inspection of a bundle2 header in case of
58 58 trouble.
59 59
60 60 Any application level options MUST go into a bundle2 part instead.
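
For illustration, a stream carrying the mandatory `Compression` parameter
(a real stream level parameter handled by this module) plus a hypothetical
advisory `check` parameter would start with::

    HG20                     magic string
    \x00\x00\x00\x16         params size (22)
    Compression=BZ check=1   urlquoted, space separated parameter blob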
61 61
62 62 Payload part
63 63 ------------------------
64 64
65 65 The binary format is as follows
66 66
67 67 :header size: int32
68 68
69 69 The total number of bytes used by the part header. When the header is empty
70 70 (size = 0) this is interpreted as the end of stream marker.
71 71
72 72 :header:
73 73
74 74 The header defines how to interpret the part. It contains two pieces of
75 75 data: the part type, and the part parameters.
76 76
77 77 The part type is used to route the part to an application level handler
78 78 that can interpret the payload.
79 79
80 80 Part parameters are passed to the application level handler. They are
81 81 meant to convey information that will help the application level object to
82 82 interpret the part payload.
83 83
84 84 The binary format of the header is as follows
85 85
86 86 :typesize: (one byte)
87 87
88 88 :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)
89 89
90 90 :partid: A 32-bit integer (unique in the bundle) that can be used to refer
91 91 to this part.
92 92
93 93 :parameters:
94 94
95 95 Part parameters may have arbitrary content; the binary structure is::
96 96
97 97 <mandatory-count><advisory-count><param-sizes><param-data>
98 98
99 99 :mandatory-count: 1 byte, number of mandatory parameters
100 100
101 101 :advisory-count: 1 byte, number of advisory parameters
102 102
103 103 :param-sizes:
104 104
105 105 N pairs of bytes, where N is the total number of parameters. Each
106 106 pair contains (<size-of-key>, <size-of-value>) for one parameter.
107 107
108 108 :param-data:
109 109
110 110 A blob of bytes from which each parameter key and value can be
111 111 retrieved using the list of size couples stored in the previous
112 112 field.
113 113
114 114 Mandatory parameters come first, then the advisory ones.
115 115
116 116 Each parameter's key MUST be unique within the part.
117 117
118 118 :payload:
119 119
120 120 payload is a series of `<chunksize><chunkdata>`.
121 121
122 122 `chunksize` is an int32, `chunkdata` are plain bytes (as many as
123 123 `chunksize` says). The payload part is concluded by a zero size chunk.
124 124
125 125 The current implementation always produces either zero or one chunk.
126 126 This is an implementation limitation that will ultimately be lifted.
127 127
128 128 `chunksize` can be negative to trigger special case processing. No such
129 129 processing is in place yet.
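
As an illustration, a payload whose data is the five bytes `hello` is
emitted as one chunk followed by the end-of-payload marker::

    \x00\x00\x00\x05   chunksize (5)
    hello              chunkdata
    \x00\x00\x00\x00   zero size chunk concluding the payload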
130 130
131 131 Bundle processing
132 132 ============================
133 133
134 134 Each part is processed in order using a "part handler". Handlers are registered
135 135 for a certain part type.
136 136
137 137 The matching of a part to its handler is case insensitive. The case of the
138 138 part type is used to know if a part is mandatory or advisory: if the part type
139 139 contains any uppercase char it is considered mandatory (e.g. a `CHANGEGROUP`
140 140 part is mandatory while `changegroup` is advisory; both route to the same
141 141 handler). When no handler is known for a mandatory part, the process is
142 142 aborted and an exception is raised. If the part is advisory and no handler is
143 143 known, the part is ignored. When the process is aborted, the full bundle is
144 144 still read from the stream to keep the channel usable, but none of the parts
145 145 read after an abort are processed. In the future, dropping the stream may become an option for channels we do not care to preserve.
146 146 """
147 147
148 148 from __future__ import absolute_import, division
149 149
150 150 import errno
151 151 import re
152 152 import string
153 153 import struct
154 154 import sys
155 155
156 156 from .i18n import _
157 157 from . import (
158 158 changegroup,
159 159 error,
160 160 obsolete,
161 161 phases,
162 162 pushkey,
163 163 pycompat,
164 164 tags,
165 165 url,
166 166 util,
167 167 )
168 168
169 169 urlerr = util.urlerr
170 170 urlreq = util.urlreq
171 171
172 172 _pack = struct.pack
173 173 _unpack = struct.unpack
174 174
175 175 _fstreamparamsize = '>i'
176 176 _fpartheadersize = '>i'
177 177 _fparttypesize = '>B'
178 178 _fpartid = '>I'
179 179 _fpayloadsize = '>i'
180 180 _fpartparamcount = '>BB'
181 181
182 182 _fphasesentry = '>i20s'
183 183
184 184 preferedchunksize = 4096
185 185
186 186 _parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')
187 187
188 188 def outdebug(ui, message):
189 189 """debug regarding output stream (bundling)"""
190 190 if ui.configbool('devel', 'bundle2.debug'):
191 191 ui.debug('bundle2-output: %s\n' % message)
192 192
193 193 def indebug(ui, message):
194 194 """debug on input stream (unbundling)"""
195 195 if ui.configbool('devel', 'bundle2.debug'):
196 196 ui.debug('bundle2-input: %s\n' % message)
197 197
198 198 def validateparttype(parttype):
199 199 """raise ValueError if a parttype contains invalid character"""
200 200 if _parttypeforbidden.search(parttype):
201 201 raise ValueError(parttype)
202 202
203 203 def _makefpartparamsizes(nbparams):
204 204 """return a struct format to read part parameter sizes
205 205
206 206 The number of parameters is variable so we need to build that format
207 207 dynamically.
208 208 """
209 209 return '>'+('BB'*nbparams)
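
# A minimal usage sketch: with two parameters, the format returned above
# reads both (key size, value size) pairs in a single struct call:
#
#   fmt = _makefpartparamsizes(2)        # '>BBBB'
#   k1, v1, k2, v2 = _unpack(fmt, data)  # four one-byte sizes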
210 210
211 211 parthandlermapping = {}
212 212
213 213 def parthandler(parttype, params=()):
214 214 """decorator that register a function as a bundle2 part handler
215 215
216 216 eg::
217 217
218 218 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
219 219 def myparttypehandler(...):
220 220 '''process a part of type "my part".'''
221 221 ...
222 222 """
223 223 validateparttype(parttype)
224 224 def _decorator(func):
225 225 lparttype = parttype.lower() # enforce lower case matching.
226 226 assert lparttype not in parthandlermapping
227 227 parthandlermapping[lparttype] = func
228 228 func.params = frozenset(params)
229 229 return func
230 230 return _decorator
231 231
232 232 class unbundlerecords(object):
233 233 """keep record of what happens during and unbundle
234 234
235 235 New records are added using `records.add('cat', obj)`. Where 'cat' is a
236 236 category of record and obj is an arbitrary object.
237 237
238 238 `records['cat']` will return all entries of this category 'cat'.
239 239
240 240 Iterating on the object itself will yield `('category', obj)` tuples
241 241 for all entries.
242 242
243 243 All iterations happen in chronological order.
244 244 """
245 245
246 246 def __init__(self):
247 247 self._categories = {}
248 248 self._sequences = []
249 249 self._replies = {}
250 250
251 251 def add(self, category, entry, inreplyto=None):
252 252 """add a new record of a given category.
253 253
254 254 The entry can then be retrieved in the list returned by
255 255 self['category']."""
256 256 self._categories.setdefault(category, []).append(entry)
257 257 self._sequences.append((category, entry))
258 258 if inreplyto is not None:
259 259 self.getreplies(inreplyto).add(category, entry)
260 260
261 261 def getreplies(self, partid):
262 262 """get the records that are replies to a specific part"""
263 263 return self._replies.setdefault(partid, unbundlerecords())
264 264
265 265 def __getitem__(self, cat):
266 266 return tuple(self._categories.get(cat, ()))
267 267
268 268 def __iter__(self):
269 269 return iter(self._sequences)
270 270
271 271 def __len__(self):
272 272 return len(self._sequences)
273 273
274 274 def __nonzero__(self):
275 275 return bool(self._sequences)
276 276
277 277 __bool__ = __nonzero__
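
# A minimal usage sketch of the records API described above:
#
#   records = unbundlerecords()
#   records.add('changegroup', {'return': 1})
#   records['changegroup']  # -> ({'return': 1},)
#   list(records)           # -> [('changegroup', {'return': 1})]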
278 278
279 279 class bundleoperation(object):
280 280 """an object that represents a single bundling process
281 281
282 282 Its purpose is to carry unbundle-related objects and states.
283 283
284 284 A new object should be created at the beginning of each bundle processing.
285 285 The object is to be returned by the processing function.
286 286
287 287 The object currently has very little content; it will ultimately contain:
288 288 * an access to the repo the bundle is applied to,
289 289 * a ui object,
290 290 * a way to retrieve a transaction to add changes to the repo,
291 291 * a way to record the result of processing each part,
292 292 * a way to construct a bundle response when applicable.
293 293 """
294 294
295 295 def __init__(self, repo, transactiongetter, captureoutput=True):
296 296 self.repo = repo
297 297 self.ui = repo.ui
298 298 self.records = unbundlerecords()
299 299 self.reply = None
300 300 self.captureoutput = captureoutput
301 301 self.hookargs = {}
302 302 self._gettransaction = transactiongetter
303 303
304 304 def gettransaction(self):
305 305 transaction = self._gettransaction()
306 306
307 307 if self.hookargs:
308 308 # the ones added to the transaction supersede those added
309 309 # to the operation.
310 310 self.hookargs.update(transaction.hookargs)
311 311 transaction.hookargs = self.hookargs
312 312
313 313 # mark the hookargs as flushed. further attempts to add to
314 314 # hookargs will result in an abort.
315 315 self.hookargs = None
316 316
317 317 return transaction
318 318
319 319 def addhookargs(self, hookargs):
320 320 if self.hookargs is None:
321 321 raise error.ProgrammingError('attempted to add hookargs to '
322 322 'operation after transaction started')
323 323 self.hookargs.update(hookargs)
324 324
325 325 class TransactionUnavailable(RuntimeError):
326 326 pass
327 327
328 328 def _notransaction():
329 329 """default method to get a transaction while processing a bundle
330 330
331 331 Raise an exception to highlight the fact that no transaction was expected
332 332 to be created"""
333 333 raise TransactionUnavailable()
334 334
335 335 def applybundle(repo, unbundler, tr, source=None, url=None, **kwargs):
336 336 # transform me into unbundler.apply() as soon as the freeze is lifted
337 337 if isinstance(unbundler, unbundle20):
338 338 tr.hookargs['bundle2'] = '1'
339 339 if source is not None and 'source' not in tr.hookargs:
340 340 tr.hookargs['source'] = source
341 341 if url is not None and 'url' not in tr.hookargs:
342 342 tr.hookargs['url'] = url
343 343 return processbundle(repo, unbundler, lambda: tr)
344 344 else:
345 345 # the transactiongetter won't be used, but we might as well set it
346 346 op = bundleoperation(repo, lambda: tr)
347 347 _processchangegroup(op, unbundler, tr, source, url, **kwargs)
348 348 return op
349 349
350 350 def processbundle(repo, unbundler, transactiongetter=None, op=None):
351 351 """This function process a bundle, apply effect to/from a repo
352 352
353 353 It iterates over each part then searches for and uses the proper handling
354 354 code to process the part. Parts are processed in order.
355 355
356 356 An unknown mandatory part will abort the process.
357 357
358 358 It is temporarily possible to provide a prebuilt bundleoperation to the
359 359 function. This is used to ensure output is properly propagated in case of
360 360 an error during the unbundling. This output capturing part will likely be
361 361 reworked and this ability will probably go away in the process.
362 362 """
363 363 if op is None:
364 364 if transactiongetter is None:
365 365 transactiongetter = _notransaction
366 366 op = bundleoperation(repo, transactiongetter)
367 367 # todo:
368 368 # - replace this with an init function soon.
369 369 # - exception catching
370 370 unbundler.params
371 371 if repo.ui.debugflag:
372 372 msg = ['bundle2-input-bundle:']
373 373 if unbundler.params:
374 374 msg.append(' %i params' % len(unbundler.params))
375 375 if op._gettransaction is None or op._gettransaction is _notransaction:
376 376 msg.append(' no-transaction')
377 377 else:
378 378 msg.append(' with-transaction')
379 379 msg.append('\n')
380 380 repo.ui.debug(''.join(msg))
381 381 iterparts = enumerate(unbundler.iterparts())
382 382 part = None
383 383 nbpart = 0
384 384 try:
385 385 for nbpart, part in iterparts:
386 386 _processpart(op, part)
387 387 except Exception as exc:
388 388 # Any exceptions seeking to the end of the bundle at this point are
389 389 # almost certainly related to the underlying stream being bad.
390 390 # And, chances are that the exception we're handling is related to
391 391 # getting in that bad state. So, we swallow the seeking error and
392 392 # re-raise the original error.
393 393 seekerror = False
394 394 try:
395 395 for nbpart, part in iterparts:
396 396 # consume the bundle content
397 397 part.seek(0, 2)
398 398 except Exception:
399 399 seekerror = True
400 400
401 401 # Small hack to let caller code distinguish exceptions from bundle2
402 402 # processing from processing the old format. This is mostly
403 403 # needed to handle different return codes to unbundle according to the
404 404 # type of bundle. We should probably clean up or drop this return code
405 405 # craziness in a future version.
406 406 exc.duringunbundle2 = True
407 407 salvaged = []
408 408 replycaps = None
409 409 if op.reply is not None:
410 410 salvaged = op.reply.salvageoutput()
411 411 replycaps = op.reply.capabilities
412 412 exc._replycaps = replycaps
413 413 exc._bundle2salvagedoutput = salvaged
414 414
415 415 # Re-raising from a variable loses the original stack. So only use
416 416 # that form if we need to.
417 417 if seekerror:
418 418 raise exc
419 419 else:
420 420 raise
421 421 finally:
422 422 repo.ui.debug('bundle2-input-bundle: %i parts total\n' % nbpart)
423 423
424 424 return op
425 425
426 426 def _processchangegroup(op, cg, tr, source, url, **kwargs):
427 427 ret = cg.apply(op.repo, tr, source, url, **kwargs)
428 428 op.records.add('changegroup', {
429 429 'return': ret,
430 430 })
431 431 return ret
432 432
433 433 def _processpart(op, part):
434 434 """process a single part from a bundle
435 435
436 436 The part is guaranteed to have been fully consumed when the function exits
437 437 (even if an exception is raised)."""
438 438 status = 'unknown' # used by debug output
439 439 hardabort = False
440 440 try:
441 441 try:
442 442 handler = parthandlermapping.get(part.type)
443 443 if handler is None:
444 444 status = 'unsupported-type'
445 445 raise error.BundleUnknownFeatureError(parttype=part.type)
446 446 indebug(op.ui, 'found a handler for part %r' % part.type)
447 447 unknownparams = part.mandatorykeys - handler.params
448 448 if unknownparams:
449 449 unknownparams = list(unknownparams)
450 450 unknownparams.sort()
451 451 status = 'unsupported-params (%s)' % unknownparams
452 452 raise error.BundleUnknownFeatureError(parttype=part.type,
453 453 params=unknownparams)
454 454 status = 'supported'
455 455 except error.BundleUnknownFeatureError as exc:
456 456 if part.mandatory: # mandatory parts
457 457 raise
458 458 indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
459 459 return # skip to part processing
460 460 finally:
461 461 if op.ui.debugflag:
462 462 msg = ['bundle2-input-part: "%s"' % part.type]
463 463 if not part.mandatory:
464 464 msg.append(' (advisory)')
465 465 nbmp = len(part.mandatorykeys)
466 466 nbap = len(part.params) - nbmp
467 467 if nbmp or nbap:
468 468 msg.append(' (params:')
469 469 if nbmp:
470 470 msg.append(' %i mandatory' % nbmp)
471 471 if nbap:
472 472 msg.append(' %i advisory' % nbap)
473 473 msg.append(')')
474 474 msg.append(' %s\n' % status)
475 475 op.ui.debug(''.join(msg))
476 476
477 477 # handler is called outside the above try block so that we don't
478 478 # risk catching KeyErrors from anything other than the
479 479 # parthandlermapping lookup (any KeyError raised by handler()
480 480 # itself represents a defect of a different variety).
481 481 output = None
482 482 if op.captureoutput and op.reply is not None:
483 483 op.ui.pushbuffer(error=True, subproc=True)
484 484 output = ''
485 485 try:
486 486 handler(op, part)
487 487 finally:
488 488 if output is not None:
489 489 output = op.ui.popbuffer()
490 490 if output:
491 491 outpart = op.reply.newpart('output', data=output,
492 492 mandatory=False)
493 493 outpart.addparam(
494 494 'in-reply-to', pycompat.bytestr(part.id), mandatory=False)
495 495 # If exiting or interrupted, do not attempt to seek the stream in the
496 496 # finally block below. This makes abort faster.
497 497 except (SystemExit, KeyboardInterrupt):
498 498 hardabort = True
499 499 raise
500 500 finally:
501 501 # consume the part content to not corrupt the stream.
502 502 if not hardabort:
503 503 part.seek(0, 2)
504 504
505 505
506 506 def decodecaps(blob):
507 507 """decode a bundle2 caps bytes blob into a dictionary
508 508
509 509 The blob is a list of capabilities (one per line)
510 510 Capabilities may have values using a line of the form::
511 511
512 512 capability=value1,value2,value3
513 513
514 514 The values are always a list."""
515 515 caps = {}
516 516 for line in blob.splitlines():
517 517 if not line:
518 518 continue
519 519 if '=' not in line:
520 520 key, vals = line, ()
521 521 else:
522 522 key, vals = line.split('=', 1)
523 523 vals = vals.split(',')
524 524 key = urlreq.unquote(key)
525 525 vals = [urlreq.unquote(v) for v in vals]
526 526 caps[key] = vals
527 527 return caps
528 528
529 529 def encodecaps(caps):
530 530 """encode a bundle2 caps dictionary into a bytes blob"""
531 531 chunks = []
532 532 for ca in sorted(caps):
533 533 vals = caps[ca]
534 534 ca = urlreq.quote(ca)
535 535 vals = [urlreq.quote(v) for v in vals]
536 536 if vals:
537 537 ca = "%s=%s" % (ca, ','.join(vals))
538 538 chunks.append(ca)
539 539 return '\n'.join(chunks)
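
# A round-trip sketch of the two helpers above (valueless capabilities
# decode to an empty list, since "the values are always a list"):
#
#   encodecaps({'HG20': [], 'changegroup': ['01', '02']})
#     -> 'HG20\nchangegroup=01,02'
#   decodecaps('HG20\nchangegroup=01,02')
#     -> {'HG20': [], 'changegroup': ['01', '02']}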
540 540
541 541 bundletypes = {
542 542 "": ("", 'UN'), # only when using unbundle on ssh and old http servers
543 543 # since the unification ssh accepts a header but there
544 544 # is no capability signaling it.
545 545 "HG20": (), # special-cased below
546 546 "HG10UN": ("HG10UN", 'UN'),
547 547 "HG10BZ": ("HG10", 'BZ'),
548 548 "HG10GZ": ("HG10GZ", 'GZ'),
549 549 }
550 550
551 551 # hgweb uses this list to communicate its preferred type
552 552 bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']
553 553
554 554 class bundle20(object):
555 555 """represent an outgoing bundle2 container
556 556
557 557 Use the `addparam` method to add stream level parameters and `newpart` to
558 558 populate it. Then call `getchunks` to retrieve all the binary chunks of
559 559 data that compose the bundle2 container."""
560 560
561 561 _magicstring = 'HG20'
562 562
563 563 def __init__(self, ui, capabilities=()):
564 564 self.ui = ui
565 565 self._params = []
566 566 self._parts = []
567 567 self.capabilities = dict(capabilities)
568 568 self._compengine = util.compengines.forbundletype('UN')
569 569 self._compopts = None
570 570
571 571 def setcompression(self, alg, compopts=None):
572 572 """setup core part compression to <alg>"""
573 573 if alg in (None, 'UN'):
574 574 return
575 575 assert not any(n.lower() == 'compression' for n, v in self._params)
576 576 self.addparam('Compression', alg)
577 577 self._compengine = util.compengines.forbundletype(alg)
578 578 self._compopts = compopts
579 579
580 580 @property
581 581 def nbparts(self):
582 582 """total number of parts added to the bundler"""
583 583 return len(self._parts)
584 584
585 585 # methods used to define the bundle2 content
586 586 def addparam(self, name, value=None):
587 587 """add a stream level parameter"""
588 588 if not name:
589 589 raise ValueError('empty parameter name')
590 590 if name[0] not in pycompat.bytestr(string.ascii_letters):
591 591 raise ValueError('non letter first character: %r' % name)
592 592 self._params.append((name, value))
593 593
594 594 def addpart(self, part):
595 595 """add a new part to the bundle2 container
596 596
597 597 Parts contain the actual application payload.
598 598 assert part.id is None
599 599 part.id = len(self._parts) # very cheap counter
600 600 self._parts.append(part)
601 601
602 602 def newpart(self, typeid, *args, **kwargs):
603 603 """create a new part and add it to the containers
604 604
605 605 As the part is directly added to the containers. For now, this means
606 606 that any failure to properly initialize the part after calling
607 607 ``newpart`` should result in a failure of the whole bundling process.
608 608
609 609 You can still fall back to manually create and add if you need better
610 610 control."""
611 611 part = bundlepart(typeid, *args, **kwargs)
612 612 self.addpart(part)
613 613 return part
614 614
615 615 # methods used to generate the bundle2 stream
616 616 def getchunks(self):
617 617 if self.ui.debugflag:
618 618 msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
619 619 if self._params:
620 620 msg.append(' (%i params)' % len(self._params))
621 621 msg.append(' %i parts total\n' % len(self._parts))
622 622 self.ui.debug(''.join(msg))
623 623 outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
624 624 yield self._magicstring
625 625 param = self._paramchunk()
626 626 outdebug(self.ui, 'bundle parameter: %s' % param)
627 627 yield _pack(_fstreamparamsize, len(param))
628 628 if param:
629 629 yield param
630 630 for chunk in self._compengine.compressstream(self._getcorechunk(),
631 631 self._compopts):
632 632 yield chunk
633 633
634 634 def _paramchunk(self):
635 635 """return a encoded version of all stream parameters"""
636 636 blocks = []
637 637 for par, value in self._params:
638 638 par = urlreq.quote(par)
639 639 if value is not None:
640 640 value = urlreq.quote(value)
641 641 par = '%s=%s' % (par, value)
642 642 blocks.append(par)
643 643 return ' '.join(blocks)
644 644
645 645 def _getcorechunk(self):
646 646 """yield chunk for the core part of the bundle
647 647
648 648 (all but headers and parameters)"""
649 649 outdebug(self.ui, 'start of parts')
650 650 for part in self._parts:
651 651 outdebug(self.ui, 'bundle part: "%s"' % part.type)
652 652 for chunk in part.getchunks(ui=self.ui):
653 653 yield chunk
654 654 outdebug(self.ui, 'end of bundle')
655 655 yield _pack(_fpartheadersize, 0)
656 656
657 657
658 658 def salvageoutput(self):
659 659 """return a list with a copy of all output parts in the bundle
660 660
661 661 This is meant to be used during error handling to make sure we preserve
662 662 server output"""
663 663 salvaged = []
664 664 for part in self._parts:
665 665 if part.type.startswith('output'):
666 666 salvaged.append(part.copy())
667 667 return salvaged
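
# A minimal sketch of emitting a bundle with this class ('output' is a
# real part type used elsewhere in this module for server output):
#
#   bundler = bundle20(ui)
#   bundler.newpart('output', data='hello', mandatory=False)
#   raw = ''.join(bundler.getchunks())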
668 668
669 669
670 670 class unpackermixin(object):
671 671 """A mixin to extract bytes and struct data from a stream"""
672 672
673 673 def __init__(self, fp):
674 674 self._fp = fp
675 675
676 676 def _unpack(self, format):
677 677 """unpack this struct format from the stream
678 678
679 679 This method is meant for internal usage by the bundle2 protocol only.
680 680 It directly manipulates the low level stream, including bundle2 level
681 681 instructions.
682 682
683 683 Do not use it to implement higher-level logic or methods."""
684 684 data = self._readexact(struct.calcsize(format))
685 685 return _unpack(format, data)
686 686
687 687 def _readexact(self, size):
688 688 """read exactly <size> bytes from the stream
689 689
690 690 This method is meant for internal usage by the bundle2 protocol only.
691 691 It directly manipulates the low level stream, including bundle2 level
692 692 instructions.
693 693
694 694 Do not use it to implement higher-level logic or methods."""
695 695 return changegroup.readexactly(self._fp, size)
696 696
697 697 def getunbundler(ui, fp, magicstring=None):
698 698 """return a valid unbundler object for a given magicstring"""
699 699 if magicstring is None:
700 700 magicstring = changegroup.readexactly(fp, 4)
701 701 magic, version = magicstring[0:2], magicstring[2:4]
702 702 if magic != 'HG':
703 703 ui.debug(
704 704 "error: invalid magic: %r (version %r), should be 'HG'\n"
705 705 % (magic, version))
706 706 raise error.Abort(_('not a Mercurial bundle'))
707 707 unbundlerclass = formatmap.get(version)
708 708 if unbundlerclass is None:
709 709 raise error.Abort(_('unknown bundle version %s') % version)
710 710 unbundler = unbundlerclass(ui, fp)
711 711 indebug(ui, 'start processing of %s stream' % magicstring)
712 712 return unbundler
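
# A minimal reading sketch, assuming `raw` holds bytes produced by a
# bundle20 instance as sketched above:
#
#   unbundler = getunbundler(ui, util.stringio(raw))
#   for part in unbundler.iterparts():
#       part.read()  # consume each part's payload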
713 713
714 714 class unbundle20(unpackermixin):
715 715 """interpret a bundle2 stream
716 716
717 717 This class is fed with a binary stream and yields parts through its
718 718 `iterparts` method.
719 719
720 720 _magicstring = 'HG20'
721 721
722 722 def __init__(self, ui, fp):
723 723 """If header is specified, we do not read it out of the stream."""
724 724 self.ui = ui
725 725 self._compengine = util.compengines.forbundletype('UN')
726 726 self._compressed = None
727 727 super(unbundle20, self).__init__(fp)
728 728
729 729 @util.propertycache
730 730 def params(self):
731 731 """dictionary of stream level parameters"""
732 732 indebug(self.ui, 'reading bundle2 stream parameters')
733 733 params = {}
734 734 paramssize = self._unpack(_fstreamparamsize)[0]
735 735 if paramssize < 0:
736 736 raise error.BundleValueError('negative bundle param size: %i'
737 737 % paramssize)
738 738 if paramssize:
739 739 params = self._readexact(paramssize)
740 740 params = self._processallparams(params)
741 741 return params
742 742
743 743 def _processallparams(self, paramsblock):
744 744 """"""
745 745 params = util.sortdict()
746 746 for p in paramsblock.split(' '):
747 747 p = p.split('=', 1)
748 748 p = [urlreq.unquote(i) for i in p]
749 749 if len(p) < 2:
750 750 p.append(None)
751 751 self._processparam(*p)
752 752 params[p[0]] = p[1]
753 753 return params
754 754
755 755
756 756 def _processparam(self, name, value):
757 757 """process a parameter, applying its effect if needed
758 758
759 759 Parameters starting with a lower case letter are advisory and will be
760 760 ignored when unknown. Those starting with an upper case letter are
761 761 mandatory; this function will raise an error when they are unknown.
762 762
763 763 Note: no options are currently supported. Any input will either be
764 764 ignored or fail.
765 765 """
766 766 if not name:
767 767 raise ValueError('empty parameter name')
768 768 if name[0] not in pycompat.bytestr(string.ascii_letters):
769 769 raise ValueError('non letter first character: %r' % name)
770 770 try:
771 771 handler = b2streamparamsmap[name.lower()]
772 772 except KeyError:
773 773 if name[0].islower():
774 774 indebug(self.ui, "ignoring unknown parameter %r" % name)
775 775 else:
776 776 raise error.BundleUnknownFeatureError(params=(name,))
777 777 else:
778 778 handler(self, name, value)
779 779
780 780 def _forwardchunks(self):
781 781 """utility to transfer a bundle2 as binary
782 782
783 783 This is made necessary by the fact that the 'getbundle' command over 'ssh'
784 784 has no way to know where the reply ends, relying on the bundle to be
785 785 interpreted to find its end. This is terrible and we are sorry, but we
786 786 needed to move forward to get general delta enabled.
787 787 """
788 788 yield self._magicstring
789 789 assert 'params' not in vars(self)
790 790 paramssize = self._unpack(_fstreamparamsize)[0]
791 791 if paramssize < 0:
792 792 raise error.BundleValueError('negative bundle param size: %i'
793 793 % paramssize)
794 794 yield _pack(_fstreamparamsize, paramssize)
795 795 if paramssize:
796 796 params = self._readexact(paramssize)
797 797 self._processallparams(params)
798 798 yield params
799 799 assert self._compengine.bundletype == 'UN'
800 800 # From there, payload might need to be decompressed
801 801 self._fp = self._compengine.decompressorreader(self._fp)
802 802 emptycount = 0
803 803 while emptycount < 2:
804 804 # so we can brainlessly loop
805 805 assert _fpartheadersize == _fpayloadsize
806 806 size = self._unpack(_fpartheadersize)[0]
807 807 yield _pack(_fpartheadersize, size)
808 808 if size:
809 809 emptycount = 0
810 810 else:
811 811 emptycount += 1
812 812 continue
813 813 if size == flaginterrupt:
814 814 continue
815 815 elif size < 0:
816 816 raise error.BundleValueError('negative chunk size: %i' % size)
817 817 yield self._readexact(size)
818 818
819 819
820 820 def iterparts(self):
821 821 """yield all parts contained in the stream"""
822 822 # make sure params have been loaded
823 823 self.params
824 824 # From there, payload needs to be decompressed
825 825 self._fp = self._compengine.decompressorreader(self._fp)
826 826 indebug(self.ui, 'start extraction of bundle2 parts')
827 827 headerblock = self._readpartheader()
828 828 while headerblock is not None:
829 829 part = unbundlepart(self.ui, headerblock, self._fp)
830 830 yield part
831 831 # Seek to the end of the part to force its consumption so the next
832 832 # part can be read. But then seek back to the beginning so the
833 833 # code consuming this generator has a part that starts at 0.
834 834 part.seek(0, 2)
835 835 part.seek(0)
836 836 headerblock = self._readpartheader()
837 837 indebug(self.ui, 'end of bundle2 stream')
838 838
839 839 def _readpartheader(self):
840 840 """reads a part header size and return the bytes blob
841 841
842 842 returns None if empty"""
843 843 headersize = self._unpack(_fpartheadersize)[0]
844 844 if headersize < 0:
845 845 raise error.BundleValueError('negative part header size: %i'
846 846 % headersize)
847 847 indebug(self.ui, 'part header size: %i' % headersize)
848 848 if headersize:
849 849 return self._readexact(headersize)
850 850 return None
851 851
852 852 def compressed(self):
853 853 self.params # load params
854 854 return self._compressed
855 855
856 856 def close(self):
857 857 """close underlying file"""
858 858 if util.safehasattr(self._fp, 'close'):
859 859 return self._fp.close()
860 860
861 861 formatmap = {'20': unbundle20}
862 862
863 863 b2streamparamsmap = {}
864 864
865 865 def b2streamparamhandler(name):
866 866 """register a handler for a stream level parameter"""
867 867 def decorator(func):
868 868 assert name not in formatmap
869 869 b2streamparamsmap[name] = func
870 870 return func
871 871 return decorator
872 872
873 873 @b2streamparamhandler('compression')
874 874 def processcompression(unbundler, param, value):
875 875 """read compression parameter and install payload decompression"""
876 876 if value not in util.compengines.supportedbundletypes:
877 877 raise error.BundleUnknownFeatureError(params=(param,),
878 878 values=(value,))
879 879 unbundler._compengine = util.compengines.forbundletype(value)
880 880 if value is not None:
881 881 unbundler._compressed = True
882 882
883 883 class bundlepart(object):
884 884 """A bundle2 part contains application level payload
885 885
886 886 The part `type` is used to route the part to the application level
887 887 handler.
888 888
889 889 The part payload is contained in ``part.data``. It could be raw bytes or a
890 890 generator of byte chunks.
891 891
892 892 You can add parameters to the part using the ``addparam`` method.
893 893 Parameters can be either mandatory (default) or advisory. Remote side
894 894 should be able to safely ignore the advisory ones.
895 895
896 896 Neither data nor parameters can be modified after generation has begun.
897 897 """
898 898
899 899 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
900 900 data='', mandatory=True):
901 901 validateparttype(parttype)
902 902 self.id = None
903 903 self.type = parttype
904 904 self._data = data
905 905 self._mandatoryparams = list(mandatoryparams)
906 906 self._advisoryparams = list(advisoryparams)
907 907 # checking for duplicated entries
908 908 self._seenparams = set()
909 909 for pname, __ in self._mandatoryparams + self._advisoryparams:
910 910 if pname in self._seenparams:
911 911 raise error.ProgrammingError('duplicated params: %s' % pname)
912 912 self._seenparams.add(pname)
913 913 # status of the part's generation:
914 914 # - None: not started,
915 915 # - False: currently generated,
916 916 # - True: generation done.
917 917 self._generated = None
918 918 self.mandatory = mandatory
919 919
920 920 def __repr__(self):
921 921 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
922 922 return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
923 923 % (cls, id(self), self.id, self.type, self.mandatory))
924 924
925 925 def copy(self):
926 926 """return a copy of the part
927 927
928 928 The new part has the very same content but no partid assigned yet.
929 929 Parts with generated data cannot be copied."""
930 930 assert not util.safehasattr(self.data, 'next')
931 931 return self.__class__(self.type, self._mandatoryparams,
932 932 self._advisoryparams, self._data, self.mandatory)
933 933
934 934 # methods used to define the part content
935 935 @property
936 936 def data(self):
937 937 return self._data
938 938
939 939 @data.setter
940 940 def data(self, data):
941 941 if self._generated is not None:
942 942 raise error.ReadOnlyPartError('part is being generated')
943 943 self._data = data
944 944
945 945 @property
946 946 def mandatoryparams(self):
947 947 # make it an immutable tuple to force people through ``addparam``
948 948 return tuple(self._mandatoryparams)
949 949
950 950 @property
951 951 def advisoryparams(self):
952 952 # make it an immutable tuple to force people through ``addparam``
953 953 return tuple(self._advisoryparams)
954 954
955 955 def addparam(self, name, value='', mandatory=True):
956 956 """add a parameter to the part
957 957
958 958 If 'mandatory' is set to True, the remote handler must claim support
959 959 for this parameter or the unbundling will be aborted.
960 960
961 961 The 'name' and 'value' cannot exceed 255 bytes each.
962 962 """
963 963 if self._generated is not None:
964 964 raise error.ReadOnlyPartError('part is being generated')
965 965 if name in self._seenparams:
966 966 raise ValueError('duplicated params: %s' % name)
967 967 self._seenparams.add(name)
968 968 params = self._advisoryparams
969 969 if mandatory:
970 970 params = self._mandatoryparams
971 971 params.append((name, value))
972 972
973 973 # methods used to generate the bundle2 stream
974 974 def getchunks(self, ui):
975 975 if self._generated is not None:
976 976 raise error.ProgrammingError('part can only be consumed once')
977 977 self._generated = False
978 978
979 979 if ui.debugflag:
980 980 msg = ['bundle2-output-part: "%s"' % self.type]
981 981 if not self.mandatory:
982 982 msg.append(' (advisory)')
983 983 nbmp = len(self.mandatoryparams)
984 984 nbap = len(self.advisoryparams)
985 985 if nbmp or nbap:
986 986 msg.append(' (params:')
987 987 if nbmp:
988 988 msg.append(' %i mandatory' % nbmp)
989 989 if nbap:
990 990 msg.append(' %i advisory' % nbap)
991 991 msg.append(')')
992 992 if not self.data:
993 993 msg.append(' empty payload')
994 994 elif util.safehasattr(self.data, 'next'):
995 995 msg.append(' streamed payload')
996 996 else:
997 997 msg.append(' %i bytes payload' % len(self.data))
998 998 msg.append('\n')
999 999 ui.debug(''.join(msg))
1000 1000
1001 1001 #### header
1002 1002 if self.mandatory:
1003 1003 parttype = self.type.upper()
1004 1004 else:
1005 1005 parttype = self.type.lower()
1006 1006 outdebug(ui, 'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1007 1007 ## parttype
1008 1008 header = [_pack(_fparttypesize, len(parttype)),
1009 1009 parttype, _pack(_fpartid, self.id),
1010 1010 ]
1011 1011 ## parameters
1012 1012 # count
1013 1013 manpar = self.mandatoryparams
1014 1014 advpar = self.advisoryparams
1015 1015 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1016 1016 # size
1017 1017 parsizes = []
1018 1018 for key, value in manpar:
1019 1019 parsizes.append(len(key))
1020 1020 parsizes.append(len(value))
1021 1021 for key, value in advpar:
1022 1022 parsizes.append(len(key))
1023 1023 parsizes.append(len(value))
1024 1024 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1025 1025 header.append(paramsizes)
1026 1026 # key, value
1027 1027 for key, value in manpar:
1028 1028 header.append(key)
1029 1029 header.append(value)
1030 1030 for key, value in advpar:
1031 1031 header.append(key)
1032 1032 header.append(value)
1033 1033 ## finalize header
1034 1034 headerchunk = ''.join(header)
1035 1035 outdebug(ui, 'header chunk size: %i' % len(headerchunk))
1036 1036 yield _pack(_fpartheadersize, len(headerchunk))
1037 1037 yield headerchunk
1038 1038 ## payload
1039 1039 try:
1040 1040 for chunk in self._payloadchunks():
1041 1041 outdebug(ui, 'payload chunk size: %i' % len(chunk))
1042 1042 yield _pack(_fpayloadsize, len(chunk))
1043 1043 yield chunk
1044 1044 except GeneratorExit:
1045 1045 # GeneratorExit means that nobody is listening for our
1046 1046 # results anyway, so just bail quickly rather than trying
1047 1047 # to produce an error part.
1048 1048 ui.debug('bundle2-generatorexit\n')
1049 1049 raise
1050 1050 except BaseException as exc:
1051 1051 bexc = util.forcebytestr(exc)
1052 1052 # backup exception data for later
1053 1053 ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
1054 1054 % bexc)
1055 1055 tb = sys.exc_info()[2]
1056 1056 msg = 'unexpected error: %s' % bexc
1057 1057 interpart = bundlepart('error:abort', [('message', msg)],
1058 1058 mandatory=False)
1059 1059 interpart.id = 0
1060 1060 yield _pack(_fpayloadsize, -1)
1061 1061 for chunk in interpart.getchunks(ui=ui):
1062 1062 yield chunk
1063 1063 outdebug(ui, 'closing payload chunk')
1064 1064 # abort current part payload
1065 1065 yield _pack(_fpayloadsize, 0)
1066 1066 pycompat.raisewithtb(exc, tb)
1067 1067 # end of payload
1068 1068 outdebug(ui, 'closing payload chunk')
1069 1069 yield _pack(_fpayloadsize, 0)
1070 1070 self._generated = True
1071 1071
1072 1072 def _payloadchunks(self):
1073 1073 """yield chunks of a the part payload
1074 1074
1075 1075 Exists to handle the different methods to provide data to a part."""
1076 1076 # we only support fixed size data now.
1077 1077 # This will be improved in the future.
1078 1078 if (util.safehasattr(self.data, 'next')
1079 1079 or util.safehasattr(self.data, '__next__')):
1080 1080 buff = util.chunkbuffer(self.data)
1081 1081 chunk = buff.read(preferedchunksize)
1082 1082 while chunk:
1083 1083 yield chunk
1084 1084 chunk = buff.read(preferedchunksize)
1085 1085 elif len(self.data):
1086 1086 yield self.data
1087 1087
1088 1088
1089 1089 flaginterrupt = -1
1090 1090
1091 1091 class interrupthandler(unpackermixin):
1092 1092 """read one part and process it with restricted capability
1093 1093
1094 1094 This allows transmitting exceptions raised on the producer side during part
1095 1095 iteration while the consumer is reading a part.
1096 1096
1097 1097 Parts processed in this manner only have access to a ui object.
1098 1098
1099 1099 def __init__(self, ui, fp):
1100 1100 super(interrupthandler, self).__init__(fp)
1101 1101 self.ui = ui
1102 1102
1103 1103 def _readpartheader(self):
1104 1104 """reads a part header size and return the bytes blob
1105 1105
1106 1106 returns None if empty"""
1107 1107 headersize = self._unpack(_fpartheadersize)[0]
1108 1108 if headersize < 0:
1109 1109 raise error.BundleValueError('negative part header size: %i'
1110 1110 % headersize)
1111 1111 indebug(self.ui, 'part header size: %i\n' % headersize)
1112 1112 if headersize:
1113 1113 return self._readexact(headersize)
1114 1114 return None
1115 1115
1116 1116 def __call__(self):
1117 1117
1118 1118 self.ui.debug('bundle2-input-stream-interrupt:'
1119 1119 ' opening out of band context\n')
1120 1120 indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
1121 1121 headerblock = self._readpartheader()
1122 1122 if headerblock is None:
1123 1123 indebug(self.ui, 'no part found during interruption.')
1124 1124 return
1125 1125 part = unbundlepart(self.ui, headerblock, self._fp)
1126 1126 op = interruptoperation(self.ui)
1127 1127 _processpart(op, part)
1128 1128 self.ui.debug('bundle2-input-stream-interrupt:'
1129 1129 ' closing out of band context\n')
1130 1130
1131 1131 class interruptoperation(object):
1132 1132 """A limited operation to be use by part handler during interruption
1133 1133
1134 1134 It only have access to an ui object.
1135 1135 """
1136 1136
1137 1137 def __init__(self, ui):
1138 1138 self.ui = ui
1139 1139 self.reply = None
1140 1140 self.captureoutput = False
1141 1141
1142 1142 @property
1143 1143 def repo(self):
1144 1144 raise error.ProgrammingError('no repo access from stream interruption')
1145 1145
1146 1146 def gettransaction(self):
1147 1147 raise TransactionUnavailable('no repo access from stream interruption')
1148 1148
1149 1149 class unbundlepart(unpackermixin):
1150 1150 """a bundle part read from a bundle"""
1151 1151
1152 1152 def __init__(self, ui, header, fp):
1153 1153 super(unbundlepart, self).__init__(fp)
1154 1154 self._seekable = (util.safehasattr(fp, 'seek') and
1155 1155 util.safehasattr(fp, 'tell'))
1156 1156 self.ui = ui
1157 1157 # unbundle state attr
1158 1158 self._headerdata = header
1159 1159 self._headeroffset = 0
1160 1160 self._initialized = False
1161 1161 self.consumed = False
1162 1162 # part data
1163 1163 self.id = None
1164 1164 self.type = None
1165 1165 self.mandatoryparams = None
1166 1166 self.advisoryparams = None
1167 1167 self.params = None
1168 1168 self.mandatorykeys = ()
1169 1169 self._payloadstream = None
1170 1170 self._readheader()
1171 1171 self._mandatory = None
1172 1172 self._chunkindex = [] #(payload, file) position tuples for chunk starts
1173 1173 self._pos = 0
1174 1174
1175 1175 def _fromheader(self, size):
1176 1176 """return the next <size> byte from the header"""
1177 1177 offset = self._headeroffset
1178 1178 data = self._headerdata[offset:(offset + size)]
1179 1179 self._headeroffset = offset + size
1180 1180 return data
1181 1181
1182 1182 def _unpackheader(self, format):
1183 1183 """read given format from header
1184 1184
1185 1185 This automatically computes the size of the format to read.
1186 1186 data = self._fromheader(struct.calcsize(format))
1187 1187 return _unpack(format, data)
1188 1188
1189 1189 def _initparams(self, mandatoryparams, advisoryparams):
1190 1190 """internal function to setup all logic related parameters"""
1191 1191 # make it read only to prevent people touching it by mistake.
1192 1192 self.mandatoryparams = tuple(mandatoryparams)
1193 1193 self.advisoryparams = tuple(advisoryparams)
1194 1194 # user friendly UI
1195 1195 self.params = util.sortdict(self.mandatoryparams)
1196 1196 self.params.update(self.advisoryparams)
1197 1197 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1198 1198
1199 1199 def _payloadchunks(self, chunknum=0):
1200 1200 '''seek to specified chunk and start yielding data'''
1201 1201 if len(self._chunkindex) == 0:
1202 1202 assert chunknum == 0, 'Must start with chunk 0'
1203 1203 self._chunkindex.append((0, self._tellfp()))
1204 1204 else:
1205 1205 assert chunknum < len(self._chunkindex), \
1206 1206 'Unknown chunk %d' % chunknum
1207 1207 self._seekfp(self._chunkindex[chunknum][1])
1208 1208
1209 1209 pos = self._chunkindex[chunknum][0]
1210 1210 payloadsize = self._unpack(_fpayloadsize)[0]
1211 1211 indebug(self.ui, 'payload chunk size: %i' % payloadsize)
1212 1212 while payloadsize:
1213 1213 if payloadsize == flaginterrupt:
1214 1214 # interruption detection, the handler will now read a
1215 1215 # single part and process it.
1216 1216 interrupthandler(self.ui, self._fp)()
1217 1217 elif payloadsize < 0:
1218 1218 msg = 'negative payload chunk size: %i' % payloadsize
1219 1219 raise error.BundleValueError(msg)
1220 1220 else:
1221 1221 result = self._readexact(payloadsize)
1222 1222 chunknum += 1
1223 1223 pos += payloadsize
1224 1224 if chunknum == len(self._chunkindex):
1225 1225 self._chunkindex.append((pos, self._tellfp()))
1226 1226 yield result
1227 1227 payloadsize = self._unpack(_fpayloadsize)[0]
1228 1228 indebug(self.ui, 'payload chunk size: %i' % payloadsize)
1229 1229
1230 1230 def _findchunk(self, pos):
1231 1231 '''for a given payload position, return a chunk number and offset'''
1232 1232 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1233 1233 if ppos == pos:
1234 1234 return chunk, 0
1235 1235 elif ppos > pos:
1236 1236 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1237 1237 raise ValueError('Unknown chunk')
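
# A worked example: with self._chunkindex == [(0, f0), (4096, f1),
# (8192, f2)] (payload offset, file offset) pairs, payload position
# 5000 falls inside the second chunk:
#
#   self._findchunk(5000)  # -> (1, 904), i.e. chunk 1, 904 bytes in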
1238 1238
1239 1239 def _readheader(self):
1240 1240 """read the header and setup the object"""
1241 1241 typesize = self._unpackheader(_fparttypesize)[0]
1242 1242 self.type = self._fromheader(typesize)
1243 1243 indebug(self.ui, 'part type: "%s"' % self.type)
1244 1244 self.id = self._unpackheader(_fpartid)[0]
1245 1245 indebug(self.ui, 'part id: "%s"' % pycompat.bytestr(self.id))
1246 1246 # extract mandatory bit from type
1247 1247 self.mandatory = (self.type != self.type.lower())
1248 1248 self.type = self.type.lower()
1249 1249 ## reading parameters
1250 1250 # param count
1251 1251 mancount, advcount = self._unpackheader(_fpartparamcount)
1252 1252 indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
1253 1253 # param size
1254 1254 fparamsizes = _makefpartparamsizes(mancount + advcount)
1255 1255 paramsizes = self._unpackheader(fparamsizes)
1256 1256 # make it a list of pairs again
1257 1257 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1258 1258 # split mandatory from advisory
1259 1259 mansizes = paramsizes[:mancount]
1260 1260 advsizes = paramsizes[mancount:]
1261 1261 # retrieve param value
1262 1262 manparams = []
1263 1263 for key, value in mansizes:
1264 1264 manparams.append((self._fromheader(key), self._fromheader(value)))
1265 1265 advparams = []
1266 1266 for key, value in advsizes:
1267 1267 advparams.append((self._fromheader(key), self._fromheader(value)))
1268 1268 self._initparams(manparams, advparams)
1269 1269 ## part payload
1270 1270 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1271 1271 # we have read the header data, record it
1272 1272 self._initialized = True
1273 1273
1274 1274 def read(self, size=None):
1275 1275 """read payload data"""
1276 1276 if not self._initialized:
1277 1277 self._readheader()
1278 1278 if size is None:
1279 1279 data = self._payloadstream.read()
1280 1280 else:
1281 1281 data = self._payloadstream.read(size)
1282 1282 self._pos += len(data)
1283 1283 if size is None or len(data) < size:
1284 1284 if not self.consumed and self._pos:
1285 1285 self.ui.debug('bundle2-input-part: total payload size %i\n'
1286 1286 % self._pos)
1287 1287 self.consumed = True
1288 1288 return data
1289 1289
1290 1290 def tell(self):
1291 1291 return self._pos
1292 1292
1293 1293 def seek(self, offset, whence=0):
1294 1294 if whence == 0:
1295 1295 newpos = offset
1296 1296 elif whence == 1:
1297 1297 newpos = self._pos + offset
1298 1298 elif whence == 2:
1299 1299 if not self.consumed:
1300 1300 self.read()
1301 1301 newpos = self._chunkindex[-1][0] + offset
1302 1302 else:
1303 1303 raise ValueError('Unknown whence value: %r' % (whence,))
1304 1304
1305 1305 if newpos > self._chunkindex[-1][0] and not self.consumed:
1306 1306 self.read()
1307 1307 if not 0 <= newpos <= self._chunkindex[-1][0]:
1308 1308 raise ValueError('Offset out of range')
1309 1309
1310 1310 if self._pos != newpos:
1311 1311 chunk, internaloffset = self._findchunk(newpos)
1312 1312 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1313 1313 adjust = self.read(internaloffset)
1314 1314 if len(adjust) != internaloffset:
1315 1315 raise error.Abort(_('Seek failed\n'))
1316 1316 self._pos = newpos
1317 1317
1318 1318 def _seekfp(self, offset, whence=0):
1319 1319 """move the underlying file pointer
1320 1320
1321 1321 This method is meant for internal usage by the bundle2 protocol only.
1322 1322 It directly manipulates the low level stream, including bundle2 level
1323 1323 instructions.
1324 1324
1325 1325 Do not use it to implement higher-level logic or methods."""
1326 1326 if self._seekable:
1327 1327 return self._fp.seek(offset, whence)
1328 1328 else:
1329 1329 raise NotImplementedError(_('File pointer is not seekable'))
1330 1330
1331 1331 def _tellfp(self):
1332 1332 """return the file offset, or None if file is not seekable
1333 1333
1334 1334 This method is meant for internal usage by the bundle2 protocol only.
1335 1335 It directly manipulates the low level stream, including bundle2 level
1336 1336 instructions.
1337 1337
1338 1338 Do not use it to implement higher-level logic or methods."""
1339 1339 if self._seekable:
1340 1340 try:
1341 1341 return self._fp.tell()
1342 1342 except IOError as e:
1343 1343 if e.errno == errno.ESPIPE:
1344 1344 self._seekable = False
1345 1345 else:
1346 1346 raise
1347 1347 return None
1348 1348
1349 1349 # These are only the static capabilities.
1350 1350 # Check the 'getrepocaps' function for the rest.
1351 1351 capabilities = {'HG20': (),
1352 1352 'error': ('abort', 'unsupportedcontent', 'pushraced',
1353 1353 'pushkey'),
1354 1354 'listkeys': (),
1355 1355 'pushkey': (),
1356 1356 'digests': tuple(sorted(util.DIGESTS.keys())),
1357 1357 'remote-changegroup': ('http', 'https'),
1358 1358 'hgtagsfnodes': (),
1359 1359 }
1360 1360
1361 1361 def getrepocaps(repo, allowpushback=False):
1362 1362 """return the bundle2 capabilities for a given repo
1363 1363
1364 1364 Exists to allow extensions (like evolution) to mutate the capabilities.
1365 1365 """
1366 1366 caps = capabilities.copy()
1367 1367 caps['changegroup'] = tuple(sorted(
1368 1368 changegroup.supportedincomingversions(repo)))
1369 1369 if obsolete.isenabled(repo, obsolete.exchangeopt):
1370 1370 supportedformat = tuple('V%i' % v for v in obsolete.formats)
1371 1371 caps['obsmarkers'] = supportedformat
1372 1372 if allowpushback:
1373 1373 caps['pushback'] = ()
1374 1374 cpmode = repo.ui.config('server', 'concurrent-push-mode')
1375 1375 if cpmode == 'check-related':
1376 1376 caps['checkheads'] = ('related',)
1377 1377 return caps
1378 1378
1379 1379 def bundle2caps(remote):
1380 1380 """return the bundle capabilities of a peer as dict"""
1381 1381 raw = remote.capable('bundle2')
1382 1382 if not raw and raw != '':
1383 1383 return {}
1384 1384 capsblob = urlreq.unquote(remote.capable('bundle2'))
1385 1385 return decodecaps(capsblob)
1386 1386
1387 1387 def obsmarkersversion(caps):
1388 1388 """extract the list of supported obsmarkers versions from a bundle2caps dict
1389 1389 """
1390 1390 obscaps = caps.get('obsmarkers', ())
1391 1391 return [int(c[1:]) for c in obscaps if c.startswith('V')]
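
# A small sketch: versions are advertised as 'V<n>' strings, so a caps
# dict of {'obsmarkers': ('V0', 'V1')} decodes to [0, 1].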
1392 1392
1393 1393 def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
1394 1394 vfs=None, compression=None, compopts=None):
1395 1395 if bundletype.startswith('HG10'):
1396 cg = changegroup.getchangegroup(repo, source, outgoing, version='01')
1396 cg = changegroup.makechangegroup(repo, outgoing, '01', source)
1397 1397 return writebundle(ui, cg, filename, bundletype, vfs=vfs,
1398 1398 compression=compression, compopts=compopts)
1399 1399 elif not bundletype.startswith('HG20'):
1400 1400 raise error.ProgrammingError('unknown bundle type: %s' % bundletype)
1401 1401
1402 1402 caps = {}
1403 1403 if 'obsolescence' in opts:
1404 1404 caps['obsmarkers'] = ('V1',)
1405 1405 bundle = bundle20(ui, caps)
1406 1406 bundle.setcompression(compression, compopts)
1407 1407 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1408 1408 chunkiter = bundle.getchunks()
1409 1409
1410 1410 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1411 1411
1412 1412 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1413 1413 # We should eventually reconcile this logic with the one behind
1414 1414 # 'exchange.getbundle2partsgenerator'.
1415 1415 #
1416 1416 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1417 1417 # different right now. So we keep them separated for now for the sake of
1418 1418 # simplicity.
1419 1419
1420 1420 # we always want a changegroup in such bundle
1421 1421 cgversion = opts.get('cg.version')
1422 1422 if cgversion is None:
1423 1423 cgversion = changegroup.safeversion(repo)
1424 cg = changegroup.getchangegroup(repo, source, outgoing,
1425 version=cgversion)
1424 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1426 1425 part = bundler.newpart('changegroup', data=cg.getchunks())
1427 1426 part.addparam('version', cg.version)
1428 1427 if 'clcount' in cg.extras:
1429 1428 part.addparam('nbchanges', str(cg.extras['clcount']),
1430 1429 mandatory=False)
1431 1430 if opts.get('phases') and repo.revs('%ln and secret()',
1432 1431 outgoing.missingheads):
1433 1432 part.addparam('targetphase', '%d' % phases.secret, mandatory=False)
1434 1433
1435 1434 addparttagsfnodescache(repo, bundler, outgoing)
1436 1435
1437 1436 if opts.get('obsolescence', False):
1438 1437 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1439 1438 buildobsmarkerspart(bundler, obsmarkers)
1440 1439
1441 1440 if opts.get('phases', False):
1442 1441 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1443 1442 phasedata = []
1444 1443 for phase in phases.allphases:
1445 1444 for head in headsbyphase[phase]:
1446 1445 phasedata.append(_pack(_fphasesentry, phase, head))
1447 1446 bundler.newpart('phase-heads', data=''.join(phasedata))
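
# Each phase-heads entry packed above is 24 bytes per _fphasesentry
# ('>i20s'): a big-endian int32 phase number followed by a 20-byte node.
# For instance, with a hypothetical node:
#
#   _pack(_fphasesentry, phases.draft, '\x11' * 20)  # 4 + 20 bytes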
1448 1447
1449 1448 def addparttagsfnodescache(repo, bundler, outgoing):
1450 1449 # we include the tags fnode cache for the bundle changeset
1451 1450 # (as an optional part)
1452 1451 cache = tags.hgtagsfnodescache(repo.unfiltered())
1453 1452 chunks = []
1454 1453
1455 1454 # .hgtags fnodes are only relevant for head changesets. While we could
1456 1455 # transfer values for all known nodes, there will likely be little to
1457 1456 # no benefit.
1458 1457 #
1459 1458 # We don't bother using a generator to produce output data because
1460 1459 # a) we only have 40 bytes per head and even esoteric numbers of heads
1461 1460 # consume little memory (1M heads is 40MB) b) we don't want to send the
1462 1461 # part if we don't have entries and knowing if we have entries requires
1463 1462 # cache lookups.
1464 1463 for node in outgoing.missingheads:
1465 1464 # Don't compute missing, as this may slow down serving.
1466 1465 fnode = cache.getfnode(node, computemissing=False)
1467 1466 if fnode is not None:
1468 1467 chunks.extend([node, fnode])
1469 1468
1470 1469 if chunks:
1471 1470 bundler.newpart('hgtagsfnodes', data=''.join(chunks))
1472 1471
1473 1472 def buildobsmarkerspart(bundler, markers):
1474 1473 """add an obsmarker part to the bundler with <markers>
1475 1474
1476 1475 No part is created if markers is empty.
1477 1476 Raises ValueError if the bundler doesn't support any known obsmarker format.
1478 1477 """
1479 1478 if not markers:
1480 1479 return None
1481 1480
1482 1481 remoteversions = obsmarkersversion(bundler.capabilities)
1483 1482 version = obsolete.commonversion(remoteversions)
1484 1483 if version is None:
1485 1484 raise ValueError('bundler does not support common obsmarker format')
1486 1485 stream = obsolete.encodemarkers(markers, True, version=version)
1487 1486 return bundler.newpart('obsmarkers', data=stream)
1488 1487
1489 1488 def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
1490 1489 compopts=None):
1491 1490 """Write a bundle file and return its filename.
1492 1491
1493 1492 Existing files will not be overwritten.
1494 1493 If no filename is specified, a temporary file is created.
1495 1494 bz2 compression can be turned off.
1496 1495 The bundle file will be deleted in case of errors.
1497 1496 """
1498 1497
1499 1498 if bundletype == "HG20":
1500 1499 bundle = bundle20(ui)
1501 1500 bundle.setcompression(compression, compopts)
1502 1501 part = bundle.newpart('changegroup', data=cg.getchunks())
1503 1502 part.addparam('version', cg.version)
1504 1503 if 'clcount' in cg.extras:
1505 1504 part.addparam('nbchanges', str(cg.extras['clcount']),
1506 1505 mandatory=False)
1507 1506 chunkiter = bundle.getchunks()
1508 1507 else:
1509 1508 # compression argument is only for the bundle2 case
1510 1509 assert compression is None
1511 1510 if cg.version != '01':
1512 1511 raise error.Abort(_('old bundle types only support v1 '
1513 1512 'changegroups'))
1514 1513 header, comp = bundletypes[bundletype]
1515 1514 if comp not in util.compengines.supportedbundletypes:
1516 1515 raise error.Abort(_('unknown stream compression type: %s')
1517 1516 % comp)
1518 1517 compengine = util.compengines.forbundletype(comp)
1519 1518 def chunkiter():
1520 1519 yield header
1521 1520 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1522 1521 yield chunk
1523 1522 chunkiter = chunkiter()
1524 1523
1525 1524 # parse the changegroup data; otherwise we will block
1526 1525 # in the sshrepo case because we don't know the end of the stream
1527 1526 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1528 1527
1529 1528 def combinechangegroupresults(op):
1530 1529 """logic to combine 0 or more addchangegroup results into one"""
1531 1530 results = [r.get('return', 0)
1532 1531 for r in op.records['changegroup']]
1533 1532 changedheads = 0
1534 1533 result = 1
1535 1534 for ret in results:
1536 1535 # If any changegroup result is 0, return 0
1537 1536 if ret == 0:
1538 1537 result = 0
1539 1538 break
1540 1539 if ret < -1:
1541 1540 changedheads += ret + 1
1542 1541 elif ret > 1:
1543 1542 changedheads += ret - 1
1544 1543 if changedheads > 0:
1545 1544 result = 1 + changedheads
1546 1545 elif changedheads < 0:
1547 1546 result = -1 + changedheads
1548 1547 return result
1549 1548
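# --- Illustrative sketch (editor's addition, not part of the diff) ---
# The encoding combined above: each addchangegroup result is 0 for "no
# change", 1+n for n heads added, and -1-n for n heads removed. E.g.
# per-part returns [3, 2] give changedheads = (3-1) + (2-1) = 3, so the
# combined result is 1 + 3 = 4 (three heads added overall); any single
# 0 short-circuits the combined result to 0.
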
1550 1549 @parthandler('changegroup', ('version', 'nbchanges', 'treemanifest',
1551 1550 'targetphase'))
1552 1551 def handlechangegroup(op, inpart):
1553 1552 """apply a changegroup part on the repo
1554 1553
1555 1554 This is a very early implementation that will see massive rework before
1556 1555 being inflicted on any end-user.
1557 1556 """
1558 1557 tr = op.gettransaction()
1559 1558 unpackerversion = inpart.params.get('version', '01')
1560 1559 # We should raise an appropriate exception here
1561 1560 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1562 1561 # the source and url passed here are overwritten by the ones contained in
1563 1562 # the transaction.hookargs argument. So 'bundle2' is a placeholder
1564 1563 nbchangesets = None
1565 1564 if 'nbchanges' in inpart.params:
1566 1565 nbchangesets = int(inpart.params.get('nbchanges'))
1567 1566 if ('treemanifest' in inpart.params and
1568 1567 'treemanifest' not in op.repo.requirements):
1569 1568 if len(op.repo.changelog) != 0:
1570 1569 raise error.Abort(_(
1571 1570 "bundle contains tree manifests, but local repo is "
1572 1571 "non-empty and does not use tree manifests"))
1573 1572 op.repo.requirements.add('treemanifest')
1574 1573 op.repo._applyopenerreqs()
1575 1574 op.repo._writerequirements()
1576 1575 extrakwargs = {}
1577 1576 targetphase = inpart.params.get('targetphase')
1578 1577 if targetphase is not None:
1579 1578 extrakwargs['targetphase'] = int(targetphase)
1580 1579 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2',
1581 1580 expectedtotal=nbchangesets, **extrakwargs)
1582 1581 if op.reply is not None:
1583 1582 # This is definitely not the final form of this
1584 1583 # return. But one needs to start somewhere.
1585 1584 part = op.reply.newpart('reply:changegroup', mandatory=False)
1586 1585 part.addparam(
1587 1586 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1588 1587 part.addparam('return', '%i' % ret, mandatory=False)
1589 1588 assert not inpart.read()
1590 1589
1591 1590 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1592 1591 ['digest:%s' % k for k in util.DIGESTS.keys()])
1593 1592 @parthandler('remote-changegroup', _remotechangegroupparams)
1594 1593 def handleremotechangegroup(op, inpart):
1595 1594 """apply a bundle10 on the repo, given an url and validation information
1596 1595
1597 1596 All the information about the remote bundle to import is given as
1598 1597 parameters. The parameters include:
1599 1598 - url: the url to the bundle10.
1600 1599 - size: the bundle10 file size. It is used to validate what was
1601 1600 retrieved by the client matches the server knowledge about the bundle.
1602 1601 - digests: a space separated list of the digest types provided as
1603 1602 parameters.
1604 1603 - digest:<digest-type>: the hexadecimal representation of the digest with
1605 1604 that name. Like the size, it is used to validate what was retrieved by
1606 1605 the client matches what the server knows about the bundle.
1607 1606
1608 1607 When multiple digest types are given, all of them are checked.
1609 1608 """
1610 1609 try:
1611 1610 raw_url = inpart.params['url']
1612 1611 except KeyError:
1613 1612 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1614 1613 parsed_url = util.url(raw_url)
1615 1614 if parsed_url.scheme not in capabilities['remote-changegroup']:
1616 1615 raise error.Abort(_('remote-changegroup does not support %s urls') %
1617 1616 parsed_url.scheme)
1618 1617
1619 1618 try:
1620 1619 size = int(inpart.params['size'])
1621 1620 except ValueError:
1622 1621 raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
1623 1622 % 'size')
1624 1623 except KeyError:
1625 1624 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1626 1625
1627 1626 digests = {}
1628 1627 for typ in inpart.params.get('digests', '').split():
1629 1628 param = 'digest:%s' % typ
1630 1629 try:
1631 1630 value = inpart.params[param]
1632 1631 except KeyError:
1633 1632 raise error.Abort(_('remote-changegroup: missing "%s" param') %
1634 1633 param)
1635 1634 digests[typ] = value
1636 1635
1637 1636 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1638 1637
1639 1638 tr = op.gettransaction()
1640 1639 from . import exchange
1641 1640 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1642 1641 if not isinstance(cg, changegroup.cg1unpacker):
1643 1642 raise error.Abort(_('%s: not a bundle version 1.0') %
1644 1643 util.hidepassword(raw_url))
1645 1644 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2')
1646 1645 if op.reply is not None:
1647 1646 # This is definitely not the final form of this
1648 1647 # return. But one needs to start somewhere.
1649 1648 part = op.reply.newpart('reply:changegroup')
1650 1649 part.addparam(
1651 1650 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1652 1651 part.addparam('return', '%i' % ret, mandatory=False)
1653 1652 try:
1654 1653 real_part.validate()
1655 1654 except error.Abort as e:
1656 1655 raise error.Abort(_('bundle at %s is corrupted:\n%s') %
1657 1656 (util.hidepassword(raw_url), str(e)))
1658 1657 assert not inpart.read()
1659 1658
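# --- Illustrative sketch (editor's addition, not part of the diff) ---
# What a producer of a 'remote-changegroup' part might emit so that the
# handler above can fetch and validate the bundle. 'bundler' is assumed
# to be a bundle20 instance as defined earlier in this file, and the
# helper name is hypothetical.
import hashlib

def _addremotechangegrouppart(bundler, rawurl, bundledata):
    part = bundler.newpart('remote-changegroup')
    part.addparam('url', rawurl)
    part.addparam('size', '%d' % len(bundledata))
    part.addparam('digests', 'sha1')
    part.addparam('digest:sha1', hashlib.sha1(bundledata).hexdigest())
    return part
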
1660 1659 @parthandler('reply:changegroup', ('return', 'in-reply-to'))
1661 1660 def handlereplychangegroup(op, inpart):
1662 1661 ret = int(inpart.params['return'])
1663 1662 replyto = int(inpart.params['in-reply-to'])
1664 1663 op.records.add('changegroup', {'return': ret}, replyto)
1665 1664
1666 1665 @parthandler('check:heads')
1667 1666 def handlecheckheads(op, inpart):
1668 1667 """check that head of the repo did not change
1669 1668
1670 1669 This is used to detect a push race when using unbundle.
1671 1670 This replaces the "heads" argument of unbundle."""
1672 1671 h = inpart.read(20)
1673 1672 heads = []
1674 1673 while len(h) == 20:
1675 1674 heads.append(h)
1676 1675 h = inpart.read(20)
1677 1676 assert not h
1678 1677 # Trigger a transaction so that we are guaranteed to have the lock now.
1679 1678 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1680 1679 op.gettransaction()
1681 1680 if sorted(heads) != sorted(op.repo.heads()):
1682 1681 raise error.PushRaced('repository changed while pushing - '
1683 1682 'please try again')
1684 1683
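# --- Illustrative sketch (editor's addition, not part of the diff) ---
# The 'check:heads' payload is simply the expected heads concatenated as
# raw 20-byte binary nodes, which is what the fixed-size reads above
# walk through; the helper name is hypothetical:
def _checkheadspayload(heads):
    # heads: iterable of 20-byte binary node ids
    return ''.join(heads)
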
1685 1684 @parthandler('check:updated-heads')
1686 1685 def handlecheckupdatedheads(op, inpart):
1687 1686 """check for race on the heads touched by a push
1688 1687
1689 1688 This is similar to 'check:heads' but focuses on the heads actually updated
1690 1689 during the push. If other activities happen on unrelated heads, they are
1691 1690 ignored.
1692 1691
1693 1692 This allows servers with high traffic to avoid push contention as long as
1694 1693 only unrelated parts of the graph are involved.
1695 1694 h = inpart.read(20)
1696 1695 heads = []
1697 1696 while len(h) == 20:
1698 1697 heads.append(h)
1699 1698 h = inpart.read(20)
1700 1699 assert not h
1701 1700 # trigger a transaction so that we are guaranteed to have the lock now.
1702 1701 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1703 1702 op.gettransaction()
1704 1703
1705 1704 currentheads = set()
1706 1705 for ls in op.repo.branchmap().itervalues():
1707 1706 currentheads.update(ls)
1708 1707
1709 1708 for h in heads:
1710 1709 if h not in currentheads:
1711 1710 raise error.PushRaced('repository changed while pushing - '
1712 1711 'please try again')
1713 1712
1714 1713 @parthandler('output')
1715 1714 def handleoutput(op, inpart):
1716 1715 """forward output captured on the server to the client"""
1717 1716 for line in inpart.read().splitlines():
1718 1717 op.ui.status(_('remote: %s\n') % line)
1719 1718
1720 1719 @parthandler('replycaps')
1721 1720 def handlereplycaps(op, inpart):
1722 1721 """Notify that a reply bundle should be created
1723 1722
1724 1723 The payload contains the capabilities information for the reply"""
1725 1724 caps = decodecaps(inpart.read())
1726 1725 if op.reply is None:
1727 1726 op.reply = bundle20(op.ui, caps)
1728 1727
1729 1728 class AbortFromPart(error.Abort):
1730 1729 """Sub-class of Abort that denotes an error from a bundle2 part."""
1731 1730
1732 1731 @parthandler('error:abort', ('message', 'hint'))
1733 1732 def handleerrorabort(op, inpart):
1734 1733 """Used to transmit abort error over the wire"""
1735 1734 raise AbortFromPart(inpart.params['message'],
1736 1735 hint=inpart.params.get('hint'))
1737 1736
1738 1737 @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
1739 1738 'in-reply-to'))
1740 1739 def handleerrorpushkey(op, inpart):
1741 1740 """Used to transmit failure of a mandatory pushkey over the wire"""
1742 1741 kwargs = {}
1743 1742 for name in ('namespace', 'key', 'new', 'old', 'ret'):
1744 1743 value = inpart.params.get(name)
1745 1744 if value is not None:
1746 1745 kwargs[name] = value
1747 1746 raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)
1748 1747
1749 1748 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
1750 1749 def handleerrorunsupportedcontent(op, inpart):
1751 1750 """Used to transmit unknown content error over the wire"""
1752 1751 kwargs = {}
1753 1752 parttype = inpart.params.get('parttype')
1754 1753 if parttype is not None:
1755 1754 kwargs['parttype'] = parttype
1756 1755 params = inpart.params.get('params')
1757 1756 if params is not None:
1758 1757 kwargs['params'] = params.split('\0')
1759 1758
1760 1759 raise error.BundleUnknownFeatureError(**kwargs)
1761 1760
1762 1761 @parthandler('error:pushraced', ('message',))
1763 1762 def handleerrorpushraced(op, inpart):
1764 1763 """Used to transmit push race error over the wire"""
1765 1764 raise error.ResponseError(_('push failed:'), inpart.params['message'])
1766 1765
1767 1766 @parthandler('listkeys', ('namespace',))
1768 1767 def handlelistkeys(op, inpart):
1769 1768 """retrieve pushkey namespace content stored in a bundle2"""
1770 1769 namespace = inpart.params['namespace']
1771 1770 r = pushkey.decodekeys(inpart.read())
1772 1771 op.records.add('listkeys', (namespace, r))
1773 1772
1774 1773 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
1775 1774 def handlepushkey(op, inpart):
1776 1775 """process a pushkey request"""
1777 1776 dec = pushkey.decode
1778 1777 namespace = dec(inpart.params['namespace'])
1779 1778 key = dec(inpart.params['key'])
1780 1779 old = dec(inpart.params['old'])
1781 1780 new = dec(inpart.params['new'])
1782 1781 # Grab the transaction to ensure that we have the lock before performing the
1783 1782 # pushkey.
1784 1783 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1785 1784 op.gettransaction()
1786 1785 ret = op.repo.pushkey(namespace, key, old, new)
1787 1786 record = {'namespace': namespace,
1788 1787 'key': key,
1789 1788 'old': old,
1790 1789 'new': new}
1791 1790 op.records.add('pushkey', record)
1792 1791 if op.reply is not None:
1793 1792 rpart = op.reply.newpart('reply:pushkey')
1794 1793 rpart.addparam(
1795 1794 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1796 1795 rpart.addparam('return', '%i' % ret, mandatory=False)
1797 1796 if inpart.mandatory and not ret:
1798 1797 kwargs = {}
1799 1798 for key in ('namespace', 'key', 'new', 'old', 'ret'):
1800 1799 if key in inpart.params:
1801 1800 kwargs[key] = inpart.params[key]
1802 1801 raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)
1803 1802
1804 1803 def _readphaseheads(inpart):
1805 1804 headsbyphase = [[] for i in phases.allphases]
1806 1805 entrysize = struct.calcsize(_fphasesentry)
1807 1806 while True:
1808 1807 entry = inpart.read(entrysize)
1809 1808 if len(entry) < entrysize:
1810 1809 if entry:
1811 1810 raise error.Abort(_('bad phase-heads bundle part'))
1812 1811 break
1813 1812 phase, node = struct.unpack(_fphasesentry, entry)
1814 1813 headsbyphase[phase].append(node)
1815 1814 return headsbyphase
1816 1815
1817 1816 @parthandler('phase-heads')
1818 1817 def handlephases(op, inpart):
1819 1818 """apply phases from bundle part to repo"""
1820 1819 headsbyphase = _readphaseheads(inpart)
1821 1820 phases.updatephases(op.repo.unfiltered(), op.gettransaction(), headsbyphase)
1822 1821 op.records.add('phase-heads', {})
1823 1822
1824 1823 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
1825 1824 def handlepushkeyreply(op, inpart):
1826 1825 """retrieve the result of a pushkey request"""
1827 1826 ret = int(inpart.params['return'])
1828 1827 partid = int(inpart.params['in-reply-to'])
1829 1828 op.records.add('pushkey', {'return': ret}, partid)
1830 1829
1831 1830 @parthandler('obsmarkers')
1832 1831 def handleobsmarker(op, inpart):
1833 1832 """add a stream of obsmarkers to the repo"""
1834 1833 tr = op.gettransaction()
1835 1834 markerdata = inpart.read()
1836 1835 if op.ui.config('experimental', 'obsmarkers-exchange-debug'):
1837 1836 op.ui.write(('obsmarker-exchange: %i bytes received\n')
1838 1837 % len(markerdata))
1839 1838 # The mergemarkers call will crash if marker creation is not enabled.
1840 1839 # We want to avoid this if the part is advisory.
1841 1840 if not inpart.mandatory and op.repo.obsstore.readonly:
1842 1841 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
1843 1842 return
1844 1843 new = op.repo.obsstore.mergemarkers(tr, markerdata)
1845 1844 op.repo.invalidatevolatilesets()
1846 1845 if new:
1847 1846 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
1848 1847 op.records.add('obsmarkers', {'new': new})
1849 1848 if op.reply is not None:
1850 1849 rpart = op.reply.newpart('reply:obsmarkers')
1851 1850 rpart.addparam(
1852 1851 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1853 1852 rpart.addparam('new', '%i' % new, mandatory=False)
1854 1853
1855 1854
1856 1855 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
1857 1856 def handleobsmarkerreply(op, inpart):
1858 1857 """retrieve the result of a pushkey request"""
1859 1858 ret = int(inpart.params['new'])
1860 1859 partid = int(inpart.params['in-reply-to'])
1861 1860 op.records.add('obsmarkers', {'new': ret}, partid)
1862 1861
1863 1862 @parthandler('hgtagsfnodes')
1864 1863 def handlehgtagsfnodes(op, inpart):
1865 1864 """Applies .hgtags fnodes cache entries to the local repo.
1866 1865
1867 1866 Payload is pairs of 20 byte changeset nodes and filenodes.
1868 1867 """
1869 1868 # Grab the transaction so we ensure that we have the lock at this point.
1870 1869 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1871 1870 op.gettransaction()
1872 1871 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
1873 1872
1874 1873 count = 0
1875 1874 while True:
1876 1875 node = inpart.read(20)
1877 1876 fnode = inpart.read(20)
1878 1877 if len(node) < 20 or len(fnode) < 20:
1879 1878 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
1880 1879 break
1881 1880 cache.setfnode(node, fnode)
1882 1881 count += 1
1883 1882
1884 1883 cache.write()
1885 1884 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
1886 1885
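# --- Illustrative sketch (editor's addition, not part of the diff) ---
# The 'hgtagsfnodes' payload is a flat sequence of 40-byte records, each
# a (changeset node, .hgtags filenode) pair, mirroring the chunks built
# in addparttagsfnodescache earlier in this file; the helper name is
# hypothetical:
def _iterfnodepairs(data):
    # data: the raw part payload as a byte string
    for offset in range(0, len(data) - 39, 40):
        yield data[offset:offset + 20], data[offset + 20:offset + 40]
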
1887 1886 @parthandler('pushvars')
1888 1887 def bundle2getvars(op, part):
1889 1888 '''unbundle a bundle2 containing shellvars on the server'''
1890 1889 # An option to disable unbundling on server-side for security reasons
1891 1890 if op.ui.configbool('push', 'pushvars.server'):
1892 1891 hookargs = {}
1893 1892 for key, value in part.advisoryparams:
1894 1893 key = key.upper()
1895 1894 # We want pushed variables to have USERVAR_ prepended so we know
1896 1895 # they came from the --pushvar flag.
1897 1896 key = "USERVAR_" + key
1898 1897 hookargs[key] = value
1899 1898 op.addhookargs(hookargs)
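# --- Illustrative sketch (editor's addition, not part of the diff) ---
# With the handler above, a push carrying the advisory param
# ('DEBUG', '1') would surface server-side roughly as:
#
#   part.advisoryparams -> [('DEBUG', '1')]
#   hookargs            -> {'USERVAR_DEBUG': '1'}
#
# so hooks can read USERVAR_DEBUG without colliding with built-in
# hook arguments.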
@@ -1,993 +1,982 b''
1 1 # changegroup.py - Mercurial changegroup manipulation functions
2 2 #
3 3 # Copyright 2006 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import os
11 11 import struct
12 12 import tempfile
13 13 import weakref
14 14
15 15 from .i18n import _
16 16 from .node import (
17 17 hex,
18 18 nullrev,
19 19 short,
20 20 )
21 21
22 22 from . import (
23 23 dagutil,
24 24 error,
25 25 mdiff,
26 26 phases,
27 27 pycompat,
28 28 util,
29 29 )
30 30
31 31 _CHANGEGROUPV1_DELTA_HEADER = "20s20s20s20s"
32 32 _CHANGEGROUPV2_DELTA_HEADER = "20s20s20s20s20s"
33 33 _CHANGEGROUPV3_DELTA_HEADER = ">20s20s20s20s20sH"
34 34
35 35 def readexactly(stream, n):
36 36 '''read n bytes from stream.read and abort if less was available'''
37 37 s = stream.read(n)
38 38 if len(s) < n:
39 39 raise error.Abort(_("stream ended unexpectedly"
40 40 " (got %d bytes, expected %d)")
41 41 % (len(s), n))
42 42 return s
43 43
44 44 def getchunk(stream):
45 45 """return the next chunk from stream as a string"""
46 46 d = readexactly(stream, 4)
47 47 l = struct.unpack(">l", d)[0]
48 48 if l <= 4:
49 49 if l:
50 50 raise error.Abort(_("invalid chunk length %d") % l)
51 51 return ""
52 52 return readexactly(stream, l - 4)
53 53
54 54 def chunkheader(length):
55 55 """return a changegroup chunk header (string)"""
56 56 return struct.pack(">l", length + 4)
57 57
58 58 def closechunk():
59 59 """return a changegroup chunk header (string) for a zero-length chunk"""
60 60 return struct.pack(">l", 0)
61 61
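# --- Illustrative sketch (editor's addition, not part of the diff) ---
# Round trip of the chunk framing above: every chunk is a big-endian
# int32 length that includes the 4 header bytes, and a length of 0 is
# the terminator (assumes Python 2 str payloads, as in this code).
import io

payload = 'hello'
wire = chunkheader(len(payload)) + payload + closechunk()
stream = io.BytesIO(wire)
assert getchunk(stream) == 'hello'   # 9-byte header + 5-byte payload
assert getchunk(stream) == ''        # the zero-length terminator
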
62 62 def writechunks(ui, chunks, filename, vfs=None):
63 63 """Write chunks to a file and return its filename.
64 64
65 65 The stream is assumed to be a bundle file.
66 66 Existing files will not be overwritten.
67 67 If no filename is specified, a temporary file is created.
68 68 """
69 69 fh = None
70 70 cleanup = None
71 71 try:
72 72 if filename:
73 73 if vfs:
74 74 fh = vfs.open(filename, "wb")
75 75 else:
76 76 # Increase default buffer size because default is usually
77 77 # small (4k is common on Linux).
78 78 fh = open(filename, "wb", 131072)
79 79 else:
80 80 fd, filename = tempfile.mkstemp(prefix="hg-bundle-", suffix=".hg")
81 81 fh = os.fdopen(fd, pycompat.sysstr("wb"))
82 82 cleanup = filename
83 83 for c in chunks:
84 84 fh.write(c)
85 85 cleanup = None
86 86 return filename
87 87 finally:
88 88 if fh is not None:
89 89 fh.close()
90 90 if cleanup is not None:
91 91 if filename and vfs:
92 92 vfs.unlink(cleanup)
93 93 else:
94 94 os.unlink(cleanup)
95 95
96 96 class cg1unpacker(object):
97 97 """Unpacker for cg1 changegroup streams.
98 98
99 99 A changegroup unpacker handles the framing of the revision data in
100 100 the wire format. Most consumers will want to use the apply()
101 101 method to add the changes from the changegroup to a repository.
102 102
103 103 If you're forwarding a changegroup unmodified to another consumer,
104 104 use getchunks(), which returns an iterator of changegroup
105 105 chunks. This is mostly useful for cases where you need to know the
106 106 data stream has ended by observing the end of the changegroup.
107 107
108 108 deltachunk() is useful only if you're applying delta data. Most
109 109 consumers should prefer apply() instead.
110 110
111 111 A few other public methods exist. Those are used only for
112 112 bundlerepo and some debug commands - their use is discouraged.
113 113 """
114 114 deltaheader = _CHANGEGROUPV1_DELTA_HEADER
115 115 deltaheadersize = struct.calcsize(deltaheader)
116 116 version = '01'
117 117 _grouplistcount = 1 # One list of files after the manifests
118 118
119 119 def __init__(self, fh, alg, extras=None):
120 120 if alg is None:
121 121 alg = 'UN'
122 122 if alg not in util.compengines.supportedbundletypes:
123 123 raise error.Abort(_('unknown stream compression type: %s')
124 124 % alg)
125 125 if alg == 'BZ':
126 126 alg = '_truncatedBZ'
127 127
128 128 compengine = util.compengines.forbundletype(alg)
129 129 self._stream = compengine.decompressorreader(fh)
130 130 self._type = alg
131 131 self.extras = extras or {}
132 132 self.callback = None
133 133
134 134 # These methods (compressed, read, seek, tell) all appear to only
135 135 # be used by bundlerepo, but it's a little hard to tell.
136 136 def compressed(self):
137 137 return self._type is not None and self._type != 'UN'
138 138 def read(self, l):
139 139 return self._stream.read(l)
140 140 def seek(self, pos):
141 141 return self._stream.seek(pos)
142 142 def tell(self):
143 143 return self._stream.tell()
144 144 def close(self):
145 145 return self._stream.close()
146 146
147 147 def _chunklength(self):
148 148 d = readexactly(self._stream, 4)
149 149 l = struct.unpack(">l", d)[0]
150 150 if l <= 4:
151 151 if l:
152 152 raise error.Abort(_("invalid chunk length %d") % l)
153 153 return 0
154 154 if self.callback:
155 155 self.callback()
156 156 return l - 4
157 157
158 158 def changelogheader(self):
159 159 """v10 does not have a changelog header chunk"""
160 160 return {}
161 161
162 162 def manifestheader(self):
163 163 """v10 does not have a manifest header chunk"""
164 164 return {}
165 165
166 166 def filelogheader(self):
167 167 """return the header of the filelogs chunk, v10 only has the filename"""
168 168 l = self._chunklength()
169 169 if not l:
170 170 return {}
171 171 fname = readexactly(self._stream, l)
172 172 return {'filename': fname}
173 173
174 174 def _deltaheader(self, headertuple, prevnode):
175 175 node, p1, p2, cs = headertuple
176 176 if prevnode is None:
177 177 deltabase = p1
178 178 else:
179 179 deltabase = prevnode
180 180 flags = 0
181 181 return node, p1, p2, deltabase, cs, flags
182 182
183 183 def deltachunk(self, prevnode):
184 184 l = self._chunklength()
185 185 if not l:
186 186 return {}
187 187 headerdata = readexactly(self._stream, self.deltaheadersize)
188 188 header = struct.unpack(self.deltaheader, headerdata)
189 189 delta = readexactly(self._stream, l - self.deltaheadersize)
190 190 node, p1, p2, deltabase, cs, flags = self._deltaheader(header, prevnode)
191 191 return {'node': node, 'p1': p1, 'p2': p2, 'cs': cs,
192 192 'deltabase': deltabase, 'delta': delta, 'flags': flags}
193 193
194 194 def getchunks(self):
195 195 """returns all the chunks contains in the bundle
196 196
197 197 Used when you need to forward the binary stream to a file or another
198 198 network API. To do so, it parses the changegroup data; otherwise it would
199 199 block in the sshrepo case because it doesn't know the end of the stream.
200 200 """
201 201 # For changegroup 1 and 2, we expect 3 parts: changelog, manifestlog,
202 202 # and a list of filelogs. For changegroup 3, we expect 4 parts:
203 203 # changelog, manifestlog, a list of tree manifestlogs, and a list of
204 204 # filelogs.
205 205 #
206 206 # Changelog and manifestlog parts are terminated with empty chunks. The
207 207 # tree and file parts are a list of entry sections. Each entry section
208 208 # is a series of chunks terminating in an empty chunk. The list of these
209 209 # entry sections is terminated in yet another empty chunk, so we know
210 210 # we've reached the end of the tree/file list when we reach an empty
211 211 # chunk that was preceded by no non-empty chunks.
212 212
213 213 parts = 0
214 214 while parts < 2 + self._grouplistcount:
215 215 noentries = True
216 216 while True:
217 217 chunk = getchunk(self)
218 218 if not chunk:
219 219 # The first two empty chunks represent the end of the
220 220 # changelog and the manifestlog portions. The remaining
221 221 # empty chunks represent either A) the end of individual
222 222 # tree or file entries in the file list, or B) the end of
223 223 # the entire list. It's the end of the entire list if there
224 224 # were no entries (i.e. noentries is True).
225 225 if parts < 2:
226 226 parts += 1
227 227 elif noentries:
228 228 parts += 1
229 229 break
230 230 noentries = False
231 231 yield chunkheader(len(chunk))
232 232 pos = 0
233 233 while pos < len(chunk):
234 234 next = pos + 2**20
235 235 yield chunk[pos:next]
236 236 pos = next
237 237 yield closechunk()
238 238
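# --- Illustrative sketch (editor's addition, not part of the diff) ---
# For a cg1/cg2 stream (_grouplistcount == 1) the loop above consumes:
#
#   <changelog chunks> <empty>             part 1
#   <manifest chunks>  <empty>             part 2
#   [<file chunks> <empty>]... <empty>     part 3: one section per file,
#                                          then a list terminator (an
#                                          empty chunk preceded by no
#                                          non-empty chunk)
#
# cg3 streams (_grouplistcount == 2) insert one more such list, for tree
# manifests, ahead of the file list.
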
239 239 def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
240 240 # We know that we'll never have more manifests than we had
241 241 # changesets.
242 242 self.callback = prog(_('manifests'), numchanges)
243 243 # no need to check for empty manifest group here:
244 244 # if the result of the merge of 1 and 2 is the same in 3 and 4,
245 245 # no new manifest will be created and the manifest group will
246 246 # be empty during the pull
247 247 self.manifestheader()
248 248 repo.manifestlog._revlog.addgroup(self, revmap, trp)
249 249 repo.ui.progress(_('manifests'), None)
250 250 self.callback = None
251 251
252 252 def apply(self, repo, tr, srctype, url, targetphase=phases.draft,
253 253 expectedtotal=None):
254 254 """Add the changegroup returned by source.read() to this repo.
255 255 srctype is a string like 'push', 'pull', or 'unbundle'. url is
256 256 the URL of the repo where this changegroup is coming from.
257 257
258 258 Return an integer summarizing the change to this repo:
259 259 - nothing changed or no source: 0
260 260 - more heads than before: 1+added heads (2..n)
261 261 - fewer heads than before: -1-removed heads (-2..-n)
262 262 - number of heads stays the same: 1
263 263 """
264 264 repo = repo.unfiltered()
265 265 def csmap(x):
266 266 repo.ui.debug("add changeset %s\n" % short(x))
267 267 return len(cl)
268 268
269 269 def revmap(x):
270 270 return cl.rev(x)
271 271
272 272 changesets = files = revisions = 0
273 273
274 274 try:
275 275 # The transaction may already carry source information. In this
276 276 # case we use the top level data. We overwrite the argument
277 277 # because we need to use the top level value (if they exist)
278 278 # in this function.
279 279 srctype = tr.hookargs.setdefault('source', srctype)
280 280 url = tr.hookargs.setdefault('url', url)
281 281 repo.hook('prechangegroup',
282 282 throw=True, **pycompat.strkwargs(tr.hookargs))
283 283
284 284 # write changelog data to temp files so concurrent readers
285 285 # will not see an inconsistent view
286 286 cl = repo.changelog
287 287 cl.delayupdate(tr)
288 288 oldheads = set(cl.heads())
289 289
290 290 trp = weakref.proxy(tr)
291 291 # pull off the changeset group
292 292 repo.ui.status(_("adding changesets\n"))
293 293 clstart = len(cl)
294 294 class prog(object):
295 295 def __init__(self, step, total):
296 296 self._step = step
297 297 self._total = total
298 298 self._count = 1
299 299 def __call__(self):
300 300 repo.ui.progress(self._step, self._count, unit=_('chunks'),
301 301 total=self._total)
302 302 self._count += 1
303 303 self.callback = prog(_('changesets'), expectedtotal)
304 304
305 305 efiles = set()
306 306 def onchangelog(cl, node):
307 307 efiles.update(cl.readfiles(node))
308 308
309 309 self.changelogheader()
310 310 cgnodes = cl.addgroup(self, csmap, trp, addrevisioncb=onchangelog)
311 311 efiles = len(efiles)
312 312
313 313 if not cgnodes:
314 314 repo.ui.develwarn('applied empty changegroup',
315 315 config='empty-changegroup')
316 316 clend = len(cl)
317 317 changesets = clend - clstart
318 318 repo.ui.progress(_('changesets'), None)
319 319 self.callback = None
320 320
321 321 # pull off the manifest group
322 322 repo.ui.status(_("adding manifests\n"))
323 323 self._unpackmanifests(repo, revmap, trp, prog, changesets)
324 324
325 325 needfiles = {}
326 326 if repo.ui.configbool('server', 'validate'):
327 327 cl = repo.changelog
328 328 ml = repo.manifestlog
329 329 # validate incoming csets have their manifests
330 330 for cset in xrange(clstart, clend):
331 331 mfnode = cl.changelogrevision(cset).manifest
332 332 mfest = ml[mfnode].readdelta()
333 333 # store file cgnodes we must see
334 334 for f, n in mfest.iteritems():
335 335 needfiles.setdefault(f, set()).add(n)
336 336
337 337 # process the files
338 338 repo.ui.status(_("adding file changes\n"))
339 339 newrevs, newfiles = _addchangegroupfiles(
340 340 repo, self, revmap, trp, efiles, needfiles)
341 341 revisions += newrevs
342 342 files += newfiles
343 343
344 344 deltaheads = 0
345 345 if oldheads:
346 346 heads = cl.heads()
347 347 deltaheads = len(heads) - len(oldheads)
348 348 for h in heads:
349 349 if h not in oldheads and repo[h].closesbranch():
350 350 deltaheads -= 1
351 351 htext = ""
352 352 if deltaheads:
353 353 htext = _(" (%+d heads)") % deltaheads
354 354
355 355 repo.ui.status(_("added %d changesets"
356 356 " with %d changes to %d files%s\n")
357 357 % (changesets, revisions, files, htext))
358 358 repo.invalidatevolatilesets()
359 359
360 360 if changesets > 0:
361 361 if 'node' not in tr.hookargs:
362 362 tr.hookargs['node'] = hex(cl.node(clstart))
363 363 tr.hookargs['node_last'] = hex(cl.node(clend - 1))
364 364 hookargs = dict(tr.hookargs)
365 365 else:
366 366 hookargs = dict(tr.hookargs)
367 367 hookargs['node'] = hex(cl.node(clstart))
368 368 hookargs['node_last'] = hex(cl.node(clend - 1))
369 369 repo.hook('pretxnchangegroup',
370 370 throw=True, **pycompat.strkwargs(hookargs))
371 371
372 372 added = [cl.node(r) for r in xrange(clstart, clend)]
373 373 phaseall = None
374 374 if srctype in ('push', 'serve'):
375 375 # Old servers can not push the boundary themselves.
376 376 # New servers won't push the boundary if changeset already
377 377 # exists locally as secret
378 378 #
379 379 # We should not use added here but the list of all change in
380 380 # the bundle
381 381 if repo.publishing():
382 382 targetphase = phaseall = phases.public
383 383 else:
384 384 # closer target phase computation
385 385
386 386 # Those changesets have been pushed from the
387 387 # outside, their phases are going to be pushed
388 388 # alongside. Therefore `targetphase` is
389 389 # ignored.
390 390 targetphase = phaseall = phases.draft
391 391 if added:
392 392 phases.registernew(repo, tr, targetphase, added)
393 393 if phaseall is not None:
394 394 phases.advanceboundary(repo, tr, phaseall, cgnodes)
395 395
396 396 if changesets > 0:
397 397
398 398 def runhooks():
399 399 # These hooks run when the lock releases, not when the
400 400 # transaction closes. So it's possible for the changelog
401 401 # to have changed since we last saw it.
402 402 if clstart >= len(repo):
403 403 return
404 404
405 405 repo.hook("changegroup", **pycompat.strkwargs(hookargs))
406 406
407 407 for n in added:
408 408 args = hookargs.copy()
409 409 args['node'] = hex(n)
410 410 del args['node_last']
411 411 repo.hook("incoming", **pycompat.strkwargs(args))
412 412
413 413 newheads = [h for h in repo.heads()
414 414 if h not in oldheads]
415 415 repo.ui.log("incoming",
416 416 "%s incoming changes - new heads: %s\n",
417 417 len(added),
418 418 ', '.join([hex(c[:6]) for c in newheads]))
419 419
420 420 tr.addpostclose('changegroup-runhooks-%020i' % clstart,
421 421 lambda tr: repo._afterlock(runhooks))
422 422 finally:
423 423 repo.ui.flush()
424 424 # never return 0 here:
425 425 if deltaheads < 0:
426 426 ret = deltaheads - 1
427 427 else:
428 428 ret = deltaheads + 1
429 429 return ret
430 430
431 431 class cg2unpacker(cg1unpacker):
432 432 """Unpacker for cg2 streams.
433 433
434 434 cg2 streams add support for generaldelta, so the delta header
435 435 format is slightly different. All other features about the data
436 436 remain the same.
437 437 """
438 438 deltaheader = _CHANGEGROUPV2_DELTA_HEADER
439 439 deltaheadersize = struct.calcsize(deltaheader)
440 440 version = '02'
441 441
442 442 def _deltaheader(self, headertuple, prevnode):
443 443 node, p1, p2, deltabase, cs = headertuple
444 444 flags = 0
445 445 return node, p1, p2, deltabase, cs, flags
446 446
447 447 class cg3unpacker(cg2unpacker):
448 448 """Unpacker for cg3 streams.
449 449
450 450 cg3 streams add support for exchanging treemanifests and revlog
451 451 flags. It adds the revlog flags to the delta header and an empty chunk
452 452 separating manifests and files.
453 453 """
454 454 deltaheader = _CHANGEGROUPV3_DELTA_HEADER
455 455 deltaheadersize = struct.calcsize(deltaheader)
456 456 version = '03'
457 457 _grouplistcount = 2 # One list of manifests and one list of files
458 458
459 459 def _deltaheader(self, headertuple, prevnode):
460 460 node, p1, p2, deltabase, cs, flags = headertuple
461 461 return node, p1, p2, deltabase, cs, flags
462 462
463 463 def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
464 464 super(cg3unpacker, self)._unpackmanifests(repo, revmap, trp, prog,
465 465 numchanges)
466 466 for chunkdata in iter(self.filelogheader, {}):
467 467 # If we get here, there are directory manifests in the changegroup
468 468 d = chunkdata["filename"]
469 469 repo.ui.debug("adding %s revisions\n" % d)
470 470 dirlog = repo.manifestlog._revlog.dirlog(d)
471 471 if not dirlog.addgroup(self, revmap, trp):
472 472 raise error.Abort(_("received dir revlog group is empty"))
473 473
474 474 class headerlessfixup(object):
475 475 def __init__(self, fh, h):
476 476 self._h = h
477 477 self._fh = fh
478 478 def read(self, n):
479 479 if self._h:
480 480 d, self._h = self._h[:n], self._h[n:]
481 481 if len(d) < n:
482 482 d += readexactly(self._fh, n - len(d))
483 483 return d
484 484 return readexactly(self._fh, n)
485 485
486 486 class cg1packer(object):
487 487 deltaheader = _CHANGEGROUPV1_DELTA_HEADER
488 488 version = '01'
489 489 def __init__(self, repo, bundlecaps=None):
490 490 """Given a source repo, construct a bundler.
491 491
492 492 bundlecaps is optional and can be used to specify the set of
493 493 capabilities which can be used to build the bundle. While bundlecaps is
494 494 unused in core Mercurial, extensions rely on this feature to communicate
495 495 capabilities to customize the changegroup packer.
496 496 """
497 497 # Set of capabilities we can use to build the bundle.
498 498 if bundlecaps is None:
499 499 bundlecaps = set()
500 500 self._bundlecaps = bundlecaps
501 501 # experimental config: bundle.reorder
502 502 reorder = repo.ui.config('bundle', 'reorder')
503 503 if reorder == 'auto':
504 504 reorder = None
505 505 else:
506 506 reorder = util.parsebool(reorder)
507 507 self._repo = repo
508 508 self._reorder = reorder
509 509 self._progress = repo.ui.progress
510 510 if self._repo.ui.verbose and not self._repo.ui.debugflag:
511 511 self._verbosenote = self._repo.ui.note
512 512 else:
513 513 self._verbosenote = lambda s: None
514 514
515 515 def close(self):
516 516 return closechunk()
517 517
518 518 def fileheader(self, fname):
519 519 return chunkheader(len(fname)) + fname
520 520
521 521 # Extracted both for clarity and for overriding in extensions.
522 522 def _sortgroup(self, revlog, nodelist, lookup):
523 523 """Sort nodes for change group and turn them into revnums."""
524 524 # for generaldelta revlogs, we linearize the revs; this will both be
525 525 # much quicker and generate a much smaller bundle
526 526 if (revlog._generaldelta and self._reorder is None) or self._reorder:
527 527 dag = dagutil.revlogdag(revlog)
528 528 return dag.linearize(set(revlog.rev(n) for n in nodelist))
529 529 else:
530 530 return sorted([revlog.rev(n) for n in nodelist])
531 531
532 532 def group(self, nodelist, revlog, lookup, units=None):
533 533 """Calculate a delta group, yielding a sequence of changegroup chunks
534 534 (strings).
535 535
536 536 Given a list of changeset revs, return a set of deltas and
537 537 metadata corresponding to nodes. The first delta is
538 538 first parent(nodelist[0]) -> nodelist[0]; the receiver is
539 539 guaranteed to have this parent, as it has all history before
540 540 these changesets. In the case the first parent is nullrev, the
541 541 changegroup starts with a full revision.
542 542
543 543 If units is not None, progress detail will be generated; units specifies
544 544 the type of revlog that is touched (changelog, manifest, etc.).
545 545 """
546 546 # if we don't have any revisions touched by these changesets, bail
547 547 if len(nodelist) == 0:
548 548 yield self.close()
549 549 return
550 550
551 551 revs = self._sortgroup(revlog, nodelist, lookup)
552 552
553 553 # add the parent of the first rev
554 554 p = revlog.parentrevs(revs[0])[0]
555 555 revs.insert(0, p)
556 556
557 557 # build deltas
558 558 total = len(revs) - 1
559 559 msgbundling = _('bundling')
560 560 for r in xrange(len(revs) - 1):
561 561 if units is not None:
562 562 self._progress(msgbundling, r + 1, unit=units, total=total)
563 563 prev, curr = revs[r], revs[r + 1]
564 564 linknode = lookup(revlog.node(curr))
565 565 for c in self.revchunk(revlog, curr, prev, linknode):
566 566 yield c
567 567
568 568 if units is not None:
569 569 self._progress(msgbundling, None)
570 570 yield self.close()
571 571
572 572 # filter any nodes that claim to be part of the known set
573 573 def prune(self, revlog, missing, commonrevs):
574 574 rr, rl = revlog.rev, revlog.linkrev
575 575 return [n for n in missing if rl(rr(n)) not in commonrevs]
576 576
577 577 def _packmanifests(self, dir, mfnodes, lookuplinknode):
578 578 """Pack flat manifests into a changegroup stream."""
579 579 assert not dir
580 580 for chunk in self.group(mfnodes, self._repo.manifestlog._revlog,
581 581 lookuplinknode, units=_('manifests')):
582 582 yield chunk
583 583
584 584 def _manifestsdone(self):
585 585 return ''
586 586
587 587 def generate(self, commonrevs, clnodes, fastpathlinkrev, source):
588 588 '''yield a sequence of changegroup chunks (strings)'''
589 589 repo = self._repo
590 590 cl = repo.changelog
591 591
592 592 clrevorder = {}
593 593 mfs = {} # needed manifests
594 594 fnodes = {} # needed file nodes
595 595 changedfiles = set()
596 596
597 597 # Callback for the changelog, used to collect changed files and manifest
598 598 # nodes.
599 599 # Returns the linkrev node (identity in the changelog case).
600 600 def lookupcl(x):
601 601 c = cl.read(x)
602 602 clrevorder[x] = len(clrevorder)
603 603 n = c[0]
604 604 # record the first changeset introducing this manifest version
605 605 mfs.setdefault(n, x)
606 606 # Record a complete list of potentially-changed files in
607 607 # this manifest.
608 608 changedfiles.update(c[3])
609 609 return x
610 610
611 611 self._verbosenote(_('uncompressed size of bundle content:\n'))
612 612 size = 0
613 613 for chunk in self.group(clnodes, cl, lookupcl, units=_('changesets')):
614 614 size += len(chunk)
615 615 yield chunk
616 616 self._verbosenote(_('%8.i (changelog)\n') % size)
617 617
618 618 # We need to make sure that the linkrev in the changegroup refers to
619 619 # the first changeset that introduced the manifest or file revision.
620 620 # The fastpath is usually safer than the slowpath, because the filelogs
621 621 # are walked in revlog order.
622 622 #
623 623 # When taking the slowpath with reorder=None and the manifest revlog
624 624 # uses generaldelta, the manifest may be walked in the "wrong" order.
625 625 # Without 'clrevorder', we would get an incorrect linkrev (see fix in
626 626 # cc0ff93d0c0c).
627 627 #
628 628 # When taking the fastpath, we are only vulnerable to reordering
629 629 # of the changelog itself. The changelog never uses generaldelta, so
630 630 # it is only reordered when reorder=True. To handle this case, we
631 631 # simply take the slowpath, which already has the 'clrevorder' logic.
632 632 # This was also fixed in cc0ff93d0c0c.
633 633 fastpathlinkrev = fastpathlinkrev and not self._reorder
634 634 # Treemanifests don't work correctly with fastpathlinkrev
635 635 # either, because we don't discover which directory nodes to
636 636 # send along with files. This could probably be fixed.
637 637 fastpathlinkrev = fastpathlinkrev and (
638 638 'treemanifest' not in repo.requirements)
639 639
640 640 for chunk in self.generatemanifests(commonrevs, clrevorder,
641 641 fastpathlinkrev, mfs, fnodes):
642 642 yield chunk
643 643 mfs.clear()
644 644 clrevs = set(cl.rev(x) for x in clnodes)
645 645
646 646 if not fastpathlinkrev:
647 647 def linknodes(unused, fname):
648 648 return fnodes.get(fname, {})
649 649 else:
650 650 cln = cl.node
651 651 def linknodes(filerevlog, fname):
652 652 llr = filerevlog.linkrev
653 653 fln = filerevlog.node
654 654 revs = ((r, llr(r)) for r in filerevlog)
655 655 return dict((fln(r), cln(lr)) for r, lr in revs if lr in clrevs)
656 656
657 657 for chunk in self.generatefiles(changedfiles, linknodes, commonrevs,
658 658 source):
659 659 yield chunk
660 660
661 661 yield self.close()
662 662
663 663 if clnodes:
664 664 repo.hook('outgoing', node=hex(clnodes[0]), source=source)
665 665
666 666 def generatemanifests(self, commonrevs, clrevorder, fastpathlinkrev, mfs,
667 667 fnodes):
668 668 repo = self._repo
669 669 mfl = repo.manifestlog
670 670 dirlog = mfl._revlog.dirlog
671 671 tmfnodes = {'': mfs}
672 672
673 673 # Callback for the manifest, used to collect linkrevs for filelog
674 674 # revisions.
675 675 # Returns the linkrev node (collected in lookupcl).
676 676 def makelookupmflinknode(dir):
677 677 if fastpathlinkrev:
678 678 assert not dir
679 679 return mfs.__getitem__
680 680
681 681 def lookupmflinknode(x):
682 682 """Callback for looking up the linknode for manifests.
683 683
684 684 Returns the linkrev node for the specified manifest.
685 685
686 686 SIDE EFFECT:
687 687
688 688 1) fclnodes gets populated with the list of relevant
689 689 file nodes if we're not using fastpathlinkrev
690 690 2) When treemanifests are in use, collects treemanifest nodes
691 691 to send
692 692
693 693 Note that this means manifests must be completely sent to
694 694 the client before you can trust the list of files and
695 695 treemanifests to send.
696 696 """
697 697 clnode = tmfnodes[dir][x]
698 698 mdata = mfl.get(dir, x).readfast(shallow=True)
699 699 for p, n, fl in mdata.iterentries():
700 700 if fl == 't': # subdirectory manifest
701 701 subdir = dir + p + '/'
702 702 tmfclnodes = tmfnodes.setdefault(subdir, {})
703 703 tmfclnode = tmfclnodes.setdefault(n, clnode)
704 704 if clrevorder[clnode] < clrevorder[tmfclnode]:
705 705 tmfclnodes[n] = clnode
706 706 else:
707 707 f = dir + p
708 708 fclnodes = fnodes.setdefault(f, {})
709 709 fclnode = fclnodes.setdefault(n, clnode)
710 710 if clrevorder[clnode] < clrevorder[fclnode]:
711 711 fclnodes[n] = clnode
712 712 return clnode
713 713 return lookupmflinknode
714 714
715 715 size = 0
716 716 while tmfnodes:
717 717 dir = min(tmfnodes)
718 718 nodes = tmfnodes[dir]
719 719 prunednodes = self.prune(dirlog(dir), nodes, commonrevs)
720 720 if not dir or prunednodes:
721 721 for x in self._packmanifests(dir, prunednodes,
722 722 makelookupmflinknode(dir)):
723 723 size += len(x)
724 724 yield x
725 725 del tmfnodes[dir]
726 726 self._verbosenote(_('%8.i (manifests)\n') % size)
727 727 yield self._manifestsdone()
728 728
729 729 # The 'source' parameter is useful for extensions
730 730 def generatefiles(self, changedfiles, linknodes, commonrevs, source):
731 731 repo = self._repo
732 732 progress = self._progress
733 733 msgbundling = _('bundling')
734 734
735 735 total = len(changedfiles)
736 736 # for progress output
737 737 msgfiles = _('files')
738 738 for i, fname in enumerate(sorted(changedfiles)):
739 739 filerevlog = repo.file(fname)
740 740 if not filerevlog:
741 741 raise error.Abort(_("empty or missing revlog for %s") % fname)
742 742
743 743 linkrevnodes = linknodes(filerevlog, fname)
744 744 # Lookup for filenodes, we collected the linkrev nodes above in the
745 745 # fastpath case and with lookupmf in the slowpath case.
746 746 def lookupfilelog(x):
747 747 return linkrevnodes[x]
748 748
749 749 filenodes = self.prune(filerevlog, linkrevnodes, commonrevs)
750 750 if filenodes:
751 751 progress(msgbundling, i + 1, item=fname, unit=msgfiles,
752 752 total=total)
753 753 h = self.fileheader(fname)
754 754 size = len(h)
755 755 yield h
756 756 for chunk in self.group(filenodes, filerevlog, lookupfilelog):
757 757 size += len(chunk)
758 758 yield chunk
759 759 self._verbosenote(_('%8.i %s\n') % (size, fname))
760 760 progress(msgbundling, None)
761 761
762 762 def deltaparent(self, revlog, rev, p1, p2, prev):
763 763 return prev
764 764
765 765 def revchunk(self, revlog, rev, prev, linknode):
766 766 node = revlog.node(rev)
767 767 p1, p2 = revlog.parentrevs(rev)
768 768 base = self.deltaparent(revlog, rev, p1, p2, prev)
769 769
770 770 prefix = ''
771 771 if revlog.iscensored(base) or revlog.iscensored(rev):
772 772 try:
773 773 delta = revlog.revision(node, raw=True)
774 774 except error.CensoredNodeError as e:
775 775 delta = e.tombstone
776 776 if base == nullrev:
777 777 prefix = mdiff.trivialdiffheader(len(delta))
778 778 else:
779 779 baselen = revlog.rawsize(base)
780 780 prefix = mdiff.replacediffheader(baselen, len(delta))
781 781 elif base == nullrev:
782 782 delta = revlog.revision(node, raw=True)
783 783 prefix = mdiff.trivialdiffheader(len(delta))
784 784 else:
785 785 delta = revlog.revdiff(base, rev)
786 786 p1n, p2n = revlog.parents(node)
787 787 basenode = revlog.node(base)
788 788 flags = revlog.flags(rev)
789 789 meta = self.builddeltaheader(node, p1n, p2n, basenode, linknode, flags)
790 790 meta += prefix
791 791 l = len(meta) + len(delta)
792 792 yield chunkheader(l)
793 793 yield meta
794 794 yield delta
795 795 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
796 796 # do nothing with basenode, it is implicitly the previous one in HG10
797 797 # do nothing with flags, it is implicitly 0 for cg1 and cg2
798 798 return struct.pack(self.deltaheader, node, p1n, p2n, linknode)
799 799
800 800 class cg2packer(cg1packer):
801 801 version = '02'
802 802 deltaheader = _CHANGEGROUPV2_DELTA_HEADER
803 803
804 804 def __init__(self, repo, bundlecaps=None):
805 805 super(cg2packer, self).__init__(repo, bundlecaps)
806 806 if self._reorder is None:
807 807 # Since generaldelta is directly supported by cg2, reordering
808 808 # generally doesn't help, so we disable it by default (treating
809 809 # bundle.reorder=auto just like bundle.reorder=False).
810 810 self._reorder = False
811 811
812 812 def deltaparent(self, revlog, rev, p1, p2, prev):
813 813 dp = revlog.deltaparent(rev)
814 814 if dp == nullrev and revlog.storedeltachains:
815 815 # Avoid sending full revisions when delta parent is null. Pick prev
816 816 # in that case. It's tempting to pick p1 in this case, as p1 will
817 817 # be smaller in the common case. However, computing a delta against
818 818 # p1 may require resolving the raw text of p1, which could be
819 819 # expensive. The revlog caches should have prev cached, meaning
820 820 # less CPU for changegroup generation. There is likely room to add
821 821 # a flag and/or config option to control this behavior.
822 822 return prev
823 823 elif dp == nullrev:
824 824 # revlog is configured to use full snapshot for a reason,
825 825 # stick to full snapshot.
826 826 return nullrev
827 827 elif dp not in (p1, p2, prev):
828 828 # Pick prev when we can't be sure remote has the base revision.
829 829 return prev
830 830 else:
831 831 return dp
832 832
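# --- Illustrative sketch (editor's addition, not part of the diff) ---
# The delta-base choice above, as a table:
#
#   dp == nullrev, storedeltachains      -> prev     (avoid full revisions)
#   dp == nullrev, not storedeltachains  -> nullrev  (keep full snapshot)
#   dp not in (p1, p2, prev)             -> prev     (remote may lack dp)
#   otherwise                            -> dp       (reuse stored delta)
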
833 833 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
834 834 # Do nothing with flags, it is implicitly 0 in cg1 and cg2
835 835 return struct.pack(self.deltaheader, node, p1n, p2n, basenode, linknode)
836 836
837 837 class cg3packer(cg2packer):
838 838 version = '03'
839 839 deltaheader = _CHANGEGROUPV3_DELTA_HEADER
840 840
841 841 def _packmanifests(self, dir, mfnodes, lookuplinknode):
842 842 if dir:
843 843 yield self.fileheader(dir)
844 844
845 845 dirlog = self._repo.manifestlog._revlog.dirlog(dir)
846 846 for chunk in self.group(mfnodes, dirlog, lookuplinknode,
847 847 units=_('manifests')):
848 848 yield chunk
849 849
850 850 def _manifestsdone(self):
851 851 return self.close()
852 852
853 853 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
854 854 return struct.pack(
855 855 self.deltaheader, node, p1n, p2n, basenode, linknode, flags)
856 856
857 857 _packermap = {'01': (cg1packer, cg1unpacker),
858 858 # cg2 adds support for exchanging generaldelta
859 859 '02': (cg2packer, cg2unpacker),
860 860 # cg3 adds support for exchanging revlog flags and treemanifests
861 861 '03': (cg3packer, cg3unpacker),
862 862 }
863 863
864 864 def allsupportedversions(repo):
865 865 versions = set(_packermap.keys())
866 866 if not (repo.ui.configbool('experimental', 'changegroup3') or
867 867 repo.ui.configbool('experimental', 'treemanifest') or
868 868 'treemanifest' in repo.requirements):
869 869 versions.discard('03')
870 870 return versions
871 871
872 872 # Changegroup versions that can be applied to the repo
873 873 def supportedincomingversions(repo):
874 874 return allsupportedversions(repo)
875 875
876 876 # Changegroup versions that can be created from the repo
877 877 def supportedoutgoingversions(repo):
878 878 versions = allsupportedversions(repo)
879 879 if 'treemanifest' in repo.requirements:
880 880 # Versions 01 and 02 support only flat manifests and it's just too
881 881 # expensive to convert between the flat manifest and tree manifest on
882 882 # the fly. Since tree manifests are hashed differently, all of history
883 883 # would have to be converted. Instead, we simply don't even pretend to
884 884 # support versions 01 and 02.
885 885 versions.discard('01')
886 886 versions.discard('02')
887 887 return versions
888 888
889 889 def safeversion(repo):
890 890 # Finds the smallest version that it's safe to assume clients of the repo
891 891 # will support. For example, all hg versions that support generaldelta also
892 892 # support changegroup 02.
893 893 versions = supportedoutgoingversions(repo)
894 894 if 'generaldelta' in repo.requirements:
895 895 versions.discard('01')
896 896 assert versions
897 897 return min(versions)
898 898
899 899 def getbundler(version, repo, bundlecaps=None):
900 900 assert version in supportedoutgoingversions(repo)
901 901 return _packermap[version][0](repo, bundlecaps)
902 902
903 903 def getunbundler(version, fh, alg, extras=None):
904 904 return _packermap[version][1](fh, alg, extras=extras)
905 905
906 906 def _changegroupinfo(repo, nodes, source):
907 907 if repo.ui.verbose or source == 'bundle':
908 908 repo.ui.status(_("%d changesets found\n") % len(nodes))
909 909 if repo.ui.debugflag:
910 910 repo.ui.debug("list of changesets:\n")
911 911 for node in nodes:
912 912 repo.ui.debug("%s\n" % hex(node))
913 913
914 914 def makestream(repo, outgoing, version, source, fastpath=False,
915 915 bundlecaps=None):
916 916 bundler = getbundler(version, repo, bundlecaps=bundlecaps)
917 917 return getsubsetraw(repo, outgoing, bundler, source, fastpath=fastpath)
918 918
919 919 def makechangegroup(repo, outgoing, version, source, fastpath=False,
920 920 bundlecaps=None):
921 921 cgstream = makestream(repo, outgoing, version, source,
922 922 fastpath=fastpath, bundlecaps=bundlecaps)
923 923 return getunbundler(version, util.chunkbuffer(cgstream), None,
924 924 {'clcount': len(outgoing.missing) })
925 925
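# --- Illustrative sketch (editor's addition, not part of the diff) ---
# The entry point this change migrates callers to. 'outgoing' is assumed
# to be a discovery.outgoing instance with precomputed common/missing
# sets, e.g.:
#
#   cg = makechangegroup(repo, outgoing, '02', 'push')
#   part = bundler.newpart('changegroup', data=cg.getchunks())
#
# or, to apply it to a repo directly under an open transaction tr:
#
#   cg.apply(repo, tr, 'push', 'bundle2')
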
926 926 def getsubsetraw(repo, outgoing, bundler, source, fastpath=False):
927 927 repo = repo.unfiltered()
928 928 commonrevs = outgoing.common
929 929 csets = outgoing.missing
930 930 heads = outgoing.missingheads
931 931 # We go through the fast path if we get told to, or if all unfiltered
932 932 # heads have been requested (since we then know that all linkrevs will
933 933 # be pulled by the client).
934 934 heads.sort()
935 935 fastpathlinkrev = fastpath or (
936 936 repo.filtername is None and heads == sorted(repo.heads()))
937 937
938 938 repo.hook('preoutgoing', throw=True, source=source)
939 939 _changegroupinfo(repo, csets, source)
940 940 return bundler.generate(commonrevs, csets, fastpathlinkrev, source)
941 941
942 def getchangegroup(repo, source, outgoing, bundlecaps=None,
943 version='01'):
944 """Like getbundle, but taking a discovery.outgoing as an argument.
945
946 This is only implemented for local repos and reuses potentially
947 precomputed sets in outgoing."""
948 if not outgoing.missing:
949 return None
950 return makechangegroup(repo, outgoing, version, source,
951 bundlecaps=bundlecaps)
952
953 942 def _addchangegroupfiles(repo, source, revmap, trp, expectedfiles, needfiles):
954 943 revisions = 0
955 944 files = 0
956 945 for chunkdata in iter(source.filelogheader, {}):
957 946 files += 1
958 947 f = chunkdata["filename"]
959 948 repo.ui.debug("adding %s revisions\n" % f)
960 949 repo.ui.progress(_('files'), files, unit=_('files'),
961 950 total=expectedfiles)
962 951 fl = repo.file(f)
963 952 o = len(fl)
964 953 try:
965 954 if not fl.addgroup(source, revmap, trp):
966 955 raise error.Abort(_("received file revlog group is empty"))
967 956 except error.CensoredBaseError as e:
968 957 raise error.Abort(_("received delta base is censored: %s") % e)
969 958 revisions += len(fl) - o
970 959 if f in needfiles:
971 960 needs = needfiles[f]
972 961 for new in xrange(o, len(fl)):
973 962 n = fl.node(new)
974 963 if n in needs:
975 964 needs.remove(n)
976 965 else:
977 966 raise error.Abort(
978 967 _("received spurious file revlog entry"))
979 968 if not needs:
980 969 del needfiles[f]
981 970 repo.ui.progress(_('files'), None)
982 971
983 972 for f, needs in needfiles.iteritems():
984 973 fl = repo.file(f)
985 974 for n in needs:
986 975 try:
987 976 fl.rev(n)
988 977 except error.LookupError:
989 978 raise error.Abort(
990 979 _('missing file data for %s:%s - run hg verify') %
991 980 (f, hex(n)))
992 981
993 982 return revisions, files
@@ -1,2011 +1,2011 b''
1 1 # exchange.py - utility to exchange data between repos.
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import errno
11 11 import hashlib
12 12
13 13 from .i18n import _
14 14 from .node import (
15 15 hex,
16 16 nullid,
17 17 )
18 18 from . import (
19 19 bookmarks as bookmod,
20 20 bundle2,
21 21 changegroup,
22 22 discovery,
23 23 error,
24 24 lock as lockmod,
25 25 obsolete,
26 26 phases,
27 27 pushkey,
28 28 pycompat,
29 29 scmutil,
30 30 sslutil,
31 31 streamclone,
32 32 url as urlmod,
33 33 util,
34 34 )
35 35
36 36 urlerr = util.urlerr
37 37 urlreq = util.urlreq
38 38
39 39 # Maps bundle version human names to changegroup versions.
40 40 _bundlespeccgversions = {'v1': '01',
41 41 'v2': '02',
42 42 'packed1': 's1',
43 43 'bundle2': '02', #legacy
44 44 }
45 45
46 46 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
47 47 _bundlespecv1compengines = {'gzip', 'bzip2', 'none'}
48 48
49 49 def parsebundlespec(repo, spec, strict=True, externalnames=False):
50 50 """Parse a bundle string specification into parts.
51 51
52 52 Bundle specifications denote a well-defined bundle/exchange format.
53 53 The content of a given specification should not change over time in
54 54 order to ensure that bundles produced by a newer version of Mercurial are
55 55 readable from an older version.
56 56
57 57 The string currently has the form:
58 58
59 59 <compression>-<type>[;<parameter0>[;<parameter1>]]
60 60
61 61 Where <compression> is one of the supported compression formats
62 62 and <type> is (currently) a version string. A ";" can follow the type and
63 63 all text afterwards is interpreted as URI encoded, ";" delimited key=value
64 64 pairs.
65 65
66 66 If ``strict`` is True (the default) <compression> is required. Otherwise,
67 67 it is optional.
68 68
69 69 If ``externalnames`` is False (the default), the human-centric names will
70 70 be converted to their internal representation.
71 71
72 72 Returns a 3-tuple of (compression, version, parameters). Compression will
73 73 be ``None`` if not in strict mode and a compression isn't defined.
74 74
75 75 An ``InvalidBundleSpecification`` is raised when the specification is
76 76 not syntactically well formed.
77 77
78 78 An ``UnsupportedBundleSpecification`` is raised when the compression or
79 79 bundle type/version is not recognized.
80 80
81 81 Note: this function will likely eventually return a more complex data
82 82 structure, including bundle2 part information.
83 83 """
84 84 def parseparams(s):
85 85 if ';' not in s:
86 86 return s, {}
87 87
88 88 params = {}
89 89 version, paramstr = s.split(';', 1)
90 90
91 91 for p in paramstr.split(';'):
92 92 if '=' not in p:
93 93 raise error.InvalidBundleSpecification(
94 94 _('invalid bundle specification: '
95 95 'missing "=" in parameter: %s') % p)
96 96
97 97 key, value = p.split('=', 1)
98 98 key = urlreq.unquote(key)
99 99 value = urlreq.unquote(value)
100 100 params[key] = value
101 101
102 102 return version, params
103 103
104 104
105 105 if strict and '-' not in spec:
106 106 raise error.InvalidBundleSpecification(
107 107 _('invalid bundle specification; '
108 108 'must be prefixed with compression: %s') % spec)
109 109
110 110 if '-' in spec:
111 111 compression, version = spec.split('-', 1)
112 112
113 113 if compression not in util.compengines.supportedbundlenames:
114 114 raise error.UnsupportedBundleSpecification(
115 115 _('%s compression is not supported') % compression)
116 116
117 117 version, params = parseparams(version)
118 118
119 119 if version not in _bundlespeccgversions:
120 120 raise error.UnsupportedBundleSpecification(
121 121 _('%s is not a recognized bundle version') % version)
122 122 else:
123 123 # Value could be just the compression or just the version, in which
124 124 # case some defaults are assumed (but only when not in strict mode).
125 125 assert not strict
126 126
127 127 spec, params = parseparams(spec)
128 128
129 129 if spec in util.compengines.supportedbundlenames:
130 130 compression = spec
131 131 version = 'v1'
132 132 # Generaldelta repos require v2.
133 133 if 'generaldelta' in repo.requirements:
134 134 version = 'v2'
135 135 # Modern compression engines require v2.
136 136 if compression not in _bundlespecv1compengines:
137 137 version = 'v2'
138 138 elif spec in _bundlespeccgversions:
139 139 if spec == 'packed1':
140 140 compression = 'none'
141 141 else:
142 142 compression = 'bzip2'
143 143 version = spec
144 144 else:
145 145 raise error.UnsupportedBundleSpecification(
146 146 _('%s is not a recognized bundle specification') % spec)
147 147
148 148 # Bundle version 1 only supports a known set of compression engines.
149 149 if version == 'v1' and compression not in _bundlespecv1compengines:
150 150 raise error.UnsupportedBundleSpecification(
151 151 _('compression engine %s is not supported on v1 bundles') %
152 152 compression)
153 153
154 154 # The specification for packed1 can optionally declare the data formats
155 155 # required to apply it. If we see this metadata, compare against what the
156 156 # repo supports and error if the bundle isn't compatible.
157 157 if version == 'packed1' and 'requirements' in params:
158 158 requirements = set(params['requirements'].split(','))
159 159 missingreqs = requirements - repo.supportedformats
160 160 if missingreqs:
161 161 raise error.UnsupportedBundleSpecification(
162 162 _('missing support for repository features: %s') %
163 163 ', '.join(sorted(missingreqs)))
164 164
165 165 if not externalnames:
166 166 engine = util.compengines.forbundlename(compression)
167 167 compression = engine.bundletype()[1]
168 168 version = _bundlespeccgversions[version]
169 169 return compression, version, params
170 170
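# --- illustrative sketch: hypothetical parses, assuming a repo whose
# requirements include 'generaldelta' ---
comp, cgversion, params = parsebundlespec(repo, 'gzip-v2;exp=1')
# -> ('GZ', '02', {'exp': '1'}): external names are mapped to the internal
#    compression name ('GZ') and changegroup version ('02').
comp, cgversion, params = parsebundlespec(repo, 'bzip2', strict=False)
# -> ('BZ', '02', {}): a compression-only spec defaults to v1, but the
#    generaldelta requirement bumps the version to v2.
# --- end sketch ---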
171 171 def readbundle(ui, fh, fname, vfs=None):
172 172 header = changegroup.readexactly(fh, 4)
173 173
174 174 alg = None
175 175 if not fname:
176 176 fname = "stream"
177 177 if not header.startswith('HG') and header.startswith('\0'):
178 178 fh = changegroup.headerlessfixup(fh, header)
179 179 header = "HG10"
180 180 alg = 'UN'
181 181 elif vfs:
182 182 fname = vfs.join(fname)
183 183
184 184 magic, version = header[0:2], header[2:4]
185 185
186 186 if magic != 'HG':
187 187 raise error.Abort(_('%s: not a Mercurial bundle') % fname)
188 188 if version == '10':
189 189 if alg is None:
190 190 alg = changegroup.readexactly(fh, 2)
191 191 return changegroup.cg1unpacker(fh, alg)
192 192 elif version.startswith('2'):
193 193 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
194 194 elif version == 'S1':
195 195 return streamclone.streamcloneapplier(fh)
196 196 else:
197 197 raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))
198 198
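# --- illustrative sketch: the header shapes dispatched above ---
#   'HG10' + 2-byte compression ('BZ', 'GZ' or 'UN') -> cg1unpacker
#   'HG2' + version (e.g. 'HG20')                    -> bundle2 unbundler
#   'HGS1' (stream clone, a.k.a. packed1)            -> streamcloneapplier
# A headerless stream beginning with '\0' is fixed up as 'HG10' with the
# 'UN' (uncompressed) algorithm.
# --- end sketch ---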
199 199 def getbundlespec(ui, fh):
200 200 """Infer the bundlespec from a bundle file handle.
201 201
202 202 The input file handle is seeked and the original seek position is not
203 203 restored.
204 204 """
205 205 def speccompression(alg):
206 206 try:
207 207 return util.compengines.forbundletype(alg).bundletype()[0]
208 208 except KeyError:
209 209 return None
210 210
211 211 b = readbundle(ui, fh, None)
212 212 if isinstance(b, changegroup.cg1unpacker):
213 213 alg = b._type
214 214 if alg == '_truncatedBZ':
215 215 alg = 'BZ'
216 216 comp = speccompression(alg)
217 217 if not comp:
218 218 raise error.Abort(_('unknown compression algorithm: %s') % alg)
219 219 return '%s-v1' % comp
220 220 elif isinstance(b, bundle2.unbundle20):
221 221 if 'Compression' in b.params:
222 222 comp = speccompression(b.params['Compression'])
223 223 if not comp:
224 224 raise error.Abort(_('unknown compression algorithm: %s') % b.params['Compression'])
225 225 else:
226 226 comp = 'none'
227 227
228 228 version = None
229 229 for part in b.iterparts():
230 230 if part.type == 'changegroup':
231 231 version = part.params['version']
232 232 if version in ('01', '02'):
233 233 version = 'v2'
234 234 else:
235 235 raise error.Abort(_('changegroup version %s does not have '
236 236 'a known bundlespec') % version,
237 237 hint=_('try upgrading your Mercurial '
238 238 'client'))
239 239
240 240 if not version:
241 241 raise error.Abort(_('could not identify changegroup version in '
242 242 'bundle'))
243 243
244 244 return '%s-%s' % (comp, version)
245 245 elif isinstance(b, streamclone.streamcloneapplier):
246 246 requirements = streamclone.readbundle1header(fh)[2]
247 247 params = 'requirements=%s' % ','.join(sorted(requirements))
248 248 return 'none-packed1;%s' % urlreq.quote(params)
249 249 else:
250 250 raise error.Abort(_('unknown bundle type: %s') % b)
251 251
252 252 def _computeoutgoing(repo, heads, common):
253 253 """Computes which revs are outgoing given a set of common
254 254 and a set of heads.
255 255
256 256 This is a separate function so extensions can have access to
257 257 the logic.
258 258
259 259 Returns a discovery.outgoing object.
260 260 """
261 261 cl = repo.changelog
262 262 if common:
263 263 hasnode = cl.hasnode
264 264 common = [n for n in common if hasnode(n)]
265 265 else:
266 266 common = [nullid]
267 267 if not heads:
268 268 heads = cl.heads()
269 269 return discovery.outgoing(repo, common, heads)
270 270
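# --- illustrative sketch (hypothetical caller): with neither heads nor
# common supplied, everything is considered outgoing ---
og = _computeoutgoing(repo, heads=None, common=None)
# equivalent to discovery.outgoing(repo, [nullid], repo.changelog.heads())
# --- end sketch ---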
271 271 def _forcebundle1(op):
272 272 """return true if a pull/push must use bundle1
273 273
274 274 This function is used to allow testing of the older bundle version"""
275 275 ui = op.repo.ui
276 276 forcebundle1 = False
277 277 # The goal of this config is to allow developers to choose the bundle
278 278 # version used during exchange. This is especially handy during tests.
279 279 # Value is a list of bundle version to be picked from, highest version
280 280 # should be used.
281 281 #
282 282 # developer config: devel.legacy.exchange
283 283 exchange = ui.configlist('devel', 'legacy.exchange')
284 284 forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
285 285 return forcebundle1 or not op.remote.capable('bundle2')
286 286
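# --- illustrative sketch: exercising the legacy path in a test, via the
# developer config documented above, e.g. with this hgrc snippet ---
#   [devel]
#   legacy.exchange = bundle1
# With 'bundle1' listed and 'bundle2' absent, _forcebundle1() returns True
# and the bundle1 push/pull code paths below are used.
# --- end sketch ---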
287 287 class pushoperation(object):
288 288 """A object that represent a single push operation
289 289
290 290 Its purpose is to carry push related state and very common operations.
291 291
292 292 A new pushoperation should be created at the beginning of each push and
293 293 discarded afterward.
294 294 """
295 295
296 296 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
297 297 bookmarks=(), pushvars=None):
298 298 # repo we push from
299 299 self.repo = repo
300 300 self.ui = repo.ui
301 301 # repo we push to
302 302 self.remote = remote
303 303 # force option provided
304 304 self.force = force
305 305 # revs to be pushed (None is "all")
306 306 self.revs = revs
307 307 # bookmarks explicitly pushed
308 308 self.bookmarks = bookmarks
309 309 # allow push of new branch
310 310 self.newbranch = newbranch
311 311 # steps already performed
312 312 # (used to check what steps have already been performed through bundle2)
313 313 self.stepsdone = set()
314 314 # Integer version of the changegroup push result
315 315 # - None means nothing to push
316 316 # - 0 means HTTP error
317 317 # - 1 means we pushed and remote head count is unchanged *or*
318 318 # we have outgoing changesets but refused to push
319 319 # - other values as described by addchangegroup()
320 320 self.cgresult = None
321 321 # Boolean value for the bookmark push
322 322 self.bkresult = None
323 323 # discovery.outgoing object (contains common and outgoing data)
324 324 self.outgoing = None
325 325 # all remote topological heads before the push
326 326 self.remoteheads = None
327 327 # Details of the remote branch pre and post push
328 328 #
329 329 # mapping: {'branch': ([remoteheads],
330 330 # [newheads],
331 331 # [unsyncedheads],
332 332 # [discardedheads])}
333 333 # - branch: the branch name
334 334 # - remoteheads: the list of remote heads known locally
335 335 # None if the branch is new
336 336 # - newheads: the new remote heads (known locally) with outgoing pushed
337 337 # - unsyncedheads: the list of remote heads unknown locally.
338 338 # - discardedheads: the list of remote heads made obsolete by the push
339 339 self.pushbranchmap = None
340 340 # testable as a boolean indicating if any nodes are missing locally.
341 341 self.incoming = None
342 342 # phase changes that must be pushed alongside the changesets
343 343 self.outdatedphases = None
344 344 # phase changes that must be pushed if the changeset push fails
345 345 self.fallbackoutdatedphases = None
346 346 # outgoing obsmarkers
347 347 self.outobsmarkers = set()
348 348 # outgoing bookmarks
349 349 self.outbookmarks = []
350 350 # transaction manager
351 351 self.trmanager = None
352 352 # map { pushkey partid -> callback handling failure}
353 353 # used to handle exception from mandatory pushkey part failure
354 354 self.pkfailcb = {}
355 355 # an iterable of pushvars or None
356 356 self.pushvars = pushvars
357 357
358 358 @util.propertycache
359 359 def futureheads(self):
360 360 """future remote heads if the changeset push succeeds"""
361 361 return self.outgoing.missingheads
362 362
363 363 @util.propertycache
364 364 def fallbackheads(self):
365 365 """future remote heads if the changeset push fails"""
366 366 if self.revs is None:
367 367 # no targets to push, all common heads are relevant
368 368 return self.outgoing.commonheads
369 369 unfi = self.repo.unfiltered()
370 370 # I want cheads = heads(::missingheads and ::commonheads)
371 371 # (missingheads is revs with secret changeset filtered out)
372 372 #
373 373 # This can be expressed as:
374 374 # cheads = ( (missingheads and ::commonheads)
375 375 # + (commonheads and ::missingheads))
376 376 #
377 377 #
378 378 # while trying to push we already computed the following:
379 379 # common = (::commonheads)
380 380 # missing = ((commonheads::missingheads) - commonheads)
381 381 #
382 382 # We can pick:
383 383 # * missingheads part of common (::commonheads)
384 384 common = self.outgoing.common
385 385 nm = self.repo.changelog.nodemap
386 386 cheads = [node for node in self.revs if nm[node] in common]
387 387 # and
388 388 # * commonheads parents on missing
389 389 revset = unfi.set('%ln and parents(roots(%ln))',
390 390 self.outgoing.commonheads,
391 391 self.outgoing.missing)
392 392 cheads.extend(c.node() for c in revset)
393 393 return cheads
394 394
395 395 @property
396 396 def commonheads(self):
397 397 """set of all common heads after changeset bundle push"""
398 398 if self.cgresult:
399 399 return self.futureheads
400 400 else:
401 401 return self.fallbackheads
402 402
403 403 # mapping of messages used when pushing bookmarks
404 404 bookmsgmap = {'update': (_("updating bookmark %s\n"),
405 405 _('updating bookmark %s failed!\n')),
406 406 'export': (_("exporting bookmark %s\n"),
407 407 _('exporting bookmark %s failed!\n')),
408 408 'delete': (_("deleting remote bookmark %s\n"),
409 409 _('deleting remote bookmark %s failed!\n')),
410 410 }
411 411
412 412
413 413 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
414 414 opargs=None):
415 415 '''Push outgoing changesets (limited by revs) from a local
416 416 repository to remote. Return an integer:
417 417 - None means nothing to push
418 418 - 0 means HTTP error
419 419 - 1 means we pushed and remote head count is unchanged *or*
420 420 we have outgoing changesets but refused to push
421 421 - other values as described by addchangegroup()
422 422 '''
423 423 if opargs is None:
424 424 opargs = {}
425 425 pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
426 426 **opargs)
427 427 if pushop.remote.local():
428 428 missing = (set(pushop.repo.requirements)
429 429 - pushop.remote.local().supported)
430 430 if missing:
431 431 msg = _("required features are not"
432 432 " supported in the destination:"
433 433 " %s") % (', '.join(sorted(missing)))
434 434 raise error.Abort(msg)
435 435
436 436 if not pushop.remote.canpush():
437 437 raise error.Abort(_("destination does not support push"))
438 438
439 439 if not pushop.remote.capable('unbundle'):
440 440 raise error.Abort(_('cannot push: destination does not support the '
441 441 'unbundle wire protocol command'))
442 442
443 443 # get lock as we might write phase data
444 444 wlock = lock = None
445 445 try:
446 446 # bundle2 push may receive a reply bundle touching bookmarks or other
447 447 # things requiring the wlock. Take it now to ensure proper ordering.
448 448 maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
449 449 if (not _forcebundle1(pushop)) and maypushback:
450 450 wlock = pushop.repo.wlock()
451 451 lock = pushop.repo.lock()
452 452 pushop.trmanager = transactionmanager(pushop.repo,
453 453 'push-response',
454 454 pushop.remote.url())
455 455 except IOError as err:
456 456 if err.errno != errno.EACCES:
457 457 raise
458 458 # source repo cannot be locked.
459 459 # We do not abort the push, but just disable the local phase
460 460 # synchronisation.
461 461 msg = 'cannot lock source repository: %s\n' % err
462 462 pushop.ui.debug(msg)
463 463
464 464 with wlock or util.nullcontextmanager(), \
465 465 lock or util.nullcontextmanager(), \
466 466 pushop.trmanager or util.nullcontextmanager():
467 467 pushop.repo.checkpush(pushop)
468 468 _pushdiscovery(pushop)
469 469 if not _forcebundle1(pushop):
470 470 _pushbundle2(pushop)
471 471 _pushchangeset(pushop)
472 472 _pushsyncphase(pushop)
473 473 _pushobsolete(pushop)
474 474 _pushbookmark(pushop)
475 475
476 476 return pushop
477 477
478 478 # list of steps to perform discovery before push
479 479 pushdiscoveryorder = []
480 480
481 481 # Mapping between step name and function
482 482 #
483 483 # This exists to help extensions wrap steps if necessary
484 484 pushdiscoverymapping = {}
485 485
486 486 def pushdiscovery(stepname):
487 487 """decorator for function performing discovery before push
488 488
489 489 The function is added to the step -> function mapping and appended to the
490 490 list of steps. Beware that decorated functions will be added in order (this
491 491 may matter).
492 492
493 493 You can only use this decorator for a new step; if you want to wrap a step
494 494 from an extension, change the pushdiscoverymapping dictionary directly."""
495 495 def dec(func):
496 496 assert stepname not in pushdiscoverymapping
497 497 pushdiscoverymapping[stepname] = func
498 498 pushdiscoveryorder.append(stepname)
499 499 return func
500 500 return dec
501 501
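# --- illustrative sketch: how an extension could register an extra
# discovery step with the decorator above (step name and body are
# hypothetical) ---
@pushdiscovery('examplestep')
def _pushdiscoveryexample(pushop):
    pushop.ui.debug('example: extra discovery before push\n')
# Steps run in registration order when _pushdiscovery() iterates
# pushdiscoveryorder below.
# --- end sketch ---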
502 502 def _pushdiscovery(pushop):
503 503 """Run all discovery steps"""
504 504 for stepname in pushdiscoveryorder:
505 505 step = pushdiscoverymapping[stepname]
506 506 step(pushop)
507 507
508 508 @pushdiscovery('changeset')
509 509 def _pushdiscoverychangeset(pushop):
510 510 """discover the changeset that need to be pushed"""
511 511 fci = discovery.findcommonincoming
512 512 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
513 513 common, inc, remoteheads = commoninc
514 514 fco = discovery.findcommonoutgoing
515 515 outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
516 516 commoninc=commoninc, force=pushop.force)
517 517 pushop.outgoing = outgoing
518 518 pushop.remoteheads = remoteheads
519 519 pushop.incoming = inc
520 520
521 521 @pushdiscovery('phase')
522 522 def _pushdiscoveryphase(pushop):
523 523 """discover the phase that needs to be pushed
524 524
525 525 (computed for both success and failure case for changesets push)"""
526 526 outgoing = pushop.outgoing
527 527 unfi = pushop.repo.unfiltered()
528 528 remotephases = pushop.remote.listkeys('phases')
529 529 publishing = remotephases.get('publishing', False)
530 530 if (pushop.ui.configbool('ui', '_usedassubrepo')
531 531 and remotephases # server supports phases
532 532 and not pushop.outgoing.missing # no changesets to be pushed
533 533 and publishing):
534 534 # When:
535 535 # - this is a subrepo push
536 536 # - and the remote supports phases
537 537 # - and no changesets are to be pushed
538 538 # - and remote is publishing
539 539 # We may be in issue 3871 case!
540 540 # We drop the possible phase synchronisation done by
541 541 # courtesy to publish changesets possibly locally draft
542 542 # on the remote.
543 543 remotephases = {'publishing': 'True'}
544 544 ana = phases.analyzeremotephases(pushop.repo,
545 545 pushop.fallbackheads,
546 546 remotephases)
547 547 pheads, droots = ana
548 548 extracond = ''
549 549 if not publishing:
550 550 extracond = ' and public()'
551 551 revset = 'heads((%%ln::%%ln) %s)' % extracond
552 552 # Get the list of all revs draft on remote but public here.
553 553 # XXX Beware that the revset breaks if droots is not strictly a
554 554 # XXX set of roots; we may want to ensure it is, but that is costly
555 555 fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
556 556 if not outgoing.missing:
557 557 future = fallback
558 558 else:
559 559 # add the changesets we are going to push as draft
560 560 #
561 561 # should not be necessary for a publishing server, but because of an
562 562 # issue fixed in xxxxx we have to do it anyway.
563 563 fdroots = list(unfi.set('roots(%ln + %ln::)',
564 564 outgoing.missing, droots))
565 565 fdroots = [f.node() for f in fdroots]
566 566 future = list(unfi.set(revset, fdroots, pushop.futureheads))
567 567 pushop.outdatedphases = future
568 568 pushop.fallbackoutdatedphases = fallback
569 569
570 570 @pushdiscovery('obsmarker')
571 571 def _pushdiscoveryobsmarkers(pushop):
572 572 if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
573 573 and pushop.repo.obsstore
574 574 and 'obsolete' in pushop.remote.listkeys('namespaces')):
575 575 repo = pushop.repo
576 576 # very naive computation, that can be quite expensive on big repo.
577 577 # However: evolution is currently slow on them anyway.
578 578 nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
579 579 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
580 580
581 581 @pushdiscovery('bookmarks')
582 582 def _pushdiscoverybookmarks(pushop):
583 583 ui = pushop.ui
584 584 repo = pushop.repo.unfiltered()
585 585 remote = pushop.remote
586 586 ui.debug("checking for updated bookmarks\n")
587 587 ancestors = ()
588 588 if pushop.revs:
589 589 revnums = map(repo.changelog.rev, pushop.revs)
590 590 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
591 591 remotebookmark = remote.listkeys('bookmarks')
592 592
593 593 explicit = set([repo._bookmarks.expandname(bookmark)
594 594 for bookmark in pushop.bookmarks])
595 595
596 596 remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
597 597 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
598 598
599 599 def safehex(x):
600 600 if x is None:
601 601 return x
602 602 return hex(x)
603 603
604 604 def hexifycompbookmarks(bookmarks):
605 605 for b, scid, dcid in bookmarks:
606 606 yield b, safehex(scid), safehex(dcid)
607 607
608 608 comp = [hexifycompbookmarks(marks) for marks in comp]
609 609 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
610 610
611 611 for b, scid, dcid in advsrc:
612 612 if b in explicit:
613 613 explicit.remove(b)
614 614 if not ancestors or repo[scid].rev() in ancestors:
615 615 pushop.outbookmarks.append((b, dcid, scid))
616 616 # search added bookmark
617 617 for b, scid, dcid in addsrc:
618 618 if b in explicit:
619 619 explicit.remove(b)
620 620 pushop.outbookmarks.append((b, '', scid))
621 621 # search for overwritten bookmark
622 622 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
623 623 if b in explicit:
624 624 explicit.remove(b)
625 625 pushop.outbookmarks.append((b, dcid, scid))
626 626 # search for bookmark to delete
627 627 for b, scid, dcid in adddst:
628 628 if b in explicit:
629 629 explicit.remove(b)
630 630 # treat as "deleted locally"
631 631 pushop.outbookmarks.append((b, dcid, ''))
632 632 # identical bookmarks shouldn't get reported
633 633 for b, scid, dcid in same:
634 634 if b in explicit:
635 635 explicit.remove(b)
636 636
637 637 if explicit:
638 638 explicit = sorted(explicit)
639 639 # we should probably list all of them
640 640 ui.warn(_('bookmark %s does not exist on the local '
641 641 'or remote repository!\n') % explicit[0])
642 642 pushop.bkresult = 2
643 643
644 644 pushop.outbookmarks.sort()
645 645
646 646 def _pushcheckoutgoing(pushop):
647 647 outgoing = pushop.outgoing
648 648 unfi = pushop.repo.unfiltered()
649 649 if not outgoing.missing:
650 650 # nothing to push
651 651 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
652 652 return False
653 653 # something to push
654 654 if not pushop.force:
655 655 # if repo.obsstore is empty --> no obsolete markers,
656 656 # so we can save ourselves the iteration
657 657 if unfi.obsstore:
658 658 # these messages are here for 80-char-limit reasons
659 659 mso = _("push includes obsolete changeset: %s!")
660 660 mspd = _("push includes phase-divergent changeset: %s!")
661 661 mscd = _("push includes content-divergent changeset: %s!")
662 662 mst = {"orphan": _("push includes orphan changeset: %s!"),
663 663 "phase-divergent": mspd,
664 664 "content-divergent": mscd}
665 665 # If there is at least one obsolete or unstable
666 666 # changeset in missing, then at least one of the
667 667 # missing heads will be obsolete or unstable.
668 668 # So checking heads only is ok
669 669 for node in outgoing.missingheads:
670 670 ctx = unfi[node]
671 671 if ctx.obsolete():
672 672 raise error.Abort(mso % ctx)
673 673 elif ctx.isunstable():
674 674 # TODO print more than one instability in the abort
675 675 # message
676 676 raise error.Abort(mst[ctx.instabilities()[0]] % ctx)
677 677
678 678 discovery.checkheads(pushop)
679 679 return True
680 680
681 681 # List of names of steps to perform for an outgoing bundle2, order matters.
682 682 b2partsgenorder = []
683 683
684 684 # Mapping between step name and function
685 685 #
686 686 # This exists to help extensions wrap steps if necessary
687 687 b2partsgenmapping = {}
688 688
689 689 def b2partsgenerator(stepname, idx=None):
690 690 """decorator for function generating bundle2 part
691 691
692 692 The function is added to the step -> function mapping and appended to the
693 693 list of steps. Beware that decorated functions will be added in order
694 694 (this may matter).
695 695
696 696 You can only use this decorator for new steps; if you want to wrap a step
697 697 from an extension, change the b2partsgenmapping dictionary directly."""
698 698 def dec(func):
699 699 assert stepname not in b2partsgenmapping
700 700 b2partsgenmapping[stepname] = func
701 701 if idx is None:
702 702 b2partsgenorder.append(stepname)
703 703 else:
704 704 b2partsgenorder.insert(idx, stepname)
705 705 return func
706 706 return dec
707 707
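# --- illustrative sketch: a hypothetical part generator registered via
# the decorator above. A generator may return a callable to handle the
# server's reply (see _pushbundle2 below); this one does not. With
# mandatory=False, a server that does not know the part simply ignores it.
@b2partsgenerator('examplepart')
def _pushb2example(pushop, bundler):
    if 'examplepart' in pushop.stepsdone:
        return
    pushop.stepsdone.add('examplepart')
    bundler.newpart('exampledata', data='example\n', mandatory=False)
# --- end sketch ---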
708 708 def _pushb2ctxcheckheads(pushop, bundler):
709 709 """Generate race condition checking parts
710 710
711 711 Exists as an independent function to aid extensions
712 712 """
713 713 # * 'force' does not check for push races,
714 714 # * if we don't push anything, there is nothing to check.
715 715 if not pushop.force and pushop.outgoing.missingheads:
716 716 allowunrelated = 'related' in bundler.capabilities.get('checkheads', ())
717 717 emptyremote = pushop.pushbranchmap is None
718 718 if not allowunrelated or emptyremote:
719 719 bundler.newpart('check:heads', data=iter(pushop.remoteheads))
720 720 else:
721 721 affected = set()
722 722 for branch, heads in pushop.pushbranchmap.iteritems():
723 723 remoteheads, newheads, unsyncedheads, discardedheads = heads
724 724 if remoteheads is not None:
725 725 remote = set(remoteheads)
726 726 affected |= set(discardedheads) & remote
727 727 affected |= remote - set(newheads)
728 728 if affected:
729 729 data = iter(sorted(affected))
730 730 bundler.newpart('check:updated-heads', data=data)
731 731
732 732 @b2partsgenerator('changeset')
733 733 def _pushb2ctx(pushop, bundler):
734 734 """handle changegroup push through bundle2
735 735
736 736 addchangegroup result is stored in the ``pushop.cgresult`` attribute.
737 737 """
738 738 if 'changesets' in pushop.stepsdone:
739 739 return
740 740 pushop.stepsdone.add('changesets')
741 741 # Send known heads to the server for race detection.
742 742 if not _pushcheckoutgoing(pushop):
743 743 return
744 744 pushop.repo.prepushoutgoinghooks(pushop)
745 745
746 746 _pushb2ctxcheckheads(pushop, bundler)
747 747
748 748 b2caps = bundle2.bundle2caps(pushop.remote)
749 749 version = '01'
750 750 cgversions = b2caps.get('changegroup')
751 751 if cgversions: # 3.1 and 3.2 ship with an empty value
752 752 cgversions = [v for v in cgversions
753 753 if v in changegroup.supportedoutgoingversions(
754 754 pushop.repo)]
755 755 if not cgversions:
756 756 raise ValueError(_('no common changegroup version'))
757 757 version = max(cgversions)
758 758 cgstream = changegroup.makestream(pushop.repo, pushop.outgoing, version,
759 759 'push')
760 760 cgpart = bundler.newpart('changegroup', data=cgstream)
761 761 if cgversions:
762 762 cgpart.addparam('version', version)
763 763 if 'treemanifest' in pushop.repo.requirements:
764 764 cgpart.addparam('treemanifest', '1')
765 765 def handlereply(op):
766 766 """extract addchangegroup returns from server reply"""
767 767 cgreplies = op.records.getreplies(cgpart.id)
768 768 assert len(cgreplies['changegroup']) == 1
769 769 pushop.cgresult = cgreplies['changegroup'][0]['return']
770 770 return handlereply
771 771
772 772 @b2partsgenerator('phase')
773 773 def _pushb2phases(pushop, bundler):
774 774 """handle phase push through bundle2"""
775 775 if 'phases' in pushop.stepsdone:
776 776 return
777 777 b2caps = bundle2.bundle2caps(pushop.remote)
778 778 if not 'pushkey' in b2caps:
779 779 return
780 780 pushop.stepsdone.add('phases')
781 781 part2node = []
782 782
783 783 def handlefailure(pushop, exc):
784 784 targetid = int(exc.partid)
785 785 for partid, node in part2node:
786 786 if partid == targetid:
787 787 raise error.Abort(_('updating %s to public failed') % node)
788 788
789 789 enc = pushkey.encode
790 790 for newremotehead in pushop.outdatedphases:
791 791 part = bundler.newpart('pushkey')
792 792 part.addparam('namespace', enc('phases'))
793 793 part.addparam('key', enc(newremotehead.hex()))
794 794 part.addparam('old', enc(str(phases.draft)))
795 795 part.addparam('new', enc(str(phases.public)))
796 796 part2node.append((part.id, newremotehead))
797 797 pushop.pkfailcb[part.id] = handlefailure
798 798
799 799 def handlereply(op):
800 800 for partid, node in part2node:
801 801 partrep = op.records.getreplies(partid)
802 802 results = partrep['pushkey']
803 803 assert len(results) <= 1
804 804 msg = None
805 805 if not results:
806 806 msg = _('server ignored update of %s to public!\n') % node
807 807 elif not int(results[0]['return']):
808 808 msg = _('updating %s to public failed!\n') % node
809 809 if msg is not None:
810 810 pushop.ui.warn(msg)
811 811 return handlereply
812 812
813 813 @b2partsgenerator('obsmarkers')
814 814 def _pushb2obsmarkers(pushop, bundler):
815 815 if 'obsmarkers' in pushop.stepsdone:
816 816 return
817 817 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
818 818 if obsolete.commonversion(remoteversions) is None:
819 819 return
820 820 pushop.stepsdone.add('obsmarkers')
821 821 if pushop.outobsmarkers:
822 822 markers = sorted(pushop.outobsmarkers)
823 823 bundle2.buildobsmarkerspart(bundler, markers)
824 824
825 825 @b2partsgenerator('bookmarks')
826 826 def _pushb2bookmarks(pushop, bundler):
827 827 """handle bookmark push through bundle2"""
828 828 if 'bookmarks' in pushop.stepsdone:
829 829 return
830 830 b2caps = bundle2.bundle2caps(pushop.remote)
831 831 if 'pushkey' not in b2caps:
832 832 return
833 833 pushop.stepsdone.add('bookmarks')
834 834 part2book = []
835 835 enc = pushkey.encode
836 836
837 837 def handlefailure(pushop, exc):
838 838 targetid = int(exc.partid)
839 839 for partid, book, action in part2book:
840 840 if partid == targetid:
841 841 raise error.Abort(bookmsgmap[action][1].rstrip() % book)
842 842 # we should not be called for parts we did not generate
843 843 assert False
844 844
845 845 for book, old, new in pushop.outbookmarks:
846 846 part = bundler.newpart('pushkey')
847 847 part.addparam('namespace', enc('bookmarks'))
848 848 part.addparam('key', enc(book))
849 849 part.addparam('old', enc(old))
850 850 part.addparam('new', enc(new))
851 851 action = 'update'
852 852 if not old:
853 853 action = 'export'
854 854 elif not new:
855 855 action = 'delete'
856 856 part2book.append((part.id, book, action))
857 857 pushop.pkfailcb[part.id] = handlefailure
858 858
859 859 def handlereply(op):
860 860 ui = pushop.ui
861 861 for partid, book, action in part2book:
862 862 partrep = op.records.getreplies(partid)
863 863 results = partrep['pushkey']
864 864 assert len(results) <= 1
865 865 if not results:
866 866 pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
867 867 else:
868 868 ret = int(results[0]['return'])
869 869 if ret:
870 870 ui.status(bookmsgmap[action][0] % book)
871 871 else:
872 872 ui.warn(bookmsgmap[action][1] % book)
873 873 if pushop.bkresult is not None:
874 874 pushop.bkresult = 1
875 875 return handlereply
876 876
877 877 @b2partsgenerator('pushvars', idx=0)
878 878 def _getbundlesendvars(pushop, bundler):
879 879 '''send shellvars via bundle2'''
880 880 pushvars = pushop.pushvars
881 881 if pushvars:
882 882 shellvars = {}
883 883 for raw in pushvars:
884 884 if '=' not in raw:
885 885 msg = ("unable to parse variable '%s', should follow "
886 886 "'KEY=VALUE' or 'KEY=' format")
887 887 raise error.Abort(msg % raw)
888 888 k, v = raw.split('=', 1)
889 889 shellvars[k] = v
890 890
891 891 part = bundler.newpart('pushvars')
892 892
893 893 for key, value in shellvars.iteritems():
894 894 part.addparam(key, value, mandatory=False)
895 895
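# --- illustrative sketch: where the pushvars above typically originate
# (hypothetical variables; the server must opt in to receiving them) ---
#   $ hg push --pushvars "DEBUG=1" --pushvars "BYPASS_REVIEW="
# Each entry is split on its first '=', and server-side hooks then see
# the values as HG_USERVAR_DEBUG and HG_USERVAR_BYPASS_REVIEW.
# --- end sketch ---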
896 896 def _pushbundle2(pushop):
897 897 """push data to the remote using bundle2
898 898
899 899 The only currently supported type of data is changegroup but this will
900 900 evolve in the future."""
901 901 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
902 902 pushback = (pushop.trmanager
903 903 and pushop.ui.configbool('experimental', 'bundle2.pushback'))
904 904
905 905 # create reply capability
906 906 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
907 907 allowpushback=pushback))
908 908 bundler.newpart('replycaps', data=capsblob)
909 909 replyhandlers = []
910 910 for partgenname in b2partsgenorder:
911 911 partgen = b2partsgenmapping[partgenname]
912 912 ret = partgen(pushop, bundler)
913 913 if callable(ret):
914 914 replyhandlers.append(ret)
915 915 # do not push if nothing to push
916 916 if bundler.nbparts <= 1:
917 917 return
918 918 stream = util.chunkbuffer(bundler.getchunks())
919 919 try:
920 920 try:
921 921 reply = pushop.remote.unbundle(
922 922 stream, ['force'], pushop.remote.url())
923 923 except error.BundleValueError as exc:
924 924 raise error.Abort(_('missing support for %s') % exc)
925 925 try:
926 926 trgetter = None
927 927 if pushback:
928 928 trgetter = pushop.trmanager.transaction
929 929 op = bundle2.processbundle(pushop.repo, reply, trgetter)
930 930 except error.BundleValueError as exc:
931 931 raise error.Abort(_('missing support for %s') % exc)
932 932 except bundle2.AbortFromPart as exc:
933 933 pushop.ui.status(_('remote: %s\n') % exc)
934 934 if exc.hint is not None:
935 935 pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
936 936 raise error.Abort(_('push failed on remote'))
937 937 except error.PushkeyFailed as exc:
938 938 partid = int(exc.partid)
939 939 if partid not in pushop.pkfailcb:
940 940 raise
941 941 pushop.pkfailcb[partid](pushop, exc)
942 942 for rephand in replyhandlers:
943 943 rephand(op)
944 944
945 945 def _pushchangeset(pushop):
946 946 """Make the actual push of changeset bundle to remote repo"""
947 947 if 'changesets' in pushop.stepsdone:
948 948 return
949 949 pushop.stepsdone.add('changesets')
950 950 if not _pushcheckoutgoing(pushop):
951 951 return
952 952
953 953 # Should have verified this in push().
954 954 assert pushop.remote.capable('unbundle')
955 955
956 956 pushop.repo.prepushoutgoinghooks(pushop)
957 957 outgoing = pushop.outgoing
958 958 # TODO: get bundlecaps from remote
959 959 bundlecaps = None
960 960 # create a changegroup from local
961 961 if pushop.revs is None and not (outgoing.excluded
962 962 or pushop.repo.changelog.filteredrevs):
963 963 # push everything,
964 964 # use the fast path, no race possible on push
965 965 cg = changegroup.makechangegroup(pushop.repo, outgoing, '01', 'push',
966 966 fastpath=True, bundlecaps=bundlecaps)
967 967 else:
968 cg = changegroup.getchangegroup(pushop.repo, 'push', outgoing,
969 bundlecaps=bundlecaps)
968 cg = changegroup.makechangegroup(pushop.repo, outgoing, '01',
969 'push', bundlecaps=bundlecaps)
970 970
971 971 # apply changegroup to remote
972 972 # local repo finds heads on server, finds out what
973 973 # revs it must push. once revs transferred, if server
974 974 # finds it has different heads (someone else won
975 975 # commit/push race), server aborts.
976 976 if pushop.force:
977 977 remoteheads = ['force']
978 978 else:
979 979 remoteheads = pushop.remoteheads
980 980 # ssh: return remote's addchangegroup()
981 981 # http: return remote's addchangegroup() or 0 for error
982 982 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
983 983 pushop.repo.url())
984 984
985 985 def _pushsyncphase(pushop):
986 986 """synchronise phase information locally and remotely"""
987 987 cheads = pushop.commonheads
988 988 # even when we don't push, exchanging phase data is useful
989 989 remotephases = pushop.remote.listkeys('phases')
990 990 if (pushop.ui.configbool('ui', '_usedassubrepo')
991 991 and remotephases # server supports phases
992 992 and pushop.cgresult is None # nothing was pushed
993 993 and remotephases.get('publishing', False)):
994 994 # When:
995 995 # - this is a subrepo push
996 996 # - and the remote supports phases
997 997 # - and no changeset was pushed
998 998 # - and remote is publishing
999 999 # We may be in issue 3871 case!
1000 1000 # We drop the possible phase synchronisation done by
1001 1001 # courtesy to publish changesets possibly locally draft
1002 1002 # on the remote.
1003 1003 remotephases = {'publishing': 'True'}
1004 1004 if not remotephases: # old server or public-only reply from non-publishing
1005 1005 _localphasemove(pushop, cheads)
1006 1006 # don't push any phase data as there is nothing to push
1007 1007 else:
1008 1008 ana = phases.analyzeremotephases(pushop.repo, cheads,
1009 1009 remotephases)
1010 1010 pheads, droots = ana
1011 1011 ### Apply remote phase on local
1012 1012 if remotephases.get('publishing', False):
1013 1013 _localphasemove(pushop, cheads)
1014 1014 else: # publish = False
1015 1015 _localphasemove(pushop, pheads)
1016 1016 _localphasemove(pushop, cheads, phases.draft)
1017 1017 ### Apply local phase on remote
1018 1018
1019 1019 if pushop.cgresult:
1020 1020 if 'phases' in pushop.stepsdone:
1021 1021 # phases already pushed though bundle2
1022 1022 return
1023 1023 outdated = pushop.outdatedphases
1024 1024 else:
1025 1025 outdated = pushop.fallbackoutdatedphases
1026 1026
1027 1027 pushop.stepsdone.add('phases')
1028 1028
1029 1029 # filter heads already turned public by the push
1030 1030 outdated = [c for c in outdated if c.node() not in pheads]
1031 1031 # fallback to independent pushkey command
1032 1032 for newremotehead in outdated:
1033 1033 r = pushop.remote.pushkey('phases',
1034 1034 newremotehead.hex(),
1035 1035 str(phases.draft),
1036 1036 str(phases.public))
1037 1037 if not r:
1038 1038 pushop.ui.warn(_('updating %s to public failed!\n')
1039 1039 % newremotehead)
1040 1040
1041 1041 def _localphasemove(pushop, nodes, phase=phases.public):
1042 1042 """move <nodes> to <phase> in the local source repo"""
1043 1043 if pushop.trmanager:
1044 1044 phases.advanceboundary(pushop.repo,
1045 1045 pushop.trmanager.transaction(),
1046 1046 phase,
1047 1047 nodes)
1048 1048 else:
1049 1049 # repo is not locked, do not change any phases!
1050 1050 # Informs the user that phases should have been moved when
1051 1051 # applicable.
1052 1052 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
1053 1053 phasestr = phases.phasenames[phase]
1054 1054 if actualmoves:
1055 1055 pushop.ui.status(_('cannot lock source repo, skipping '
1056 1056 'local %s phase update\n') % phasestr)
1057 1057
1058 1058 def _pushobsolete(pushop):
1059 1059 """utility function to push obsolete markers to a remote"""
1060 1060 if 'obsmarkers' in pushop.stepsdone:
1061 1061 return
1062 1062 repo = pushop.repo
1063 1063 remote = pushop.remote
1064 1064 pushop.stepsdone.add('obsmarkers')
1065 1065 if pushop.outobsmarkers:
1066 1066 pushop.ui.debug('try to push obsolete markers to remote\n')
1067 1067 rslts = []
1068 1068 remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
1069 1069 for key in sorted(remotedata, reverse=True):
1070 1070 # reverse sort to ensure we end with dump0
1071 1071 data = remotedata[key]
1072 1072 rslts.append(remote.pushkey('obsolete', key, '', data))
1073 1073 if [r for r in rslts if not r]:
1074 1074 msg = _('failed to push some obsolete markers!\n')
1075 1075 repo.ui.warn(msg)
1076 1076
1077 1077 def _pushbookmark(pushop):
1078 1078 """Update bookmark position on remote"""
1079 1079 if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
1080 1080 return
1081 1081 pushop.stepsdone.add('bookmarks')
1082 1082 ui = pushop.ui
1083 1083 remote = pushop.remote
1084 1084
1085 1085 for b, old, new in pushop.outbookmarks:
1086 1086 action = 'update'
1087 1087 if not old:
1088 1088 action = 'export'
1089 1089 elif not new:
1090 1090 action = 'delete'
1091 1091 if remote.pushkey('bookmarks', b, old, new):
1092 1092 ui.status(bookmsgmap[action][0] % b)
1093 1093 else:
1094 1094 ui.warn(bookmsgmap[action][1] % b)
1095 1095 # discovery can have set the value from an invalid entry
1096 1096 if pushop.bkresult is not None:
1097 1097 pushop.bkresult = 1
1098 1098
1099 1099 class pulloperation(object):
1100 1100 """A object that represent a single pull operation
1101 1101
1102 1102 It purpose is to carry pull related state and very common operation.
1103 1103
1104 1104 A new should be created at the beginning of each pull and discarded
1105 1105 afterward.
1106 1106 """
1107 1107
1108 1108 def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
1109 1109 remotebookmarks=None, streamclonerequested=None):
1110 1110 # repo we pull into
1111 1111 self.repo = repo
1112 1112 # repo we pull from
1113 1113 self.remote = remote
1114 1114 # revisions we try to pull (None is "all")
1115 1115 self.heads = heads
1116 1116 # bookmarks pulled explicitly
1117 1117 self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
1118 1118 for bookmark in bookmarks]
1119 1119 # do we force pull?
1120 1120 self.force = force
1121 1121 # whether a streaming clone was requested
1122 1122 self.streamclonerequested = streamclonerequested
1123 1123 # transaction manager
1124 1124 self.trmanager = None
1125 1125 # set of common changesets between local and remote before pull
1126 1126 self.common = None
1127 1127 # set of pulled heads
1128 1128 self.rheads = None
1129 1129 # list of missing changesets to fetch remotely
1130 1130 self.fetch = None
1131 1131 # remote bookmarks data
1132 1132 self.remotebookmarks = remotebookmarks
1133 1133 # result of changegroup pulling (used as return code by pull)
1134 1134 self.cgresult = None
1135 1135 # list of steps already done
1136 1136 self.stepsdone = set()
1137 1137 # Whether we attempted a clone from pre-generated bundles.
1138 1138 self.clonebundleattempted = False
1139 1139
1140 1140 @util.propertycache
1141 1141 def pulledsubset(self):
1142 1142 """heads of the set of changeset target by the pull"""
1143 1143 # compute target subset
1144 1144 if self.heads is None:
1145 1145 # We pulled everything possible
1146 1146 # sync on everything common
1147 1147 c = set(self.common)
1148 1148 ret = list(self.common)
1149 1149 for n in self.rheads:
1150 1150 if n not in c:
1151 1151 ret.append(n)
1152 1152 return ret
1153 1153 else:
1154 1154 # We pulled a specific subset
1155 1155 # sync on this subset
1156 1156 return self.heads
1157 1157
1158 1158 @util.propertycache
1159 1159 def canusebundle2(self):
1160 1160 return not _forcebundle1(self)
1161 1161
1162 1162 @util.propertycache
1163 1163 def remotebundle2caps(self):
1164 1164 return bundle2.bundle2caps(self.remote)
1165 1165
1166 1166 def gettransaction(self):
1167 1167 # deprecated; talk to trmanager directly
1168 1168 return self.trmanager.transaction()
1169 1169
1170 1170 class transactionmanager(util.transactional):
1171 1171 """An object to manage the life cycle of a transaction
1172 1172
1173 1173 It creates the transaction on demand and calls the appropriate hooks when
1174 1174 closing the transaction."""
1175 1175 def __init__(self, repo, source, url):
1176 1176 self.repo = repo
1177 1177 self.source = source
1178 1178 self.url = url
1179 1179 self._tr = None
1180 1180
1181 1181 def transaction(self):
1182 1182 """Return an open transaction object, constructing if necessary"""
1183 1183 if not self._tr:
1184 1184 trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
1185 1185 self._tr = self.repo.transaction(trname)
1186 1186 self._tr.hookargs['source'] = self.source
1187 1187 self._tr.hookargs['url'] = self.url
1188 1188 return self._tr
1189 1189
1190 1190 def close(self):
1191 1191 """close transaction if created"""
1192 1192 if self._tr is not None:
1193 1193 self._tr.close()
1194 1194
1195 1195 def release(self):
1196 1196 """release transaction if created"""
1197 1197 if self._tr is not None:
1198 1198 self._tr.release()
1199 1199
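# --- illustrative sketch of the manager's life cycle (hypothetical
# caller; `repo` is a local repository, the URL is made up) ---
trmanager = transactionmanager(repo, 'pull', 'https://example.org/repo')
try:
    tr = trmanager.transaction()   # opened lazily on first request
    # ... apply incoming data under `tr` ...
    trmanager.close()              # commits the transaction, if any
finally:
    trmanager.release()            # rolls back if close() was not reached
# --- end sketch ---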
1200 1200 def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
1201 1201 streamclonerequested=None):
1202 1202 """Fetch repository data from a remote.
1203 1203
1204 1204 This is the main function used to retrieve data from a remote repository.
1205 1205
1206 1206 ``repo`` is the local repository to clone into.
1207 1207 ``remote`` is a peer instance.
1208 1208 ``heads`` is an iterable of revisions we want to pull. ``None`` (the
1209 1209 default) means to pull everything from the remote.
1210 1210 ``bookmarks`` is an iterable of bookmarks requesting to be pulled. By
1211 1211 default, all remote bookmarks are pulled.
1212 1212 ``opargs`` are additional keyword arguments to pass to ``pulloperation``
1213 1213 initialization.
1214 1214 ``streamclonerequested`` is a boolean indicating whether a "streaming
1215 1215 clone" is requested. A "streaming clone" is essentially a raw file copy
1216 1216 of revlogs from the server. This only works when the local repository is
1217 1217 empty. The default value of ``None`` means to respect the server
1218 1218 configuration for preferring stream clones.
1219 1219
1220 1220 Returns the ``pulloperation`` created for this pull.
1221 1221 """
1222 1222 if opargs is None:
1223 1223 opargs = {}
1224 1224 pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
1225 1225 streamclonerequested=streamclonerequested, **opargs)
1226 1226
1227 1227 peerlocal = pullop.remote.local()
1228 1228 if peerlocal:
1229 1229 missing = set(peerlocal.requirements) - pullop.repo.supported
1230 1230 if missing:
1231 1231 msg = _("required features are not"
1232 1232 " supported in the destination:"
1233 1233 " %s") % (', '.join(sorted(missing)))
1234 1234 raise error.Abort(msg)
1235 1235
1236 1236 wlock = lock = None
1237 1237 try:
1238 1238 wlock = pullop.repo.wlock()
1239 1239 lock = pullop.repo.lock()
1240 1240 pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
1241 1241 streamclone.maybeperformlegacystreamclone(pullop)
1242 1242 # This should ideally be in _pullbundle2(). However, it needs to run
1243 1243 # before discovery to avoid extra work.
1244 1244 _maybeapplyclonebundle(pullop)
1245 1245 _pulldiscovery(pullop)
1246 1246 if pullop.canusebundle2:
1247 1247 _pullbundle2(pullop)
1248 1248 _pullchangeset(pullop)
1249 1249 _pullphase(pullop)
1250 1250 _pullbookmarks(pullop)
1251 1251 _pullobsolete(pullop)
1252 1252 pullop.trmanager.close()
1253 1253 finally:
1254 1254 lockmod.release(pullop.trmanager, lock, wlock)
1255 1255
1256 1256 return pullop
1257 1257
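# --- illustrative sketch: a minimal end-to-end pull from caller code
# (hypothetical URL; assumes an existing local `repo`) ---
from mercurial import hg
remote = hg.peer(repo, {}, 'https://example.org/some-repo')
pullop = pull(repo, remote)   # heads=None: pull everything
if pullop.cgresult:
    repo.ui.status('new changesets pulled\n')
# --- end sketch ---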
1258 1258 # list of steps to perform discovery before pull
1259 1259 pulldiscoveryorder = []
1260 1260
1261 1261 # Mapping between step name and function
1262 1262 #
1263 1263 # This exists to help extensions wrap steps if necessary
1264 1264 pulldiscoverymapping = {}
1265 1265
1266 1266 def pulldiscovery(stepname):
1267 1267 """decorator for function performing discovery before pull
1268 1268
1269 1269 The function is added to the step -> function mapping and appended to the
1270 1270 list of steps. Beware that decorated function will be added in order (this
1271 1271 may matter).
1272 1272
1273 1273 You can only use this decorator for a new step, if you want to wrap a step
1274 1274 from an extension, change the pulldiscovery dictionary directly."""
1275 1275 def dec(func):
1276 1276 assert stepname not in pulldiscoverymapping
1277 1277 pulldiscoverymapping[stepname] = func
1278 1278 pulldiscoveryorder.append(stepname)
1279 1279 return func
1280 1280 return dec
1281 1281
1282 1282 def _pulldiscovery(pullop):
1283 1283 """Run all discovery steps"""
1284 1284 for stepname in pulldiscoveryorder:
1285 1285 step = pulldiscoverymapping[stepname]
1286 1286 step(pullop)
1287 1287
1288 1288 @pulldiscovery('b1:bookmarks')
1289 1289 def _pullbookmarkbundle1(pullop):
1290 1290 """fetch bookmark data in bundle1 case
1291 1291
1292 1292 If not using bundle2, we have to fetch bookmarks before changeset
1293 1293 discovery to reduce the chance and impact of race conditions."""
1294 1294 if pullop.remotebookmarks is not None:
1295 1295 return
1296 1296 if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
1297 1297 # all known bundle2 servers now support listkeys, but let's be nice to
1298 1298 # new implementations.
1299 1299 return
1300 1300 pullop.remotebookmarks = pullop.remote.listkeys('bookmarks')
1301 1301
1302 1302
1303 1303 @pulldiscovery('changegroup')
1304 1304 def _pulldiscoverychangegroup(pullop):
1305 1305 """discovery phase for the pull
1306 1306
1307 1307 Currently handles changeset discovery only; it will change to handle
1308 1308 all discovery at some point."""
1309 1309 tmp = discovery.findcommonincoming(pullop.repo,
1310 1310 pullop.remote,
1311 1311 heads=pullop.heads,
1312 1312 force=pullop.force)
1313 1313 common, fetch, rheads = tmp
1314 1314 nm = pullop.repo.unfiltered().changelog.nodemap
1315 1315 if fetch and rheads:
1316 1316 # If a remote head is filtered locally, let's drop it from the unknown
1317 1317 # remote heads and put it back in common.
1318 1318 #
1319 1319 # This is a hackish solution to catch most "common but locally
1320 1320 # hidden" situations. We do not perform discovery on the unfiltered
1321 1321 # repository because it would end up doing a pathological amount of
1322 1322 # round trips for a huge amount of changesets we do not care about.
1323 1323 #
1324 1324 # If a set of such "common but filtered" changesets exists on the server
1325 1325 # but does not include a remote head, we'll not be able to detect it.
1326 1326 scommon = set(common)
1327 1327 filteredrheads = []
1328 1328 for n in rheads:
1329 1329 if n in nm:
1330 1330 if n not in scommon:
1331 1331 common.append(n)
1332 1332 else:
1333 1333 filteredrheads.append(n)
1334 1334 if not filteredrheads:
1335 1335 fetch = []
1336 1336 rheads = filteredrheads
1337 1337 pullop.common = common
1338 1338 pullop.fetch = fetch
1339 1339 pullop.rheads = rheads
1340 1340
1341 1341 def _pullbundle2(pullop):
1342 1342 """pull data using bundle2
1343 1343
1344 1344 For now, the only supported data are changegroup."""
1345 1345 kwargs = {'bundlecaps': caps20to10(pullop.repo)}
1346 1346
1347 1347 # At the moment we don't do stream clones over bundle2. If that is
1348 1348 # implemented then here's where the check for that will go.
1349 1349 streaming = False
1350 1350
1351 1351 # pulling changegroup
1352 1352 pullop.stepsdone.add('changegroup')
1353 1353
1354 1354 kwargs['common'] = pullop.common
1355 1355 kwargs['heads'] = pullop.heads or pullop.rheads
1356 1356 kwargs['cg'] = pullop.fetch
1357 1357 if 'listkeys' in pullop.remotebundle2caps:
1358 1358 kwargs['listkeys'] = ['phases']
1359 1359 if pullop.remotebookmarks is None:
1360 1360 # make sure to always include bookmark data when migrating
1361 1361 # `hg incoming --bundle` to using this function.
1362 1362 kwargs['listkeys'].append('bookmarks')
1363 1363
1364 1364 # If this is a full pull / clone and the server supports the clone bundles
1365 1365 # feature, tell the server whether we attempted a clone bundle. The
1366 1366 # presence of this flag indicates the client supports clone bundles. This
1367 1367 # will enable the server to treat clients that support clone bundles
1368 1368 # differently from those that don't.
1369 1369 if (pullop.remote.capable('clonebundles')
1370 1370 and pullop.heads is None and list(pullop.common) == [nullid]):
1371 1371 kwargs['cbattempted'] = pullop.clonebundleattempted
1372 1372
1373 1373 if streaming:
1374 1374 pullop.repo.ui.status(_('streaming all changes\n'))
1375 1375 elif not pullop.fetch:
1376 1376 pullop.repo.ui.status(_("no changes found\n"))
1377 1377 pullop.cgresult = 0
1378 1378 else:
1379 1379 if pullop.heads is None and list(pullop.common) == [nullid]:
1380 1380 pullop.repo.ui.status(_("requesting all changes\n"))
1381 1381 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1382 1382 remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
1383 1383 if obsolete.commonversion(remoteversions) is not None:
1384 1384 kwargs['obsmarkers'] = True
1385 1385 pullop.stepsdone.add('obsmarkers')
1386 1386 _pullbundle2extraprepare(pullop, kwargs)
1387 1387 bundle = pullop.remote.getbundle('pull', **pycompat.strkwargs(kwargs))
1388 1388 try:
1389 1389 op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
1390 1390 except bundle2.AbortFromPart as exc:
1391 1391 pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
1392 1392 raise error.Abort(_('pull failed on remote'), hint=exc.hint)
1393 1393 except error.BundleValueError as exc:
1394 1394 raise error.Abort(_('missing support for %s') % exc)
1395 1395
1396 1396 if pullop.fetch:
1397 1397 pullop.cgresult = bundle2.combinechangegroupresults(op)
1398 1398
1399 1399 # If the bundle had a phase-heads part, then phase exchange is already done
1400 1400 if op.records['phase-heads']:
1401 1401 pullop.stepsdone.add('phases')
1402 1402
1403 1403 # processing phases change
1404 1404 for namespace, value in op.records['listkeys']:
1405 1405 if namespace == 'phases':
1406 1406 _pullapplyphases(pullop, value)
1407 1407
1408 1408 # processing bookmark update
1409 1409 for namespace, value in op.records['listkeys']:
1410 1410 if namespace == 'bookmarks':
1411 1411 pullop.remotebookmarks = value
1412 1412
1413 1413 # bookmark data were either already there or pulled in the bundle
1414 1414 if pullop.remotebookmarks is not None:
1415 1415 _pullbookmarks(pullop)
1416 1416
1417 1417 def _pullbundle2extraprepare(pullop, kwargs):
1418 1418 """hook function so that extensions can extend the getbundle call"""
1419 1419 pass
1420 1420
1421 1421 def _pullchangeset(pullop):
1422 1422 """pull changeset from unbundle into the local repo"""
1423 1423 # We delay the open of the transaction as late as possible so we
1424 1424 # don't open transaction for nothing or you break future useful
1425 1425 # rollback call
1426 1426 if 'changegroup' in pullop.stepsdone:
1427 1427 return
1428 1428 pullop.stepsdone.add('changegroup')
1429 1429 if not pullop.fetch:
1430 1430 pullop.repo.ui.status(_("no changes found\n"))
1431 1431 pullop.cgresult = 0
1432 1432 return
1433 1433 tr = pullop.gettransaction()
1434 1434 if pullop.heads is None and list(pullop.common) == [nullid]:
1435 1435 pullop.repo.ui.status(_("requesting all changes\n"))
1436 1436 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
1437 1437 # issue1320, avoid a race if remote changed after discovery
1438 1438 pullop.heads = pullop.rheads
1439 1439
1440 1440 if pullop.remote.capable('getbundle'):
1441 1441 # TODO: get bundlecaps from remote
1442 1442 cg = pullop.remote.getbundle('pull', common=pullop.common,
1443 1443 heads=pullop.heads or pullop.rheads)
1444 1444 elif pullop.heads is None:
1445 1445 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
1446 1446 elif not pullop.remote.capable('changegroupsubset'):
1447 1447 raise error.Abort(_("partial pull cannot be done because "
1448 1448 "other repository doesn't support "
1449 1449 "changegroupsubset."))
1450 1450 else:
1451 1451 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
1452 1452 bundleop = bundle2.applybundle(pullop.repo, cg, tr, 'pull',
1453 1453 pullop.remote.url())
1454 1454 pullop.cgresult = bundle2.combinechangegroupresults(bundleop)
1455 1455
1456 1456 def _pullphase(pullop):
1457 1457 # Get remote phases data from remote
1458 1458 if 'phases' in pullop.stepsdone:
1459 1459 return
1460 1460 remotephases = pullop.remote.listkeys('phases')
1461 1461 _pullapplyphases(pullop, remotephases)
1462 1462
1463 1463 def _pullapplyphases(pullop, remotephases):
1464 1464 """apply phase movement from observed remote state"""
1465 1465 if 'phases' in pullop.stepsdone:
1466 1466 return
1467 1467 pullop.stepsdone.add('phases')
1468 1468 publishing = bool(remotephases.get('publishing', False))
1469 1469 if remotephases and not publishing:
1470 1470 # remote is new and non-publishing
1471 1471 pheads, _dr = phases.analyzeremotephases(pullop.repo,
1472 1472 pullop.pulledsubset,
1473 1473 remotephases)
1474 1474 dheads = pullop.pulledsubset
1475 1475 else:
1476 1476 # Remote is old or publishing; all common changesets
1477 1477 # should be seen as public
1478 1478 pheads = pullop.pulledsubset
1479 1479 dheads = []
1480 1480 unfi = pullop.repo.unfiltered()
1481 1481 phase = unfi._phasecache.phase
1482 1482 rev = unfi.changelog.nodemap.get
1483 1483 public = phases.public
1484 1484 draft = phases.draft
1485 1485
1486 1486 # exclude changesets already public locally and update the others
1487 1487 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
1488 1488 if pheads:
1489 1489 tr = pullop.gettransaction()
1490 1490 phases.advanceboundary(pullop.repo, tr, public, pheads)
1491 1491
1492 1492 # exclude changesets already draft locally and update the others
1493 1493 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
1494 1494 if dheads:
1495 1495 tr = pullop.gettransaction()
1496 1496 phases.advanceboundary(pullop.repo, tr, draft, dheads)
1497 1497
1498 1498 def _pullbookmarks(pullop):
1499 1499 """process the remote bookmark information to update the local one"""
1500 1500 if 'bookmarks' in pullop.stepsdone:
1501 1501 return
1502 1502 pullop.stepsdone.add('bookmarks')
1503 1503 repo = pullop.repo
1504 1504 remotebookmarks = pullop.remotebookmarks
1505 1505 remotebookmarks = bookmod.unhexlifybookmarks(remotebookmarks)
1506 1506 bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
1507 1507 pullop.remote.url(),
1508 1508 pullop.gettransaction,
1509 1509 explicit=pullop.explicitbookmarks)
1510 1510
1511 1511 def _pullobsolete(pullop):
1512 1512 """utility function to pull obsolete markers from a remote
1513 1513
1514 1514 `gettransaction` is a function that returns the pull transaction, creating
1515 1515 one if necessary. We return the transaction to inform the calling code that
1516 1516 a new transaction has been created (when applicable).
1517 1517
1518 1518 Exists mostly to allow overriding for experimentation purposes"""
1519 1519 if 'obsmarkers' in pullop.stepsdone:
1520 1520 return
1521 1521 pullop.stepsdone.add('obsmarkers')
1522 1522 tr = None
1523 1523 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1524 1524 pullop.repo.ui.debug('fetching remote obsolete markers\n')
1525 1525 remoteobs = pullop.remote.listkeys('obsolete')
1526 1526 if 'dump0' in remoteobs:
1527 1527 tr = pullop.gettransaction()
1528 1528 markers = []
1529 1529 for key in sorted(remoteobs, reverse=True):
1530 1530 if key.startswith('dump'):
1531 1531 data = util.b85decode(remoteobs[key])
1532 1532 version, newmarks = obsolete._readmarkers(data)
1533 1533 markers += newmarks
1534 1534 if markers:
1535 1535 pullop.repo.obsstore.add(tr, markers)
1536 1536 pullop.repo.invalidatevolatilesets()
1537 1537 return tr
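
For orientation, the 'obsolete' pushkey namespace read above maps 'dump%i'
keys to base85-encoded binary marker streams. A sketch of the shape, with
invented contents:

    # hypothetical reply from pullop.remote.listkeys('obsolete')
    remoteobs = {'dump0': '<base85-encoded marker stream>',
                 'dump1': '<base85-encoded marker stream>'}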
1538 1538
1539 1539 def caps20to10(repo):
1540 1540 """return a set with appropriate options to use bundle20 during getbundle"""
1541 1541 caps = {'HG20'}
1542 1542 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
1543 1543 caps.add('bundle2=' + urlreq.quote(capsblob))
1544 1544 return caps
1545 1545
1546 1546 # List of names of steps to perform for a bundle2 for getbundle, order matters.
1547 1547 getbundle2partsorder = []
1548 1548
1549 1549 # Mapping between step name and function
1550 1550 #
1551 1551 # This exists to help extensions wrap steps if necessary
1552 1552 getbundle2partsmapping = {}
1553 1553
1554 1554 def getbundle2partsgenerator(stepname, idx=None):
1555 1555 """decorator for function generating bundle2 part for getbundle
1556 1556
1557 1557 The function is added to the step -> function mapping and appended to the
1558 1558 list of steps. Beware that decorated functions will be added in order
1559 1559 (this may matter).
1560 1560
1561 1561 You can only use this decorator for new steps; if you want to wrap a step
1562 1562 from an extension, modify the getbundle2partsmapping dictionary directly."""
1563 1563 def dec(func):
1564 1564 assert stepname not in getbundle2partsmapping
1565 1565 getbundle2partsmapping[stepname] = func
1566 1566 if idx is None:
1567 1567 getbundle2partsorder.append(stepname)
1568 1568 else:
1569 1569 getbundle2partsorder.insert(idx, stepname)
1570 1570 return func
1571 1571 return dec
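
A minimal sketch of registering a new step with this decorator; the part
name 'test:example' and its payload are hypothetical:

    @getbundle2partsgenerator('test:example')
    def _getbundleexamplepart(bundler, repo, source, bundlecaps=None,
                              b2caps=None, **kwargs):
        """add a hypothetical advisory part to the requested bundle"""
        # advisory, so receivers without a handler can safely ignore it
        bundler.newpart('test:example', data='hello', mandatory=False)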
1572 1572
1573 1573 def bundle2requested(bundlecaps):
1574 1574 if bundlecaps is not None:
1575 1575 return any(cap.startswith('HG2') for cap in bundlecaps)
1576 1576 return False
1577 1577
1578 1578 def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
1579 1579 **kwargs):
1580 1580 """Return chunks constituting a bundle's raw data.
1581 1581
1582 1582 Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
1583 1583 passed.
1584 1584
1585 1585 Returns an iterator over raw chunks (of varying sizes).
1586 1586 """
1587 1587 kwargs = pycompat.byteskwargs(kwargs)
1588 1588 usebundle2 = bundle2requested(bundlecaps)
1589 1589 # bundle10 case
1590 1590 if not usebundle2:
1591 1591 if bundlecaps and not kwargs.get('cg', True):
1592 1592 raise ValueError(_('request for bundle10 must include changegroup'))
1593 1593
1594 1594 if kwargs:
1595 1595 raise ValueError(_('unsupported getbundle arguments: %s')
1596 1596 % ', '.join(sorted(kwargs.keys())))
1597 1597 outgoing = _computeoutgoing(repo, heads, common)
1598 1598 bundler = changegroup.getbundler('01', repo, bundlecaps)
1599 1599 return changegroup.getsubsetraw(repo, outgoing, bundler, source)
1600 1600
1601 1601 # bundle20 case
1602 1602 b2caps = {}
1603 1603 for bcaps in bundlecaps:
1604 1604 if bcaps.startswith('bundle2='):
1605 1605 blob = urlreq.unquote(bcaps[len('bundle2='):])
1606 1606 b2caps.update(bundle2.decodecaps(blob))
1607 1607 bundler = bundle2.bundle20(repo.ui, b2caps)
1608 1608
1609 1609 kwargs['heads'] = heads
1610 1610 kwargs['common'] = common
1611 1611
1612 1612 for name in getbundle2partsorder:
1613 1613 func = getbundle2partsmapping[name]
1614 1614 func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
1615 1615 **pycompat.strkwargs(kwargs))
1616 1616
1617 1617 return bundler.getchunks()
1618 1618
1619 1619 @getbundle2partsgenerator('changegroup')
1620 1620 def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
1621 1621 b2caps=None, heads=None, common=None, **kwargs):
1622 1622 """add a changegroup part to the requested bundle"""
1623 1623 cgstream = None
1624 1624 if kwargs.get('cg', True):
1625 1625 # build changegroup bundle here.
1626 1626 version = '01'
1627 1627 cgversions = b2caps.get('changegroup')
1628 1628 if cgversions: # 3.1 and 3.2 ship with an empty value
1629 1629 cgversions = [v for v in cgversions
1630 1630 if v in changegroup.supportedoutgoingversions(repo)]
1631 1631 if not cgversions:
1632 1632 raise ValueError(_('no common changegroup version'))
1633 1633 version = max(cgversions)
1634 1634 outgoing = _computeoutgoing(repo, heads, common)
1635 1635 cgstream = changegroup.makestream(repo, outgoing, version, source,
1636 1636 bundlecaps=bundlecaps)
1637 1637
1638 1638 if cgstream:
1639 1639 part = bundler.newpart('changegroup', data=cgstream)
1640 1640 if cgversions:
1641 1641 part.addparam('version', version)
1642 1642 part.addparam('nbchanges', str(len(outgoing.missing)), mandatory=False)
1643 1643 if 'treemanifest' in repo.requirements:
1644 1644 part.addparam('treemanifest', '1')
1645 1645
1646 1646 @getbundle2partsgenerator('listkeys')
1647 1647 def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
1648 1648 b2caps=None, **kwargs):
1649 1649 """add parts containing listkeys namespaces to the requested bundle"""
1650 1650 listkeys = kwargs.get('listkeys', ())
1651 1651 for namespace in listkeys:
1652 1652 part = bundler.newpart('listkeys')
1653 1653 part.addparam('namespace', namespace)
1654 1654 keys = repo.listkeys(namespace).items()
1655 1655 part.data = pushkey.encodekeys(keys)
1656 1656
1657 1657 @getbundle2partsgenerator('obsmarkers')
1658 1658 def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
1659 1659 b2caps=None, heads=None, **kwargs):
1660 1660 """add an obsolescence markers part to the requested bundle"""
1661 1661 if kwargs.get('obsmarkers', False):
1662 1662 if heads is None:
1663 1663 heads = repo.heads()
1664 1664 subset = [c.node() for c in repo.set('::%ln', heads)]
1665 1665 markers = repo.obsstore.relevantmarkers(subset)
1666 1666 markers = sorted(markers)
1667 1667 bundle2.buildobsmarkerspart(bundler, markers)
1668 1668
1669 1669 @getbundle2partsgenerator('hgtagsfnodes')
1670 1670 def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
1671 1671 b2caps=None, heads=None, common=None,
1672 1672 **kwargs):
1673 1673 """Transfer the .hgtags filenodes mapping.
1674 1674
1675 1675 Only values for heads in this bundle will be transferred.
1676 1676
1677 1677 The part data consists of pairs of 20 byte changeset node and .hgtags
1678 1678 filenodes raw values.
1679 1679 """
1680 1680 # Don't send unless:
1681 1681 # - changesets are being exchanged,
1682 1682 # - the client supports it.
1683 1683 if not (kwargs.get('cg', True) and 'hgtagsfnodes' in b2caps):
1684 1684 return
1685 1685
1686 1686 outgoing = _computeoutgoing(repo, heads, common)
1687 1687 bundle2.addparttagsfnodescache(repo, bundler, outgoing)
1688 1688
1689 1689 def _getbookmarks(repo, **kwargs):
1690 1690 """Returns bookmark to node mapping.
1691 1691
1692 1692 This function is primarily used to generate `bookmarks` bundle2 part.
1693 1693 It is a separate function in order to make it easy to wrap it
1694 1694 in extensions. Passing `kwargs` to the function makes it easy to
1695 1695 add new parameters in extensions.
1696 1696 """
1697 1697
1698 1698 return dict(bookmod.listbinbookmarks(repo))
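
A hedged sketch of wrapping this function from an extension, as the
docstring suggests; the 'hidden/' prefix filter is invented for
illustration:

    from mercurial import exchange, extensions

    def _filteredbookmarks(orig, repo, **kwargs):
        # drop bookmarks under a hypothetical 'hidden/' namespace
        marks = orig(repo, **kwargs)
        return dict((k, v) for k, v in marks.items()
                    if not k.startswith('hidden/'))

    extensions.wrapfunction(exchange, '_getbookmarks', _filteredbookmarks)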
1699 1699
1700 1700 def check_heads(repo, their_heads, context):
1701 1701 """check if the heads of a repo have been modified
1702 1702
1703 1703 Used by peer for unbundling.
1704 1704 """
1705 1705 heads = repo.heads()
1706 1706 heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
1707 1707 if not (their_heads == ['force'] or their_heads == heads or
1708 1708 their_heads == ['hashed', heads_hash]):
1709 1709 # someone else committed/pushed/unbundled while we
1710 1710 # were transferring data
1711 1711 raise error.PushRaced('repository changed while %s - '
1712 1712 'please try again' % context)
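
For reference, a sketch of how a caller would build the ['hashed', ...]
form accepted above (variable names are illustrative):

    import hashlib
    # remoteheads: binary head nodes observed during discovery
    expected = hashlib.sha1(''.join(sorted(remoteheads))).digest()
    their_heads = ['hashed', expected]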
1713 1713
1714 1714 def unbundle(repo, cg, heads, source, url):
1715 1715 """Apply a bundle to a repo.
1716 1716
1717 1717 This function makes sure the repo is locked during the application and has a
1718 1718 mechanism to check that no push race occurred between the creation of the
1719 1719 bundle and its application.
1720 1720
1721 1721 If the push was raced, a PushRaced exception is raised."""
1722 1722 r = 0
1723 1723 # need a transaction when processing a bundle2 stream
1724 1724 # [wlock, lock, tr] - needs to be an array so nested functions can modify it
1725 1725 lockandtr = [None, None, None]
1726 1726 recordout = None
1727 1727 # quick fix for output mismatch with bundle2 in 3.4
1728 1728 captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture')
1729 1729 if url.startswith('remote:http:') or url.startswith('remote:https:'):
1730 1730 captureoutput = True
1731 1731 try:
1732 1732 # note: outside bundle1, 'heads' is expected to be empty and this
1733 1733 # 'check_heads' call will be a no-op
1734 1734 check_heads(repo, heads, 'uploading changes')
1735 1735 # push can proceed
1736 1736 if not isinstance(cg, bundle2.unbundle20):
1737 1737 # legacy case: bundle1 (changegroup 01)
1738 1738 txnname = "\n".join([source, util.hidepassword(url)])
1739 1739 with repo.lock(), repo.transaction(txnname) as tr:
1740 1740 op = bundle2.applybundle(repo, cg, tr, source, url)
1741 1741 r = bundle2.combinechangegroupresults(op)
1742 1742 else:
1743 1743 r = None
1744 1744 try:
1745 1745 def gettransaction():
1746 1746 if not lockandtr[2]:
1747 1747 lockandtr[0] = repo.wlock()
1748 1748 lockandtr[1] = repo.lock()
1749 1749 lockandtr[2] = repo.transaction(source)
1750 1750 lockandtr[2].hookargs['source'] = source
1751 1751 lockandtr[2].hookargs['url'] = url
1752 1752 lockandtr[2].hookargs['bundle2'] = '1'
1753 1753 return lockandtr[2]
1754 1754
1755 1755 # Do greedy locking by default until we're satisfied with lazy
1756 1756 # locking.
1757 1757 if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
1758 1758 gettransaction()
1759 1759
1760 1760 op = bundle2.bundleoperation(repo, gettransaction,
1761 1761 captureoutput=captureoutput)
1762 1762 try:
1763 1763 op = bundle2.processbundle(repo, cg, op=op)
1764 1764 finally:
1765 1765 r = op.reply
1766 1766 if captureoutput and r is not None:
1767 1767 repo.ui.pushbuffer(error=True, subproc=True)
1768 1768 def recordout(output):
1769 1769 r.newpart('output', data=output, mandatory=False)
1770 1770 if lockandtr[2] is not None:
1771 1771 lockandtr[2].close()
1772 1772 except BaseException as exc:
1773 1773 exc.duringunbundle2 = True
1774 1774 if captureoutput and r is not None:
1775 1775 parts = exc._bundle2salvagedoutput = r.salvageoutput()
1776 1776 def recordout(output):
1777 1777 part = bundle2.bundlepart('output', data=output,
1778 1778 mandatory=False)
1779 1779 parts.append(part)
1780 1780 raise
1781 1781 finally:
1782 1782 lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
1783 1783 if recordout is not None:
1784 1784 recordout(repo.ui.popbuffer())
1785 1785 return r
1786 1786
1787 1787 def _maybeapplyclonebundle(pullop):
1788 1788 """Apply a clone bundle from a remote, if possible."""
1789 1789
1790 1790 repo = pullop.repo
1791 1791 remote = pullop.remote
1792 1792
1793 1793 if not repo.ui.configbool('ui', 'clonebundles'):
1794 1794 return
1795 1795
1796 1796 # Only run if local repo is empty.
1797 1797 if len(repo):
1798 1798 return
1799 1799
1800 1800 if pullop.heads:
1801 1801 return
1802 1802
1803 1803 if not remote.capable('clonebundles'):
1804 1804 return
1805 1805
1806 1806 res = remote._call('clonebundles')
1807 1807
1808 1808 # If we call the wire protocol command, that's good enough to record the
1809 1809 # attempt.
1810 1810 pullop.clonebundleattempted = True
1811 1811
1812 1812 entries = parseclonebundlesmanifest(repo, res)
1813 1813 if not entries:
1814 1814 repo.ui.note(_('no clone bundles available on remote; '
1815 1815 'falling back to regular clone\n'))
1816 1816 return
1817 1817
1818 1818 entries = filterclonebundleentries(repo, entries)
1819 1819 if not entries:
1820 1820 # There is a thundering herd concern here. However, if a server
1821 1821 # operator doesn't advertise bundles appropriate for its clients,
1822 1822 # they deserve what's coming. Furthermore, from a client's
1823 1823 # perspective, no automatic fallback would mean not being able to
1824 1824 # clone!
1825 1825 repo.ui.warn(_('no compatible clone bundles available on server; '
1826 1826 'falling back to regular clone\n'))
1827 1827 repo.ui.warn(_('(you may want to report this to the server '
1828 1828 'operator)\n'))
1829 1829 return
1830 1830
1831 1831 entries = sortclonebundleentries(repo.ui, entries)
1832 1832
1833 1833 url = entries[0]['URL']
1834 1834 repo.ui.status(_('applying clone bundle from %s\n') % url)
1835 1835 if trypullbundlefromurl(repo.ui, repo, url):
1836 1836 repo.ui.status(_('finished applying clone bundle\n'))
1837 1837 # Bundle failed.
1838 1838 #
1839 1839 # We abort by default to avoid the thundering herd of
1840 1840 # clients flooding a server that was expecting expensive
1841 1841 # clone load to be offloaded.
1842 1842 elif repo.ui.configbool('ui', 'clonebundlefallback'):
1843 1843 repo.ui.warn(_('falling back to normal clone\n'))
1844 1844 else:
1845 1845 raise error.Abort(_('error applying bundle'),
1846 1846 hint=_('if this error persists, consider contacting '
1847 1847 'the server operator or disable clone '
1848 1848 'bundles via '
1849 1849 '"--config ui.clonebundles=false"'))
1850 1850
1851 1851 def parseclonebundlesmanifest(repo, s):
1852 1852 """Parses the raw text of a clone bundles manifest.
1853 1853
1854 1854 Returns a list of dicts. The dicts have a ``URL`` key corresponding
1855 1855 to the URL; the other keys are the attributes for the entry.
1856 1856 """
1857 1857 m = []
1858 1858 for line in s.splitlines():
1859 1859 fields = line.split()
1860 1860 if not fields:
1861 1861 continue
1862 1862 attrs = {'URL': fields[0]}
1863 1863 for rawattr in fields[1:]:
1864 1864 key, value = rawattr.split('=', 1)
1865 1865 key = urlreq.unquote(key)
1866 1866 value = urlreq.unquote(value)
1867 1867 attrs[key] = value
1868 1868
1869 1869 # Parse BUNDLESPEC into components. This makes client-side
1870 1870 # preferences easier to specify since you can prefer a single
1871 1871 # component of the BUNDLESPEC.
1872 1872 if key == 'BUNDLESPEC':
1873 1873 try:
1874 1874 comp, version, params = parsebundlespec(repo, value,
1875 1875 externalnames=True)
1876 1876 attrs['COMPRESSION'] = comp
1877 1877 attrs['VERSION'] = version
1878 1878 except error.InvalidBundleSpecification:
1879 1879 pass
1880 1880 except error.UnsupportedBundleSpecification:
1881 1881 pass
1882 1882
1883 1883 m.append(attrs)
1884 1884
1885 1885 return m
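
An illustrative manifest line and its parsed form; the URL and attribute
values are invented:

    https://example.com/full.hg BUNDLESPEC=gzip-v2 REQUIRESNI=true

would parse to roughly:

    [{'URL': 'https://example.com/full.hg',
      'BUNDLESPEC': 'gzip-v2',
      'COMPRESSION': 'gzip',
      'VERSION': 'v2',
      'REQUIRESNI': 'true'}]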
1886 1886
1887 1887 def filterclonebundleentries(repo, entries):
1888 1888 """Remove incompatible clone bundle manifest entries.
1889 1889
1890 1890 Accepts a list of entries parsed with ``parseclonebundlesmanifest``
1891 1891 and returns a new list consisting of only the entries that this client
1892 1892 should be able to apply.
1893 1893
1894 1894 There is no guarantee we'll be able to apply all returned entries because
1895 1895 the metadata we use to filter on may be missing or wrong.
1896 1896 """
1897 1897 newentries = []
1898 1898 for entry in entries:
1899 1899 spec = entry.get('BUNDLESPEC')
1900 1900 if spec:
1901 1901 try:
1902 1902 parsebundlespec(repo, spec, strict=True)
1903 1903 except error.InvalidBundleSpecification as e:
1904 1904 repo.ui.debug(str(e) + '\n')
1905 1905 continue
1906 1906 except error.UnsupportedBundleSpecification as e:
1907 1907 repo.ui.debug('filtering %s because unsupported bundle '
1908 1908 'spec: %s\n' % (entry['URL'], str(e)))
1909 1909 continue
1910 1910
1911 1911 if 'REQUIRESNI' in entry and not sslutil.hassni:
1912 1912 repo.ui.debug('filtering %s because SNI not supported\n' %
1913 1913 entry['URL'])
1914 1914 continue
1915 1915
1916 1916 newentries.append(entry)
1917 1917
1918 1918 return newentries
1919 1919
1920 1920 class clonebundleentry(object):
1921 1921 """Represents an item in a clone bundles manifest.
1922 1922
1923 1923 This rich class is needed to support sorting since sorted() in Python 3
1924 1924 doesn't support ``cmp`` and our comparison is complex enough that ``key=``
1925 1925 won't work.
1926 1926 """
1927 1927
1928 1928 def __init__(self, value, prefers):
1929 1929 self.value = value
1930 1930 self.prefers = prefers
1931 1931
1932 1932 def _cmp(self, other):
1933 1933 for prefkey, prefvalue in self.prefers:
1934 1934 avalue = self.value.get(prefkey)
1935 1935 bvalue = other.value.get(prefkey)
1936 1936
1937 1937 # Special case: b is missing the attribute and a matches exactly.
1938 1938 if avalue is not None and bvalue is None and avalue == prefvalue:
1939 1939 return -1
1940 1940
1941 1941 # Special case: a is missing the attribute and b matches exactly.
1942 1942 if bvalue is not None and avalue is None and bvalue == prefvalue:
1943 1943 return 1
1944 1944
1945 1945 # We can't compare unless attribute present on both.
1946 1946 if avalue is None or bvalue is None:
1947 1947 continue
1948 1948
1949 1949 # Same values should fall back to next attribute.
1950 1950 if avalue == bvalue:
1951 1951 continue
1952 1952
1953 1953 # Exact matches come first.
1954 1954 if avalue == prefvalue:
1955 1955 return -1
1956 1956 if bvalue == prefvalue:
1957 1957 return 1
1958 1958
1959 1959 # Fall back to next attribute.
1960 1960 continue
1961 1961
1962 1962 # If we got here we couldn't sort by attributes and prefers. Fall
1963 1963 # back to index order.
1964 1964 return 0
1965 1965
1966 1966 def __lt__(self, other):
1967 1967 return self._cmp(other) < 0
1968 1968
1969 1969 def __gt__(self, other):
1970 1970 return self._cmp(other) > 0
1971 1971
1972 1972 def __eq__(self, other):
1973 1973 return self._cmp(other) == 0
1974 1974
1975 1975 def __le__(self, other):
1976 1976 return self._cmp(other) <= 0
1977 1977
1978 1978 def __ge__(self, other):
1979 1979 return self._cmp(other) >= 0
1980 1980
1981 1981 def __ne__(self, other):
1982 1982 return self._cmp(other) != 0
1983 1983
1984 1984 def sortclonebundleentries(ui, entries):
1985 1985 prefers = ui.configlist('ui', 'clonebundleprefers')
1986 1986 if not prefers:
1987 1987 return list(entries)
1988 1988
1989 1989 prefers = [p.split('=', 1) for p in prefers]
1990 1990
1991 1991 items = sorted(clonebundleentry(v, prefers) for v in entries)
1992 1992 return [i.value for i in items]
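
A small sketch of the preference-driven ordering; the entries and the
config value are hypothetical:

    # with: [ui] clonebundleprefers = COMPRESSION=zstd
    prefers = [['COMPRESSION', 'zstd']]
    entries = [{'URL': 'a', 'COMPRESSION': 'gzip'},
               {'URL': 'b', 'COMPRESSION': 'zstd'}]
    items = sorted(clonebundleentry(v, prefers) for v in entries)
    # items[0].value['URL'] == 'b': the entry matching the preference wins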
1993 1993
1994 1994 def trypullbundlefromurl(ui, repo, url):
1995 1995 """Attempt to apply a bundle from a URL."""
1996 1996 with repo.lock(), repo.transaction('bundleurl') as tr:
1997 1997 try:
1998 1998 fh = urlmod.open(ui, url)
1999 1999 cg = readbundle(ui, fh, 'stream')
2000 2000
2001 2001 if isinstance(cg, streamclone.streamcloneapplier):
2002 2002 cg.apply(repo)
2003 2003 else:
2004 2004 bundle2.applybundle(repo, cg, tr, 'clonebundles', url)
2005 2005 return True
2006 2006 except urlerr.httperror as e:
2007 2007 ui.warn(_('HTTP error fetching bundle: %s\n') % str(e))
2008 2008 except urlerr.urlerror as e:
2009 2009 ui.warn(_('error fetching bundle: %s\n') % e.reason)
2010 2010
2011 2011 return False
@@ -1,1233 +1,1234 b''
1 1 This test is dedicated to testing the bundle2 container format
2 2
3 3 It tests multiple existing parts to exercise different features of the container. You
4 4 probably do not need to touch this test unless you change the binary encoding
5 5 of the bundle2 format itself.
6 6
7 7 Create an extension to test bundle2 API
8 8
9 9 $ cat > bundle2.py << EOF
10 10 > """A small extension to test bundle2 implementation
11 11 >
12 12 > This extension allows detailed testing of the various bundle2 APIs and
13 13 > behaviors.
14 14 > """
15 15 > import gc
16 16 > import os
17 17 > import sys
18 18 > from mercurial import util
19 19 > from mercurial import bundle2
20 20 > from mercurial import scmutil
21 21 > from mercurial import discovery
22 22 > from mercurial import changegroup
23 23 > from mercurial import error
24 24 > from mercurial import obsolete
25 25 > from mercurial import registrar
26 26 >
27 27 >
28 28 > try:
29 29 > import msvcrt
30 30 > msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)
31 31 > msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
32 32 > msvcrt.setmode(sys.stderr.fileno(), os.O_BINARY)
33 33 > except ImportError:
34 34 > pass
35 35 >
36 36 > cmdtable = {}
37 37 > command = registrar.command(cmdtable)
38 38 >
39 39 > ELEPHANTSSONG = """Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
40 40 > Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
41 41 > Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko."""
42 42 > assert len(ELEPHANTSSONG) == 178 # later tests expect 178 bytes, trust it.
43 43 >
44 44 > @bundle2.parthandler('test:song')
45 45 > def songhandler(op, part):
46 46 > """handle a "test:song" bundle2 part, printing the lyrics on stdin"""
47 47 > op.ui.write('The choir starts singing:\n')
48 48 > verses = 0
49 49 > for line in part.read().split('\n'):
50 50 > op.ui.write(' %s\n' % line)
51 51 > verses += 1
52 52 > op.records.add('song', {'verses': verses})
53 53 >
54 54 > @bundle2.parthandler('test:ping')
55 55 > def pinghandler(op, part):
56 56 > op.ui.write('received ping request (id %i)\n' % part.id)
57 57 > if op.reply is not None and 'ping-pong' in op.reply.capabilities:
58 58 > op.ui.write_err('replying to ping request (id %i)\n' % part.id)
59 59 > op.reply.newpart('test:pong', [('in-reply-to', str(part.id))],
60 60 > mandatory=False)
61 61 >
62 62 > @bundle2.parthandler('test:debugreply')
63 63 > def debugreply(op, part):
64 64 > """print data about the capacity of the bundle reply"""
65 65 > if op.reply is None:
66 66 > op.ui.write('debugreply: no reply\n')
67 67 > else:
68 68 > op.ui.write('debugreply: capabilities:\n')
69 69 > for cap in sorted(op.reply.capabilities):
70 70 > op.ui.write('debugreply: %r\n' % cap)
71 71 > for val in op.reply.capabilities[cap]:
72 72 > op.ui.write('debugreply: %r\n' % val)
73 73 >
74 74 > @command(b'bundle2',
75 75 > [('', 'param', [], 'stream level parameter'),
76 76 > ('', 'unknown', False, 'include an unknown mandatory part in the bundle'),
77 77 > ('', 'unknownparams', False, 'include an unknown part parameters in the bundle'),
78 78 > ('', 'parts', False, 'include some arbitrary parts to the bundle'),
79 79 > ('', 'reply', False, 'produce a reply bundle'),
80 80 > ('', 'pushrace', False, 'include a check:heads part with unknown nodes'),
81 81 > ('', 'genraise', False, 'include a part that raises an exception during generation'),
82 82 > ('', 'timeout', False, 'emulate a timeout during bundle generation'),
83 83 > ('r', 'rev', [], 'include those changesets in the bundle'),
84 84 > ('', 'compress', '', 'compress the stream'),],
85 85 > '[OUTPUTFILE]')
86 86 > def cmdbundle2(ui, repo, path=None, **opts):
87 87 > """write a bundle2 container on standard output"""
88 88 > bundler = bundle2.bundle20(ui)
89 89 > for p in opts['param']:
90 90 > p = p.split('=', 1)
91 91 > try:
92 92 > bundler.addparam(*p)
93 93 > except ValueError as exc:
94 94 > raise error.Abort('%s' % exc)
95 95 >
96 96 > if opts['compress']:
97 97 > bundler.setcompression(opts['compress'])
98 98 >
99 99 > if opts['reply']:
100 100 > capsstring = 'ping-pong\nelephants=babar,celeste\ncity%3D%21=celeste%2Cville'
101 101 > bundler.newpart('replycaps', data=capsstring)
102 102 >
103 103 > if opts['pushrace']:
104 104 > # also serves to test the assignment of data outside of __init__
105 105 > part = bundler.newpart('check:heads')
106 106 > part.data = '01234567890123456789'
107 107 >
108 108 > revs = opts['rev']
109 109 > if 'rev' in opts:
110 110 > revs = scmutil.revrange(repo, opts['rev'])
111 111 > if revs:
112 112 > # very crude version of a changegroup part creation
113 113 > bundled = repo.revs('%ld::%ld', revs, revs)
114 114 > headmissing = [c.node() for c in repo.set('heads(%ld)', revs)]
115 115 > headcommon = [c.node() for c in repo.set('parents(%ld) - %ld', revs, revs)]
116 116 > outgoing = discovery.outgoing(repo, headcommon, headmissing)
117 > cg = changegroup.getchangegroup(repo, 'test:bundle2', outgoing, None)
117 > cg = changegroup.makechangegroup(repo, outgoing, '01',
118 > 'test:bundle2')
118 119 > bundler.newpart('changegroup', data=cg.getchunks(),
119 120 > mandatory=False)
120 121 >
121 122 > if opts['parts']:
122 123 > bundler.newpart('test:empty', mandatory=False)
123 124 > # add a second one to make sure we handle multiple parts
124 125 > bundler.newpart('test:empty', mandatory=False)
125 126 > bundler.newpart('test:song', data=ELEPHANTSSONG, mandatory=False)
126 127 > bundler.newpart('test:debugreply', mandatory=False)
127 128 > mathpart = bundler.newpart('test:math')
128 129 > mathpart.addparam('pi', '3.14')
129 130 > mathpart.addparam('e', '2.72')
130 131 > mathpart.addparam('cooking', 'raw', mandatory=False)
131 132 > mathpart.data = '42'
132 133 > mathpart.mandatory = False
133 134 > # advisory known part with unknown mandatory param
134 135 > bundler.newpart('test:song', [('randomparam','')], mandatory=False)
135 136 > if opts['unknown']:
136 137 > bundler.newpart('test:unknown', data='some random content')
137 138 > if opts['unknownparams']:
138 139 > bundler.newpart('test:song', [('randomparams', '')])
139 140 > if opts['parts']:
140 141 > bundler.newpart('test:ping', mandatory=False)
141 142 > if opts['genraise']:
142 143 > def genraise():
143 144 > yield 'first line\n'
144 145 > raise RuntimeError('Someone set up us the bomb!')
145 146 > bundler.newpart('output', data=genraise(), mandatory=False)
146 147 >
147 148 > if path is None:
148 149 > file = sys.stdout
149 150 > else:
150 151 > file = open(path, 'wb')
151 152 >
152 153 > if opts['timeout']:
153 154 > bundler.newpart('test:song', data=ELEPHANTSSONG, mandatory=False)
154 155 > for idx, junk in enumerate(bundler.getchunks()):
155 156 > ui.write('%d chunk\n' % idx)
156 157 > if idx > 4:
157 158 > # This throws a GeneratorExit inside the generator, which
158 159 > # can cause problems if the exception-recovery code is
159 160 > # too zealous. It's important for this test that the break
160 161 > # occur while we're in the middle of a part.
161 162 > break
162 163 > gc.collect()
163 164 > ui.write('fake timeout complete.\n')
164 165 > return
165 166 > try:
166 167 > for chunk in bundler.getchunks():
167 168 > file.write(chunk)
168 169 > except RuntimeError as exc:
169 170 > raise error.Abort(exc)
170 171 > finally:
171 172 > file.flush()
172 173 >
173 174 > @command(b'unbundle2', [], '')
174 175 > def cmdunbundle2(ui, repo, replypath=None):
175 176 > """process a bundle2 stream from stdin on the current repo"""
176 177 > try:
177 178 > tr = None
178 179 > lock = repo.lock()
179 180 > tr = repo.transaction('processbundle')
180 181 > try:
181 182 > unbundler = bundle2.getunbundler(ui, sys.stdin)
182 183 > op = bundle2.processbundle(repo, unbundler, lambda: tr)
183 184 > tr.close()
184 185 > except error.BundleValueError as exc:
185 186 > raise error.Abort('missing support for %s' % exc)
186 187 > except error.PushRaced as exc:
187 188 > raise error.Abort('push race: %s' % exc)
188 189 > finally:
189 190 > if tr is not None:
190 191 > tr.release()
191 192 > lock.release()
192 193 > remains = sys.stdin.read()
193 194 > ui.write('%i unread bytes\n' % len(remains))
194 195 > if op.records['song']:
195 196 > totalverses = sum(r['verses'] for r in op.records['song'])
196 197 > ui.write('%i total verses sung\n' % totalverses)
197 198 > for rec in op.records['changegroup']:
198 199 > ui.write('addchangegroup return: %i\n' % rec['return'])
199 200 > if op.reply is not None and replypath is not None:
200 201 > with open(replypath, 'wb') as file:
201 202 > for chunk in op.reply.getchunks():
202 203 > file.write(chunk)
203 204 >
204 205 > @command(b'statbundle2', [], '')
205 206 > def cmdstatbundle2(ui, repo):
206 207 > """print statistic on the bundle2 container read from stdin"""
207 208 > unbundler = bundle2.getunbundler(ui, sys.stdin)
208 209 > try:
209 210 > params = unbundler.params
210 211 > except error.BundleValueError as exc:
211 212 > raise error.Abort('unknown parameters: %s' % exc)
212 213 > ui.write('options count: %i\n' % len(params))
213 214 > for key in sorted(params):
214 215 > ui.write('- %s\n' % key)
215 216 > value = params[key]
216 217 > if value is not None:
217 218 > ui.write(' %s\n' % value)
218 219 > count = 0
219 220 > for p in unbundler.iterparts():
220 221 > count += 1
221 222 > ui.write(' :%s:\n' % p.type)
222 223 > ui.write(' mandatory: %i\n' % len(p.mandatoryparams))
223 224 > ui.write(' advisory: %i\n' % len(p.advisoryparams))
224 225 > ui.write(' payload: %i bytes\n' % len(p.read()))
225 226 > ui.write('parts count: %i\n' % count)
226 227 > EOF
227 228 $ cat >> $HGRCPATH << EOF
228 229 > [extensions]
229 230 > bundle2=$TESTTMP/bundle2.py
230 231 > [experimental]
231 232 > stabilization=createmarkers
232 233 > [ui]
233 234 > ssh=$PYTHON "$TESTDIR/dummyssh"
234 235 > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline}
235 236 > [web]
236 237 > push_ssl = false
237 238 > allow_push = *
238 239 > [phases]
239 240 > publish=False
240 241 > EOF
241 242
242 243 The extension requires a repo (currently unused)
243 244
244 245 $ hg init main
245 246 $ cd main
246 247 $ touch a
247 248 $ hg add a
248 249 $ hg commit -m 'a'
249 250
250 251
251 252 Empty bundle
252 253 =================
253 254
254 255 - no option
255 256 - no parts
256 257
257 258 Test bundling
258 259
259 260 $ hg bundle2 | f --hexdump
260 261
261 262 0000: 48 47 32 30 00 00 00 00 00 00 00 00 |HG20........|
262 263
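The twelve bytes above decode as the 'HG20' magic, an int32 zero for the
stream-parameters size, and an int32 zero part-header size serving as the
end-of-stream marker. A quick decoding sketch, illustrative only:

    >>> import struct
    >>> struct.unpack('>4sii', 'HG20' + '\x00' * 8)
    ('HG20', 0, 0)
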
263 264 Test timeouts during bundling
264 265 $ hg bundle2 --timeout --debug --config devel.bundle2.debug=yes
265 266 bundle2-output-bundle: "HG20", 1 parts total
266 267 bundle2-output: start emission of HG20 stream
267 268 0 chunk
268 269 bundle2-output: bundle parameter:
269 270 1 chunk
270 271 bundle2-output: start of parts
271 272 bundle2-output: bundle part: "test:song"
272 273 bundle2-output-part: "test:song" (advisory) 178 bytes payload
273 274 bundle2-output: part 0: "test:song"
274 275 bundle2-output: header chunk size: 16
275 276 2 chunk
276 277 3 chunk
277 278 bundle2-output: payload chunk size: 178
278 279 4 chunk
279 280 5 chunk
280 281 bundle2-generatorexit
281 282 fake timeout complete.
282 283
283 284 Test unbundling
284 285
285 286 $ hg bundle2 | hg statbundle2
286 287 options count: 0
287 288 parts count: 0
288 289
289 290 Test old style bundle are detected and refused
290 291
291 292 $ hg bundle --all --type v1 ../bundle.hg
292 293 1 changesets found
293 294 $ hg statbundle2 < ../bundle.hg
294 295 abort: unknown bundle version 10
295 296 [255]
296 297
297 298 Test parameters
298 299 =================
299 300
300 301 - some options
301 302 - no parts
302 303
303 304 advisory parameters, no value
304 305 -------------------------------
305 306
306 307 Simplest possible parameters form
307 308
308 309 Test generation simple option
309 310
310 311 $ hg bundle2 --param 'caution' | f --hexdump
311 312
312 313 0000: 48 47 32 30 00 00 00 07 63 61 75 74 69 6f 6e 00 |HG20....caution.|
313 314 0010: 00 00 00 |...|
314 315
315 316 Test unbundling
316 317
317 318 $ hg bundle2 --param 'caution' | hg statbundle2
318 319 options count: 1
319 320 - caution
320 321 parts count: 0
321 322
322 323 Test generation multiple option
323 324
324 325 $ hg bundle2 --param 'caution' --param 'meal' | f --hexdump
325 326
326 327 0000: 48 47 32 30 00 00 00 0c 63 61 75 74 69 6f 6e 20 |HG20....caution |
327 328 0010: 6d 65 61 6c 00 00 00 00 |meal....|
328 329
329 330 Test unbundling
330 331
331 332 $ hg bundle2 --param 'caution' --param 'meal' | hg statbundle2
332 333 options count: 2
333 334 - caution
334 335 - meal
335 336 parts count: 0
336 337
337 338 advisory parameters, with value
338 339 -------------------------------
339 340
340 341 Test generation
341 342
342 343 $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants' | f --hexdump
343 344
344 345 0000: 48 47 32 30 00 00 00 1c 63 61 75 74 69 6f 6e 20 |HG20....caution |
345 346 0010: 6d 65 61 6c 3d 76 65 67 61 6e 20 65 6c 65 70 68 |meal=vegan eleph|
346 347 0020: 61 6e 74 73 00 00 00 00 |ants....|
347 348
348 349 Test unbundling
349 350
350 351 $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants' | hg statbundle2
351 352 options count: 3
352 353 - caution
353 354 - elephants
354 355 - meal
355 356 vegan
356 357 parts count: 0
357 358
358 359 parameter with special char in value
359 360 ---------------------------------------------------
360 361
361 362 Test generation
362 363
363 364 $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple | f --hexdump
364 365
365 366 0000: 48 47 32 30 00 00 00 29 65 25 37 43 25 32 31 25 |HG20...)e%7C%21%|
366 367 0010: 32 30 37 2f 3d 62 61 62 61 72 25 32 35 25 32 33 |207/=babar%25%23|
367 368 0020: 25 33 44 25 33 44 74 75 74 75 20 73 69 6d 70 6c |%3D%3Dtutu simpl|
368 369 0030: 65 00 00 00 00 |e....|
369 370
370 371 Test unbundling
371 372
372 373 $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple | hg statbundle2
373 374 options count: 2
374 375 - e|! 7/
375 376 babar%#==tutu
376 377 - simple
377 378 parts count: 0
378 379
379 380 Test unknown mandatory option
380 381 ---------------------------------------------------
381 382
382 383 $ hg bundle2 --param 'Gravity' | hg statbundle2
383 384 abort: unknown parameters: Stream Parameter - Gravity
384 385 [255]
385 386
386 387 Test debug output
387 388 ---------------------------------------------------
388 389
389 390 bundling debug
390 391
391 392 $ hg bundle2 --debug --param 'e|! 7/=babar%#==tutu' --param simple ../out.hg2 --config progress.debug=true --config devel.bundle2.debug=true
392 393 bundle2-output-bundle: "HG20", (2 params) 0 parts total
393 394 bundle2-output: start emission of HG20 stream
394 395 bundle2-output: bundle parameter: e%7C%21%207/=babar%25%23%3D%3Dtutu simple
395 396 bundle2-output: start of parts
396 397 bundle2-output: end of bundle
397 398
398 399 file content is ok
399 400
400 401 $ f --hexdump ../out.hg2
401 402 ../out.hg2:
402 403 0000: 48 47 32 30 00 00 00 29 65 25 37 43 25 32 31 25 |HG20...)e%7C%21%|
403 404 0010: 32 30 37 2f 3d 62 61 62 61 72 25 32 35 25 32 33 |207/=babar%25%23|
404 405 0020: 25 33 44 25 33 44 74 75 74 75 20 73 69 6d 70 6c |%3D%3Dtutu simpl|
405 406 0030: 65 00 00 00 00 |e....|
406 407
407 408 unbundling debug
408 409
409 410 $ hg statbundle2 --debug --config progress.debug=true --config devel.bundle2.debug=true < ../out.hg2
410 411 bundle2-input: start processing of HG20 stream
411 412 bundle2-input: reading bundle2 stream parameters
412 413 bundle2-input: ignoring unknown parameter 'e|! 7/'
413 414 bundle2-input: ignoring unknown parameter 'simple'
414 415 options count: 2
415 416 - e|! 7/
416 417 babar%#==tutu
417 418 - simple
418 419 bundle2-input: start extraction of bundle2 parts
419 420 bundle2-input: part header size: 0
420 421 bundle2-input: end of bundle2 stream
421 422 parts count: 0
422 423
423 424
424 425 Test buggy input
425 426 ---------------------------------------------------
426 427
427 428 empty parameter name
428 429
429 430 $ hg bundle2 --param '' --quiet
430 431 abort: empty parameter name
431 432 [255]
432 433
433 434 bad parameter name
434 435
435 436 $ hg bundle2 --param 42babar
436 437 abort: non letter first character: '42babar'
437 438 [255]
438 439
439 440
440 441 Test part
441 442 =================
442 443
443 444 $ hg bundle2 --parts ../parts.hg2 --debug --config progress.debug=true --config devel.bundle2.debug=true
444 445 bundle2-output-bundle: "HG20", 7 parts total
445 446 bundle2-output: start emission of HG20 stream
446 447 bundle2-output: bundle parameter:
447 448 bundle2-output: start of parts
448 449 bundle2-output: bundle part: "test:empty"
449 450 bundle2-output-part: "test:empty" (advisory) empty payload
450 451 bundle2-output: part 0: "test:empty"
451 452 bundle2-output: header chunk size: 17
452 453 bundle2-output: closing payload chunk
453 454 bundle2-output: bundle part: "test:empty"
454 455 bundle2-output-part: "test:empty" (advisory) empty payload
455 456 bundle2-output: part 1: "test:empty"
456 457 bundle2-output: header chunk size: 17
457 458 bundle2-output: closing payload chunk
458 459 bundle2-output: bundle part: "test:song"
459 460 bundle2-output-part: "test:song" (advisory) 178 bytes payload
460 461 bundle2-output: part 2: "test:song"
461 462 bundle2-output: header chunk size: 16
462 463 bundle2-output: payload chunk size: 178
463 464 bundle2-output: closing payload chunk
464 465 bundle2-output: bundle part: "test:debugreply"
465 466 bundle2-output-part: "test:debugreply" (advisory) empty payload
466 467 bundle2-output: part 3: "test:debugreply"
467 468 bundle2-output: header chunk size: 22
468 469 bundle2-output: closing payload chunk
469 470 bundle2-output: bundle part: "test:math"
470 471 bundle2-output-part: "test:math" (advisory) (params: 2 mandatory 2 advisory) 2 bytes payload
471 472 bundle2-output: part 4: "test:math"
472 473 bundle2-output: header chunk size: 43
473 474 bundle2-output: payload chunk size: 2
474 475 bundle2-output: closing payload chunk
475 476 bundle2-output: bundle part: "test:song"
476 477 bundle2-output-part: "test:song" (advisory) (params: 1 mandatory) empty payload
477 478 bundle2-output: part 5: "test:song"
478 479 bundle2-output: header chunk size: 29
479 480 bundle2-output: closing payload chunk
480 481 bundle2-output: bundle part: "test:ping"
481 482 bundle2-output-part: "test:ping" (advisory) empty payload
482 483 bundle2-output: part 6: "test:ping"
483 484 bundle2-output: header chunk size: 16
484 485 bundle2-output: closing payload chunk
485 486 bundle2-output: end of bundle
486 487
487 488 $ f --hexdump ../parts.hg2
488 489 ../parts.hg2:
489 490 0000: 48 47 32 30 00 00 00 00 00 00 00 11 0a 74 65 73 |HG20.........tes|
490 491 0010: 74 3a 65 6d 70 74 79 00 00 00 00 00 00 00 00 00 |t:empty.........|
491 492 0020: 00 00 00 00 11 0a 74 65 73 74 3a 65 6d 70 74 79 |......test:empty|
492 493 0030: 00 00 00 01 00 00 00 00 00 00 00 00 00 10 09 74 |...............t|
493 494 0040: 65 73 74 3a 73 6f 6e 67 00 00 00 02 00 00 00 00 |est:song........|
494 495 0050: 00 b2 50 61 74 61 6c 69 20 44 69 72 61 70 61 74 |..Patali Dirapat|
495 496 0060: 61 2c 20 43 72 6f 6d 64 61 20 43 72 6f 6d 64 61 |a, Cromda Cromda|
496 497 0070: 20 52 69 70 61 6c 6f 2c 20 50 61 74 61 20 50 61 | Ripalo, Pata Pa|
497 498 0080: 74 61 2c 20 4b 6f 20 4b 6f 20 4b 6f 0a 42 6f 6b |ta, Ko Ko Ko.Bok|
498 499 0090: 6f 72 6f 20 44 69 70 6f 75 6c 69 74 6f 2c 20 52 |oro Dipoulito, R|
499 500 00a0: 6f 6e 64 69 20 52 6f 6e 64 69 20 50 65 70 69 6e |ondi Rondi Pepin|
500 501 00b0: 6f 2c 20 50 61 74 61 20 50 61 74 61 2c 20 4b 6f |o, Pata Pata, Ko|
501 502 00c0: 20 4b 6f 20 4b 6f 0a 45 6d 61 6e 61 20 4b 61 72 | Ko Ko.Emana Kar|
502 503 00d0: 61 73 73 6f 6c 69 2c 20 4c 6f 75 63 72 61 20 4c |assoli, Loucra L|
503 504 00e0: 6f 75 63 72 61 20 50 6f 6e 70 6f 6e 74 6f 2c 20 |oucra Ponponto, |
504 505 00f0: 50 61 74 61 20 50 61 74 61 2c 20 4b 6f 20 4b 6f |Pata Pata, Ko Ko|
505 506 0100: 20 4b 6f 2e 00 00 00 00 00 00 00 16 0f 74 65 73 | Ko..........tes|
506 507 0110: 74 3a 64 65 62 75 67 72 65 70 6c 79 00 00 00 03 |t:debugreply....|
507 508 0120: 00 00 00 00 00 00 00 00 00 2b 09 74 65 73 74 3a |.........+.test:|
508 509 0130: 6d 61 74 68 00 00 00 04 02 01 02 04 01 04 07 03 |math............|
509 510 0140: 70 69 33 2e 31 34 65 32 2e 37 32 63 6f 6f 6b 69 |pi3.14e2.72cooki|
510 511 0150: 6e 67 72 61 77 00 00 00 02 34 32 00 00 00 00 00 |ngraw....42.....|
511 512 0160: 00 00 1d 09 74 65 73 74 3a 73 6f 6e 67 00 00 00 |....test:song...|
512 513 0170: 05 01 00 0b 00 72 61 6e 64 6f 6d 70 61 72 61 6d |.....randomparam|
513 514 0180: 00 00 00 00 00 00 00 10 09 74 65 73 74 3a 70 69 |.........test:pi|
514 515 0190: 6e 67 00 00 00 06 00 00 00 00 00 00 00 00 00 00 |ng..............|
515 516
516 517
517 518 $ hg statbundle2 < ../parts.hg2
518 519 options count: 0
519 520 :test:empty:
520 521 mandatory: 0
521 522 advisory: 0
522 523 payload: 0 bytes
523 524 :test:empty:
524 525 mandatory: 0
525 526 advisory: 0
526 527 payload: 0 bytes
527 528 :test:song:
528 529 mandatory: 0
529 530 advisory: 0
530 531 payload: 178 bytes
531 532 :test:debugreply:
532 533 mandatory: 0
533 534 advisory: 0
534 535 payload: 0 bytes
535 536 :test:math:
536 537 mandatory: 2
537 538 advisory: 1
538 539 payload: 2 bytes
539 540 :test:song:
540 541 mandatory: 1
541 542 advisory: 0
542 543 payload: 0 bytes
543 544 :test:ping:
544 545 mandatory: 0
545 546 advisory: 0
546 547 payload: 0 bytes
547 548 parts count: 7
548 549
549 550 $ hg statbundle2 --debug --config progress.debug=true --config devel.bundle2.debug=true < ../parts.hg2
550 551 bundle2-input: start processing of HG20 stream
551 552 bundle2-input: reading bundle2 stream parameters
552 553 options count: 0
553 554 bundle2-input: start extraction of bundle2 parts
554 555 bundle2-input: part header size: 17
555 556 bundle2-input: part type: "test:empty"
556 557 bundle2-input: part id: "0"
557 558 bundle2-input: part parameters: 0
558 559 :test:empty:
559 560 mandatory: 0
560 561 advisory: 0
561 562 bundle2-input: payload chunk size: 0
562 563 payload: 0 bytes
563 564 bundle2-input: part header size: 17
564 565 bundle2-input: part type: "test:empty"
565 566 bundle2-input: part id: "1"
566 567 bundle2-input: part parameters: 0
567 568 :test:empty:
568 569 mandatory: 0
569 570 advisory: 0
570 571 bundle2-input: payload chunk size: 0
571 572 payload: 0 bytes
572 573 bundle2-input: part header size: 16
573 574 bundle2-input: part type: "test:song"
574 575 bundle2-input: part id: "2"
575 576 bundle2-input: part parameters: 0
576 577 :test:song:
577 578 mandatory: 0
578 579 advisory: 0
579 580 bundle2-input: payload chunk size: 178
580 581 bundle2-input: payload chunk size: 0
581 582 bundle2-input-part: total payload size 178
582 583 payload: 178 bytes
583 584 bundle2-input: part header size: 22
584 585 bundle2-input: part type: "test:debugreply"
585 586 bundle2-input: part id: "3"
586 587 bundle2-input: part parameters: 0
587 588 :test:debugreply:
588 589 mandatory: 0
589 590 advisory: 0
590 591 bundle2-input: payload chunk size: 0
591 592 payload: 0 bytes
592 593 bundle2-input: part header size: 43
593 594 bundle2-input: part type: "test:math"
594 595 bundle2-input: part id: "4"
595 596 bundle2-input: part parameters: 3
596 597 :test:math:
597 598 mandatory: 2
598 599 advisory: 1
599 600 bundle2-input: payload chunk size: 2
600 601 bundle2-input: payload chunk size: 0
601 602 bundle2-input-part: total payload size 2
602 603 payload: 2 bytes
603 604 bundle2-input: part header size: 29
604 605 bundle2-input: part type: "test:song"
605 606 bundle2-input: part id: "5"
606 607 bundle2-input: part parameters: 1
607 608 :test:song:
608 609 mandatory: 1
609 610 advisory: 0
610 611 bundle2-input: payload chunk size: 0
611 612 payload: 0 bytes
612 613 bundle2-input: part header size: 16
613 614 bundle2-input: part type: "test:ping"
614 615 bundle2-input: part id: "6"
615 616 bundle2-input: part parameters: 0
616 617 :test:ping:
617 618 mandatory: 0
618 619 advisory: 0
619 620 bundle2-input: payload chunk size: 0
620 621 payload: 0 bytes
621 622 bundle2-input: part header size: 0
622 623 bundle2-input: end of bundle2 stream
623 624 parts count: 7
624 625
625 626 Test actual unbundling of test part
626 627 =======================================
627 628
628 629 Process the bundle
629 630
630 631 $ hg unbundle2 --debug --config progress.debug=true --config devel.bundle2.debug=true < ../parts.hg2
631 632 bundle2-input: start processing of HG20 stream
632 633 bundle2-input: reading bundle2 stream parameters
633 634 bundle2-input-bundle: with-transaction
634 635 bundle2-input: start extraction of bundle2 parts
635 636 bundle2-input: part header size: 17
636 637 bundle2-input: part type: "test:empty"
637 638 bundle2-input: part id: "0"
638 639 bundle2-input: part parameters: 0
639 640 bundle2-input: ignoring unsupported advisory part test:empty
640 641 bundle2-input-part: "test:empty" (advisory) unsupported-type
641 642 bundle2-input: payload chunk size: 0
642 643 bundle2-input: part header size: 17
643 644 bundle2-input: part type: "test:empty"
644 645 bundle2-input: part id: "1"
645 646 bundle2-input: part parameters: 0
646 647 bundle2-input: ignoring unsupported advisory part test:empty
647 648 bundle2-input-part: "test:empty" (advisory) unsupported-type
648 649 bundle2-input: payload chunk size: 0
649 650 bundle2-input: part header size: 16
650 651 bundle2-input: part type: "test:song"
651 652 bundle2-input: part id: "2"
652 653 bundle2-input: part parameters: 0
653 654 bundle2-input: found a handler for part 'test:song'
654 655 bundle2-input-part: "test:song" (advisory) supported
655 656 The choir starts singing:
656 657 bundle2-input: payload chunk size: 178
657 658 bundle2-input: payload chunk size: 0
658 659 bundle2-input-part: total payload size 178
659 660 Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
660 661 Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
661 662 Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
662 663 bundle2-input: part header size: 22
663 664 bundle2-input: part type: "test:debugreply"
664 665 bundle2-input: part id: "3"
665 666 bundle2-input: part parameters: 0
666 667 bundle2-input: found a handler for part 'test:debugreply'
667 668 bundle2-input-part: "test:debugreply" (advisory) supported
668 669 debugreply: no reply
669 670 bundle2-input: payload chunk size: 0
670 671 bundle2-input: part header size: 43
671 672 bundle2-input: part type: "test:math"
672 673 bundle2-input: part id: "4"
673 674 bundle2-input: part parameters: 3
674 675 bundle2-input: ignoring unsupported advisory part test:math
675 676 bundle2-input-part: "test:math" (advisory) (params: 2 mandatory 2 advisory) unsupported-type
676 677 bundle2-input: payload chunk size: 2
677 678 bundle2-input: payload chunk size: 0
678 679 bundle2-input-part: total payload size 2
679 680 bundle2-input: part header size: 29
680 681 bundle2-input: part type: "test:song"
681 682 bundle2-input: part id: "5"
682 683 bundle2-input: part parameters: 1
683 684 bundle2-input: found a handler for part 'test:song'
684 685 bundle2-input: ignoring unsupported advisory part test:song - randomparam
685 686 bundle2-input-part: "test:song" (advisory) (params: 1 mandatory) unsupported-params (['randomparam'])
686 687 bundle2-input: payload chunk size: 0
687 688 bundle2-input: part header size: 16
688 689 bundle2-input: part type: "test:ping"
689 690 bundle2-input: part id: "6"
690 691 bundle2-input: part parameters: 0
691 692 bundle2-input: found a handler for part 'test:ping'
692 693 bundle2-input-part: "test:ping" (advisory) supported
693 694 received ping request (id 6)
694 695 bundle2-input: payload chunk size: 0
695 696 bundle2-input: part header size: 0
696 697 bundle2-input: end of bundle2 stream
697 698 bundle2-input-bundle: 6 parts total
698 699 0 unread bytes
699 700 3 total verses sung
700 701
701 702 Unbundle with an unknown mandatory part
702 703 (should abort)
703 704
704 705 $ hg bundle2 --parts --unknown ../unknown.hg2
705 706
706 707 $ hg unbundle2 < ../unknown.hg2
707 708 The choir starts singing:
708 709 Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
709 710 Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
710 711 Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
711 712 debugreply: no reply
712 713 0 unread bytes
713 714 abort: missing support for test:unknown
714 715 [255]
715 716
716 717 Unbundle with an unknown mandatory part parameters
717 718 (should abort)
718 719
719 720 $ hg bundle2 --unknownparams ../unknown.hg2
720 721
721 722 $ hg unbundle2 < ../unknown.hg2
722 723 0 unread bytes
723 724 abort: missing support for test:song - randomparams
724 725 [255]
725 726
726 727 unbundle with a reply
727 728
728 729 $ hg bundle2 --parts --reply ../parts-reply.hg2
729 730 $ hg unbundle2 ../reply.hg2 < ../parts-reply.hg2
730 731 0 unread bytes
731 732 3 total verses sung
732 733
733 734 The reply is a bundle
734 735
735 736 $ f --hexdump ../reply.hg2
736 737 ../reply.hg2:
737 738 0000: 48 47 32 30 00 00 00 00 00 00 00 1b 06 6f 75 74 |HG20.........out|
738 739 0010: 70 75 74 00 00 00 00 00 01 0b 01 69 6e 2d 72 65 |put........in-re|
739 740 0020: 70 6c 79 2d 74 6f 33 00 00 00 d9 54 68 65 20 63 |ply-to3....The c|
740 741 0030: 68 6f 69 72 20 73 74 61 72 74 73 20 73 69 6e 67 |hoir starts sing|
741 742 0040: 69 6e 67 3a 0a 20 20 20 20 50 61 74 61 6c 69 20 |ing:. Patali |
742 743 0050: 44 69 72 61 70 61 74 61 2c 20 43 72 6f 6d 64 61 |Dirapata, Cromda|
743 744 0060: 20 43 72 6f 6d 64 61 20 52 69 70 61 6c 6f 2c 20 | Cromda Ripalo, |
744 745 0070: 50 61 74 61 20 50 61 74 61 2c 20 4b 6f 20 4b 6f |Pata Pata, Ko Ko|
745 746 0080: 20 4b 6f 0a 20 20 20 20 42 6f 6b 6f 72 6f 20 44 | Ko. Bokoro D|
746 747 0090: 69 70 6f 75 6c 69 74 6f 2c 20 52 6f 6e 64 69 20 |ipoulito, Rondi |
747 748 00a0: 52 6f 6e 64 69 20 50 65 70 69 6e 6f 2c 20 50 61 |Rondi Pepino, Pa|
748 749 00b0: 74 61 20 50 61 74 61 2c 20 4b 6f 20 4b 6f 20 4b |ta Pata, Ko Ko K|
749 750 00c0: 6f 0a 20 20 20 20 45 6d 61 6e 61 20 4b 61 72 61 |o. Emana Kara|
750 751 00d0: 73 73 6f 6c 69 2c 20 4c 6f 75 63 72 61 20 4c 6f |ssoli, Loucra Lo|
751 752 00e0: 75 63 72 61 20 50 6f 6e 70 6f 6e 74 6f 2c 20 50 |ucra Ponponto, P|
752 753 00f0: 61 74 61 20 50 61 74 61 2c 20 4b 6f 20 4b 6f 20 |ata Pata, Ko Ko |
753 754 0100: 4b 6f 2e 0a 00 00 00 00 00 00 00 1b 06 6f 75 74 |Ko...........out|
754 755 0110: 70 75 74 00 00 00 01 00 01 0b 01 69 6e 2d 72 65 |put........in-re|
755 756 0120: 70 6c 79 2d 74 6f 34 00 00 00 c9 64 65 62 75 67 |ply-to4....debug|
756 757 0130: 72 65 70 6c 79 3a 20 63 61 70 61 62 69 6c 69 74 |reply: capabilit|
757 758 0140: 69 65 73 3a 0a 64 65 62 75 67 72 65 70 6c 79 3a |ies:.debugreply:|
758 759 0150: 20 20 20 20 20 27 63 69 74 79 3d 21 27 0a 64 65 | 'city=!'.de|
759 760 0160: 62 75 67 72 65 70 6c 79 3a 20 20 20 20 20 20 20 |bugreply: |
760 761 0170: 20 20 27 63 65 6c 65 73 74 65 2c 76 69 6c 6c 65 | 'celeste,ville|
761 762 0180: 27 0a 64 65 62 75 67 72 65 70 6c 79 3a 20 20 20 |'.debugreply: |
762 763 0190: 20 20 27 65 6c 65 70 68 61 6e 74 73 27 0a 64 65 | 'elephants'.de|
763 764 01a0: 62 75 67 72 65 70 6c 79 3a 20 20 20 20 20 20 20 |bugreply: |
764 765 01b0: 20 20 27 62 61 62 61 72 27 0a 64 65 62 75 67 72 | 'babar'.debugr|
765 766 01c0: 65 70 6c 79 3a 20 20 20 20 20 20 20 20 20 27 63 |eply: 'c|
766 767 01d0: 65 6c 65 73 74 65 27 0a 64 65 62 75 67 72 65 70 |eleste'.debugrep|
767 768 01e0: 6c 79 3a 20 20 20 20 20 27 70 69 6e 67 2d 70 6f |ly: 'ping-po|
768 769 01f0: 6e 67 27 0a 00 00 00 00 00 00 00 1e 09 74 65 73 |ng'..........tes|
769 770 0200: 74 3a 70 6f 6e 67 00 00 00 02 01 00 0b 01 69 6e |t:pong........in|
770 771 0210: 2d 72 65 70 6c 79 2d 74 6f 37 00 00 00 00 00 00 |-reply-to7......|
771 772 0220: 00 1b 06 6f 75 74 70 75 74 00 00 00 03 00 01 0b |...output.......|
772 773 0230: 01 69 6e 2d 72 65 70 6c 79 2d 74 6f 37 00 00 00 |.in-reply-to7...|
773 774 0240: 3d 72 65 63 65 69 76 65 64 20 70 69 6e 67 20 72 |=received ping r|
774 775 0250: 65 71 75 65 73 74 20 28 69 64 20 37 29 0a 72 65 |equest (id 7).re|
775 776 0260: 70 6c 79 69 6e 67 20 74 6f 20 70 69 6e 67 20 72 |plying to ping r|
776 777 0270: 65 71 75 65 73 74 20 28 69 64 20 37 29 0a 00 00 |equest (id 7)...|
777 778 0280: 00 00 00 00 00 00 |......|
778 779
779 780 The reply is valid
780 781
781 782 $ hg statbundle2 < ../reply.hg2
782 783 options count: 0
783 784 :output:
784 785 mandatory: 0
785 786 advisory: 1
786 787 payload: 217 bytes
787 788 :output:
788 789 mandatory: 0
789 790 advisory: 1
790 791 payload: 201 bytes
791 792 :test:pong:
792 793 mandatory: 1
793 794 advisory: 0
794 795 payload: 0 bytes
795 796 :output:
796 797 mandatory: 0
797 798 advisory: 1
798 799 payload: 61 bytes
799 800 parts count: 4
800 801
801 802 Unbundle the reply to get the output:
802 803
803 804 $ hg unbundle2 < ../reply.hg2
804 805 remote: The choir starts singing:
805 806 remote: Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
806 807 remote: Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
807 808 remote: Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
808 809 remote: debugreply: capabilities:
809 810 remote: debugreply: 'city=!'
810 811 remote: debugreply: 'celeste,ville'
811 812 remote: debugreply: 'elephants'
812 813 remote: debugreply: 'babar'
813 814 remote: debugreply: 'celeste'
814 815 remote: debugreply: 'ping-pong'
815 816 remote: received ping request (id 7)
816 817 remote: replying to ping request (id 7)
817 818 0 unread bytes
818 819
819 820 Test push race detection
820 821
821 822 $ hg bundle2 --pushrace ../part-race.hg2
822 823
823 824 $ hg unbundle2 < ../part-race.hg2
824 825 0 unread bytes
825 826 abort: push race: repository changed while pushing - please try again
826 827 [255]
827 828
828 829 Support for changegroup
829 830 ===================================
830 831
831 832 $ hg unbundle $TESTDIR/bundles/rebase.hg
832 833 adding changesets
833 834 adding manifests
834 835 adding file changes
835 836 added 8 changesets with 7 changes to 7 files (+3 heads)
836 837 (run 'hg heads' to see heads, 'hg merge' to merge)
837 838
838 839 $ hg log -G
839 840 o 8:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> H
840 841 |
841 842 | o 7:eea13746799a draft Nicolas Dumazet <nicdumz.commits@gmail.com> G
842 843 |/|
843 844 o | 6:24b6387c8c8c draft Nicolas Dumazet <nicdumz.commits@gmail.com> F
844 845 | |
845 846 | o 5:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
846 847 |/
847 848 | o 4:32af7686d403 draft Nicolas Dumazet <nicdumz.commits@gmail.com> D
848 849 | |
849 850 | o 3:5fddd98957c8 draft Nicolas Dumazet <nicdumz.commits@gmail.com> C
850 851 | |
851 852 | o 2:42ccdea3bb16 draft Nicolas Dumazet <nicdumz.commits@gmail.com> B
852 853 |/
853 854 o 1:cd010b8cd998 draft Nicolas Dumazet <nicdumz.commits@gmail.com> A
854 855
855 856 @ 0:3903775176ed draft test a
856 857
857 858
858 859 $ hg bundle2 --debug --config progress.debug=true --config devel.bundle2.debug=true --rev '8+7+5+4' ../rev.hg2
859 860 4 changesets found
860 861 list of changesets:
861 862 32af7686d403cf45b5d95f2d70cebea587ac806a
862 863 9520eea781bcca16c1e15acc0ba14335a0e8e5ba
863 864 eea13746799a9e0bfd88f29d3c2e9dc9389f524f
864 865 02de42196ebee42ef284b6780a87cdc96e8eaab6
865 866 bundle2-output-bundle: "HG20", 1 parts total
866 867 bundle2-output: start emission of HG20 stream
867 868 bundle2-output: bundle parameter:
868 869 bundle2-output: start of parts
869 870 bundle2-output: bundle part: "changegroup"
870 871 bundle2-output-part: "changegroup" (advisory) streamed payload
871 872 bundle2-output: part 0: "changegroup"
872 873 bundle2-output: header chunk size: 18
873 874 bundling: 1/4 changesets (25.00%)
874 875 bundling: 2/4 changesets (50.00%)
875 876 bundling: 3/4 changesets (75.00%)
876 877 bundling: 4/4 changesets (100.00%)
877 878 bundling: 1/4 manifests (25.00%)
878 879 bundling: 2/4 manifests (50.00%)
879 880 bundling: 3/4 manifests (75.00%)
880 881 bundling: 4/4 manifests (100.00%)
881 882 bundling: D 1/3 files (33.33%)
882 883 bundling: E 2/3 files (66.67%)
883 884 bundling: H 3/3 files (100.00%)
884 885 bundle2-output: payload chunk size: 1555
885 886 bundle2-output: closing payload chunk
886 887 bundle2-output: end of bundle
887 888
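The dump below can be decoded by hand. A minimal reader for the magic, the stream parameters, and the first part header might look like this (a sketch for inspection only, not Mercurial's own parser):

    import struct

    def peekfirstpart(path):
        # Identify the first part in a bundle2 file such as ../rev.hg2.
        with open(path, 'rb') as f:
            assert f.read(4) == b'HG20'                  # magic
            psize = struct.unpack('>I', f.read(4))[0]    # params size
            f.read(psize)                                # skip stream params
            hsize = struct.unpack('>I', f.read(4))[0]    # part header size
            header = f.read(hsize)
        typesize = header[0]
        parttype = header[1:1 + typesize]
        partid = struct.unpack('>I', header[1 + typesize:5 + typesize])[0]
        return parttype, partid

Run against ../rev.hg2 this returns (b'changegroup', 0), matching the 18-byte header chunk reported above.
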
888 889 $ f --hexdump ../rev.hg2
889 890 ../rev.hg2:
890 891 0000: 48 47 32 30 00 00 00 00 00 00 00 12 0b 63 68 61 |HG20.........cha|
891 892 0010: 6e 67 65 67 72 6f 75 70 00 00 00 00 00 00 00 00 |ngegroup........|
892 893 0020: 06 13 00 00 00 a4 32 af 76 86 d4 03 cf 45 b5 d9 |......2.v....E..|
893 894 0030: 5f 2d 70 ce be a5 87 ac 80 6a 5f dd d9 89 57 c8 |_-p......j_...W.|
894 895 0040: a5 4a 4d 43 6d fe 1d a9 d8 7f 21 a1 b9 7b 00 00 |.JMCm.....!..{..|
895 896 0050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
896 897 0060: 00 00 32 af 76 86 d4 03 cf 45 b5 d9 5f 2d 70 ce |..2.v....E.._-p.|
897 898 0070: be a5 87 ac 80 6a 00 00 00 00 00 00 00 29 00 00 |.....j.......)..|
898 899 0080: 00 29 36 65 31 66 34 63 34 37 65 63 62 35 33 33 |.)6e1f4c47ecb533|
899 900 0090: 66 66 64 30 63 38 65 35 32 63 64 63 38 38 61 66 |ffd0c8e52cdc88af|
900 901 00a0: 62 36 63 64 33 39 65 32 30 63 0a 00 00 00 66 00 |b6cd39e20c....f.|
901 902 00b0: 00 00 68 00 00 00 02 44 0a 00 00 00 69 00 00 00 |..h....D....i...|
902 903 00c0: 6a 00 00 00 01 44 00 00 00 a4 95 20 ee a7 81 bc |j....D..... ....|
903 904 00d0: ca 16 c1 e1 5a cc 0b a1 43 35 a0 e8 e5 ba cd 01 |....Z...C5......|
904 905 00e0: 0b 8c d9 98 f3 98 1a 5a 81 15 f9 4f 8d a4 ab 50 |.......Z...O...P|
905 906 00f0: 60 89 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |`...............|
906 907 0100: 00 00 00 00 00 00 95 20 ee a7 81 bc ca 16 c1 e1 |....... ........|
907 908 0110: 5a cc 0b a1 43 35 a0 e8 e5 ba 00 00 00 00 00 00 |Z...C5..........|
908 909 0120: 00 29 00 00 00 29 34 64 65 63 65 39 63 38 32 36 |.)...)4dece9c826|
909 910 0130: 66 36 39 34 39 30 35 30 37 62 39 38 63 36 33 38 |f69490507b98c638|
910 911 0140: 33 61 33 30 30 39 62 32 39 35 38 33 37 64 0a 00 |3a3009b295837d..|
911 912 0150: 00 00 66 00 00 00 68 00 00 00 02 45 0a 00 00 00 |..f...h....E....|
912 913 0160: 69 00 00 00 6a 00 00 00 01 45 00 00 00 a2 ee a1 |i...j....E......|
913 914 0170: 37 46 79 9a 9e 0b fd 88 f2 9d 3c 2e 9d c9 38 9f |7Fy.......<...8.|
914 915 0180: 52 4f 24 b6 38 7c 8c 8c ae 37 17 88 80 f3 fa 95 |RO$.8|...7......|
915 916 0190: de d3 cb 1c f7 85 95 20 ee a7 81 bc ca 16 c1 e1 |....... ........|
916 917 01a0: 5a cc 0b a1 43 35 a0 e8 e5 ba ee a1 37 46 79 9a |Z...C5......7Fy.|
917 918 01b0: 9e 0b fd 88 f2 9d 3c 2e 9d c9 38 9f 52 4f 00 00 |......<...8.RO..|
918 919 01c0: 00 00 00 00 00 29 00 00 00 29 33 36 35 62 39 33 |.....)...)365b93|
919 920 01d0: 64 35 37 66 64 66 34 38 31 34 65 32 62 35 39 31 |d57fdf4814e2b591|
920 921 01e0: 31 64 36 62 61 63 66 66 32 62 31 32 30 31 34 34 |1d6bacff2b120144|
921 922 01f0: 34 31 0a 00 00 00 66 00 00 00 68 00 00 00 00 00 |41....f...h.....|
922 923 0200: 00 00 69 00 00 00 6a 00 00 00 01 47 00 00 00 a4 |..i...j....G....|
923 924 0210: 02 de 42 19 6e be e4 2e f2 84 b6 78 0a 87 cd c9 |..B.n......x....|
924 925 0220: 6e 8e aa b6 24 b6 38 7c 8c 8c ae 37 17 88 80 f3 |n...$.8|...7....|
925 926 0230: fa 95 de d3 cb 1c f7 85 00 00 00 00 00 00 00 00 |................|
926 927 0240: 00 00 00 00 00 00 00 00 00 00 00 00 02 de 42 19 |..............B.|
927 928 0250: 6e be e4 2e f2 84 b6 78 0a 87 cd c9 6e 8e aa b6 |n......x....n...|
928 929 0260: 00 00 00 00 00 00 00 29 00 00 00 29 38 62 65 65 |.......)...)8bee|
929 930 0270: 34 38 65 64 63 37 33 31 38 35 34 31 66 63 30 30 |48edc7318541fc00|
930 931 0280: 31 33 65 65 34 31 62 30 38 39 32 37 36 61 38 63 |13ee41b089276a8c|
931 932 0290: 32 34 62 66 0a 00 00 00 66 00 00 00 66 00 00 00 |24bf....f...f...|
932 933 02a0: 02 48 0a 00 00 00 67 00 00 00 68 00 00 00 01 48 |.H....g...h....H|
933 934 02b0: 00 00 00 00 00 00 00 8b 6e 1f 4c 47 ec b5 33 ff |........n.LG..3.|
934 935 02c0: d0 c8 e5 2c dc 88 af b6 cd 39 e2 0c 66 a5 a0 18 |...,.....9..f...|
935 936 02d0: 17 fd f5 23 9c 27 38 02 b5 b7 61 8d 05 1c 89 e4 |...#.'8...a.....|
936 937 02e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
937 938 02f0: 00 00 00 00 32 af 76 86 d4 03 cf 45 b5 d9 5f 2d |....2.v....E.._-|
938 939 0300: 70 ce be a5 87 ac 80 6a 00 00 00 81 00 00 00 81 |p......j........|
939 940 0310: 00 00 00 2b 44 00 63 33 66 31 63 61 32 39 32 34 |...+D.c3f1ca2924|
940 941 0320: 63 31 36 61 31 39 62 30 36 35 36 61 38 34 39 30 |c16a19b0656a8490|
941 942 0330: 30 65 35 30 34 65 35 62 30 61 65 63 32 64 0a 00 |0e504e5b0aec2d..|
942 943 0340: 00 00 8b 4d ec e9 c8 26 f6 94 90 50 7b 98 c6 38 |...M...&...P{..8|
943 944 0350: 3a 30 09 b2 95 83 7d 00 7d 8c 9d 88 84 13 25 f5 |:0....}.}.....%.|
944 945 0360: c6 b0 63 71 b3 5b 4e 8a 2b 1a 83 00 00 00 00 00 |..cq.[N.+.......|
945 946 0370: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 95 |................|
946 947 0380: 20 ee a7 81 bc ca 16 c1 e1 5a cc 0b a1 43 35 a0 | ........Z...C5.|
947 948 0390: e8 e5 ba 00 00 00 2b 00 00 00 ac 00 00 00 2b 45 |......+.......+E|
948 949 03a0: 00 39 63 36 66 64 30 33 35 30 61 36 63 30 64 30 |.9c6fd0350a6c0d0|
949 950 03b0: 63 34 39 64 34 61 39 63 35 30 31 37 63 66 30 37 |c49d4a9c5017cf07|
950 951 03c0: 30 34 33 66 35 34 65 35 38 0a 00 00 00 8b 36 5b |043f54e58.....6[|
951 952 03d0: 93 d5 7f df 48 14 e2 b5 91 1d 6b ac ff 2b 12 01 |....H.....k..+..|
952 953 03e0: 44 41 28 a5 84 c6 5e f1 21 f8 9e b6 6a b7 d0 bc |DA(...^.!...j...|
953 954 03f0: 15 3d 80 99 e7 ce 4d ec e9 c8 26 f6 94 90 50 7b |.=....M...&...P{|
954 955 0400: 98 c6 38 3a 30 09 b2 95 83 7d ee a1 37 46 79 9a |..8:0....}..7Fy.|
955 956 0410: 9e 0b fd 88 f2 9d 3c 2e 9d c9 38 9f 52 4f 00 00 |......<...8.RO..|
956 957 0420: 00 56 00 00 00 56 00 00 00 2b 46 00 32 32 62 66 |.V...V...+F.22bf|
957 958 0430: 63 66 64 36 32 61 32 31 61 33 32 38 37 65 64 62 |cfd62a21a3287edb|
958 959 0440: 64 34 64 36 35 36 32 31 38 64 30 66 35 32 35 65 |d4d656218d0f525e|
959 960 0450: 64 37 36 61 0a 00 00 00 97 8b ee 48 ed c7 31 85 |d76a.......H..1.|
960 961 0460: 41 fc 00 13 ee 41 b0 89 27 6a 8c 24 bf 28 a5 84 |A....A..'j.$.(..|
961 962 0470: c6 5e f1 21 f8 9e b6 6a b7 d0 bc 15 3d 80 99 e7 |.^.!...j....=...|
962 963 0480: ce 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
963 964 0490: 00 00 00 00 00 02 de 42 19 6e be e4 2e f2 84 b6 |.......B.n......|
964 965 04a0: 78 0a 87 cd c9 6e 8e aa b6 00 00 00 2b 00 00 00 |x....n......+...|
965 966 04b0: 56 00 00 00 00 00 00 00 81 00 00 00 81 00 00 00 |V...............|
966 967 04c0: 2b 48 00 38 35 30 30 31 38 39 65 37 34 61 39 65 |+H.8500189e74a9e|
967 968 04d0: 30 34 37 35 65 38 32 32 30 39 33 62 63 37 64 62 |0475e822093bc7db|
968 969 04e0: 30 64 36 33 31 61 65 62 30 62 34 0a 00 00 00 00 |0d631aeb0b4.....|
969 970 04f0: 00 00 00 05 44 00 00 00 62 c3 f1 ca 29 24 c1 6a |....D...b...)$.j|
970 971 0500: 19 b0 65 6a 84 90 0e 50 4e 5b 0a ec 2d 00 00 00 |..ej...PN[..-...|
971 972 0510: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
972 973 0520: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
973 974 0530: 00 00 00 00 00 32 af 76 86 d4 03 cf 45 b5 d9 5f |.....2.v....E.._|
974 975 0540: 2d 70 ce be a5 87 ac 80 6a 00 00 00 00 00 00 00 |-p......j.......|
975 976 0550: 00 00 00 00 02 44 0a 00 00 00 00 00 00 00 05 45 |.....D.........E|
976 977 0560: 00 00 00 62 9c 6f d0 35 0a 6c 0d 0c 49 d4 a9 c5 |...b.o.5.l..I...|
977 978 0570: 01 7c f0 70 43 f5 4e 58 00 00 00 00 00 00 00 00 |.|.pC.NX........|
978 979 0580: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
979 980 0590: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
980 981 05a0: 95 20 ee a7 81 bc ca 16 c1 e1 5a cc 0b a1 43 35 |. ........Z...C5|
981 982 05b0: a0 e8 e5 ba 00 00 00 00 00 00 00 00 00 00 00 02 |................|
982 983 05c0: 45 0a 00 00 00 00 00 00 00 05 48 00 00 00 62 85 |E.........H...b.|
983 984 05d0: 00 18 9e 74 a9 e0 47 5e 82 20 93 bc 7d b0 d6 31 |...t..G^. ..}..1|
984 985 05e0: ae b0 b4 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
985 986 05f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
986 987 0600: 00 00 00 00 00 00 00 00 00 00 00 02 de 42 19 6e |.............B.n|
987 988 0610: be e4 2e f2 84 b6 78 0a 87 cd c9 6e 8e aa b6 00 |......x....n....|
988 989 0620: 00 00 00 00 00 00 00 00 00 00 02 48 0a 00 00 00 |...........H....|
989 990 0630: 00 00 00 00 00 00 00 00 00 00 00 00 00 |.............|
990 991
991 992 $ hg debugbundle ../rev.hg2
992 993 Stream params: {}
993 994 changegroup -- {}
994 995 32af7686d403cf45b5d95f2d70cebea587ac806a
995 996 9520eea781bcca16c1e15acc0ba14335a0e8e5ba
996 997 eea13746799a9e0bfd88f29d3c2e9dc9389f524f
997 998 02de42196ebee42ef284b6780a87cdc96e8eaab6
998 999 $ hg unbundle ../rev.hg2
999 1000 adding changesets
1000 1001 adding manifests
1001 1002 adding file changes
1002 1003 added 0 changesets with 0 changes to 3 files
1003 1004 (run 'hg update' to get a working copy)
1004 1005
1005 1006 with reply
1006 1007
1007 1008 $ hg bundle2 --rev '8+7+5+4' --reply ../rev-rr.hg2
1008 1009 $ hg unbundle2 ../rev-reply.hg2 < ../rev-rr.hg2
1009 1010 0 unread bytes
1010 1011 addchangegroup return: 1
1011 1012
1012 1013 $ f --hexdump ../rev-reply.hg2
1013 1014 ../rev-reply.hg2:
1014 1015 0000: 48 47 32 30 00 00 00 00 00 00 00 2f 11 72 65 70 |HG20......./.rep|
1015 1016 0010: 6c 79 3a 63 68 61 6e 67 65 67 72 6f 75 70 00 00 |ly:changegroup..|
1016 1017 0020: 00 00 00 02 0b 01 06 01 69 6e 2d 72 65 70 6c 79 |........in-reply|
1017 1018 0030: 2d 74 6f 31 72 65 74 75 72 6e 31 00 00 00 00 00 |-to1return1.....|
1018 1019 0040: 00 00 1b 06 6f 75 74 70 75 74 00 00 00 01 00 01 |....output......|
1019 1020 0050: 0b 01 69 6e 2d 72 65 70 6c 79 2d 74 6f 31 00 00 |..in-reply-to1..|
1020 1021 0060: 00 64 61 64 64 69 6e 67 20 63 68 61 6e 67 65 73 |.dadding changes|
1021 1022 0070: 65 74 73 0a 61 64 64 69 6e 67 20 6d 61 6e 69 66 |ets.adding manif|
1022 1023 0080: 65 73 74 73 0a 61 64 64 69 6e 67 20 66 69 6c 65 |ests.adding file|
1023 1024 0090: 20 63 68 61 6e 67 65 73 0a 61 64 64 65 64 20 30 | changes.added 0|
1024 1025 00a0: 20 63 68 61 6e 67 65 73 65 74 73 20 77 69 74 68 | changesets with|
1025 1026 00b0: 20 30 20 63 68 61 6e 67 65 73 20 74 6f 20 33 20 | 0 changes to 3 |
1026 1027 00c0: 66 69 6c 65 73 0a 00 00 00 00 00 00 00 00 |files.........|
1027 1028
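Continuing the reader sketch from the changegroup section past the part id, the parameter block (two count bytes, the size pairs, then the keys and values) can be decoded as well; against the dump above, the reply:changegroup header yields in-reply-to='1' and return='1':

    def decodepartparams(header):
        # header: raw part header bytes (everything after the size field).
        typesize = header[0]
        pos = 1 + typesize + 4                 # skip part type and part id
        mandatory, advisory = header[pos], header[pos + 1]
        pos += 2
        sizes = []
        for _ in range(mandatory + advisory):
            sizes.append((header[pos], header[pos + 1]))
            pos += 2
        params = {}
        for ksize, vsize in sizes:
            key = header[pos:pos + ksize].decode('ascii')
            pos += ksize
            params[key] = header[pos:pos + vsize].decode('ascii')
            pos += vsize
        return params
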
1028 1029 Check handling of exception during generation.
1029 1030 ----------------------------------------------
1030 1031
1031 1032 $ hg bundle2 --genraise > ../genfailed.hg2
1032 1033 abort: Someone set up us the bomb!
1033 1034 [255]
1034 1035
1035 1036 Should still be a valid bundle
1036 1037
1037 1038 $ f --hexdump ../genfailed.hg2
1038 1039 ../genfailed.hg2:
1039 1040 0000: 48 47 32 30 00 00 00 00 00 00 00 0d 06 6f 75 74 |HG20.........out|
1040 1041 0010: 70 75 74 00 00 00 00 00 00 ff ff ff ff 00 00 00 |put.............|
1041 1042 0020: 48 0b 65 72 72 6f 72 3a 61 62 6f 72 74 00 00 00 |H.error:abort...|
1042 1043 0030: 00 01 00 07 2d 6d 65 73 73 61 67 65 75 6e 65 78 |....-messageunex|
1043 1044 0040: 70 65 63 74 65 64 20 65 72 72 6f 72 3a 20 53 6f |pected error: So|
1044 1045 0050: 6d 65 6f 6e 65 20 73 65 74 20 75 70 20 75 73 20 |meone set up us |
1045 1046 0060: 74 68 65 20 62 6f 6d 62 21 00 00 00 00 00 00 00 |the bomb!.......|
1046 1047 0070: 00 |.|
1047 1048
1048 1049 And its handling on the other side raises a clean exception
1049 1050
1050 1051 $ cat ../genfailed.hg2 | hg unbundle2
1051 1052 0 unread bytes
1052 1053 abort: unexpected error: Someone set up us the bomb!
1053 1054 [255]
1054 1055
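The interesting bytes in the dump above are the ff ff ff ff: a payload chunk size of -1 is bundle2's interruption marker, announcing an out-of-band part (here error:abort with a mandatory message parameter) before the stream ends. A sketch of walking payload chunks with that marker in mind (readpart stands in for a full part parser and is an assumption of this sketch):

    import struct

    def iterpayloadchunks(f, readpart):
        # Walk one part's payload; size 0 ends it, -1 (ff ff ff ff)
        # announces an embedded out-of-band part such as error:abort,
        # which must be consumed before the payload resumes.
        while True:
            size = struct.unpack('>i', f.read(4))[0]   # signed on purpose
            if size == 0:
                return
            if size == -1:
                readpart(f)                            # interrupting part
                continue
            yield f.read(size)
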
1055 1056 Test compression
1056 1057 ================
1057 1058
1058 1059 Simple case where it just works: GZ
1059 1060 ----------------------------------
1060 1061
1061 1062 $ hg bundle2 --compress GZ --rev '8+7+5+4' ../rev.hg2.bz
1062 1063 $ f --hexdump ../rev.hg2.bz
1063 1064 ../rev.hg2.bz:
1064 1065 0000: 48 47 32 30 00 00 00 0e 43 6f 6d 70 72 65 73 73 |HG20....Compress|
1065 1066 0010: 69 6f 6e 3d 47 5a 78 9c 95 94 7d 68 95 55 1c c7 |ion=GZx...}h.U..|
1066 1067 0020: 9f 3b 31 e8 ce fa c3 65 be a0 a4 b4 52 b9 29 e7 |.;1....e....R.).|
1067 1068 0030: f5 79 ce 89 fa 63 ed 5e 77 8b 9c c3 3f 2a 1c 68 |.y...c.^w...?*.h|
1068 1069 0040: cf 79 9b dd 6a ae b0 28 74 b8 e5 96 5b bb 86 61 |.y..j..(t...[..a|
1069 1070 0050: a3 15 6e 3a 71 c8 6a e8 a5 da 95 64 28 22 ce 69 |..n:q.j....d(".i|
1070 1071 0060: cd 06 59 34 28 2b 51 2a 58 c3 17 56 2a 9a 9d 67 |..Y4(+Q*X..V*..g|
1071 1072 0070: dc c6 35 9e c4 1d f8 9e 87 f3 9c f3 3b bf 0f bf |..5.........;...|
1072 1073 0080: 97 e3 38 ce f4 42 b9 d6 af ae d2 55 af ae 7b ad |..8..B.....U..{.|
1073 1074 0090: c6 c9 8d bb 8a ec b4 07 ed 7f fd ed d3 53 be 4e |.............S.N|
1074 1075 00a0: f4 0e af 59 52 73 ea 50 d7 96 9e ba d4 9a 1f 87 |...YRs.P........|
1075 1076 00b0: 9b 9f 1d e8 7a 6a 79 e9 cb 7f cf eb fe 7e d3 82 |....zjy......~..|
1076 1077 00c0: ce 2f 36 38 21 23 cc 36 b7 b5 38 90 ab a1 21 92 |./68!#.6..8...!.|
1077 1078 00d0: 78 5a 0a 8a b1 31 0a 48 a6 29 92 4a 32 e6 1b e1 |xZ...1.H.).J2...|
1078 1079 00e0: 4a 85 b9 46 40 46 ed 61 63 b5 d6 aa 20 1e ac 5e |J..F@F.ac... ..^|
1079 1080 00f0: b0 0a ae 8a c4 03 c6 d6 f9 a3 7b eb fb 4e de 7f |..........{..N..|
1080 1081 0100: e4 97 55 5f 15 76 96 d2 5d bf 9d 3f 38 18 29 4c |..U_.v..]..?8.)L|
1081 1082 0110: 0f b7 5d 6e 9b b3 aa 7e c6 d5 15 5b f7 7c 52 f1 |..]n...~...[.|R.|
1082 1083 0120: 7c 73 18 63 98 6d 3e 23 51 5a 6a 2e 19 72 8d cb ||s.c.m>#QZj..r..|
1083 1084 0130: 09 07 14 78 82 33 e9 62 86 7d 0c 00 17 88 53 86 |...x.3.b.}....S.|
1084 1085 0140: 3d 75 0b 63 e2 16 c6 84 9d 76 8f 76 7a cb de fc |=u.c.....v.vz...|
1085 1086 0150: a8 a3 f0 46 d3 a5 f6 c7 96 b6 9f 60 3b 57 ae 28 |...F.......`;W.(|
1086 1087 0160: ce b2 8d e9 f4 3e 6f 66 53 dd e5 6b ad 67 be f9 |.....>ofS..k.g..|
1087 1088 0170: 72 ee 5f 8d 61 3c 61 b6 f9 8c d8 a5 82 63 45 3d |r._.a<a......cE=|
1088 1089 0180: a3 0c 61 90 68 24 28 87 50 b9 c2 97 c6 20 01 11 |..a.h$(.P.... ..|
1089 1090 0190: 80 84 10 98 cf e8 e4 13 96 05 51 2c 38 f3 c4 ec |..........Q,8...|
1090 1091 01a0: ea 43 e7 96 5e 6a c8 be 11 dd 32 78 a2 fa dd 8f |.C..^j....2x....|
1091 1092 01b0: b3 61 84 61 51 0c b3 cd 27 64 42 6b c2 b4 92 1e |.a.aQ...'dBk....|
1092 1093 01c0: 86 8c 12 68 24 00 10 db 7f 50 00 c6 91 e7 fa 4c |...h$....P.....L|
1093 1094 01d0: 22 22 cc bf 84 81 0a 92 c1 aa 2a c7 1b 49 e6 ee |""........*..I..|
1094 1095 01e0: 6b a9 7e e0 e9 b2 91 5e 7c 73 68 e0 fc 23 3f 34 |k.~....^|sh..#?4|
1095 1096 01f0: ed cf 0e f2 b3 d3 4c d7 ae 59 33 6f 8c 3d b8 63 |......L..Y3o.=.c|
1096 1097 0200: 21 2b e8 3d e0 6f 9d 3a b7 f9 dc 24 2a b2 3e a7 |!+.=.o.:...$*.>.|
1097 1098 0210: 58 dc 91 d8 40 e9 23 8e 88 84 ae 0f b9 00 2e b5 |X...@.#.........|
1098 1099 0220: 74 36 f3 40 53 40 34 15 c0 d7 12 8d e7 bb 65 f9 |t6.@S@4.......e.|
1099 1100 0230: c8 ef 03 0f ff f9 fe b6 8a 0d 6d fd ec 51 70 f7 |..........m..Qp.|
1100 1101 0240: a7 ad 9b 6b 9d da 74 7b 53 43 d1 43 63 fd 19 f9 |...k..t{SC.Cc...|
1101 1102 0250: ca 67 95 e5 ef c4 e6 6c 9e 44 e1 c5 ac 7a 82 6f |.g.....l.D...z.o|
1102 1103 0260: c2 e1 d2 b5 2d 81 29 f0 5d 09 6c 6f 10 ae 88 cf |....-.).].lo....|
1103 1104 0270: 25 05 d0 93 06 78 80 60 43 2d 10 1b 47 71 2b b7 |%....x.`C-..Gq+.|
1104 1105 0280: 7f bb e9 a7 e4 7d 67 7b df 9b f7 62 cf cd d8 f4 |.....}g{...b....|
1105 1106 0290: 48 bc 64 51 57 43 ff ea 8b 0b ae 74 64 53 07 86 |H.dQWC.....tdS..|
1106 1107 02a0: fa 66 3c 5e f7 e1 af a7 c2 90 ff a7 be 9e c9 29 |.f<^...........)|
1107 1108 02b0: b6 cc 41 48 18 69 94 8b 7c 04 7d 8c 98 a7 95 50 |..AH.i..|.}....P|
1108 1109 02c0: 44 d9 d0 20 c8 14 30 14 51 ad 6c 16 03 94 0f 5a |D.. ..0.Q.l....Z|
1109 1110 02d0: 46 93 7f 1c 87 8d 25 d7 9d a2 d1 92 4c f3 c2 54 |F.....%.....L..T|
1110 1111 02e0: ba f8 70 18 ca 24 0a 29 96 43 71 f2 93 95 74 18 |..p..$.).Cq...t.|
1111 1112 02f0: b5 65 c4 b8 f6 6c 5c 34 20 1e d5 0c 21 c0 b1 90 |.e...l\4 ...!...|
1112 1113 0300: 9e 12 40 b9 18 fa 5a 00 41 a2 39 d3 a9 c1 73 21 |..@...Z.A.9...s!|
1113 1114 0310: 8e 5e 3c b9 b8 f8 48 6a 76 46 a7 1a b6 dd 5b 51 |.^<...HjvF....[Q|
1114 1115 0320: 5e 19 1d 59 12 c6 32 89 02 9a c0 8f 4f b8 0a ba |^..Y..2.....O...|
1115 1116 0330: 5e ec 58 37 44 a3 2f dd 33 ed c9 d3 dd c7 22 1b |^.X7D./.3.....".|
1116 1117 0340: 2f d4 94 8e 95 3f 77 a7 ae 6e f3 32 8d bb 4a 4c |/....?w..n.2..JL|
1117 1118 0350: b8 0a 5a 43 34 3a b3 3a d6 77 ff 5c b6 fa ad f9 |..ZC4:.:.w.\....|
1118 1119 0360: db fb 6a 33 df c1 7d 99 cf ef d4 d5 6d da 77 7c |..j3..}.....m.w||
1119 1120 0370: 3b 19 fd af c5 3f f1 60 c3 17 |;....?.`..|
1120 1121 $ hg debugbundle ../rev.hg2.bz
1121 1122 Stream params: {Compression: GZ}
1122 1123 changegroup -- {}
1123 1124 32af7686d403cf45b5d95f2d70cebea587ac806a
1124 1125 9520eea781bcca16c1e15acc0ba14335a0e8e5ba
1125 1126 eea13746799a9e0bfd88f29d3c2e9dc9389f524f
1126 1127 02de42196ebee42ef284b6780a87cdc96e8eaab6
1127 1128 $ hg unbundle ../rev.hg2.bz
1128 1129 adding changesets
1129 1130 adding manifests
1130 1131 adding file changes
1131 1132 added 0 changesets with 0 changes to 3 files
1132 1133 (run 'hg update' to get a working copy)

1133 1134 Simple case where it just works: BZ
1134 1135 ----------------------------------
1135 1136
1136 1137 $ hg bundle2 --compress BZ --rev '8+7+5+4' ../rev.hg2.bz
1137 1138 $ f --hexdump ../rev.hg2.bz
1138 1139 ../rev.hg2.bz:
1139 1140 0000: 48 47 32 30 00 00 00 0e 43 6f 6d 70 72 65 73 73 |HG20....Compress|
1140 1141 0010: 69 6f 6e 3d 42 5a 42 5a 68 39 31 41 59 26 53 59 |ion=BZBZh91AY&SY|
1141 1142 0020: a3 4b 18 3d 00 00 1a 7f ff ff bf 5f f6 ef ef 7f |.K.=......._....|
1142 1143 0030: f6 3f f7 d1 d9 ff ff f7 6e ff ff 6e f7 f6 bd df |.?......n..n....|
1143 1144 0040: b5 ab ff cf 67 f6 e7 7b f7 c0 02 d7 33 82 8b 51 |....g..{....3..Q|
1144 1145 0050: 04 a5 53 d5 3d 27 a0 99 18 4d 0d 34 00 d1 a1 e8 |..S.='...M.4....|
1145 1146 0060: 80 c8 7a 87 a9 a3 43 6a 3d 46 86 26 80 34 3d 40 |..z...Cj=F.&.4=@|
1146 1147 0070: c8 c9 b5 34 f4 8f 48 0f 51 ea 34 34 fd 4d aa 19 |...4..H.Q.44.M..|
1147 1148 0080: 03 40 0c 08 da 86 43 d4 f5 0f 42 1e a0 f3 54 33 |.@....C...B...T3|
1148 1149 0090: 54 d3 13 4d 03 40 32 00 00 32 03 26 80 0d 00 0d |T..M.@2..2.&....|
1149 1150 00a0: 00 68 c8 c8 03 20 32 30 98 8c 80 00 00 03 4d 00 |.h... 20......M.|
1150 1151 00b0: c8 00 00 0d 00 00 22 99 a1 34 c2 64 a6 d5 34 1a |......"..4.d..4.|
1151 1152 00c0: 00 00 06 86 83 4d 07 a8 d1 a0 68 01 a0 00 00 00 |.....M....h.....|
1152 1153 00d0: 00 0d 06 80 00 00 00 0d 00 03 40 00 00 04 a4 a1 |..........@.....|
1153 1154 00e0: 4d a9 89 89 b4 9a 32 0c 43 46 86 87 a9 8d 41 9a |M.....2.CF....A.|
1154 1155 00f0: 98 46 9a 0d 31 32 1a 34 0d 0c 8d a2 0c 98 4d 06 |.F..12.4......M.|
1155 1156 0100: 8c 40 c2 60 8d 0d 0c 20 c9 89 fa a0 d0 d3 21 a1 |.@.`... ......!.|
1156 1157 0110: ea 34 d3 68 9e a6 d1 74 05 33 cb 66 96 93 28 64 |.4.h...t.3.f..(d|
1157 1158 0120: 40 91 22 ac 55 9b ea 40 7b 38 94 e2 f8 06 00 cb |@.".U..@{8......|
1158 1159 0130: 28 02 00 4d ab 40 24 10 43 18 cf 64 b4 06 83 0c |(..M.@$.C..d....|
1159 1160 0140: 34 6c b4 a3 d4 0a 0a e4 a8 5c 4e 23 c0 c9 7a 31 |4l.......\N#..z1|
1160 1161 0150: 97 87 77 7a 64 88 80 8e 60 97 20 93 0f 8e eb c4 |..wzd...`. .....|
1161 1162 0160: 62 a4 44 a3 52 20 b2 99 a9 2e e1 d7 29 4a 54 ac |b.D.R ......)JT.|
1162 1163 0170: 44 7a bb cc 04 3d e0 aa bd 6a 33 5e 9b a2 57 36 |Dz...=...j3^..W6|
1163 1164 0180: fa cb 45 bb 6d 3e c1 d9 d9 f5 83 69 8a d0 e0 e2 |..E.m>.....i....|
1164 1165 0190: e7 ae 90 55 24 da 3f ab 78 c0 4c b4 56 a3 9e a4 |...U$.?.x.L.V...|
1165 1166 01a0: af 9c 65 74 86 ec 6d dc 62 dc 33 ca c8 50 dd 9d |..et..m.b.3..P..|
1166 1167 01b0: 98 8e 9e 59 20 f3 f0 42 91 4a 09 f5 75 8d 3d a5 |...Y ..B.J..u.=.|
1167 1168 01c0: a5 15 cb 8d 10 63 b0 c2 2e b2 81 f7 c1 76 0e 53 |.....c.......v.S|
1168 1169 01d0: 6c 0e 46 73 b5 ae 67 f9 4c 0b 45 6b a8 32 2a 2f |l.Fs..g.L.Ek.2*/|
1169 1170 01e0: a2 54 a4 44 05 20 a1 38 d1 a4 c6 09 a8 2b 08 99 |.T.D. .8.....+..|
1170 1171 01f0: a4 14 ae 8d a3 e3 aa 34 27 d8 44 ca c3 5d 21 8b |.......4'.D..]!.|
1171 1172 0200: 1a 1e 97 29 71 2b 09 4a 4a 55 55 94 58 65 b2 bc |...)q+.JJUU.Xe..|
1172 1173 0210: f3 a5 90 26 36 76 67 7a 51 98 d6 8a 4a 99 50 b5 |...&6vgzQ...J.P.|
1173 1174 0220: 99 8f 94 21 17 a9 8b f3 ad 4c 33 d4 2e 40 c8 0c |...!.....L3..@..|
1174 1175 0230: 3b 90 53 39 db 48 02 34 83 48 d6 b3 99 13 d2 58 |;.S9.H.4.H.....X|
1175 1176 0240: 65 8e 71 ac a9 06 95 f2 c4 8e b4 08 6b d3 0c ae |e.q.........k...|
1176 1177 0250: d9 90 56 71 43 a7 a2 62 16 3e 50 63 d3 57 3c 2d |..VqC..b.>Pc.W<-|
1177 1178 0260: 9f 0f 34 05 08 d8 a6 4b 59 31 54 66 3a 45 0c 8a |..4....KY1Tf:E..|
1178 1179 0270: c7 90 3a f0 6a 83 1b f5 ca fb 80 2b 50 06 fb 51 |..:.j......+P..Q|
1179 1180 0280: 7e a6 a4 d4 81 44 82 21 54 00 5b 1a 30 83 62 a3 |~....D.!T.[.0.b.|
1180 1181 0290: 18 b6 24 19 1e 45 df 4d 5c db a6 af 5b ac 90 fa |..$..E.M\...[...|
1181 1182 02a0: 3e ed f9 ec 4c ba 36 ee d8 60 20 a7 c7 3b cb d1 |>...L.6..` ..;..|
1182 1183 02b0: 90 43 7d 27 16 50 5d ad f4 14 07 0b 90 5c cc 6b |.C}'.P]......\.k|
1183 1184 02c0: 8d 3f a6 88 f4 34 37 a8 cf 14 63 36 19 f7 3e 28 |.?...47...c6..>(|
1184 1185 02d0: de 99 e8 16 a4 9d 0d 40 a1 a7 24 52 14 a6 72 62 |.......@..$R..rb|
1185 1186 02e0: 59 5a ca 2d e5 51 90 78 88 d9 c6 c7 21 d0 f7 46 |YZ.-.Q.x....!..F|
1186 1187 02f0: b2 04 46 44 4e 20 9c 12 b1 03 4e 25 e0 a9 0c 58 |..FDN ....N%...X|
1187 1188 0300: 5b 1d 3c 93 20 01 51 de a9 1c 69 23 32 46 14 b4 |[.<. .Q...i#2F..|
1188 1189 0310: 90 db 17 98 98 50 03 90 29 aa 40 b0 13 d8 43 d2 |.....P..).@...C.|
1189 1190 0320: 5f c5 9d eb f3 f2 ad 41 e8 7a a9 ed a1 58 84 a6 |_......A.z...X..|
1190 1191 0330: 42 bf d6 fc 24 82 c1 20 32 26 4a 15 a6 1d 29 7f |B...$.. 2&J...).|
1191 1192 0340: 7e f4 3d 07 bc 62 9a 5b ec 44 3d 72 1d 41 8b 5c |~.=..b.[.D=r.A.\|
1192 1193 0350: 80 de 0e 62 9a 2e f8 83 00 d5 07 a0 9c c6 74 98 |...b..........t.|
1193 1194 0360: 11 b2 5e a9 38 02 03 ee fd 86 5c f4 86 b3 ae da |..^.8.....\.....|
1194 1195 0370: 05 94 01 c5 c6 ea 18 e6 ba 2a ba b3 04 5c 96 89 |.........*...\..|
1195 1196 0380: 72 63 5b 10 11 f6 67 34 98 cb e4 c0 4e fa e6 99 |rc[...g4....N...|
1196 1197 0390: 19 6e 50 e8 26 8d 0c 17 e0 be ef e1 8e 02 6f 32 |.nP.&.........o2|
1197 1198 03a0: 82 dc 26 f8 a1 08 f3 8a 0d f3 c4 75 00 48 73 b8 |..&........u.Hs.|
1198 1199 03b0: be 3b 0d 7f d0 fd c7 78 96 ec e0 03 80 68 4d 8d |.;.....x.....hM.|
1199 1200 03c0: 43 8c d7 68 58 f9 50 f0 18 cb 21 58 1b 60 cd 1f |C..hX.P...!X.`..|
1200 1201 03d0: 84 36 2e 16 1f 0a f7 4e 8f eb df 01 2d c2 79 0b |.6.....N....-.y.|
1201 1202 03e0: f7 24 ea 0d e8 59 86 51 6e 1c 30 a3 ad 2f ee 8c |.$...Y.Qn.0../..|
1202 1203 03f0: 90 c8 84 d5 e8 34 c1 95 b2 c9 f6 4d 87 1c 7d 19 |.....4.....M..}.|
1203 1204 0400: d6 41 58 56 7a e0 6c ba 10 c7 e8 33 39 36 96 e7 |.AXVz.l....396..|
1204 1205 0410: d2 f9 59 9a 08 95 48 38 e7 0b b7 0a 24 67 c4 39 |..Y...H8....$g.9|
1205 1206 0420: 8b 43 88 57 9c 01 f5 61 b5 e1 27 41 7e af 83 fe |.C.W...a..'A~...|
1206 1207 0430: 2e e4 8a 70 a1 21 46 96 30 7a |...p.!F.0z|
1207 1208 $ hg debugbundle ../rev.hg2.bz
1208 1209 Stream params: {Compression: BZ}
1209 1210 changegroup -- {}
1210 1211 32af7686d403cf45b5d95f2d70cebea587ac806a
1211 1212 9520eea781bcca16c1e15acc0ba14335a0e8e5ba
1212 1213 eea13746799a9e0bfd88f29d3c2e9dc9389f524f
1213 1214 02de42196ebee42ef284b6780a87cdc96e8eaab6
1214 1215 $ hg unbundle ../rev.hg2.bz
1215 1216 adding changesets
1216 1217 adding manifests
1217 1218 adding file changes
1218 1219 added 0 changesets with 0 changes to 3 files
1219 1220 (run 'hg update' to get a working copy)
1220 1221
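Both dumps show the compressed payload starting right after the stream parameters: 78 9c is a zlib header (GZ) and BZh9 a bzip2 header (BZ). A sketch of undoing the stream-level compression by hand (the parameter parsing is kept deliberately naive):

    import bz2
    import struct
    import zlib

    def inflatebundle2(path):
        # Return the stream params and the decompressed part stream.
        with open(path, 'rb') as f:
            assert f.read(4) == b'HG20'
            psize = struct.unpack('>I', f.read(4))[0]
            params = f.read(psize).decode('ascii')
            payload = f.read()
        if 'Compression=GZ' in params:
            payload = zlib.decompress(payload)   # GZ is a zlib stream
        elif 'Compression=BZ' in params:
            payload = bz2.decompress(payload)    # BZ is a bzip2 stream
        return params, payload
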
1221 1222 Unknown compression while unbundling
1222 1223 ------------------------------------
1223 1224
1224 1225 $ hg bundle2 --param Compression=FooBarUnknown --rev '8+7+5+4' ../rev.hg2.bz
1225 1226 $ cat ../rev.hg2.bz | hg statbundle2
1226 1227 abort: unknown parameters: Stream Parameter - Compression='FooBarUnknown'
1227 1228 [255]
1228 1229 $ hg unbundle ../rev.hg2.bz
1229 1230 abort: ../rev.hg2.bz: unknown bundle feature, Stream Parameter - Compression='FooBarUnknown'
1230 1231 (see https://mercurial-scm.org/wiki/BundleFeature for more information)
1231 1232 [255]
1232 1233
1233 1234 $ cd ..
@@ -1,260 +1,260 b''
1 1 Create an extension to test bundle2 with multiple changegroups
2 2
3 3 $ cat > bundle2.py <<EOF
4 4 > """
5 5 > """
6 6 > from mercurial import changegroup, discovery, exchange
7 7 >
8 8 > def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
9 9 > b2caps=None, heads=None, common=None,
10 10 > **kwargs):
11 11 > # Create two changegroups given the common changesets and heads for the
12 12 > # changegroup part we are requested to produce. Use the parent of each head
13 13 > # in 'heads' as intermediate heads for the first changegroup.
14 14 > intermediates = [repo[r].p1().node() for r in heads]
15 15 > outgoing = discovery.outgoing(repo, common, intermediates)
16 > cg = changegroup.getchangegroup(repo, source, outgoing,
17 > bundlecaps=bundlecaps)
16 > cg = changegroup.makechangegroup(repo, outgoing, '01',
17 > source, bundlecaps=bundlecaps)
18 18 > bundler.newpart('output', data='changegroup1')
19 19 > bundler.newpart('changegroup', data=cg.getchunks())
20 20 > outgoing = discovery.outgoing(repo, common + intermediates, heads)
21 > cg = changegroup.getchangegroup(repo, source, outgoing,
22 > bundlecaps=bundlecaps)
21 > cg = changegroup.makechangegroup(repo, outgoing, '01',
22 > source, bundlecaps=bundlecaps)
23 23 > bundler.newpart('output', data='changegroup2')
24 24 > bundler.newpart('changegroup', data=cg.getchunks())
25 25 >
26 26 > def _pull(repo, *args, **kwargs):
27 27 > pullop = _orig_pull(repo, *args, **kwargs)
28 28 > repo.ui.write('pullop.cgresult is %d\n' % pullop.cgresult)
29 29 > return pullop
30 30 >
31 31 > _orig_pull = exchange.pull
32 32 > exchange.pull = _pull
33 33 > exchange.getbundle2partsmapping['changegroup'] = _getbundlechangegrouppart
34 34 > EOF
35 35
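The version argument ('01' above) selects the changegroup wire format; makechangegroup replaces the older getchangegroup and takes the outgoing set and version explicitly. A minimal standalone use mirroring the extension, sketched under the assumption that the API shapes shown in the diff hold:

    from mercurial import changegroup, discovery

    def bundleall(repo):
        # Build a version-01 changegroup covering every changeset,
        # as the extension above does per sub-range.
        outgoing = discovery.outgoing(repo, None, repo.heads())
        cg = changegroup.makechangegroup(repo, outgoing, '01', 'bundle')
        return cg.getchunks()
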
36 36 $ cat >> $HGRCPATH << EOF
37 37 > [ui]
38 38 > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline}
39 39 > EOF
40 40
41 41 Start with a simple repository with a single commit
42 42
43 43 $ hg init repo
44 44 $ cd repo
45 45 $ cat > .hg/hgrc << EOF
46 46 > [extensions]
47 47 > bundle2=$TESTTMP/bundle2.py
48 48 > EOF
49 49
50 50 $ echo A > A
51 51 $ hg commit -A -m A -q
52 52 $ cd ..
53 53
54 54 Clone
55 55
56 56 $ hg clone -q repo clone
57 57
58 58 Add two linear commits
59 59
60 60 $ cd repo
61 61 $ echo B > B
62 62 $ hg commit -A -m B -q
63 63 $ echo C > C
64 64 $ hg commit -A -m C -q
65 65
66 66 $ cd ../clone
67 67 $ cat >> .hg/hgrc <<EOF
68 68 > [hooks]
69 69 > pretxnchangegroup = sh -c "printenv.py pretxnchangegroup"
70 70 > changegroup = sh -c "printenv.py changegroup"
71 71 > incoming = sh -c "printenv.py incoming"
72 72 > EOF
73 73
74 74 Pull the new commits in the clone
75 75
76 76 $ hg pull
77 77 pulling from $TESTTMP/repo (glob)
78 78 searching for changes
79 79 remote: changegroup1
80 80 adding changesets
81 81 adding manifests
82 82 adding file changes
83 83 added 1 changesets with 1 changes to 1 files
84 84 pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=27547f69f25460a52fff66ad004e58da7ad3fb56 HG_NODE_LAST=27547f69f25460a52fff66ad004e58da7ad3fb56 HG_PENDING=$TESTTMP/clone HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
85 85 remote: changegroup2
86 86 adding changesets
87 87 adding manifests
88 88 adding file changes
89 89 added 1 changesets with 1 changes to 1 files
90 90 pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=f838bfaca5c7226600ebcfd84f3c3c13a28d3757 HG_NODE_LAST=f838bfaca5c7226600ebcfd84f3c3c13a28d3757 HG_PENDING=$TESTTMP/clone HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
91 91 changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=27547f69f25460a52fff66ad004e58da7ad3fb56 HG_NODE_LAST=27547f69f25460a52fff66ad004e58da7ad3fb56 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
92 92 incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=27547f69f25460a52fff66ad004e58da7ad3fb56 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
93 93 changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=f838bfaca5c7226600ebcfd84f3c3c13a28d3757 HG_NODE_LAST=f838bfaca5c7226600ebcfd84f3c3c13a28d3757 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
94 94 incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=f838bfaca5c7226600ebcfd84f3c3c13a28d3757 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
95 95 pullop.cgresult is 1
96 96 (run 'hg update' to get a working copy)
97 97 $ hg update
98 98 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
99 99 $ hg log -G
100 100 @ 2:f838bfaca5c7 public test C
101 101 |
102 102 o 1:27547f69f254 public test B
103 103 |
104 104 o 0:4a2df7238c3b public test A
105 105
106 106 Add more changesets with multiple heads to the original repository
107 107
108 108 $ cd ../repo
109 109 $ echo D > D
110 110 $ hg commit -A -m D -q
111 111 $ hg up -r 1
112 112 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
113 113 $ echo E > E
114 114 $ hg commit -A -m E -q
115 115 $ echo F > F
116 116 $ hg commit -A -m F -q
117 117 $ hg up -r 1
118 118 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
119 119 $ echo G > G
120 120 $ hg commit -A -m G -q
121 121 $ hg up -r 3
122 122 2 files updated, 0 files merged, 1 files removed, 0 files unresolved
123 123 $ echo H > H
124 124 $ hg commit -A -m H -q
125 125 $ hg log -G
126 126 @ 7:5cd59d311f65 draft test H
127 127 |
128 128 | o 6:1d14c3ce6ac0 draft test G
129 129 | |
130 130 | | o 5:7f219660301f draft test F
131 131 | | |
132 132 | | o 4:8a5212ebc852 draft test E
133 133 | |/
134 134 o | 3:b3325c91a4d9 draft test D
135 135 | |
136 136 o | 2:f838bfaca5c7 draft test C
137 137 |/
138 138 o 1:27547f69f254 draft test B
139 139 |
140 140 o 0:4a2df7238c3b draft test A
141 141
142 142 New heads are reported during transfer and properly accounted for in
143 143 pullop.cgresult
144 144
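The integer follows the convention documented in Mercurial's changegroup application code: 0 means nothing was added, 1 means changesets were added without changing the head count, 1+n means n new heads, and -1-n means n heads disappeared. A small helper (name hypothetical) makes the pulls below easier to read:

    def describecgresult(cgresult):
        # Decode pullop.cgresult: the pull below reports 3 (two new
        # heads); a later pull reports -2 (one head merged away).
        if cgresult > 1:
            return '%d new heads' % (cgresult - 1)
        if cgresult < 0:
            return '%d heads removed' % (-1 - cgresult)
        return 'no head change' if cgresult == 1 else 'no changesets added'
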
145 145 $ cd ../clone
146 146 $ hg pull
147 147 pulling from $TESTTMP/repo (glob)
148 148 searching for changes
149 149 remote: changegroup1
150 150 adding changesets
151 151 adding manifests
152 152 adding file changes
153 153 added 2 changesets with 2 changes to 2 files (+1 heads)
154 154 pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=b3325c91a4d916bcc4cdc83ea3fe4ece46a42f6e HG_NODE_LAST=8a5212ebc8527f9fb821601504794e3eb11a1ed3 HG_PENDING=$TESTTMP/clone HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
155 155 remote: changegroup2
156 156 adding changesets
157 157 adding manifests
158 158 adding file changes
159 159 added 3 changesets with 3 changes to 3 files (+1 heads)
160 160 pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=7f219660301fe4c8a116f714df5e769695cc2b46 HG_NODE_LAST=5cd59d311f6508b8e0ed28a266756c859419c9f1 HG_PENDING=$TESTTMP/clone HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
161 161 changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=b3325c91a4d916bcc4cdc83ea3fe4ece46a42f6e HG_NODE_LAST=8a5212ebc8527f9fb821601504794e3eb11a1ed3 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
162 162 incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=b3325c91a4d916bcc4cdc83ea3fe4ece46a42f6e HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
163 163 incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=8a5212ebc8527f9fb821601504794e3eb11a1ed3 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
164 164 changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=7f219660301fe4c8a116f714df5e769695cc2b46 HG_NODE_LAST=5cd59d311f6508b8e0ed28a266756c859419c9f1 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
165 165 incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=7f219660301fe4c8a116f714df5e769695cc2b46 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
166 166 incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=1d14c3ce6ac0582d2809220d33e8cd7a696e0156 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
167 167 incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=5cd59d311f6508b8e0ed28a266756c859419c9f1 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
168 168 pullop.cgresult is 3
169 169 (run 'hg heads' to see heads, 'hg merge' to merge)
170 170 $ hg log -G
171 171 o 7:5cd59d311f65 public test H
172 172 |
173 173 | o 6:1d14c3ce6ac0 public test G
174 174 | |
175 175 | | o 5:7f219660301f public test F
176 176 | | |
177 177 | | o 4:8a5212ebc852 public test E
178 178 | |/
179 179 o | 3:b3325c91a4d9 public test D
180 180 | |
181 181 @ | 2:f838bfaca5c7 public test C
182 182 |/
183 183 o 1:27547f69f254 public test B
184 184 |
185 185 o 0:4a2df7238c3b public test A
186 186
187 187 Removing a head from the original repository by merging it
188 188
189 189 $ cd ../repo
190 190 $ hg merge -r 6 -q
191 191 $ hg commit -m Merge
192 192 $ echo I > I
193 193 $ hg commit -A -m H -q
194 194 $ hg log -G
195 195 @ 9:9d18e5bd9ab0 draft test H
196 196 |
197 197 o 8:71bd7b46de72 draft test Merge
198 198 |\
199 199 | o 7:5cd59d311f65 draft test H
200 200 | |
201 201 o | 6:1d14c3ce6ac0 draft test G
202 202 | |
203 203 | | o 5:7f219660301f draft test F
204 204 | | |
205 205 +---o 4:8a5212ebc852 draft test E
206 206 | |
207 207 | o 3:b3325c91a4d9 draft test D
208 208 | |
209 209 | o 2:f838bfaca5c7 draft test C
210 210 |/
211 211 o 1:27547f69f254 draft test B
212 212 |
213 213 o 0:4a2df7238c3b draft test A
214 214
215 215 Removed heads are reported during transfer and properly accounted for in
216 216 pullop.cgresult
217 217
218 218 $ cd ../clone
219 219 $ hg pull
220 220 pulling from $TESTTMP/repo (glob)
221 221 searching for changes
222 222 remote: changegroup1
223 223 adding changesets
224 224 adding manifests
225 225 adding file changes
226 226 added 1 changesets with 0 changes to 0 files (-1 heads)
227 227 pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=71bd7b46de72e69a32455bf88d04757d542e6cf4 HG_NODE_LAST=71bd7b46de72e69a32455bf88d04757d542e6cf4 HG_PENDING=$TESTTMP/clone HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
228 228 remote: changegroup2
229 229 adding changesets
230 230 adding manifests
231 231 adding file changes
232 232 added 1 changesets with 1 changes to 1 files
233 233 pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=9d18e5bd9ab09337802595d49f1dad0c98df4d84 HG_NODE_LAST=9d18e5bd9ab09337802595d49f1dad0c98df4d84 HG_PENDING=$TESTTMP/clone HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
234 234 changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=71bd7b46de72e69a32455bf88d04757d542e6cf4 HG_NODE_LAST=71bd7b46de72e69a32455bf88d04757d542e6cf4 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
235 235 incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=71bd7b46de72e69a32455bf88d04757d542e6cf4 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
236 236 changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=9d18e5bd9ab09337802595d49f1dad0c98df4d84 HG_NODE_LAST=9d18e5bd9ab09337802595d49f1dad0c98df4d84 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
237 237 incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=9d18e5bd9ab09337802595d49f1dad0c98df4d84 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
238 238 pullop.cgresult is -2
239 239 (run 'hg update' to get a working copy)
240 240 $ hg log -G
241 241 o 9:9d18e5bd9ab0 public test H
242 242 |
243 243 o 8:71bd7b46de72 public test Merge
244 244 |\
245 245 | o 7:5cd59d311f65 public test H
246 246 | |
247 247 o | 6:1d14c3ce6ac0 public test G
248 248 | |
249 249 | | o 5:7f219660301f public test F
250 250 | | |
251 251 +---o 4:8a5212ebc852 public test E
252 252 | |
253 253 | o 3:b3325c91a4d9 public test D
254 254 | |
255 255 | @ 2:f838bfaca5c7 public test C
256 256 |/
257 257 o 1:27547f69f254 public test B
258 258 |
259 259 o 0:4a2df7238c3b public test A
260 260
@@ -1,590 +1,591 b''
1 1 #require killdaemons
2 2
3 3 Create an extension to test bundle2 remote-changegroup parts
4 4
5 5 $ cat > bundle2.py << EOF
6 6 > """A small extension to test bundle2 remote-changegroup parts.
7 7 >
8 8 > The current bundle2 implementation doesn't provide a way to generate those
9 9 > parts, so they must be created by extensions.
10 10 > """
11 11 > from mercurial import bundle2, changegroup, discovery, exchange, util
12 12 >
13 13 > def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
14 14 > b2caps=None, heads=None, common=None,
15 15 > **kwargs):
16 16 > """this function replaces the changegroup part handler for getbundle.
17 17 > It allows creating a set of arbitrary parts containing changegroups
18 18 > and remote-changegroups, as described in a bundle2maker file in the
19 19 > repository .hg/ directory.
20 20 >
21 21 > Each line of that bundle2maker file contains a description of the
22 22 > part to add:
23 23 > - changegroup common_revset heads_revset
24 24 > Creates a changegroup part using common_revset and
25 25 > heads_revset for outgoing
26 26 > - remote-changegroup url file
27 27 > Creates a remote-changegroup part for a bundle at the given
28 28 > url. Size and digest, as required by the client, are computed
29 29 > from the given file.
30 30 > - raw-remote-changegroup <python expression>
31 31 > Creates a remote-changegroup part with the data given in the
32 32 > Python expression as parameters. The Python expression is
33 33 > evaluated with eval, and is expected to be a dict.
34 34 > """
35 35 > def newpart(name, data=''):
36 36 > """wrapper around bundler.newpart adding an extra part making the
37 37 > client output information about each processed part"""
38 38 > bundler.newpart('output', data=name)
39 39 > part = bundler.newpart(name, data=data)
40 40 > return part
41 41 >
42 42 > for line in open(repo.vfs.join('bundle2maker'), 'r'):
43 43 > line = line.strip()
44 44 > try:
45 45 > verb, args = line.split(None, 1)
46 46 > except ValueError:
47 47 > verb, args = line, ''
48 48 > if verb == 'remote-changegroup':
49 49 > url, file = args.split()
50 50 > bundledata = open(file, 'rb').read()
51 51 > digest = util.digester.preferred(b2caps['digests'])
52 52 > d = util.digester([digest], bundledata)
53 53 > part = newpart('remote-changegroup')
54 54 > part.addparam('url', url)
55 55 > part.addparam('size', str(len(bundledata)))
56 56 > part.addparam('digests', digest)
57 57 > part.addparam('digest:%s' % digest, d[digest])
58 58 > elif verb == 'raw-remote-changegroup':
59 59 > part = newpart('remote-changegroup')
60 60 > for k, v in eval(args).items():
61 61 > part.addparam(k, str(v))
62 62 > elif verb == 'changegroup':
63 63 > _common, heads = args.split()
64 64 > common.extend(repo.lookup(r) for r in repo.revs(_common))
65 65 > heads = [repo.lookup(r) for r in repo.revs(heads)]
66 66 > outgoing = discovery.outgoing(repo, common, heads)
67 > cg = changegroup.getchangegroup(repo, 'changegroup', outgoing)
67 > cg = changegroup.makechangegroup(repo, outgoing, '01',
68 > 'changegroup')
68 69 > newpart('changegroup', cg.getchunks())
69 70 > else:
70 71 > raise Exception('unknown verb')
71 72 >
72 73 > exchange.getbundle2partsmapping['changegroup'] = _getbundlechangegrouppart
73 74 > EOF
74 75
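The size and digest parameters the extension attaches can also be computed by hand when writing raw-remote-changegroup lines; a sketch (helper name hypothetical, sha1 chosen for illustration):

    import hashlib

    def remotechangegroupparams(bundlepath, url):
        # Params a remote-changegroup part advertises for a bundle
        # served at `url`; the client re-checks size and digest.
        data = open(bundlepath, 'rb').read()
        return {'url': url,
                'size': len(data),
                'digests': 'sha1',
                'digest:sha1': hashlib.sha1(data).hexdigest()}
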
75 76 Start a simple HTTP server to serve bundles
76 77
77 78 $ $PYTHON "$TESTDIR/dumbhttp.py" -p $HGPORT --pid dumb.pid
78 79 $ cat dumb.pid >> $DAEMON_PIDS
79 80
80 81 $ cat >> $HGRCPATH << EOF
81 82 > [ui]
82 83 > ssh=$PYTHON "$TESTDIR/dummyssh"
83 84 > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline}
84 85 > EOF
85 86
86 87 $ hg init repo
87 88
88 89 $ hg -R repo unbundle $TESTDIR/bundles/rebase.hg
89 90 adding changesets
90 91 adding manifests
91 92 adding file changes
92 93 added 8 changesets with 7 changes to 7 files (+2 heads)
93 94 (run 'hg heads' to see heads, 'hg merge' to merge)
94 95
95 96 $ hg -R repo log -G
96 97 o 7:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> H
97 98 |
98 99 | o 6:eea13746799a draft Nicolas Dumazet <nicdumz.commits@gmail.com> G
99 100 |/|
100 101 o | 5:24b6387c8c8c draft Nicolas Dumazet <nicdumz.commits@gmail.com> F
101 102 | |
102 103 | o 4:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
103 104 |/
104 105 | o 3:32af7686d403 draft Nicolas Dumazet <nicdumz.commits@gmail.com> D
105 106 | |
106 107 | o 2:5fddd98957c8 draft Nicolas Dumazet <nicdumz.commits@gmail.com> C
107 108 | |
108 109 | o 1:42ccdea3bb16 draft Nicolas Dumazet <nicdumz.commits@gmail.com> B
109 110 |/
110 111 o 0:cd010b8cd998 draft Nicolas Dumazet <nicdumz.commits@gmail.com> A
111 112
112 113 $ hg clone repo orig
113 114 updating to branch default
114 115 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
115 116
116 117 $ cat > repo/.hg/hgrc << EOF
117 118 > [extensions]
118 119 > bundle2=$TESTTMP/bundle2.py
119 120 > EOF
120 121
121 122 Test a pull with a remote-changegroup
122 123
123 124 $ hg bundle -R repo --type v1 --base '0:4' -r '5:7' bundle.hg
124 125 3 changesets found
125 126 $ cat > repo/.hg/bundle2maker << EOF
126 127 > remote-changegroup http://localhost:$HGPORT/bundle.hg bundle.hg
127 128 > EOF
128 129 $ hg clone orig clone -r 3 -r 4
129 130 adding changesets
130 131 adding manifests
131 132 adding file changes
132 133 added 5 changesets with 5 changes to 5 files (+1 heads)
133 134 updating to branch default
134 135 4 files updated, 0 files merged, 0 files removed, 0 files unresolved
135 136 $ hg pull -R clone ssh://user@dummy/repo
136 137 pulling from ssh://user@dummy/repo
137 138 searching for changes
138 139 remote: remote-changegroup
139 140 adding changesets
140 141 adding manifests
141 142 adding file changes
142 143 added 3 changesets with 2 changes to 2 files (+1 heads)
143 144 (run 'hg heads .' to see heads, 'hg merge' to merge)
144 145 $ hg -R clone log -G
145 146 o 7:02de42196ebe public Nicolas Dumazet <nicdumz.commits@gmail.com> H
146 147 |
147 148 | o 6:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> G
148 149 |/|
149 150 o | 5:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
150 151 | |
151 152 | o 4:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
152 153 |/
153 154 | @ 3:32af7686d403 public Nicolas Dumazet <nicdumz.commits@gmail.com> D
154 155 | |
155 156 | o 2:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com> C
156 157 | |
157 158 | o 1:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com> B
158 159 |/
159 160 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
160 161
161 162 $ rm -rf clone
162 163
163 164 Test a pull with a remote-changegroup and a following changegroup
164 165
165 166 $ hg bundle -R repo --type v1 --base 2 -r '3:4' bundle2.hg
166 167 2 changesets found
167 168 $ cat > repo/.hg/bundle2maker << EOF
168 169 > remote-changegroup http://localhost:$HGPORT/bundle2.hg bundle2.hg
169 170 > changegroup 0:4 5:7
170 171 > EOF
171 172 $ hg clone orig clone -r 2
172 173 adding changesets
173 174 adding manifests
174 175 adding file changes
175 176 added 3 changesets with 3 changes to 3 files
176 177 updating to branch default
177 178 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
178 179 $ hg pull -R clone ssh://user@dummy/repo
179 180 pulling from ssh://user@dummy/repo
180 181 searching for changes
181 182 remote: remote-changegroup
182 183 adding changesets
183 184 adding manifests
184 185 adding file changes
185 186 added 2 changesets with 2 changes to 2 files (+1 heads)
186 187 remote: changegroup
187 188 adding changesets
188 189 adding manifests
189 190 adding file changes
190 191 added 3 changesets with 2 changes to 2 files (+1 heads)
191 192 (run 'hg heads' to see heads, 'hg merge' to merge)
192 193 $ hg -R clone log -G
193 194 o 7:02de42196ebe public Nicolas Dumazet <nicdumz.commits@gmail.com> H
194 195 |
195 196 | o 6:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> G
196 197 |/|
197 198 o | 5:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
198 199 | |
199 200 | o 4:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
200 201 |/
201 202 | o 3:32af7686d403 public Nicolas Dumazet <nicdumz.commits@gmail.com> D
202 203 | |
203 204 | @ 2:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com> C
204 205 | |
205 206 | o 1:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com> B
206 207 |/
207 208 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
208 209
209 210 $ rm -rf clone
210 211
211 212 Test a pull with a changegroup followed by a remote-changegroup
212 213
213 214 $ hg bundle -R repo --type v1 --base '0:4' -r '5:7' bundle3.hg
214 215 3 changesets found
215 216 $ cat > repo/.hg/bundle2maker << EOF
216 217 > changegroup 000000000000 :4
217 218 > remote-changegroup http://localhost:$HGPORT/bundle3.hg bundle3.hg
218 219 > EOF
219 220 $ hg clone orig clone -r 2
220 221 adding changesets
221 222 adding manifests
222 223 adding file changes
223 224 added 3 changesets with 3 changes to 3 files
224 225 updating to branch default
225 226 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
226 227 $ hg pull -R clone ssh://user@dummy/repo
227 228 pulling from ssh://user@dummy/repo
228 229 searching for changes
229 230 remote: changegroup
230 231 adding changesets
231 232 adding manifests
232 233 adding file changes
233 234 added 2 changesets with 2 changes to 2 files (+1 heads)
234 235 remote: remote-changegroup
235 236 adding changesets
236 237 adding manifests
237 238 adding file changes
238 239 added 3 changesets with 2 changes to 2 files (+1 heads)
239 240 (run 'hg heads' to see heads, 'hg merge' to merge)
240 241 $ hg -R clone log -G
241 242 o 7:02de42196ebe public Nicolas Dumazet <nicdumz.commits@gmail.com> H
242 243 |
243 244 | o 6:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> G
244 245 |/|
245 246 o | 5:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
246 247 | |
247 248 | o 4:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
248 249 |/
249 250 | o 3:32af7686d403 public Nicolas Dumazet <nicdumz.commits@gmail.com> D
250 251 | |
251 252 | @ 2:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com> C
252 253 | |
253 254 | o 1:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com> B
254 255 |/
255 256 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
256 257
257 258 $ rm -rf clone
258 259
259 260 Test a pull with two remote-changegroups and a changegroup
260 261
261 262 $ hg bundle -R repo --type v1 --base 2 -r '3:4' bundle4.hg
262 263 2 changesets found
263 264 $ hg bundle -R repo --type v1 --base '3:4' -r '5:6' bundle5.hg
264 265 2 changesets found
265 266 $ cat > repo/.hg/bundle2maker << EOF
266 267 > remote-changegroup http://localhost:$HGPORT/bundle4.hg bundle4.hg
267 268 > remote-changegroup http://localhost:$HGPORT/bundle5.hg bundle5.hg
268 269 > changegroup 0:6 7
269 270 > EOF
270 271 $ hg clone orig clone -r 2
271 272 adding changesets
272 273 adding manifests
273 274 adding file changes
274 275 added 3 changesets with 3 changes to 3 files
275 276 updating to branch default
276 277 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
277 278 $ hg pull -R clone ssh://user@dummy/repo
278 279 pulling from ssh://user@dummy/repo
279 280 searching for changes
280 281 remote: remote-changegroup
281 282 adding changesets
282 283 adding manifests
283 284 adding file changes
284 285 added 2 changesets with 2 changes to 2 files (+1 heads)
285 286 remote: remote-changegroup
286 287 adding changesets
287 288 adding manifests
288 289 adding file changes
289 290 added 2 changesets with 1 changes to 1 files
290 291 remote: changegroup
291 292 adding changesets
292 293 adding manifests
293 294 adding file changes
294 295 added 1 changesets with 1 changes to 1 files (+1 heads)
295 296 (run 'hg heads' to see heads, 'hg merge' to merge)
296 297 $ hg -R clone log -G
297 298 o 7:02de42196ebe public Nicolas Dumazet <nicdumz.commits@gmail.com> H
298 299 |
299 300 | o 6:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> G
300 301 |/|
301 302 o | 5:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
302 303 | |
303 304 | o 4:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
304 305 |/
305 306 | o 3:32af7686d403 public Nicolas Dumazet <nicdumz.commits@gmail.com> D
306 307 | |
307 308 | @ 2:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com> C
308 309 | |
309 310 | o 1:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com> B
310 311 |/
311 312 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
312 313
313 314 $ rm -rf clone
314 315
315 316 Hash digest tests
316 317
317 318 $ hg bundle -R repo --type v1 -a bundle6.hg
318 319 8 changesets found
319 320
320 321 $ cat > repo/.hg/bundle2maker << EOF
321 322 > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle6.hg', 'size': 1663, 'digests': 'sha1', 'digest:sha1': '2c880cfec23cff7d8f80c2f12958d1563cbdaba6'}
322 323 > EOF
323 324 $ hg clone ssh://user@dummy/repo clone
324 325 requesting all changes
325 326 remote: remote-changegroup
326 327 adding changesets
327 328 adding manifests
328 329 adding file changes
329 330 added 8 changesets with 7 changes to 7 files (+2 heads)
330 331 updating to branch default
331 332 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
332 333 $ rm -rf clone
333 334
334 335 $ cat > repo/.hg/bundle2maker << EOF
335 336 > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle6.hg', 'size': 1663, 'digests': 'md5', 'digest:md5': 'e22172c2907ef88794b7bea6642c2394'}
336 337 > EOF
337 338 $ hg clone ssh://user@dummy/repo clone
338 339 requesting all changes
339 340 remote: remote-changegroup
340 341 adding changesets
341 342 adding manifests
342 343 adding file changes
343 344 added 8 changesets with 7 changes to 7 files (+2 heads)
344 345 updating to branch default
345 346 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
346 347 $ rm -rf clone
347 348
348 349 Hash digest mismatch throws an error
349 350
350 351 $ cat > repo/.hg/bundle2maker << EOF
351 352 > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle6.hg', 'size': 1663, 'digests': 'sha1', 'digest:sha1': '0' * 40}
352 353 > EOF
353 354 $ hg clone ssh://user@dummy/repo clone
354 355 requesting all changes
355 356 remote: remote-changegroup
356 357 adding changesets
357 358 adding manifests
358 359 adding file changes
359 360 added 8 changesets with 7 changes to 7 files (+2 heads)
360 361 transaction abort!
361 362 rollback completed
362 363 abort: bundle at http://localhost:$HGPORT/bundle6.hg is corrupted:
363 364 sha1 mismatch: expected 0000000000000000000000000000000000000000, got 2c880cfec23cff7d8f80c2f12958d1563cbdaba6
364 365 [255]
365 366
366 367 Multiple hash digests can be given
367 368
368 369 $ cat > repo/.hg/bundle2maker << EOF
369 370 > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle6.hg', 'size': 1663, 'digests': 'md5 sha1', 'digest:md5': 'e22172c2907ef88794b7bea6642c2394', 'digest:sha1': '2c880cfec23cff7d8f80c2f12958d1563cbdaba6'}
370 371 > EOF
371 372 $ hg clone ssh://user@dummy/repo clone
372 373 requesting all changes
373 374 remote: remote-changegroup
374 375 adding changesets
375 376 adding manifests
376 377 adding file changes
377 378 added 8 changesets with 7 changes to 7 files (+2 heads)
378 379 updating to branch default
379 380 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
380 381 $ rm -rf clone
381 382
382 383 If either of the multiple hash digests mismatches, an error is thrown
383 384
384 385 $ cat > repo/.hg/bundle2maker << EOF
385 386 > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle6.hg', 'size': 1663, 'digests': 'md5 sha1', 'digest:md5': '0' * 32, 'digest:sha1': '2c880cfec23cff7d8f80c2f12958d1563cbdaba6'}
386 387 > EOF
387 388 $ hg clone ssh://user@dummy/repo clone
388 389 requesting all changes
389 390 remote: remote-changegroup
390 391 adding changesets
391 392 adding manifests
392 393 adding file changes
393 394 added 8 changesets with 7 changes to 7 files (+2 heads)
394 395 transaction abort!
395 396 rollback completed
396 397 abort: bundle at http://localhost:$HGPORT/bundle6.hg is corrupted:
397 398 md5 mismatch: expected 00000000000000000000000000000000, got e22172c2907ef88794b7bea6642c2394
398 399 [255]
399 400
400 401 $ cat > repo/.hg/bundle2maker << EOF
401 402 > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle6.hg', 'size': 1663, 'digests': 'md5 sha1', 'digest:md5': 'e22172c2907ef88794b7bea6642c2394', 'digest:sha1': '0' * 40}
402 403 > EOF
403 404 $ hg clone ssh://user@dummy/repo clone
404 405 requesting all changes
405 406 remote: remote-changegroup
406 407 adding changesets
407 408 adding manifests
408 409 adding file changes
409 410 added 8 changesets with 7 changes to 7 files (+2 heads)
410 411 transaction abort!
411 412 rollback completed
412 413 abort: bundle at http://localhost:$HGPORT/bundle6.hg is corrupted:
413 414 sha1 mismatch: expected 0000000000000000000000000000000000000000, got 2c880cfec23cff7d8f80c2f12958d1563cbdaba6
414 415 [255]
415 416
416 417 Corruption tests
417 418
418 419 $ hg clone orig clone -r 2
419 420 adding changesets
420 421 adding manifests
421 422 adding file changes
422 423 added 3 changesets with 3 changes to 3 files
423 424 updating to branch default
424 425 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
425 426
426 427 $ cat > repo/.hg/bundle2maker << EOF
427 428 > remote-changegroup http://localhost:$HGPORT/bundle4.hg bundle4.hg
428 429 > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle5.hg', 'size': 578, 'digests': 'sha1', 'digest:sha1': '0' * 40}
429 430 > changegroup 0:6 7
430 431 > EOF
431 432 $ hg pull -R clone ssh://user@dummy/repo
432 433 pulling from ssh://user@dummy/repo
433 434 searching for changes
434 435 remote: remote-changegroup
435 436 adding changesets
436 437 adding manifests
437 438 adding file changes
438 439 added 2 changesets with 2 changes to 2 files (+1 heads)
439 440 remote: remote-changegroup
440 441 adding changesets
441 442 adding manifests
442 443 adding file changes
443 444 added 2 changesets with 1 changes to 1 files
444 445 transaction abort!
445 446 rollback completed
446 447 abort: bundle at http://localhost:$HGPORT/bundle5.hg is corrupted:
447 448 sha1 mismatch: expected 0000000000000000000000000000000000000000, got f29485d6bfd37db99983cfc95ecb52f8ca396106
448 449 [255]
449 450
450 451 The entire transaction has been rolled back in the pull above
451 452
452 453 $ hg -R clone log -G
453 454 @ 2:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com> C
454 455 |
455 456 o 1:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com> B
456 457 |
457 458 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
458 459
459 460
No params

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {}
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  abort: remote-changegroup: missing "url" param
  [255]

Missing size

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle4.hg'}
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  abort: remote-changegroup: missing "size" param
  [255]

Invalid size

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle4.hg', 'size': 'foo'}
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  abort: remote-changegroup: invalid value for param "size"
  [255]

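The three aborts above all come from parameter validation: 'url' and 'size'
are mandatory, and 'size' must parse as an integer. A hedged sketch of such
a check (hypothetical helper, not the real part handler):

  def validate_params(params):
      # both params are required before any download is attempted
      for key in ('url', 'size'):
          if key not in params:
              raise ValueError('remote-changegroup: missing "%s" param' % key)
      try:
          return int(params['size'])
      except ValueError:
          raise ValueError('remote-changegroup: invalid value for param "size"')
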
Size mismatch

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle4.hg', 'size': 42}
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files (+1 heads)
  transaction abort!
  rollback completed
  abort: bundle at http://localhost:$HGPORT/bundle4.hg is corrupted:
  size mismatch: expected 42, got 581
  [255]

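The size check is the cheapest integrity test: it catches truncated or padded
downloads without computing any digest. A minimal sketch, assuming the whole
payload is already in memory:

  def check_size(data, expected_size, url):
      # mirror of the 'size mismatch' abort shown above
      if len(data) != expected_size:
          raise ValueError('bundle at %s is corrupted: size mismatch: '
                           'expected %d, got %d'
                           % (url, expected_size, len(data)))
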
Unknown digest

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle4.hg', 'size': 581, 'digests': 'foo', 'digest:foo': 'bar'}
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  abort: missing support for remote-changegroup - digest:foo
  [255]

Missing digest

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle4.hg', 'size': 581, 'digests': 'sha1'}
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  abort: remote-changegroup: missing "digest:sha1" param
  [255]

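Digest names are resolved before the download starts: an algorithm the local
side cannot compute aborts immediately, and every advertised name must come
with a matching 'digest:<name>' value. A sketch of that lookup, assuming
hashlib's guaranteed algorithms stand in for the supported set:

  import hashlib

  def pick_digests(params):
      # collect the advertised digests, failing fast on unsupported names
      digests = {}
      for name in params.get('digests', '').split():
          if name not in hashlib.algorithms_guaranteed:
              raise ValueError('missing support for remote-changegroup - '
                               'digest:%s' % name)
          key = 'digest:%s' % name
          if key not in params:
              raise ValueError('remote-changegroup: missing "%s" param' % key)
          digests[name] = params[key]
      return digests
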
Not an HTTP url

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {'url': 'ssh://localhost:$HGPORT/bundle4.hg', 'size': 581}
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  abort: remote-changegroup does not support ssh urls
  [255]

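Only HTTP(S) locations are accepted for the payload URL. A hedged sketch of
the scheme check (urllib.parse here; the real handler may differ):

  from urllib.parse import urlparse

  def check_scheme(url):
      # reject anything that is not plain http/https
      scheme = urlparse(url).scheme
      if scheme not in ('http', 'https'):
          raise ValueError('remote-changegroup does not support %s urls'
                           % scheme)
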
Not a bundle

  $ cat > notbundle.hg << EOF
  > foo
  > EOF
  $ cat > repo/.hg/bundle2maker << EOF
  > remote-changegroup http://localhost:$HGPORT/notbundle.hg notbundle.hg
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  abort: http://localhost:$HGPORT/notbundle.hg: not a Mercurial bundle
  [255]

Not a bundle 1.0

  $ cat > notbundle10.hg << EOF
  > HG20
  > EOF
  $ cat > repo/.hg/bundle2maker << EOF
  > remote-changegroup http://localhost:$HGPORT/notbundle10.hg notbundle10.hg
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  abort: http://localhost:$HGPORT/notbundle10.hg: not a bundle version 1.0
  [255]

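The two failures above are distinguished by the payload's 4-byte magic: a
file that does not start with 'HG' is not a Mercurial bundle at all, while an
'HG' header other than 'HG10' (here 'HG20', a bundle2 header) is rejected
because remote-changegroup expects a version 1.0 changegroup bundle. A
minimal sketch of that header check:

  def check_bundle_header(fh, url):
      # read and classify the 4-byte magic, mirroring the two aborts above
      magic = fh.read(4)
      if not magic.startswith(b'HG'):
          raise ValueError('%s: not a Mercurial bundle' % url)
      if magic != b'HG10':
          raise ValueError('%s: not a bundle version 1.0' % url)
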
  $ hg -R clone log -G
  @ 2:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com> C
  |
  o 1:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com> B
  |
  o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A

  $ rm -rf clone

  $ killdaemons.py