bundleoperation: pass the source argument from all the users...
Pulkit Goyal
r37254:f7d3915d default
@@ -1,2263 +1,2263 @@
1 1 # bundle2.py - generic container format to transmit arbitrary data.
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """Handling of the new bundle2 format
8 8
9 9 The goal of bundle2 is to act as an atomic packet to transmit a set of
10 10 payloads in an application-agnostic way. It consists of a sequence of "parts"
11 11 that will be handed to and processed by the application layer.
12 12
13 13
14 14 General format architecture
15 15 ===========================
16 16
17 17 The format is structured as follows
18 18
19 19 - magic string
20 20 - stream level parameters
21 21 - payload parts (any number)
22 22 - end of stream marker.
23 23
24 24 The binary format
25 25 ============================
26 26
27 27 All numbers are unsigned and big-endian.
28 28
29 29 stream level parameters
30 30 ------------------------
31 31
32 32 The binary format is as follows
33 33
34 34 :params size: int32
35 35
36 36 The total number of bytes used by the parameters.
37 37
38 38 :params value: arbitrary number of bytes
39 39
40 40 A blob of `params size` bytes containing the serialized version of all
41 41 stream level parameters.
42 42
43 43 The blob contains a space separated list of parameters. Parameters with a
44 44 value are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45 45
46 46 Empty names are obviously forbidden.
47 47
48 48 Names MUST start with a letter. If the first letter is lower case, the
49 49 parameter is advisory and can be safely ignored. However, when the first
50 50 letter is capital, the parameter is mandatory and the bundling process MUST
51 51 stop if it is not able to process it.
52 52
53 53 Stream parameters use a simple textual format for two main reasons:
54 54
55 55 - Stream level parameters should remain simple and we want to discourage any
56 56 crazy usage.
57 57 - Textual data allow easy human inspection of a bundle2 header in case of
58 58 troubles.
59 59
60 60 Any application-level options MUST go into a bundle2 part instead.
61 61
62 62 Payload part
63 63 ------------------------
64 64
65 65 The binary format is as follows
66 66
67 67 :header size: int32
68 68
69 69 The total number of bytes used by the part header. When the header is empty
70 70 (size = 0) this is interpreted as the end of stream marker.
71 71
72 72 :header:
73 73
74 74 The header defines how to interpret the part. It contains two pieces of
75 75 data: the part type, and the part parameters.
76 76
77 77 The part type is used to route to an application level handler that can
78 78 interpret the payload.
79 79
80 80 Part parameters are passed to the application level handler. They are
81 81 meant to convey information that will help the application level handler
82 82 interpret the part payload.
83 83
84 84 The binary format of the header is as follows
85 85
86 86 :typesize: (one byte)
87 87
88 88 :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)
89 89
90 90 :partid: A 32-bit integer (unique in the bundle) that can be used to refer
91 91 to this part.
92 92
93 93 :parameters:
94 94
95 95 A part's parameters may have arbitrary content; the binary structure is::
96 96
97 97 <mandatory-count><advisory-count><param-sizes><param-data>
98 98
99 99 :mandatory-count: 1 byte, number of mandatory parameters
100 100
101 101 :advisory-count: 1 byte, number of advisory parameters
102 102
103 103 :param-sizes:
104 104
105 105 N pairs of bytes, where N is the total number of parameters. Each
106 106 pair contains (<size-of-key>, <size-of-value>) for one parameter.
107 107
108 108 :param-data:
109 109
110 110 A blob of bytes from which each parameter key and value can be
111 111 retrieved using the list of size couples stored in the previous
112 112 field.
113 113
114 114 Mandatory parameters come first, then the advisory ones.
115 115
116 116 Each parameter's key MUST be unique within the part.
117 117
118 118 :payload:
119 119
120 120 payload is a series of `<chunksize><chunkdata>`.
121 121
122 122 `chunksize` is an int32, `chunkdata` are plain bytes (as many as
123 123 `chunksize` says). The payload part is concluded by a zero size chunk.
124 124
125 125 The current implementation always produces either zero or one chunk.
126 126 This is an implementation limitation that will ultimately be lifted.
127 127
128 128 `chunksize` can be negative to trigger special case processing. No such
129 129 processing is in place yet.
130 130
131 131 Bundle processing
132 132 ============================
133 133
134 134 Each part is processed in order using a "part handler". Handlers are
135 135 registered for a certain part type.
136 136
137 137 The matching of a part to its handler is case insensitive. The case of the
138 138 part type is used to know if a part is mandatory or advisory. If the part
139 139 type contains any uppercase character it is considered mandatory. When no
140 140 handler is known for a mandatory part, the process is aborted and an
141 141 exception is raised. If the part is advisory and no handler is known, the
142 142 part is ignored. When the process is aborted, the full bundle is still read
143 143 from the stream to keep the channel usable. But none of the parts read
144 144 after an abort are processed. In the future, dropping the stream may become
145 145 an option for channels we do not care to preserve.
146 146 """
147 147
148 148 from __future__ import absolute_import, division
149 149
150 150 import collections
151 151 import errno
152 152 import os
153 153 import re
154 154 import string
155 155 import struct
156 156 import sys
157 157
158 158 from .i18n import _
159 159 from . import (
160 160 bookmarks,
161 161 changegroup,
162 162 encoding,
163 163 error,
164 164 node as nodemod,
165 165 obsolete,
166 166 phases,
167 167 pushkey,
168 168 pycompat,
169 169 streamclone,
170 170 tags,
171 171 url,
172 172 util,
173 173 )
174 174 from .utils import (
175 175 stringutil,
176 176 )
177 177
178 178 urlerr = util.urlerr
179 179 urlreq = util.urlreq
180 180
181 181 _pack = struct.pack
182 182 _unpack = struct.unpack
183 183
184 184 _fstreamparamsize = '>i'
185 185 _fpartheadersize = '>i'
186 186 _fparttypesize = '>B'
187 187 _fpartid = '>I'
188 188 _fpayloadsize = '>i'
189 189 _fpartparamcount = '>BB'
190 190
191 191 preferedchunksize = 32768
192 192
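# A minimal illustrative sketch (hypothetical helper, not used elsewhere) of
# the stream-level framing described in the module docstring: the magic
# string, an int32 parameter block size, then the urlquoted, space separated
# parameter blob.
def _examplestreamheader():
    """frame and re-parse a bundle2 stream header (illustration only)"""
    import struct
    params = b'Compression=BZ'
    header = b'HG20' + struct.pack('>i', len(params)) + params
    # parse it back: magic string, then the size-prefixed parameter blob
    assert header[0:4] == b'HG20'
    size = struct.unpack('>i', header[4:8])[0]
    assert header[8:8 + size] == params
    return header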
193 193 _parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')
194 194
195 195 def outdebug(ui, message):
196 196 """debug regarding output stream (bundling)"""
197 197 if ui.configbool('devel', 'bundle2.debug'):
198 198 ui.debug('bundle2-output: %s\n' % message)
199 199
200 200 def indebug(ui, message):
201 201 """debug on input stream (unbundling)"""
202 202 if ui.configbool('devel', 'bundle2.debug'):
203 203 ui.debug('bundle2-input: %s\n' % message)
204 204
205 205 def validateparttype(parttype):
206 206 """raise ValueError if a parttype contains invalid character"""
207 207 if _parttypeforbidden.search(parttype):
208 208 raise ValueError(parttype)
209 209
210 210 def _makefpartparamsizes(nbparams):
211 211 """return a struct format to read part parameter sizes
212 212
213 213 The number of parameters is variable so we need to build that format
214 214 dynamically.
215 215 """
216 216 return '>'+('BB'*nbparams)
217 217
218 218 parthandlermapping = {}
219 219
220 220 def parthandler(parttype, params=()):
221 221 """decorator that register a function as a bundle2 part handler
222 222
223 223 eg::
224 224
225 225 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
226 226 def myparttypehandler(...):
227 227 '''process a part of type "my part".'''
228 228 ...
229 229 """
230 230 validateparttype(parttype)
231 231 def _decorator(func):
232 232 lparttype = parttype.lower() # enforce lower case matching.
233 233 assert lparttype not in parthandlermapping
234 234 parthandlermapping[lparttype] = func
235 235 func.params = frozenset(params)
236 236 return func
237 237 return _decorator
238 238
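# A sketch of registering a hypothetical advisory handler with the decorator
# above. A lower case part type makes the part advisory; the params tuple
# lists the mandatory parameter keys the handler claims to understand:
#
#     @parthandler('x-example', ('key',))
#     def handlexample(op, part):
#         value = part.params['key']  # parameters decoded from the header
#         op.records.add('x-example', {'key': value})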
239 239 class unbundlerecords(object):
240 240 """keep record of what happens during and unbundle
241 241
242 242 New records are added using `records.add('cat', obj)`, where 'cat' is a
243 243 category of record and obj is an arbitrary object.
244 244
245 245 `records['cat']` will return all entries of this category 'cat'.
246 246
247 247 Iterating on the object itself will yield `('category', obj)` tuples
248 248 for all entries.
249 249
250 250 All iterations happen in chronological order.
251 251 """
252 252
253 253 def __init__(self):
254 254 self._categories = {}
255 255 self._sequences = []
256 256 self._replies = {}
257 257
258 258 def add(self, category, entry, inreplyto=None):
259 259 """add a new record of a given category.
260 260
261 261 The entry can then be retrieved in the list returned by
262 262 self['category']."""
263 263 self._categories.setdefault(category, []).append(entry)
264 264 self._sequences.append((category, entry))
265 265 if inreplyto is not None:
266 266 self.getreplies(inreplyto).add(category, entry)
267 267
268 268 def getreplies(self, partid):
269 269 """get the records that are replies to a specific part"""
270 270 return self._replies.setdefault(partid, unbundlerecords())
271 271
272 272 def __getitem__(self, cat):
273 273 return tuple(self._categories.get(cat, ()))
274 274
275 275 def __iter__(self):
276 276 return iter(self._sequences)
277 277
278 278 def __len__(self):
279 279 return len(self._sequences)
280 280
281 281 def __nonzero__(self):
282 282 return bool(self._sequences)
283 283
284 284 __bool__ = __nonzero__
285 285
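# A small illustration (hypothetical helper) of the unbundlerecords API:
# entries are grouped by category, and replies are nested per part id via
# getreplies().
def _examplerecords():
    records = unbundlerecords()
    records.add('changegroup', {'return': 1})
    records.add('output', 'remote: ok', inreplyto=0)
    assert records['changegroup'] == ({'return': 1},)
    assert records.getreplies(0)['output'] == ('remote: ok',)
    return records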
286 286 class bundleoperation(object):
287 287 """an object that represents a single bundling process
288 288
289 289 Its purpose is to carry unbundle-related objects and states.
290 290
291 291 A new object should be created at the beginning of each bundle processing.
292 292 The object is to be returned by the processing function.
293 293
294 294 The object has very little content now; it will ultimately contain:
295 295 * an access to the repo the bundle is applied to,
296 296 * a ui object,
297 297 * a way to retrieve a transaction to add changes to the repo,
298 298 * a way to record the result of processing each part,
299 299 * a way to construct a bundle response when applicable.
300 300 """
301 301
302 302 def __init__(self, repo, transactiongetter, captureoutput=True, source=''):
303 303 self.repo = repo
304 304 self.ui = repo.ui
305 305 self.records = unbundlerecords()
306 306 self.reply = None
307 307 self.captureoutput = captureoutput
308 308 self.hookargs = {}
309 309 self._gettransaction = transactiongetter
310 310 # carries value that can modify part behavior
311 311 self.modes = {}
312 312 self.source = source
313 313
314 314 def gettransaction(self):
315 315 transaction = self._gettransaction()
316 316
317 317 if self.hookargs:
318 318 # the ones added to the transaction supersede those added
319 319 # to the operation.
320 320 self.hookargs.update(transaction.hookargs)
321 321 transaction.hookargs = self.hookargs
322 322
323 323 # mark the hookargs as flushed. further attempts to add to
324 324 # hookargs will result in an abort.
325 325 self.hookargs = None
326 326
327 327 return transaction
328 328
329 329 def addhookargs(self, hookargs):
330 330 if self.hookargs is None:
331 331 raise error.ProgrammingError('attempted to add hookargs to '
332 332 'operation after transaction started')
333 333 self.hookargs.update(hookargs)
334 334
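# With this change, creators of a bundleoperation thread the transfer source
# through to the object so part handlers can consult op.source. Hypothetical
# usage, assuming a repo object is at hand:
#
#     op = bundleoperation(repo, _notransaction, source='pull')
#     op.source                         # -> 'pull'
#     op.addhookargs({'bundle2': '1'})  # allowed until gettransaction()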
335 335 class TransactionUnavailable(RuntimeError):
336 336 pass
337 337
338 338 def _notransaction():
339 339 """default method to get a transaction while processing a bundle
340 340
341 341 Raise an exception to highlight the fact that no transaction was expected
342 342 to be created"""
343 343 raise TransactionUnavailable()
344 344
345 345 def applybundle(repo, unbundler, tr, source=None, url=None, **kwargs):
346 346 # transform me into unbundler.apply() as soon as the freeze is lifted
347 347 if isinstance(unbundler, unbundle20):
348 348 tr.hookargs['bundle2'] = '1'
349 349 if source is not None and 'source' not in tr.hookargs:
350 350 tr.hookargs['source'] = source
351 351 if url is not None and 'url' not in tr.hookargs:
352 352 tr.hookargs['url'] = url
353 353 return processbundle(repo, unbundler, lambda: tr, source=source)
354 354 else:
355 355 # the transactiongetter won't be used, but we might as well set it
356 op = bundleoperation(repo, lambda: tr)
356 op = bundleoperation(repo, lambda: tr, source=source)
357 357 _processchangegroup(op, unbundler, tr, source, url, **kwargs)
358 358 return op
359 359
360 360 class partiterator(object):
361 361 def __init__(self, repo, op, unbundler):
362 362 self.repo = repo
363 363 self.op = op
364 364 self.unbundler = unbundler
365 365 self.iterator = None
366 366 self.count = 0
367 367 self.current = None
368 368
369 369 def __enter__(self):
370 370 def func():
371 371 itr = enumerate(self.unbundler.iterparts())
372 372 for count, p in itr:
373 373 self.count = count
374 374 self.current = p
375 375 yield p
376 376 p.consume()
377 377 self.current = None
378 378 self.iterator = func()
379 379 return self.iterator
380 380
381 381 def __exit__(self, type, exc, tb):
382 382 if not self.iterator:
383 383 return
384 384
385 385 # Only gracefully abort in a normal exception situation. User aborts
386 386 # like Ctrl+C throw a KeyboardInterrupt which is not a subclass of
387 387 # Exception, and should not trigger graceful cleanup.
388 388 if isinstance(exc, Exception):
389 389 # Any exceptions seeking to the end of the bundle at this point are
390 390 # almost certainly related to the underlying stream being bad.
391 391 # And, chances are that the exception we're handling is related to
392 392 # getting in that bad state. So, we swallow the seeking error and
393 393 # re-raise the original error.
394 394 seekerror = False
395 395 try:
396 396 if self.current:
397 397 # consume the part content to not corrupt the stream.
398 398 self.current.consume()
399 399
400 400 for part in self.iterator:
401 401 # consume the bundle content
402 402 part.consume()
403 403 except Exception:
404 404 seekerror = True
405 405
406 406 # Small hack to let caller code distinguish exceptions from bundle2
407 407 # processing from processing the old format. This is mostly needed
408 408 # to handle different return codes to unbundle according to the type
409 409 # of bundle. We should probably clean up or drop this return code
410 410 # craziness in a future version.
411 411 exc.duringunbundle2 = True
412 412 salvaged = []
413 413 replycaps = None
414 414 if self.op.reply is not None:
415 415 salvaged = self.op.reply.salvageoutput()
416 416 replycaps = self.op.reply.capabilities
417 417 exc._replycaps = replycaps
418 418 exc._bundle2salvagedoutput = salvaged
419 419
420 420 # Re-raising from a variable loses the original stack. So only use
421 421 # that form if we need to.
422 422 if seekerror:
423 423 raise exc
424 424
425 425 self.repo.ui.debug('bundle2-input-bundle: %i parts total\n' %
426 426 self.count)
427 427
428 428 def processbundle(repo, unbundler, transactiongetter=None, op=None, source=''):
429 429 """This function process a bundle, apply effect to/from a repo
430 430
431 431 It iterates over each part then searches for and uses the proper handling
432 432 code to process the part. Parts are processed in order.
433 433
434 434 An unknown mandatory part will abort the process.
435 435
436 436 It is temporarily possible to provide a prebuilt bundleoperation to the
437 437 function. This is used to ensure output is properly propagated in case of
438 438 an error during the unbundling. This output capturing part will likely be
439 439 reworked and this ability will probably go away in the process.
440 440 """
441 441 if op is None:
442 442 if transactiongetter is None:
443 443 transactiongetter = _notransaction
444 op = bundleoperation(repo, transactiongetter)
444 op = bundleoperation(repo, transactiongetter, source=source)
445 445 # todo:
446 446 # - replace this with an init function soon.
447 447 # - exception catching
448 448 unbundler.params
449 449 if repo.ui.debugflag:
450 450 msg = ['bundle2-input-bundle:']
451 451 if unbundler.params:
452 452 msg.append(' %i params' % len(unbundler.params))
453 453 if op._gettransaction is None or op._gettransaction is _notransaction:
454 454 msg.append(' no-transaction')
455 455 else:
456 456 msg.append(' with-transaction')
457 457 msg.append('\n')
458 458 repo.ui.debug(''.join(msg))
459 459
460 460 processparts(repo, op, unbundler)
461 461
462 462 return op
463 463
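# Hypothetical sketch of driving processbundle() by hand, assuming a repo,
# an open transaction tr, and a binary stream fp:
#
#     unbundler = getunbundler(repo.ui, fp)
#     op = processbundle(repo, unbundler,
#                        transactiongetter=lambda: tr, source='push')
#     op.records['changegroup']  # results are recorded per category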
464 464 def processparts(repo, op, unbundler):
465 465 with partiterator(repo, op, unbundler) as parts:
466 466 for part in parts:
467 467 _processpart(op, part)
468 468
469 469 def _processchangegroup(op, cg, tr, source, url, **kwargs):
470 470 ret = cg.apply(op.repo, tr, source, url, **kwargs)
471 471 op.records.add('changegroup', {
472 472 'return': ret,
473 473 })
474 474 return ret
475 475
476 476 def _gethandler(op, part):
477 477 status = 'unknown' # used by debug output
478 478 try:
479 479 handler = parthandlermapping.get(part.type)
480 480 if handler is None:
481 481 status = 'unsupported-type'
482 482 raise error.BundleUnknownFeatureError(parttype=part.type)
483 483 indebug(op.ui, 'found a handler for part %s' % part.type)
484 484 unknownparams = part.mandatorykeys - handler.params
485 485 if unknownparams:
486 486 unknownparams = list(unknownparams)
487 487 unknownparams.sort()
488 488 status = 'unsupported-params (%s)' % ', '.join(unknownparams)
489 489 raise error.BundleUnknownFeatureError(parttype=part.type,
490 490 params=unknownparams)
491 491 status = 'supported'
492 492 except error.BundleUnknownFeatureError as exc:
493 493 if part.mandatory: # mandatory parts
494 494 raise
495 495 indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
496 496 return # skip to part processing
497 497 finally:
498 498 if op.ui.debugflag:
499 499 msg = ['bundle2-input-part: "%s"' % part.type]
500 500 if not part.mandatory:
501 501 msg.append(' (advisory)')
502 502 nbmp = len(part.mandatorykeys)
503 503 nbap = len(part.params) - nbmp
504 504 if nbmp or nbap:
505 505 msg.append(' (params:')
506 506 if nbmp:
507 507 msg.append(' %i mandatory' % nbmp)
508 508 if nbap:
509 509 msg.append(' %i advisory' % nbap)
510 510 msg.append(')')
511 511 msg.append(' %s\n' % status)
512 512 op.ui.debug(''.join(msg))
513 513
514 514 return handler
515 515
516 516 def _processpart(op, part):
517 517 """process a single part from a bundle
518 518
519 519 The part is guaranteed to have been fully consumed when the function exits
520 520 (even if an exception is raised)."""
521 521 handler = _gethandler(op, part)
522 522 if handler is None:
523 523 return
524 524
525 525 # handler is called outside the above try block so that we don't
526 526 # risk catching KeyErrors from anything other than the
527 527 # parthandlermapping lookup (any KeyError raised by handler()
528 528 # itself represents a defect of a different variety).
529 529 output = None
530 530 if op.captureoutput and op.reply is not None:
531 531 op.ui.pushbuffer(error=True, subproc=True)
532 532 output = ''
533 533 try:
534 534 handler(op, part)
535 535 finally:
536 536 if output is not None:
537 537 output = op.ui.popbuffer()
538 538 if output:
539 539 outpart = op.reply.newpart('output', data=output,
540 540 mandatory=False)
541 541 outpart.addparam(
542 542 'in-reply-to', pycompat.bytestr(part.id), mandatory=False)
543 543
544 544 def decodecaps(blob):
545 545 """decode a bundle2 caps bytes blob into a dictionary
546 546
547 547 The blob is a list of capabilities (one per line)
548 548 Capabilities may have values using a line of the form::
549 549
550 550 capability=value1,value2,value3
551 551
552 552 The values are always a list."""
553 553 caps = {}
554 554 for line in blob.splitlines():
555 555 if not line:
556 556 continue
557 557 if '=' not in line:
558 558 key, vals = line, ()
559 559 else:
560 560 key, vals = line.split('=', 1)
561 561 vals = vals.split(',')
562 562 key = urlreq.unquote(key)
563 563 vals = [urlreq.unquote(v) for v in vals]
564 564 caps[key] = vals
565 565 return caps
566 566
567 567 def encodecaps(caps):
568 568 """encode a bundle2 caps dictionary into a bytes blob"""
569 569 chunks = []
570 570 for ca in sorted(caps):
571 571 vals = caps[ca]
572 572 ca = urlreq.quote(ca)
573 573 vals = [urlreq.quote(v) for v in vals]
574 574 if vals:
575 575 ca = "%s=%s" % (ca, ','.join(vals))
576 576 chunks.append(ca)
577 577 return '\n'.join(chunks)
578 578
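# A round-trip illustration (hypothetical helper) of the caps blob format
# handled above: one capability per line, comma separated values, everything
# urlquoted.
def _examplecapsroundtrip():
    caps = {'error': ['abort', 'pushkey'], 'listkeys': []}
    blob = encodecaps(caps)  # 'error=abort,pushkey\nlistkeys'
    assert decodecaps(blob) == caps
    return blob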
579 579 bundletypes = {
580 580 "": ("", 'UN'), # only when using unbundle on ssh and old http servers
581 581 # since the unification ssh accepts a header but there
582 582 # is no capability signaling it.
583 583 "HG20": (), # special-cased below
584 584 "HG10UN": ("HG10UN", 'UN'),
585 585 "HG10BZ": ("HG10", 'BZ'),
586 586 "HG10GZ": ("HG10GZ", 'GZ'),
587 587 }
588 588
589 589 # hgweb uses this list to communicate its preferred type
590 590 bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']
591 591
592 592 class bundle20(object):
593 593 """represent an outgoing bundle2 container
594 594
595 595 Use the `addparam` method to add stream level parameters and `newpart` to
596 596 populate it. Then call `getchunks` to retrieve all the binary chunks of
597 597 data that compose the bundle2 container."""
598 598
599 599 _magicstring = 'HG20'
600 600
601 601 def __init__(self, ui, capabilities=()):
602 602 self.ui = ui
603 603 self._params = []
604 604 self._parts = []
605 605 self.capabilities = dict(capabilities)
606 606 self._compengine = util.compengines.forbundletype('UN')
607 607 self._compopts = None
608 608 # If compression is being handled by a consumer of the raw
609 609 # data (e.g. the wire protocol), unsetting this flag tells
610 610 # consumers that the bundle is best left uncompressed.
611 611 self.prefercompressed = True
612 612
613 613 def setcompression(self, alg, compopts=None):
614 614 """setup core part compression to <alg>"""
615 615 if alg in (None, 'UN'):
616 616 return
617 617 assert not any(n.lower() == 'compression' for n, v in self._params)
618 618 self.addparam('Compression', alg)
619 619 self._compengine = util.compengines.forbundletype(alg)
620 620 self._compopts = compopts
621 621
622 622 @property
623 623 def nbparts(self):
624 624 """total number of parts added to the bundler"""
625 625 return len(self._parts)
626 626
627 627 # methods used to define the bundle2 content
628 628 def addparam(self, name, value=None):
629 629 """add a stream level parameter"""
630 630 if not name:
631 631 raise ValueError(r'empty parameter name')
632 632 if name[0:1] not in pycompat.bytestr(string.ascii_letters):
633 633 raise ValueError(r'non letter first character: %s' % name)
634 634 self._params.append((name, value))
635 635
636 636 def addpart(self, part):
637 637 """add a new part to the bundle2 container
638 638
639 639 Parts contain the actual applicative payload."""
640 640 assert part.id is None
641 641 part.id = len(self._parts) # very cheap counter
642 642 self._parts.append(part)
643 643
644 644 def newpart(self, typeid, *args, **kwargs):
645 645 """create a new part and add it to the containers
646 646
647 647 As the part is directly added to the containers. For now, this means
648 648 that any failure to properly initialize the part after calling
649 649 ``newpart`` should result in a failure of the whole bundling process.
650 650
651 651 You can still fall back to manually creating and adding a part if you
652 652 need better control."""
653 653 part = bundlepart(typeid, *args, **kwargs)
654 654 self.addpart(part)
655 655 return part
656 656
657 657 # methods used to generate the bundle2 stream
658 658 def getchunks(self):
659 659 if self.ui.debugflag:
660 660 msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
661 661 if self._params:
662 662 msg.append(' (%i params)' % len(self._params))
663 663 msg.append(' %i parts total\n' % len(self._parts))
664 664 self.ui.debug(''.join(msg))
665 665 outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
666 666 yield self._magicstring
667 667 param = self._paramchunk()
668 668 outdebug(self.ui, 'bundle parameter: %s' % param)
669 669 yield _pack(_fstreamparamsize, len(param))
670 670 if param:
671 671 yield param
672 672 for chunk in self._compengine.compressstream(self._getcorechunk(),
673 673 self._compopts):
674 674 yield chunk
675 675
676 676 def _paramchunk(self):
677 677 """return a encoded version of all stream parameters"""
678 678 blocks = []
679 679 for par, value in self._params:
680 680 par = urlreq.quote(par)
681 681 if value is not None:
682 682 value = urlreq.quote(value)
683 683 par = '%s=%s' % (par, value)
684 684 blocks.append(par)
685 685 return ' '.join(blocks)
686 686
687 687 def _getcorechunk(self):
688 688 """yield chunk for the core part of the bundle
689 689
690 690 (all but headers and parameters)"""
691 691 outdebug(self.ui, 'start of parts')
692 692 for part in self._parts:
693 693 outdebug(self.ui, 'bundle part: "%s"' % part.type)
694 694 for chunk in part.getchunks(ui=self.ui):
695 695 yield chunk
696 696 outdebug(self.ui, 'end of bundle')
697 697 yield _pack(_fpartheadersize, 0)
698 698
699 699
700 700 def salvageoutput(self):
701 701 """return a list with a copy of all output parts in the bundle
702 702
703 703 This is meant to be used during error handling to make sure we preserve
704 704 server output"""
705 705 salvaged = []
706 706 for part in self._parts:
707 707 if part.type.startswith('output'):
708 708 salvaged.append(part.copy())
709 709 return salvaged
710 710
711 711
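# Hypothetical sketch of assembling an outgoing bundle with the class above,
# assuming a ui object is at hand:
#
#     bundler = bundle20(ui)
#     bundler.setcompression('GZ')  # adds the mandatory Compression param
#     bundler.newpart('output', data='hello', mandatory=False)
#     raw = ''.join(bundler.getchunks())  # magic + params + framed parts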
712 712 class unpackermixin(object):
713 713 """A mixin to extract bytes and struct data from a stream"""
714 714
715 715 def __init__(self, fp):
716 716 self._fp = fp
717 717
718 718 def _unpack(self, format):
719 719 """unpack this struct format from the stream
720 720
721 721 This method is meant for internal usage by the bundle2 protocol only.
722 722 It directly manipulates the low-level stream, including bundle2-level
723 723 instructions.
724 724
725 725 Do not use it to implement higher-level logic or methods."""
726 726 data = self._readexact(struct.calcsize(format))
727 727 return _unpack(format, data)
728 728
729 729 def _readexact(self, size):
730 730 """read exactly <size> bytes from the stream
731 731
732 732 This method is meant for internal usage by the bundle2 protocol only.
733 733 It directly manipulates the low-level stream, including bundle2-level
734 734 instructions.
735 735
736 736 Do not use it to implement higher-level logic or methods."""
737 737 return changegroup.readexactly(self._fp, size)
738 738
739 739 def getunbundler(ui, fp, magicstring=None):
740 740 """return a valid unbundler object for a given magicstring"""
741 741 if magicstring is None:
742 742 magicstring = changegroup.readexactly(fp, 4)
743 743 magic, version = magicstring[0:2], magicstring[2:4]
744 744 if magic != 'HG':
745 745 ui.debug(
746 746 "error: invalid magic: %r (version %r), should be 'HG'\n"
747 747 % (magic, version))
748 748 raise error.Abort(_('not a Mercurial bundle'))
749 749 unbundlerclass = formatmap.get(version)
750 750 if unbundlerclass is None:
751 751 raise error.Abort(_('unknown bundle version %s') % version)
752 752 unbundler = unbundlerclass(ui, fp)
753 753 indebug(ui, 'start processing of %s stream' % magicstring)
754 754 return unbundler
755 755
756 756 class unbundle20(unpackermixin):
757 757 """interpret a bundle2 stream
758 758
759 759 This class is fed with a binary stream and yields parts through its
760 760 `iterparts` method."""
761 761
762 762 _magicstring = 'HG20'
763 763
764 764 def __init__(self, ui, fp):
765 765 """If header is specified, we do not read it out of the stream."""
766 766 self.ui = ui
767 767 self._compengine = util.compengines.forbundletype('UN')
768 768 self._compressed = None
769 769 super(unbundle20, self).__init__(fp)
770 770
771 771 @util.propertycache
772 772 def params(self):
773 773 """dictionary of stream level parameters"""
774 774 indebug(self.ui, 'reading bundle2 stream parameters')
775 775 params = {}
776 776 paramssize = self._unpack(_fstreamparamsize)[0]
777 777 if paramssize < 0:
778 778 raise error.BundleValueError('negative bundle param size: %i'
779 779 % paramssize)
780 780 if paramssize:
781 781 params = self._readexact(paramssize)
782 782 params = self._processallparams(params)
783 783 return params
784 784
785 785 def _processallparams(self, paramsblock):
786 786 """"""
787 787 params = util.sortdict()
788 788 for p in paramsblock.split(' '):
789 789 p = p.split('=', 1)
790 790 p = [urlreq.unquote(i) for i in p]
791 791 if len(p) < 2:
792 792 p.append(None)
793 793 self._processparam(*p)
794 794 params[p[0]] = p[1]
795 795 return params
796 796
797 797
798 798 def _processparam(self, name, value):
799 799 """process a parameter, applying its effect if needed
800 800
801 801 Parameters starting with a lower case letter are advisory and will be
802 802 ignored when unknown. Those starting with an upper case letter are
803 803 mandatory, and this function will raise a KeyError when they are unknown.
804 804 
805 805 Note: no options are currently supported. Any input will either be
806 806 ignored or fail.
807 807 """
808 808 if not name:
809 809 raise ValueError(r'empty parameter name')
810 810 if name[0:1] not in pycompat.bytestr(string.ascii_letters):
811 811 raise ValueError(r'non letter first character: %s' % name)
812 812 try:
813 813 handler = b2streamparamsmap[name.lower()]
814 814 except KeyError:
815 815 if name[0:1].islower():
816 816 indebug(self.ui, "ignoring unknown parameter %s" % name)
817 817 else:
818 818 raise error.BundleUnknownFeatureError(params=(name,))
819 819 else:
820 820 handler(self, name, value)
821 821
822 822 def _forwardchunks(self):
823 823 """utility to transfer a bundle2 as binary
824 824
825 825 This is made necessary by the fact that the 'getbundle' command over 'ssh'
826 826 has no way to know when the reply ends, relying on the bundle to be
827 827 interpreted to know its end. This is terrible and we are sorry, but we
828 828 needed to move forward to get general delta enabled.
829 829 """
830 830 yield self._magicstring
831 831 assert 'params' not in vars(self)
832 832 paramssize = self._unpack(_fstreamparamsize)[0]
833 833 if paramssize < 0:
834 834 raise error.BundleValueError('negative bundle param size: %i'
835 835 % paramssize)
836 836 yield _pack(_fstreamparamsize, paramssize)
837 837 if paramssize:
838 838 params = self._readexact(paramssize)
839 839 self._processallparams(params)
840 840 yield params
841 841 assert self._compengine.bundletype == 'UN'
842 842 # From there, payload might need to be decompressed
843 843 self._fp = self._compengine.decompressorreader(self._fp)
844 844 emptycount = 0
845 845 while emptycount < 2:
846 846 # so we can brainlessly loop
847 847 assert _fpartheadersize == _fpayloadsize
848 848 size = self._unpack(_fpartheadersize)[0]
849 849 yield _pack(_fpartheadersize, size)
850 850 if size:
851 851 emptycount = 0
852 852 else:
853 853 emptycount += 1
854 854 continue
855 855 if size == flaginterrupt:
856 856 continue
857 857 elif size < 0:
858 858 raise error.BundleValueError('negative chunk size: %i' % size)
859 859 yield self._readexact(size)
860 860
861 861
862 862 def iterparts(self, seekable=False):
863 863 """yield all parts contained in the stream"""
864 864 cls = seekableunbundlepart if seekable else unbundlepart
865 865 # make sure params have been loaded
866 866 self.params
867 867 # From there, the payload needs to be decompressed
868 868 self._fp = self._compengine.decompressorreader(self._fp)
869 869 indebug(self.ui, 'start extraction of bundle2 parts')
870 870 headerblock = self._readpartheader()
871 871 while headerblock is not None:
872 872 part = cls(self.ui, headerblock, self._fp)
873 873 yield part
874 874 # Ensure part is fully consumed so we can start reading the next
875 875 # part.
876 876 part.consume()
877 877
878 878 headerblock = self._readpartheader()
879 879 indebug(self.ui, 'end of bundle2 stream')
880 880
881 881 def _readpartheader(self):
882 882 """reads a part header size and return the bytes blob
883 883
884 884 returns None if empty"""
885 885 headersize = self._unpack(_fpartheadersize)[0]
886 886 if headersize < 0:
887 887 raise error.BundleValueError('negative part header size: %i'
888 888 % headersize)
889 889 indebug(self.ui, 'part header size: %i' % headersize)
890 890 if headersize:
891 891 return self._readexact(headersize)
892 892 return None
893 893
894 894 def compressed(self):
895 895 self.params # load params
896 896 return self._compressed
897 897
898 898 def close(self):
899 899 """close underlying file"""
900 900 if util.safehasattr(self._fp, 'close'):
901 901 return self._fp.close()
902 902
903 903 formatmap = {'20': unbundle20}
904 904
905 905 b2streamparamsmap = {}
906 906
907 907 def b2streamparamhandler(name):
908 908 """register a handler for a stream level parameter"""
909 909 def decorator(func):
910 910 assert name not in b2streamparamsmap
911 911 b2streamparamsmap[name] = func
912 912 return func
913 913 return decorator
914 914
915 915 @b2streamparamhandler('compression')
916 916 def processcompression(unbundler, param, value):
917 917 """read compression parameter and install payload decompression"""
918 918 if value not in util.compengines.supportedbundletypes:
919 919 raise error.BundleUnknownFeatureError(params=(param,),
920 920 values=(value,))
921 921 unbundler._compengine = util.compengines.forbundletype(value)
922 922 if value is not None:
923 923 unbundler._compressed = True
924 924
925 925 class bundlepart(object):
926 926 """A bundle2 part contains application level payload
927 927
928 928 The part `type` is used to route the part to the application level
929 929 handler.
930 930
931 931 The part payload is contained in ``part.data``. It could be raw bytes or a
932 932 generator of byte chunks.
933 933
934 934 You can add parameters to the part using the ``addparam`` method.
935 935 Parameters can be either mandatory (default) or advisory. Remote side
936 936 should be able to safely ignore the advisory ones.
937 937
938 938 Neither data nor parameters can be modified after generation has begun.
939 939 """
940 940
941 941 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
942 942 data='', mandatory=True):
943 943 validateparttype(parttype)
944 944 self.id = None
945 945 self.type = parttype
946 946 self._data = data
947 947 self._mandatoryparams = list(mandatoryparams)
948 948 self._advisoryparams = list(advisoryparams)
949 949 # checking for duplicated entries
950 950 self._seenparams = set()
951 951 for pname, __ in self._mandatoryparams + self._advisoryparams:
952 952 if pname in self._seenparams:
953 953 raise error.ProgrammingError('duplicated params: %s' % pname)
954 954 self._seenparams.add(pname)
955 955 # status of the part's generation:
956 956 # - None: not started,
957 957 # - False: currently generated,
958 958 # - True: generation done.
959 959 self._generated = None
960 960 self.mandatory = mandatory
961 961
962 962 def __repr__(self):
963 963 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
964 964 return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
965 965 % (cls, id(self), self.id, self.type, self.mandatory))
966 966
967 967 def copy(self):
968 968 """return a copy of the part
969 969
970 970 The new part has the very same content but no partid assigned yet.
971 971 Parts with generated data cannot be copied."""
972 972 assert not util.safehasattr(self.data, 'next')
973 973 return self.__class__(self.type, self._mandatoryparams,
974 974 self._advisoryparams, self._data, self.mandatory)
975 975
976 976 # methods used to define the part content
977 977 @property
978 978 def data(self):
979 979 return self._data
980 980
981 981 @data.setter
982 982 def data(self, data):
983 983 if self._generated is not None:
984 984 raise error.ReadOnlyPartError('part is being generated')
985 985 self._data = data
986 986
987 987 @property
988 988 def mandatoryparams(self):
989 989 # make it an immutable tuple to force people through ``addparam``
990 990 return tuple(self._mandatoryparams)
991 991
992 992 @property
993 993 def advisoryparams(self):
994 994 # make it an immutable tuple to force people through ``addparam``
995 995 return tuple(self._advisoryparams)
996 996
997 997 def addparam(self, name, value='', mandatory=True):
998 998 """add a parameter to the part
999 999
1000 1000 If 'mandatory' is set to True, the remote handler must claim support
1001 1001 for this parameter or the unbundling will be aborted.
1002 1002
1003 1003 The 'name' and 'value' cannot exceed 255 bytes each.
1004 1004 """
1005 1005 if self._generated is not None:
1006 1006 raise error.ReadOnlyPartError('part is being generated')
1007 1007 if name in self._seenparams:
1008 1008 raise ValueError('duplicated params: %s' % name)
1009 1009 self._seenparams.add(name)
1010 1010 params = self._advisoryparams
1011 1011 if mandatory:
1012 1012 params = self._mandatoryparams
1013 1013 params.append((name, value))
1014 1014
1015 1015 # methods used to generate the bundle2 stream
1016 1016 def getchunks(self, ui):
1017 1017 if self._generated is not None:
1018 1018 raise error.ProgrammingError('part can only be consumed once')
1019 1019 self._generated = False
1020 1020
1021 1021 if ui.debugflag:
1022 1022 msg = ['bundle2-output-part: "%s"' % self.type]
1023 1023 if not self.mandatory:
1024 1024 msg.append(' (advisory)')
1025 1025 nbmp = len(self.mandatoryparams)
1026 1026 nbap = len(self.advisoryparams)
1027 1027 if nbmp or nbap:
1028 1028 msg.append(' (params:')
1029 1029 if nbmp:
1030 1030 msg.append(' %i mandatory' % nbmp)
1031 1031 if nbap:
1032 1032 msg.append(' %i advisory' % nbap)
1033 1033 msg.append(')')
1034 1034 if not self.data:
1035 1035 msg.append(' empty payload')
1036 1036 elif (util.safehasattr(self.data, 'next')
1037 1037 or util.safehasattr(self.data, '__next__')):
1038 1038 msg.append(' streamed payload')
1039 1039 else:
1040 1040 msg.append(' %i bytes payload' % len(self.data))
1041 1041 msg.append('\n')
1042 1042 ui.debug(''.join(msg))
1043 1043
1044 1044 #### header
1045 1045 if self.mandatory:
1046 1046 parttype = self.type.upper()
1047 1047 else:
1048 1048 parttype = self.type.lower()
1049 1049 outdebug(ui, 'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1050 1050 ## parttype
1051 1051 header = [_pack(_fparttypesize, len(parttype)),
1052 1052 parttype, _pack(_fpartid, self.id),
1053 1053 ]
1054 1054 ## parameters
1055 1055 # count
1056 1056 manpar = self.mandatoryparams
1057 1057 advpar = self.advisoryparams
1058 1058 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1059 1059 # size
1060 1060 parsizes = []
1061 1061 for key, value in manpar:
1062 1062 parsizes.append(len(key))
1063 1063 parsizes.append(len(value))
1064 1064 for key, value in advpar:
1065 1065 parsizes.append(len(key))
1066 1066 parsizes.append(len(value))
1067 1067 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1068 1068 header.append(paramsizes)
1069 1069 # key, value
1070 1070 for key, value in manpar:
1071 1071 header.append(key)
1072 1072 header.append(value)
1073 1073 for key, value in advpar:
1074 1074 header.append(key)
1075 1075 header.append(value)
1076 1076 ## finalize header
1077 1077 try:
1078 1078 headerchunk = ''.join(header)
1079 1079 except TypeError:
1080 1080 raise TypeError(r'Found a non-bytes trying to '
1081 1081 r'build bundle part header: %r' % header)
1082 1082 outdebug(ui, 'header chunk size: %i' % len(headerchunk))
1083 1083 yield _pack(_fpartheadersize, len(headerchunk))
1084 1084 yield headerchunk
1085 1085 ## payload
1086 1086 try:
1087 1087 for chunk in self._payloadchunks():
1088 1088 outdebug(ui, 'payload chunk size: %i' % len(chunk))
1089 1089 yield _pack(_fpayloadsize, len(chunk))
1090 1090 yield chunk
1091 1091 except GeneratorExit:
1092 1092 # GeneratorExit means that nobody is listening for our
1093 1093 # results anyway, so just bail quickly rather than trying
1094 1094 # to produce an error part.
1095 1095 ui.debug('bundle2-generatorexit\n')
1096 1096 raise
1097 1097 except BaseException as exc:
1098 1098 bexc = stringutil.forcebytestr(exc)
1099 1099 # backup exception data for later
1100 1100 ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
1101 1101 % bexc)
1102 1102 tb = sys.exc_info()[2]
1103 1103 msg = 'unexpected error: %s' % bexc
1104 1104 interpart = bundlepart('error:abort', [('message', msg)],
1105 1105 mandatory=False)
1106 1106 interpart.id = 0
1107 1107 yield _pack(_fpayloadsize, -1)
1108 1108 for chunk in interpart.getchunks(ui=ui):
1109 1109 yield chunk
1110 1110 outdebug(ui, 'closing payload chunk')
1111 1111 # abort current part payload
1112 1112 yield _pack(_fpayloadsize, 0)
1113 1113 pycompat.raisewithtb(exc, tb)
1114 1114 # end of payload
1115 1115 outdebug(ui, 'closing payload chunk')
1116 1116 yield _pack(_fpayloadsize, 0)
1117 1117 self._generated = True
1118 1118
1119 1119 def _payloadchunks(self):
1120 1120 """yield chunks of a the part payload
1121 1121
1122 1122 Exists to handle the different methods to provide data to a part."""
1123 1123 # we only support fixed size data now.
1124 1124 # This will be improved in the future.
1125 1125 if (util.safehasattr(self.data, 'next')
1126 1126 or util.safehasattr(self.data, '__next__')):
1127 1127 buff = util.chunkbuffer(self.data)
1128 1128 chunk = buff.read(preferedchunksize)
1129 1129 while chunk:
1130 1130 yield chunk
1131 1131 chunk = buff.read(preferedchunksize)
1132 1132 elif len(self.data):
1133 1133 yield self.data
1134 1134
1135 1135
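# Hypothetical sketch of building a standalone part and serializing it,
# assuming a ui object (bundle20.addpart normally assigns the part id):
#
#     part = bundlepart('output', data='hello', mandatory=False)
#     part.addparam('in-reply-to', '1', mandatory=False)
#     part.id = 0
#     raw = ''.join(part.getchunks(ui=ui))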
1136 1136 flaginterrupt = -1
1137 1137
1138 1138 class interrupthandler(unpackermixin):
1139 1139 """read one part and process it with restricted capability
1140 1140
1141 1141 This allows transmitting exceptions raised on the producer side during
1142 1142 part iteration while the consumer is reading a part.
1143 1143 
1144 1144 Parts processed in this manner only have access to a ui object."""
1145 1145
1146 1146 def __init__(self, ui, fp):
1147 1147 super(interrupthandler, self).__init__(fp)
1148 1148 self.ui = ui
1149 1149
1150 1150 def _readpartheader(self):
1151 1151 """reads a part header size and return the bytes blob
1152 1152
1153 1153 returns None if empty"""
1154 1154 headersize = self._unpack(_fpartheadersize)[0]
1155 1155 if headersize < 0:
1156 1156 raise error.BundleValueError('negative part header size: %i'
1157 1157 % headersize)
1158 1158 indebug(self.ui, 'part header size: %i\n' % headersize)
1159 1159 if headersize:
1160 1160 return self._readexact(headersize)
1161 1161 return None
1162 1162
1163 1163 def __call__(self):
1164 1164
1165 1165 self.ui.debug('bundle2-input-stream-interrupt:'
1166 1166 ' opening out of band context\n')
1167 1167 indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
1168 1168 headerblock = self._readpartheader()
1169 1169 if headerblock is None:
1170 1170 indebug(self.ui, 'no part found during interruption.')
1171 1171 return
1172 1172 part = unbundlepart(self.ui, headerblock, self._fp)
1173 1173 op = interruptoperation(self.ui)
1174 1174 hardabort = False
1175 1175 try:
1176 1176 _processpart(op, part)
1177 1177 except (SystemExit, KeyboardInterrupt):
1178 1178 hardabort = True
1179 1179 raise
1180 1180 finally:
1181 1181 if not hardabort:
1182 1182 part.consume()
1183 1183 self.ui.debug('bundle2-input-stream-interrupt:'
1184 1184 ' closing out of band context\n')
1185 1185
1186 1186 class interruptoperation(object):
1187 1187 """A limited operation to be use by part handler during interruption
1188 1188
1189 1189 It only have access to an ui object.
1190 1190 """
1191 1191
1192 1192 def __init__(self, ui):
1193 1193 self.ui = ui
1194 1194 self.reply = None
1195 1195 self.captureoutput = False
1196 1196
1197 1197 @property
1198 1198 def repo(self):
1199 1199 raise error.ProgrammingError('no repo access from stream interruption')
1200 1200
1201 1201 def gettransaction(self):
1202 1202 raise TransactionUnavailable('no repo access from stream interruption')
1203 1203
1204 1204 def decodepayloadchunks(ui, fh):
1205 1205 """Reads bundle2 part payload data into chunks.
1206 1206
1207 1207 Part payload data consists of framed chunks. This function takes
1208 1208 a file handle and emits those chunks.
1209 1209 """
1210 1210 dolog = ui.configbool('devel', 'bundle2.debug')
1211 1211 debug = ui.debug
1212 1212
1213 1213 headerstruct = struct.Struct(_fpayloadsize)
1214 1214 headersize = headerstruct.size
1215 1215 unpack = headerstruct.unpack
1216 1216
1217 1217 readexactly = changegroup.readexactly
1218 1218 read = fh.read
1219 1219
1220 1220 chunksize = unpack(readexactly(fh, headersize))[0]
1221 1221 indebug(ui, 'payload chunk size: %i' % chunksize)
1222 1222
1223 1223 # changegroup.readexactly() is inlined below for performance.
1224 1224 while chunksize:
1225 1225 if chunksize >= 0:
1226 1226 s = read(chunksize)
1227 1227 if len(s) < chunksize:
1228 1228 raise error.Abort(_('stream ended unexpectedly '
1229 1229 '(got %d bytes, expected %d)') %
1230 1230 (len(s), chunksize))
1231 1231
1232 1232 yield s
1233 1233 elif chunksize == flaginterrupt:
1234 1234 # Interrupt "signal" detected. The regular stream is interrupted
1235 1235 # and a bundle2 part follows. Consume it.
1236 1236 interrupthandler(ui, fh)()
1237 1237 else:
1238 1238 raise error.BundleValueError(
1239 1239 'negative payload chunk size: %s' % chunksize)
1240 1240
1241 1241 s = read(headersize)
1242 1242 if len(s) < headersize:
1243 1243 raise error.Abort(_('stream ended unexpectedly '
1244 1244 '(got %d bytes, expected %d)') %
1245 1245 (len(s), headersize))
1246 1246
1247 1247 chunksize = unpack(s)[0]
1248 1248
1249 1249 # indebug() inlined for performance.
1250 1250 if dolog:
1251 1251 debug('bundle2-input: payload chunk size: %i\n' % chunksize)
1252 1252
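# A round-trip illustration (hypothetical helper) of the framing consumed
# above: int32 size-prefixed chunks, terminated by a zero size chunk.
def _exampledecodepayload(ui):
    import io
    framed = _pack(_fpayloadsize, 5) + b'hello' + _pack(_fpayloadsize, 0)
    return b''.join(decodepayloadchunks(ui, io.BytesIO(framed)))  # b'hello'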
1253 1253 class unbundlepart(unpackermixin):
1254 1254 """a bundle part read from a bundle"""
1255 1255
1256 1256 def __init__(self, ui, header, fp):
1257 1257 super(unbundlepart, self).__init__(fp)
1258 1258 self._seekable = (util.safehasattr(fp, 'seek') and
1259 1259 util.safehasattr(fp, 'tell'))
1260 1260 self.ui = ui
1261 1261 # unbundle state attr
1262 1262 self._headerdata = header
1263 1263 self._headeroffset = 0
1264 1264 self._initialized = False
1265 1265 self.consumed = False
1266 1266 # part data
1267 1267 self.id = None
1268 1268 self.type = None
1269 1269 self.mandatoryparams = None
1270 1270 self.advisoryparams = None
1271 1271 self.params = None
1272 1272 self.mandatorykeys = ()
1273 1273 self._readheader()
1274 1274 self._mandatory = None
1275 1275 self._pos = 0
1276 1276
1277 1277 def _fromheader(self, size):
1278 1278 """return the next <size> byte from the header"""
1279 1279 offset = self._headeroffset
1280 1280 data = self._headerdata[offset:(offset + size)]
1281 1281 self._headeroffset = offset + size
1282 1282 return data
1283 1283
1284 1284 def _unpackheader(self, format):
1285 1285 """read given format from header
1286 1286
1287 1287 This automatically computes the size of the format to read."""
1288 1288 data = self._fromheader(struct.calcsize(format))
1289 1289 return _unpack(format, data)
1290 1290
1291 1291 def _initparams(self, mandatoryparams, advisoryparams):
1292 1292 """internal function to setup all logic related parameters"""
1293 1293 # make it read only to prevent people touching it by mistake.
1294 1294 self.mandatoryparams = tuple(mandatoryparams)
1295 1295 self.advisoryparams = tuple(advisoryparams)
1296 1296 # user friendly UI
1297 1297 self.params = util.sortdict(self.mandatoryparams)
1298 1298 self.params.update(self.advisoryparams)
1299 1299 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1300 1300
1301 1301 def _readheader(self):
1302 1302 """read the header and setup the object"""
1303 1303 typesize = self._unpackheader(_fparttypesize)[0]
1304 1304 self.type = self._fromheader(typesize)
1305 1305 indebug(self.ui, 'part type: "%s"' % self.type)
1306 1306 self.id = self._unpackheader(_fpartid)[0]
1307 1307 indebug(self.ui, 'part id: "%s"' % pycompat.bytestr(self.id))
1308 1308 # extract mandatory bit from type
1309 1309 self.mandatory = (self.type != self.type.lower())
1310 1310 self.type = self.type.lower()
1311 1311 ## reading parameters
1312 1312 # param count
1313 1313 mancount, advcount = self._unpackheader(_fpartparamcount)
1314 1314 indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
1315 1315 # param size
1316 1316 fparamsizes = _makefpartparamsizes(mancount + advcount)
1317 1317 paramsizes = self._unpackheader(fparamsizes)
1318 1318 # make it a list of couple again
1319 1319 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1320 1320 # split mandatory from advisory
1321 1321 mansizes = paramsizes[:mancount]
1322 1322 advsizes = paramsizes[mancount:]
1323 1323 # retrieve param value
1324 1324 manparams = []
1325 1325 for key, value in mansizes:
1326 1326 manparams.append((self._fromheader(key), self._fromheader(value)))
1327 1327 advparams = []
1328 1328 for key, value in advsizes:
1329 1329 advparams.append((self._fromheader(key), self._fromheader(value)))
1330 1330 self._initparams(manparams, advparams)
1331 1331 ## part payload
1332 1332 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1333 1333 # we read the data, tell it
1334 1334 self._initialized = True
1335 1335
1336 1336 def _payloadchunks(self):
1337 1337 """Generator of decoded chunks in the payload."""
1338 1338 return decodepayloadchunks(self.ui, self._fp)
1339 1339
1340 1340 def consume(self):
1341 1341 """Read the part payload until completion.
1342 1342
1343 1343 By consuming the part data, the underlying stream read offset will
1344 1344 be advanced to the next part (or end of stream).
1345 1345 """
1346 1346 if self.consumed:
1347 1347 return
1348 1348
1349 1349 chunk = self.read(32768)
1350 1350 while chunk:
1351 1351 self._pos += len(chunk)
1352 1352 chunk = self.read(32768)
1353 1353
1354 1354 def read(self, size=None):
1355 1355 """read payload data"""
1356 1356 if not self._initialized:
1357 1357 self._readheader()
1358 1358 if size is None:
1359 1359 data = self._payloadstream.read()
1360 1360 else:
1361 1361 data = self._payloadstream.read(size)
1362 1362 self._pos += len(data)
1363 1363 if size is None or len(data) < size:
1364 1364 if not self.consumed and self._pos:
1365 1365 self.ui.debug('bundle2-input-part: total payload size %i\n'
1366 1366 % self._pos)
1367 1367 self.consumed = True
1368 1368 return data
1369 1369
1370 1370 class seekableunbundlepart(unbundlepart):
1371 1371 """A bundle2 part in a bundle that is seekable.
1372 1372
1373 1373 Regular ``unbundlepart`` instances can only be read once. This class
1374 1374 extends ``unbundlepart`` to enable bi-directional seeking within the
1375 1375 part.
1376 1376
1377 1377 Bundle2 part data consists of framed chunks. Offsets when seeking
1378 1378 refer to the decoded data, not the offsets in the underlying bundle2
1379 1379 stream.
1380 1380
1381 1381 To facilitate quickly seeking within the decoded data, instances of this
1382 1382 class maintain a mapping between offsets in the underlying stream and
1383 1383 the decoded payload. This mapping will consume memory in proportion
1384 1384 to the number of chunks within the payload (which almost certainly
1385 1385 increases in proportion with the size of the part).
1386 1386 """
1387 1387 def __init__(self, ui, header, fp):
1388 1388 # (payload, file) offsets for chunk starts.
1389 1389 self._chunkindex = []
1390 1390
1391 1391 super(seekableunbundlepart, self).__init__(ui, header, fp)
1392 1392
1393 1393 def _payloadchunks(self, chunknum=0):
1394 1394 '''seek to specified chunk and start yielding data'''
1395 1395 if len(self._chunkindex) == 0:
1396 1396 assert chunknum == 0, 'Must start with chunk 0'
1397 1397 self._chunkindex.append((0, self._tellfp()))
1398 1398 else:
1399 1399 assert chunknum < len(self._chunkindex), \
1400 1400 'Unknown chunk %d' % chunknum
1401 1401 self._seekfp(self._chunkindex[chunknum][1])
1402 1402
1403 1403 pos = self._chunkindex[chunknum][0]
1404 1404
1405 1405 for chunk in decodepayloadchunks(self.ui, self._fp):
1406 1406 chunknum += 1
1407 1407 pos += len(chunk)
1408 1408 if chunknum == len(self._chunkindex):
1409 1409 self._chunkindex.append((pos, self._tellfp()))
1410 1410
1411 1411 yield chunk
1412 1412
1413 1413 def _findchunk(self, pos):
1414 1414 '''for a given payload position, return a chunk number and offset'''
1415 1415 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1416 1416 if ppos == pos:
1417 1417 return chunk, 0
1418 1418 elif ppos > pos:
1419 1419 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1420 1420 raise ValueError('Unknown chunk')
1421 1421
1422 1422 def tell(self):
1423 1423 return self._pos
1424 1424
1425 1425 def seek(self, offset, whence=os.SEEK_SET):
1426 1426 if whence == os.SEEK_SET:
1427 1427 newpos = offset
1428 1428 elif whence == os.SEEK_CUR:
1429 1429 newpos = self._pos + offset
1430 1430 elif whence == os.SEEK_END:
1431 1431 if not self.consumed:
1432 1432 # Can't use self.consume() here because it advances self._pos.
1433 1433 chunk = self.read(32768)
1434 1434 while chunk:
1435 1435 chunk = self.read(32768)
1436 1436 newpos = self._chunkindex[-1][0] - offset
1437 1437 else:
1438 1438 raise ValueError('Unknown whence value: %r' % (whence,))
1439 1439
1440 1440 if newpos > self._chunkindex[-1][0] and not self.consumed:
1441 1441 # Can't use self.consume() here because it advances self._pos.
1442 1442 chunk = self.read(32768)
1443 1443 while chunk:
1444 1444 chunk = self.read(32768)
1445 1445
1446 1446 if not 0 <= newpos <= self._chunkindex[-1][0]:
1447 1447 raise ValueError('Offset out of range')
1448 1448
1449 1449 if self._pos != newpos:
1450 1450 chunk, internaloffset = self._findchunk(newpos)
1451 1451 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1452 1452 adjust = self.read(internaloffset)
1453 1453 if len(adjust) != internaloffset:
1454 1454 raise error.Abort(_('Seek failed'))
1455 1455 self._pos = newpos
1456 1456
1457 1457 def _seekfp(self, offset, whence=0):
1458 1458 """move the underlying file pointer
1459 1459
1460 1460 This method is meant for internal usage by the bundle2 protocol only.
1461 1461 It directly manipulates the low-level stream, including bundle2-level
1462 1462 instructions.
1463 1463
1464 1464 Do not use it to implement higher-level logic or methods."""
1465 1465 if self._seekable:
1466 1466 return self._fp.seek(offset, whence)
1467 1467 else:
1468 1468 raise NotImplementedError(_('File pointer is not seekable'))
1469 1469
1470 1470 def _tellfp(self):
1471 1471 """return the file offset, or None if file is not seekable
1472 1472
1473 1473 This method is meant for internal usage by the bundle2 protocol only.
1474 1474 It directly manipulates the low-level stream, including bundle2-level
1475 1475 instructions.
1476 1476
1477 1477 Do not use it to implement higher-level logic or methods."""
1478 1478 if self._seekable:
1479 1479 try:
1480 1480 return self._fp.tell()
1481 1481 except IOError as e:
1482 1482 if e.errno == errno.ESPIPE:
1483 1483 self._seekable = False
1484 1484 else:
1485 1485 raise
1486 1486 return None
1487 1487
1488 1488 # These are only the static capabilities.
1489 1489 # Check the 'getrepocaps' function for the rest.
1490 1490 capabilities = {'HG20': (),
1491 1491 'bookmarks': (),
1492 1492 'error': ('abort', 'unsupportedcontent', 'pushraced',
1493 1493 'pushkey'),
1494 1494 'listkeys': (),
1495 1495 'pushkey': (),
1496 1496 'digests': tuple(sorted(util.DIGESTS.keys())),
1497 1497 'remote-changegroup': ('http', 'https'),
1498 1498 'hgtagsfnodes': (),
1499 1499 'rev-branch-cache': (),
1500 1500 'phases': ('heads',),
1501 1501 'stream': ('v2',),
1502 1502 }
1503 1503
1504 1504 def getrepocaps(repo, allowpushback=False, role=None):
1505 1505 """return the bundle2 capabilities for a given repo
1506 1506
1507 1507 Exists to allow extensions (like evolution) to mutate the capabilities.
1508 1508
1509 1509 The returned value is used for servers advertising their capabilities as
1510 1510 well as clients advertising their capabilities to servers as part of
1511 1511 bundle2 requests. The ``role`` argument specifies which is which.
1512 1512 """
1513 1513 if role not in ('client', 'server'):
1514 1514 raise error.ProgrammingError('role argument must be client or server')
1515 1515
1516 1516 caps = capabilities.copy()
1517 1517 caps['changegroup'] = tuple(sorted(
1518 1518 changegroup.supportedincomingversions(repo)))
1519 1519 if obsolete.isenabled(repo, obsolete.exchangeopt):
1520 1520 supportedformat = tuple('V%i' % v for v in obsolete.formats)
1521 1521 caps['obsmarkers'] = supportedformat
1522 1522 if allowpushback:
1523 1523 caps['pushback'] = ()
1524 1524 cpmode = repo.ui.config('server', 'concurrent-push-mode')
1525 1525 if cpmode == 'check-related':
1526 1526 caps['checkheads'] = ('related',)
1527 1527 if 'phases' in repo.ui.configlist('devel', 'legacy.exchange'):
1528 1528 caps.pop('phases')
1529 1529
1530 1530 # Don't advertise stream clone support in server mode if not configured.
1531 1531 if role == 'server':
1532 1532 streamsupported = repo.ui.configbool('server', 'uncompressed',
1533 1533 untrusted=True)
1534 1534 featuresupported = repo.ui.configbool('experimental', 'bundle2.stream')
1535 1535
1536 1536 if not streamsupported or not featuresupported:
1537 1537 caps.pop('stream')
1538 1538 # Else always advertise support on client, because payload support
1539 1539 # should always be advertised.
1540 1540
1541 1541 return caps
1542 1542
1543 1543 def bundle2caps(remote):
1544 1544 """return the bundle capabilities of a peer as dict"""
1545 1545 raw = remote.capable('bundle2')
1546 1546 if not raw and raw != '':
1547 1547 return {}
1548 1548 capsblob = urlreq.unquote(remote.capable('bundle2'))
1549 1549 return decodecaps(capsblob)
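A minimal usage sketch for the helper above (assumes a connected `remote` peer object in scope; illustrative only, not part of the original code):

    # Does the peer understand bundle2 'phases' parts?
    b2caps = bundle2caps(remote)
    supportsphases = 'phases' in b2caps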
1550 1550
1551 1551 def obsmarkersversion(caps):
1552 1552 """extract the list of supported obsmarkers versions from a bundle2caps dict
1553 1553 """
1554 1554 obscaps = caps.get('obsmarkers', ())
1555 1555 return [int(c[1:]) for c in obscaps if c.startswith('V')]
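For illustration, the helper above simply parses 'V<n>' capability strings back into integers (values hypothetical):

    assert obsmarkersversion({'obsmarkers': ('V0', 'V1')}) == [0, 1]
    assert obsmarkersversion({}) == []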
1556 1556
1557 1557 def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
1558 1558 vfs=None, compression=None, compopts=None):
1559 1559 if bundletype.startswith('HG10'):
1560 1560 cg = changegroup.makechangegroup(repo, outgoing, '01', source)
1561 1561 return writebundle(ui, cg, filename, bundletype, vfs=vfs,
1562 1562 compression=compression, compopts=compopts)
1563 1563 elif not bundletype.startswith('HG20'):
1564 1564 raise error.ProgrammingError('unknown bundle type: %s' % bundletype)
1565 1565
1566 1566 caps = {}
1567 1567 if 'obsolescence' in opts:
1568 1568 caps['obsmarkers'] = ('V1',)
1569 1569 bundle = bundle20(ui, caps)
1570 1570 bundle.setcompression(compression, compopts)
1571 1571 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1572 1572 chunkiter = bundle.getchunks()
1573 1573
1574 1574 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1575 1575
1576 1576 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1577 1577 # We should eventually reconcile this logic with the one behind
1578 1578 # 'exchange.getbundle2partsgenerator'.
1579 1579 #
1580 1580 # The types of input from 'getbundle' and 'writenewbundle' are a bit
1581 1581 # different right now. So we keep them separated for now for the sake of
1582 1582 # simplicity.
1583 1583
1584 1584 # we might not always want a changegroup in such a bundle, for example in
1585 1585 # stream bundles
1586 1586 if opts.get('changegroup', True):
1587 1587 cgversion = opts.get('cg.version')
1588 1588 if cgversion is None:
1589 1589 cgversion = changegroup.safeversion(repo)
1590 1590 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1591 1591 part = bundler.newpart('changegroup', data=cg.getchunks())
1592 1592 part.addparam('version', cg.version)
1593 1593 if 'clcount' in cg.extras:
1594 1594 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1595 1595 mandatory=False)
1596 1596 if opts.get('phases') and repo.revs('%ln and secret()',
1597 1597 outgoing.missingheads):
1598 1598 part.addparam('targetphase', '%d' % phases.secret, mandatory=False)
1599 1599
1600 1600 if opts.get('streamv2', False):
1601 1601 addpartbundlestream2(bundler, repo, stream=True)
1602 1602
1603 1603 if opts.get('tagsfnodescache', True):
1604 1604 addparttagsfnodescache(repo, bundler, outgoing)
1605 1605
1606 1606 if opts.get('revbranchcache', True):
1607 1607 addpartrevbranchcache(repo, bundler, outgoing)
1608 1608
1609 1609 if opts.get('obsolescence', False):
1610 1610 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1611 1611 buildobsmarkerspart(bundler, obsmarkers)
1612 1612
1613 1613 if opts.get('phases', False):
1614 1614 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1615 1615 phasedata = phases.binaryencode(headsbyphase)
1616 1616 bundler.newpart('phase-heads', data=phasedata)
1617 1617
1618 1618 def addparttagsfnodescache(repo, bundler, outgoing):
1619 1619 # we include the tags fnode cache for the bundle changeset
1620 1620 # (as an optional part)
1621 1621 cache = tags.hgtagsfnodescache(repo.unfiltered())
1622 1622 chunks = []
1623 1623
1624 1624 # .hgtags fnodes are only relevant for head changesets. While we could
1625 1625 # transfer values for all known nodes, there will likely be little to
1626 1626 # no benefit.
1627 1627 #
1628 1628 # We don't bother using a generator to produce output data because
1629 1629 # a) we only have 40 bytes per head and even esoteric numbers of heads
1630 1630 # consume little memory (1M heads is 40MB) b) we don't want to send the
1631 1631 # part if we don't have entries and knowing if we have entries requires
1632 1632 # cache lookups.
1633 1633 for node in outgoing.missingheads:
1634 1634 # Don't compute missing, as this may slow down serving.
1635 1635 fnode = cache.getfnode(node, computemissing=False)
1636 1636 if fnode is not None:
1637 1637 chunks.extend([node, fnode])
1638 1638
1639 1639 if chunks:
1640 1640 bundler.newpart('hgtagsfnodes', data=''.join(chunks))
1641 1641
1642 1642 def addpartrevbranchcache(repo, bundler, outgoing):
1643 1643 # we include the rev branch cache for the bundle changeset
1644 1644 # (as an optional part)
1645 1645 cache = repo.revbranchcache()
1646 1646 cl = repo.unfiltered().changelog
1647 1647 branchesdata = collections.defaultdict(lambda: (set(), set()))
1648 1648 for node in outgoing.missing:
1649 1649 branch, close = cache.branchinfo(cl.rev(node))
1650 1650 branchesdata[branch][close].add(node)
1651 1651
1652 1652 def generate():
1653 1653 for branch, (nodes, closed) in sorted(branchesdata.items()):
1654 1654 utf8branch = encoding.fromlocal(branch)
1655 1655 yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
1656 1656 yield utf8branch
1657 1657 for n in sorted(nodes):
1658 1658 yield n
1659 1659 for n in sorted(closed):
1660 1660 yield n
1661 1661
1662 1662 bundler.newpart('cache:rev-branch-cache', data=generate())
1663 1663
1664 1664 def _formatrequirementsspec(requirements):
1665 1665 return urlreq.quote(','.join(sorted(requirements)))
1666 1666
1667 1667 def _formatrequirementsparams(requirements):
1668 1668 requirements = _formatrequirementsspec(requirements)
1669 1669 params = "%s%s" % (urlreq.quote("requirements="), requirements)
1670 1670 return params
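A quick sketch of what the two helpers above produce (requirement names are illustrative); note that urlquoting encodes both the ',' separator and the '=' sign:

    assert _formatrequirementsspec(['revlogv1', 'store']) == 'revlogv1%2Cstore'
    assert (_formatrequirementsparams(['revlogv1', 'store'])
            == 'requirements%3Drevlogv1%2Cstore')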
1671 1671
1672 1672 def addpartbundlestream2(bundler, repo, **kwargs):
1673 1673 if not kwargs.get('stream', False):
1674 1674 return
1675 1675
1676 1676 if not streamclone.allowservergeneration(repo):
1677 1677 raise error.Abort(_('stream data requested but server does not allow '
1678 1678 'this feature'),
1679 1679 hint=_('well-behaved clients should not be '
1680 1680 'requesting stream data from servers not '
1681 1681 'advertising it; the client may be buggy'))
1682 1682
1683 1683 # Stream clones don't compress well. And compression undermines a
1684 1684 # goal of stream clones, which is to be fast. Communicate the desire
1685 1685 # to avoid compression to consumers of the bundle.
1686 1686 bundler.prefercompressed = False
1687 1687
1688 1688 filecount, bytecount, it = streamclone.generatev2(repo)
1689 1689 requirements = _formatrequirementsspec(repo.requirements)
1690 1690 part = bundler.newpart('stream2', data=it)
1691 1691 part.addparam('bytecount', '%d' % bytecount, mandatory=True)
1692 1692 part.addparam('filecount', '%d' % filecount, mandatory=True)
1693 1693 part.addparam('requirements', requirements, mandatory=True)
1694 1694
1695 1695 def buildobsmarkerspart(bundler, markers):
1696 1696 """add an obsmarker part to the bundler with <markers>
1697 1697
1698 1698 No part is created if markers is empty.
1699 1699 Raises ValueError if the bundler doesn't support any known obsmarker format.
1700 1700 """
1701 1701 if not markers:
1702 1702 return None
1703 1703
1704 1704 remoteversions = obsmarkersversion(bundler.capabilities)
1705 1705 version = obsolete.commonversion(remoteversions)
1706 1706 if version is None:
1707 1707 raise ValueError('bundler does not support common obsmarker format')
1708 1708 stream = obsolete.encodemarkers(markers, True, version=version)
1709 1709 return bundler.newpart('obsmarkers', data=stream)
1710 1710
1711 1711 def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
1712 1712 compopts=None):
1713 1713 """Write a bundle file and return its filename.
1714 1714
1715 1715 Existing files will not be overwritten.
1716 1716 If no filename is specified, a temporary file is created.
1717 1717 bz2 compression can be turned off.
1718 1718 The bundle file will be deleted in case of errors.
1719 1719 """
1720 1720
1721 1721 if bundletype == "HG20":
1722 1722 bundle = bundle20(ui)
1723 1723 bundle.setcompression(compression, compopts)
1724 1724 part = bundle.newpart('changegroup', data=cg.getchunks())
1725 1725 part.addparam('version', cg.version)
1726 1726 if 'clcount' in cg.extras:
1727 1727 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1728 1728 mandatory=False)
1729 1729 chunkiter = bundle.getchunks()
1730 1730 else:
1731 1731 # compression argument is only for the bundle2 case
1732 1732 assert compression is None
1733 1733 if cg.version != '01':
1734 1734 raise error.Abort(_('old bundle types only support v1 '
1735 1735 'changegroups'))
1736 1736 header, comp = bundletypes[bundletype]
1737 1737 if comp not in util.compengines.supportedbundletypes:
1738 1738 raise error.Abort(_('unknown stream compression type: %s')
1739 1739 % comp)
1740 1740 compengine = util.compengines.forbundletype(comp)
1741 1741 def chunkiter():
1742 1742 yield header
1743 1743 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1744 1744 yield chunk
1745 1745 chunkiter = chunkiter()
1746 1746
1747 1747 # parse the changegroup data, otherwise we will block
1748 1748 # in case of sshrepo because we don't know the end of the stream
1749 1749 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1750 1750
1751 1751 def combinechangegroupresults(op):
1752 1752 """logic to combine 0 or more addchangegroup results into one"""
1753 1753 results = [r.get('return', 0)
1754 1754 for r in op.records['changegroup']]
1755 1755 changedheads = 0
1756 1756 result = 1
1757 1757 for ret in results:
1758 1758 # If any changegroup result is 0, return 0
1759 1759 if ret == 0:
1760 1760 result = 0
1761 1761 break
1762 1762 if ret < -1:
1763 1763 changedheads += ret + 1
1764 1764 elif ret > 1:
1765 1765 changedheads += ret - 1
1766 1766 if changedheads > 0:
1767 1767 result = 1 + changedheads
1768 1768 elif changedheads < 0:
1769 1769 result = -1 + changedheads
1770 1770 return result
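To make the combination rule concrete, a hedged worked example with plain integers (two pushes adding one and two heads respectively):

    results = [2, 3]
    changedheads = sum(r - 1 for r in results if r > 1)   # (2-1) + (3-1) = 3
    assert 1 + changedheads == 4   # matches combinechangegroupresults()
    # a single 0 anywhere in results would short-circuit the result to 0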
1771 1771
1772 1772 @parthandler('changegroup', ('version', 'nbchanges', 'treemanifest',
1773 1773 'targetphase'))
1774 1774 def handlechangegroup(op, inpart):
1775 1775 """apply a changegroup part on the repo
1776 1776
1777 1777 This is a very early implementation that will be massively reworked before
1778 1778 being inflicted on any end-user.
1779 1779 """
1780 1780 tr = op.gettransaction()
1781 1781 unpackerversion = inpart.params.get('version', '01')
1782 1782 # We should raise an appropriate exception here
1783 1783 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1784 1784 # the source and url passed here are overwritten by the ones contained in
1785 1785 # the transaction.hookargs argument. So 'bundle2' is a placeholder
1786 1786 nbchangesets = None
1787 1787 if 'nbchanges' in inpart.params:
1788 1788 nbchangesets = int(inpart.params.get('nbchanges'))
1789 1789 if ('treemanifest' in inpart.params and
1790 1790 'treemanifest' not in op.repo.requirements):
1791 1791 if len(op.repo.changelog) != 0:
1792 1792 raise error.Abort(_(
1793 1793 "bundle contains tree manifests, but local repo is "
1794 1794 "non-empty and does not use tree manifests"))
1795 1795 op.repo.requirements.add('treemanifest')
1796 1796 op.repo._applyopenerreqs()
1797 1797 op.repo._writerequirements()
1798 1798 extrakwargs = {}
1799 1799 targetphase = inpart.params.get('targetphase')
1800 1800 if targetphase is not None:
1801 1801 extrakwargs[r'targetphase'] = int(targetphase)
1802 1802 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2',
1803 1803 expectedtotal=nbchangesets, **extrakwargs)
1804 1804 if op.reply is not None:
1805 1805 # This is definitely not the final form of this
1806 1806 # return. But one needs to start somewhere.
1807 1807 part = op.reply.newpart('reply:changegroup', mandatory=False)
1808 1808 part.addparam(
1809 1809 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1810 1810 part.addparam('return', '%i' % ret, mandatory=False)
1811 1811 assert not inpart.read()
1812 1812
1813 1813 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1814 1814 ['digest:%s' % k for k in util.DIGESTS.keys()])
1815 1815 @parthandler('remote-changegroup', _remotechangegroupparams)
1816 1816 def handleremotechangegroup(op, inpart):
1817 1817 """apply a bundle10 on the repo, given a url and validation information
1818 1818
1819 1819 All the information about the remote bundle to import is given as
1820 1820 parameters. The parameters include:
1821 1821 - url: the url to the bundle10.
1822 1822 - size: the bundle10 file size. It is used to validate what was
1823 1823 retrieved by the client matches the server knowledge about the bundle.
1824 1824 - digests: a space separated list of the digest types provided as
1825 1825 parameters.
1826 1826 - digest:<digest-type>: the hexadecimal representation of the digest with
1827 1827 that name. Like the size, it is used to validate what was retrieved by
1828 1828 the client matches what the server knows about the bundle.
1829 1829
1830 1830 When multiple digest types are given, all of them are checked.
1831 1831 """
1832 1832 try:
1833 1833 raw_url = inpart.params['url']
1834 1834 except KeyError:
1835 1835 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1836 1836 parsed_url = util.url(raw_url)
1837 1837 if parsed_url.scheme not in capabilities['remote-changegroup']:
1838 1838 raise error.Abort(_('remote-changegroup does not support %s urls') %
1839 1839 parsed_url.scheme)
1840 1840
1841 1841 try:
1842 1842 size = int(inpart.params['size'])
1843 1843 except ValueError:
1844 1844 raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
1845 1845 % 'size')
1846 1846 except KeyError:
1847 1847 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1848 1848
1849 1849 digests = {}
1850 1850 for typ in inpart.params.get('digests', '').split():
1851 1851 param = 'digest:%s' % typ
1852 1852 try:
1853 1853 value = inpart.params[param]
1854 1854 except KeyError:
1855 1855 raise error.Abort(_('remote-changegroup: missing "%s" param') %
1856 1856 param)
1857 1857 digests[typ] = value
1858 1858
1859 1859 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1860 1860
1861 1861 tr = op.gettransaction()
1862 1862 from . import exchange
1863 1863 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1864 1864 if not isinstance(cg, changegroup.cg1unpacker):
1865 1865 raise error.Abort(_('%s: not a bundle version 1.0') %
1866 1866 util.hidepassword(raw_url))
1867 1867 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2')
1868 1868 if op.reply is not None:
1869 1869 # This is definitely not the final form of this
1870 1870 # return. But one needs to start somewhere.
1871 1871 part = op.reply.newpart('reply:changegroup')
1872 1872 part.addparam(
1873 1873 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1874 1874 part.addparam('return', '%i' % ret, mandatory=False)
1875 1875 try:
1876 1876 real_part.validate()
1877 1877 except error.Abort as e:
1878 1878 raise error.Abort(_('bundle at %s is corrupted:\n%s') %
1879 1879 (util.hidepassword(raw_url), str(e)))
1880 1880 assert not inpart.read()
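The emitting side is not shown here; a hypothetical server pointing clients at a pre-generated bundle10 could build such a part roughly as follows (url, size, and digest values are made up):

    part = bundler.newpart('remote-changegroup')
    part.addparam('url', 'https://example.com/pregenerated.hg')
    part.addparam('size', '123456')
    part.addparam('digests', 'sha1')
    part.addparam('digest:sha1', '0123456789abcdef0123456789abcdef01234567')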
1881 1881
1882 1882 @parthandler('reply:changegroup', ('return', 'in-reply-to'))
1883 1883 def handlereplychangegroup(op, inpart):
1884 1884 ret = int(inpart.params['return'])
1885 1885 replyto = int(inpart.params['in-reply-to'])
1886 1886 op.records.add('changegroup', {'return': ret}, replyto)
1887 1887
1888 1888 @parthandler('check:bookmarks')
1889 1889 def handlecheckbookmarks(op, inpart):
1890 1890 """check location of bookmarks
1891 1891
1892 1892 This part is to be used to detect push races regarding bookmarks; it
1893 1893 contains binary encoded (bookmark, node) tuples. If the local state does
1894 1894 not match the one in the part, a PushRaced exception is raised
1895 1895 """
1896 1896 bookdata = bookmarks.binarydecode(inpart)
1897 1897
1898 1898 msgstandard = ('repository changed while pushing - please try again '
1899 1899 '(bookmark "%s" move from %s to %s)')
1900 1900 msgmissing = ('repository changed while pushing - please try again '
1901 1901 '(bookmark "%s" is missing, expected %s)')
1902 1902 msgexist = ('repository changed while pushing - please try again '
1903 1903 '(bookmark "%s" set on %s, expected missing)')
1904 1904 for book, node in bookdata:
1905 1905 currentnode = op.repo._bookmarks.get(book)
1906 1906 if currentnode != node:
1907 1907 if node is None:
1908 1908 finalmsg = msgexist % (book, nodemod.short(currentnode))
1909 1909 elif currentnode is None:
1910 1910 finalmsg = msgmissing % (book, nodemod.short(node))
1911 1911 else:
1912 1912 finalmsg = msgstandard % (book, nodemod.short(node),
1913 1913 nodemod.short(currentnode))
1914 1914 raise error.PushRaced(finalmsg)
1915 1915
1916 1916 @parthandler('check:heads')
1917 1917 def handlecheckheads(op, inpart):
1918 1918 """check that the heads of the repo did not change
1919 1919
1920 1920 This is used to detect a push race when using unbundle.
1921 1921 This replaces the "heads" argument of unbundle."""
1922 1922 h = inpart.read(20)
1923 1923 heads = []
1924 1924 while len(h) == 20:
1925 1925 heads.append(h)
1926 1926 h = inpart.read(20)
1927 1927 assert not h
1928 1928 # Trigger a transaction so that we are guaranteed to have the lock now.
1929 1929 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1930 1930 op.gettransaction()
1931 1931 if sorted(heads) != sorted(op.repo.heads()):
1932 1932 raise error.PushRaced('repository changed while pushing - '
1933 1933 'please try again')
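For reference, the sending side simply concatenates the 20-byte head nodes; a hedged sketch (assumes a `bundler` and the locally-known `remoteheads` list):

    bundler.newpart('check:heads', data=iter(remoteheads))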
1934 1934
1935 1935 @parthandler('check:updated-heads')
1936 1936 def handlecheckupdatedheads(op, inpart):
1937 1937 """check for race on the heads touched by a push
1938 1938
1939 1939 This is similar to 'check:heads' but focuses on the heads actually updated
1940 1940 during the push. If other activity happens on unrelated heads, it is
1941 1941 ignored.
1942 1942
1943 1943 This allows servers with high traffic to avoid push contention as long as
1944 1944 only unrelated parts of the graph are involved.
1945 1945 h = inpart.read(20)
1946 1946 heads = []
1947 1947 while len(h) == 20:
1948 1948 heads.append(h)
1949 1949 h = inpart.read(20)
1950 1950 assert not h
1951 1951 # trigger a transaction so that we are guaranteed to have the lock now.
1952 1952 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1953 1953 op.gettransaction()
1954 1954
1955 1955 currentheads = set()
1956 1956 for ls in op.repo.branchmap().itervalues():
1957 1957 currentheads.update(ls)
1958 1958
1959 1959 for h in heads:
1960 1960 if h not in currentheads:
1961 1961 raise error.PushRaced('repository changed while pushing - '
1962 1962 'please try again')
1963 1963
1964 1964 @parthandler('check:phases')
1965 1965 def handlecheckphases(op, inpart):
1966 1966 """check that phase boundaries of the repository did not change
1967 1967
1968 1968 This is used to detect a push race.
1969 1969 """
1970 1970 phasetonodes = phases.binarydecode(inpart)
1971 1971 unfi = op.repo.unfiltered()
1972 1972 cl = unfi.changelog
1973 1973 phasecache = unfi._phasecache
1974 1974 msg = ('repository changed while pushing - please try again '
1975 1975 '(%s is %s expected %s)')
1976 1976 for expectedphase, nodes in enumerate(phasetonodes):
1977 1977 for n in nodes:
1978 1978 actualphase = phasecache.phase(unfi, cl.rev(n))
1979 1979 if actualphase != expectedphase:
1980 1980 finalmsg = msg % (nodemod.short(n),
1981 1981 phases.phasenames[actualphase],
1982 1982 phases.phasenames[expectedphase])
1983 1983 raise error.PushRaced(finalmsg)
1984 1984
1985 1985 @parthandler('output')
1986 1986 def handleoutput(op, inpart):
1987 1987 """forward output captured on the server to the client"""
1988 1988 for line in inpart.read().splitlines():
1989 1989 op.ui.status(_('remote: %s\n') % line)
1990 1990
1991 1991 @parthandler('replycaps')
1992 1992 def handlereplycaps(op, inpart):
1993 1993 """Notify that a reply bundle should be created
1994 1994
1995 1995 The payload contains the capabilities information for the reply"""
1996 1996 caps = decodecaps(inpart.read())
1997 1997 if op.reply is None:
1998 1998 op.reply = bundle20(op.ui, caps)
1999 1999
2000 2000 class AbortFromPart(error.Abort):
2001 2001 """Sub-class of Abort that denotes an error from a bundle2 part."""
2002 2002
2003 2003 @parthandler('error:abort', ('message', 'hint'))
2004 2004 def handleerrorabort(op, inpart):
2005 2005 """Used to transmit abort error over the wire"""
2006 2006 raise AbortFromPart(inpart.params['message'],
2007 2007 hint=inpart.params.get('hint'))
2008 2008
2009 2009 @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
2010 2010 'in-reply-to'))
2011 2011 def handleerrorpushkey(op, inpart):
2012 2012 """Used to transmit failure of a mandatory pushkey over the wire"""
2013 2013 kwargs = {}
2014 2014 for name in ('namespace', 'key', 'new', 'old', 'ret'):
2015 2015 value = inpart.params.get(name)
2016 2016 if value is not None:
2017 2017 kwargs[name] = value
2018 2018 raise error.PushkeyFailed(inpart.params['in-reply-to'],
2019 2019 **pycompat.strkwargs(kwargs))
2020 2020
2021 2021 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
2022 2022 def handleerrorunsupportedcontent(op, inpart):
2023 2023 """Used to transmit unknown content error over the wire"""
2024 2024 kwargs = {}
2025 2025 parttype = inpart.params.get('parttype')
2026 2026 if parttype is not None:
2027 2027 kwargs['parttype'] = parttype
2028 2028 params = inpart.params.get('params')
2029 2029 if params is not None:
2030 2030 kwargs['params'] = params.split('\0')
2031 2031
2032 2032 raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))
2033 2033
2034 2034 @parthandler('error:pushraced', ('message',))
2035 2035 def handleerrorpushraced(op, inpart):
2036 2036 """Used to transmit push race error over the wire"""
2037 2037 raise error.ResponseError(_('push failed:'), inpart.params['message'])
2038 2038
2039 2039 @parthandler('listkeys', ('namespace',))
2040 2040 def handlelistkeys(op, inpart):
2041 2041 """retrieve pushkey namespace content stored in a bundle2"""
2042 2042 namespace = inpart.params['namespace']
2043 2043 r = pushkey.decodekeys(inpart.read())
2044 2044 op.records.add('listkeys', (namespace, r))
2045 2045
2046 2046 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
2047 2047 def handlepushkey(op, inpart):
2048 2048 """process a pushkey request"""
2049 2049 dec = pushkey.decode
2050 2050 namespace = dec(inpart.params['namespace'])
2051 2051 key = dec(inpart.params['key'])
2052 2052 old = dec(inpart.params['old'])
2053 2053 new = dec(inpart.params['new'])
2054 2054 # Grab the transaction to ensure that we have the lock before performing the
2055 2055 # pushkey.
2056 2056 if op.ui.configbool('experimental', 'bundle2lazylocking'):
2057 2057 op.gettransaction()
2058 2058 ret = op.repo.pushkey(namespace, key, old, new)
2059 2059 record = {'namespace': namespace,
2060 2060 'key': key,
2061 2061 'old': old,
2062 2062 'new': new}
2063 2063 op.records.add('pushkey', record)
2064 2064 if op.reply is not None:
2065 2065 rpart = op.reply.newpart('reply:pushkey')
2066 2066 rpart.addparam(
2067 2067 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
2068 2068 rpart.addparam('return', '%i' % ret, mandatory=False)
2069 2069 if inpart.mandatory and not ret:
2070 2070 kwargs = {}
2071 2071 for key in ('namespace', 'key', 'new', 'old', 'ret'):
2072 2072 if key in inpart.params:
2073 2073 kwargs[key] = inpart.params[key]
2074 2074 raise error.PushkeyFailed(partid='%d' % inpart.id,
2075 2075 **pycompat.strkwargs(kwargs))
2076 2076
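For orientation, the emitting side sends the pushkey arguments as part parameters; a hedged sketch with made-up values (mirroring what the handler above decodes):

    part = bundler.newpart('pushkey')
    part.addparam('namespace', 'bookmarks')
    part.addparam('key', '@')
    part.addparam('old', '')
    part.addparam('new', 'aa' * 20)   # 40-char hex of the new node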
2077 2077 @parthandler('bookmarks')
2078 2078 def handlebookmark(op, inpart):
2079 2079 """transmit bookmark information
2080 2080
2081 2081 The part contains binary encoded bookmark information.
2082 2082
2083 2083 The exact behavior of this part can be controlled by the 'bookmarks' mode
2084 2084 on the bundle operation.
2085 2085
2086 2086 When mode is 'apply' (the default) the bookmark information is applied as
2087 2087 is to the unbundling repository. Make sure a 'check:bookmarks' part is
2088 2088 issued earlier to check for push races in such an update. This behavior is
2089 2089 suitable for pushing.
2090 2090
2091 2091 When mode is 'records', the information is recorded into the 'bookmarks'
2092 2092 records of the bundle operation. This behavior is suitable for pulling.
2093 2093 """
2094 2094 changes = bookmarks.binarydecode(inpart)
2095 2095
2096 2096 pushkeycompat = op.repo.ui.configbool('server', 'bookmarks-pushkey-compat')
2097 2097 bookmarksmode = op.modes.get('bookmarks', 'apply')
2098 2098
2099 2099 if bookmarksmode == 'apply':
2100 2100 tr = op.gettransaction()
2101 2101 bookstore = op.repo._bookmarks
2102 2102 if pushkeycompat:
2103 2103 allhooks = []
2104 2104 for book, node in changes:
2105 2105 hookargs = tr.hookargs.copy()
2106 2106 hookargs['pushkeycompat'] = '1'
2107 2107 hookargs['namespace'] = 'bookmarks'
2108 2108 hookargs['key'] = book
2109 2109 hookargs['old'] = nodemod.hex(bookstore.get(book, ''))
2110 2110 hookargs['new'] = nodemod.hex(node if node is not None else '')
2111 2111 allhooks.append(hookargs)
2112 2112
2113 2113 for hookargs in allhooks:
2114 2114 op.repo.hook('prepushkey', throw=True,
2115 2115 **pycompat.strkwargs(hookargs))
2116 2116
2117 2117 bookstore.applychanges(op.repo, op.gettransaction(), changes)
2118 2118
2119 2119 if pushkeycompat:
2120 2120 def runhook():
2121 2121 for hookargs in allhooks:
2122 2122 op.repo.hook('pushkey', **pycompat.strkwargs(hookargs))
2123 2123 op.repo._afterlock(runhook)
2124 2124
2125 2125 elif bookmarksmode == 'records':
2126 2126 for book, node in changes:
2127 2127 record = {'bookmark': book, 'node': node}
2128 2128 op.records.add('bookmarks', record)
2129 2129 else:
2130 2130 raise error.ProgrammingError('unknown bookmark mode: %s' % bookmarksmode)
2131 2131
2132 2132 @parthandler('phase-heads')
2133 2133 def handlephases(op, inpart):
2134 2134 """apply phases from bundle part to repo"""
2135 2135 headsbyphase = phases.binarydecode(inpart)
2136 2136 phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)
2137 2137
2138 2138 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
2139 2139 def handlepushkeyreply(op, inpart):
2140 2140 """retrieve the result of a pushkey request"""
2141 2141 ret = int(inpart.params['return'])
2142 2142 partid = int(inpart.params['in-reply-to'])
2143 2143 op.records.add('pushkey', {'return': ret}, partid)
2144 2144
2145 2145 @parthandler('obsmarkers')
2146 2146 def handleobsmarker(op, inpart):
2147 2147 """add a stream of obsmarkers to the repo"""
2148 2148 tr = op.gettransaction()
2149 2149 markerdata = inpart.read()
2150 2150 if op.ui.config('experimental', 'obsmarkers-exchange-debug'):
2151 2151 op.ui.write(('obsmarker-exchange: %i bytes received\n')
2152 2152 % len(markerdata))
2153 2153 # The mergemarkers call will crash if marker creation is not enabled.
2154 2154 # we want to avoid this if the part is advisory.
2155 2155 if not inpart.mandatory and op.repo.obsstore.readonly:
2156 2156 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
2157 2157 return
2158 2158 new = op.repo.obsstore.mergemarkers(tr, markerdata)
2159 2159 op.repo.invalidatevolatilesets()
2160 2160 if new:
2161 2161 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
2162 2162 op.records.add('obsmarkers', {'new': new})
2163 2163 if op.reply is not None:
2164 2164 rpart = op.reply.newpart('reply:obsmarkers')
2165 2165 rpart.addparam(
2166 2166 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
2167 2167 rpart.addparam('new', '%i' % new, mandatory=False)
2168 2168
2169 2169
2170 2170 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
2171 2171 def handleobsmarkerreply(op, inpart):
2172 2172 """retrieve the result of an obsmarkers request"""
2173 2173 ret = int(inpart.params['new'])
2174 2174 partid = int(inpart.params['in-reply-to'])
2175 2175 op.records.add('obsmarkers', {'new': ret}, partid)
2176 2176
2177 2177 @parthandler('hgtagsfnodes')
2178 2178 def handlehgtagsfnodes(op, inpart):
2179 2179 """Applies .hgtags fnodes cache entries to the local repo.
2180 2180
2181 2181 Payload is pairs of 20 byte changeset nodes and filenodes.
2182 2182 """
2183 2183 # Grab the transaction so we ensure that we have the lock at this point.
2184 2184 if op.ui.configbool('experimental', 'bundle2lazylocking'):
2185 2185 op.gettransaction()
2186 2186 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
2187 2187
2188 2188 count = 0
2189 2189 while True:
2190 2190 node = inpart.read(20)
2191 2191 fnode = inpart.read(20)
2192 2192 if len(node) < 20 or len(fnode) < 20:
2193 2193 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
2194 2194 break
2195 2195 cache.setfnode(node, fnode)
2196 2196 count += 1
2197 2197
2198 2198 cache.write()
2199 2199 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
2200 2200
2201 2201 rbcstruct = struct.Struct('>III')
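To illustrate the record layout consumed by the handler below (branch name and node value are hypothetical):

    # header: branch name length, open-head count, closed-head count
    opennode = b'\x11' * 20
    record = rbcstruct.pack(len(b'default'), 1, 0) + b'default' + opennode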
2202 2202
2203 2203 @parthandler('cache:rev-branch-cache')
2204 2204 def handlerbc(op, inpart):
2205 2205 """receive a rev-branch-cache payload and update the local cache
2206 2206
2207 2207 The payload is a series of records, one per branch
2208 2208
2209 2209 1) branch name length
2210 2210 2) number of open heads
2211 2211 3) number of closed heads
2212 2212 4) open heads nodes
2213 2213 5) closed heads nodes
2214 2214 """
2215 2215 total = 0
2216 2216 rawheader = inpart.read(rbcstruct.size)
2217 2217 cache = op.repo.revbranchcache()
2218 2218 cl = op.repo.unfiltered().changelog
2219 2219 while rawheader:
2220 2220 header = rbcstruct.unpack(rawheader)
2221 2221 total += header[1] + header[2]
2222 2222 utf8branch = inpart.read(header[0])
2223 2223 branch = encoding.tolocal(utf8branch)
2224 2224 for x in xrange(header[1]):
2225 2225 node = inpart.read(20)
2226 2226 rev = cl.rev(node)
2227 2227 cache.setdata(branch, rev, node, False)
2228 2228 for x in xrange(header[2]):
2229 2229 node = inpart.read(20)
2230 2230 rev = cl.rev(node)
2231 2231 cache.setdata(branch, rev, node, True)
2232 2232 rawheader = inpart.read(rbcstruct.size)
2233 2233 cache.write()
2234 2234
2235 2235 @parthandler('pushvars')
2236 2236 def bundle2getvars(op, part):
2237 2237 '''unbundle a bundle2 containing shellvars on the server'''
2238 2238 # An option to disable unbundling on server-side for security reasons
2239 2239 if op.ui.configbool('push', 'pushvars.server'):
2240 2240 hookargs = {}
2241 2241 for key, value in part.advisoryparams:
2242 2242 key = key.upper()
2243 2243 # We want pushed variables to have USERVAR_ prepended so we know
2244 2244 # they came from the --pushvar flag.
2245 2245 key = "USERVAR_" + key
2246 2246 hookargs[key] = value
2247 2247 op.addhookargs(hookargs)
2248 2248
2249 2249 @parthandler('stream2', ('requirements', 'filecount', 'bytecount'))
2250 2250 def handlestreamv2bundle(op, part):
2251 2251
2252 2252 requirements = urlreq.unquote(part.params['requirements']).split(',')
2253 2253 filecount = int(part.params['filecount'])
2254 2254 bytecount = int(part.params['bytecount'])
2255 2255
2256 2256 repo = op.repo
2257 2257 if len(repo):
2258 2258 msg = _('cannot apply stream clone to non-empty repository')
2259 2259 raise error.Abort(msg)
2260 2260
2261 2261 repo.ui.debug('applying stream bundle\n')
2262 2262 streamclone.applybundlev2(repo, part, filecount, bytecount,
2263 2263 requirements)
@@ -1,2336 +1,2338 b''
1 1 # exchange.py - utility to exchange data between repos.
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import collections
11 11 import errno
12 12 import hashlib
13 13
14 14 from .i18n import _
15 15 from .node import (
16 16 bin,
17 17 hex,
18 18 nullid,
19 19 )
20 20 from .thirdparty import (
21 21 attr,
22 22 )
23 23 from . import (
24 24 bookmarks as bookmod,
25 25 bundle2,
26 26 changegroup,
27 27 discovery,
28 28 error,
29 29 lock as lockmod,
30 30 logexchange,
31 31 obsolete,
32 32 phases,
33 33 pushkey,
34 34 pycompat,
35 35 scmutil,
36 36 sslutil,
37 37 streamclone,
38 38 url as urlmod,
39 39 util,
40 40 )
41 41 from .utils import (
42 42 stringutil,
43 43 )
44 44
45 45 urlerr = util.urlerr
46 46 urlreq = util.urlreq
47 47
48 48 # Maps bundle version human names to changegroup versions.
49 49 _bundlespeccgversions = {'v1': '01',
50 50 'v2': '02',
51 51 'packed1': 's1',
52 52 'bundle2': '02', #legacy
53 53 }
54 54
55 55 # Maps bundle version with content opts to choose which part to bundle
56 56 _bundlespeccontentopts = {
57 57 'v1': {
58 58 'changegroup': True,
59 59 'cg.version': '01',
60 60 'obsolescence': False,
61 61 'phases': False,
62 62 'tagsfnodescache': False,
63 63 'revbranchcache': False
64 64 },
65 65 'v2': {
66 66 'changegroup': True,
67 67 'cg.version': '02',
68 68 'obsolescence': False,
69 69 'phases': False,
70 70 'tagsfnodescache': True,
71 71 'revbranchcache': True
72 72 },
73 73 'packed1' : {
74 74 'cg.version': 's1'
75 75 }
76 76 }
77 77 _bundlespeccontentopts['bundle2'] = _bundlespeccontentopts['v2']
78 78
79 79 _bundlespecvariants = {"streamv2": {"changegroup": False, "streamv2": True,
80 80 "tagsfnodescache": False,
81 81 "revbranchcache": False}}
82 82
83 83 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
84 84 _bundlespecv1compengines = {'gzip', 'bzip2', 'none'}
85 85
86 86 @attr.s
87 87 class bundlespec(object):
88 88 compression = attr.ib()
89 89 version = attr.ib()
90 90 params = attr.ib()
91 91 contentopts = attr.ib()
92 92
93 93 def parsebundlespec(repo, spec, strict=True, externalnames=False):
94 94 """Parse a bundle string specification into parts.
95 95
96 96 Bundle specifications denote a well-defined bundle/exchange format.
97 97 The content of a given specification should not change over time in
98 98 order to ensure that bundles produced by a newer version of Mercurial are
99 99 readable from an older version.
100 100
101 101 The string currently has the form:
102 102
103 103 <compression>-<type>[;<parameter0>[;<parameter1>]]
104 104
105 105 Where <compression> is one of the supported compression formats
106 106 and <type> is (currently) a version string. A ";" can follow the type and
107 107 all text afterwards is interpreted as URI encoded, ";" delimited key=value
108 108 pairs.
109 109
110 110 If ``strict`` is True (the default) <compression> is required. Otherwise,
111 111 it is optional.
112 112
113 113 If ``externalnames`` is False (the default), the human-centric names will
114 114 be converted to their internal representation.
115 115
116 116 Returns a bundlespec object of (compression, version, parameters).
117 117 Compression will be ``None`` if not in strict mode and a compression isn't
118 118 defined.
119 119
120 120 An ``InvalidBundleSpecification`` is raised when the specification is
121 121 not syntactically well formed.
122 122
123 123 An ``UnsupportedBundleSpecification`` is raised when the compression or
124 124 bundle type/version is not recognized.
125 125
126 126 Note: this function will likely eventually return a more complex data
127 127 structure, including bundle2 part information.
128 128 """
129 129 def parseparams(s):
130 130 if ';' not in s:
131 131 return s, {}
132 132
133 133 params = {}
134 134 version, paramstr = s.split(';', 1)
135 135
136 136 for p in paramstr.split(';'):
137 137 if '=' not in p:
138 138 raise error.InvalidBundleSpecification(
139 139 _('invalid bundle specification: '
140 140 'missing "=" in parameter: %s') % p)
141 141
142 142 key, value = p.split('=', 1)
143 143 key = urlreq.unquote(key)
144 144 value = urlreq.unquote(value)
145 145 params[key] = value
146 146
147 147 return version, params
148 148
149 149
150 150 if strict and '-' not in spec:
151 151 raise error.InvalidBundleSpecification(
152 152 _('invalid bundle specification; '
153 153 'must be prefixed with compression: %s') % spec)
154 154
155 155 if '-' in spec:
156 156 compression, version = spec.split('-', 1)
157 157
158 158 if compression not in util.compengines.supportedbundlenames:
159 159 raise error.UnsupportedBundleSpecification(
160 160 _('%s compression is not supported') % compression)
161 161
162 162 version, params = parseparams(version)
163 163
164 164 if version not in _bundlespeccgversions:
165 165 raise error.UnsupportedBundleSpecification(
166 166 _('%s is not a recognized bundle version') % version)
167 167 else:
168 168 # Value could be just the compression or just the version, in which
169 169 # case some defaults are assumed (but only when not in strict mode).
170 170 assert not strict
171 171
172 172 spec, params = parseparams(spec)
173 173
174 174 if spec in util.compengines.supportedbundlenames:
175 175 compression = spec
176 176 version = 'v1'
177 177 # Generaldelta repos require v2.
178 178 if 'generaldelta' in repo.requirements:
179 179 version = 'v2'
180 180 # Modern compression engines require v2.
181 181 if compression not in _bundlespecv1compengines:
182 182 version = 'v2'
183 183 elif spec in _bundlespeccgversions:
184 184 if spec == 'packed1':
185 185 compression = 'none'
186 186 else:
187 187 compression = 'bzip2'
188 188 version = spec
189 189 else:
190 190 raise error.UnsupportedBundleSpecification(
191 191 _('%s is not a recognized bundle specification') % spec)
192 192
193 193 # Bundle version 1 only supports a known set of compression engines.
194 194 if version == 'v1' and compression not in _bundlespecv1compengines:
195 195 raise error.UnsupportedBundleSpecification(
196 196 _('compression engine %s is not supported on v1 bundles') %
197 197 compression)
198 198
199 199 # The specification for packed1 can optionally declare the data formats
200 200 # required to apply it. If we see this metadata, compare against what the
201 201 # repo supports and error if the bundle isn't compatible.
202 202 if version == 'packed1' and 'requirements' in params:
203 203 requirements = set(params['requirements'].split(','))
204 204 missingreqs = requirements - repo.supportedformats
205 205 if missingreqs:
206 206 raise error.UnsupportedBundleSpecification(
207 207 _('missing support for repository features: %s') %
208 208 ', '.join(sorted(missingreqs)))
209 209
210 210 # Compute contentopts based on the version
211 211 contentopts = _bundlespeccontentopts.get(version, {}).copy()
212 212
213 213 # Process the variants
214 214 if "stream" in params and params["stream"] == "v2":
215 215 variant = _bundlespecvariants["streamv2"]
216 216 contentopts.update(variant)
217 217
218 218 if not externalnames:
219 219 engine = util.compengines.forbundlename(compression)
220 220 compression = engine.bundletype()[1]
221 221 version = _bundlespeccgversions[version]
222 222
223 223 return bundlespec(compression, version, params, contentopts)
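Two hedged examples of the parsing above (a `repo` object is assumed in scope; the results use internal names because ``externalnames`` defaults to False):

    spec = parsebundlespec(repo, 'gzip-v2')
    # -> compression 'GZ', version '02'
    spec = parsebundlespec(repo, 'none-packed1;requirements=revlogv1')
    # -> version 's1', params {'requirements': 'revlogv1'}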
224 224
225 225 def readbundle(ui, fh, fname, vfs=None):
226 226 header = changegroup.readexactly(fh, 4)
227 227
228 228 alg = None
229 229 if not fname:
230 230 fname = "stream"
231 231 if not header.startswith('HG') and header.startswith('\0'):
232 232 fh = changegroup.headerlessfixup(fh, header)
233 233 header = "HG10"
234 234 alg = 'UN'
235 235 elif vfs:
236 236 fname = vfs.join(fname)
237 237
238 238 magic, version = header[0:2], header[2:4]
239 239
240 240 if magic != 'HG':
241 241 raise error.Abort(_('%s: not a Mercurial bundle') % fname)
242 242 if version == '10':
243 243 if alg is None:
244 244 alg = changegroup.readexactly(fh, 2)
245 245 return changegroup.cg1unpacker(fh, alg)
246 246 elif version.startswith('2'):
247 247 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
248 248 elif version == 'S1':
249 249 return streamclone.streamcloneapplier(fh)
250 250 else:
251 251 raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))
252 252
253 253 def getbundlespec(ui, fh):
254 254 """Infer the bundlespec from a bundle file handle.
255 255
256 256 The input file handle is seeked and the original seek position is not
257 257 restored.
258 258 """
259 259 def speccompression(alg):
260 260 try:
261 261 return util.compengines.forbundletype(alg).bundletype()[0]
262 262 except KeyError:
263 263 return None
264 264
265 265 b = readbundle(ui, fh, None)
266 266 if isinstance(b, changegroup.cg1unpacker):
267 267 alg = b._type
268 268 if alg == '_truncatedBZ':
269 269 alg = 'BZ'
270 270 comp = speccompression(alg)
271 271 if not comp:
272 272 raise error.Abort(_('unknown compression algorithm: %s') % alg)
273 273 return '%s-v1' % comp
274 274 elif isinstance(b, bundle2.unbundle20):
275 275 if 'Compression' in b.params:
276 276 comp = speccompression(b.params['Compression'])
277 277 if not comp:
278 278 raise error.Abort(_('unknown compression algorithm: %s') % b.params['Compression'])
279 279 else:
280 280 comp = 'none'
281 281
282 282 version = None
283 283 for part in b.iterparts():
284 284 if part.type == 'changegroup':
285 285 version = part.params['version']
286 286 if version in ('01', '02'):
287 287 version = 'v2'
288 288 else:
289 289 raise error.Abort(_('changegroup version %s does not have '
290 290 'a known bundlespec') % version,
291 291 hint=_('try upgrading your Mercurial '
292 292 'client'))
293 293 elif part.type == 'stream2' and version is None:
294 294 # A stream2 part must be part of a v2 bundle
295 295 version = "v2"
296 296 requirements = urlreq.unquote(part.params['requirements'])
297 297 splitted = requirements.split()
298 298 params = bundle2._formatrequirementsparams(splitted)
299 299 return 'none-v2;stream=v2;%s' % params
300 300
301 301 if not version:
302 302 raise error.Abort(_('could not identify changegroup version in '
303 303 'bundle'))
304 304
305 305 return '%s-%s' % (comp, version)
306 306 elif isinstance(b, streamclone.streamcloneapplier):
307 307 requirements = streamclone.readbundle1header(fh)[2]
308 308 formatted = bundle2._formatrequirementsparams(requirements)
309 309 return 'none-packed1;%s' % formatted
310 310 else:
311 311 raise error.Abort(_('unknown bundle type: %s') % b)
312 312
313 313 def _computeoutgoing(repo, heads, common):
314 314 """Computes which revs are outgoing given a set of common
315 315 and a set of heads.
316 316
317 317 This is a separate function so extensions can have access to
318 318 the logic.
319 319
320 320 Returns a discovery.outgoing object.
321 321 """
322 322 cl = repo.changelog
323 323 if common:
324 324 hasnode = cl.hasnode
325 325 common = [n for n in common if hasnode(n)]
326 326 else:
327 327 common = [nullid]
328 328 if not heads:
329 329 heads = cl.heads()
330 330 return discovery.outgoing(repo, common, heads)
331 331
332 332 def _forcebundle1(op):
333 333 """return true if a pull/push must use bundle1
334 334
335 335 This function is used to allow testing of the older bundle version"""
336 336 ui = op.repo.ui
337 337 # The goal of this config is to allow developers to choose the bundle
338 338 # version used during exchange. This is especially handy during tests.
339 339 # Value is a list of bundle version to be picked from, highest version
340 340 # should be used.
341 341 #
342 342 # developer config: devel.legacy.exchange
343 343 exchange = ui.configlist('devel', 'legacy.exchange')
344 344 forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
345 345 return forcebundle1 or not op.remote.capable('bundle2')
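For completeness, the developer knob read above is set in an hgrc like this (the section and key are real; forcing bundle1 is just an example value):

    # [devel]
    # legacy.exchange = bundle1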
346 346
347 347 class pushoperation(object):
348 348 """An object that represents a single push operation
349 349
350 350 Its purpose is to carry push related state and very common operations.
351 351
352 352 A new pushoperation should be created at the beginning of each push and
353 353 discarded afterward.
354 354 """
355 355
356 356 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
357 357 bookmarks=(), pushvars=None):
358 358 # repo we push from
359 359 self.repo = repo
360 360 self.ui = repo.ui
361 361 # repo we push to
362 362 self.remote = remote
363 363 # force option provided
364 364 self.force = force
365 365 # revs to be pushed (None is "all")
366 366 self.revs = revs
367 367 # bookmark explicitly pushed
368 368 self.bookmarks = bookmarks
369 369 # allow push of new branch
370 370 self.newbranch = newbranch
371 371 # step already performed
372 372 # (used to check what steps have been already performed through bundle2)
373 373 self.stepsdone = set()
374 374 # Integer version of the changegroup push result
375 375 # - None means nothing to push
376 376 # - 0 means HTTP error
377 377 # - 1 means we pushed and remote head count is unchanged *or*
378 378 # we have outgoing changesets but refused to push
379 379 # - other values as described by addchangegroup()
380 380 self.cgresult = None
381 381 # Boolean value for the bookmark push
382 382 self.bkresult = None
383 383 # discover.outgoing object (contains common and outgoing data)
384 384 self.outgoing = None
385 385 # all remote topological heads before the push
386 386 self.remoteheads = None
387 387 # Details of the remote branch pre and post push
388 388 #
389 389 # mapping: {'branch': ([remoteheads],
390 390 # [newheads],
391 391 # [unsyncedheads],
392 392 # [discardedheads])}
393 393 # - branch: the branch name
394 394 # - remoteheads: the list of remote heads known locally
395 395 # None if the branch is new
396 396 # - newheads: the new remote heads (known locally) with outgoing pushed
397 397 # - unsyncedheads: the list of remote heads unknown locally.
398 398 # - discardedheads: the list of remote heads made obsolete by the push
399 399 self.pushbranchmap = None
400 400 # testable as a boolean indicating if any nodes are missing locally.
401 401 self.incoming = None
402 402 # summary of the remote phase situation
403 403 self.remotephases = None
404 404 # phases changes that must be pushed along side the changesets
405 405 self.outdatedphases = None
406 406 # phases changes that must be pushed if changeset push fails
407 407 self.fallbackoutdatedphases = None
408 408 # outgoing obsmarkers
409 409 self.outobsmarkers = set()
410 410 # outgoing bookmarks
411 411 self.outbookmarks = []
412 412 # transaction manager
413 413 self.trmanager = None
414 414 # map { pushkey partid -> callback handling failure}
415 415 # used to handle exception from mandatory pushkey part failure
416 416 self.pkfailcb = {}
417 417 # an iterable of pushvars or None
418 418 self.pushvars = pushvars
419 419
420 420 @util.propertycache
421 421 def futureheads(self):
422 422 """future remote heads if the changeset push succeeds"""
423 423 return self.outgoing.missingheads
424 424
425 425 @util.propertycache
426 426 def fallbackheads(self):
427 427 """future remote heads if the changeset push fails"""
428 428 if self.revs is None:
429 429 # no target to push, all common heads are relevant
430 430 return self.outgoing.commonheads
431 431 unfi = self.repo.unfiltered()
432 432 # I want cheads = heads(::missingheads and ::commonheads)
433 433 # (missingheads is revs with secret changeset filtered out)
434 434 #
435 435 # This can be expressed as:
436 436 # cheads = ( (missingheads and ::commonheads)
437 437 # + (commonheads and ::missingheads))"
438 438 # )
439 439 #
440 440 # while trying to push we already computed the following:
441 441 # common = (::commonheads)
442 442 # missing = ((commonheads::missingheads) - commonheads)
443 443 #
444 444 # We can pick:
445 445 # * missingheads part of common (::commonheads)
446 446 common = self.outgoing.common
447 447 nm = self.repo.changelog.nodemap
448 448 cheads = [node for node in self.revs if nm[node] in common]
449 449 # and
450 450 # * commonheads parents on missing
451 451 revset = unfi.set('%ln and parents(roots(%ln))',
452 452 self.outgoing.commonheads,
453 453 self.outgoing.missing)
454 454 cheads.extend(c.node() for c in revset)
455 455 return cheads
456 456
457 457 @property
458 458 def commonheads(self):
459 459 """set of all common heads after changeset bundle push"""
460 460 if self.cgresult:
461 461 return self.futureheads
462 462 else:
463 463 return self.fallbackheads
464 464
465 465 # mapping of message used when pushing bookmark
466 466 bookmsgmap = {'update': (_("updating bookmark %s\n"),
467 467 _('updating bookmark %s failed!\n')),
468 468 'export': (_("exporting bookmark %s\n"),
469 469 _('exporting bookmark %s failed!\n')),
470 470 'delete': (_("deleting remote bookmark %s\n"),
471 471 _('deleting remote bookmark %s failed!\n')),
472 472 }
473 473
474 474
475 475 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
476 476 opargs=None):
477 477 '''Push outgoing changesets (limited by revs) from a local
478 478 repository to remote. Return an integer:
479 479 - None means nothing to push
480 480 - 0 means HTTP error
481 481 - 1 means we pushed and remote head count is unchanged *or*
482 482 we have outgoing changesets but refused to push
483 483 - other values as described by addchangegroup()
484 484 '''
485 485 if opargs is None:
486 486 opargs = {}
487 487 pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
488 488 **pycompat.strkwargs(opargs))
489 489 if pushop.remote.local():
490 490 missing = (set(pushop.repo.requirements)
491 491 - pushop.remote.local().supported)
492 492 if missing:
493 493 msg = _("required features are not"
494 494 " supported in the destination:"
495 495 " %s") % (', '.join(sorted(missing)))
496 496 raise error.Abort(msg)
497 497
498 498 if not pushop.remote.canpush():
499 499 raise error.Abort(_("destination does not support push"))
500 500
501 501 if not pushop.remote.capable('unbundle'):
502 502 raise error.Abort(_('cannot push: destination does not support the '
503 503 'unbundle wire protocol command'))
504 504
505 505 # get lock as we might write phase data
506 506 wlock = lock = None
507 507 try:
508 508 # bundle2 push may receive a reply bundle touching bookmarks or other
509 509 # things requiring the wlock. Take it now to ensure proper ordering.
510 510 maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
511 511 if (not _forcebundle1(pushop)) and maypushback:
512 512 wlock = pushop.repo.wlock()
513 513 lock = pushop.repo.lock()
514 514 pushop.trmanager = transactionmanager(pushop.repo,
515 515 'push-response',
516 516 pushop.remote.url())
517 517 except IOError as err:
518 518 if err.errno != errno.EACCES:
519 519 raise
520 520 # source repo cannot be locked.
521 521 # We do not abort the push, but just disable the local phase
522 522 # synchronisation.
523 523 msg = 'cannot lock source repository: %s\n' % err
524 524 pushop.ui.debug(msg)
525 525
526 526 with wlock or util.nullcontextmanager(), \
527 527 lock or util.nullcontextmanager(), \
528 528 pushop.trmanager or util.nullcontextmanager():
529 529 pushop.repo.checkpush(pushop)
530 530 _pushdiscovery(pushop)
531 531 if not _forcebundle1(pushop):
532 532 _pushbundle2(pushop)
533 533 _pushchangeset(pushop)
534 534 _pushsyncphase(pushop)
535 535 _pushobsolete(pushop)
536 536 _pushbookmark(pushop)
537 537
538 538 return pushop
539 539
540 540 # list of steps to perform discovery before push
541 541 pushdiscoveryorder = []
542 542
543 543 # Mapping between step name and function
544 544 #
545 545 # This exists to help extensions wrap steps if necessary
546 546 pushdiscoverymapping = {}
547 547
548 548 def pushdiscovery(stepname):
549 549 """decorator for function performing discovery before push
550 550
551 551 The function is added to the step -> function mapping and appended to the
552 552 list of steps. Beware that decorated functions will be added in order (this
553 553 may matter).
554 554
555 555 You can only use this decorator for a new step, if you want to wrap a step
556 556 from an extension, change the pushdiscovery dictionary directly."""
557 557 def dec(func):
558 558 assert stepname not in pushdiscoverymapping
559 559 pushdiscoverymapping[stepname] = func
560 560 pushdiscoveryorder.append(stepname)
561 561 return func
562 562 return dec
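A hedged sketch of how an extension could register an additional discovery step with the decorator above (step name and body are made up):

    @pushdiscovery('example-step')
    def _pushdiscoveryexample(pushop):
        pushop.ui.debug('running example discovery step\n')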
563 563
564 564 def _pushdiscovery(pushop):
565 565 """Run all discovery steps"""
566 566 for stepname in pushdiscoveryorder:
567 567 step = pushdiscoverymapping[stepname]
568 568 step(pushop)
569 569
570 570 @pushdiscovery('changeset')
571 571 def _pushdiscoverychangeset(pushop):
572 572 """discover the changesets that need to be pushed"""
573 573 fci = discovery.findcommonincoming
574 574 if pushop.revs:
575 575 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force,
576 576 ancestorsof=pushop.revs)
577 577 else:
578 578 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
579 579 common, inc, remoteheads = commoninc
580 580 fco = discovery.findcommonoutgoing
581 581 outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
582 582 commoninc=commoninc, force=pushop.force)
583 583 pushop.outgoing = outgoing
584 584 pushop.remoteheads = remoteheads
585 585 pushop.incoming = inc
586 586
587 587 @pushdiscovery('phase')
588 588 def _pushdiscoveryphase(pushop):
589 589 """discover the phases that need to be pushed
590 590
591 591 (computed for both success and failure case for changesets push)"""
592 592 outgoing = pushop.outgoing
593 593 unfi = pushop.repo.unfiltered()
594 594 remotephases = pushop.remote.listkeys('phases')
595 595 if (pushop.ui.configbool('ui', '_usedassubrepo')
596 596 and remotephases # server supports phases
597 597 and not pushop.outgoing.missing # no changesets to be pushed
598 598 and remotephases.get('publishing', False)):
599 599 # When:
600 600 # - this is a subrepo push
601 601 # - and remote support phase
602 602 # - and no changeset are to be pushed
603 603 # - and remote is publishing
604 604 # We may be in issue 3781 case!
605 605 # We drop the possible phase synchronisation done by
606 606 # courtesy to publish changesets possibly locally draft
607 607 # on the remote.
608 608 pushop.outdatedphases = []
609 609 pushop.fallbackoutdatedphases = []
610 610 return
611 611
612 612 pushop.remotephases = phases.remotephasessummary(pushop.repo,
613 613 pushop.fallbackheads,
614 614 remotephases)
615 615 droots = pushop.remotephases.draftroots
616 616
617 617 extracond = ''
618 618 if not pushop.remotephases.publishing:
619 619 extracond = ' and public()'
620 620 revset = 'heads((%%ln::%%ln) %s)' % extracond
621 621 # Get the list of all revs that are draft on the remote but public here.
622 622 # XXX Beware that the revset breaks if droots is not strictly made of
623 623 # XXX roots; we may want to ensure it is, but that is costly
624 624 fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
625 625 if not outgoing.missing:
626 626 future = fallback
627 627 else:
628 628 # adds changeset we are going to push as draft
629 629 #
630 630 # should not be necessary for a publishing server, but because of an
631 631 # issue fixed in xxxxx we have to do it anyway.
632 632 fdroots = list(unfi.set('roots(%ln + %ln::)',
633 633 outgoing.missing, droots))
634 634 fdroots = [f.node() for f in fdroots]
635 635 future = list(unfi.set(revset, fdroots, pushop.futureheads))
636 636 pushop.outdatedphases = future
637 637 pushop.fallbackoutdatedphases = fallback
638 638
639 639 @pushdiscovery('obsmarker')
640 640 def _pushdiscoveryobsmarkers(pushop):
641 641 if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
642 642 and pushop.repo.obsstore
643 643 and 'obsolete' in pushop.remote.listkeys('namespaces')):
644 644 repo = pushop.repo
645 645 # very naive computation, that can be quite expensive on big repos.
646 646 # However: evolution is currently slow on them anyway.
647 647 nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
648 648 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
649 649
650 650 @pushdiscovery('bookmarks')
651 651 def _pushdiscoverybookmarks(pushop):
652 652 ui = pushop.ui
653 653 repo = pushop.repo.unfiltered()
654 654 remote = pushop.remote
655 655 ui.debug("checking for updated bookmarks\n")
656 656 ancestors = ()
657 657 if pushop.revs:
658 658 revnums = map(repo.changelog.rev, pushop.revs)
659 659 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
660 660 remotebookmark = remote.listkeys('bookmarks')
661 661
662 662 explicit = set([repo._bookmarks.expandname(bookmark)
663 663 for bookmark in pushop.bookmarks])
664 664
665 665 remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
666 666 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
667 667
668 668 def safehex(x):
669 669 if x is None:
670 670 return x
671 671 return hex(x)
672 672
673 673 def hexifycompbookmarks(bookmarks):
674 674 return [(b, safehex(scid), safehex(dcid))
675 675 for (b, scid, dcid) in bookmarks]
676 676
677 677 comp = [hexifycompbookmarks(marks) for marks in comp]
678 678 return _processcompared(pushop, ancestors, explicit, remotebookmark, comp)
679 679
680 680 def _processcompared(pushop, pushed, explicit, remotebms, comp):
681 681 """take decision on bookmark to pull from the remote bookmark
682 682
683 683 Exist to help extensions who want to alter this behavior.
684 684 """
685 685 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
686 686
687 687 repo = pushop.repo
688 688
689 689 for b, scid, dcid in advsrc:
690 690 if b in explicit:
691 691 explicit.remove(b)
692 692 if not pushed or repo[scid].rev() in pushed:
693 693 pushop.outbookmarks.append((b, dcid, scid))
694 694 # search for added bookmarks
695 695 for b, scid, dcid in addsrc:
696 696 if b in explicit:
697 697 explicit.remove(b)
698 698 pushop.outbookmarks.append((b, '', scid))
699 699 # search for overwritten bookmarks
700 700 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
701 701 if b in explicit:
702 702 explicit.remove(b)
703 703 pushop.outbookmarks.append((b, dcid, scid))
704 704 # search for bookmarks to delete
705 705 for b, scid, dcid in adddst:
706 706 if b in explicit:
707 707 explicit.remove(b)
708 708 # treat as "deleted locally"
709 709 pushop.outbookmarks.append((b, dcid, ''))
710 710 # identical bookmarks shouldn't get reported
711 711 for b, scid, dcid in same:
712 712 if b in explicit:
713 713 explicit.remove(b)
714 714
715 715 if explicit:
716 716 explicit = sorted(explicit)
717 717 # we should probably list all of them
718 718 pushop.ui.warn(_('bookmark %s does not exist on the local '
719 719 'or remote repository!\n') % explicit[0])
720 720 pushop.bkresult = 2
721 721
722 722 pushop.outbookmarks.sort()
723 723
724 724 def _pushcheckoutgoing(pushop):
725 725 outgoing = pushop.outgoing
726 726 unfi = pushop.repo.unfiltered()
727 727 if not outgoing.missing:
728 728 # nothing to push
729 729 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
730 730 return False
731 731 # something to push
732 732 if not pushop.force:
733 733 # if repo.obsstore is empty --> there are no obsolete markers,
734 734 # so we can skip the iteration
735 735 if unfi.obsstore:
736 736 # these messages are defined here to stay within the 80-char limit
737 737 mso = _("push includes obsolete changeset: %s!")
738 738 mspd = _("push includes phase-divergent changeset: %s!")
739 739 mscd = _("push includes content-divergent changeset: %s!")
740 740 mst = {"orphan": _("push includes orphan changeset: %s!"),
741 741 "phase-divergent": mspd,
742 742 "content-divergent": mscd}
743 743 # If there is at least one obsolete or unstable
744 744 # changeset in missing, then at least one of the
745 745 # missing heads will be obsolete or unstable as
746 746 # well. So checking the heads only is ok.
747 747 for node in outgoing.missingheads:
748 748 ctx = unfi[node]
749 749 if ctx.obsolete():
750 750 raise error.Abort(mso % ctx)
751 751 elif ctx.isunstable():
752 752 # TODO print more than one instability in the abort
753 753 # message
754 754 raise error.Abort(mst[ctx.instabilities()[0]] % ctx)
755 755
756 756 discovery.checkheads(pushop)
757 757 return True
758 758
759 759 # List of names of steps to perform for an outgoing bundle2, order matters.
760 760 b2partsgenorder = []
761 761
762 762 # Mapping between step name and function
763 763 #
764 764 # This exists to help extensions wrap steps if necessary
765 765 b2partsgenmapping = {}
766 766
767 767 def b2partsgenerator(stepname, idx=None):
768 768 """decorator for function generating bundle2 part
769 769
770 770 The function is added to the step -> function mapping and appended to the
771 771 list of steps. Beware that decorated functions will be added in order
772 772 (this may matter).
773 773
774 774 You can only use this decorator for new steps; if you want to wrap a step
775 775 from an extension, modify the b2partsgenmapping dictionary directly."""
776 776 def dec(func):
777 777 assert stepname not in b2partsgenmapping
778 778 b2partsgenmapping[stepname] = func
779 779 if idx is None:
780 780 b2partsgenorder.append(stepname)
781 781 else:
782 782 b2partsgenorder.insert(idx, stepname)
783 783 return func
784 784 return dec
785 785
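# Example (illustrative sketch): wrapping an existing part-generation
# step from an extension by replacing its entry in the mapping, as the
# docstring above suggests; 'wrappedchangeset' is a hypothetical name.
#
#   from mercurial import exchange
#
#   origfn = exchange.b2partsgenmapping['changeset']
#   def wrappedchangeset(pushop, bundler):
#       pushop.ui.debug('about to generate the changeset part\n')
#       return origfn(pushop, bundler)
#   exchange.b2partsgenmapping['changeset'] = wrappedchangeset
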
786 786 def _pushb2ctxcheckheads(pushop, bundler):
787 787 """Generate race condition checking parts
788 788
789 789 Exists as an independent function to aid extensions
790 790 """
791 791 # * 'force' does not check for push races,
792 792 # * if we don't push anything, there is nothing to check.
793 793 if not pushop.force and pushop.outgoing.missingheads:
794 794 allowunrelated = 'related' in bundler.capabilities.get('checkheads', ())
795 795 emptyremote = pushop.pushbranchmap is None
796 796 if not allowunrelated or emptyremote:
797 797 bundler.newpart('check:heads', data=iter(pushop.remoteheads))
798 798 else:
799 799 affected = set()
800 800 for branch, heads in pushop.pushbranchmap.iteritems():
801 801 remoteheads, newheads, unsyncedheads, discardedheads = heads
802 802 if remoteheads is not None:
803 803 remote = set(remoteheads)
804 804 affected |= set(discardedheads) & remote
805 805 affected |= remote - set(newheads)
806 806 if affected:
807 807 data = iter(sorted(affected))
808 808 bundler.newpart('check:updated-heads', data=data)
809 809
810 810 def _pushing(pushop):
811 811 """return True if we are pushing anything"""
812 812 return bool(pushop.outgoing.missing
813 813 or pushop.outdatedphases
814 814 or pushop.outobsmarkers
815 815 or pushop.outbookmarks)
816 816
817 817 @b2partsgenerator('check-bookmarks')
818 818 def _pushb2checkbookmarks(pushop, bundler):
819 819 """insert bookmark move checking"""
820 820 if not _pushing(pushop) or pushop.force:
821 821 return
822 822 b2caps = bundle2.bundle2caps(pushop.remote)
823 823 hasbookmarkcheck = 'bookmarks' in b2caps
824 824 if not (pushop.outbookmarks and hasbookmarkcheck):
825 825 return
826 826 data = []
827 827 for book, old, new in pushop.outbookmarks:
828 828 old = bin(old)
829 829 data.append((book, old))
830 830 checkdata = bookmod.binaryencode(data)
831 831 bundler.newpart('check:bookmarks', data=checkdata)
832 832
833 833 @b2partsgenerator('check-phases')
834 834 def _pushb2checkphases(pushop, bundler):
835 835 """insert phase move checking"""
836 836 if not _pushing(pushop) or pushop.force:
837 837 return
838 838 b2caps = bundle2.bundle2caps(pushop.remote)
839 839 hasphaseheads = 'heads' in b2caps.get('phases', ())
840 840 if pushop.remotephases is not None and hasphaseheads:
841 841 # check that the remote phase has not changed
842 842 checks = [[] for p in phases.allphases]
843 843 checks[phases.public].extend(pushop.remotephases.publicheads)
844 844 checks[phases.draft].extend(pushop.remotephases.draftroots)
845 845 if any(checks):
846 846 for nodes in checks:
847 847 nodes.sort()
848 848 checkdata = phases.binaryencode(checks)
849 849 bundler.newpart('check:phases', data=checkdata)
850 850
851 851 @b2partsgenerator('changeset')
852 852 def _pushb2ctx(pushop, bundler):
853 853 """handle changegroup push through bundle2
854 854
855 855 addchangegroup result is stored in the ``pushop.cgresult`` attribute.
856 856 """
857 857 if 'changesets' in pushop.stepsdone:
858 858 return
859 859 pushop.stepsdone.add('changesets')
860 860 # Send known heads to the server for race detection.
861 861 if not _pushcheckoutgoing(pushop):
862 862 return
863 863 pushop.repo.prepushoutgoinghooks(pushop)
864 864
865 865 _pushb2ctxcheckheads(pushop, bundler)
866 866
867 867 b2caps = bundle2.bundle2caps(pushop.remote)
868 868 version = '01'
869 869 cgversions = b2caps.get('changegroup')
870 870 if cgversions: # 3.1 and 3.2 ship with an empty value
871 871 cgversions = [v for v in cgversions
872 872 if v in changegroup.supportedoutgoingversions(
873 873 pushop.repo)]
874 874 if not cgversions:
875 875 raise ValueError(_('no common changegroup version'))
876 876 version = max(cgversions)
877 877 cgstream = changegroup.makestream(pushop.repo, pushop.outgoing, version,
878 878 'push')
879 879 cgpart = bundler.newpart('changegroup', data=cgstream)
880 880 if cgversions:
881 881 cgpart.addparam('version', version)
882 882 if 'treemanifest' in pushop.repo.requirements:
883 883 cgpart.addparam('treemanifest', '1')
884 884 def handlereply(op):
885 885 """extract addchangegroup returns from server reply"""
886 886 cgreplies = op.records.getreplies(cgpart.id)
887 887 assert len(cgreplies['changegroup']) == 1
888 888 pushop.cgresult = cgreplies['changegroup'][0]['return']
889 889 return handlereply
890 890
891 891 @b2partsgenerator('phase')
892 892 def _pushb2phases(pushop, bundler):
893 893 """handle phase push through bundle2"""
894 894 if 'phases' in pushop.stepsdone:
895 895 return
896 896 b2caps = bundle2.bundle2caps(pushop.remote)
897 897 ui = pushop.repo.ui
898 898
899 899 legacyphase = 'phases' in ui.configlist('devel', 'legacy.exchange')
900 900 haspushkey = 'pushkey' in b2caps
901 901 hasphaseheads = 'heads' in b2caps.get('phases', ())
902 902
903 903 if hasphaseheads and not legacyphase:
904 904 return _pushb2phaseheads(pushop, bundler)
905 905 elif haspushkey:
906 906 return _pushb2phasespushkey(pushop, bundler)
907 907
908 908 def _pushb2phaseheads(pushop, bundler):
909 909 """push phase information through a bundle2 - binary part"""
910 910 pushop.stepsdone.add('phases')
911 911 if pushop.outdatedphases:
912 912 updates = [[] for p in phases.allphases]
913 913 updates[0].extend(h.node() for h in pushop.outdatedphases)
914 914 phasedata = phases.binaryencode(updates)
915 915 bundler.newpart('phase-heads', data=phasedata)
916 916
917 917 def _pushb2phasespushkey(pushop, bundler):
918 918 """push phase information through a bundle2 - pushkey part"""
919 919 pushop.stepsdone.add('phases')
920 920 part2node = []
921 921
922 922 def handlefailure(pushop, exc):
923 923 targetid = int(exc.partid)
924 924 for partid, node in part2node:
925 925 if partid == targetid:
926 926 raise error.Abort(_('updating %s to public failed') % node)
927 927
928 928 enc = pushkey.encode
929 929 for newremotehead in pushop.outdatedphases:
930 930 part = bundler.newpart('pushkey')
931 931 part.addparam('namespace', enc('phases'))
932 932 part.addparam('key', enc(newremotehead.hex()))
933 933 part.addparam('old', enc('%d' % phases.draft))
934 934 part.addparam('new', enc('%d' % phases.public))
935 935 part2node.append((part.id, newremotehead))
936 936 pushop.pkfailcb[part.id] = handlefailure
937 937
938 938 def handlereply(op):
939 939 for partid, node in part2node:
940 940 partrep = op.records.getreplies(partid)
941 941 results = partrep['pushkey']
942 942 assert len(results) <= 1
943 943 msg = None
944 944 if not results:
945 945 msg = _('server ignored update of %s to public!\n') % node
946 946 elif not int(results[0]['return']):
947 947 msg = _('updating %s to public failed!\n') % node
948 948 if msg is not None:
949 949 pushop.ui.warn(msg)
950 950 return handlereply
951 951
952 952 @b2partsgenerator('obsmarkers')
953 953 def _pushb2obsmarkers(pushop, bundler):
954 954 if 'obsmarkers' in pushop.stepsdone:
955 955 return
956 956 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
957 957 if obsolete.commonversion(remoteversions) is None:
958 958 return
959 959 pushop.stepsdone.add('obsmarkers')
960 960 if pushop.outobsmarkers:
961 961 markers = sorted(pushop.outobsmarkers)
962 962 bundle2.buildobsmarkerspart(bundler, markers)
963 963
964 964 @b2partsgenerator('bookmarks')
965 965 def _pushb2bookmarks(pushop, bundler):
966 966 """handle bookmark push through bundle2"""
967 967 if 'bookmarks' in pushop.stepsdone:
968 968 return
969 969 b2caps = bundle2.bundle2caps(pushop.remote)
970 970
971 971 legacy = pushop.repo.ui.configlist('devel', 'legacy.exchange')
972 972 legacybooks = 'bookmarks' in legacy
973 973
974 974 if not legacybooks and 'bookmarks' in b2caps:
975 975 return _pushb2bookmarkspart(pushop, bundler)
976 976 elif 'pushkey' in b2caps:
977 977 return _pushb2bookmarkspushkey(pushop, bundler)
978 978
979 979 def _bmaction(old, new):
980 980 """small utility for bookmark pushing"""
981 981 if not old:
982 982 return 'export'
983 983 elif not new:
984 984 return 'delete'
985 985 return 'update'
986 986
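# A quick worked example of the mapping above (node strings are made up):
#
#   _bmaction('', 'cafebabe')         -> 'export'  (bookmark added)
#   _bmaction('cafebabe', '')         -> 'delete'  (bookmark removed)
#   _bmaction('cafebabe', 'deadbeef') -> 'update'
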
987 987 def _pushb2bookmarkspart(pushop, bundler):
988 988 pushop.stepsdone.add('bookmarks')
989 989 if not pushop.outbookmarks:
990 990 return
991 991
992 992 allactions = []
993 993 data = []
994 994 for book, old, new in pushop.outbookmarks:
995 995 new = bin(new)
996 996 data.append((book, new))
997 997 allactions.append((book, _bmaction(old, new)))
998 998 checkdata = bookmod.binaryencode(data)
999 999 bundler.newpart('bookmarks', data=checkdata)
1000 1000
1001 1001 def handlereply(op):
1002 1002 ui = pushop.ui
1003 1003 # if success
1004 1004 for book, action in allactions:
1005 1005 ui.status(bookmsgmap[action][0] % book)
1006 1006
1007 1007 return handlereply
1008 1008
1009 1009 def _pushb2bookmarkspushkey(pushop, bundler):
1010 1010 pushop.stepsdone.add('bookmarks')
1011 1011 part2book = []
1012 1012 enc = pushkey.encode
1013 1013
1014 1014 def handlefailure(pushop, exc):
1015 1015 targetid = int(exc.partid)
1016 1016 for partid, book, action in part2book:
1017 1017 if partid == targetid:
1018 1018 raise error.Abort(bookmsgmap[action][1].rstrip() % book)
1019 1019 # we should not be called for a part we did not generate
1020 1020 assert False
1021 1021
1022 1022 for book, old, new in pushop.outbookmarks:
1023 1023 part = bundler.newpart('pushkey')
1024 1024 part.addparam('namespace', enc('bookmarks'))
1025 1025 part.addparam('key', enc(book))
1026 1026 part.addparam('old', enc(old))
1027 1027 part.addparam('new', enc(new))
1028 1028 action = 'update'
1029 1029 if not old:
1030 1030 action = 'export'
1031 1031 elif not new:
1032 1032 action = 'delete'
1033 1033 part2book.append((part.id, book, action))
1034 1034 pushop.pkfailcb[part.id] = handlefailure
1035 1035
1036 1036 def handlereply(op):
1037 1037 ui = pushop.ui
1038 1038 for partid, book, action in part2book:
1039 1039 partrep = op.records.getreplies(partid)
1040 1040 results = partrep['pushkey']
1041 1041 assert len(results) <= 1
1042 1042 if not results:
1043 1043 pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
1044 1044 else:
1045 1045 ret = int(results[0]['return'])
1046 1046 if ret:
1047 1047 ui.status(bookmsgmap[action][0] % book)
1048 1048 else:
1049 1049 ui.warn(bookmsgmap[action][1] % book)
1050 1050 if pushop.bkresult is not None:
1051 1051 pushop.bkresult = 1
1052 1052 return handlereply
1053 1053
1054 1054 @b2partsgenerator('pushvars', idx=0)
1055 1055 def _getbundlesendvars(pushop, bundler):
1056 1056 '''send shellvars via bundle2'''
1057 1057 pushvars = pushop.pushvars
1058 1058 if pushvars:
1059 1059 shellvars = {}
1060 1060 for raw in pushvars:
1061 1061 if '=' not in raw:
1062 1062 msg = ("unable to parse variable '%s', should follow "
1063 1063 "'KEY=VALUE' or 'KEY=' format")
1064 1064 raise error.Abort(msg % raw)
1065 1065 k, v = raw.split('=', 1)
1066 1066 shellvars[k] = v
1067 1067
1068 1068 part = bundler.newpart('pushvars')
1069 1069
1070 1070 for key, value in shellvars.iteritems():
1071 1071 part.addparam(key, value, mandatory=False)
1072 1072
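# Example (illustrative): the variables come from the command line, e.g.
# `hg push --pushvars "DEBUG=1"`; hooks on a server that has enabled
# pushvars can then typically read them back as HG_USERVAR_* environment
# variables. The variable name here is made up.
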
1073 1073 def _pushbundle2(pushop):
1074 1074 """push data to the remote using bundle2
1075 1075
1076 1076 The only currently supported type of data is changegroup but this will
1077 1077 evolve in the future."""
1078 1078 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
1079 1079 pushback = (pushop.trmanager
1080 1080 and pushop.ui.configbool('experimental', 'bundle2.pushback'))
1081 1081
1082 1082 # create reply capability
1083 1083 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
1084 1084 allowpushback=pushback,
1085 1085 role='client'))
1086 1086 bundler.newpart('replycaps', data=capsblob)
1087 1087 replyhandlers = []
1088 1088 for partgenname in b2partsgenorder:
1089 1089 partgen = b2partsgenmapping[partgenname]
1090 1090 ret = partgen(pushop, bundler)
1091 1091 if callable(ret):
1092 1092 replyhandlers.append(ret)
1093 1093 # do not push if nothing to push
1094 1094 if bundler.nbparts <= 1:
1095 1095 return
1096 1096 stream = util.chunkbuffer(bundler.getchunks())
1097 1097 try:
1098 1098 try:
1099 1099 reply = pushop.remote.unbundle(
1100 1100 stream, ['force'], pushop.remote.url())
1101 1101 except error.BundleValueError as exc:
1102 1102 raise error.Abort(_('missing support for %s') % exc)
1103 1103 try:
1104 1104 trgetter = None
1105 1105 if pushback:
1106 1106 trgetter = pushop.trmanager.transaction
1107 1107 op = bundle2.processbundle(pushop.repo, reply, trgetter)
1108 1108 except error.BundleValueError as exc:
1109 1109 raise error.Abort(_('missing support for %s') % exc)
1110 1110 except bundle2.AbortFromPart as exc:
1111 1111 pushop.ui.status(_('remote: %s\n') % exc)
1112 1112 if exc.hint is not None:
1113 1113 pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
1114 1114 raise error.Abort(_('push failed on remote'))
1115 1115 except error.PushkeyFailed as exc:
1116 1116 partid = int(exc.partid)
1117 1117 if partid not in pushop.pkfailcb:
1118 1118 raise
1119 1119 pushop.pkfailcb[partid](pushop, exc)
1120 1120 for rephand in replyhandlers:
1121 1121 rephand(op)
1122 1122
1123 1123 def _pushchangeset(pushop):
1124 1124 """Make the actual push of changeset bundle to remote repo"""
1125 1125 if 'changesets' in pushop.stepsdone:
1126 1126 return
1127 1127 pushop.stepsdone.add('changesets')
1128 1128 if not _pushcheckoutgoing(pushop):
1129 1129 return
1130 1130
1131 1131 # Should have verified this in push().
1132 1132 assert pushop.remote.capable('unbundle')
1133 1133
1134 1134 pushop.repo.prepushoutgoinghooks(pushop)
1135 1135 outgoing = pushop.outgoing
1136 1136 # TODO: get bundlecaps from remote
1137 1137 bundlecaps = None
1138 1138 # create a changegroup from local
1139 1139 if pushop.revs is None and not (outgoing.excluded
1140 1140 or pushop.repo.changelog.filteredrevs):
1141 1141 # push everything,
1142 1142 # use the fast path, no race possible on push
1143 1143 cg = changegroup.makechangegroup(pushop.repo, outgoing, '01', 'push',
1144 1144 fastpath=True, bundlecaps=bundlecaps)
1145 1145 else:
1146 1146 cg = changegroup.makechangegroup(pushop.repo, outgoing, '01',
1147 1147 'push', bundlecaps=bundlecaps)
1148 1148
1149 1149 # apply changegroup to remote
1150 1150 # the local repo finds the heads on the server, and figures out
1151 1151 # which revs it must push. Once the revs are transferred, if the
1152 1152 # server finds it has different heads (someone else won the
1153 1153 # commit/push race), the server aborts.
1154 1154 if pushop.force:
1155 1155 remoteheads = ['force']
1156 1156 else:
1157 1157 remoteheads = pushop.remoteheads
1158 1158 # ssh: return remote's addchangegroup()
1159 1159 # http: return remote's addchangegroup() or 0 for error
1160 1160 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
1161 1161 pushop.repo.url())
1162 1162
1163 1163 def _pushsyncphase(pushop):
1164 1164 """synchronise phase information locally and remotely"""
1165 1165 cheads = pushop.commonheads
1166 1166 # even when we don't push, exchanging phase data is useful
1167 1167 remotephases = pushop.remote.listkeys('phases')
1168 1168 if (pushop.ui.configbool('ui', '_usedassubrepo')
1169 1169 and remotephases # server supports phases
1170 1170 and pushop.cgresult is None # nothing was pushed
1171 1171 and remotephases.get('publishing', False)):
1172 1172 # When:
1173 1173 # - this is a subrepo push
1174 1174 # - and the remote supports phases
1175 1175 # - and no changesets were pushed
1176 1176 # - and the remote is publishing
1177 1177 # We may be in the issue 3871 case!
1178 1178 # We drop the phase synchronisation that would otherwise be done as
1179 1179 # a courtesy and that could publish, on the remote, changesets that
1180 1180 # are possibly still draft locally.
1181 1181 remotephases = {'publishing': 'True'}
1182 1182 if not remotephases: # old server, or public-only reply from non-publishing
1183 1183 _localphasemove(pushop, cheads)
1184 1184 # don't push any phase data as there is nothing to push
1185 1185 else:
1186 1186 ana = phases.analyzeremotephases(pushop.repo, cheads,
1187 1187 remotephases)
1188 1188 pheads, droots = ana
1189 1189 ### Apply remote phase on local
1190 1190 if remotephases.get('publishing', False):
1191 1191 _localphasemove(pushop, cheads)
1192 1192 else: # publish = False
1193 1193 _localphasemove(pushop, pheads)
1194 1194 _localphasemove(pushop, cheads, phases.draft)
1195 1195 ### Apply local phase on remote
1196 1196
1197 1197 if pushop.cgresult:
1198 1198 if 'phases' in pushop.stepsdone:
1199 1199 # phases already pushed through bundle2
1200 1200 return
1201 1201 outdated = pushop.outdatedphases
1202 1202 else:
1203 1203 outdated = pushop.fallbackoutdatedphases
1204 1204
1205 1205 pushop.stepsdone.add('phases')
1206 1206
1207 1207 # filter heads already turned public by the push
1208 1208 outdated = [c for c in outdated if c.node() not in pheads]
1209 1209 # fallback to independent pushkey command
1210 1210 for newremotehead in outdated:
1211 1211 r = pushop.remote.pushkey('phases',
1212 1212 newremotehead.hex(),
1213 1213 ('%d' % phases.draft),
1214 1214 ('%d' % phases.public))
1215 1215 if not r:
1216 1216 pushop.ui.warn(_('updating %s to public failed!\n')
1217 1217 % newremotehead)
1218 1218
1219 1219 def _localphasemove(pushop, nodes, phase=phases.public):
1220 1220 """move <nodes> to <phase> in the local source repo"""
1221 1221 if pushop.trmanager:
1222 1222 phases.advanceboundary(pushop.repo,
1223 1223 pushop.trmanager.transaction(),
1224 1224 phase,
1225 1225 nodes)
1226 1226 else:
1227 1227 # repo is not locked, do not change any phases!
1228 1228 # Inform the user that phases should have been moved when
1229 1229 # applicable.
1230 1230 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
1231 1231 phasestr = phases.phasenames[phase]
1232 1232 if actualmoves:
1233 1233 pushop.ui.status(_('cannot lock source repo, skipping '
1234 1234 'local %s phase update\n') % phasestr)
1235 1235
1236 1236 def _pushobsolete(pushop):
1237 1237 """utility function to push obsolete markers to a remote"""
1238 1238 if 'obsmarkers' in pushop.stepsdone:
1239 1239 return
1240 1240 repo = pushop.repo
1241 1241 remote = pushop.remote
1242 1242 pushop.stepsdone.add('obsmarkers')
1243 1243 if pushop.outobsmarkers:
1244 1244 pushop.ui.debug('try to push obsolete markers to remote\n')
1245 1245 rslts = []
1246 1246 remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
1247 1247 for key in sorted(remotedata, reverse=True):
1248 1248 # reverse sort to ensure we end with dump0
1249 1249 data = remotedata[key]
1250 1250 rslts.append(remote.pushkey('obsolete', key, '', data))
1251 1251 if [r for r in rslts if not r]:
1252 1252 msg = _('failed to push some obsolete markers!\n')
1253 1253 repo.ui.warn(msg)
1254 1254
1255 1255 def _pushbookmark(pushop):
1256 1256 """Update bookmark position on remote"""
1257 1257 if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
1258 1258 return
1259 1259 pushop.stepsdone.add('bookmarks')
1260 1260 ui = pushop.ui
1261 1261 remote = pushop.remote
1262 1262
1263 1263 for b, old, new in pushop.outbookmarks:
1264 1264 action = 'update'
1265 1265 if not old:
1266 1266 action = 'export'
1267 1267 elif not new:
1268 1268 action = 'delete'
1269 1269 if remote.pushkey('bookmarks', b, old, new):
1270 1270 ui.status(bookmsgmap[action][0] % b)
1271 1271 else:
1272 1272 ui.warn(bookmsgmap[action][1] % b)
1273 1273 # discovery can have set the value from an invalid entry
1274 1274 if pushop.bkresult is not None:
1275 1275 pushop.bkresult = 1
1276 1276
1277 1277 class pulloperation(object):
1278 1278 """A object that represent a single pull operation
1279 1279
1280 1280 It purpose is to carry pull related state and very common operation.
1281 1281
1282 1282 A new should be created at the beginning of each pull and discarded
1283 1283 afterward.
1284 1284 """
1285 1285
1286 1286 def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
1287 1287 remotebookmarks=None, streamclonerequested=None):
1288 1288 # repo we pull into
1289 1289 self.repo = repo
1290 1290 # repo we pull from
1291 1291 self.remote = remote
1292 1292 # revision we try to pull (None is "all")
1293 1293 self.heads = heads
1294 1294 # bookmarks pulled explicitly
1295 1295 self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
1296 1296 for bookmark in bookmarks]
1297 1297 # do we force pull?
1298 1298 self.force = force
1299 1299 # whether a streaming clone was requested
1300 1300 self.streamclonerequested = streamclonerequested
1301 1301 # transaction manager
1302 1302 self.trmanager = None
1303 1303 # set of common changesets between local and remote before pull
1304 1304 self.common = None
1305 1305 # set of pulled heads
1306 1306 self.rheads = None
1307 1307 # list of missing changesets to fetch remotely
1308 1308 self.fetch = None
1309 1309 # remote bookmarks data
1310 1310 self.remotebookmarks = remotebookmarks
1311 1311 # result of changegroup pulling (used as return code by pull)
1312 1312 self.cgresult = None
1313 1313 # set of steps already done
1314 1314 self.stepsdone = set()
1315 1315 # Whether we attempted a clone from pre-generated bundles.
1316 1316 self.clonebundleattempted = False
1317 1317
1318 1318 @util.propertycache
1319 1319 def pulledsubset(self):
1320 1320 """heads of the set of changeset target by the pull"""
1321 1321 # compute target subset
1322 1322 if self.heads is None:
1323 1323 # We pulled everything possible
1324 1324 # sync on everything common
1325 1325 c = set(self.common)
1326 1326 ret = list(self.common)
1327 1327 for n in self.rheads:
1328 1328 if n not in c:
1329 1329 ret.append(n)
1330 1330 return ret
1331 1331 else:
1332 1332 # We pulled a specific subset
1333 1333 # sync on this subset
1334 1334 return self.heads
1335 1335
1336 1336 @util.propertycache
1337 1337 def canusebundle2(self):
1338 1338 return not _forcebundle1(self)
1339 1339
1340 1340 @util.propertycache
1341 1341 def remotebundle2caps(self):
1342 1342 return bundle2.bundle2caps(self.remote)
1343 1343
1344 1344 def gettransaction(self):
1345 1345 # deprecated; talk to trmanager directly
1346 1346 return self.trmanager.transaction()
1347 1347
1348 1348 class transactionmanager(util.transactional):
1349 1349 """An object to manage the life cycle of a transaction
1350 1350
1351 1351 It creates the transaction on demand and calls the appropriate hooks when
1352 1352 closing the transaction."""
1353 1353 def __init__(self, repo, source, url):
1354 1354 self.repo = repo
1355 1355 self.source = source
1356 1356 self.url = url
1357 1357 self._tr = None
1358 1358
1359 1359 def transaction(self):
1360 1360 """Return an open transaction object, constructing if necessary"""
1361 1361 if not self._tr:
1362 1362 trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
1363 1363 self._tr = self.repo.transaction(trname)
1364 1364 self._tr.hookargs['source'] = self.source
1365 1365 self._tr.hookargs['url'] = self.url
1366 1366 return self._tr
1367 1367
1368 1368 def close(self):
1369 1369 """close transaction if created"""
1370 1370 if self._tr is not None:
1371 1371 self._tr.close()
1372 1372
1373 1373 def release(self):
1374 1374 """release transaction if created"""
1375 1375 if self._tr is not None:
1376 1376 self._tr.release()
1377 1377
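# Example (illustrative sketch): typical use as a context manager, with
# the transaction created lazily on first call; compare pull() below.
#
#   trmanager = transactionmanager(repo, 'pull', remote.url())
#   with trmanager:
#       tr = trmanager.transaction()  # created on demand, hook args set
#       # ... apply incoming data under tr ...
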
1378 1378 def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
1379 1379 streamclonerequested=None):
1380 1380 """Fetch repository data from a remote.
1381 1381
1382 1382 This is the main function used to retrieve data from a remote repository.
1383 1383
1384 1384 ``repo`` is the local repository to clone into.
1385 1385 ``remote`` is a peer instance.
1386 1386 ``heads`` is an iterable of revisions we want to pull. ``None`` (the
1387 1387 default) means to pull everything from the remote.
1388 1388 ``bookmarks`` is an iterable of bookmarks requested to be pulled. By
1389 1389 default, all remote bookmarks are pulled.
1390 1390 ``opargs`` are additional keyword arguments to pass to ``pulloperation``
1391 1391 initialization.
1392 1392 ``streamclonerequested`` is a boolean indicating whether a "streaming
1393 1393 clone" is requested. A "streaming clone" is essentially a raw file copy
1394 1394 of revlogs from the server. This only works when the local repository is
1395 1395 empty. The default value of ``None`` means to respect the server
1396 1396 configuration for preferring stream clones.
1397 1397
1398 1398 Returns the ``pulloperation`` created for this pull.
1399 1399 """
1400 1400 if opargs is None:
1401 1401 opargs = {}
1402 1402 pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
1403 1403 streamclonerequested=streamclonerequested,
1404 1404 **pycompat.strkwargs(opargs))
1405 1405
1406 1406 peerlocal = pullop.remote.local()
1407 1407 if peerlocal:
1408 1408 missing = set(peerlocal.requirements) - pullop.repo.supported
1409 1409 if missing:
1410 1410 msg = _("required features are not"
1411 1411 " supported in the destination:"
1412 1412 " %s") % (', '.join(sorted(missing)))
1413 1413 raise error.Abort(msg)
1414 1414
1415 1415 pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
1416 1416 with repo.wlock(), repo.lock(), pullop.trmanager:
1417 1417 # This should ideally be in _pullbundle2(). However, it needs to run
1418 1418 # before discovery to avoid extra work.
1419 1419 _maybeapplyclonebundle(pullop)
1420 1420 streamclone.maybeperformlegacystreamclone(pullop)
1421 1421 _pulldiscovery(pullop)
1422 1422 if pullop.canusebundle2:
1423 1423 _pullbundle2(pullop)
1424 1424 _pullchangeset(pullop)
1425 1425 _pullphase(pullop)
1426 1426 _pullbookmarks(pullop)
1427 1427 _pullobsolete(pullop)
1428 1428
1429 1429 # storing remotenames
1430 1430 if repo.ui.configbool('experimental', 'remotenames'):
1431 1431 logexchange.pullremotenames(repo, remote)
1432 1432
1433 1433 return pullop
1434 1434
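# Example (illustrative sketch): a minimal caller; the peer URL is made
# up, and error handling is omitted.
#
#   from mercurial import exchange, hg
#   other = hg.peer(repo, {}, 'https://example.com/repo')
#   pullop = exchange.pull(repo, other)
#   # pullop.cgresult carries the changegroup result (used as return code)
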
1435 1435 # list of steps to perform discovery before pull
1436 1436 pulldiscoveryorder = []
1437 1437
1438 1438 # Mapping between step name and function
1439 1439 #
1440 1440 # This exists to help extensions wrap steps if necessary
1441 1441 pulldiscoverymapping = {}
1442 1442
1443 1443 def pulldiscovery(stepname):
1444 1444 """decorator for function performing discovery before pull
1445 1445
1446 1446 The function is added to the step -> function mapping and appended to the
1447 1447 list of steps. Beware that decorated functions will be added in order (this
1448 1448 may matter).
1449 1449
1450 1450 You can only use this decorator for a new step; if you want to wrap a step
1451 1451 from an extension, change the pulldiscoverymapping dictionary directly."""
1452 1452 def dec(func):
1453 1453 assert stepname not in pulldiscoverymapping
1454 1454 pulldiscoverymapping[stepname] = func
1455 1455 pulldiscoveryorder.append(stepname)
1456 1456 return func
1457 1457 return dec
1458 1458
1459 1459 def _pulldiscovery(pullop):
1460 1460 """Run all discovery steps"""
1461 1461 for stepname in pulldiscoveryorder:
1462 1462 step = pulldiscoverymapping[stepname]
1463 1463 step(pullop)
1464 1464
1465 1465 @pulldiscovery('b1:bookmarks')
1466 1466 def _pullbookmarkbundle1(pullop):
1467 1467 """fetch bookmark data in bundle1 case
1468 1468
1469 1469 If not using bundle2, we have to fetch bookmarks before changeset
1470 1470 discovery to reduce the chance and impact of race conditions."""
1471 1471 if pullop.remotebookmarks is not None:
1472 1472 return
1473 1473 if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
1474 1474 # all known bundle2 servers now support listkeys, but let's be nice to
1475 1475 # new implementations.
1476 1476 return
1477 1477 books = pullop.remote.listkeys('bookmarks')
1478 1478 pullop.remotebookmarks = bookmod.unhexlifybookmarks(books)
1479 1479
1480 1480
1481 1481 @pulldiscovery('changegroup')
1482 1482 def _pulldiscoverychangegroup(pullop):
1483 1483 """discovery phase for the pull
1484 1484
1485 1485 Currently handles changeset discovery only; it will change to handle all
1486 1486 discovery at some point."""
1487 1487 tmp = discovery.findcommonincoming(pullop.repo,
1488 1488 pullop.remote,
1489 1489 heads=pullop.heads,
1490 1490 force=pullop.force)
1491 1491 common, fetch, rheads = tmp
1492 1492 nm = pullop.repo.unfiltered().changelog.nodemap
1493 1493 if fetch and rheads:
1494 1494 # If a remote head is filtered locally, put it back in common.
1495 1495 #
1496 1496 # This is a hackish solution to catch most of the "common but locally
1497 1497 # hidden" situations. We do not perform discovery on the unfiltered
1498 1498 # repository because it ends up doing a pathological amount of round
1499 1499 # trips for a huge amount of changesets we do not care about.
1500 1500 #
1501 1501 # If a set of such "common but filtered" changesets exists on the server
1502 1502 # but does not include a remote head, we'll not be able to detect it,
1503 1503 scommon = set(common)
1504 1504 for n in rheads:
1505 1505 if n in nm:
1506 1506 if n not in scommon:
1507 1507 common.append(n)
1508 1508 if set(rheads).issubset(set(common)):
1509 1509 fetch = []
1510 1510 pullop.common = common
1511 1511 pullop.fetch = fetch
1512 1512 pullop.rheads = rheads
1513 1513
1514 1514 def _pullbundle2(pullop):
1515 1515 """pull data using bundle2
1516 1516
1517 1517 For now, the only supported data is the changegroup."""
1518 1518 kwargs = {'bundlecaps': caps20to10(pullop.repo, role='client')}
1519 1519
1520 1520 # make ui easier to access
1521 1521 ui = pullop.repo.ui
1522 1522
1523 1523 # At the moment we don't do stream clones over bundle2. If that is
1524 1524 # implemented then here's where the check for that will go.
1525 1525 streaming = streamclone.canperformstreamclone(pullop, bundle2=True)[0]
1526 1526
1527 1527 # declare pull perimeters
1528 1528 kwargs['common'] = pullop.common
1529 1529 kwargs['heads'] = pullop.heads or pullop.rheads
1530 1530
1531 1531 if streaming:
1532 1532 kwargs['cg'] = False
1533 1533 kwargs['stream'] = True
1534 1534 pullop.stepsdone.add('changegroup')
1535 1535 pullop.stepsdone.add('phases')
1536 1536
1537 1537 else:
1538 1538 # pulling changegroup
1539 1539 pullop.stepsdone.add('changegroup')
1540 1540
1541 1541 kwargs['cg'] = pullop.fetch
1542 1542
1543 1543 legacyphase = 'phases' in ui.configlist('devel', 'legacy.exchange')
1544 1544 hasbinaryphase = 'heads' in pullop.remotebundle2caps.get('phases', ())
1545 1545 if (not legacyphase and hasbinaryphase):
1546 1546 kwargs['phases'] = True
1547 1547 pullop.stepsdone.add('phases')
1548 1548
1549 1549 if 'listkeys' in pullop.remotebundle2caps:
1550 1550 if 'phases' not in pullop.stepsdone:
1551 1551 kwargs['listkeys'] = ['phases']
1552 1552
1553 1553 bookmarksrequested = False
1554 1554 legacybookmark = 'bookmarks' in ui.configlist('devel', 'legacy.exchange')
1555 1555 hasbinarybook = 'bookmarks' in pullop.remotebundle2caps
1556 1556
1557 1557 if pullop.remotebookmarks is not None:
1558 1558 pullop.stepsdone.add('request-bookmarks')
1559 1559
1560 1560 if ('request-bookmarks' not in pullop.stepsdone
1561 1561 and pullop.remotebookmarks is None
1562 1562 and not legacybookmark and hasbinarybook):
1563 1563 kwargs['bookmarks'] = True
1564 1564 bookmarksrequested = True
1565 1565
1566 1566 if 'listkeys' in pullop.remotebundle2caps:
1567 1567 if 'request-bookmarks' not in pullop.stepsdone:
1568 1568 # make sure to always include bookmark data when migrating
1569 1569 # `hg incoming --bundle` to using this function.
1570 1570 pullop.stepsdone.add('request-bookmarks')
1571 1571 kwargs.setdefault('listkeys', []).append('bookmarks')
1572 1572
1573 1573 # If this is a full pull / clone and the server supports the clone bundles
1574 1574 # feature, tell the server whether we attempted a clone bundle. The
1575 1575 # presence of this flag indicates the client supports clone bundles. This
1576 1576 # will enable the server to treat clients that support clone bundles
1577 1577 # differently from those that don't.
1578 1578 if (pullop.remote.capable('clonebundles')
1579 1579 and pullop.heads is None and list(pullop.common) == [nullid]):
1580 1580 kwargs['cbattempted'] = pullop.clonebundleattempted
1581 1581
1582 1582 if streaming:
1583 1583 pullop.repo.ui.status(_('streaming all changes\n'))
1584 1584 elif not pullop.fetch:
1585 1585 pullop.repo.ui.status(_("no changes found\n"))
1586 1586 pullop.cgresult = 0
1587 1587 else:
1588 1588 if pullop.heads is None and list(pullop.common) == [nullid]:
1589 1589 pullop.repo.ui.status(_("requesting all changes\n"))
1590 1590 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1591 1591 remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
1592 1592 if obsolete.commonversion(remoteversions) is not None:
1593 1593 kwargs['obsmarkers'] = True
1594 1594 pullop.stepsdone.add('obsmarkers')
1595 1595 _pullbundle2extraprepare(pullop, kwargs)
1596 1596 bundle = pullop.remote.getbundle('pull', **pycompat.strkwargs(kwargs))
1597 1597 try:
1598 op = bundle2.bundleoperation(pullop.repo, pullop.gettransaction)
1598 op = bundle2.bundleoperation(pullop.repo, pullop.gettransaction,
1599 source='pull')
1599 1600 op.modes['bookmarks'] = 'records'
1600 1601 bundle2.processbundle(pullop.repo, bundle, op=op)
1601 1602 except bundle2.AbortFromPart as exc:
1602 1603 pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
1603 1604 raise error.Abort(_('pull failed on remote'), hint=exc.hint)
1604 1605 except error.BundleValueError as exc:
1605 1606 raise error.Abort(_('missing support for %s') % exc)
1606 1607
1607 1608 if pullop.fetch:
1608 1609 pullop.cgresult = bundle2.combinechangegroupresults(op)
1609 1610
1610 1611 # processing phases change
1611 1612 for namespace, value in op.records['listkeys']:
1612 1613 if namespace == 'phases':
1613 1614 _pullapplyphases(pullop, value)
1614 1615
1615 1616 # processing bookmark update
1616 1617 if bookmarksrequested:
1617 1618 books = {}
1618 1619 for record in op.records['bookmarks']:
1619 1620 books[record['bookmark']] = record["node"]
1620 1621 pullop.remotebookmarks = books
1621 1622 else:
1622 1623 for namespace, value in op.records['listkeys']:
1623 1624 if namespace == 'bookmarks':
1624 1625 pullop.remotebookmarks = bookmod.unhexlifybookmarks(value)
1625 1626
1626 1627 # bookmark data were either already there or pulled in the bundle
1627 1628 if pullop.remotebookmarks is not None:
1628 1629 _pullbookmarks(pullop)
1629 1630
1630 1631 def _pullbundle2extraprepare(pullop, kwargs):
1631 1632 """hook function so that extensions can extend the getbundle call"""
1632 1633
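# Example (illustrative sketch): an extension can use this hook by
# wrapping it; 'myarg' is a hypothetical extra getbundle argument.
#
#   from mercurial import exchange, extensions
#
#   def _extraprepare(orig, pullop, kwargs):
#       kwargs['myarg'] = True
#       return orig(pullop, kwargs)
#
#   def extsetup(ui):
#       extensions.wrapfunction(exchange, '_pullbundle2extraprepare',
#                               _extraprepare)
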
1633 1634 def _pullchangeset(pullop):
1634 1635 """pull changeset from unbundle into the local repo"""
1635 1636 # We delay opening the transaction as long as possible so we
1636 1637 # don't open a transaction for nothing, and so we don't break
1637 1638 # future useful rollback calls
1638 1639 if 'changegroup' in pullop.stepsdone:
1639 1640 return
1640 1641 pullop.stepsdone.add('changegroup')
1641 1642 if not pullop.fetch:
1642 1643 pullop.repo.ui.status(_("no changes found\n"))
1643 1644 pullop.cgresult = 0
1644 1645 return
1645 1646 tr = pullop.gettransaction()
1646 1647 if pullop.heads is None and list(pullop.common) == [nullid]:
1647 1648 pullop.repo.ui.status(_("requesting all changes\n"))
1648 1649 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
1649 1650 # issue1320, avoid a race if remote changed after discovery
1650 1651 pullop.heads = pullop.rheads
1651 1652
1652 1653 if pullop.remote.capable('getbundle'):
1653 1654 # TODO: get bundlecaps from remote
1654 1655 cg = pullop.remote.getbundle('pull', common=pullop.common,
1655 1656 heads=pullop.heads or pullop.rheads)
1656 1657 elif pullop.heads is None:
1657 1658 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
1658 1659 elif not pullop.remote.capable('changegroupsubset'):
1659 1660 raise error.Abort(_("partial pull cannot be done because "
1660 1661 "other repository doesn't support "
1661 1662 "changegroupsubset."))
1662 1663 else:
1663 1664 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
1664 1665 bundleop = bundle2.applybundle(pullop.repo, cg, tr, 'pull',
1665 1666 pullop.remote.url())
1666 1667 pullop.cgresult = bundle2.combinechangegroupresults(bundleop)
1667 1668
1668 1669 def _pullphase(pullop):
1669 1670 # Get remote phases data from remote
1670 1671 if 'phases' in pullop.stepsdone:
1671 1672 return
1672 1673 remotephases = pullop.remote.listkeys('phases')
1673 1674 _pullapplyphases(pullop, remotephases)
1674 1675
1675 1676 def _pullapplyphases(pullop, remotephases):
1676 1677 """apply phase movement from observed remote state"""
1677 1678 if 'phases' in pullop.stepsdone:
1678 1679 return
1679 1680 pullop.stepsdone.add('phases')
1680 1681 publishing = bool(remotephases.get('publishing', False))
1681 1682 if remotephases and not publishing:
1682 1683 # remote is new and non-publishing
1683 1684 pheads, _dr = phases.analyzeremotephases(pullop.repo,
1684 1685 pullop.pulledsubset,
1685 1686 remotephases)
1686 1687 dheads = pullop.pulledsubset
1687 1688 else:
1688 1689 # Remote is old or publishing; all common changesets
1689 1690 # should be seen as public
1690 1691 pheads = pullop.pulledsubset
1691 1692 dheads = []
1692 1693 unfi = pullop.repo.unfiltered()
1693 1694 phase = unfi._phasecache.phase
1694 1695 rev = unfi.changelog.nodemap.get
1695 1696 public = phases.public
1696 1697 draft = phases.draft
1697 1698
1698 1699 # exclude changesets already public locally and update the others
1699 1700 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
1700 1701 if pheads:
1701 1702 tr = pullop.gettransaction()
1702 1703 phases.advanceboundary(pullop.repo, tr, public, pheads)
1703 1704
1704 1705 # exclude changesets already draft locally and update the others
1705 1706 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
1706 1707 if dheads:
1707 1708 tr = pullop.gettransaction()
1708 1709 phases.advanceboundary(pullop.repo, tr, draft, dheads)
1709 1710
1710 1711 def _pullbookmarks(pullop):
1711 1712 """process the remote bookmark information to update the local one"""
1712 1713 if 'bookmarks' in pullop.stepsdone:
1713 1714 return
1714 1715 pullop.stepsdone.add('bookmarks')
1715 1716 repo = pullop.repo
1716 1717 remotebookmarks = pullop.remotebookmarks
1717 1718 bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
1718 1719 pullop.remote.url(),
1719 1720 pullop.gettransaction,
1720 1721 explicit=pullop.explicitbookmarks)
1721 1722
1722 1723 def _pullobsolete(pullop):
1723 1724 """utility function to pull obsolete markers from a remote
1724 1725
1725 1726 `gettransaction` is a function that returns the pull transaction, creating
1726 1727 one if necessary. We return the transaction to inform the calling code that
1727 1728 a new transaction has been created (when applicable).
1728 1729
1729 1730 Exists mostly to allow overriding for experimentation purposes"""
1730 1731 if 'obsmarkers' in pullop.stepsdone:
1731 1732 return
1732 1733 pullop.stepsdone.add('obsmarkers')
1733 1734 tr = None
1734 1735 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1735 1736 pullop.repo.ui.debug('fetching remote obsolete markers\n')
1736 1737 remoteobs = pullop.remote.listkeys('obsolete')
1737 1738 if 'dump0' in remoteobs:
1738 1739 tr = pullop.gettransaction()
1739 1740 markers = []
1740 1741 for key in sorted(remoteobs, reverse=True):
1741 1742 if key.startswith('dump'):
1742 1743 data = util.b85decode(remoteobs[key])
1743 1744 version, newmarks = obsolete._readmarkers(data)
1744 1745 markers += newmarks
1745 1746 if markers:
1746 1747 pullop.repo.obsstore.add(tr, markers)
1747 1748 pullop.repo.invalidatevolatilesets()
1748 1749 return tr
1749 1750
1750 1751 def caps20to10(repo, role):
1751 1752 """return a set with appropriate options to use bundle20 during getbundle"""
1752 1753 caps = {'HG20'}
1753 1754 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=role))
1754 1755 caps.add('bundle2=' + urlreq.quote(capsblob))
1755 1756 return caps
1756 1757
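# For instance, the returned set looks roughly like this (the quoted
# capabilities blob is abbreviated and will vary with the repository):
#
#   {'HG20', 'bundle2=HG20%0Achangegroup%3D01%2C02...'}
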
1757 1758 # List of names of steps to perform for a bundle2 for getbundle, order matters.
1758 1759 getbundle2partsorder = []
1759 1760
1760 1761 # Mapping between step name and function
1761 1762 #
1762 1763 # This exists to help extensions wrap steps if necessary
1763 1764 getbundle2partsmapping = {}
1764 1765
1765 1766 def getbundle2partsgenerator(stepname, idx=None):
1766 1767 """decorator for function generating bundle2 part for getbundle
1767 1768
1768 1769 The function is added to the step -> function mapping and appended to the
1769 1770 list of steps. Beware that decorated functions will be added in order
1770 1771 (this may matter).
1771 1772
1772 1773 You can only use this decorator for new steps; if you want to wrap a step
1773 1774 from an extension, modify the getbundle2partsmapping dictionary directly."""
1774 1775 def dec(func):
1775 1776 assert stepname not in getbundle2partsmapping
1776 1777 getbundle2partsmapping[stepname] = func
1777 1778 if idx is None:
1778 1779 getbundle2partsorder.append(stepname)
1779 1780 else:
1780 1781 getbundle2partsorder.insert(idx, stepname)
1781 1782 return func
1782 1783 return dec
1783 1784
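# Example (illustrative sketch): registering an extra part generator;
# the part name is hypothetical and would also need a matching bundle2
# part handler on the receiving side.
#
#   @getbundle2partsgenerator('mypart')
#   def _getbundlemypart(bundler, repo, source, bundlecaps=None,
#                        b2caps=None, **kwargs):
#       bundler.newpart('mypart', data='payload')
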
1784 1785 def bundle2requested(bundlecaps):
1785 1786 if bundlecaps is not None:
1786 1787 return any(cap.startswith('HG2') for cap in bundlecaps)
1787 1788 return False
1788 1789
1789 1790 def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
1790 1791 **kwargs):
1791 1792 """Return chunks constituting a bundle's raw data.
1792 1793
1793 1794 Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
1794 1795 passed.
1795 1796
1796 1797 Returns a 2-tuple of a dict with metadata about the generated bundle
1797 1798 and an iterator over raw chunks (of varying sizes).
1798 1799 """
1799 1800 kwargs = pycompat.byteskwargs(kwargs)
1800 1801 info = {}
1801 1802 usebundle2 = bundle2requested(bundlecaps)
1802 1803 # bundle10 case
1803 1804 if not usebundle2:
1804 1805 if bundlecaps and not kwargs.get('cg', True):
1805 1806 raise ValueError(_('request for bundle10 must include changegroup'))
1806 1807
1807 1808 if kwargs:
1808 1809 raise ValueError(_('unsupported getbundle arguments: %s')
1809 1810 % ', '.join(sorted(kwargs.keys())))
1810 1811 outgoing = _computeoutgoing(repo, heads, common)
1811 1812 info['bundleversion'] = 1
1812 1813 return info, changegroup.makestream(repo, outgoing, '01', source,
1813 1814 bundlecaps=bundlecaps)
1814 1815
1815 1816 # bundle20 case
1816 1817 info['bundleversion'] = 2
1817 1818 b2caps = {}
1818 1819 for bcaps in bundlecaps:
1819 1820 if bcaps.startswith('bundle2='):
1820 1821 blob = urlreq.unquote(bcaps[len('bundle2='):])
1821 1822 b2caps.update(bundle2.decodecaps(blob))
1822 1823 bundler = bundle2.bundle20(repo.ui, b2caps)
1823 1824
1824 1825 kwargs['heads'] = heads
1825 1826 kwargs['common'] = common
1826 1827
1827 1828 for name in getbundle2partsorder:
1828 1829 func = getbundle2partsmapping[name]
1829 1830 func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
1830 1831 **pycompat.strkwargs(kwargs))
1831 1832
1832 1833 info['prefercompressed'] = bundler.prefercompressed
1833 1834
1834 1835 return info, bundler.getchunks()
1835 1836
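# Example (illustrative sketch): writing the generated bundle to an
# already-open file object 'fh'.
#
#   info, chunks = getbundlechunks(repo, 'serve', heads=heads,
#                                  common=common, bundlecaps=caps)
#   for chunk in chunks:
#       fh.write(chunk)
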
1836 1837 @getbundle2partsgenerator('stream2')
1837 1838 def _getbundlestream2(bundler, repo, *args, **kwargs):
1838 1839 return bundle2.addpartbundlestream2(bundler, repo, **kwargs)
1839 1840
1840 1841 @getbundle2partsgenerator('changegroup')
1841 1842 def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
1842 1843 b2caps=None, heads=None, common=None, **kwargs):
1843 1844 """add a changegroup part to the requested bundle"""
1844 1845 cgstream = None
1845 1846 if kwargs.get(r'cg', True):
1846 1847 # build changegroup bundle here.
1847 1848 version = '01'
1848 1849 cgversions = b2caps.get('changegroup')
1849 1850 if cgversions: # 3.1 and 3.2 ship with an empty value
1850 1851 cgversions = [v for v in cgversions
1851 1852 if v in changegroup.supportedoutgoingversions(repo)]
1852 1853 if not cgversions:
1853 1854 raise ValueError(_('no common changegroup version'))
1854 1855 version = max(cgversions)
1855 1856 outgoing = _computeoutgoing(repo, heads, common)
1856 1857 if outgoing.missing:
1857 1858 cgstream = changegroup.makestream(repo, outgoing, version, source,
1858 1859 bundlecaps=bundlecaps)
1859 1860
1860 1861 if cgstream:
1861 1862 part = bundler.newpart('changegroup', data=cgstream)
1862 1863 if cgversions:
1863 1864 part.addparam('version', version)
1864 1865 part.addparam('nbchanges', '%d' % len(outgoing.missing),
1865 1866 mandatory=False)
1866 1867 if 'treemanifest' in repo.requirements:
1867 1868 part.addparam('treemanifest', '1')
1868 1869
1869 1870 @getbundle2partsgenerator('bookmarks')
1870 1871 def _getbundlebookmarkpart(bundler, repo, source, bundlecaps=None,
1871 1872 b2caps=None, **kwargs):
1872 1873 """add a bookmark part to the requested bundle"""
1873 1874 if not kwargs.get(r'bookmarks', False):
1874 1875 return
1875 1876 if 'bookmarks' not in b2caps:
1876 1877 raise ValueError(_('no common bookmarks exchange method'))
1877 1878 books = bookmod.listbinbookmarks(repo)
1878 1879 data = bookmod.binaryencode(books)
1879 1880 if data:
1880 1881 bundler.newpart('bookmarks', data=data)
1881 1882
1882 1883 @getbundle2partsgenerator('listkeys')
1883 1884 def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
1884 1885 b2caps=None, **kwargs):
1885 1886 """add parts containing listkeys namespaces to the requested bundle"""
1886 1887 listkeys = kwargs.get(r'listkeys', ())
1887 1888 for namespace in listkeys:
1888 1889 part = bundler.newpart('listkeys')
1889 1890 part.addparam('namespace', namespace)
1890 1891 keys = repo.listkeys(namespace).items()
1891 1892 part.data = pushkey.encodekeys(keys)
1892 1893
1893 1894 @getbundle2partsgenerator('obsmarkers')
1894 1895 def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
1895 1896 b2caps=None, heads=None, **kwargs):
1896 1897 """add an obsolescence markers part to the requested bundle"""
1897 1898 if kwargs.get(r'obsmarkers', False):
1898 1899 if heads is None:
1899 1900 heads = repo.heads()
1900 1901 subset = [c.node() for c in repo.set('::%ln', heads)]
1901 1902 markers = repo.obsstore.relevantmarkers(subset)
1902 1903 markers = sorted(markers)
1903 1904 bundle2.buildobsmarkerspart(bundler, markers)
1904 1905
1905 1906 @getbundle2partsgenerator('phases')
1906 1907 def _getbundlephasespart(bundler, repo, source, bundlecaps=None,
1907 1908 b2caps=None, heads=None, **kwargs):
1908 1909 """add phase heads part to the requested bundle"""
1909 1910 if kwargs.get(r'phases', False):
1910 1911 if 'heads' not in b2caps.get('phases', ()):
1911 1912 raise ValueError(_('no common phases exchange method'))
1912 1913 if heads is None:
1913 1914 heads = repo.heads()
1914 1915
1915 1916 headsbyphase = collections.defaultdict(set)
1916 1917 if repo.publishing():
1917 1918 headsbyphase[phases.public] = heads
1918 1919 else:
1919 1920 # find the appropriate heads to move
1920 1921
1921 1922 phase = repo._phasecache.phase
1922 1923 node = repo.changelog.node
1923 1924 rev = repo.changelog.rev
1924 1925 for h in heads:
1925 1926 headsbyphase[phase(repo, rev(h))].add(h)
1926 1927 seenphases = list(headsbyphase.keys())
1927 1928
1928 1929 # We do not handle anything but public and draft phases for now
1929 1930 if seenphases:
1930 1931 assert max(seenphases) <= phases.draft
1931 1932
1932 1933 # if client is pulling non-public changesets, we need to find
1933 1934 # intermediate public heads.
1934 1935 draftheads = headsbyphase.get(phases.draft, set())
1935 1936 if draftheads:
1936 1937 publicheads = headsbyphase.get(phases.public, set())
1937 1938
1938 1939 revset = 'heads(only(%ln, %ln) and public())'
1939 1940 extraheads = repo.revs(revset, draftheads, publicheads)
1940 1941 for r in extraheads:
1941 1942 headsbyphase[phases.public].add(node(r))
1942 1943
1943 1944 # transform data into the format used by the encoding function
1944 1945 phasemapping = []
1945 1946 for phase in phases.allphases:
1946 1947 phasemapping.append(sorted(headsbyphase[phase]))
1947 1948
1948 1949 # generate the actual part
1949 1950 phasedata = phases.binaryencode(phasemapping)
1950 1951 bundler.newpart('phase-heads', data=phasedata)
1951 1952
1952 1953 @getbundle2partsgenerator('hgtagsfnodes')
1953 1954 def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
1954 1955 b2caps=None, heads=None, common=None,
1955 1956 **kwargs):
1956 1957 """Transfer the .hgtags filenodes mapping.
1957 1958
1958 1959 Only values for heads in this bundle will be transferred.
1959 1960
1960 1961 The part data consists of pairs of 20 byte changeset node and .hgtags
1961 1962 filenodes raw values.
1962 1963 """
1963 1964 # Don't send unless:
1964 1965 # - changesets are being exchanged,
1965 1966 # - the client supports it.
1966 1967 if not (kwargs.get(r'cg', True) and 'hgtagsfnodes' in b2caps):
1967 1968 return
1968 1969
1969 1970 outgoing = _computeoutgoing(repo, heads, common)
1970 1971 bundle2.addparttagsfnodescache(repo, bundler, outgoing)
1971 1972
1972 1973 @getbundle2partsgenerator('cache:rev-branch-cache')
1973 1974 def _getbundlerevbranchcache(bundler, repo, source, bundlecaps=None,
1974 1975 b2caps=None, heads=None, common=None,
1975 1976 **kwargs):
1976 1977 """Transfer the rev-branch-cache mapping
1977 1978
1978 1979 The payload is a series of data related to each branch
1979 1980
1980 1981 1) branch name length
1981 1982 2) number of open heads
1982 1983 3) number of closed heads
1983 1984 4) open heads nodes
1984 1985 5) closed heads nodes
1985 1986 """
1986 1987 # Don't send unless:
1987 1988 # - changesets are being exchanged,
1988 1989 # - the client supports it.
1989 1990 if not (kwargs.get(r'cg', True)) or 'rev-branch-cache' not in b2caps:
1990 1991 return
1991 1992 outgoing = _computeoutgoing(repo, heads, common)
1992 1993 bundle2.addpartrevbranchcache(repo, bundler, outgoing)
1993 1994
1994 1995 def check_heads(repo, their_heads, context):
1995 1996 """check if the heads of a repo have been modified
1996 1997
1997 1998 Used by peer for unbundling.
1998 1999 """
1999 2000 heads = repo.heads()
2000 2001 heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
2001 2002 if not (their_heads == ['force'] or their_heads == heads or
2002 2003 their_heads == ['hashed', heads_hash]):
2003 2004 # someone else committed/pushed/unbundled while we
2004 2005 # were transferring data
2005 2006 raise error.PushRaced('repository changed while %s - '
2006 2007 'please try again' % context)
2007 2008
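# Example: a client using the 'hashed' form sends the sha1 of its view
# of the sorted server heads, mirroring the computation above;
# 'knownheads' is hypothetical.
#
#   expected = hashlib.sha1(''.join(sorted(knownheads))).digest()
#   check_heads(repo, ['hashed', expected], 'uploading changes')
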
2008 2009 def unbundle(repo, cg, heads, source, url):
2009 2010 """Apply a bundle to a repo.
2010 2011
2011 2012 This function makes sure the repo is locked during the application and has
2012 2013 a mechanism to check that no push race occurred between the creation of the
2013 2014 bundle and its application.
2014 2015
2015 2016 If the push was raced, a PushRaced exception is raised."""
2016 2017 r = 0
2017 2018 # need a transaction when processing a bundle2 stream
2018 2019 # [wlock, lock, tr] - needs to be an array so nested functions can modify it
2019 2020 lockandtr = [None, None, None]
2020 2021 recordout = None
2021 2022 # quick fix for output mismatch with bundle2 in 3.4
2022 2023 captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture')
2023 2024 if url.startswith('remote:http:') or url.startswith('remote:https:'):
2024 2025 captureoutput = True
2025 2026 try:
2026 2027 # note: outside bundle1, 'heads' is expected to be empty and this
2027 2028 # 'check_heads' call will be a no-op
2028 2029 check_heads(repo, heads, 'uploading changes')
2029 2030 # push can proceed
2030 2031 if not isinstance(cg, bundle2.unbundle20):
2031 2032 # legacy case: bundle1 (changegroup 01)
2032 2033 txnname = "\n".join([source, util.hidepassword(url)])
2033 2034 with repo.lock(), repo.transaction(txnname) as tr:
2034 2035 op = bundle2.applybundle(repo, cg, tr, source, url)
2035 2036 r = bundle2.combinechangegroupresults(op)
2036 2037 else:
2037 2038 r = None
2038 2039 try:
2039 2040 def gettransaction():
2040 2041 if not lockandtr[2]:
2041 2042 lockandtr[0] = repo.wlock()
2042 2043 lockandtr[1] = repo.lock()
2043 2044 lockandtr[2] = repo.transaction(source)
2044 2045 lockandtr[2].hookargs['source'] = source
2045 2046 lockandtr[2].hookargs['url'] = url
2046 2047 lockandtr[2].hookargs['bundle2'] = '1'
2047 2048 return lockandtr[2]
2048 2049
2049 2050 # Do greedy locking by default until we're satisfied with lazy
2050 2051 # locking.
2051 2052 if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
2052 2053 gettransaction()
2053 2054
2054 2055 op = bundle2.bundleoperation(repo, gettransaction,
2055 captureoutput=captureoutput)
2056 captureoutput=captureoutput,
2057 source='push')
2056 2058 try:
2057 2059 op = bundle2.processbundle(repo, cg, op=op)
2058 2060 finally:
2059 2061 r = op.reply
2060 2062 if captureoutput and r is not None:
2061 2063 repo.ui.pushbuffer(error=True, subproc=True)
2062 2064 def recordout(output):
2063 2065 r.newpart('output', data=output, mandatory=False)
2064 2066 if lockandtr[2] is not None:
2065 2067 lockandtr[2].close()
2066 2068 except BaseException as exc:
2067 2069 exc.duringunbundle2 = True
2068 2070 if captureoutput and r is not None:
2069 2071 parts = exc._bundle2salvagedoutput = r.salvageoutput()
2070 2072 def recordout(output):
2071 2073 part = bundle2.bundlepart('output', data=output,
2072 2074 mandatory=False)
2073 2075 parts.append(part)
2074 2076 raise
2075 2077 finally:
2076 2078 lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
2077 2079 if recordout is not None:
2078 2080 recordout(repo.ui.popbuffer())
2079 2081 return r
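
# Hedged usage sketch, not part of this file: feeding a stream read from
# a peer into unbundle(). The 'force' heads value skips the race check in
# check_heads(), and a 'remote:https:' url turns output capture on; the
# url and file handle here are invented.
#
#   cg = readbundle(repo.ui, fh, 'stream')
#   unbundle(repo, cg, ['force'], 'serve', 'remote:https://example.com/repo')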

def _maybeapplyclonebundle(pullop):
    """Apply a clone bundle from a remote, if possible."""

    repo = pullop.repo
    remote = pullop.remote

    if not repo.ui.configbool('ui', 'clonebundles'):
        return

    # Only run if local repo is empty.
    if len(repo):
        return

    if pullop.heads:
        return

    if not remote.capable('clonebundles'):
        return

    res = remote._call('clonebundles')

    # If we call the wire protocol command, that's good enough to record the
    # attempt.
    pullop.clonebundleattempted = True

    entries = parseclonebundlesmanifest(repo, res)
    if not entries:
        repo.ui.note(_('no clone bundles available on remote; '
                       'falling back to regular clone\n'))
        return

    entries = filterclonebundleentries(
        repo, entries, streamclonerequested=pullop.streamclonerequested)

    if not entries:
        # There is a thundering herd concern here. However, if a server
        # operator doesn't advertise bundles appropriate for its clients,
        # they deserve what's coming. Furthermore, from a client's
        # perspective, no automatic fallback would mean not being able to
        # clone!
        repo.ui.warn(_('no compatible clone bundles available on server; '
                       'falling back to regular clone\n'))
        repo.ui.warn(_('(you may want to report this to the server '
                       'operator)\n'))
        return

    entries = sortclonebundleentries(repo.ui, entries)

    url = entries[0]['URL']
    repo.ui.status(_('applying clone bundle from %s\n') % url)
    if trypullbundlefromurl(repo.ui, repo, url):
        repo.ui.status(_('finished applying clone bundle\n'))
    # Bundle failed.
    #
    # We abort by default to avoid the thundering herd of
    # clients flooding a server that was expecting expensive
    # clone load to be offloaded.
    elif repo.ui.configbool('ui', 'clonebundlefallback'):
        repo.ui.warn(_('falling back to normal clone\n'))
    else:
        raise error.Abort(_('error applying bundle'),
                          hint=_('if this error persists, consider contacting '
                                 'the server operator or disable clone '
                                 'bundles via '
                                 '"--config ui.clonebundles=false"'))

def parseclonebundlesmanifest(repo, s):
    """Parses the raw text of a clone bundles manifest.

    Returns a list of dicts. The dicts have a ``URL`` key corresponding
    to the URL and other keys are the attributes for the entry.
    """
    m = []
    for line in s.splitlines():
        fields = line.split()
        if not fields:
            continue
        attrs = {'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split('=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            attrs[key] = value

            # Parse BUNDLESPEC into components. This makes client-side
            # preferences easier to specify since you can prefer a single
            # component of the BUNDLESPEC.
            if key == 'BUNDLESPEC':
                try:
                    bundlespec = parsebundlespec(repo, value,
                                                 externalnames=True)
                    attrs['COMPRESSION'] = bundlespec.compression
                    attrs['VERSION'] = bundlespec.version
                except error.InvalidBundleSpecification:
                    pass
                except error.UnsupportedBundleSpecification:
                    pass

        m.append(attrs)

    return m
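
# Hedged worked example, not from the original file: a manifest line such
# as (URL and attribute values invented)
#
#   https://example.com/full.hg BUNDLESPEC=gzip-v2 REQUIRESNI=true
#
# would parse to an entry roughly like
#
#   {'URL': 'https://example.com/full.hg', 'BUNDLESPEC': 'gzip-v2',
#    'COMPRESSION': 'gzip', 'VERSION': 'v2', 'REQUIRESNI': 'true'}
#
# with COMPRESSION and VERSION split out of BUNDLESPEC as described above.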

def isstreamclonespec(bundlespec):
    # Stream clone v1
    if (bundlespec.compression == 'UN' and bundlespec.version == 's1'):
        return True

    # Stream clone v2
    if (bundlespec.compression == 'UN' and bundlespec.version == '02' and
            bundlespec.contentopts.get('streamv2')):
        return True

    return False

def filterclonebundleentries(repo, entries, streamclonerequested=False):
    """Remove incompatible clone bundle manifest entries.

    Accepts a list of entries parsed with ``parseclonebundlesmanifest``
    and returns a new list consisting of only the entries that this client
    should be able to apply.

    There is no guarantee we'll be able to apply all returned entries because
    the metadata we use to filter on may be missing or wrong.
    """
    newentries = []
    for entry in entries:
        spec = entry.get('BUNDLESPEC')
        if spec:
            try:
                bundlespec = parsebundlespec(repo, spec, strict=True)

                # If a stream clone was requested, filter out non-streamclone
                # entries.
                if streamclonerequested and not isstreamclonespec(bundlespec):
                    repo.ui.debug('filtering %s because not a stream clone\n' %
                                  entry['URL'])
                    continue

            except error.InvalidBundleSpecification as e:
                repo.ui.debug(str(e) + '\n')
                continue
            except error.UnsupportedBundleSpecification as e:
                repo.ui.debug('filtering %s because unsupported bundle '
                              'spec: %s\n' % (
                                  entry['URL'], stringutil.forcebytestr(e)))
                continue
        # If we don't have a spec and requested a stream clone, we don't know
        # what the entry is so don't attempt to apply it.
        elif streamclonerequested:
            repo.ui.debug('filtering %s because cannot determine if a stream '
                          'clone bundle\n' % entry['URL'])
            continue

        if 'REQUIRESNI' in entry and not sslutil.hassni:
            repo.ui.debug('filtering %s because SNI not supported\n' %
                          entry['URL'])
            continue

        newentries.append(entry)

    return newentries

class clonebundleentry(object):
    """Represents an item in a clone bundles manifest.

    This rich class is needed to support sorting since sorted() in Python 3
    doesn't support ``cmp`` and our comparison is complex enough that ``key=``
    won't work.
    """

    def __init__(self, value, prefers):
        self.value = value
        self.prefers = prefers

    def _cmp(self, other):
        for prefkey, prefvalue in self.prefers:
            avalue = self.value.get(prefkey)
            bvalue = other.value.get(prefkey)

            # Special case for b missing attribute and a matches exactly.
            if avalue is not None and bvalue is None and avalue == prefvalue:
                return -1

            # Special case for a missing attribute and b matches exactly.
            if bvalue is not None and avalue is None and bvalue == prefvalue:
                return 1

            # We can't compare unless attribute present on both.
            if avalue is None or bvalue is None:
                continue

            # Same values should fall back to next attribute.
            if avalue == bvalue:
                continue

            # Exact matches come first.
            if avalue == prefvalue:
                return -1
            if bvalue == prefvalue:
                return 1

            # Fall back to next attribute.
            continue

        # If we got here we couldn't sort by attributes and prefers. Fall
        # back to index order.
        return 0

    def __lt__(self, other):
        return self._cmp(other) < 0

    def __gt__(self, other):
        return self._cmp(other) > 0

    def __eq__(self, other):
        return self._cmp(other) == 0

    def __le__(self, other):
        return self._cmp(other) <= 0

    def __ge__(self, other):
        return self._cmp(other) >= 0

    def __ne__(self, other):
        return self._cmp(other) != 0

def sortclonebundleentries(ui, entries):
    prefers = ui.configlist('ui', 'clonebundleprefers')
    if not prefers:
        return list(entries)

    prefers = [p.split('=', 1) for p in prefers]

    items = sorted(clonebundleentry(v, prefers) for v in entries)
    return [i.value for i in items]
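
# Hedged usage sketch, not from the original file: with a configuration
# such as (values invented)
#
#   [ui]
#   clonebundleprefers = VERSION=v2, COMPRESSION=zstd
#
# prefers becomes [['VERSION', 'v2'], ['COMPRESSION', 'zstd']], so entries
# advertising VERSION=v2 sort first, ties are broken by COMPRESSION=zstd,
# and remaining ties keep manifest order (clonebundleentry._cmp returns 0).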

def trypullbundlefromurl(ui, repo, url):
    """Attempt to apply a bundle from a URL."""
    with repo.lock(), repo.transaction('bundleurl') as tr:
        try:
            fh = urlmod.open(ui, url)
            cg = readbundle(ui, fh, 'stream')

            if isinstance(cg, streamclone.streamcloneapplier):
                cg.apply(repo)
            else:
                bundle2.applybundle(repo, cg, tr, 'clonebundles', url)
            return True
        except urlerr.httperror as e:
            ui.warn(_('HTTP error fetching bundle: %s\n') %
                    stringutil.forcebytestr(e))
        except urlerr.urlerror as e:
            ui.warn(_('error fetching bundle: %s\n') %
                    stringutil.forcebytestr(e.reason))

    return False