exchange: improve computation of relevant markers for large repos...
Joerg Sonnenberger
r52560:b8647465 default
@@ -1,2675 +1,2675 @@
1 1 # bundle2.py - generic container format to transmit arbitrary data.
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """Handling of the new bundle2 format
8 8
9 9 The goal of bundle2 is to act as an atomic container to transmit a set of
10 10 payloads in an application-agnostic way. It consists of a sequence of "parts"
11 11 that will be handed to and processed by the application layer.
12 12
13 13
14 14 General format architecture
15 15 ===========================
16 16
17 17 The format is structured as follows
18 18
19 19 - magic string
20 20 - stream level parameters
21 21 - payload parts (any number)
22 22 - end of stream marker.
23 23
24 24 The binary format
25 25 ============================
26 26
27 27 All numbers are unsigned and big-endian.
28 28
29 29 stream level parameters
30 30 ------------------------
31 31
32 32 The binary format is as follows
33 33
34 34 :params size: int32
35 35
36 36 The total number of Bytes used by the parameters
37 37
38 38 :params value: arbitrary number of Bytes
39 39
40 40 A blob of `params size` containing the serialized version of all stream level
41 41 parameters.
42 42
43 43 The blob contains a space separated list of parameters. Parameters with value
44 44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45 45
46 46 Empty names are forbidden.
47 47
48 48 Names MUST start with a letter. If this first letter is lower case, the
49 49 parameter is advisory and can be safely ignored. However, when the first
50 50 letter is capital, the parameter is mandatory and the bundling process MUST
51 51 stop if it is not able to process it.
52 52
53 53 Stream parameters use a simple textual format for two main reasons:
54 54
55 55 - Stream level parameters should remain simple and we want to discourage any
56 56 crazy usage.
57 57 - Textual data allows easy human inspection of a bundle2 header in case of
58 58 trouble.
59 59
60 60 Any application level options MUST go into a bundle2 part instead.
61 61
62 62 Payload part
63 63 ------------------------
64 64
65 65 The binary format is as follows
66 66
67 67 :header size: int32
68 68
69 69 The total number of Bytes used by the part header. When the header is empty
70 70 (size = 0) this is interpreted as the end of stream marker.
71 71
72 72 :header:
73 73
74 74 The header defines how to interpret the part. It contains two pieces of
75 75 data: the part type, and the part parameters.
76 76 
77 77 The part type is used to route the part to an application level handler that
78 78 can interpret the payload.
79 79
80 80 Part parameters are passed to the application level handler. They are
81 81 meant to convey information that will help the application level handler
82 82 interpret the part payload.
83 83
84 84 The binary format of the header is as follows
85 85
86 86 :typesize: (one byte)
87 87
88 88 :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)
89 89
90 90 :partid: A 32bits integer (unique in the bundle) that can be used to refer
91 91 to this part.
92 92
93 93 :parameters:
94 94
95 95 A part's parameters may have arbitrary content; the binary structure is::
96 96
97 97 <mandatory-count><advisory-count><param-sizes><param-data>
98 98
99 99 :mandatory-count: 1 byte, number of mandatory parameters
100 100
101 101 :advisory-count: 1 byte, number of advisory parameters
102 102
103 103 :param-sizes:
104 104
105 105 N couples of bytes, where N is the total number of parameters. Each
106 106 couple contains (<size-of-key>, <size-of-value>) for one parameter.
107 107
108 108 :param-data:
109 109
110 110 A blob of bytes from which each parameter key and value can be
111 111 retrieved using the list of size couples stored in the previous
112 112 field.
113 113
114 114 Mandatory parameters come first, then the advisory ones.
115 115
116 116 Each parameter's key MUST be unique within the part.
117 117
118 118 :payload:
119 119
120 120 payload is a series of `<chunksize><chunkdata>`.
121 121
122 122 `chunksize` is an int32, `chunkdata` is plain bytes (as many as
123 123 `chunksize` says). The payload part is concluded by a zero size chunk.
124 124
125 125 The current implementation always produces either zero or one chunk.
126 126 This is an implementation limitation that will ultimately be lifted.
127 127
128 128 `chunksize` can be negative to trigger special case processing. No such
129 129 processing is in place yet.
130 130
131 131 Bundle processing
132 132 ============================
133 133
134 134 Each part is processed in order using a "part handler". Handlers are registered
135 135 for a certain part type.
136 136
137 137 The matching of a part to its handler is case insensitive. The case of the
138 138 part type is used to know if a part is mandatory or advisory. If the Part type
139 139 contains any uppercase char it is considered mandatory. When no handler is
140 140 known for a Mandatory part, the process is aborted and an exception is raised.
141 141 If the part is advisory and no handler is known, the part is ignored. When the
142 142 process is aborted, the full bundle is still read from the stream to keep the
143 143 channel usable. But none of the parts read after an abort are processed. In the
144 144 future, dropping the stream may become an option for channels we do not care to
145 145 preserve.
146 146 """
147 147
148 148
149 149 import collections
150 150 import errno
151 151 import os
152 152 import re
153 153 import string
154 154 import struct
155 155 import sys
156 156
157 157 from .i18n import _
158 158 from .node import (
159 159 hex,
160 160 short,
161 161 )
162 162 from . import (
163 163 bookmarks,
164 164 changegroup,
165 165 encoding,
166 166 error,
167 167 obsolete,
168 168 phases,
169 169 pushkey,
170 170 pycompat,
171 171 requirements,
172 172 scmutil,
173 173 streamclone,
174 174 tags,
175 175 url,
176 176 util,
177 177 )
178 178 from .utils import (
179 179 stringutil,
180 180 urlutil,
181 181 )
182 182 from .interfaces import repository
183 183
184 184 urlerr = util.urlerr
185 185 urlreq = util.urlreq
186 186
187 187 _pack = struct.pack
188 188 _unpack = struct.unpack
189 189
190 190 _fstreamparamsize = b'>i'
191 191 _fpartheadersize = b'>i'
192 192 _fparttypesize = b'>B'
193 193 _fpartid = b'>I'
194 194 _fpayloadsize = b'>i'
195 195 _fpartparamcount = b'>BB'
196 196
197 197 preferedchunksize = 32768
198 198
199 199 _parttypeforbidden = re.compile(b'[^a-zA-Z0-9_:-]')
200 200
201 201
202 202 def outdebug(ui, message):
203 203 """debug regarding output stream (bundling)"""
204 204 if ui.configbool(b'devel', b'bundle2.debug'):
205 205 ui.debug(b'bundle2-output: %s\n' % message)
206 206
207 207
208 208 def indebug(ui, message):
209 209 """debug on input stream (unbundling)"""
210 210 if ui.configbool(b'devel', b'bundle2.debug'):
211 211 ui.debug(b'bundle2-input: %s\n' % message)
212 212
213 213
214 214 def validateparttype(parttype):
215 215 """raise ValueError if a parttype contains invalid character"""
216 216 if _parttypeforbidden.search(parttype):
217 217 raise ValueError(parttype)
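# For instance (illustrative): validateparttype(b'remote-changegroup') passes
# silently, while validateparttype(b'bad part!') raises ValueError, since the
# space and b'!' fall outside the allowed [a-zA-Z0-9_:-] set.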
218 218
219 219
220 220 def _makefpartparamsizes(nbparams):
221 221 """return a struct format to read part parameter sizes
222 222
223 223 The number of parameters is variable so we need to build that format
224 224 dynamically.
225 225 """
226 226 return b'>' + (b'BB' * nbparams)
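# e.g. _makefpartparamsizes(2) == b'>BBBB', which unpacks the four size bytes
# of two (key, value) parameter pairs in a single struct.unpack() call
# (illustrative).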
227 227
228 228
229 229 parthandlermapping = {}
230 230
231 231
232 232 def parthandler(parttype, params=()):
233 233 """decorator that register a function as a bundle2 part handler
234 234
235 235 eg::
236 236
237 237 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
238 238 def myparttypehandler(...):
239 239 '''process a part of type "my part".'''
240 240 ...
241 241 """
242 242 validateparttype(parttype)
243 243
244 244 def _decorator(func):
245 245 lparttype = parttype.lower() # enforce lower case matching.
246 246 assert lparttype not in parthandlermapping
247 247 parthandlermapping[lparttype] = func
248 248 func.params = frozenset(params)
249 249 return func
250 250
251 251 return _decorator
252 252
253 253
254 254 class unbundlerecords:
255 255 """keep record of what happens during and unbundle
256 256
257 257 New records are added using `records.add('cat', obj)`. Where 'cat' is a
258 258 category of record and obj is an arbitrary object.
259 259
260 260 `records['cat']` will return all entries of this category 'cat'.
261 261
262 262 Iterating on the object itself will yield `('category', obj)` tuples
263 263 for all entries.
264 264
265 265 All iterations happen in chronological order.
266 266 """
267 267
268 268 def __init__(self):
269 269 self._categories = {}
270 270 self._sequences = []
271 271 self._replies = {}
272 272
273 273 def add(self, category, entry, inreplyto=None):
274 274 """add a new record of a given category.
275 275
276 276 The entry can then be retrieved in the list returned by
277 277 self['category']."""
278 278 self._categories.setdefault(category, []).append(entry)
279 279 self._sequences.append((category, entry))
280 280 if inreplyto is not None:
281 281 self.getreplies(inreplyto).add(category, entry)
282 282
283 283 def getreplies(self, partid):
284 284 """get the records that are replies to a specific part"""
285 285 return self._replies.setdefault(partid, unbundlerecords())
286 286
287 287 def __getitem__(self, cat):
288 288 return tuple(self._categories.get(cat, ()))
289 289
290 290 def __iter__(self):
291 291 return iter(self._sequences)
292 292
293 293 def __len__(self):
294 294 return len(self._sequences)
295 295
296 296 def __nonzero__(self):
297 297 return bool(self._sequences)
298 298
299 299 __bool__ = __nonzero__
300 300
301 301
302 302 class bundleoperation:
303 303 """an object that represents a single bundling process
304 304
305 305 Its purpose is to carry unbundle-related objects and states.
306 306
307 307 A new object should be created at the beginning of each bundle processing.
308 308 The object is to be returned by the processing function.
309 309
310 310 The object has very little content now; it will ultimately contain:
311 311 * an access to the repo the bundle is applied to,
312 312 * a ui object,
313 313 * a way to retrieve a transaction to add changes to the repo,
314 314 * a way to record the result of processing each part,
315 315 * a way to construct a bundle response when applicable.
316 316 """
317 317
318 318 def __init__(
319 319 self,
320 320 repo,
321 321 transactiongetter,
322 322 captureoutput=True,
323 323 source=b'',
324 324 remote=None,
325 325 ):
326 326 self.repo = repo
327 327 # the peer object who produced this bundle if available
328 328 self.remote = remote
329 329 self.ui = repo.ui
330 330 self.records = unbundlerecords()
331 331 self.reply = None
332 332 self.captureoutput = captureoutput
333 333 self.hookargs = {}
334 334 self._gettransaction = transactiongetter
335 335 # carries value that can modify part behavior
336 336 self.modes = {}
337 337 self.source = source
338 338
339 339 def gettransaction(self):
340 340 transaction = self._gettransaction()
341 341
342 342 if self.hookargs:
343 343 # the ones added to the transaction supersede those added
344 344 # to the operation.
345 345 self.hookargs.update(transaction.hookargs)
346 346 transaction.hookargs = self.hookargs
347 347
348 348 # mark the hookargs as flushed. further attempts to add to
349 349 # hookargs will result in an abort.
350 350 self.hookargs = None
351 351
352 352 return transaction
353 353
354 354 def addhookargs(self, hookargs):
355 355 if self.hookargs is None:
356 356 raise error.ProgrammingError(
357 357 b'attempted to add hookargs to '
358 358 b'operation after transaction started'
359 359 )
360 360 self.hookargs.update(hookargs)
361 361
362 362
363 363 class TransactionUnavailable(RuntimeError):
364 364 pass
365 365
366 366
367 367 def _notransaction():
368 368 """default method to get a transaction while processing a bundle
369 369
370 370 Raise an exception to highlight the fact that no transaction was expected
371 371 to be created"""
372 372 raise TransactionUnavailable()
373 373
374 374
375 375 def applybundle(repo, unbundler, tr, source, url=None, remote=None, **kwargs):
376 376 # transform me into unbundler.apply() as soon as the freeze is lifted
377 377 if isinstance(unbundler, unbundle20):
378 378 tr.hookargs[b'bundle2'] = b'1'
379 379 if source is not None and b'source' not in tr.hookargs:
380 380 tr.hookargs[b'source'] = source
381 381 if url is not None and b'url' not in tr.hookargs:
382 382 tr.hookargs[b'url'] = url
383 383 return processbundle(
384 384 repo, unbundler, lambda: tr, source=source, remote=remote
385 385 )
386 386 else:
387 387 # the transactiongetter won't be used, but we might as well set it
388 388 op = bundleoperation(repo, lambda: tr, source=source, remote=remote)
389 389 _processchangegroup(op, unbundler, tr, source, url, **kwargs)
390 390 return op
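# Sketch (illustrative, not part of this module's API): applying a bundle read
# from an open binary file ``fp``, assuming the caller already holds the
# appropriate repository locks:
#
#     with repo.transaction(b'unbundle') as tr:
#         unbundler = getunbundler(repo.ui, fp)
#         applybundle(repo, unbundler, tr, source=b'unbundle', url=b'file:')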
391 391
392 392
393 393 class partiterator:
394 394 def __init__(self, repo, op, unbundler):
395 395 self.repo = repo
396 396 self.op = op
397 397 self.unbundler = unbundler
398 398 self.iterator = None
399 399 self.count = 0
400 400 self.current = None
401 401
402 402 def __enter__(self):
403 403 def func():
404 404 itr = enumerate(self.unbundler.iterparts(), 1)
405 405 for count, p in itr:
406 406 self.count = count
407 407 self.current = p
408 408 yield p
409 409 p.consume()
410 410 self.current = None
411 411
412 412 self.iterator = func()
413 413 return self.iterator
414 414
415 415 def __exit__(self, type, exc, tb):
416 416 if not self.iterator:
417 417 return
418 418
419 419 # Only gracefully abort in a normal exception situation. User aborts
420 420 # like Ctrl+C throw a KeyboardInterrupt which is not a base Exception,
421 421 # and should not gracefully cleanup.
422 422 if isinstance(exc, Exception):
423 423 # Any exceptions seeking to the end of the bundle at this point are
424 424 # almost certainly related to the underlying stream being bad.
425 425 # And, chances are that the exception we're handling is related to
426 426 # getting in that bad state. So, we swallow the seeking error and
427 427 # re-raise the original error.
428 428 seekerror = False
429 429 try:
430 430 if self.current:
431 431 # consume the part content to not corrupt the stream.
432 432 self.current.consume()
433 433
434 434 for part in self.iterator:
435 435 # consume the bundle content
436 436 part.consume()
437 437 except Exception:
438 438 seekerror = True
439 439
440 440 # Small hack to let caller code distinguish exceptions from bundle2
441 441 # processing from processing the old format. This is mostly needed
442 442 # to handle different return codes to unbundle according to the type
443 443 # of bundle. We should probably clean up or drop this return code
444 444 # craziness in a future version.
445 445 exc.duringunbundle2 = True
446 446 salvaged = []
447 447 replycaps = None
448 448 if self.op.reply is not None:
449 449 salvaged = self.op.reply.salvageoutput()
450 450 replycaps = self.op.reply.capabilities
451 451 exc._replycaps = replycaps
452 452 exc._bundle2salvagedoutput = salvaged
453 453
454 454 # Re-raising from a variable loses the original stack. So only use
455 455 # that form if we need to.
456 456 if seekerror:
457 457 raise exc
458 458
459 459 self.repo.ui.debug(
460 460 b'bundle2-input-bundle: %i parts total\n' % self.count
461 461 )
462 462
463 463
464 464 def processbundle(
465 465 repo,
466 466 unbundler,
467 467 transactiongetter=None,
468 468 op=None,
469 469 source=b'',
470 470 remote=None,
471 471 ):
472 472 """This function process a bundle, apply effect to/from a repo
473 473
474 474 It iterates over each part then searches for and uses the proper handling
475 475 code to process the part. Parts are processed in order.
476 476
477 477 An unknown mandatory part will abort the process.
478 478
479 479 It is temporarily possible to provide a prebuilt bundleoperation to the
480 480 function. This is used to ensure output is properly propagated in case of
481 481 an error during the unbundling. This output capturing part will likely be
482 482 reworked and this ability will probably go away in the process.
483 483 """
484 484 if op is None:
485 485 if transactiongetter is None:
486 486 transactiongetter = _notransaction
487 487 op = bundleoperation(
488 488 repo,
489 489 transactiongetter,
490 490 source=source,
491 491 remote=remote,
492 492 )
493 493 # todo:
494 494 # - replace this with an init function soon.
495 495 # - exception catching
496 496 unbundler.params
497 497 if repo.ui.debugflag:
498 498 msg = [b'bundle2-input-bundle:']
499 499 if unbundler.params:
500 500 msg.append(b' %i params' % len(unbundler.params))
501 501 if op._gettransaction is None or op._gettransaction is _notransaction:
502 502 msg.append(b' no-transaction')
503 503 else:
504 504 msg.append(b' with-transaction')
505 505 msg.append(b'\n')
506 506 repo.ui.debug(b''.join(msg))
507 507
508 508 processparts(repo, op, unbundler)
509 509
510 510 return op
511 511
512 512
513 513 def processparts(repo, op, unbundler):
514 514 with partiterator(repo, op, unbundler) as parts:
515 515 for part in parts:
516 516 _processpart(op, part)
517 517
518 518
519 519 def _processchangegroup(op, cg, tr, source, url, **kwargs):
520 520 if op.remote is not None and op.remote.path is not None:
521 521 remote_path = op.remote.path
522 522 kwargs = kwargs.copy()
523 523 kwargs['delta_base_reuse_policy'] = remote_path.delta_reuse_policy
524 524 ret = cg.apply(op.repo, tr, source, url, **kwargs)
525 525 op.records.add(
526 526 b'changegroup',
527 527 {
528 528 b'return': ret,
529 529 },
530 530 )
531 531 return ret
532 532
533 533
534 534 def _gethandler(op, part):
535 535 status = b'unknown' # used by debug output
536 536 try:
537 537 handler = parthandlermapping.get(part.type)
538 538 if handler is None:
539 539 status = b'unsupported-type'
540 540 raise error.BundleUnknownFeatureError(parttype=part.type)
541 541 indebug(op.ui, b'found a handler for part %s' % part.type)
542 542 unknownparams = part.mandatorykeys - handler.params
543 543 if unknownparams:
544 544 unknownparams = list(unknownparams)
545 545 unknownparams.sort()
546 546 status = b'unsupported-params (%s)' % b', '.join(unknownparams)
547 547 raise error.BundleUnknownFeatureError(
548 548 parttype=part.type, params=unknownparams
549 549 )
550 550 status = b'supported'
551 551 except error.BundleUnknownFeatureError as exc:
552 552 if part.mandatory: # mandatory parts
553 553 raise
554 554 indebug(op.ui, b'ignoring unsupported advisory part %s' % exc)
555 555 return # skip to part processing
556 556 finally:
557 557 if op.ui.debugflag:
558 558 msg = [b'bundle2-input-part: "%s"' % part.type]
559 559 if not part.mandatory:
560 560 msg.append(b' (advisory)')
561 561 nbmp = len(part.mandatorykeys)
562 562 nbap = len(part.params) - nbmp
563 563 if nbmp or nbap:
564 564 msg.append(b' (params:')
565 565 if nbmp:
566 566 msg.append(b' %i mandatory' % nbmp)
567 567 if nbap:
568 568 msg.append(b' %i advisory' % nbap)
569 569 msg.append(b')')
570 570 msg.append(b' %s\n' % status)
571 571 op.ui.debug(b''.join(msg))
572 572
573 573 return handler
574 574
575 575
576 576 def _processpart(op, part):
577 577 """process a single part from a bundle
578 578
579 579 The part is guaranteed to have been fully consumed when the function exits
580 580 (even if an exception is raised)."""
581 581 handler = _gethandler(op, part)
582 582 if handler is None:
583 583 return
584 584
585 585 # handler is called outside the above try block so that we don't
586 586 # risk catching KeyErrors from anything other than the
587 587 # parthandlermapping lookup (any KeyError raised by handler()
588 588 # itself represents a defect of a different variety).
589 589 output = None
590 590 if op.captureoutput and op.reply is not None:
591 591 op.ui.pushbuffer(error=True, subproc=True)
592 592 output = b''
593 593 try:
594 594 handler(op, part)
595 595 finally:
596 596 if output is not None:
597 597 output = op.ui.popbuffer()
598 598 if output:
599 599 outpart = op.reply.newpart(b'output', data=output, mandatory=False)
600 600 outpart.addparam(
601 601 b'in-reply-to', pycompat.bytestr(part.id), mandatory=False
602 602 )
603 603
604 604
605 605 def decodecaps(blob):
606 606 """decode a bundle2 caps bytes blob into a dictionary
607 607
608 608 The blob is a list of capabilities (one per line)
609 609 Capabilities may have values using a line of the form::
610 610
611 611 capability=value1,value2,value3
612 612
613 613 The values are always a list."""
614 614 caps = {}
615 615 for line in blob.splitlines():
616 616 if not line:
617 617 continue
618 618 if b'=' not in line:
619 619 key, vals = line, ()
620 620 else:
621 621 key, vals = line.split(b'=', 1)
622 622 vals = vals.split(b',')
623 623 key = urlreq.unquote(key)
624 624 vals = [urlreq.unquote(v) for v in vals]
625 625 caps[key] = vals
626 626 return caps
627 627
628 628
629 629 def encodecaps(caps):
630 630 """encode a bundle2 caps dictionary into a bytes blob"""
631 631 chunks = []
632 632 for ca in sorted(caps):
633 633 vals = caps[ca]
634 634 ca = urlreq.quote(ca)
635 635 vals = [urlreq.quote(v) for v in vals]
636 636 if vals:
637 637 ca = b"%s=%s" % (ca, b','.join(vals))
638 638 chunks.append(ca)
639 639 return b'\n'.join(chunks)
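# Round-trip sketch (illustrative): decodecaps() and encodecaps() are inverses
# for well-formed blobs:
#
#     blob = b'bookmarks\nerror=abort,unsupportedcontent'
#     caps = decodecaps(blob)
#     # caps == {b'bookmarks': [], b'error': [b'abort', b'unsupportedcontent']}
#     assert encodecaps(caps) == blob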
640 640
641 641
642 642 bundletypes = {
643 643 b"": (b"", b'UN'), # only when using unbundle on ssh and old http servers
644 644 # since the unification ssh accepts a header but there
645 645 # is no capability signaling it.
646 646 b"HG20": (), # special-cased below
647 647 b"HG10UN": (b"HG10UN", b'UN'),
648 648 b"HG10BZ": (b"HG10", b'BZ'),
649 649 b"HG10GZ": (b"HG10GZ", b'GZ'),
650 650 }
651 651
652 652 # hgweb uses this list to communicate its preferred type
653 653 bundlepriority = [b'HG10GZ', b'HG10BZ', b'HG10UN']
654 654
655 655
656 656 class bundle20:
657 657 """represent an outgoing bundle2 container
658 658
659 659 Use the `addparam` method to add a stream level parameter, and `newpart` to
660 660 populate it. Then call `getchunks` to retrieve all the binary chunks of
661 661 data that compose the bundle2 container."""
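# Construction sketch (illustrative only):
#
#     bundler = bundle20(ui)
#     part = bundler.newpart(b'output', data=b'hello', mandatory=False)
#     raw = b''.join(bundler.getchunks())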
662 662
663 663 _magicstring = b'HG20'
664 664
665 665 def __init__(self, ui, capabilities=()):
666 666 self.ui = ui
667 667 self._params = []
668 668 self._parts = []
669 669 self.capabilities = dict(capabilities)
670 670 self._compengine = util.compengines.forbundletype(b'UN')
671 671 self._compopts = None
672 672 # If compression is being handled by a consumer of the raw
673 673 # data (e.g. the wire protocol), unsetting this flag tells
674 674 # consumers that the bundle is best left uncompressed.
675 675 self.prefercompressed = True
676 676
677 677 def setcompression(self, alg, compopts=None):
678 678 """setup core part compression to <alg>"""
679 679 if alg in (None, b'UN'):
680 680 return
681 681 assert not any(n.lower() == b'compression' for n, v in self._params)
682 682 self.addparam(b'Compression', alg)
683 683 self._compengine = util.compengines.forbundletype(alg)
684 684 self._compopts = compopts
685 685
686 686 @property
687 687 def nbparts(self):
688 688 """total number of parts added to the bundler"""
689 689 return len(self._parts)
690 690
691 691 # methods used to define the bundle2 content
692 692 def addparam(self, name, value=None):
693 693 """add a stream level parameter"""
694 694 if not name:
695 695 raise error.ProgrammingError(b'empty parameter name')
696 696 if name[0:1] not in pycompat.bytestr(
697 697 string.ascii_letters # pytype: disable=wrong-arg-types
698 698 ):
699 699 raise error.ProgrammingError(
700 700 b'non letter first character: %s' % name
701 701 )
702 702 self._params.append((name, value))
703 703
704 704 def addpart(self, part):
705 705 """add a new part to the bundle2 container
706 706
707 707 Parts contain the actual application payload."""
708 708 assert part.id is None
709 709 part.id = len(self._parts) # very cheap counter
710 710 self._parts.append(part)
711 711
712 712 def newpart(self, typeid, *args, **kwargs):
713 713 """create a new part and add it to the containers
714 714
715 715 As the part is directly added to the containers. For now, this means
716 716 that any failure to properly initialize the part after calling
717 717 ``newpart`` should result in a failure of the whole bundling process.
718 718
719 719 You can still fall back to manually create and add if you need better
720 720 control."""
721 721 part = bundlepart(typeid, *args, **kwargs)
722 722 self.addpart(part)
723 723 return part
724 724
725 725 # methods used to generate the bundle2 stream
726 726 def getchunks(self):
727 727 if self.ui.debugflag:
728 728 msg = [b'bundle2-output-bundle: "%s",' % self._magicstring]
729 729 if self._params:
730 730 msg.append(b' (%i params)' % len(self._params))
731 731 msg.append(b' %i parts total\n' % len(self._parts))
732 732 self.ui.debug(b''.join(msg))
733 733 outdebug(self.ui, b'start emission of %s stream' % self._magicstring)
734 734 yield self._magicstring
735 735 param = self._paramchunk()
736 736 outdebug(self.ui, b'bundle parameter: %s' % param)
737 737 yield _pack(_fstreamparamsize, len(param))
738 738 if param:
739 739 yield param
740 740 for chunk in self._compengine.compressstream(
741 741 self._getcorechunk(), self._compopts
742 742 ):
743 743 yield chunk
744 744
745 745 def _paramchunk(self):
746 746 """return a encoded version of all stream parameters"""
747 747 blocks = []
748 748 for par, value in self._params:
749 749 par = urlreq.quote(par)
750 750 if value is not None:
751 751 value = urlreq.quote(value)
752 752 par = b'%s=%s' % (par, value)
753 753 blocks.append(par)
754 754 return b' '.join(blocks)
755 755
756 756 def _getcorechunk(self):
757 757 """yield chunk for the core part of the bundle
758 758
759 759 (all but headers and parameters)"""
760 760 outdebug(self.ui, b'start of parts')
761 761 for part in self._parts:
762 762 outdebug(self.ui, b'bundle part: "%s"' % part.type)
763 763 for chunk in part.getchunks(ui=self.ui):
764 764 yield chunk
765 765 outdebug(self.ui, b'end of bundle')
766 766 yield _pack(_fpartheadersize, 0)
767 767
768 768 def salvageoutput(self):
769 769 """return a list with a copy of all output parts in the bundle
770 770
771 771 This is meant to be used during error handling to make sure we preserve
772 772 server output"""
773 773 salvaged = []
774 774 for part in self._parts:
775 775 if part.type.startswith(b'output'):
776 776 salvaged.append(part.copy())
777 777 return salvaged
778 778
779 779
780 780 class unpackermixin:
781 781 """A mixin to extract bytes and struct data from a stream"""
782 782
783 783 def __init__(self, fp):
784 784 self._fp = fp
785 785
786 786 def _unpack(self, format):
787 787 """unpack this struct format from the stream
788 788
789 789 This method is meant for internal usage by the bundle2 protocol only.
790 790 It directly manipulates the low level stream, including bundle2 level
791 791 instructions.
792 792
793 793 Do not use it to implement higher-level logic or methods."""
794 794 data = self._readexact(struct.calcsize(format))
795 795 return _unpack(format, data)
796 796
797 797 def _readexact(self, size):
798 798 """read exactly <size> bytes from the stream
799 799
800 800 This method is meant for internal usage by the bundle2 protocol only.
801 801 It directly manipulates the low level stream, including bundle2 level
802 802 instructions.
803 803
804 804 Do not use it to implement higher-level logic or methods."""
805 805 return changegroup.readexactly(self._fp, size)
806 806
807 807
808 808 def getunbundler(ui, fp, magicstring=None):
809 809 """return a valid unbundler object for a given magicstring"""
810 810 if magicstring is None:
811 811 magicstring = changegroup.readexactly(fp, 4)
812 812 magic, version = magicstring[0:2], magicstring[2:4]
813 813 if magic != b'HG':
814 814 ui.debug(
815 815 b"error: invalid magic: %r (version %r), should be 'HG'\n"
816 816 % (magic, version)
817 817 )
818 818 raise error.Abort(_(b'not a Mercurial bundle'))
819 819 unbundlerclass = formatmap.get(version)
820 820 if unbundlerclass is None:
821 821 raise error.Abort(_(b'unknown bundle version %s') % version)
822 822 unbundler = unbundlerclass(ui, fp)
823 823 indebug(ui, b'start processing of %s stream' % magicstring)
824 824 return unbundler
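# Reading-side sketch (illustrative): iterating over the parts of a bundle2
# stream, assuming ``fp`` is an open binary file positioned at the magic
# string:
#
#     unbundler = getunbundler(ui, fp)
#     for part in unbundler.iterparts():
#         ui.debug(b'found part: %s\n' % part.type)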
825 825
826 826
827 827 class unbundle20(unpackermixin):
828 828 """interpret a bundle2 stream
829 829
830 830 This class is fed with a binary stream and yields parts through its
831 831 `iterparts` method."""
832 832
833 833 _magicstring = b'HG20'
834 834
835 835 def __init__(self, ui, fp):
836 836 """If header is specified, we do not read it out of the stream."""
837 837 self.ui = ui
838 838 self._compengine = util.compengines.forbundletype(b'UN')
839 839 self._compressed = None
840 840 super(unbundle20, self).__init__(fp)
841 841
842 842 @util.propertycache
843 843 def params(self):
844 844 """dictionary of stream level parameters"""
845 845 indebug(self.ui, b'reading bundle2 stream parameters')
846 846 params = {}
847 847 paramssize = self._unpack(_fstreamparamsize)[0]
848 848 if paramssize < 0:
849 849 raise error.BundleValueError(
850 850 b'negative bundle param size: %i' % paramssize
851 851 )
852 852 if paramssize:
853 853 params = self._readexact(paramssize)
854 854 params = self._processallparams(params)
855 855 return params
856 856
857 857 def _processallparams(self, paramsblock):
858 858 """ """
859 859 params = util.sortdict()
860 860 for p in paramsblock.split(b' '):
861 861 p = p.split(b'=', 1)
862 862 p = [urlreq.unquote(i) for i in p]
863 863 if len(p) < 2:
864 864 p.append(None)
865 865 self._processparam(*p)
866 866 params[p[0]] = p[1]
867 867 return params
868 868
869 869 def _processparam(self, name, value):
870 870 """process a parameter, applying its effect if needed
871 871
872 872 Parameters starting with a lower case letter are advisory and will be
873 873 ignored when unknown. Those starting with an upper case letter are
874 874 mandatory, and this function will raise a KeyError when they are unknown.
875 875 
876 876 Note: no options are currently supported. Any input will either be
877 877 ignored or cause a failure.
878 878 """
879 879 if not name:
880 880 raise ValueError('empty parameter name')
881 881 if name[0:1] not in pycompat.bytestr(
882 882 string.ascii_letters # pytype: disable=wrong-arg-types
883 883 ):
884 884 raise ValueError('non letter first character: %s' % name)
885 885 try:
886 886 handler = b2streamparamsmap[name.lower()]
887 887 except KeyError:
888 888 if name[0:1].islower():
889 889 indebug(self.ui, b"ignoring unknown parameter %s" % name)
890 890 else:
891 891 raise error.BundleUnknownFeatureError(params=(name,))
892 892 else:
893 893 handler(self, name, value)
894 894
895 895 def _forwardchunks(self):
896 896 """utility to transfer a bundle2 as binary
897 897
898 898 This is made necessary by the fact that the 'getbundle' command over 'ssh'
899 899 has no way to know when the reply ends, relying on the bundle to be
900 900 interpreted to know its end. This is terrible and we are sorry, but we
901 901 needed to move forward to get general delta enabled.
902 902 """
903 903 yield self._magicstring
904 904 assert 'params' not in vars(self)
905 905 paramssize = self._unpack(_fstreamparamsize)[0]
906 906 if paramssize < 0:
907 907 raise error.BundleValueError(
908 908 b'negative bundle param size: %i' % paramssize
909 909 )
910 910 if paramssize:
911 911 params = self._readexact(paramssize)
912 912 self._processallparams(params)
913 913 # The payload itself is decompressed below, so drop
914 914 # the compression parameter passed down to compensate.
915 915 outparams = []
916 916 for p in params.split(b' '):
917 917 k, v = p.split(b'=', 1)
918 918 if k.lower() != b'compression':
919 919 outparams.append(p)
920 920 outparams = b' '.join(outparams)
921 921 yield _pack(_fstreamparamsize, len(outparams))
922 922 yield outparams
923 923 else:
924 924 yield _pack(_fstreamparamsize, paramssize)
925 925 # From there, payload might need to be decompressed
926 926 self._fp = self._compengine.decompressorreader(self._fp)
927 927 emptycount = 0
928 928 while emptycount < 2:
929 929 # so we can brainlessly loop
930 930 assert _fpartheadersize == _fpayloadsize
931 931 size = self._unpack(_fpartheadersize)[0]
932 932 yield _pack(_fpartheadersize, size)
933 933 if size:
934 934 emptycount = 0
935 935 else:
936 936 emptycount += 1
937 937 continue
938 938 if size == flaginterrupt:
939 939 continue
940 940 elif size < 0:
941 941 raise error.BundleValueError(b'negative chunk size: %i')
942 942 yield self._readexact(size)
943 943
944 944 def iterparts(self, seekable=False):
945 945 """yield all parts contained in the stream"""
946 946 cls = seekableunbundlepart if seekable else unbundlepart
947 947 # make sure params have been loaded
948 948 self.params
949 949 # From there, the payload needs to be decompressed
950 950 self._fp = self._compengine.decompressorreader(self._fp)
951 951 indebug(self.ui, b'start extraction of bundle2 parts')
952 952 headerblock = self._readpartheader()
953 953 while headerblock is not None:
954 954 part = cls(self.ui, headerblock, self._fp)
955 955 yield part
956 956 # Ensure part is fully consumed so we can start reading the next
957 957 # part.
958 958 part.consume()
959 959
960 960 headerblock = self._readpartheader()
961 961 indebug(self.ui, b'end of bundle2 stream')
962 962
963 963 def _readpartheader(self):
964 964 """reads a part header size and return the bytes blob
965 965
966 966 returns None if empty"""
967 967 headersize = self._unpack(_fpartheadersize)[0]
968 968 if headersize < 0:
969 969 raise error.BundleValueError(
970 970 b'negative part header size: %i' % headersize
971 971 )
972 972 indebug(self.ui, b'part header size: %i' % headersize)
973 973 if headersize:
974 974 return self._readexact(headersize)
975 975 return None
976 976
977 977 def compressed(self):
978 978 self.params # load params
979 979 return self._compressed
980 980
981 981 def close(self):
982 982 """close underlying file"""
983 983 if hasattr(self._fp, 'close'):
984 984 return self._fp.close()
985 985
986 986
987 987 formatmap = {b'20': unbundle20}
988 988
989 989 b2streamparamsmap = {}
990 990
991 991
992 992 def b2streamparamhandler(name):
993 993 """register a handler for a stream level parameter"""
994 994
995 995 def decorator(func):
996 996 assert name not in formatmap
997 997 b2streamparamsmap[name] = func
998 998 return func
999 999
1000 1000 return decorator
1001 1001
1002 1002
1003 1003 @b2streamparamhandler(b'compression')
1004 1004 def processcompression(unbundler, param, value):
1005 1005 """read compression parameter and install payload decompression"""
1006 1006 if value not in util.compengines.supportedbundletypes:
1007 1007 raise error.BundleUnknownFeatureError(params=(param,), values=(value,))
1008 1008 unbundler._compengine = util.compengines.forbundletype(value)
1009 1009 if value is not None:
1010 1010 unbundler._compressed = True
1011 1011
1012 1012
1013 1013 class bundlepart:
1014 1014 """A bundle2 part contains application level payload
1015 1015
1016 1016 The part `type` is used to route the part to the application level
1017 1017 handler.
1018 1018
1019 1019 The part payload is contained in ``part.data``. It could be raw bytes or a
1020 1020 generator of byte chunks.
1021 1021
1022 1022 You can add parameters to the part using the ``addparam`` method.
1023 1023 Parameters can be either mandatory (default) or advisory. Remote side
1024 1024 should be able to safely ignore the advisory ones.
1025 1025
1026 1026 Neither data nor parameters can be modified after generation has begun.
1027 1027 """
1028 1028
1029 1029 def __init__(
1030 1030 self,
1031 1031 parttype,
1032 1032 mandatoryparams=(),
1033 1033 advisoryparams=(),
1034 1034 data=b'',
1035 1035 mandatory=True,
1036 1036 ):
1037 1037 validateparttype(parttype)
1038 1038 self.id = None
1039 1039 self.type = parttype
1040 1040 self._data = data
1041 1041 self._mandatoryparams = list(mandatoryparams)
1042 1042 self._advisoryparams = list(advisoryparams)
1043 1043 # checking for duplicated entries
1044 1044 self._seenparams = set()
1045 1045 for pname, __ in self._mandatoryparams + self._advisoryparams:
1046 1046 if pname in self._seenparams:
1047 1047 raise error.ProgrammingError(b'duplicated params: %s' % pname)
1048 1048 self._seenparams.add(pname)
1049 1049 # status of the part's generation:
1050 1050 # - None: not started,
1051 1051 # - False: currently generated,
1052 1052 # - True: generation done.
1053 1053 self._generated = None
1054 1054 self.mandatory = mandatory
1055 1055
1056 1056 def __repr__(self):
1057 1057 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
1058 1058 return '<%s object at %x; id: %s; type: %s; mandatory: %s>' % (
1059 1059 cls,
1060 1060 id(self),
1061 1061 self.id,
1062 1062 self.type,
1063 1063 self.mandatory,
1064 1064 )
1065 1065
1066 1066 def copy(self):
1067 1067 """return a copy of the part
1068 1068
1069 1069 The new part has the very same content but no partid assigned yet.
1070 1070 Parts with generated data cannot be copied."""
1071 1071 assert not hasattr(self.data, 'next')
1072 1072 return self.__class__(
1073 1073 self.type,
1074 1074 self._mandatoryparams,
1075 1075 self._advisoryparams,
1076 1076 self._data,
1077 1077 self.mandatory,
1078 1078 )
1079 1079
1080 1080 # methods used to define the part content
1081 1081 @property
1082 1082 def data(self):
1083 1083 return self._data
1084 1084
1085 1085 @data.setter
1086 1086 def data(self, data):
1087 1087 if self._generated is not None:
1088 1088 raise error.ReadOnlyPartError(b'part is being generated')
1089 1089 self._data = data
1090 1090
1091 1091 @property
1092 1092 def mandatoryparams(self):
1093 1093 # make it an immutable tuple to force people through ``addparam``
1094 1094 return tuple(self._mandatoryparams)
1095 1095
1096 1096 @property
1097 1097 def advisoryparams(self):
1098 1098 # make it an immutable tuple to force people through ``addparam``
1099 1099 return tuple(self._advisoryparams)
1100 1100
1101 1101 def addparam(self, name, value=b'', mandatory=True):
1102 1102 """add a parameter to the part
1103 1103
1104 1104 If 'mandatory' is set to True, the remote handler must claim support
1105 1105 for this parameter or the unbundling will be aborted.
1106 1106
1107 1107 The 'name' and 'value' cannot exceed 255 bytes each.
1108 1108 """
1109 1109 if self._generated is not None:
1110 1110 raise error.ReadOnlyPartError(b'part is being generated')
1111 1111 if name in self._seenparams:
1112 1112 raise ValueError(b'duplicated params: %s' % name)
1113 1113 self._seenparams.add(name)
1114 1114 params = self._advisoryparams
1115 1115 if mandatory:
1116 1116 params = self._mandatoryparams
1117 1117 params.append((name, value))
1118 1118
1119 1119 # methods used to generate the bundle2 stream
1120 1120 def getchunks(self, ui):
1121 1121 if self._generated is not None:
1122 1122 raise error.ProgrammingError(b'part can only be consumed once')
1123 1123 self._generated = False
1124 1124
1125 1125 if ui.debugflag:
1126 1126 msg = [b'bundle2-output-part: "%s"' % self.type]
1127 1127 if not self.mandatory:
1128 1128 msg.append(b' (advisory)')
1129 1129 nbmp = len(self.mandatoryparams)
1130 1130 nbap = len(self.advisoryparams)
1131 1131 if nbmp or nbap:
1132 1132 msg.append(b' (params:')
1133 1133 if nbmp:
1134 1134 msg.append(b' %i mandatory' % nbmp)
1135 1135 if nbap:
1136 1136 msg.append(b' %i advisory' % nbap)
1137 1137 msg.append(b')')
1138 1138 if not self.data:
1139 1139 msg.append(b' empty payload')
1140 1140 elif hasattr(self.data, 'next') or hasattr(self.data, '__next__'):
1141 1141 msg.append(b' streamed payload')
1142 1142 else:
1143 1143 msg.append(b' %i bytes payload' % len(self.data))
1144 1144 msg.append(b'\n')
1145 1145 ui.debug(b''.join(msg))
1146 1146
1147 1147 #### header
1148 1148 if self.mandatory:
1149 1149 parttype = self.type.upper()
1150 1150 else:
1151 1151 parttype = self.type.lower()
1152 1152 outdebug(ui, b'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1153 1153 ## parttype
1154 1154 header = [
1155 1155 _pack(_fparttypesize, len(parttype)),
1156 1156 parttype,
1157 1157 _pack(_fpartid, self.id),
1158 1158 ]
1159 1159 ## parameters
1160 1160 # count
1161 1161 manpar = self.mandatoryparams
1162 1162 advpar = self.advisoryparams
1163 1163 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1164 1164 # size
1165 1165 parsizes = []
1166 1166 for key, value in manpar:
1167 1167 parsizes.append(len(key))
1168 1168 parsizes.append(len(value))
1169 1169 for key, value in advpar:
1170 1170 parsizes.append(len(key))
1171 1171 parsizes.append(len(value))
1172 1172 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1173 1173 header.append(paramsizes)
1174 1174 # key, value
1175 1175 for key, value in manpar:
1176 1176 header.append(key)
1177 1177 header.append(value)
1178 1178 for key, value in advpar:
1179 1179 header.append(key)
1180 1180 header.append(value)
1181 1181 ## finalize header
1182 1182 try:
1183 1183 headerchunk = b''.join(header)
1184 1184 except TypeError:
1185 1185 raise TypeError(
1186 1186 'Found a non-bytes trying to '
1187 1187 'build bundle part header: %r' % header
1188 1188 )
1189 1189 outdebug(ui, b'header chunk size: %i' % len(headerchunk))
1190 1190 yield _pack(_fpartheadersize, len(headerchunk))
1191 1191 yield headerchunk
1192 1192 ## payload
1193 1193 try:
1194 1194 for chunk in self._payloadchunks():
1195 1195 outdebug(ui, b'payload chunk size: %i' % len(chunk))
1196 1196 yield _pack(_fpayloadsize, len(chunk))
1197 1197 yield chunk
1198 1198 except GeneratorExit:
1199 1199 # GeneratorExit means that nobody is listening for our
1200 1200 # results anyway, so just bail quickly rather than trying
1201 1201 # to produce an error part.
1202 1202 ui.debug(b'bundle2-generatorexit\n')
1203 1203 raise
1204 1204 except BaseException as exc:
1205 1205 bexc = stringutil.forcebytestr(exc)
1206 1206 # backup exception data for later
1207 1207 ui.debug(
1208 1208 b'bundle2-input-stream-interrupt: encoding exception %s' % bexc
1209 1209 )
1210 1210 tb = sys.exc_info()[2]
1211 1211 msg = b'unexpected error: %s' % bexc
1212 1212 interpart = bundlepart(
1213 1213 b'error:abort', [(b'message', msg)], mandatory=False
1214 1214 )
1215 1215 interpart.id = 0
1216 1216 yield _pack(_fpayloadsize, -1)
1217 1217 for chunk in interpart.getchunks(ui=ui):
1218 1218 yield chunk
1219 1219 outdebug(ui, b'closing payload chunk')
1220 1220 # abort current part payload
1221 1221 yield _pack(_fpayloadsize, 0)
1222 1222 pycompat.raisewithtb(exc, tb)
1223 1223 # end of payload
1224 1224 outdebug(ui, b'closing payload chunk')
1225 1225 yield _pack(_fpayloadsize, 0)
1226 1226 self._generated = True
1227 1227
1228 1228 def _payloadchunks(self):
1229 1229 """yield chunks of a the part payload
1230 1230
1231 1231 Exists to handle the different methods to provide data to a part."""
1232 1232 # we only support fixed size data now.
1233 1233 # This will be improved in the future.
1234 1234 if hasattr(self.data, 'next') or hasattr(self.data, '__next__'):
1235 1235 buff = util.chunkbuffer(self.data)
1236 1236 chunk = buff.read(preferedchunksize)
1237 1237 while chunk:
1238 1238 yield chunk
1239 1239 chunk = buff.read(preferedchunksize)
1240 1240 elif len(self.data):
1241 1241 yield self.data
1242 1242
1243 1243
1244 1244 flaginterrupt = -1
1245 1245
1246 1246
1247 1247 class interrupthandler(unpackermixin):
1248 1248 """read one part and process it with restricted capability
1249 1249
1250 1250 This allows transmitting exceptions raised on the producer side during part
1251 1251 iteration while the consumer is reading a part.
1252 1252 
1253 1253 Parts processed in this manner only have access to a ui object."""
1254 1254
1255 1255 def __init__(self, ui, fp):
1256 1256 super(interrupthandler, self).__init__(fp)
1257 1257 self.ui = ui
1258 1258
1259 1259 def _readpartheader(self):
1260 1260 """reads a part header size and return the bytes blob
1261 1261
1262 1262 returns None if empty"""
1263 1263 headersize = self._unpack(_fpartheadersize)[0]
1264 1264 if headersize < 0:
1265 1265 raise error.BundleValueError(
1266 1266 b'negative part header size: %i' % headersize
1267 1267 )
1268 1268 indebug(self.ui, b'part header size: %i\n' % headersize)
1269 1269 if headersize:
1270 1270 return self._readexact(headersize)
1271 1271 return None
1272 1272
1273 1273 def __call__(self):
1274 1274
1275 1275 self.ui.debug(
1276 1276 b'bundle2-input-stream-interrupt: opening out of band context\n'
1277 1277 )
1278 1278 indebug(self.ui, b'bundle2 stream interruption, looking for a part.')
1279 1279 headerblock = self._readpartheader()
1280 1280 if headerblock is None:
1281 1281 indebug(self.ui, b'no part found during interruption.')
1282 1282 return
1283 1283 part = unbundlepart(self.ui, headerblock, self._fp)
1284 1284 op = interruptoperation(self.ui)
1285 1285 hardabort = False
1286 1286 try:
1287 1287 _processpart(op, part)
1288 1288 except (SystemExit, KeyboardInterrupt):
1289 1289 hardabort = True
1290 1290 raise
1291 1291 finally:
1292 1292 if not hardabort:
1293 1293 part.consume()
1294 1294 self.ui.debug(
1295 1295 b'bundle2-input-stream-interrupt: closing out of band context\n'
1296 1296 )
1297 1297
1298 1298
1299 1299 class interruptoperation:
1300 1300 """A limited operation to be use by part handler during interruption
1301 1301
1302 1302 It only have access to an ui object.
1303 1303 """
1304 1304
1305 1305 def __init__(self, ui):
1306 1306 self.ui = ui
1307 1307 self.reply = None
1308 1308 self.captureoutput = False
1309 1309
1310 1310 @property
1311 1311 def repo(self):
1312 1312 raise error.ProgrammingError(b'no repo access from stream interruption')
1313 1313
1314 1314 def gettransaction(self):
1315 1315 raise TransactionUnavailable(b'no repo access from stream interruption')
1316 1316
1317 1317
1318 1318 def decodepayloadchunks(ui, fh):
1319 1319 """Reads bundle2 part payload data into chunks.
1320 1320
1321 1321 Part payload data consists of framed chunks. This function takes
1322 1322 a file handle and emits those chunks.
1323 1323 """
1324 1324 dolog = ui.configbool(b'devel', b'bundle2.debug')
1325 1325 debug = ui.debug
1326 1326
1327 1327 headerstruct = struct.Struct(_fpayloadsize)
1328 1328 headersize = headerstruct.size
1329 1329 unpack = headerstruct.unpack
1330 1330
1331 1331 readexactly = changegroup.readexactly
1332 1332 read = fh.read
1333 1333
1334 1334 chunksize = unpack(readexactly(fh, headersize))[0]
1335 1335 indebug(ui, b'payload chunk size: %i' % chunksize)
1336 1336
1337 1337 # changegroup.readexactly() is inlined below for performance.
1338 1338 while chunksize:
1339 1339 if chunksize >= 0:
1340 1340 s = read(chunksize)
1341 1341 if len(s) < chunksize:
1342 1342 raise error.Abort(
1343 1343 _(
1344 1344 b'stream ended unexpectedly '
1345 1345 b' (got %d bytes, expected %d)'
1346 1346 )
1347 1347 % (len(s), chunksize)
1348 1348 )
1349 1349
1350 1350 yield s
1351 1351 elif chunksize == flaginterrupt:
1352 1352 # Interrupt "signal" detected. The regular stream is interrupted
1353 1353 # and a bundle2 part follows. Consume it.
1354 1354 interrupthandler(ui, fh)()
1355 1355 else:
1356 1356 raise error.BundleValueError(
1357 1357 b'negative payload chunk size: %s' % chunksize
1358 1358 )
1359 1359
1360 1360 s = read(headersize)
1361 1361 if len(s) < headersize:
1362 1362 raise error.Abort(
1363 1363 _(b'stream ended unexpectedly (got %d bytes, expected %d)')
1364 1364 % (len(s), headersize)
1365 1365 )
1366 1366
1367 1367 chunksize = unpack(s)[0]
1368 1368
1369 1369 # indebug() inlined for performance.
1370 1370 if dolog:
1371 1371 debug(b'bundle2-input: payload chunk size: %i\n' % chunksize)
1372 1372
1373 1373
1374 1374 class unbundlepart(unpackermixin):
1375 1375 """a bundle part read from a bundle"""
1376 1376
1377 1377 def __init__(self, ui, header, fp):
1378 1378 super(unbundlepart, self).__init__(fp)
1379 1379 self._seekable = hasattr(fp, 'seek') and hasattr(fp, 'tell')
1380 1380 self.ui = ui
1381 1381 # unbundle state attr
1382 1382 self._headerdata = header
1383 1383 self._headeroffset = 0
1384 1384 self._initialized = False
1385 1385 self.consumed = False
1386 1386 # part data
1387 1387 self.id = None
1388 1388 self.type = None
1389 1389 self.mandatoryparams = None
1390 1390 self.advisoryparams = None
1391 1391 self.params = None
1392 1392 self.mandatorykeys = ()
1393 1393 self._readheader()
1394 1394 self._mandatory = None
1395 1395 self._pos = 0
1396 1396
1397 1397 def _fromheader(self, size):
1398 1398 """return the next <size> byte from the header"""
1399 1399 offset = self._headeroffset
1400 1400 data = self._headerdata[offset : (offset + size)]
1401 1401 self._headeroffset = offset + size
1402 1402 return data
1403 1403
1404 1404 def _unpackheader(self, format):
1405 1405 """read given format from header
1406 1406
1407 1407 This automatically computes the size of the format to read."""
1408 1408 data = self._fromheader(struct.calcsize(format))
1409 1409 return _unpack(format, data)
1410 1410
1411 1411 def _initparams(self, mandatoryparams, advisoryparams):
1412 1412 """internal function to setup all logic related parameters"""
1413 1413 # make it read only to prevent people touching it by mistake.
1414 1414 self.mandatoryparams = tuple(mandatoryparams)
1415 1415 self.advisoryparams = tuple(advisoryparams)
1416 1416 # user friendly UI
1417 1417 self.params = util.sortdict(self.mandatoryparams)
1418 1418 self.params.update(self.advisoryparams)
1419 1419 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1420 1420
1421 1421 def _readheader(self):
1422 1422 """read the header and setup the object"""
1423 1423 typesize = self._unpackheader(_fparttypesize)[0]
1424 1424 self.type = self._fromheader(typesize)
1425 1425 indebug(self.ui, b'part type: "%s"' % self.type)
1426 1426 self.id = self._unpackheader(_fpartid)[0]
1427 1427 indebug(self.ui, b'part id: "%s"' % pycompat.bytestr(self.id))
1428 1428 # extract mandatory bit from type
1429 1429 self.mandatory = self.type != self.type.lower()
1430 1430 self.type = self.type.lower()
1431 1431 ## reading parameters
1432 1432 # param count
1433 1433 mancount, advcount = self._unpackheader(_fpartparamcount)
1434 1434 indebug(self.ui, b'part parameters: %i' % (mancount + advcount))
1435 1435 # param size
1436 1436 fparamsizes = _makefpartparamsizes(mancount + advcount)
1437 1437 paramsizes = self._unpackheader(fparamsizes)
1438 1438 # make it a list of couple again
1439 1439 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1440 1440 # split mandatory from advisory
1441 1441 mansizes = paramsizes[:mancount]
1442 1442 advsizes = paramsizes[mancount:]
1443 1443 # retrieve param value
1444 1444 manparams = []
1445 1445 for key, value in mansizes:
1446 1446 manparams.append((self._fromheader(key), self._fromheader(value)))
1447 1447 advparams = []
1448 1448 for key, value in advsizes:
1449 1449 advparams.append((self._fromheader(key), self._fromheader(value)))
1450 1450 self._initparams(manparams, advparams)
1451 1451 ## part payload
1452 1452 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1453 1453 # we read the data, tell it
1454 1454 self._initialized = True
1455 1455
1456 1456 def _payloadchunks(self):
1457 1457 """Generator of decoded chunks in the payload."""
1458 1458 return decodepayloadchunks(self.ui, self._fp)
1459 1459
1460 1460 def consume(self):
1461 1461 """Read the part payload until completion.
1462 1462
1463 1463 By consuming the part data, the underlying stream read offset will
1464 1464 be advanced to the next part (or end of stream).
1465 1465 """
1466 1466 if self.consumed:
1467 1467 return
1468 1468
1469 1469 chunk = self.read(32768)
1470 1470 while chunk:
1471 1471 self._pos += len(chunk)
1472 1472 chunk = self.read(32768)
1473 1473
1474 1474 def read(self, size=None):
1475 1475 """read payload data"""
1476 1476 if not self._initialized:
1477 1477 self._readheader()
1478 1478 if size is None:
1479 1479 data = self._payloadstream.read()
1480 1480 else:
1481 1481 data = self._payloadstream.read(size)
1482 1482 self._pos += len(data)
1483 1483 if size is None or len(data) < size:
1484 1484 if not self.consumed and self._pos:
1485 1485 self.ui.debug(
1486 1486 b'bundle2-input-part: total payload size %i\n' % self._pos
1487 1487 )
1488 1488 self.consumed = True
1489 1489 return data
1490 1490
1491 1491
1492 1492 class seekableunbundlepart(unbundlepart):
1493 1493 """A bundle2 part in a bundle that is seekable.
1494 1494
1495 1495 Regular ``unbundlepart`` instances can only be read once. This class
1496 1496 extends ``unbundlepart`` to enable bi-directional seeking within the
1497 1497 part.
1498 1498
1499 1499 Bundle2 part data consists of framed chunks. Offsets when seeking
1500 1500 refer to the decoded data, not the offsets in the underlying bundle2
1501 1501 stream.
1502 1502
1503 1503 To facilitate quickly seeking within the decoded data, instances of this
1504 1504 class maintain a mapping between offsets in the underlying stream and
1505 1505 the decoded payload. This mapping will consume memory in proportion
1506 1506 to the number of chunks within the payload (which almost certainly
1507 1507 increases in proportion with the size of the part).
1508 1508 """
1509 1509
1510 1510 def __init__(self, ui, header, fp):
1511 1511 # (payload, file) offsets for chunk starts.
1512 1512 self._chunkindex = []
1513 1513
1514 1514 super(seekableunbundlepart, self).__init__(ui, header, fp)
1515 1515
1516 1516 def _payloadchunks(self, chunknum=0):
1517 1517 '''seek to specified chunk and start yielding data'''
1518 1518 if len(self._chunkindex) == 0:
1519 1519 assert chunknum == 0, b'Must start with chunk 0'
1520 1520 self._chunkindex.append((0, self._tellfp()))
1521 1521 else:
1522 1522 assert chunknum < len(self._chunkindex), (
1523 1523 b'Unknown chunk %d' % chunknum
1524 1524 )
1525 1525 self._seekfp(self._chunkindex[chunknum][1])
1526 1526
1527 1527 pos = self._chunkindex[chunknum][0]
1528 1528
1529 1529 for chunk in decodepayloadchunks(self.ui, self._fp):
1530 1530 chunknum += 1
1531 1531 pos += len(chunk)
1532 1532 if chunknum == len(self._chunkindex):
1533 1533 self._chunkindex.append((pos, self._tellfp()))
1534 1534
1535 1535 yield chunk
1536 1536
1537 1537 def _findchunk(self, pos):
1538 1538 '''for a given payload position, return a chunk number and offset'''
1539 1539 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1540 1540 if ppos == pos:
1541 1541 return chunk, 0
1542 1542 elif ppos > pos:
1543 1543 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1544 1544 raise ValueError(b'Unknown chunk')
1545 1545
1546 1546 def tell(self):
1547 1547 return self._pos
1548 1548
1549 1549 def seek(self, offset, whence=os.SEEK_SET):
1550 1550 if whence == os.SEEK_SET:
1551 1551 newpos = offset
1552 1552 elif whence == os.SEEK_CUR:
1553 1553 newpos = self._pos + offset
1554 1554 elif whence == os.SEEK_END:
1555 1555 if not self.consumed:
1556 1556 # Can't use self.consume() here because it advances self._pos.
1557 1557 chunk = self.read(32768)
1558 1558 while chunk:
1559 1559 chunk = self.read(32768)
1560 1560 newpos = self._chunkindex[-1][0] - offset
1561 1561 else:
1562 1562 raise ValueError(b'Unknown whence value: %r' % (whence,))
1563 1563
1564 1564 if newpos > self._chunkindex[-1][0] and not self.consumed:
1565 1565 # Can't use self.consume() here because it advances self._pos.
1566 1566 chunk = self.read(32768)
1567 1567 while chunk:
1568 1568 chunk = self.read(32768)
1569 1569
1570 1570 if not 0 <= newpos <= self._chunkindex[-1][0]:
1571 1571 raise ValueError(b'Offset out of range')
1572 1572
1573 1573 if self._pos != newpos:
1574 1574 chunk, internaloffset = self._findchunk(newpos)
1575 1575 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1576 1576 adjust = self.read(internaloffset)
1577 1577 if len(adjust) != internaloffset:
1578 1578 raise error.Abort(_(b'Seek failed\n'))
1579 1579 self._pos = newpos
1580 1580
1581 1581 def _seekfp(self, offset, whence=0):
1582 1582 """move the underlying file pointer
1583 1583
1584 1584 This method is meant for internal usage by the bundle2 protocol only.
1585 1585 It directly manipulates the low level stream, including bundle2 level
1586 1586 instructions.
1587 1587
1588 1588 Do not use it to implement higher-level logic or methods."""
1589 1589 if self._seekable:
1590 1590 return self._fp.seek(offset, whence)
1591 1591 else:
1592 1592 raise NotImplementedError(_(b'File pointer is not seekable'))
1593 1593
1594 1594 def _tellfp(self):
1595 1595 """return the file offset, or None if file is not seekable
1596 1596
1597 1597 This method is meant for internal usage by the bundle2 protocol only.
1598 1598 It directly manipulates the low-level stream, including bundle2-level
1599 1599 instructions.
1600 1600
1601 1601 Do not use it to implement higher-level logic or methods."""
1602 1602 if self._seekable:
1603 1603 try:
1604 1604 return self._fp.tell()
1605 1605 except IOError as e:
1606 1606 if e.errno == errno.ESPIPE:
1607 1607 self._seekable = False
1608 1608 else:
1609 1609 raise
1610 1610 return None
1611 1611
1612 1612
1613 1613 # These are only the static capabilities.
1614 1614 # Check the 'getrepocaps' function for the rest.
1615 1615 capabilities = {
1616 1616 b'HG20': (),
1617 1617 b'bookmarks': (),
1618 1618 b'error': (b'abort', b'unsupportedcontent', b'pushraced', b'pushkey'),
1619 1619 b'listkeys': (),
1620 1620 b'pushkey': (),
1621 1621 b'digests': tuple(sorted(util.DIGESTS.keys())),
1622 1622 b'remote-changegroup': (b'http', b'https'),
1623 1623 b'hgtagsfnodes': (),
1624 1624 b'phases': (b'heads',),
1625 1625 b'stream': (b'v2',),
1626 1626 }
1627 1627
1628 1628
1629 1629 def getrepocaps(repo, allowpushback=False, role=None):
1630 1630 """return the bundle2 capabilities for a given repo
1631 1631
1632 1632 Exists to allow extensions (like evolution) to mutate the capabilities.
1633 1633
1634 1634 The returned value is used for servers advertising their capabilities as
1635 1635 well as clients advertising their capabilities to servers as part of
1636 1636 bundle2 requests. The ``role`` argument specifies which is which.
1637 1637 """
1638 1638 if role not in (b'client', b'server'):
1639 1639 raise error.ProgrammingError(b'role argument must be client or server')
1640 1640
1641 1641 caps = capabilities.copy()
1642 1642 caps[b'changegroup'] = tuple(
1643 1643 sorted(changegroup.supportedincomingversions(repo))
1644 1644 )
1645 1645 if obsolete.isenabled(repo, obsolete.exchangeopt):
1646 1646 supportedformat = tuple(b'V%i' % v for v in obsolete.formats)
1647 1647 caps[b'obsmarkers'] = supportedformat
1648 1648 if allowpushback:
1649 1649 caps[b'pushback'] = ()
1650 1650 cpmode = repo.ui.config(b'server', b'concurrent-push-mode')
1651 1651 if cpmode == b'check-related':
1652 1652 caps[b'checkheads'] = (b'related',)
1653 1653 if b'phases' in repo.ui.configlist(b'devel', b'legacy.exchange'):
1654 1654 caps.pop(b'phases')
1655 1655
1656 1656 # Don't advertise stream clone support in server mode if not configured.
1657 1657 if role == b'server':
1658 1658 streamsupported = repo.ui.configbool(
1659 1659 b'server', b'uncompressed', untrusted=True
1660 1660 )
1661 1661 featuresupported = repo.ui.configbool(b'server', b'bundle2.stream')
1662 1662
1663 1663 if not streamsupported or not featuresupported:
1664 1664 caps.pop(b'stream')
1665 1665 # Else always advertise support on client, because payload support
1666 1666 # should always be advertised.
1667 1667
1668 1668 if repo.ui.configbool(b'experimental', b'stream-v3'):
1669 1669 if b'stream' in caps:
1670 1670 caps[b'stream'] += (b'v3-exp',)
1671 1671
1672 1672 # b'rev-branch-cache' is no longer advertised, but still supported
1673 1673 # for legacy clients.
1674 1674
1675 1675 return caps
1676 1676
1677 1677
1678 1678 def bundle2caps(remote):
1679 1679 """return the bundle capabilities of a peer as dict"""
1680 1680 raw = remote.capable(b'bundle2')
1681 1681 if not raw and raw != b'':
1682 1682 return {}
1683 1683 capsblob = urlreq.unquote(remote.capable(b'bundle2'))
1684 1684 return decodecaps(capsblob)
1685 1685
1686 1686
1687 1687 def obsmarkersversion(caps):
1688 1688 """extract the list of supported obsmarkers versions from a bundle2caps dict"""
1689 1689 obscaps = caps.get(b'obsmarkers', ())
1690 1690 return [int(c[1:]) for c in obscaps if c.startswith(b'V')]
1691 1691
1692 1692
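# Illustrative sketch (editor addition, not part of the change under review):
# how a capabilities dict flows through the helpers above. The dict literal is
# a hypothetical example of decodecaps()/getrepocaps() output.
#
#     caps = {b'obsmarkers': (b'V0', b'V1')}   # e.g. from bundle2caps(remote)
#     obsmarkersversion(caps)                  # -> [0, 1]
#     obsolete.commonversion([0, 1])           # format shared with us, or None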
1693 1693 def writenewbundle(
1694 1694 ui,
1695 1695 repo,
1696 1696 source,
1697 1697 filename,
1698 1698 bundletype,
1699 1699 outgoing,
1700 1700 opts,
1701 1701 vfs=None,
1702 1702 compression=None,
1703 1703 compopts=None,
1704 1704 allow_internal=False,
1705 1705 ):
1706 1706 if bundletype.startswith(b'HG10'):
1707 1707 cg = changegroup.makechangegroup(repo, outgoing, b'01', source)
1708 1708 return writebundle(
1709 1709 ui,
1710 1710 cg,
1711 1711 filename,
1712 1712 bundletype,
1713 1713 vfs=vfs,
1714 1714 compression=compression,
1715 1715 compopts=compopts,
1716 1716 )
1717 1717 elif not bundletype.startswith(b'HG20'):
1718 1718 raise error.ProgrammingError(b'unknown bundle type: %s' % bundletype)
1719 1719
1720 1720 # enforce that no internal phases are to be bundled
1721 1721 bundled_internal = repo.revs(b"%ln and _internal()", outgoing.ancestorsof)
1722 1722 if bundled_internal and not allow_internal:
1723 1723 count = len(repo.revs(b'%ln and _internal()', outgoing.missing))
1724 1724 msg = "backup bundle would contain %d internal changesets"
1725 1725 msg %= count
1726 1726 raise error.ProgrammingError(msg)
1727 1727
1728 1728 caps = {}
1729 1729 if opts.get(b'obsolescence', False):
1730 1730 caps[b'obsmarkers'] = (b'V1',)
1731 1731 stream_version = opts.get(b'stream', b"")
1732 1732 if stream_version == b"v2":
1733 1733 caps[b'stream'] = [b'v2']
1734 1734 elif stream_version == b"v3-exp":
1735 1735 caps[b'stream'] = [b'v3-exp']
1736 1736 bundle = bundle20(ui, caps)
1737 1737 bundle.setcompression(compression, compopts)
1738 1738 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1739 1739 chunkiter = bundle.getchunks()
1740 1740
1741 1741 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1742 1742
1743 1743
1744 1744 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1745 1745 # We should eventually reconcile this logic with the one behind
1746 1746 # 'exchange.getbundle2partsgenerator'.
1747 1747 #
1748 1748 # The types of input from 'getbundle' and 'writenewbundle' are a bit
1749 1749 # different right now, so we keep them separate for now for the sake of
1750 1750 # simplicity.
1751 1751
1752 1752 # we might not always want a changegroup in such bundle, for example in
1753 1753 # stream bundles
1754 1754 if opts.get(b'changegroup', True):
1755 1755 cgversion = opts.get(b'cg.version')
1756 1756 if cgversion is None:
1757 1757 cgversion = changegroup.safeversion(repo)
1758 1758 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1759 1759 part = bundler.newpart(b'changegroup', data=cg.getchunks())
1760 1760 part.addparam(b'version', cg.version)
1761 1761 if b'clcount' in cg.extras:
1762 1762 part.addparam(
1763 1763 b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
1764 1764 )
1765 1765 if opts.get(b'phases'):
1766 1766 target_phase = phases.draft
1767 1767 for head in outgoing.ancestorsof:
1768 1768 target_phase = max(target_phase, repo[head].phase())
1769 1769 if target_phase > phases.draft:
1770 1770 part.addparam(
1771 1771 b'targetphase',
1772 1772 b'%d' % target_phase,
1773 1773 mandatory=False,
1774 1774 )
1775 1775 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
1776 1776 part.addparam(b'exp-sidedata', b'1')
1777 1777
1778 1778 if opts.get(b'stream', b"") == b"v2":
1779 1779 addpartbundlestream2(bundler, repo, stream=True)
1780 1780
1781 1781 if opts.get(b'stream', b"") == b"v3-exp":
1782 1782 addpartbundlestream2(bundler, repo, stream=True)
1783 1783
1784 1784 if opts.get(b'tagsfnodescache', True):
1785 1785 addparttagsfnodescache(repo, bundler, outgoing)
1786 1786
1787 1787 if opts.get(b'revbranchcache', True):
1788 1788 addpartrevbranchcache(repo, bundler, outgoing)
1789 1789
1790 1790 if opts.get(b'obsolescence', False):
1791 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1791 obsmarkers = repo.obsstore.relevantmarkers(nodes=outgoing.missing)
1792 1792 buildobsmarkerspart(
1793 1793 bundler,
1794 1794 obsmarkers,
1795 1795 mandatory=opts.get(b'obsolescence-mandatory', True),
1796 1796 )
1797 1797
1798 1798 if opts.get(b'phases', False):
1799 1799 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1800 1800 phasedata = phases.binaryencode(headsbyphase)
1801 1801 bundler.newpart(b'phase-heads', data=phasedata)
1802 1802
1803 1803
1804 1804 def addparttagsfnodescache(repo, bundler, outgoing):
1805 1805 # we include the tags fnode cache for the bundle changeset
1806 1806 # (as an optional part)
1807 1807 cache = tags.hgtagsfnodescache(repo.unfiltered())
1808 1808 chunks = []
1809 1809
1810 1810 # .hgtags fnodes are only relevant for head changesets. While we could
1811 1811 # transfer values for all known nodes, there will likely be little to
1812 1812 # no benefit.
1813 1813 #
1814 1814 # We don't bother using a generator to produce output data because
1815 1815 # a) we only have 40 bytes per head and even esoteric numbers of heads
1816 1816 # consume little memory (1M heads is 40MB) b) we don't want to send the
1817 1817 # part if we don't have entries and knowing if we have entries requires
1818 1818 # cache lookups.
1819 1819 for node in outgoing.ancestorsof:
1820 1820 # Don't compute missing, as this may slow down serving.
1821 1821 fnode = cache.getfnode(node, computemissing=False)
1822 1822 if fnode:
1823 1823 chunks.extend([node, fnode])
1824 1824
1825 1825 if chunks:
1826 1826 bundler.newpart(
1827 1827 b'hgtagsfnodes',
1828 1828 mandatory=False,
1829 1829 data=b''.join(chunks),
1830 1830 )
1831 1831
1832 1832
1833 1833 def addpartrevbranchcache(repo, bundler, outgoing):
1834 1834 # we include the rev branch cache for the bundle changeset
1835 1835 # (as an optional part)
1836 1836 cache = repo.revbranchcache()
1837 1837 cl = repo.unfiltered().changelog
1838 1838 branchesdata = collections.defaultdict(lambda: (set(), set()))
1839 1839 for node in outgoing.missing:
1840 1840 branch, close = cache.branchinfo(cl.rev(node))
1841 1841 branchesdata[branch][close].add(node)
1842 1842
1843 1843 def generate():
1844 1844 for branch, (nodes, closed) in sorted(branchesdata.items()):
1845 1845 utf8branch = encoding.fromlocal(branch)
1846 1846 yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
1847 1847 yield utf8branch
1848 1848 for n in sorted(nodes):
1849 1849 yield n
1850 1850 for n in sorted(closed):
1851 1851 yield n
1852 1852
1853 1853 bundler.newpart(b'cache:rev-branch-cache', data=generate(), mandatory=False)
1854 1854
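# Illustrative note (editor addition): each record produced by generate()
# above is one ``rbcstruct`` header (">III": branch name length, node count,
# closing node count) followed by the UTF-8 branch name and then the raw node
# ids, non-closing nodes first. This merely restates the code above.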
1855 1855
1856 1856 def _formatrequirementsspec(requirements):
1857 1857 requirements = [req for req in requirements if req != b"shared"]
1858 1858 return urlreq.quote(b','.join(sorted(requirements)))
1859 1859
1860 1860
1861 1861 def _formatrequirementsparams(requirements):
1862 1862 requirements = _formatrequirementsspec(requirements)
1863 1863 params = b"%s%s" % (urlreq.quote(b"requirements="), requirements)
1864 1864 return params
1865 1865
1866 1866
1867 1867 def format_remote_wanted_sidedata(repo):
1868 1868 """Formats a repo's wanted sidedata categories into a bytestring for
1869 1869 capabilities exchange."""
1870 1870 wanted = b""
1871 1871 if repo._wanted_sidedata:
1872 1872 wanted = b','.join(
1873 1873 pycompat.bytestr(c) for c in sorted(repo._wanted_sidedata)
1874 1874 )
1875 1875 return wanted
1876 1876
1877 1877
1878 1878 def read_remote_wanted_sidedata(remote):
1879 1879 sidedata_categories = remote.capable(b'exp-wanted-sidedata')
1880 1880 return read_wanted_sidedata(sidedata_categories)
1881 1881
1882 1882
1883 1883 def read_wanted_sidedata(formatted):
1884 1884 if formatted:
1885 1885 return set(formatted.split(b','))
1886 1886 return set()
1887 1887
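# Illustrative sketch (editor addition): the wanted-sidedata capability is a
# plain comma-separated byte string, so the two helpers above round-trip as
# follows (the category names are hypothetical):
#
#     format_remote_wanted_sidedata(repo)     # e.g. b'cat-a,cat-b'
#     read_wanted_sidedata(b'cat-a,cat-b')    # -> {b'cat-a', b'cat-b'}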
1888 1888
1889 1889 def addpartbundlestream2(bundler, repo, **kwargs):
1890 1890 if not kwargs.get('stream', False):
1891 1891 return
1892 1892
1893 1893 if not streamclone.allowservergeneration(repo):
1894 1894 msg = _(b'stream data requested but server does not allow this feature')
1895 1895 hint = _(b'the client seems buggy')
1896 1896 raise error.Abort(msg, hint=hint)
1897 1897 if b'stream' not in bundler.capabilities:
1898 1898 msg = _(
1899 1899 b'stream data requested but supported streaming clone versions were not specified'
1900 1900 )
1901 1901 hint = _(b'the client seems buggy')
1902 1902 raise error.Abort(msg, hint=hint)
1903 1903 client_supported = set(bundler.capabilities[b'stream'])
1904 1904 server_supported = set(getrepocaps(repo, role=b'client').get(b'stream', []))
1905 1905 common_supported = client_supported & server_supported
1906 1906 if not common_supported:
1907 1907 msg = _(b'no common supported version with the client: %s; %s')
1908 1908 str_server = b','.join(sorted(server_supported))
1909 1909 str_client = b','.join(sorted(client_supported))
1910 1910 msg %= (str_server, str_client)
1911 1911 raise error.Abort(msg)
1912 1912 version = max(common_supported)
1913 1913
1914 1914 # Stream clones don't compress well. And compression undermines a
1915 1915 # goal of stream clones, which is to be fast. Communicate the desire
1916 1916 # to avoid compression to consumers of the bundle.
1917 1917 bundler.prefercompressed = False
1918 1918
1919 1919 # get the includes and excludes
1920 1920 includepats = kwargs.get('includepats')
1921 1921 excludepats = kwargs.get('excludepats')
1922 1922
1923 1923 narrowstream = repo.ui.configbool(
1924 1924 b'experimental', b'server.stream-narrow-clones'
1925 1925 )
1926 1926
1927 1927 if (includepats or excludepats) and not narrowstream:
1928 1928 raise error.Abort(_(b'server does not support narrow stream clones'))
1929 1929
1930 1930 includeobsmarkers = False
1931 1931 if repo.obsstore:
1932 1932 remoteversions = obsmarkersversion(bundler.capabilities)
1933 1933 if not remoteversions:
1934 1934 raise error.Abort(
1935 1935 _(
1936 1936 b'server has obsolescence markers, but client '
1937 1937 b'cannot receive them via stream clone'
1938 1938 )
1939 1939 )
1940 1940 elif repo.obsstore._version in remoteversions:
1941 1941 includeobsmarkers = True
1942 1942
1943 1943 if version == b"v2":
1944 1944 filecount, bytecount, it = streamclone.generatev2(
1945 1945 repo, includepats, excludepats, includeobsmarkers
1946 1946 )
1947 1947 requirements = streamclone.streamed_requirements(repo)
1948 1948 requirements = _formatrequirementsspec(requirements)
1949 1949 part = bundler.newpart(b'stream2', data=it)
1950 1950 part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True)
1951 1951 part.addparam(b'filecount', b'%d' % filecount, mandatory=True)
1952 1952 part.addparam(b'requirements', requirements, mandatory=True)
1953 1953 elif version == b"v3-exp":
1954 1954 it = streamclone.generatev3(
1955 1955 repo, includepats, excludepats, includeobsmarkers
1956 1956 )
1957 1957 requirements = streamclone.streamed_requirements(repo)
1958 1958 requirements = _formatrequirementsspec(requirements)
1959 1959 part = bundler.newpart(b'stream3-exp', data=it)
1960 1960 part.addparam(b'requirements', requirements, mandatory=True)
1961 1961
1962 1962
1963 1963 def buildobsmarkerspart(bundler, markers, mandatory=True):
1964 1964 """add an obsmarker part to the bundler with <markers>
1965 1965
1966 1966 No part is created if markers is empty.
1967 1967 Raises ValueError if the bundler doesn't support any known obsmarker format.
1968 1968 """
1969 1969 if not markers:
1970 1970 return None
1971 1971
1972 1972 remoteversions = obsmarkersversion(bundler.capabilities)
1973 1973 version = obsolete.commonversion(remoteversions)
1974 1974 if version is None:
1975 1975 raise ValueError(b'bundler does not support common obsmarker format')
1976 1976 stream = obsolete.encodemarkers(markers, True, version=version)
1977 1977 return bundler.newpart(b'obsmarkers', data=stream, mandatory=mandatory)
1978 1978
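# Illustrative sketch (editor addition): the typical caller pattern, as seen
# in _addpartsfromopts() above, is
#
#     markers = repo.obsstore.relevantmarkers(nodes=outgoing.missing)
#     buildobsmarkerspart(bundler, markers, mandatory=False)
#
# where the bundler's capabilities decide which obsmarker format is encoded.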
1979 1979
1980 1980 def writebundle(
1981 1981 ui, cg, filename, bundletype, vfs=None, compression=None, compopts=None
1982 1982 ):
1983 1983 """Write a bundle file and return its filename.
1984 1984
1985 1985 Existing files will not be overwritten.
1986 1986 If no filename is specified, a temporary file is created.
1987 1987 bz2 compression can be turned off.
1988 1988 The bundle file will be deleted in case of errors.
1989 1989 """
1990 1990
1991 1991 if bundletype == b"HG20":
1992 1992 bundle = bundle20(ui)
1993 1993 bundle.setcompression(compression, compopts)
1994 1994 part = bundle.newpart(b'changegroup', data=cg.getchunks())
1995 1995 part.addparam(b'version', cg.version)
1996 1996 if b'clcount' in cg.extras:
1997 1997 part.addparam(
1998 1998 b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
1999 1999 )
2000 2000 chunkiter = bundle.getchunks()
2001 2001 else:
2002 2002 # compression argument is only for the bundle2 case
2003 2003 assert compression is None
2004 2004 if cg.version != b'01':
2005 2005 raise error.Abort(
2006 2006 _(b'old bundle types only support v1 changegroups')
2007 2007 )
2008 2008
2009 2009 # HG20 is the case without 2 values to unpack, but is handled above.
2010 2010 # pytype: disable=bad-unpacking
2011 2011 header, comp = bundletypes[bundletype]
2012 2012 # pytype: enable=bad-unpacking
2013 2013
2014 2014 if comp not in util.compengines.supportedbundletypes:
2015 2015 raise error.Abort(_(b'unknown stream compression type: %s') % comp)
2016 2016 compengine = util.compengines.forbundletype(comp)
2017 2017
2018 2018 def chunkiter():
2019 2019 yield header
2020 2020 for chunk in compengine.compressstream(cg.getchunks(), compopts):
2021 2021 yield chunk
2022 2022
2023 2023 chunkiter = chunkiter()
2024 2024
2025 2025 # parse the changegroup data, otherwise we will block
2026 2026 # in case of sshrepo because we don't know the end of the stream
2027 2027 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
2028 2028
2029 2029
2030 2030 def combinechangegroupresults(op):
2031 2031 """logic to combine 0 or more addchangegroup results into one"""
2032 2032 results = [r.get(b'return', 0) for r in op.records[b'changegroup']]
2033 2033 changedheads = 0
2034 2034 result = 1
2035 2035 for ret in results:
2036 2036 # If any changegroup result is 0, return 0
2037 2037 if ret == 0:
2038 2038 result = 0
2039 2039 break
2040 2040 if ret < -1:
2041 2041 changedheads += ret + 1
2042 2042 elif ret > 1:
2043 2043 changedheads += ret - 1
2044 2044 if changedheads > 0:
2045 2045 result = 1 + changedheads
2046 2046 elif changedheads < 0:
2047 2047 result = -1 + changedheads
2048 2048 return result
2049 2049
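# Worked example (editor addition): with the addchangegroup() convention used
# above (1 == head count unchanged, 1+n == n new heads, -1-n == n heads
# removed, 0 == error), records of [2, 3] combine to 4 (three new heads in
# total) and [1, -3] combine to -3 (two heads removed); any 0 short-circuits
# the combined result to 0.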
2050 2050
2051 2051 @parthandler(
2052 2052 b'changegroup',
2053 2053 (
2054 2054 b'version',
2055 2055 b'nbchanges',
2056 2056 b'exp-sidedata',
2057 2057 b'exp-wanted-sidedata',
2058 2058 b'treemanifest',
2059 2059 b'targetphase',
2060 2060 ),
2061 2061 )
2062 2062 def handlechangegroup(op, inpart):
2063 2063 """apply a changegroup part on the repo"""
2064 2064 from . import localrepo
2065 2065
2066 2066 tr = op.gettransaction()
2067 2067 unpackerversion = inpart.params.get(b'version', b'01')
2068 2068 # We should raise an appropriate exception here
2069 2069 cg = changegroup.getunbundler(unpackerversion, inpart, None)
2070 2070 # the source and url passed here are overwritten by the ones contained in
2071 2071 # the transaction.hookargs argument, so 'bundle2' is a placeholder
2072 2072 nbchangesets = None
2073 2073 if b'nbchanges' in inpart.params:
2074 2074 nbchangesets = int(inpart.params.get(b'nbchanges'))
2075 2075 if b'treemanifest' in inpart.params and not scmutil.istreemanifest(op.repo):
2076 2076 if len(op.repo.changelog) != 0:
2077 2077 raise error.Abort(
2078 2078 _(
2079 2079 b"bundle contains tree manifests, but local repo is "
2080 2080 b"non-empty and does not use tree manifests"
2081 2081 )
2082 2082 )
2083 2083 op.repo.requirements.add(requirements.TREEMANIFEST_REQUIREMENT)
2084 2084 op.repo.svfs.options = localrepo.resolvestorevfsoptions(
2085 2085 op.repo.ui, op.repo.requirements, op.repo.features
2086 2086 )
2087 2087 scmutil.writereporequirements(op.repo)
2088 2088
2089 2089 extrakwargs = {}
2090 2090 targetphase = inpart.params.get(b'targetphase')
2091 2091 if targetphase is not None:
2092 2092 extrakwargs['targetphase'] = int(targetphase)
2093 2093
2094 2094 remote_sidedata = inpart.params.get(b'exp-wanted-sidedata')
2095 2095 extrakwargs['sidedata_categories'] = read_wanted_sidedata(remote_sidedata)
2096 2096
2097 2097 ret = _processchangegroup(
2098 2098 op,
2099 2099 cg,
2100 2100 tr,
2101 2101 op.source,
2102 2102 b'bundle2',
2103 2103 expectedtotal=nbchangesets,
2104 2104 **extrakwargs
2105 2105 )
2106 2106 if op.reply is not None:
2107 2107 # This is definitely not the final form of this
2108 2108 # return. But one needs to start somewhere.
2109 2109 part = op.reply.newpart(b'reply:changegroup', mandatory=False)
2110 2110 part.addparam(
2111 2111 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2112 2112 )
2113 2113 part.addparam(b'return', b'%i' % ret, mandatory=False)
2114 2114 assert not inpart.read()
2115 2115
2116 2116
2117 2117 _remotechangegroupparams = tuple(
2118 2118 [b'url', b'size', b'digests']
2119 2119 + [b'digest:%s' % k for k in util.DIGESTS.keys()]
2120 2120 )
2121 2121
2122 2122
2123 2123 @parthandler(b'remote-changegroup', _remotechangegroupparams)
2124 2124 def handleremotechangegroup(op, inpart):
2125 2125 """apply a bundle10 on the repo, given an url and validation information
2126 2126
2127 2127 All the information about the remote bundle to import is given as
2128 2128 parameters. The parameters include:
2129 2129 - url: the url to the bundle10.
2130 2130 - size: the bundle10 file size. It is used to validate what was
2131 2131 retrieved by the client matches the server knowledge about the bundle.
2132 2132 - digests: a space separated list of the digest types provided as
2133 2133 parameters.
2134 2134 - digest:<digest-type>: the hexadecimal representation of the digest with
2135 2135 that name. Like the size, it is used to validate what was retrieved by
2136 2136 the client matches what the server knows about the bundle.
2137 2137
2138 2138 When multiple digest types are given, all of them are checked.
2139 2139 """
2140 2140 try:
2141 2141 raw_url = inpart.params[b'url']
2142 2142 except KeyError:
2143 2143 raise error.Abort(_(b'remote-changegroup: missing "%s" param') % b'url')
2144 2144 parsed_url = urlutil.url(raw_url)
2145 2145 if parsed_url.scheme not in capabilities[b'remote-changegroup']:
2146 2146 raise error.Abort(
2147 2147 _(b'remote-changegroup does not support %s urls')
2148 2148 % parsed_url.scheme
2149 2149 )
2150 2150
2151 2151 try:
2152 2152 size = int(inpart.params[b'size'])
2153 2153 except ValueError:
2154 2154 raise error.Abort(
2155 2155 _(b'remote-changegroup: invalid value for param "%s"') % b'size'
2156 2156 )
2157 2157 except KeyError:
2158 2158 raise error.Abort(
2159 2159 _(b'remote-changegroup: missing "%s" param') % b'size'
2160 2160 )
2161 2161
2162 2162 digests = {}
2163 2163 for typ in inpart.params.get(b'digests', b'').split():
2164 2164 param = b'digest:%s' % typ
2165 2165 try:
2166 2166 value = inpart.params[param]
2167 2167 except KeyError:
2168 2168 raise error.Abort(
2169 2169 _(b'remote-changegroup: missing "%s" param') % param
2170 2170 )
2171 2171 digests[typ] = value
2172 2172
2173 2173 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
2174 2174
2175 2175 tr = op.gettransaction()
2176 2176 from . import exchange
2177 2177
2178 2178 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
2179 2179 if not isinstance(cg, changegroup.cg1unpacker):
2180 2180 raise error.Abort(
2181 2181 _(b'%s: not a bundle version 1.0') % urlutil.hidepassword(raw_url)
2182 2182 )
2183 2183 ret = _processchangegroup(op, cg, tr, op.source, b'bundle2')
2184 2184 if op.reply is not None:
2185 2185 # This is definitely not the final form of this
2186 2186 # return. But one needs to start somewhere.
2187 2187 part = op.reply.newpart(b'reply:changegroup')
2188 2188 part.addparam(
2189 2189 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2190 2190 )
2191 2191 part.addparam(b'return', b'%i' % ret, mandatory=False)
2192 2192 try:
2193 2193 real_part.validate()
2194 2194 except error.Abort as e:
2195 2195 raise error.Abort(
2196 2196 _(b'bundle at %s is corrupted:\n%s')
2197 2197 % (urlutil.hidepassword(raw_url), e.message)
2198 2198 )
2199 2199 assert not inpart.read()
2200 2200
2201 2201
2202 2202 @parthandler(b'reply:changegroup', (b'return', b'in-reply-to'))
2203 2203 def handlereplychangegroup(op, inpart):
2204 2204 ret = int(inpart.params[b'return'])
2205 2205 replyto = int(inpart.params[b'in-reply-to'])
2206 2206 op.records.add(b'changegroup', {b'return': ret}, replyto)
2207 2207
2208 2208
2209 2209 @parthandler(b'check:bookmarks')
2210 2210 def handlecheckbookmarks(op, inpart):
2211 2211 """check location of bookmarks
2212 2212
2213 2213 This part is to be used to detect push races regarding bookmarks. It
2214 2214 contains binary encoded (bookmark, node) tuples. If the local state does
2215 2215 not match the one in the part, a PushRaced exception is raised.
2216 2216 """
2217 2217 bookdata = bookmarks.binarydecode(op.repo, inpart)
2218 2218
2219 2219 msgstandard = (
2220 2220 b'remote repository changed while pushing - please try again '
2221 2221 b'(bookmark "%s" move from %s to %s)'
2222 2222 )
2223 2223 msgmissing = (
2224 2224 b'remote repository changed while pushing - please try again '
2225 2225 b'(bookmark "%s" is missing, expected %s)'
2226 2226 )
2227 2227 msgexist = (
2228 2228 b'remote repository changed while pushing - please try again '
2229 2229 b'(bookmark "%s" set on %s, expected missing)'
2230 2230 )
2231 2231 for book, node in bookdata:
2232 2232 currentnode = op.repo._bookmarks.get(book)
2233 2233 if currentnode != node:
2234 2234 if node is None:
2235 2235 finalmsg = msgexist % (book, short(currentnode))
2236 2236 elif currentnode is None:
2237 2237 finalmsg = msgmissing % (book, short(node))
2238 2238 else:
2239 2239 finalmsg = msgstandard % (
2240 2240 book,
2241 2241 short(node),
2242 2242 short(currentnode),
2243 2243 )
2244 2244 raise error.PushRaced(finalmsg)
2245 2245
2246 2246
2247 2247 @parthandler(b'check:heads')
2248 2248 def handlecheckheads(op, inpart):
2249 2249 """check that head of the repo did not change
2250 2250
2251 2251 This is used to detect a push race when using unbundle.
2252 2252 This replaces the "heads" argument of unbundle."""
2253 2253 h = inpart.read(20)
2254 2254 heads = []
2255 2255 while len(h) == 20:
2256 2256 heads.append(h)
2257 2257 h = inpart.read(20)
2258 2258 assert not h
2259 2259 # Trigger a transaction so that we are guaranteed to have the lock now.
2260 2260 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2261 2261 op.gettransaction()
2262 2262 if sorted(heads) != sorted(op.repo.heads()):
2263 2263 raise error.PushRaced(
2264 2264 b'remote repository changed while pushing - please try again'
2265 2265 )
2266 2266
2267 2267
2268 2268 @parthandler(b'check:updated-heads')
2269 2269 def handlecheckupdatedheads(op, inpart):
2270 2270 """check for race on the heads touched by a push
2271 2271
2272 2272 This is similar to 'check:heads' but focuses on the heads actually updated
2273 2273 during the push. If other activity happens on unrelated heads, it is
2274 2274 ignored.
2275 2275
2276 2276 This allows servers with high traffic to avoid push contention as long as
2277 2277 only unrelated parts of the graph are involved.
2278 2278 h = inpart.read(20)
2279 2279 heads = []
2280 2280 while len(h) == 20:
2281 2281 heads.append(h)
2282 2282 h = inpart.read(20)
2283 2283 assert not h
2284 2284 # trigger a transaction so that we are guaranteed to have the lock now.
2285 2285 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2286 2286 op.gettransaction()
2287 2287
2288 2288 currentheads = set()
2289 2289 for ls in op.repo.branchmap().iterheads():
2290 2290 currentheads.update(ls)
2291 2291
2292 2292 for h in heads:
2293 2293 if h not in currentheads:
2294 2294 raise error.PushRaced(
2295 2295 b'remote repository changed while pushing - '
2296 2296 b'please try again'
2297 2297 )
2298 2298
2299 2299
2300 2300 @parthandler(b'check:phases')
2301 2301 def handlecheckphases(op, inpart):
2302 2302 """check that phase boundaries of the repository did not change
2303 2303
2304 2304 This is used to detect a push race.
2305 2305 """
2306 2306 phasetonodes = phases.binarydecode(inpart)
2307 2307 unfi = op.repo.unfiltered()
2308 2308 cl = unfi.changelog
2309 2309 phasecache = unfi._phasecache
2310 2310 msg = (
2311 2311 b'remote repository changed while pushing - please try again '
2312 2312 b'(%s is %s expected %s)'
2313 2313 )
2314 2314 for expectedphase, nodes in phasetonodes.items():
2315 2315 for n in nodes:
2316 2316 actualphase = phasecache.phase(unfi, cl.rev(n))
2317 2317 if actualphase != expectedphase:
2318 2318 finalmsg = msg % (
2319 2319 short(n),
2320 2320 phases.phasenames[actualphase],
2321 2321 phases.phasenames[expectedphase],
2322 2322 )
2323 2323 raise error.PushRaced(finalmsg)
2324 2324
2325 2325
2326 2326 @parthandler(b'output')
2327 2327 def handleoutput(op, inpart):
2328 2328 """forward output captured on the server to the client"""
2329 2329 for line in inpart.read().splitlines():
2330 2330 op.ui.status(_(b'remote: %s\n') % line)
2331 2331
2332 2332
2333 2333 @parthandler(b'replycaps')
2334 2334 def handlereplycaps(op, inpart):
2335 2335 """Notify that a reply bundle should be created
2336 2336
2337 2337 The payload contains the capabilities information for the reply"""
2338 2338 caps = decodecaps(inpart.read())
2339 2339 if op.reply is None:
2340 2340 op.reply = bundle20(op.ui, caps)
2341 2341
2342 2342
2343 2343 class AbortFromPart(error.Abort):
2344 2344 """Sub-class of Abort that denotes an error from a bundle2 part."""
2345 2345
2346 2346
2347 2347 @parthandler(b'error:abort', (b'message', b'hint'))
2348 2348 def handleerrorabort(op, inpart):
2349 2349 """Used to transmit abort error over the wire"""
2350 2350 raise AbortFromPart(
2351 2351 inpart.params[b'message'], hint=inpart.params.get(b'hint')
2352 2352 )
2353 2353
2354 2354
2355 2355 @parthandler(
2356 2356 b'error:pushkey',
2357 2357 (b'namespace', b'key', b'new', b'old', b'ret', b'in-reply-to'),
2358 2358 )
2359 2359 def handleerrorpushkey(op, inpart):
2360 2360 """Used to transmit failure of a mandatory pushkey over the wire"""
2361 2361 kwargs = {}
2362 2362 for name in (b'namespace', b'key', b'new', b'old', b'ret'):
2363 2363 value = inpart.params.get(name)
2364 2364 if value is not None:
2365 2365 kwargs[name] = value
2366 2366 raise error.PushkeyFailed(
2367 2367 inpart.params[b'in-reply-to'], **pycompat.strkwargs(kwargs)
2368 2368 )
2369 2369
2370 2370
2371 2371 @parthandler(b'error:unsupportedcontent', (b'parttype', b'params'))
2372 2372 def handleerrorunsupportedcontent(op, inpart):
2373 2373 """Used to transmit unknown content error over the wire"""
2374 2374 kwargs = {}
2375 2375 parttype = inpart.params.get(b'parttype')
2376 2376 if parttype is not None:
2377 2377 kwargs[b'parttype'] = parttype
2378 2378 params = inpart.params.get(b'params')
2379 2379 if params is not None:
2380 2380 kwargs[b'params'] = params.split(b'\0')
2381 2381
2382 2382 raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))
2383 2383
2384 2384
2385 2385 @parthandler(b'error:pushraced', (b'message',))
2386 2386 def handleerrorpushraced(op, inpart):
2387 2387 """Used to transmit push race error over the wire"""
2388 2388 raise error.ResponseError(_(b'push failed:'), inpart.params[b'message'])
2389 2389
2390 2390
2391 2391 @parthandler(b'listkeys', (b'namespace',))
2392 2392 def handlelistkeys(op, inpart):
2393 2393 """retrieve pushkey namespace content stored in a bundle2"""
2394 2394 namespace = inpart.params[b'namespace']
2395 2395 r = pushkey.decodekeys(inpart.read())
2396 2396 op.records.add(b'listkeys', (namespace, r))
2397 2397
2398 2398
2399 2399 @parthandler(b'pushkey', (b'namespace', b'key', b'old', b'new'))
2400 2400 def handlepushkey(op, inpart):
2401 2401 """process a pushkey request"""
2402 2402 dec = pushkey.decode
2403 2403 namespace = dec(inpart.params[b'namespace'])
2404 2404 key = dec(inpart.params[b'key'])
2405 2405 old = dec(inpart.params[b'old'])
2406 2406 new = dec(inpart.params[b'new'])
2407 2407 # Grab the transaction to ensure that we have the lock before performing the
2408 2408 # pushkey.
2409 2409 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2410 2410 op.gettransaction()
2411 2411 ret = op.repo.pushkey(namespace, key, old, new)
2412 2412 record = {b'namespace': namespace, b'key': key, b'old': old, b'new': new}
2413 2413 op.records.add(b'pushkey', record)
2414 2414 if op.reply is not None:
2415 2415 rpart = op.reply.newpart(b'reply:pushkey')
2416 2416 rpart.addparam(
2417 2417 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2418 2418 )
2419 2419 rpart.addparam(b'return', b'%i' % ret, mandatory=False)
2420 2420 if inpart.mandatory and not ret:
2421 2421 kwargs = {}
2422 2422 for key in (b'namespace', b'key', b'new', b'old', b'ret'):
2423 2423 if key in inpart.params:
2424 2424 kwargs[key] = inpart.params[key]
2425 2425 raise error.PushkeyFailed(
2426 2426 partid=b'%d' % inpart.id, **pycompat.strkwargs(kwargs)
2427 2427 )
2428 2428
2429 2429
2430 2430 @parthandler(b'bookmarks')
2431 2431 def handlebookmark(op, inpart):
2432 2432 """transmit bookmark information
2433 2433
2434 2434 The part contains binary encoded bookmark information.
2435 2435
2436 2436 The exact behavior of this part can be controlled by the 'bookmarks' mode
2437 2437 on the bundle operation.
2438 2438
2439 2439 When mode is 'apply' (the default) the bookmark information is applied as
2440 2440 is to the unbundling repository. Make sure a 'check:bookmarks' part is
2441 2441 issued earlier to check for push races in such an update. This behavior is
2442 2442 suitable for pushing.
2443 2443
2444 2444 When mode is 'records', the information is recorded into the 'bookmarks'
2445 2445 records of the bundle operation. This behavior is suitable for pulling.
2446 2446 """
2447 2447 changes = bookmarks.binarydecode(op.repo, inpart)
2448 2448
2449 2449 pushkeycompat = op.repo.ui.configbool(
2450 2450 b'server', b'bookmarks-pushkey-compat'
2451 2451 )
2452 2452 bookmarksmode = op.modes.get(b'bookmarks', b'apply')
2453 2453
2454 2454 if bookmarksmode == b'apply':
2455 2455 tr = op.gettransaction()
2456 2456 bookstore = op.repo._bookmarks
2457 2457 if pushkeycompat:
2458 2458 allhooks = []
2459 2459 for book, node in changes:
2460 2460 hookargs = tr.hookargs.copy()
2461 2461 hookargs[b'pushkeycompat'] = b'1'
2462 2462 hookargs[b'namespace'] = b'bookmarks'
2463 2463 hookargs[b'key'] = book
2464 2464 hookargs[b'old'] = hex(bookstore.get(book, b''))
2465 2465 hookargs[b'new'] = hex(node if node is not None else b'')
2466 2466 allhooks.append(hookargs)
2467 2467
2468 2468 for hookargs in allhooks:
2469 2469 op.repo.hook(
2470 2470 b'prepushkey', throw=True, **pycompat.strkwargs(hookargs)
2471 2471 )
2472 2472
2473 2473 for book, node in changes:
2474 2474 if bookmarks.isdivergent(book):
2475 2475 msg = _(b'cannot accept divergent bookmark %s!') % book
2476 2476 raise error.Abort(msg)
2477 2477
2478 2478 bookstore.applychanges(op.repo, op.gettransaction(), changes)
2479 2479
2480 2480 if pushkeycompat:
2481 2481
2482 2482 def runhook(unused_success):
2483 2483 for hookargs in allhooks:
2484 2484 op.repo.hook(b'pushkey', **pycompat.strkwargs(hookargs))
2485 2485
2486 2486 op.repo._afterlock(runhook)
2487 2487
2488 2488 elif bookmarksmode == b'records':
2489 2489 for book, node in changes:
2490 2490 record = {b'bookmark': book, b'node': node}
2491 2491 op.records.add(b'bookmarks', record)
2492 2492 else:
2493 2493 raise error.ProgrammingError(
2494 2494 b'unknown bookmark mode: %s' % bookmarksmode
2495 2495 )
2496 2496
2497 2497
2498 2498 @parthandler(b'phase-heads')
2499 2499 def handlephases(op, inpart):
2500 2500 """apply phases from bundle part to repo"""
2501 2501 headsbyphase = phases.binarydecode(inpart)
2502 2502 phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)
2503 2503
2504 2504
2505 2505 @parthandler(b'reply:pushkey', (b'return', b'in-reply-to'))
2506 2506 def handlepushkeyreply(op, inpart):
2507 2507 """retrieve the result of a pushkey request"""
2508 2508 ret = int(inpart.params[b'return'])
2509 2509 partid = int(inpart.params[b'in-reply-to'])
2510 2510 op.records.add(b'pushkey', {b'return': ret}, partid)
2511 2511
2512 2512
2513 2513 @parthandler(b'obsmarkers')
2514 2514 def handleobsmarker(op, inpart):
2515 2515 """add a stream of obsmarkers to the repo"""
2516 2516 tr = op.gettransaction()
2517 2517 markerdata = inpart.read()
2518 2518 if op.ui.config(b'experimental', b'obsmarkers-exchange-debug'):
2519 2519 op.ui.writenoi18n(
2520 2520 b'obsmarker-exchange: %i bytes received\n' % len(markerdata)
2521 2521 )
2522 2522 # The mergemarkers call will crash if marker creation is not enabled.
2523 2523 # we want to avoid this if the part is advisory.
2524 2524 if not inpart.mandatory and op.repo.obsstore.readonly:
2525 2525 op.repo.ui.debug(
2526 2526 b'ignoring obsolescence markers, feature not enabled\n'
2527 2527 )
2528 2528 return
2529 2529 new = op.repo.obsstore.mergemarkers(tr, markerdata)
2530 2530 op.repo.invalidatevolatilesets()
2531 2531 op.records.add(b'obsmarkers', {b'new': new})
2532 2532 if op.reply is not None:
2533 2533 rpart = op.reply.newpart(b'reply:obsmarkers')
2534 2534 rpart.addparam(
2535 2535 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2536 2536 )
2537 2537 rpart.addparam(b'new', b'%i' % new, mandatory=False)
2538 2538
2539 2539
2540 2540 @parthandler(b'reply:obsmarkers', (b'new', b'in-reply-to'))
2541 2541 def handleobsmarkerreply(op, inpart):
2542 2542 """retrieve the result of a pushkey request"""
2543 2543 ret = int(inpart.params[b'new'])
2544 2544 partid = int(inpart.params[b'in-reply-to'])
2545 2545 op.records.add(b'obsmarkers', {b'new': ret}, partid)
2546 2546
2547 2547
2548 2548 @parthandler(b'hgtagsfnodes')
2549 2549 def handlehgtagsfnodes(op, inpart):
2550 2550 """Applies .hgtags fnodes cache entries to the local repo.
2551 2551
2552 2552 Payload is pairs of 20 byte changeset nodes and filenodes.
2553 2553 """
2554 2554 # Grab the transaction so we ensure that we have the lock at this point.
2555 2555 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2556 2556 op.gettransaction()
2557 2557 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
2558 2558
2559 2559 count = 0
2560 2560 while True:
2561 2561 node = inpart.read(20)
2562 2562 fnode = inpart.read(20)
2563 2563 if len(node) < 20 or len(fnode) < 20:
2564 2564 op.ui.debug(b'ignoring incomplete received .hgtags fnodes data\n')
2565 2565 break
2566 2566 cache.setfnode(node, fnode)
2567 2567 count += 1
2568 2568
2569 2569 cache.write()
2570 2570 op.ui.debug(b'applied %i hgtags fnodes cache entries\n' % count)
2571 2571
2572 2572
2573 2573 rbcstruct = struct.Struct(b'>III')
2574 2574
2575 2575
2576 2576 @parthandler(b'cache:rev-branch-cache')
2577 2577 def handlerbc(op, inpart):
2578 2578 """Legacy part, ignored for compatibility with bundles from or
2579 2579 for Mercurial before 5.7. Newer Mercurial computes the cache
2580 2580 efficiently enough during unbundling that the additional transfer
2581 2581 is unnecessary."""
2582 2582
2583 2583
2584 2584 @parthandler(b'pushvars')
2585 2585 def bundle2getvars(op, part):
2586 2586 '''unbundle a bundle2 containing shellvars on the server'''
2587 2587 # An option to disable unbundling on server-side for security reasons
2588 2588 if op.ui.configbool(b'push', b'pushvars.server'):
2589 2589 hookargs = {}
2590 2590 for key, value in part.advisoryparams:
2591 2591 key = key.upper()
2592 2592 # We want pushed variables to have USERVAR_ prepended so we know
2593 2593 # they came from the --pushvar flag.
2594 2594 key = b"USERVAR_" + key
2595 2595 hookargs[key] = value
2596 2596 op.addhookargs(hookargs)
2597 2597
2598 2598
2599 2599 @parthandler(b'stream2', (b'requirements', b'filecount', b'bytecount'))
2600 2600 def handlestreamv2bundle(op, part):
2601 2601
2602 2602 requirements = urlreq.unquote(part.params[b'requirements'])
2603 2603 requirements = requirements.split(b',') if requirements else []
2604 2604 filecount = int(part.params[b'filecount'])
2605 2605 bytecount = int(part.params[b'bytecount'])
2606 2606
2607 2607 repo = op.repo
2608 2608 if len(repo):
2609 2609 msg = _(b'cannot apply stream clone to non empty repository')
2610 2610 raise error.Abort(msg)
2611 2611
2612 2612 repo.ui.debug(b'applying stream bundle\n')
2613 2613 streamclone.applybundlev2(repo, part, filecount, bytecount, requirements)
2614 2614
2615 2615
2616 2616 @parthandler(b'stream3-exp', (b'requirements',))
2617 2617 def handlestreamv3bundle(op, part):
2618 2618 requirements = urlreq.unquote(part.params[b'requirements'])
2619 2619 requirements = requirements.split(b',') if requirements else []
2620 2620
2621 2621 repo = op.repo
2622 2622 if len(repo):
2623 2623 msg = _(b'cannot apply stream clone to non empty repository')
2624 2624 raise error.Abort(msg)
2625 2625
2626 2626 repo.ui.debug(b'applying stream bundle\n')
2627 2627 streamclone.applybundlev3(repo, part, requirements)
2628 2628
2629 2629
2630 2630 def widen_bundle(
2631 2631 bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses
2632 2632 ):
2633 2633 """generates bundle2 for widening a narrow clone
2634 2634
2635 2635 bundler is the bundle to which data should be added
2636 2636 repo is the localrepository instance
2637 2637 oldmatcher matches what the client already has
2638 2638 newmatcher matches what the client needs (including what it already has)
2639 2639 common is set of common heads between server and client
2640 2640 known is a set of revs known on the client side (used in ellipses)
2641 2641 cgversion is the changegroup version to send
2642 2642 ellipses is boolean value telling whether to send ellipses data or not
2643 2643
2644 2644 returns bundle2 of the data required for extending
2645 2645 """
2646 2646 commonnodes = set()
2647 2647 cl = repo.changelog
2648 2648 for r in repo.revs(b"::%ln", common):
2649 2649 commonnodes.add(cl.node(r))
2650 2650 if commonnodes:
2651 2651 packer = changegroup.getbundler(
2652 2652 cgversion,
2653 2653 repo,
2654 2654 oldmatcher=oldmatcher,
2655 2655 matcher=newmatcher,
2656 2656 fullnodes=commonnodes,
2657 2657 )
2658 2658 cgdata = packer.generate(
2659 2659 {repo.nullid},
2660 2660 list(commonnodes),
2661 2661 False,
2662 2662 b'narrow_widen',
2663 2663 changelog=False,
2664 2664 )
2665 2665
2666 2666 part = bundler.newpart(b'changegroup', data=cgdata)
2667 2667 part.addparam(b'version', cgversion)
2668 2668 if scmutil.istreemanifest(repo):
2669 2669 part.addparam(b'treemanifest', b'1')
2670 2670 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2671 2671 part.addparam(b'exp-sidedata', b'1')
2672 2672 wanted = format_remote_wanted_sidedata(repo)
2673 2673 part.addparam(b'exp-wanted-sidedata', wanted)
2674 2674
2675 2675 return bundler
@@ -1,2953 +1,2958
1 1 # exchange.py - utility to exchange data between repos.
2 2 #
3 3 # Copyright 2005-2007 Olivia Mackall <olivia@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8
9 9 import collections
10 10 import weakref
11 11
12 12 from .i18n import _
13 13 from .node import (
14 14 hex,
15 15 nullrev,
16 16 )
17 17 from . import (
18 18 bookmarks as bookmod,
19 19 bundle2,
20 20 bundlecaches,
21 21 changegroup,
22 22 discovery,
23 23 error,
24 24 lock as lockmod,
25 25 logexchange,
26 26 narrowspec,
27 27 obsolete,
28 28 obsutil,
29 29 phases,
30 30 pushkey,
31 31 pycompat,
32 32 requirements,
33 33 scmutil,
34 34 streamclone,
35 35 url as urlmod,
36 36 util,
37 37 wireprototypes,
38 38 )
39 39 from .utils import (
40 40 hashutil,
41 41 stringutil,
42 42 urlutil,
43 43 )
44 44 from .interfaces import repository
45 45
46 46 urlerr = util.urlerr
47 47 urlreq = util.urlreq
48 48
49 49 _NARROWACL_SECTION = b'narrowacl'
50 50
51 51
52 52 def readbundle(ui, fh, fname, vfs=None):
53 53 header = changegroup.readexactly(fh, 4)
54 54
55 55 alg = None
56 56 if not fname:
57 57 fname = b"stream"
58 58 if not header.startswith(b'HG') and header.startswith(b'\0'):
59 59 fh = changegroup.headerlessfixup(fh, header)
60 60 header = b"HG10"
61 61 alg = b'UN'
62 62 elif vfs:
63 63 fname = vfs.join(fname)
64 64
65 65 magic, version = header[0:2], header[2:4]
66 66
67 67 if magic != b'HG':
68 68 raise error.Abort(_(b'%s: not a Mercurial bundle') % fname)
69 69 if version == b'10':
70 70 if alg is None:
71 71 alg = changegroup.readexactly(fh, 2)
72 72 return changegroup.cg1unpacker(fh, alg)
73 73 elif version.startswith(b'2'):
74 74 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
75 75 elif version == b'S1':
76 76 return streamclone.streamcloneapplier(fh)
77 77 else:
78 78 raise error.Abort(
79 79 _(b'%s: unknown bundle version %s') % (fname, version)
80 80 )
81 81
82 82
83 83 def _format_params(params):
84 84 parts = []
85 85 for key, value in sorted(params.items()):
86 86 value = urlreq.quote(value)
87 87 parts.append(b"%s=%s" % (key, value))
88 88 return b';'.join(parts)
89 89
90 90
91 91 def getbundlespec(ui, fh):
92 92 """Infer the bundlespec from a bundle file handle.
93 93
94 94 The input file handle is seeked and the original seek position is not
95 95 restored.
96 96 """
97 97
98 98 def speccompression(alg):
99 99 try:
100 100 return util.compengines.forbundletype(alg).bundletype()[0]
101 101 except KeyError:
102 102 return None
103 103
104 104 params = {}
105 105
106 106 b = readbundle(ui, fh, None)
107 107 if isinstance(b, changegroup.cg1unpacker):
108 108 alg = b._type
109 109 if alg == b'_truncatedBZ':
110 110 alg = b'BZ'
111 111 comp = speccompression(alg)
112 112 if not comp:
113 113 raise error.Abort(_(b'unknown compression algorithm: %s') % alg)
114 114 return b'%s-v1' % comp
115 115 elif isinstance(b, bundle2.unbundle20):
116 116 if b'Compression' in b.params:
117 117 comp = speccompression(b.params[b'Compression'])
118 118 if not comp:
119 119 raise error.Abort(
120 120 _(b'unknown compression algorithm: %s') % comp
121 121 )
122 122 else:
123 123 comp = b'none'
124 124
125 125 version = None
126 126 for part in b.iterparts():
127 127 if part.type == b'changegroup':
128 128 cgversion = part.params[b'version']
129 129 if cgversion in (b'01', b'02'):
130 130 version = b'v2'
131 131 elif cgversion in (b'03',):
132 132 version = b'v2'
133 133 params[b'cg.version'] = cgversion
134 134 else:
135 135 raise error.Abort(
136 136 _(
137 137 b'changegroup version %s does not have '
138 138 b'a known bundlespec'
139 139 )
140 140 % cgversion,
141 141 hint=_(b'try upgrading your Mercurial client'),
142 142 )
143 143 elif part.type == b'stream2' and version is None:
144 144 # A stream2 part requires to be part of a v2 bundle
145 145 requirements = urlreq.unquote(part.params[b'requirements'])
146 146 splitted = requirements.split()
147 147 params = bundle2._formatrequirementsparams(splitted)
148 148 return b'none-v2;stream=v2;%s' % params
149 149 elif part.type == b'stream3-exp' and version is None:
150 150 # A stream3 part requires to be part of a v2 bundle
151 151 requirements = urlreq.unquote(part.params[b'requirements'])
152 152 splitted = requirements.split()
153 153 params = bundle2._formatrequirementsparams(splitted)
154 154 return b'none-v2;stream=v3-exp;%s' % params
155 155 elif part.type == b'obsmarkers':
156 156 params[b'obsolescence'] = b'yes'
157 157 if not part.mandatory:
158 158 params[b'obsolescence-mandatory'] = b'no'
159 159
160 160 if not version:
161 161 params[b'changegroup'] = b'no'
162 162 version = b'v2'
163 163 spec = b'%s-%s' % (comp, version)
164 164 if params:
165 165 spec += b';'
166 166 spec += _format_params(params)
167 167 return spec
168 168
169 169 elif isinstance(b, streamclone.streamcloneapplier):
170 170 requirements = streamclone.readbundle1header(fh)[2]
171 171 formatted = bundle2._formatrequirementsparams(requirements)
172 172 return b'none-packed1;%s' % formatted
173 173 else:
174 174 raise error.Abort(_(b'unknown bundle type: %s') % b)
175 175
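# Illustrative note (editor addition): typical return values of
# getbundlespec() look like b'gzip-v1' for a changegroup-1 bundle,
# b'zstd-v2;obsolescence=yes' for a bundle2 with an obsmarkers part, or
# b'none-v2;stream=v2;requirements%3D...' for a stream clone bundle. The
# compression names and parameters shown here are examples, not an
# exhaustive list.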
176 176
177 177 def _computeoutgoing(repo, heads, common):
178 178 """Computes which revs are outgoing given a set of common
179 179 and a set of heads.
180 180
181 181 This is a separate function so extensions can have access to
182 182 the logic.
183 183
184 184 Returns a discovery.outgoing object.
185 185 """
186 186 cl = repo.changelog
187 187 if common:
188 188 hasnode = cl.hasnode
189 189 common = [n for n in common if hasnode(n)]
190 190 else:
191 191 common = [repo.nullid]
192 192 if not heads:
193 193 heads = cl.heads()
194 194 return discovery.outgoing(repo, common, heads)
195 195
196 196
197 197 def _checkpublish(pushop):
198 198 repo = pushop.repo
199 199 ui = repo.ui
200 200 behavior = ui.config(b'experimental', b'auto-publish')
201 201 if pushop.publish or behavior not in (b'warn', b'confirm', b'abort'):
202 202 return
203 203 remotephases = listkeys(pushop.remote, b'phases')
204 204 if not remotephases.get(b'publishing', False):
205 205 return
206 206
207 207 if pushop.revs is None:
208 208 published = repo.filtered(b'served').revs(b'not public()')
209 209 else:
210 210 published = repo.revs(b'::%ln - public()', pushop.revs)
211 211 # we want to use pushop.revs in the revset even if they themselves are
212 212 # secret, but we don't want to have anything that the server won't see
213 213 # in the result of this expression
214 214 published &= repo.filtered(b'served')
215 215 if published:
216 216 if behavior == b'warn':
217 217 ui.warn(
218 218 _(b'%i changesets about to be published\n') % len(published)
219 219 )
220 220 elif behavior == b'confirm':
221 221 if ui.promptchoice(
222 222 _(b'push and publish %i changesets (yn)?$$ &Yes $$ &No')
223 223 % len(published)
224 224 ):
225 225 raise error.CanceledError(_(b'user quit'))
226 226 elif behavior == b'abort':
227 227 msg = _(b'push would publish %i changesets') % len(published)
228 228 hint = _(
229 229 b"use --publish or adjust 'experimental.auto-publish'"
230 230 b" config"
231 231 )
232 232 raise error.Abort(msg, hint=hint)
233 233
234 234
235 235 def _forcebundle1(op):
236 236 """return true if a pull/push must use bundle1
237 237
238 238 This function is used to allow testing of the older bundle version"""
239 239 ui = op.repo.ui
240 240 # The goal of this config is to allow developers to choose the bundle
241 241 # version used during exchange. This is especially handy during tests.
242 242 # Value is a list of bundle versions to pick from; the highest version
243 243 # should be used.
244 244 #
245 245 # developer config: devel.legacy.exchange
246 246 exchange = ui.configlist(b'devel', b'legacy.exchange')
247 247 forcebundle1 = b'bundle2' not in exchange and b'bundle1' in exchange
248 248 return forcebundle1 or not op.remote.capable(b'bundle2')
249 249
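# Illustrative sketch (editor addition): the developer config checked by
# _forcebundle1() is a plain list in hgrc, e.g.
#
#     [devel]
#     legacy.exchange = bundle1
#
# which forces bundle1 even against a bundle2-capable peer; listing bundle2
# as well keeps bundle2 in use.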
250 250
251 251 class pushoperation:
252 252 """A object that represent a single push operation
253 253
254 254 Its purpose is to carry push related state and very common operations.
255 255
256 256 A new pushoperation should be created at the beginning of each push and
257 257 discarded afterward.
258 258 """
259 259
260 260 def __init__(
261 261 self,
262 262 repo,
263 263 remote,
264 264 force=False,
265 265 revs=None,
266 266 newbranch=False,
267 267 bookmarks=(),
268 268 publish=False,
269 269 pushvars=None,
270 270 ):
271 271 # repo we push from
272 272 self.repo = repo
273 273 self.ui = repo.ui
274 274 # repo we push to
275 275 self.remote = remote
276 276 # force option provided
277 277 self.force = force
278 278 # revs to be pushed (None is "all")
279 279 self.revs = revs
280 280 # bookmark explicitly pushed
281 281 self.bookmarks = bookmarks
282 282 # allow push of new branch
283 283 self.newbranch = newbranch
284 284 # step already performed
285 285 # (used to check what steps have been already performed through bundle2)
286 286 self.stepsdone = set()
287 287 # Integer version of the changegroup push result
288 288 # - None means nothing to push
289 289 # - 0 means HTTP error
290 290 # - 1 means we pushed and remote head count is unchanged *or*
291 291 # we have outgoing changesets but refused to push
292 292 # - other values as described by addchangegroup()
293 293 self.cgresult = None
294 294 # Boolean value for the bookmark push
295 295 self.bkresult = None
296 296 # discover.outgoing object (contains common and outgoing data)
297 297 self.outgoing = None
298 298 # all remote topological heads before the push
299 299 self.remoteheads = None
300 300 # Details of the remote branch pre and post push
301 301 #
302 302 # mapping: {'branch': ([remoteheads],
303 303 # [newheads],
304 304 # [unsyncedheads],
305 305 # [discardedheads])}
306 306 # - branch: the branch name
307 307 # - remoteheads: the list of remote heads known locally
308 308 # None if the branch is new
309 309 # - newheads: the new remote heads (known locally) with outgoing pushed
310 310 # - unsyncedheads: the list of remote heads unknown locally.
311 311 # - discardedheads: the list of remote heads made obsolete by the push
312 312 self.pushbranchmap = None
313 313 # testable as a boolean indicating if any nodes are missing locally.
314 314 self.incoming = None
315 315 # summary of the remote phase situation
316 316 self.remotephases = None
317 317 # phase changes that must be pushed alongside the changesets
318 318 self.outdatedphases = None
319 319 # phase changes that must be pushed if the changeset push fails
320 320 self.fallbackoutdatedphases = None
321 321 # outgoing obsmarkers
322 322 self.outobsmarkers = set()
323 323 # outgoing bookmarks, list of (bm, oldnode | '', newnode | '')
324 324 self.outbookmarks = []
325 325 # transaction manager
326 326 self.trmanager = None
327 327 # map { pushkey partid -> callback handling failure}
328 328 # used to handle exception from mandatory pushkey part failure
329 329 self.pkfailcb = {}
330 330 # an iterable of pushvars or None
331 331 self.pushvars = pushvars
332 332 # publish pushed changesets
333 333 self.publish = publish
334 334
335 335 @util.propertycache
336 336 def futureheads(self):
337 337 """future remote heads if the changeset push succeeds"""
338 338 return self.outgoing.ancestorsof
339 339
340 340 @util.propertycache
341 341 def fallbackheads(self):
342 342 """future remote heads if the changeset push fails"""
343 343 if self.revs is None:
344 344 # no target to push, all common heads are relevant
345 345 return self.outgoing.commonheads
346 346 unfi = self.repo.unfiltered()
347 347 # I want cheads = heads(::push_heads and ::commonheads)
348 348 #
349 349 # To push, we already computed
350 350 # common = (::commonheads)
351 351 # missing = ((commonheads::push_heads) - commonheads)
352 352 #
353 353 # So we basically search
354 354 #
355 355 # almost_heads = heads((parents(missing) + push_heads) & common)
356 356 #
357 357 # We use "almost" here as this can return revisions that are ancestors
358 358 # of others in the set and we need to explicitly turn it into an
359 359 # antichain later. We can do so using:
360 360 #
361 361 # cheads = heads(almost_heads::almost_heads)
362 362 #
363 363 # In practice the code is a bit more convoluted to avoid some extra
364 364 # computation. It aims at doing the same computation as highlighted
365 365 # above, however.
366 366 common = self.outgoing.common
367 367 unfi = self.repo.unfiltered()
368 368 cl = unfi.changelog
369 369 to_rev = cl.index.rev
370 370 to_node = cl.node
371 371 parent_revs = cl.parentrevs
372 372 unselected = []
373 373 cheads = set()
374 374 # XXX-perf: `self.revs` and `outgoing.missing` could hold revs directly
375 375 for n in self.revs:
376 376 r = to_rev(n)
377 377 if r in common:
378 378 cheads.add(r)
379 379 else:
380 380 unselected.append(r)
381 381 known_non_heads = cl.ancestors(cheads, inclusive=True)
382 382 if unselected:
383 383 missing_revs = {to_rev(n) for n in self.outgoing.missing}
384 384 missing_revs.add(nullrev)
385 385 root_points = set()
386 386 for r in missing_revs:
387 387 p1, p2 = parent_revs(r)
388 388 if p1 not in missing_revs and p1 not in known_non_heads:
389 389 root_points.add(p1)
390 390 if p2 not in missing_revs and p2 not in known_non_heads:
391 391 root_points.add(p2)
392 392 if root_points:
393 393 heads = unfi.revs(b'heads(%ld::%ld)', root_points, root_points)
394 394 cheads.update(heads)
395 395 # XXX-perf: could this be a set of revisions?
396 396 return [to_node(r) for r in sorted(cheads)]
397 397
398 398 @property
399 399 def commonheads(self):
400 400 """set of all common heads after changeset bundle push"""
401 401 if self.cgresult:
402 402 return self.futureheads
403 403 else:
404 404 return self.fallbackheads
405 405
406 406
407 407 # mapping of message used when pushing bookmark
408 408 bookmsgmap = {
409 409 b'update': (
410 410 _(b"updating bookmark %s\n"),
411 411 _(b'updating bookmark %s failed\n'),
412 412 ),
413 413 b'export': (
414 414 _(b"exporting bookmark %s\n"),
415 415 _(b'exporting bookmark %s failed\n'),
416 416 ),
417 417 b'delete': (
418 418 _(b"deleting remote bookmark %s\n"),
419 419 _(b'deleting remote bookmark %s failed\n'),
420 420 ),
421 421 }
422 422
423 423
424 424 def push(
425 425 repo,
426 426 remote,
427 427 force=False,
428 428 revs=None,
429 429 newbranch=False,
430 430 bookmarks=(),
431 431 publish=False,
432 432 opargs=None,
433 433 ):
434 434 """Push outgoing changesets (limited by revs) from a local
435 435 repository to remote. Return the pushoperation object, whose ``cgresult`` attribute is an integer:
436 436 - None means nothing to push
437 437 - 0 means HTTP error
438 438 - 1 means we pushed and remote head count is unchanged *or*
439 439 we have outgoing changesets but refused to push
440 440 - other values as described by addchangegroup()
441 441 """
442 442 if opargs is None:
443 443 opargs = {}
444 444 pushop = pushoperation(
445 445 repo,
446 446 remote,
447 447 force,
448 448 revs,
449 449 newbranch,
450 450 bookmarks,
451 451 publish,
452 452 **pycompat.strkwargs(opargs)
453 453 )
454 454 if pushop.remote.local():
455 455 missing = (
456 456 set(pushop.repo.requirements) - pushop.remote.local().supported
457 457 )
458 458 if missing:
459 459 msg = _(
460 460 b"required features are not"
461 461 b" supported in the destination:"
462 462 b" %s"
463 463 ) % (b', '.join(sorted(missing)))
464 464 raise error.Abort(msg)
465 465
466 466 if not pushop.remote.canpush():
467 467 raise error.Abort(_(b"destination does not support push"))
468 468
469 469 if not pushop.remote.capable(b'unbundle'):
470 470 raise error.Abort(
471 471 _(
472 472 b'cannot push: destination does not support the '
473 473 b'unbundle wire protocol command'
474 474 )
475 475 )
476 476 for category in sorted(bundle2.read_remote_wanted_sidedata(pushop.remote)):
477 477 # Check that a computer is registered for that category for at least
478 478 # one revlog kind.
479 479 for kind, computers in repo._sidedata_computers.items():
480 480 if computers.get(category):
481 481 break
482 482 else:
483 483 raise error.Abort(
484 484 _(
485 485 b'cannot push: required sidedata category not supported'
486 486 b" by this client: '%s'"
487 487 )
488 488 % pycompat.bytestr(category)
489 489 )
490 490 # get lock as we might write phase data
491 491 wlock = lock = None
492 492 try:
493 493 try:
494 494 # bundle2 push may receive a reply bundle touching bookmarks
495 495 # requiring the wlock. Take it now to ensure proper ordering.
496 496 maypushback = pushop.ui.configbool(
497 497 b'experimental',
498 498 b'bundle2.pushback',
499 499 )
500 500 if (
501 501 (not _forcebundle1(pushop))
502 502 and maypushback
503 503 and not bookmod.bookmarksinstore(repo)
504 504 ):
505 505 wlock = pushop.repo.wlock()
506 506 lock = pushop.repo.lock()
507 507 pushop.trmanager = transactionmanager(
508 508 pushop.repo, b'push-response', pushop.remote.url()
509 509 )
510 510 except error.LockUnavailable as err:
511 511 # source repo cannot be locked.
512 512 # We do not abort the push, but just disable the local phase
513 513 # synchronisation.
514 514 msg = b'cannot lock source repository: %s\n'
515 515 msg %= stringutil.forcebytestr(err)
516 516 pushop.ui.debug(msg)
517 517
518 518 pushop.repo.checkpush(pushop)
519 519 _checkpublish(pushop)
520 520 _pushdiscovery(pushop)
521 521 if not pushop.force:
522 522 _checksubrepostate(pushop)
523 523 if not _forcebundle1(pushop):
524 524 _pushbundle2(pushop)
525 525 _pushchangeset(pushop)
526 526 _pushsyncphase(pushop)
527 527 _pushobsolete(pushop)
528 528 _pushbookmark(pushop)
529 529 if pushop.trmanager is not None:
530 530 pushop.trmanager.close()
531 531 finally:
532 532 lockmod.release(pushop.trmanager, lock, wlock)
533 533
534 534 if repo.ui.configbool(b'experimental', b'remotenames'):
535 535 logexchange.pullremotenames(repo, remote)
536 536
537 537 return pushop
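# A minimal usage sketch (hedged): `peer` is an already constructed remote
# peer (e.g. obtained via hg.peer()) and `node`/b'stable' are illustrative
# values, not names defined in this module.
#
#     pushop = push(repo, peer, revs=[node], bookmarks=[b'stable'])
#     if pushop.cgresult is None:
#         repo.ui.status(b'nothing was pushed\n')
#     elif pushop.cgresult == 0:
#         repo.ui.warn(b'push failed on the remote side\n')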
538 538
539 539
540 540 # list of steps to perform discovery before push
541 541 pushdiscoveryorder = []
542 542
543 543 # Mapping between step name and function
544 544 #
545 545 # This exists to help extensions wrap steps if necessary
546 546 pushdiscoverymapping = {}
547 547
548 548
549 549 def pushdiscovery(stepname):
550 550 """decorator for function performing discovery before push
551 551
552 552 The function is added to the step -> function mapping and appended to the
553 553 list of steps. Beware that decorated functions will be added in order (this
554 554 may matter).
555 555 
556 556 You can only use this decorator for a new step; if you want to wrap a step
557 557 from an extension, change the pushdiscoverymapping dictionary directly (a sketch of both usages follows below)."""
558 558
559 559 def dec(func):
560 560 assert stepname not in pushdiscoverymapping
561 561 pushdiscoverymapping[stepname] = func
562 562 pushdiscoveryorder.append(stepname)
563 563 return func
564 564
565 565 return dec
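# A hedged sketch of both usages described in the docstring (step name and
# helper functions are hypothetical):
#
#     @pushdiscovery(b'my-extension-data')
#     def _pushdiscoverymydata(pushop):
#         pushop.my_extension_data = compute_something(pushop.repo)
#
#     # wrapping an existing step goes through the mapping instead:
#     orig = pushdiscoverymapping[b'changeset']
#     def wrapped(pushop):
#         # ... extension logic before/after the original step ...
#         orig(pushop)
#     pushdiscoverymapping[b'changeset'] = wrapped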
566 566
567 567
568 568 def _pushdiscovery(pushop):
569 569 """Run all discovery steps"""
570 570 for stepname in pushdiscoveryorder:
571 571 step = pushdiscoverymapping[stepname]
572 572 step(pushop)
573 573
574 574
575 575 def _checksubrepostate(pushop):
576 576 """Ensure all outgoing referenced subrepo revisions are present locally"""
577 577
578 578 repo = pushop.repo
579 579
580 580 # If the repository does not use subrepos, skip the expensive
581 581 # manifest checks.
582 582 if not len(repo.file(b'.hgsub')) or not len(repo.file(b'.hgsubstate')):
583 583 return
584 584
585 585 for n in pushop.outgoing.missing:
586 586 ctx = repo[n]
587 587
588 588 if b'.hgsub' in ctx.manifest() and b'.hgsubstate' in ctx.files():
589 589 for subpath in sorted(ctx.substate):
590 590 sub = ctx.sub(subpath)
591 591 sub.verify(onpush=True)
592 592
593 593
594 594 @pushdiscovery(b'changeset')
595 595 def _pushdiscoverychangeset(pushop):
596 596 """discover the changeset that need to be pushed"""
597 597 fci = discovery.findcommonincoming
598 598 if pushop.revs:
599 599 commoninc = fci(
600 600 pushop.repo,
601 601 pushop.remote,
602 602 force=pushop.force,
603 603 ancestorsof=pushop.revs,
604 604 )
605 605 else:
606 606 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
607 607 common, inc, remoteheads = commoninc
608 608 fco = discovery.findcommonoutgoing
609 609 outgoing = fco(
610 610 pushop.repo,
611 611 pushop.remote,
612 612 onlyheads=pushop.revs,
613 613 commoninc=commoninc,
614 614 force=pushop.force,
615 615 )
616 616 pushop.outgoing = outgoing
617 617 pushop.remoteheads = remoteheads
618 618 pushop.incoming = inc
619 619
620 620
621 621 @pushdiscovery(b'phase')
622 622 def _pushdiscoveryphase(pushop):
623 623 """discover the phase that needs to be pushed
624 624
625 625 (computed for both success and failure case for changesets push)"""
626 626 outgoing = pushop.outgoing
627 627 repo = pushop.repo
628 628 unfi = repo.unfiltered()
629 629 cl = unfi.changelog
630 630 to_rev = cl.index.rev
631 631 remotephases = listkeys(pushop.remote, b'phases')
632 632
633 633 if (
634 634 pushop.ui.configbool(b'ui', b'_usedassubrepo')
635 635 and remotephases # server supports phases
636 636 and not pushop.outgoing.missing # no changesets to be pushed
637 637 and remotephases.get(b'publishing', False)
638 638 ):
639 639 # When:
640 640 # - this is a subrepo push
641 641 # - and remote supports phases
642 642 # - and no changesets are to be pushed
643 643 # - and remote is publishing
644 644 # We may be in issue 3781 case!
645 645 # We drop the possible phase synchronisation done by
646 646 # courtesy to publish changesets possibly locally draft
647 647 # on the remote.
648 648 pushop.outdatedphases = []
649 649 pushop.fallbackoutdatedphases = []
650 650 return
651 651
652 652 fallbackheads_rev = {to_rev(n) for n in pushop.fallbackheads}
653 653 pushop.remotephases = phases.RemotePhasesSummary(
654 654 pushop.repo,
655 655 fallbackheads_rev,
656 656 remotephases,
657 657 )
658 658 droots = set(pushop.remotephases.draft_roots)
659 659
660 660 fallback_publishing = pushop.remotephases.publishing
661 661 push_publishing = pushop.remotephases.publishing or pushop.publish
662 662 missing_revs = {to_rev(n) for n in outgoing.missing}
663 663 drafts = unfi._phasecache.get_raw_set(unfi, phases.draft)
664 664
665 665 if fallback_publishing:
666 666 fallback_roots = droots - missing_revs
667 667 revset = b'heads(%ld::%ld)'
668 668 else:
669 669 fallback_roots = droots - drafts
670 670 fallback_roots -= missing_revs
671 671 # Get the list of all revs draft on remote but public here.
672 672 revset = b'heads((%ld::%ld) and public())'
673 673 if not fallback_roots:
674 674 fallback = fallback_rev = []
675 675 else:
676 676 fallback_rev = unfi.revs(revset, fallback_roots, fallbackheads_rev)
677 677 fallback = [repo[r] for r in fallback_rev]
678 678
679 679 if push_publishing:
680 680 published = missing_revs.copy()
681 681 else:
682 682 published = missing_revs - drafts
683 683 if pushop.publish:
684 684 published.update(fallbackheads_rev & drafts)
685 685 elif fallback:
686 686 published.update(fallback_rev)
687 687
688 688 pushop.outdatedphases = [repo[r] for r in cl.headrevs(published)]
689 689 pushop.fallbackoutdatedphases = fallback
690 690
691 691
692 692 @pushdiscovery(b'obsmarker')
693 693 def _pushdiscoveryobsmarkers(pushop):
694 694 if not obsolete.isenabled(pushop.repo, obsolete.exchangeopt):
695 695 return
696 696
697 697 if not pushop.repo.obsstore:
698 698 return
699 699
700 700 if b'obsolete' not in listkeys(pushop.remote, b'namespaces'):
701 701 return
702 702
703 703 repo = pushop.repo
704 704 # very naive computation that can be quite expensive on big repos.
705 705 # However: evolution is currently slow on them anyway.
706 nodes = (c.node() for c in repo.set(b'::%ln', pushop.futureheads))
707 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
706 revs = repo.revs(b'::%ln', pushop.futureheads)
707 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(revs=revs)
708 708
709 709
710 710 @pushdiscovery(b'bookmarks')
711 711 def _pushdiscoverybookmarks(pushop):
712 712 ui = pushop.ui
713 713 repo = pushop.repo.unfiltered()
714 714 remote = pushop.remote
715 715 ui.debug(b"checking for updated bookmarks\n")
716 716 ancestors = ()
717 717 if pushop.revs:
718 718 revnums = pycompat.maplist(repo.changelog.rev, pushop.revs)
719 719 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
720 720
721 721 remotebookmark = bookmod.unhexlifybookmarks(listkeys(remote, b'bookmarks'))
722 722
723 723 explicit = {
724 724 repo._bookmarks.expandname(bookmark) for bookmark in pushop.bookmarks
725 725 }
726 726
727 727 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
728 728 return _processcompared(pushop, ancestors, explicit, remotebookmark, comp)
729 729
730 730
731 731 def _processcompared(pushop, pushed, explicit, remotebms, comp):
732 732 """take decision on bookmarks to push to the remote repo
733 733
734 734 Exists to help extensions alter this behavior.
735 735 """
736 736 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
737 737
738 738 repo = pushop.repo
739 739
740 740 for b, scid, dcid in advsrc:
741 741 if b in explicit:
742 742 explicit.remove(b)
743 743 if not pushed or repo[scid].rev() in pushed:
744 744 pushop.outbookmarks.append((b, dcid, scid))
745 745 # search added bookmark
746 746 for b, scid, dcid in addsrc:
747 747 if b in explicit:
748 748 explicit.remove(b)
749 749 if bookmod.isdivergent(b):
750 750 pushop.ui.warn(_(b'cannot push divergent bookmark %s!\n') % b)
751 751 pushop.bkresult = 2
752 752 elif pushed and repo[scid].rev() not in pushed:
753 753 # in case of race or secret
754 754 msg = _(b'cannot push bookmark X without its revision: %s!\n')
755 755 pushop.ui.warn(msg % b)
756 756 pushop.bkresult = 2
757 757 else:
758 758 pushop.outbookmarks.append((b, b'', scid))
759 759 # search for overwritten bookmark
760 760 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
761 761 if b in explicit:
762 762 explicit.remove(b)
763 763 if not pushed or repo[scid].rev() in pushed:
764 764 pushop.outbookmarks.append((b, dcid, scid))
765 765 # search for bookmark to delete
766 766 for b, scid, dcid in adddst:
767 767 if b in explicit:
768 768 explicit.remove(b)
769 769 # treat as "deleted locally"
770 770 pushop.outbookmarks.append((b, dcid, b''))
771 771 # identical bookmarks shouldn't get reported
772 772 for b, scid, dcid in same:
773 773 if b in explicit:
774 774 explicit.remove(b)
775 775
776 776 if explicit:
777 777 explicit = sorted(explicit)
778 778 # we should probably list all of them
779 779 pushop.ui.warn(
780 780 _(
781 781 b'bookmark %s does not exist on the local '
782 782 b'or remote repository!\n'
783 783 )
784 784 % explicit[0]
785 785 )
786 786 pushop.bkresult = 2
787 787
788 788 pushop.outbookmarks.sort()
789 789
790 790
791 791 def _pushcheckoutgoing(pushop):
792 792 outgoing = pushop.outgoing
793 793 unfi = pushop.repo.unfiltered()
794 794 if not outgoing.missing:
795 795 # nothing to push
796 796 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
797 797 return False
798 798 # something to push
799 799 if not pushop.force:
800 800 # if repo.obsstore == False --> no obsolete
801 801 # then, save the iteration
802 802 if unfi.obsstore:
803 803 # these messages are here for the 80 char limit reason
804 804 mso = _(b"push includes obsolete changeset: %s!")
805 805 mspd = _(b"push includes phase-divergent changeset: %s!")
806 806 mscd = _(b"push includes content-divergent changeset: %s!")
807 807 mst = {
808 808 b"orphan": _(b"push includes orphan changeset: %s!"),
809 809 b"phase-divergent": mspd,
810 810 b"content-divergent": mscd,
811 811 }
812 812 # If we are to push and there is at least one
813 813 # obsolete or unstable changeset in missing, at
814 814 # least one of the missing heads will be obsolete or
815 815 # unstable. So checking heads only is ok
816 816 for node in outgoing.ancestorsof:
817 817 ctx = unfi[node]
818 818 if ctx.obsolete():
819 819 raise error.Abort(mso % ctx)
820 820 elif ctx.isunstable():
821 821 # TODO print more than one instability in the abort
822 822 # message
823 823 raise error.Abort(mst[ctx.instabilities()[0]] % ctx)
824 824
825 825 discovery.checkheads(pushop)
826 826 return True
827 827
828 828
829 829 # List of names of steps to perform for an outgoing bundle2, order matters.
830 830 b2partsgenorder = []
831 831
832 832 # Mapping between step name and function
833 833 #
834 834 # This exists to help extensions wrap steps if necessary
835 835 b2partsgenmapping = {}
836 836
837 837
838 838 def b2partsgenerator(stepname, idx=None):
839 839 """decorator for function generating bundle2 part
840 840
841 841 The function is added to the step -> function mapping and appended to the
842 842 list of steps. Beware that decorated functions will be added in order
843 843 (this may matter).
844 844
845 845 You can only use this decorator for new steps; if you want to wrap a step
846 846 from an extension, change the b2partsgenmapping dictionary directly."""
847 847
848 848 def dec(func):
849 849 assert stepname not in b2partsgenmapping
850 850 b2partsgenmapping[stepname] = func
851 851 if idx is None:
852 852 b2partsgenorder.append(stepname)
853 853 else:
854 854 b2partsgenorder.insert(idx, stepname)
855 855 return func
856 856
857 857 return dec
858 858
859 859
860 860 def _pushb2ctxcheckheads(pushop, bundler):
861 861 """Generate race condition checking parts
862 862
863 863 Exists as an independent function to aid extensions
864 864 """
865 865 # * 'force' does not check for push races,
866 866 # * if we don't push anything, there is nothing to check.
867 867 if not pushop.force and pushop.outgoing.ancestorsof:
868 868 allowunrelated = b'related' in bundler.capabilities.get(
869 869 b'checkheads', ()
870 870 )
871 871 emptyremote = pushop.pushbranchmap is None
872 872 if not allowunrelated or emptyremote:
873 873 bundler.newpart(b'check:heads', data=iter(pushop.remoteheads))
874 874 else:
875 875 affected = set()
876 876 for branch, heads in pushop.pushbranchmap.items():
877 877 remoteheads, newheads, unsyncedheads, discardedheads = heads
878 878 if remoteheads is not None:
879 879 remote = set(remoteheads)
880 880 affected |= set(discardedheads) & remote
881 881 affected |= remote - set(newheads)
882 882 if affected:
883 883 data = iter(sorted(affected))
884 884 bundler.newpart(b'check:updated-heads', data=data)
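# Rough worked example (hypothetical nodes): with a pushbranchmap entry such
# as
#     {b'default': ([oldhead], [newhead], [], [])}
# `affected` becomes {oldhead}, so the part asks the server to verify that
# oldhead is still a head when the bundle is applied, catching a concurrent
# push race.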
885 885
886 886
887 887 def _pushing(pushop):
888 888 """return True if we are pushing anything"""
889 889 return bool(
890 890 pushop.outgoing.missing
891 891 or pushop.outdatedphases
892 892 or pushop.outobsmarkers
893 893 or pushop.outbookmarks
894 894 )
895 895
896 896
897 897 @b2partsgenerator(b'check-bookmarks')
898 898 def _pushb2checkbookmarks(pushop, bundler):
899 899 """insert bookmark move checking"""
900 900 if not _pushing(pushop) or pushop.force:
901 901 return
902 902 b2caps = bundle2.bundle2caps(pushop.remote)
903 903 hasbookmarkcheck = b'bookmarks' in b2caps
904 904 if not (pushop.outbookmarks and hasbookmarkcheck):
905 905 return
906 906 data = []
907 907 for book, old, new in pushop.outbookmarks:
908 908 data.append((book, old))
909 909 checkdata = bookmod.binaryencode(pushop.repo, data)
910 910 bundler.newpart(b'check:bookmarks', data=checkdata)
911 911
912 912
913 913 @b2partsgenerator(b'check-phases')
914 914 def _pushb2checkphases(pushop, bundler):
915 915 """insert phase move checking"""
916 916 if not _pushing(pushop) or pushop.force:
917 917 return
918 918 b2caps = bundle2.bundle2caps(pushop.remote)
919 919 hasphaseheads = b'heads' in b2caps.get(b'phases', ())
920 920 if pushop.remotephases is not None and hasphaseheads:
921 921 # check that the remote phase has not changed
922 922 checks = {p: [] for p in phases.allphases}
923 923 to_node = pushop.repo.unfiltered().changelog.node
924 924 checks[phases.public].extend(
925 925 to_node(r) for r in pushop.remotephases.public_heads
926 926 )
927 927 checks[phases.draft].extend(
928 928 to_node(r) for r in pushop.remotephases.draft_roots
929 929 )
930 930 if any(checks.values()):
931 931 for phase in checks:
932 932 checks[phase].sort()
933 933 checkdata = phases.binaryencode(checks)
934 934 bundler.newpart(b'check:phases', data=checkdata)
935 935
936 936
937 937 @b2partsgenerator(b'changeset')
938 938 def _pushb2ctx(pushop, bundler):
939 939 """handle changegroup push through bundle2
940 940
941 941 addchangegroup result is stored in the ``pushop.cgresult`` attribute.
942 942 """
943 943 if b'changesets' in pushop.stepsdone:
944 944 return
945 945 pushop.stepsdone.add(b'changesets')
946 946 # Send known heads to the server for race detection.
947 947 if not _pushcheckoutgoing(pushop):
948 948 return
949 949 pushop.repo.prepushoutgoinghooks(pushop)
950 950
951 951 _pushb2ctxcheckheads(pushop, bundler)
952 952
953 953 b2caps = bundle2.bundle2caps(pushop.remote)
954 954 version = b'01'
955 955 cgversions = b2caps.get(b'changegroup')
956 956 if cgversions: # 3.1 and 3.2 ship with an empty value
957 957 cgversions = [
958 958 v
959 959 for v in cgversions
960 960 if v in changegroup.supportedoutgoingversions(pushop.repo)
961 961 ]
962 962 if not cgversions:
963 963 raise error.Abort(_(b'no common changegroup version'))
964 964 version = max(cgversions)
965 965
966 966 remote_sidedata = bundle2.read_remote_wanted_sidedata(pushop.remote)
967 967 cgstream = changegroup.makestream(
968 968 pushop.repo,
969 969 pushop.outgoing,
970 970 version,
971 971 b'push',
972 972 bundlecaps=b2caps,
973 973 remote_sidedata=remote_sidedata,
974 974 )
975 975 cgpart = bundler.newpart(b'changegroup', data=cgstream)
976 976 if cgversions:
977 977 cgpart.addparam(b'version', version)
978 978 if scmutil.istreemanifest(pushop.repo):
979 979 cgpart.addparam(b'treemanifest', b'1')
980 980 if repository.REPO_FEATURE_SIDE_DATA in pushop.repo.features:
981 981 cgpart.addparam(b'exp-sidedata', b'1')
982 982
983 983 def handlereply(op):
984 984 """extract addchangegroup returns from server reply"""
985 985 cgreplies = op.records.getreplies(cgpart.id)
986 986 assert len(cgreplies[b'changegroup']) == 1
987 987 pushop.cgresult = cgreplies[b'changegroup'][0][b'return']
988 988
989 989 return handlereply
990 990
991 991
992 992 @b2partsgenerator(b'phase')
993 993 def _pushb2phases(pushop, bundler):
994 994 """handle phase push through bundle2"""
995 995 if b'phases' in pushop.stepsdone:
996 996 return
997 997 b2caps = bundle2.bundle2caps(pushop.remote)
998 998 ui = pushop.repo.ui
999 999
1000 1000 legacyphase = b'phases' in ui.configlist(b'devel', b'legacy.exchange')
1001 1001 haspushkey = b'pushkey' in b2caps
1002 1002 hasphaseheads = b'heads' in b2caps.get(b'phases', ())
1003 1003
1004 1004 if hasphaseheads and not legacyphase:
1005 1005 return _pushb2phaseheads(pushop, bundler)
1006 1006 elif haspushkey:
1007 1007 return _pushb2phasespushkey(pushop, bundler)
1008 1008
1009 1009
1010 1010 def _pushb2phaseheads(pushop, bundler):
1011 1011 """push phase information through a bundle2 - binary part"""
1012 1012 pushop.stepsdone.add(b'phases')
1013 1013 if pushop.outdatedphases:
1014 1014 updates = {p: [] for p in phases.allphases}
1015 1015 updates[0].extend(h.node() for h in pushop.outdatedphases)
1016 1016 phasedata = phases.binaryencode(updates)
1017 1017 bundler.newpart(b'phase-heads', data=phasedata)
1018 1018
1019 1019
1020 1020 def _pushb2phasespushkey(pushop, bundler):
1021 1021 """push phase information through a bundle2 - pushkey part"""
1022 1022 pushop.stepsdone.add(b'phases')
1023 1023 part2node = []
1024 1024
1025 1025 def handlefailure(pushop, exc):
1026 1026 targetid = int(exc.partid)
1027 1027 for partid, node in part2node:
1028 1028 if partid == targetid:
1029 1029 raise error.Abort(_(b'updating %s to public failed') % node)
1030 1030
1031 1031 enc = pushkey.encode
1032 1032 for newremotehead in pushop.outdatedphases:
1033 1033 part = bundler.newpart(b'pushkey')
1034 1034 part.addparam(b'namespace', enc(b'phases'))
1035 1035 part.addparam(b'key', enc(newremotehead.hex()))
1036 1036 part.addparam(b'old', enc(b'%d' % phases.draft))
1037 1037 part.addparam(b'new', enc(b'%d' % phases.public))
1038 1038 part2node.append((part.id, newremotehead))
1039 1039 pushop.pkfailcb[part.id] = handlefailure
1040 1040
1041 1041 def handlereply(op):
1042 1042 for partid, node in part2node:
1043 1043 partrep = op.records.getreplies(partid)
1044 1044 results = partrep[b'pushkey']
1045 1045 assert len(results) <= 1
1046 1046 msg = None
1047 1047 if not results:
1048 1048 msg = _(b'server ignored update of %s to public!\n') % node
1049 1049 elif not int(results[0][b'return']):
1050 1050 msg = _(b'updating %s to public failed!\n') % node
1051 1051 if msg is not None:
1052 1052 pushop.ui.warn(msg)
1053 1053
1054 1054 return handlereply
1055 1055
1056 1056
1057 1057 @b2partsgenerator(b'obsmarkers')
1058 1058 def _pushb2obsmarkers(pushop, bundler):
1059 1059 if b'obsmarkers' in pushop.stepsdone:
1060 1060 return
1061 1061 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
1062 1062 if obsolete.commonversion(remoteversions) is None:
1063 1063 return
1064 1064 pushop.stepsdone.add(b'obsmarkers')
1065 1065 if pushop.outobsmarkers:
1066 1066 markers = obsutil.sortedmarkers(pushop.outobsmarkers)
1067 1067 bundle2.buildobsmarkerspart(bundler, markers)
1068 1068
1069 1069
1070 1070 @b2partsgenerator(b'bookmarks')
1071 1071 def _pushb2bookmarks(pushop, bundler):
1072 1072 """handle bookmark push through bundle2"""
1073 1073 if b'bookmarks' in pushop.stepsdone:
1074 1074 return
1075 1075 b2caps = bundle2.bundle2caps(pushop.remote)
1076 1076
1077 1077 legacy = pushop.repo.ui.configlist(b'devel', b'legacy.exchange')
1078 1078 legacybooks = b'bookmarks' in legacy
1079 1079
1080 1080 if not legacybooks and b'bookmarks' in b2caps:
1081 1081 return _pushb2bookmarkspart(pushop, bundler)
1082 1082 elif b'pushkey' in b2caps:
1083 1083 return _pushb2bookmarkspushkey(pushop, bundler)
1084 1084
1085 1085
1086 1086 def _bmaction(old, new):
1087 1087 """small utility for bookmark pushing"""
1088 1088 if not old:
1089 1089 return b'export'
1090 1090 elif not new:
1091 1091 return b'delete'
1092 1092 return b'update'
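# For instance (illustrative values): _bmaction(b'', newnode) -> b'export',
# _bmaction(oldnode, b'') -> b'delete', _bmaction(oldnode, newnode) -> b'update'.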
1093 1093
1094 1094
1095 1095 def _abortonsecretctx(pushop, node, b):
1096 1096 """abort if a given bookmark points to a secret changeset"""
1097 1097 if node and pushop.repo[node].phase() == phases.secret:
1098 1098 raise error.Abort(
1099 1099 _(b'cannot push bookmark %s as it points to a secret changeset') % b
1100 1100 )
1101 1101
1102 1102
1103 1103 def _pushb2bookmarkspart(pushop, bundler):
1104 1104 pushop.stepsdone.add(b'bookmarks')
1105 1105 if not pushop.outbookmarks:
1106 1106 return
1107 1107
1108 1108 allactions = []
1109 1109 data = []
1110 1110 for book, old, new in pushop.outbookmarks:
1111 1111 _abortonsecretctx(pushop, new, book)
1112 1112 data.append((book, new))
1113 1113 allactions.append((book, _bmaction(old, new)))
1114 1114 checkdata = bookmod.binaryencode(pushop.repo, data)
1115 1115 bundler.newpart(b'bookmarks', data=checkdata)
1116 1116
1117 1117 def handlereply(op):
1118 1118 ui = pushop.ui
1119 1119 # if success
1120 1120 for book, action in allactions:
1121 1121 ui.status(bookmsgmap[action][0] % book)
1122 1122
1123 1123 return handlereply
1124 1124
1125 1125
1126 1126 def _pushb2bookmarkspushkey(pushop, bundler):
1127 1127 pushop.stepsdone.add(b'bookmarks')
1128 1128 part2book = []
1129 1129 enc = pushkey.encode
1130 1130
1131 1131 def handlefailure(pushop, exc):
1132 1132 targetid = int(exc.partid)
1133 1133 for partid, book, action in part2book:
1134 1134 if partid == targetid:
1135 1135 raise error.Abort(bookmsgmap[action][1].rstrip() % book)
1136 1136 # we should not be called for parts we did not generate
1137 1137 assert False
1138 1138
1139 1139 for book, old, new in pushop.outbookmarks:
1140 1140 _abortonsecretctx(pushop, new, book)
1141 1141 part = bundler.newpart(b'pushkey')
1142 1142 part.addparam(b'namespace', enc(b'bookmarks'))
1143 1143 part.addparam(b'key', enc(book))
1144 1144 part.addparam(b'old', enc(hex(old)))
1145 1145 part.addparam(b'new', enc(hex(new)))
1146 1146 action = b'update'
1147 1147 if not old:
1148 1148 action = b'export'
1149 1149 elif not new:
1150 1150 action = b'delete'
1151 1151 part2book.append((part.id, book, action))
1152 1152 pushop.pkfailcb[part.id] = handlefailure
1153 1153
1154 1154 def handlereply(op):
1155 1155 ui = pushop.ui
1156 1156 for partid, book, action in part2book:
1157 1157 partrep = op.records.getreplies(partid)
1158 1158 results = partrep[b'pushkey']
1159 1159 assert len(results) <= 1
1160 1160 if not results:
1161 1161 pushop.ui.warn(_(b'server ignored bookmark %s update\n') % book)
1162 1162 else:
1163 1163 ret = int(results[0][b'return'])
1164 1164 if ret:
1165 1165 ui.status(bookmsgmap[action][0] % book)
1166 1166 else:
1167 1167 ui.warn(bookmsgmap[action][1] % book)
1168 1168 if pushop.bkresult is not None:
1169 1169 pushop.bkresult = 1
1170 1170
1171 1171 return handlereply
1172 1172
1173 1173
1174 1174 @b2partsgenerator(b'pushvars', idx=0)
1175 1175 def _getbundlesendvars(pushop, bundler):
1176 1176 '''send shellvars via bundle2'''
1177 1177 pushvars = pushop.pushvars
1178 1178 if pushvars:
1179 1179 shellvars = {}
1180 1180 for raw in pushvars:
1181 1181 if b'=' not in raw:
1182 1182 msg = (
1183 1183 b"unable to parse variable '%s', should follow "
1184 1184 b"'KEY=VALUE' or 'KEY=' format"
1185 1185 )
1186 1186 raise error.Abort(msg % raw)
1187 1187 k, v = raw.split(b'=', 1)
1188 1188 shellvars[k] = v
1189 1189
1190 1190 part = bundler.newpart(b'pushvars')
1191 1191
1192 1192 for key, value in shellvars.items():
1193 1193 part.addparam(key, value, mandatory=False)
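# Usage sketch (hedged): the variables typically come from the command line,
# e.g.
#
#     $ hg push --pushvars "DEBUG=1" --pushvars "REASON=hotfix"
#
# and, on a server that has the pushvars feature enabled, surface to its
# hooks as HG_USERVAR_DEBUG / HG_USERVAR_REASON environment variables.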
1194 1194
1195 1195
1196 1196 def _pushbundle2(pushop):
1197 1197 """push data to the remote using bundle2
1198 1198
1199 1199 The only currently supported type of data is changegroup but this will
1200 1200 evolve in the future."""
1201 1201 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
1202 1202 pushback = pushop.trmanager and pushop.ui.configbool(
1203 1203 b'experimental', b'bundle2.pushback'
1204 1204 )
1205 1205
1206 1206 # create reply capability
1207 1207 capsblob = bundle2.encodecaps(
1208 1208 bundle2.getrepocaps(pushop.repo, allowpushback=pushback, role=b'client')
1209 1209 )
1210 1210 bundler.newpart(b'replycaps', data=capsblob)
1211 1211 replyhandlers = []
1212 1212 for partgenname in b2partsgenorder:
1213 1213 partgen = b2partsgenmapping[partgenname]
1214 1214 ret = partgen(pushop, bundler)
1215 1215 if callable(ret):
1216 1216 replyhandlers.append(ret)
1217 1217 # do not push if nothing to push
1218 1218 if bundler.nbparts <= 1:
1219 1219 return
1220 1220 stream = util.chunkbuffer(bundler.getchunks())
1221 1221 try:
1222 1222 try:
1223 1223 with pushop.remote.commandexecutor() as e:
1224 1224 reply = e.callcommand(
1225 1225 b'unbundle',
1226 1226 {
1227 1227 b'bundle': stream,
1228 1228 b'heads': [b'force'],
1229 1229 b'url': pushop.remote.url(),
1230 1230 },
1231 1231 ).result()
1232 1232 except error.BundleValueError as exc:
1233 1233 raise error.RemoteError(_(b'missing support for %s') % exc)
1234 1234 try:
1235 1235 trgetter = None
1236 1236 if pushback:
1237 1237 trgetter = pushop.trmanager.transaction
1238 1238 op = bundle2.processbundle(
1239 1239 pushop.repo,
1240 1240 reply,
1241 1241 trgetter,
1242 1242 remote=pushop.remote,
1243 1243 )
1244 1244 except error.BundleValueError as exc:
1245 1245 raise error.RemoteError(_(b'missing support for %s') % exc)
1246 1246 except bundle2.AbortFromPart as exc:
1247 1247 pushop.ui.error(_(b'remote: %s\n') % exc)
1248 1248 if exc.hint is not None:
1249 1249 pushop.ui.error(_(b'remote: %s\n') % (b'(%s)' % exc.hint))
1250 1250 raise error.RemoteError(_(b'push failed on remote'))
1251 1251 except error.PushkeyFailed as exc:
1252 1252 partid = int(exc.partid)
1253 1253 if partid not in pushop.pkfailcb:
1254 1254 raise
1255 1255 pushop.pkfailcb[partid](pushop, exc)
1256 1256 for rephand in replyhandlers:
1257 1257 rephand(op)
1258 1258
1259 1259
1260 1260 def _pushchangeset(pushop):
1261 1261 """Make the actual push of changeset bundle to remote repo"""
1262 1262 if b'changesets' in pushop.stepsdone:
1263 1263 return
1264 1264 pushop.stepsdone.add(b'changesets')
1265 1265 if not _pushcheckoutgoing(pushop):
1266 1266 return
1267 1267
1268 1268 # Should have verified this in push().
1269 1269 assert pushop.remote.capable(b'unbundle')
1270 1270
1271 1271 pushop.repo.prepushoutgoinghooks(pushop)
1272 1272 outgoing = pushop.outgoing
1273 1273 # TODO: get bundlecaps from remote
1274 1274 bundlecaps = None
1275 1275 # create a changegroup from local
1276 1276 if pushop.revs is None and not (
1277 1277 outgoing.excluded or pushop.repo.changelog.filteredrevs
1278 1278 ):
1279 1279 # push everything,
1280 1280 # use the fast path, no race possible on push
1281 1281 cg = changegroup.makechangegroup(
1282 1282 pushop.repo,
1283 1283 outgoing,
1284 1284 b'01',
1285 1285 b'push',
1286 1286 fastpath=True,
1287 1287 bundlecaps=bundlecaps,
1288 1288 )
1289 1289 else:
1290 1290 cg = changegroup.makechangegroup(
1291 1291 pushop.repo, outgoing, b'01', b'push', bundlecaps=bundlecaps
1292 1292 )
1293 1293
1294 1294 # apply changegroup to remote
1295 1295 # local repo finds heads on server, finds out what
1296 1296 # revs it must push. once revs transferred, if server
1297 1297 # finds it has different heads (someone else won
1298 1298 # commit/push race), server aborts.
1299 1299 if pushop.force:
1300 1300 remoteheads = [b'force']
1301 1301 else:
1302 1302 remoteheads = pushop.remoteheads
1303 1303 # ssh: return remote's addchangegroup()
1304 1304 # http: return remote's addchangegroup() or 0 for error
1305 1305 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads, pushop.repo.url())
1306 1306
1307 1307
1308 1308 def _pushsyncphase(pushop):
1309 1309 """synchronise phase information locally and remotely"""
1310 1310 cheads = pushop.commonheads
1311 1311 # even when we don't push, exchanging phase data is useful
1312 1312 remotephases = listkeys(pushop.remote, b'phases')
1313 1313 if (
1314 1314 pushop.ui.configbool(b'ui', b'_usedassubrepo')
1315 1315 and remotephases # server supports phases
1316 1316 and pushop.cgresult is None # nothing was pushed
1317 1317 and remotephases.get(b'publishing', False)
1318 1318 ):
1319 1319 # When:
1320 1320 # - this is a subrepo push
1321 1321 # - and remote supports phases
1322 1322 # - and no changeset was pushed
1323 1323 # - and remote is publishing
1324 1324 # We may be in issue 3871 case!
1325 1325 # We drop the possible phase synchronisation done by
1326 1326 # courtesy to publish changesets possibly locally draft
1327 1327 # on the remote.
1328 1328 remotephases = {b'publishing': b'True'}
1329 1329 if not remotephases: # old server or public only reply from non-publishing
1330 1330 _localphasemove(pushop, cheads)
1331 1331 # don't push any phase data as there is nothing to push
1332 1332 else:
1333 1333 unfi = pushop.repo.unfiltered()
1334 1334 to_rev = unfi.changelog.index.rev
1335 1335 to_node = unfi.changelog.node
1336 1336 cheads_revs = [to_rev(n) for n in cheads]
1337 1337 pheads_revs, _dr = phases.analyze_remote_phases(
1338 1338 pushop.repo,
1339 1339 cheads_revs,
1340 1340 remotephases,
1341 1341 )
1342 1342 pheads = [to_node(r) for r in pheads_revs]
1343 1343 ### Apply remote phase on local
1344 1344 if remotephases.get(b'publishing', False):
1345 1345 _localphasemove(pushop, cheads)
1346 1346 else: # publish = False
1347 1347 _localphasemove(pushop, pheads)
1348 1348 _localphasemove(pushop, cheads, phases.draft)
1349 1349 ### Apply local phase on remote
1350 1350
1351 1351 if pushop.cgresult:
1352 1352 if b'phases' in pushop.stepsdone:
1353 1353 # phases already pushed through bundle2
1354 1354 return
1355 1355 outdated = pushop.outdatedphases
1356 1356 else:
1357 1357 outdated = pushop.fallbackoutdatedphases
1358 1358
1359 1359 pushop.stepsdone.add(b'phases')
1360 1360
1361 1361 # filter heads already turned public by the push
1362 1362 outdated = [c for c in outdated if c.node() not in pheads]
1363 1363 # fallback to independent pushkey command
1364 1364 for newremotehead in outdated:
1365 1365 with pushop.remote.commandexecutor() as e:
1366 1366 r = e.callcommand(
1367 1367 b'pushkey',
1368 1368 {
1369 1369 b'namespace': b'phases',
1370 1370 b'key': newremotehead.hex(),
1371 1371 b'old': b'%d' % phases.draft,
1372 1372 b'new': b'%d' % phases.public,
1373 1373 },
1374 1374 ).result()
1375 1375
1376 1376 if not r:
1377 1377 pushop.ui.warn(
1378 1378 _(b'updating %s to public failed!\n') % newremotehead
1379 1379 )
1380 1380
1381 1381
1382 1382 def _localphasemove(pushop, nodes, phase=phases.public):
1383 1383 """move <nodes> to <phase> in the local source repo"""
1384 1384 if pushop.trmanager:
1385 1385 phases.advanceboundary(
1386 1386 pushop.repo, pushop.trmanager.transaction(), phase, nodes
1387 1387 )
1388 1388 else:
1389 1389 # repo is not locked, do not change any phases!
1390 1390 # Informs the user that phases should have been moved when
1391 1391 # applicable.
1392 1392 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
1393 1393 phasestr = phases.phasenames[phase]
1394 1394 if actualmoves:
1395 1395 pushop.ui.status(
1396 1396 _(
1397 1397 b'cannot lock source repo, skipping '
1398 1398 b'local %s phase update\n'
1399 1399 )
1400 1400 % phasestr
1401 1401 )
1402 1402
1403 1403
1404 1404 def _pushobsolete(pushop):
1405 1405 """utility function to push obsolete markers to a remote"""
1406 1406 if b'obsmarkers' in pushop.stepsdone:
1407 1407 return
1408 1408 repo = pushop.repo
1409 1409 remote = pushop.remote
1410 1410 pushop.stepsdone.add(b'obsmarkers')
1411 1411 if pushop.outobsmarkers:
1412 1412 pushop.ui.debug(b'try to push obsolete markers to remote\n')
1413 1413 rslts = []
1414 1414 markers = obsutil.sortedmarkers(pushop.outobsmarkers)
1415 1415 remotedata = obsolete._pushkeyescape(markers)
1416 1416 for key in sorted(remotedata, reverse=True):
1417 1417 # reverse sort to ensure we end with dump0
1418 1418 data = remotedata[key]
1419 1419 rslts.append(remote.pushkey(b'obsolete', key, b'', data))
1420 1420 if [r for r in rslts if not r]:
1421 1421 msg = _(b'failed to push some obsolete markers!\n')
1422 1422 repo.ui.warn(msg)
1423 1423
1424 1424
1425 1425 def _pushbookmark(pushop):
1426 1426 """Update bookmark position on remote"""
1427 1427 if pushop.cgresult == 0 or b'bookmarks' in pushop.stepsdone:
1428 1428 return
1429 1429 pushop.stepsdone.add(b'bookmarks')
1430 1430 ui = pushop.ui
1431 1431 remote = pushop.remote
1432 1432
1433 1433 for b, old, new in pushop.outbookmarks:
1434 1434 action = b'update'
1435 1435 if not old:
1436 1436 action = b'export'
1437 1437 elif not new:
1438 1438 action = b'delete'
1439 1439
1440 1440 with remote.commandexecutor() as e:
1441 1441 r = e.callcommand(
1442 1442 b'pushkey',
1443 1443 {
1444 1444 b'namespace': b'bookmarks',
1445 1445 b'key': b,
1446 1446 b'old': hex(old),
1447 1447 b'new': hex(new),
1448 1448 },
1449 1449 ).result()
1450 1450
1451 1451 if r:
1452 1452 ui.status(bookmsgmap[action][0] % b)
1453 1453 else:
1454 1454 ui.warn(bookmsgmap[action][1] % b)
1455 1455 # discovery can have set the value from an invalid entry
1456 1456 if pushop.bkresult is not None:
1457 1457 pushop.bkresult = 1
1458 1458
1459 1459
1460 1460 class pulloperation:
1461 1461 """An object that represents a single pull operation
1462 1462 
1463 1463 Its purpose is to carry pull-related state and very common operations.
1464 1464 
1465 1465 A new one should be created at the beginning of each pull and discarded
1466 1466 afterward.
1467 1467 """
1468 1468
1469 1469 def __init__(
1470 1470 self,
1471 1471 repo,
1472 1472 remote,
1473 1473 heads=None,
1474 1474 force=False,
1475 1475 bookmarks=(),
1476 1476 remotebookmarks=None,
1477 1477 streamclonerequested=None,
1478 1478 includepats=None,
1479 1479 excludepats=None,
1480 1480 depth=None,
1481 1481 path=None,
1482 1482 ):
1483 1483 # repo we pull into
1484 1484 self.repo = repo
1485 1485 # repo we pull from
1486 1486 self.remote = remote
1487 1487 # path object used to build this remote
1488 1488 #
1489 1489 # Ideally, the remote peer would carry that directly.
1490 1490 self.remote_path = path
1491 1491 # revisions we try to pull (None means "all")
1492 1492 self.heads = heads
1493 1493 # bookmarks pulled explicitly
1494 1494 self.explicitbookmarks = [
1495 1495 repo._bookmarks.expandname(bookmark) for bookmark in bookmarks
1496 1496 ]
1497 1497 # do we force pull?
1498 1498 self.force = force
1499 1499 # whether a streaming clone was requested
1500 1500 self.streamclonerequested = streamclonerequested
1501 1501 # transaction manager
1502 1502 self.trmanager = None
1503 1503 # set of common changesets between local and remote before pull
1504 1504 self.common = None
1505 1505 # set of pulled heads
1506 1506 self.rheads = None
1507 1507 # list of missing changesets to fetch remotely
1508 1508 self.fetch = None
1509 1509 # remote bookmarks data
1510 1510 self.remotebookmarks = remotebookmarks
1511 1511 # result of changegroup pulling (used as return code by pull)
1512 1512 self.cgresult = None
1513 1513 # list of steps already done
1514 1514 self.stepsdone = set()
1515 1515 # Whether we attempted a clone from pre-generated bundles.
1516 1516 self.clonebundleattempted = False
1517 1517 # Set of file patterns to include.
1518 1518 self.includepats = includepats
1519 1519 # Set of file patterns to exclude.
1520 1520 self.excludepats = excludepats
1521 1521 # Number of ancestor changesets to pull from each pulled head.
1522 1522 self.depth = depth
1523 1523
1524 1524 @util.propertycache
1525 1525 def pulledsubset(self):
1526 1526 """heads of the set of changesets targeted by the pull"""
1527 1527 # compute target subset
1528 1528 if self.heads is None:
1529 1529 # We pulled everything possible
1530 1530 # sync on everything common
1531 1531 c = set(self.common)
1532 1532 ret = list(self.common)
1533 1533 for n in self.rheads:
1534 1534 if n not in c:
1535 1535 ret.append(n)
1536 1536 return ret
1537 1537 else:
1538 1538 # We pulled a specific subset
1539 1539 # sync on this subset
1540 1540 return self.heads
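# For instance (hypothetical nodes): with common = {A} and rheads = [B, C],
# a full pull (heads is None) syncs on [A, B, C]; an explicit heads=[B]
# syncs on [B] only.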
1541 1541
1542 1542 @util.propertycache
1543 1543 def canusebundle2(self):
1544 1544 return not _forcebundle1(self)
1545 1545
1546 1546 @util.propertycache
1547 1547 def remotebundle2caps(self):
1548 1548 return bundle2.bundle2caps(self.remote)
1549 1549
1550 1550 def gettransaction(self):
1551 1551 # deprecated; talk to trmanager directly
1552 1552 return self.trmanager.transaction()
1553 1553
1554 1554
1555 1555 class transactionmanager(util.transactional):
1556 1556 """An object to manage the life cycle of a transaction
1557 1557
1558 1558 It creates the transaction on demand and calls the appropriate hooks when
1559 1559 closing the transaction."""
1560 1560
1561 1561 def __init__(self, repo, source, url):
1562 1562 self.repo = repo
1563 1563 self.source = source
1564 1564 self.url = url
1565 1565 self._tr = None
1566 1566
1567 1567 def transaction(self):
1568 1568 """Return an open transaction object, constructing if necessary"""
1569 1569 if not self._tr:
1570 1570 trname = b'%s\n%s' % (self.source, urlutil.hidepassword(self.url))
1571 1571 self._tr = self.repo.transaction(trname)
1572 1572 self._tr.hookargs[b'source'] = self.source
1573 1573 self._tr.hookargs[b'url'] = self.url
1574 1574 return self._tr
1575 1575
1576 1576 def close(self):
1577 1577 """close transaction if created"""
1578 1578 if self._tr is not None:
1579 1579 self._tr.close()
1580 1580
1581 1581 def release(self):
1582 1582 """release transaction if created"""
1583 1583 if self._tr is not None:
1584 1584 self._tr.release()
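# A minimal usage sketch, mirroring how push() above drives this class:
#
#     trmanager = transactionmanager(repo, b'push-response', remote.url())
#     try:
#         tr = trmanager.transaction()  # created lazily on first use
#         # ... apply incoming data under `tr` ...
#         trmanager.close()
#     finally:
#         trmanager.release()  # rolls back only if close() was never reached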
1585 1585
1586 1586
1587 1587 def listkeys(remote, namespace):
1588 1588 with remote.commandexecutor() as e:
1589 1589 return e.callcommand(b'listkeys', {b'namespace': namespace}).result()
1590 1590
1591 1591
1592 1592 def _fullpullbundle2(repo, pullop):
1593 1593 # The server may send a partial reply, i.e. when inlining
1594 1594 # pre-computed bundles. In that case, update the common
1595 1595 # set based on the results and pull another bundle.
1596 1596 #
1597 1597 # There are two indicators that the process is finished:
1598 1598 # - no changeset has been added, or
1599 1599 # - all remote heads are known locally.
1600 1600 # The head check must use the unfiltered view as obsolescence
1601 1601 # markers can hide heads.
1602 1602 unfi = repo.unfiltered()
1603 1603 unficl = unfi.changelog
1604 1604
1605 1605 def headsofdiff(h1, h2):
1606 1606 """Returns heads(h1 % h2)"""
1607 1607 res = unfi.set(b'heads(%ln %% %ln)', h1, h2)
1608 1608 return {ctx.node() for ctx in res}
1609 1609
1610 1610 def headsofunion(h1, h2):
1611 1611 """Returns heads((h1 + h2) - null)"""
1612 1612 res = unfi.set(b'heads((%ln + %ln - null))', h1, h2)
1613 1613 return {ctx.node() for ctx in res}
1614 1614
1615 1615 while True:
1616 1616 old_heads = unficl.heads()
1617 1617 clstart = len(unficl)
1618 1618 _pullbundle2(pullop)
1619 1619 if requirements.NARROW_REQUIREMENT in repo.requirements:
1620 1620 # XXX narrow clones filter the heads on the server side during
1621 1621 # XXX getbundle and result in partial replies as well.
1622 1622 # XXX Disable pull bundles in this case as band aid to avoid
1623 1623 # XXX extra round trips.
1624 1624 break
1625 1625 if clstart == len(unficl):
1626 1626 break
1627 1627 if all(unficl.hasnode(n) for n in pullop.rheads):
1628 1628 break
1629 1629 new_heads = headsofdiff(unficl.heads(), old_heads)
1630 1630 pullop.common = headsofunion(new_heads, pullop.common)
1631 1631 pullop.rheads = set(pullop.rheads) - pullop.common
1632 1632
1633 1633
1634 1634 def add_confirm_callback(repo, pullop):
1635 1635 """adds a finalize callback to transaction which can be used to show stats
1636 1636 to user and confirm the pull before committing transaction"""
1637 1637
1638 1638 tr = pullop.trmanager.transaction()
1639 1639 scmutil.registersummarycallback(
1640 1640 repo, tr, txnname=b'pull', as_validator=True
1641 1641 )
1642 1642 reporef = weakref.ref(repo.unfiltered())
1643 1643
1644 1644 def prompt(tr):
1645 1645 repo = reporef()
1646 1646 cm = _(b'accept incoming changes (yn)?$$ &Yes $$ &No')
1647 1647 if repo.ui.promptchoice(cm):
1648 1648 raise error.Abort(b"user aborted")
1649 1649
1650 1650 tr.addvalidator(b'900-pull-prompt', prompt)
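# The same prompt can be requested for every pull via configuration, which
# pull() below reads as the b"pull"/b"confirm" option:
#
#     [pull]
#     confirm = true
#
# Under HGPLAIN the configuration is ignored; only an explicit confirm=True
# argument to pull() still triggers the prompt.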
1651 1651
1652 1652
1653 1653 def pull(
1654 1654 repo,
1655 1655 remote,
1656 1656 path=None,
1657 1657 heads=None,
1658 1658 force=False,
1659 1659 bookmarks=(),
1660 1660 opargs=None,
1661 1661 streamclonerequested=None,
1662 1662 includepats=None,
1663 1663 excludepats=None,
1664 1664 depth=None,
1665 1665 confirm=None,
1666 1666 ):
1667 1667 """Fetch repository data from a remote.
1668 1668
1669 1669 This is the main function used to retrieve data from a remote repository.
1670 1670
1671 1671 ``repo`` is the local repository to clone into.
1672 1672 ``remote`` is a peer instance.
1673 1673 ``heads`` is an iterable of revisions we want to pull. ``None`` (the
1674 1674 default) means to pull everything from the remote.
1675 1675 ``bookmarks`` is an iterable of bookmarks requesting to be pulled. By
1676 1676 default, all remote bookmarks are pulled.
1677 1677 ``opargs`` are additional keyword arguments to pass to ``pulloperation``
1678 1678 initialization.
1679 1679 ``streamclonerequested`` is a boolean indicating whether a "streaming
1680 1680 clone" is requested. A "streaming clone" is essentially a raw file copy
1681 1681 of revlogs from the server. This only works when the local repository is
1682 1682 empty. The default value of ``None`` means to respect the server
1683 1683 configuration for preferring stream clones.
1684 1684 ``includepats`` and ``excludepats`` define explicit file patterns to
1685 1685 include and exclude in storage, respectively. If not defined, narrow
1686 1686 patterns from the repo instance are used, if available.
1687 1687 ``depth`` is an integer indicating the DAG depth of history we're
1688 1688 interested in. If defined, for each revision specified in ``heads``, we
1689 1689 will fetch up to this many of its ancestors and data associated with them.
1690 1690 ``confirm`` is a boolean indicating whether the pull should be confirmed
1691 1691 before committing the transaction. This overrides HGPLAIN.
1692 1692
1693 1693 Returns the ``pulloperation`` created for this pull.
1694 1694 """
1695 1695 if opargs is None:
1696 1696 opargs = {}
1697 1697
1698 1698 # We allow the narrow patterns to be passed in explicitly to provide more
1699 1699 # flexibility for API consumers.
1700 1700 if includepats is not None or excludepats is not None:
1701 1701 includepats = includepats or set()
1702 1702 excludepats = excludepats or set()
1703 1703 else:
1704 1704 includepats, excludepats = repo.narrowpats
1705 1705
1706 1706 narrowspec.validatepatterns(includepats)
1707 1707 narrowspec.validatepatterns(excludepats)
1708 1708
1709 1709 pullop = pulloperation(
1710 1710 repo,
1711 1711 remote,
1712 1712 path=path,
1713 1713 heads=heads,
1714 1714 force=force,
1715 1715 bookmarks=bookmarks,
1716 1716 streamclonerequested=streamclonerequested,
1717 1717 includepats=includepats,
1718 1718 excludepats=excludepats,
1719 1719 depth=depth,
1720 1720 **pycompat.strkwargs(opargs)
1721 1721 )
1722 1722
1723 1723 peerlocal = pullop.remote.local()
1724 1724 if peerlocal:
1725 1725 missing = set(peerlocal.requirements) - pullop.repo.supported
1726 1726 if missing:
1727 1727 msg = _(
1728 1728 b"required features are not"
1729 1729 b" supported in the destination:"
1730 1730 b" %s"
1731 1731 ) % (b', '.join(sorted(missing)))
1732 1732 raise error.Abort(msg)
1733 1733
1734 1734 for category in repo._wanted_sidedata:
1735 1735 # Check that a computer is registered for that category for at least
1736 1736 # one revlog kind.
1737 1737 for kind, computers in repo._sidedata_computers.items():
1738 1738 if computers.get(category):
1739 1739 break
1740 1740 else:
1741 1741 # This should never happen since repos are supposed to be able to
1742 1742 # generate the sidedata they require.
1743 1743 raise error.ProgrammingError(
1744 1744 _(
1745 1745 b'sidedata category requested by local side without local'
1746 1746 b"support: '%s'"
1747 1747 )
1748 1748 % pycompat.bytestr(category)
1749 1749 )
1750 1750
1751 1751 pullop.trmanager = transactionmanager(repo, b'pull', remote.url())
1752 1752 wlock = util.nullcontextmanager
1753 1753 if not bookmod.bookmarksinstore(repo):
1754 1754 wlock = repo.wlock
1755 1755 with wlock(), repo.lock(), pullop.trmanager:
1756 1756 if confirm or (
1757 1757 repo.ui.configbool(b"pull", b"confirm") and not repo.ui.plain()
1758 1758 ):
1759 1759 add_confirm_callback(repo, pullop)
1760 1760
1761 1761 # This should ideally be in _pullbundle2(). However, it needs to run
1762 1762 # before discovery to avoid extra work.
1763 1763 _maybeapplyclonebundle(pullop)
1764 1764 streamclone.maybeperformlegacystreamclone(pullop)
1765 1765 _pulldiscovery(pullop)
1766 1766 if pullop.canusebundle2:
1767 1767 _fullpullbundle2(repo, pullop)
1768 1768 _pullchangeset(pullop)
1769 1769 _pullphase(pullop)
1770 1770 _pullbookmarks(pullop)
1771 1771 _pullobsolete(pullop)
1772 1772
1773 1773 # storing remotenames
1774 1774 if repo.ui.configbool(b'experimental', b'remotenames'):
1775 1775 logexchange.pullremotenames(repo, remote)
1776 1776
1777 1777 return pullop
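# A minimal usage sketch (hedged): `peer` is a remote peer object and
# `remote_node` an illustrative changeset known to it.
#
#     pullop = pull(repo, peer, heads=[remote_node], bookmarks=[b'@'])
#     if pullop.cgresult == 0:
#         repo.ui.status(b'no changes found\n')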
1778 1778
1779 1779
1780 1780 # list of steps to perform discovery before pull
1781 1781 pulldiscoveryorder = []
1782 1782
1783 1783 # Mapping between step name and function
1784 1784 #
1785 1785 # This exists to help extensions wrap steps if necessary
1786 1786 pulldiscoverymapping = {}
1787 1787
1788 1788
1789 1789 def pulldiscovery(stepname):
1790 1790 """decorator for function performing discovery before pull
1791 1791
1792 1792 The function is added to the step -> function mapping and appended to the
1793 1793 list of steps. Beware that decorated function will be added in order (this
1794 1794 may matter).
1795 1795
1796 1796 You can only use this decorator for a new step, if you want to wrap a step
1797 1797 from an extension, change the pulldiscovery dictionary directly."""
1798 1798
1799 1799 def dec(func):
1800 1800 assert stepname not in pulldiscoverymapping
1801 1801 pulldiscoverymapping[stepname] = func
1802 1802 pulldiscoveryorder.append(stepname)
1803 1803 return func
1804 1804
1805 1805 return dec
1806 1806
1807 1807
1808 1808 def _pulldiscovery(pullop):
1809 1809 """Run all discovery steps"""
1810 1810 for stepname in pulldiscoveryorder:
1811 1811 step = pulldiscoverymapping[stepname]
1812 1812 step(pullop)
1813 1813
1814 1814
1815 1815 @pulldiscovery(b'b1:bookmarks')
1816 1816 def _pullbookmarkbundle1(pullop):
1817 1817 """fetch bookmark data in bundle1 case
1818 1818
1819 1819 If not using bundle2, we have to fetch bookmarks before changeset
1820 1820 discovery to reduce the chance and impact of race conditions."""
1821 1821 if pullop.remotebookmarks is not None:
1822 1822 return
1823 1823 if pullop.canusebundle2 and b'listkeys' in pullop.remotebundle2caps:
1824 1824 # all known bundle2 servers now support listkeys, but let's be nice with
1825 1825 # new implementations.
1826 1826 return
1827 1827 books = listkeys(pullop.remote, b'bookmarks')
1828 1828 pullop.remotebookmarks = bookmod.unhexlifybookmarks(books)
1829 1829
1830 1830
1831 1831 @pulldiscovery(b'changegroup')
1832 1832 def _pulldiscoverychangegroup(pullop):
1833 1833 """discovery phase for the pull
1834 1834
1835 1835 Currently handles changeset discovery only; will change to handle all
1836 1836 discovery at some point."""
1837 1837 tmp = discovery.findcommonincoming(
1838 1838 pullop.repo, pullop.remote, heads=pullop.heads, force=pullop.force
1839 1839 )
1840 1840 common, fetch, rheads = tmp
1841 1841 has_node = pullop.repo.unfiltered().changelog.index.has_node
1842 1842 if fetch and rheads:
1843 1843 # If a remote head is filtered locally, put it back in common.
1844 1844 #
1845 1845 # This is a hackish solution to catch most of the "common but locally
1846 1846 # hidden" situations. We do not perform discovery on the unfiltered
1847 1847 # repository because it ends up doing a pathological amount of round
1848 1848 # trips for a huge amount of changesets we do not care about.
1849 1849 #
1850 1850 # If a set of such "common but filtered" changesets exists on the server
1851 1851 # but does not include a remote head, we'll not be able to detect it.
1852 1852 scommon = set(common)
1853 1853 for n in rheads:
1854 1854 if has_node(n):
1855 1855 if n not in scommon:
1856 1856 common.append(n)
1857 1857 if set(rheads).issubset(set(common)):
1858 1858 fetch = []
1859 1859 pullop.common = common
1860 1860 pullop.fetch = fetch
1861 1861 pullop.rheads = rheads
1862 1862
1863 1863
1864 1864 def _pullbundle2(pullop):
1865 1865 """pull data using bundle2
1866 1866
1867 1867 For now, the only supported data are changegroup."""
1868 1868 kwargs = {b'bundlecaps': caps20to10(pullop.repo, role=b'client')}
1869 1869
1870 1870 # make ui easier to access
1871 1871 ui = pullop.repo.ui
1872 1872
1873 1873 # At the moment we don't do stream clones over bundle2. If that is
1874 1874 # implemented then here's where the check for that will go.
1875 1875 streaming = streamclone.canperformstreamclone(pullop, bundle2=True)[0]
1876 1876
1877 1877 # declare pull perimeters
1878 1878 kwargs[b'common'] = pullop.common
1879 1879 kwargs[b'heads'] = pullop.heads or pullop.rheads
1880 1880
1881 1881 # check server supports narrow and then adding includepats and excludepats
1882 1882 servernarrow = pullop.remote.capable(wireprototypes.NARROWCAP)
1883 1883 if servernarrow and pullop.includepats:
1884 1884 kwargs[b'includepats'] = pullop.includepats
1885 1885 if servernarrow and pullop.excludepats:
1886 1886 kwargs[b'excludepats'] = pullop.excludepats
1887 1887
1888 1888 if streaming:
1889 1889 kwargs[b'cg'] = False
1890 1890 kwargs[b'stream'] = True
1891 1891 pullop.stepsdone.add(b'changegroup')
1892 1892 pullop.stepsdone.add(b'phases')
1893 1893
1894 1894 else:
1895 1895 # pulling changegroup
1896 1896 pullop.stepsdone.add(b'changegroup')
1897 1897
1898 1898 kwargs[b'cg'] = pullop.fetch
1899 1899
1900 1900 legacyphase = b'phases' in ui.configlist(b'devel', b'legacy.exchange')
1901 1901 hasbinaryphase = b'heads' in pullop.remotebundle2caps.get(b'phases', ())
1902 1902 if not legacyphase and hasbinaryphase:
1903 1903 kwargs[b'phases'] = True
1904 1904 pullop.stepsdone.add(b'phases')
1905 1905
1906 1906 if b'listkeys' in pullop.remotebundle2caps:
1907 1907 if b'phases' not in pullop.stepsdone:
1908 1908 kwargs[b'listkeys'] = [b'phases']
1909 1909
1910 1910 bookmarksrequested = False
1911 1911 legacybookmark = b'bookmarks' in ui.configlist(b'devel', b'legacy.exchange')
1912 1912 hasbinarybook = b'bookmarks' in pullop.remotebundle2caps
1913 1913
1914 1914 if pullop.remotebookmarks is not None:
1915 1915 pullop.stepsdone.add(b'request-bookmarks')
1916 1916
1917 1917 if (
1918 1918 b'request-bookmarks' not in pullop.stepsdone
1919 1919 and pullop.remotebookmarks is None
1920 1920 and not legacybookmark
1921 1921 and hasbinarybook
1922 1922 ):
1923 1923 kwargs[b'bookmarks'] = True
1924 1924 bookmarksrequested = True
1925 1925
1926 1926 if b'listkeys' in pullop.remotebundle2caps:
1927 1927 if b'request-bookmarks' not in pullop.stepsdone:
1928 1928 # make sure to always include bookmark data when migrating
1929 1929 # `hg incoming --bundle` to using this function.
1930 1930 pullop.stepsdone.add(b'request-bookmarks')
1931 1931 kwargs.setdefault(b'listkeys', []).append(b'bookmarks')
1932 1932
1933 1933 # If this is a full pull / clone and the server supports the clone bundles
1934 1934 # feature, tell the server whether we attempted a clone bundle. The
1935 1935 # presence of this flag indicates the client supports clone bundles. This
1936 1936 # will enable the server to treat clients that support clone bundles
1937 1937 # differently from those that don't.
1938 1938 if (
1939 1939 pullop.remote.capable(b'clonebundles')
1940 1940 and pullop.heads is None
1941 1941 and list(pullop.common) == [pullop.repo.nullid]
1942 1942 ):
1943 1943 kwargs[b'cbattempted'] = pullop.clonebundleattempted
1944 1944
1945 1945 if streaming:
1946 1946 pullop.repo.ui.status(_(b'streaming all changes\n'))
1947 1947 elif not pullop.fetch:
1948 1948 pullop.repo.ui.status(_(b"no changes found\n"))
1949 1949 pullop.cgresult = 0
1950 1950 else:
1951 1951 if pullop.heads is None and list(pullop.common) == [pullop.repo.nullid]:
1952 1952 pullop.repo.ui.status(_(b"requesting all changes\n"))
1953 1953 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1954 1954 remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
1955 1955 if obsolete.commonversion(remoteversions) is not None:
1956 1956 kwargs[b'obsmarkers'] = True
1957 1957 pullop.stepsdone.add(b'obsmarkers')
1958 1958 _pullbundle2extraprepare(pullop, kwargs)
1959 1959
1960 1960 remote_sidedata = bundle2.read_remote_wanted_sidedata(pullop.remote)
1961 1961 if remote_sidedata:
1962 1962 kwargs[b'remote_sidedata'] = remote_sidedata
1963 1963
1964 1964 with pullop.remote.commandexecutor() as e:
1965 1965 args = dict(kwargs)
1966 1966 args[b'source'] = b'pull'
1967 1967 bundle = e.callcommand(b'getbundle', args).result()
1968 1968
1969 1969 try:
1970 1970 op = bundle2.bundleoperation(
1971 1971 pullop.repo,
1972 1972 pullop.gettransaction,
1973 1973 source=b'pull',
1974 1974 remote=pullop.remote,
1975 1975 )
1976 1976 op.modes[b'bookmarks'] = b'records'
1977 1977 bundle2.processbundle(
1978 1978 pullop.repo,
1979 1979 bundle,
1980 1980 op=op,
1981 1981 remote=pullop.remote,
1982 1982 )
1983 1983 except bundle2.AbortFromPart as exc:
1984 1984 pullop.repo.ui.error(_(b'remote: abort: %s\n') % exc)
1985 1985 raise error.RemoteError(_(b'pull failed on remote'), hint=exc.hint)
1986 1986 except error.BundleValueError as exc:
1987 1987 raise error.RemoteError(_(b'missing support for %s') % exc)
1988 1988
1989 1989 if pullop.fetch:
1990 1990 pullop.cgresult = bundle2.combinechangegroupresults(op)
1991 1991
1992 1992 # processing phases change
1993 1993 for namespace, value in op.records[b'listkeys']:
1994 1994 if namespace == b'phases':
1995 1995 _pullapplyphases(pullop, value)
1996 1996
1997 1997 # processing bookmark update
1998 1998 if bookmarksrequested:
1999 1999 books = {}
2000 2000 for record in op.records[b'bookmarks']:
2001 2001 books[record[b'bookmark']] = record[b"node"]
2002 2002 pullop.remotebookmarks = books
2003 2003 else:
2004 2004 for namespace, value in op.records[b'listkeys']:
2005 2005 if namespace == b'bookmarks':
2006 2006 pullop.remotebookmarks = bookmod.unhexlifybookmarks(value)
2007 2007
2008 2008 # bookmark data were either already there or pulled in the bundle
2009 2009 if pullop.remotebookmarks is not None:
2010 2010 _pullbookmarks(pullop)
2011 2011
2012 2012
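
The function above assembles the `getbundle` argument dictionary step by step, depending on the server's bundle2 capabilities. As a rough illustration only (all values below are made up), a non-streaming incremental pull against a server with binary phase/bookmark support and obsolescence exchange enabled might end up with something like:

    # Hypothetical snapshot of `kwargs` just before the commandexecutor call.
    kwargs = {
        b'bundlecaps': {b'HG20', b'bundle2=<urlquoted caps blob>'},
        b'common': [b'<20-byte node>'],
        b'heads': [b'<20-byte node>'],
        b'cg': True,
        b'phases': True,
        b'bookmarks': True,
        b'obsmarkers': True,
    }
    # _pullbundle2() then adds b'source': b'pull' and calls
    # e.callcommand(b'getbundle', args) on the remote.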
2013 2013 def _pullbundle2extraprepare(pullop, kwargs):
2014 2014 """hook function so that extensions can extend the getbundle call"""
2015 2015
2016 2016
2017 2017 def _pullchangeset(pullop):
2018 2018 """pull changeset from unbundle into the local repo"""
2019 2019     # We delay opening the transaction as late as possible so we
2020 2020     # don't open a transaction for nothing and don't break a future useful
2021 2021     # rollback call
2022 2022 if b'changegroup' in pullop.stepsdone:
2023 2023 return
2024 2024 pullop.stepsdone.add(b'changegroup')
2025 2025 if not pullop.fetch:
2026 2026 pullop.repo.ui.status(_(b"no changes found\n"))
2027 2027 pullop.cgresult = 0
2028 2028 return
2029 2029 tr = pullop.gettransaction()
2030 2030 if pullop.heads is None and list(pullop.common) == [pullop.repo.nullid]:
2031 2031 pullop.repo.ui.status(_(b"requesting all changes\n"))
2032 2032 elif pullop.heads is None and pullop.remote.capable(b'changegroupsubset'):
2033 2033 # issue1320, avoid a race if remote changed after discovery
2034 2034 pullop.heads = pullop.rheads
2035 2035
2036 2036 if pullop.remote.capable(b'getbundle'):
2037 2037 # TODO: get bundlecaps from remote
2038 2038 cg = pullop.remote.getbundle(
2039 2039 b'pull', common=pullop.common, heads=pullop.heads or pullop.rheads
2040 2040 )
2041 2041 elif pullop.heads is None:
2042 2042 with pullop.remote.commandexecutor() as e:
2043 2043 cg = e.callcommand(
2044 2044 b'changegroup',
2045 2045 {
2046 2046 b'nodes': pullop.fetch,
2047 2047 b'source': b'pull',
2048 2048 },
2049 2049 ).result()
2050 2050
2051 2051 elif not pullop.remote.capable(b'changegroupsubset'):
2052 2052 raise error.Abort(
2053 2053 _(
2054 2054 b"partial pull cannot be done because "
2055 2055 b"other repository doesn't support "
2056 2056 b"changegroupsubset."
2057 2057 )
2058 2058 )
2059 2059 else:
2060 2060 with pullop.remote.commandexecutor() as e:
2061 2061 cg = e.callcommand(
2062 2062 b'changegroupsubset',
2063 2063 {
2064 2064 b'bases': pullop.fetch,
2065 2065 b'heads': pullop.heads,
2066 2066 b'source': b'pull',
2067 2067 },
2068 2068 ).result()
2069 2069
2070 2070 bundleop = bundle2.applybundle(
2071 2071 pullop.repo,
2072 2072 cg,
2073 2073 tr,
2074 2074 b'pull',
2075 2075 pullop.remote.url(),
2076 2076 remote=pullop.remote,
2077 2077 )
2078 2078 pullop.cgresult = bundle2.combinechangegroupresults(bundleop)
2079 2079
2080 2080
2081 2081 def _pullphase(pullop):
2082 2082 # Get remote phases data from remote
2083 2083 if b'phases' in pullop.stepsdone:
2084 2084 return
2085 2085 remotephases = listkeys(pullop.remote, b'phases')
2086 2086 _pullapplyphases(pullop, remotephases)
2087 2087
2088 2088
2089 2089 def _pullapplyphases(pullop, remotephases):
2090 2090 """apply phase movement from observed remote state"""
2091 2091 if b'phases' in pullop.stepsdone:
2092 2092 return
2093 2093 pullop.stepsdone.add(b'phases')
2094 2094 publishing = bool(remotephases.get(b'publishing', False))
2095 2095 if remotephases and not publishing:
2096 2096 unfi = pullop.repo.unfiltered()
2097 2097 to_rev = unfi.changelog.index.rev
2098 2098 to_node = unfi.changelog.node
2099 2099 pulledsubset_revs = [to_rev(n) for n in pullop.pulledsubset]
2100 2100 # remote is new and non-publishing
2101 2101 pheads_revs, _dr = phases.analyze_remote_phases(
2102 2102 pullop.repo,
2103 2103 pulledsubset_revs,
2104 2104 remotephases,
2105 2105 )
2106 2106 pheads = [to_node(r) for r in pheads_revs]
2107 2107 dheads = pullop.pulledsubset
2108 2108 else:
2109 2109         # Remote is old or publishing; all common changesets
2110 2110         # should be seen as public
2111 2111 pheads = pullop.pulledsubset
2112 2112 dheads = []
2113 2113 unfi = pullop.repo.unfiltered()
2114 2114 phase = unfi._phasecache.phase
2115 2115 rev = unfi.changelog.index.get_rev
2116 2116 public = phases.public
2117 2117 draft = phases.draft
2118 2118
2119 2119 # exclude changesets already public locally and update the others
2120 2120 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
2121 2121 if pheads:
2122 2122 tr = pullop.gettransaction()
2123 2123 phases.advanceboundary(pullop.repo, tr, public, pheads)
2124 2124
2125 2125 # exclude changesets already draft locally and update the others
2126 2126 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
2127 2127 if dheads:
2128 2128 tr = pullop.gettransaction()
2129 2129 phases.advanceboundary(pullop.repo, tr, draft, dheads)
2130 2130
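
For reference, the `remotephases` argument consumed here comes from the `phases` listkeys namespace. A sketch of the two shapes this function distinguishes (node hashes are made-up placeholders; the exact wire contents are decided by the server):

    # Publishing server: only the advertisement flag, everything pulled is public.
    remotephases = {b'publishing': b'True'}

    # Non-publishing server: draft roots listed as hex node -> phase number.
    remotephases = {
        b'<40-hex draft root>': b'1',
        b'<another 40-hex draft root>': b'1',
    }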
2131 2131
2132 2132 def _pullbookmarks(pullop):
2133 2133 """process the remote bookmark information to update the local one"""
2134 2134 if b'bookmarks' in pullop.stepsdone:
2135 2135 return
2136 2136 pullop.stepsdone.add(b'bookmarks')
2137 2137 repo = pullop.repo
2138 2138 remotebookmarks = pullop.remotebookmarks
2139 2139 bookmarks_mode = None
2140 2140 if pullop.remote_path is not None:
2141 2141 bookmarks_mode = pullop.remote_path.bookmarks_mode
2142 2142 bookmod.updatefromremote(
2143 2143 repo.ui,
2144 2144 repo,
2145 2145 remotebookmarks,
2146 2146 pullop.remote.url(),
2147 2147 pullop.gettransaction,
2148 2148 explicit=pullop.explicitbookmarks,
2149 2149 mode=bookmarks_mode,
2150 2150 )
2151 2151
2152 2152
2153 2153 def _pullobsolete(pullop):
2154 2154 """utility function to pull obsolete markers from a remote
2155 2155
2156 2156     `gettransaction` is a function that returns the pull transaction, creating
2157 2157     one if necessary. We return the transaction to inform the calling code that
2158 2158     a new transaction has been created (when applicable).
2159 2159
2160 2160     Exists mostly to allow overriding for experimentation purposes"""
2161 2161 if b'obsmarkers' in pullop.stepsdone:
2162 2162 return
2163 2163 pullop.stepsdone.add(b'obsmarkers')
2164 2164 tr = None
2165 2165 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
2166 2166 pullop.repo.ui.debug(b'fetching remote obsolete markers\n')
2167 2167 remoteobs = listkeys(pullop.remote, b'obsolete')
2168 2168 if b'dump0' in remoteobs:
2169 2169 tr = pullop.gettransaction()
2170 2170 markers = []
2171 2171 for key in sorted(remoteobs, reverse=True):
2172 2172 if key.startswith(b'dump'):
2173 2173 data = util.b85decode(remoteobs[key])
2174 2174 version, newmarks = obsolete._readmarkers(data)
2175 2175 markers += newmarks
2176 2176 if markers:
2177 2177 pullop.repo.obsstore.add(tr, markers)
2178 2178 pullop.repo.invalidatevolatilesets()
2179 2179 return tr
2180 2180
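
For reference, the legacy exchange path above ships markers through the `obsolete` pushkey namespace as base85-encoded blobs stored under keys named `dump0`, `dump1`, and so on. A small sketch of decoding one such value by hand, mirroring the loop above:

    from mercurial import obsolete, util

    def decode_obsolete_dump(value):
        # value is one entry from listkeys(remote, b'obsolete')
        data = util.b85decode(value)
        version, markers = obsolete._readmarkers(data)
        return version, list(markers)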
2181 2181
2182 2182 def applynarrowacl(repo, kwargs):
2183 2183 """Apply narrow fetch access control.
2184 2184
2185 2185 This massages the named arguments for getbundle wire protocol commands
2186 2186 so requested data is filtered through access control rules.
2187 2187 """
2188 2188 ui = repo.ui
2189 2189 # TODO this assumes existence of HTTP and is a layering violation.
2190 2190 username = ui.shortuser(ui.environ.get(b'REMOTE_USER') or ui.username())
2191 2191 user_includes = ui.configlist(
2192 2192 _NARROWACL_SECTION,
2193 2193 username + b'.includes',
2194 2194 ui.configlist(_NARROWACL_SECTION, b'default.includes'),
2195 2195 )
2196 2196 user_excludes = ui.configlist(
2197 2197 _NARROWACL_SECTION,
2198 2198 username + b'.excludes',
2199 2199 ui.configlist(_NARROWACL_SECTION, b'default.excludes'),
2200 2200 )
2201 2201 if not user_includes:
2202 2202 raise error.Abort(
2203 2203 _(b"%s configuration for user %s is empty")
2204 2204 % (_NARROWACL_SECTION, username)
2205 2205 )
2206 2206
2207 2207 user_includes = [
2208 2208 b'path:.' if p == b'*' else b'path:' + p for p in user_includes
2209 2209 ]
2210 2210 user_excludes = [
2211 2211 b'path:.' if p == b'*' else b'path:' + p for p in user_excludes
2212 2212 ]
2213 2213
2214 2214 req_includes = set(kwargs.get('includepats', []))
2215 2215 req_excludes = set(kwargs.get('excludepats', []))
2216 2216
2217 2217 req_includes, req_excludes, invalid_includes = narrowspec.restrictpatterns(
2218 2218 req_includes, req_excludes, user_includes, user_excludes
2219 2219 )
2220 2220
2221 2221 if invalid_includes:
2222 2222 raise error.Abort(
2223 2223 _(b"The following includes are not accessible for %s: %s")
2224 2224 % (username, stringutil.pprint(invalid_includes))
2225 2225 )
2226 2226
2227 2227 new_args = {}
2228 2228 new_args.update(kwargs)
2229 2229 new_args['narrow'] = True
2230 2230 new_args['narrow_acl'] = True
2231 2231 new_args['includepats'] = req_includes
2232 2232 if req_excludes:
2233 2233 new_args['excludepats'] = req_excludes
2234 2234
2235 2235 return new_args
2236 2236
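
As a usage sketch, the ACL lookups above read per-user patterns from the config section referenced by `_NARROWACL_SECTION` (defined earlier in this file; assumed here to resolve to `narrowacl`). An hgrc along these lines would restrict the hypothetical user `alice` while leaving a permissive default:

    [narrowacl]
    default.includes = *
    alice.includes = src/ docs/
    alice.excludes = src/internal/

A `*` include is translated to `path:.` by the code above; a request for patterns outside the allowed set aborts with the "not accessible" error.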
2237 2237
2238 2238 def _computeellipsis(repo, common, heads, known, match, depth=None):
2239 2239 """Compute the shape of a narrowed DAG.
2240 2240
2241 2241 Args:
2242 2242 repo: The repository we're transferring.
2243 2243 common: The roots of the DAG range we're transferring.
2244 2244 May be just [nullid], which means all ancestors of heads.
2245 2245 heads: The heads of the DAG range we're transferring.
2246 2246 match: The narrowmatcher that allows us to identify relevant changes.
2247 2247 depth: If not None, only consider nodes to be full nodes if they are at
2248 2248 most depth changesets away from one of heads.
2249 2249
2250 2250 Returns:
2251 2251 A tuple of (visitnodes, relevant_nodes, ellipsisroots) where:
2252 2252
2253 2253 visitnodes: The list of nodes (either full or ellipsis) which
2254 2254 need to be sent to the client.
2255 2255 relevant_nodes: The set of changelog nodes which change a file inside
2256 2256 the narrowspec. The client needs these as non-ellipsis nodes.
2257 2257 ellipsisroots: A dict of {rev: parents} that is used in
2258 2258 narrowchangegroup to produce ellipsis nodes with the
2259 2259 correct parents.
2260 2260 """
2261 2261 cl = repo.changelog
2262 2262 mfl = repo.manifestlog
2263 2263
2264 2264 clrev = cl.rev
2265 2265
2266 2266 commonrevs = {clrev(n) for n in common} | {nullrev}
2267 2267 headsrevs = {clrev(n) for n in heads}
2268 2268
2269 2269 if depth:
2270 2270 revdepth = {h: 0 for h in headsrevs}
2271 2271
2272 2272 ellipsisheads = collections.defaultdict(set)
2273 2273 ellipsisroots = collections.defaultdict(set)
2274 2274
2275 2275 def addroot(head, curchange):
2276 2276 """Add a root to an ellipsis head, splitting heads with 3 roots."""
2277 2277 ellipsisroots[head].add(curchange)
2278 2278 # Recursively split ellipsis heads with 3 roots by finding the
2279 2279 # roots' youngest common descendant which is an elided merge commit.
2280 2280 # That descendant takes 2 of the 3 roots as its own, and becomes a
2281 2281 # root of the head.
2282 2282 while len(ellipsisroots[head]) > 2:
2283 2283 child, roots = splithead(head)
2284 2284 splitroots(head, child, roots)
2285 2285 head = child # Recurse in case we just added a 3rd root
2286 2286
2287 2287 def splitroots(head, child, roots):
2288 2288 ellipsisroots[head].difference_update(roots)
2289 2289 ellipsisroots[head].add(child)
2290 2290 ellipsisroots[child].update(roots)
2291 2291 ellipsisroots[child].discard(child)
2292 2292
2293 2293 def splithead(head):
2294 2294 r1, r2, r3 = sorted(ellipsisroots[head])
2295 2295 for nr1, nr2 in ((r2, r3), (r1, r3), (r1, r2)):
2296 2296 mid = repo.revs(
2297 2297 b'sort(merge() & %d::%d & %d::%d, -rev)', nr1, head, nr2, head
2298 2298 )
2299 2299 for j in mid:
2300 2300 if j == nr2:
2301 2301 return nr2, (nr1, nr2)
2302 2302 if j not in ellipsisroots or len(ellipsisroots[j]) < 2:
2303 2303 return j, (nr1, nr2)
2304 2304 raise error.Abort(
2305 2305 _(
2306 2306 b'Failed to split up ellipsis node! head: %d, '
2307 2307 b'roots: %d %d %d'
2308 2308 )
2309 2309 % (head, r1, r2, r3)
2310 2310 )
2311 2311
2312 2312 missing = list(cl.findmissingrevs(common=commonrevs, heads=headsrevs))
2313 2313 visit = reversed(missing)
2314 2314 relevant_nodes = set()
2315 2315 visitnodes = [cl.node(m) for m in missing]
2316 2316 required = set(headsrevs) | known
2317 2317 for rev in visit:
2318 2318 clrev = cl.changelogrevision(rev)
2319 2319 ps = [prev for prev in cl.parentrevs(rev) if prev != nullrev]
2320 2320 if depth is not None:
2321 2321 curdepth = revdepth[rev]
2322 2322 for p in ps:
2323 2323 revdepth[p] = min(curdepth + 1, revdepth.get(p, depth + 1))
2324 2324 needed = False
2325 2325 shallow_enough = depth is None or revdepth[rev] <= depth
2326 2326 if shallow_enough:
2327 2327 curmf = mfl[clrev.manifest].read()
2328 2328 if ps:
2329 2329 # We choose to not trust the changed files list in
2330 2330 # changesets because it's not always correct. TODO: could
2331 2331 # we trust it for the non-merge case?
2332 2332 p1mf = mfl[cl.changelogrevision(ps[0]).manifest].read()
2333 2333 needed = bool(curmf.diff(p1mf, match))
2334 2334 if not needed and len(ps) > 1:
2335 2335 # For merge changes, the list of changed files is not
2336 2336 # helpful, since we need to emit the merge if a file
2337 2337 # in the narrow spec has changed on either side of the
2338 2338 # merge. As a result, we do a manifest diff to check.
2339 2339 p2mf = mfl[cl.changelogrevision(ps[1]).manifest].read()
2340 2340 needed = bool(curmf.diff(p2mf, match))
2341 2341 else:
2342 2342 # For a root node, we need to include the node if any
2343 2343 # files in the node match the narrowspec.
2344 2344 needed = any(curmf.walk(match))
2345 2345
2346 2346 if needed:
2347 2347 for head in ellipsisheads[rev]:
2348 2348 addroot(head, rev)
2349 2349 for p in ps:
2350 2350 required.add(p)
2351 2351 relevant_nodes.add(cl.node(rev))
2352 2352 else:
2353 2353 if not ps:
2354 2354 ps = [nullrev]
2355 2355 if rev in required:
2356 2356 for head in ellipsisheads[rev]:
2357 2357 addroot(head, rev)
2358 2358 for p in ps:
2359 2359 ellipsisheads[p].add(rev)
2360 2360 else:
2361 2361 for p in ps:
2362 2362 ellipsisheads[p] |= ellipsisheads[rev]
2363 2363
2364 2364 # add common changesets as roots of their reachable ellipsis heads
2365 2365 for c in commonrevs:
2366 2366 for head in ellipsisheads[c]:
2367 2367 addroot(head, c)
2368 2368 return visitnodes, relevant_nodes, ellipsisroots
2369 2369
2370 2370
2371 2371 def caps20to10(repo, role):
2372 2372 """return a set with appropriate options to use bundle20 during getbundle"""
2373 2373 caps = {b'HG20'}
2374 2374 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=role))
2375 2375 caps.add(b'bundle2=' + urlreq.quote(capsblob))
2376 2376 return caps
2377 2377
2378 2378
2379 2379 # List of names of steps to perform for a bundle2 for getbundle, order matters.
2380 2380 getbundle2partsorder = []
2381 2381
2382 2382 # Mapping between step name and function
2383 2383 #
2384 2384 # This exists to help extensions wrap steps if necessary
2385 2385 getbundle2partsmapping = {}
2386 2386
2387 2387
2388 2388 def getbundle2partsgenerator(stepname, idx=None):
2389 2389     """decorator for functions generating bundle2 parts for getbundle
2390 2390
2391 2391 The function is added to the step -> function mapping and appended to the
2392 2392 list of steps. Beware that decorated functions will be added in order
2393 2393 (this may matter).
2394 2394
2395 2395     You can only use this decorator for new steps; if you want to wrap a step
2396 2396     from an extension, modify the getbundle2partsmapping dictionary directly."""
2397 2397
2398 2398 def dec(func):
2399 2399 assert stepname not in getbundle2partsmapping
2400 2400 getbundle2partsmapping[stepname] = func
2401 2401 if idx is None:
2402 2402 getbundle2partsorder.append(stepname)
2403 2403 else:
2404 2404 getbundle2partsorder.insert(idx, stepname)
2405 2405 return func
2406 2406
2407 2407 return dec
2408 2408
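
To make the decorator contract concrete, here is a sketch of how a hypothetical extension could register a new step and wrap an existing one. The part name `myextdata` and its payload are made up, and a matching bundle2 part handler on the receiving side is not shown:

    from mercurial import exchange

    @exchange.getbundle2partsgenerator(b'myextdata')
    def _getbundlemyextpart(bundler, repo, source, b2caps=None, **kwargs):
        # only emit the part when the client advertised support for it
        if b2caps and b'myextdata' in b2caps:
            bundler.newpart(b'myextdata', data=b'payload', mandatory=False)

    # Wrapping an existing step goes through the mapping directly, as the
    # docstring above suggests:
    orig = exchange.getbundle2partsmapping[b'changegroup']

    def wrapped(bundler, repo, source, **kwargs):
        repo.ui.debug(b'about to generate the changegroup part\n')
        return orig(bundler, repo, source, **kwargs)

    exchange.getbundle2partsmapping[b'changegroup'] = wrapped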
2409 2409
2410 2410 def bundle2requested(bundlecaps):
2411 2411 if bundlecaps is not None:
2412 2412 return any(cap.startswith(b'HG2') for cap in bundlecaps)
2413 2413 return False
2414 2414
2415 2415
2416 2416 def getbundlechunks(
2417 2417 repo,
2418 2418 source,
2419 2419 heads=None,
2420 2420 common=None,
2421 2421 bundlecaps=None,
2422 2422 remote_sidedata=None,
2423 2423 **kwargs
2424 2424 ):
2425 2425 """Return chunks constituting a bundle's raw data.
2426 2426
2427 2427 Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
2428 2428 passed.
2429 2429
2430 2430 Returns a 2-tuple of a dict with metadata about the generated bundle
2431 2431 and an iterator over raw chunks (of varying sizes).
2432 2432 """
2433 2433 kwargs = pycompat.byteskwargs(kwargs)
2434 2434 info = {}
2435 2435 usebundle2 = bundle2requested(bundlecaps)
2436 2436 # bundle10 case
2437 2437 if not usebundle2:
2438 2438 if bundlecaps and not kwargs.get(b'cg', True):
2439 2439 raise ValueError(
2440 2440 _(b'request for bundle10 must include changegroup')
2441 2441 )
2442 2442
2443 2443 if kwargs:
2444 2444 raise ValueError(
2445 2445 _(b'unsupported getbundle arguments: %s')
2446 2446 % b', '.join(sorted(kwargs.keys()))
2447 2447 )
2448 2448 outgoing = _computeoutgoing(repo, heads, common)
2449 2449 info[b'bundleversion'] = 1
2450 2450 return (
2451 2451 info,
2452 2452 changegroup.makestream(
2453 2453 repo,
2454 2454 outgoing,
2455 2455 b'01',
2456 2456 source,
2457 2457 bundlecaps=bundlecaps,
2458 2458 remote_sidedata=remote_sidedata,
2459 2459 ),
2460 2460 )
2461 2461
2462 2462 # bundle20 case
2463 2463 info[b'bundleversion'] = 2
2464 2464 b2caps = {}
2465 2465 for bcaps in bundlecaps:
2466 2466 if bcaps.startswith(b'bundle2='):
2467 2467 blob = urlreq.unquote(bcaps[len(b'bundle2=') :])
2468 2468 b2caps.update(bundle2.decodecaps(blob))
2469 2469 bundler = bundle2.bundle20(repo.ui, b2caps)
2470 2470
2471 2471 kwargs[b'heads'] = heads
2472 2472 kwargs[b'common'] = common
2473 2473
2474 2474 for name in getbundle2partsorder:
2475 2475 func = getbundle2partsmapping[name]
2476 2476 func(
2477 2477 bundler,
2478 2478 repo,
2479 2479 source,
2480 2480 bundlecaps=bundlecaps,
2481 2481 b2caps=b2caps,
2482 2482 remote_sidedata=remote_sidedata,
2483 2483 **pycompat.strkwargs(kwargs)
2484 2484 )
2485 2485
2486 2486 info[b'prefercompressed'] = bundler.prefercompressed
2487 2487
2488 2488 return info, bundler.getchunks()
2489 2489
2490 2490
2491 2491 @getbundle2partsgenerator(b'stream')
2492 2492 def _getbundlestream2(bundler, repo, *args, **kwargs):
2493 2493 return bundle2.addpartbundlestream2(bundler, repo, **kwargs)
2494 2494
2495 2495
2496 2496 @getbundle2partsgenerator(b'changegroup')
2497 2497 def _getbundlechangegrouppart(
2498 2498 bundler,
2499 2499 repo,
2500 2500 source,
2501 2501 bundlecaps=None,
2502 2502 b2caps=None,
2503 2503 heads=None,
2504 2504 common=None,
2505 2505 remote_sidedata=None,
2506 2506 **kwargs
2507 2507 ):
2508 2508 """add a changegroup part to the requested bundle"""
2509 2509 if not kwargs.get('cg', True) or not b2caps:
2510 2510 return
2511 2511
2512 2512 version = b'01'
2513 2513 cgversions = b2caps.get(b'changegroup')
2514 2514 if cgversions: # 3.1 and 3.2 ship with an empty value
2515 2515 cgversions = [
2516 2516 v
2517 2517 for v in cgversions
2518 2518 if v in changegroup.supportedoutgoingversions(repo)
2519 2519 ]
2520 2520 if not cgversions:
2521 2521 raise error.Abort(_(b'no common changegroup version'))
2522 2522 version = max(cgversions)
2523 2523
2524 2524 outgoing = _computeoutgoing(repo, heads, common)
2525 2525 if not outgoing.missing:
2526 2526 return
2527 2527
2528 2528 if kwargs.get('narrow', False):
2529 2529 include = sorted(filter(bool, kwargs.get('includepats', [])))
2530 2530 exclude = sorted(filter(bool, kwargs.get('excludepats', [])))
2531 2531 matcher = narrowspec.match(repo.root, include=include, exclude=exclude)
2532 2532 else:
2533 2533 matcher = None
2534 2534
2535 2535 cgstream = changegroup.makestream(
2536 2536 repo,
2537 2537 outgoing,
2538 2538 version,
2539 2539 source,
2540 2540 bundlecaps=bundlecaps,
2541 2541 matcher=matcher,
2542 2542 remote_sidedata=remote_sidedata,
2543 2543 )
2544 2544
2545 2545 part = bundler.newpart(b'changegroup', data=cgstream)
2546 2546 if cgversions:
2547 2547 part.addparam(b'version', version)
2548 2548
2549 2549 part.addparam(b'nbchanges', b'%d' % len(outgoing.missing), mandatory=False)
2550 2550
2551 2551 if scmutil.istreemanifest(repo):
2552 2552 part.addparam(b'treemanifest', b'1')
2553 2553
2554 2554 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2555 2555 part.addparam(b'exp-sidedata', b'1')
2556 2556 sidedata = bundle2.format_remote_wanted_sidedata(repo)
2557 2557 part.addparam(b'exp-wanted-sidedata', sidedata)
2558 2558
2559 2559 if (
2560 2560 kwargs.get('narrow', False)
2561 2561 and kwargs.get('narrow_acl', False)
2562 2562 and (include or exclude)
2563 2563 ):
2564 2564 # this is mandatory because otherwise ACL clients won't work
2565 2565 narrowspecpart = bundler.newpart(b'Narrow:responsespec')
2566 2566 narrowspecpart.data = b'%s\0%s' % (
2567 2567 b'\n'.join(include),
2568 2568 b'\n'.join(exclude),
2569 2569 )
2570 2570
2571 2571
2572 2572 @getbundle2partsgenerator(b'bookmarks')
2573 2573 def _getbundlebookmarkpart(
2574 2574 bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs
2575 2575 ):
2576 2576 """add a bookmark part to the requested bundle"""
2577 2577 if not kwargs.get('bookmarks', False):
2578 2578 return
2579 2579 if not b2caps or b'bookmarks' not in b2caps:
2580 2580 raise error.Abort(_(b'no common bookmarks exchange method'))
2581 2581 books = bookmod.listbinbookmarks(repo)
2582 2582 data = bookmod.binaryencode(repo, books)
2583 2583 if data:
2584 2584 bundler.newpart(b'bookmarks', data=data)
2585 2585
2586 2586
2587 2587 @getbundle2partsgenerator(b'listkeys')
2588 2588 def _getbundlelistkeysparts(
2589 2589 bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs
2590 2590 ):
2591 2591 """add parts containing listkeys namespaces to the requested bundle"""
2592 2592 listkeys = kwargs.get('listkeys', ())
2593 2593 for namespace in listkeys:
2594 2594 part = bundler.newpart(b'listkeys')
2595 2595 part.addparam(b'namespace', namespace)
2596 2596 keys = repo.listkeys(namespace).items()
2597 2597 part.data = pushkey.encodekeys(keys)
2598 2598
2599 2599
2600 2600 @getbundle2partsgenerator(b'obsmarkers')
2601 2601 def _getbundleobsmarkerpart(
2602 2602 bundler, repo, source, bundlecaps=None, b2caps=None, heads=None, **kwargs
2603 2603 ):
2604 2604 """add an obsolescence markers part to the requested bundle"""
2605 2605 if kwargs.get('obsmarkers', False):
2606 unfi_cl = repo.unfiltered().changelog
2606 2607 if heads is None:
2607 heads = repo.heads()
2608 subset = [c.node() for c in repo.set(b'::%ln', heads)]
2609 markers = repo.obsstore.relevantmarkers(subset)
2608 headrevs = repo.changelog.headrevs()
2609 else:
2610 get_rev = unfi_cl.index.get_rev
2611 headrevs = [get_rev(node) for node in heads]
2612 headrevs = [rev for rev in headrevs if rev is not None]
2613 revs = unfi_cl.ancestors(headrevs, inclusive=True)
2614 markers = repo.obsstore.relevantmarkers(revs=revs)
2610 2615 markers = obsutil.sortedmarkers(markers)
2611 2616 bundle2.buildobsmarkerspart(bundler, markers)
2612 2617
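
This hunk is the core of the change: the old code materialized `subset` as a full list of changectx nodes via the `::heads` revset, which gets expensive on large repositories, while the new code stays on integer revisions and hands a lazy ancestor set straight to `relevantmarkers(revs=...)`. A standalone sketch of the same computation (assuming a loaded `repo` object), useful for experimenting outside this function:

    def relevant_markers_for_heads(repo, heads=None):
        # mirrors the rev-based path added above; heads is a list of nodes
        unfi_cl = repo.unfiltered().changelog
        if heads is None:
            headrevs = repo.changelog.headrevs()
        else:
            get_rev = unfi_cl.index.get_rev
            headrevs = [r for r in map(get_rev, heads) if r is not None]
        revs = unfi_cl.ancestors(headrevs, inclusive=True)
        return repo.obsstore.relevantmarkers(revs=revs)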
2613 2618
2614 2619 @getbundle2partsgenerator(b'phases')
2615 2620 def _getbundlephasespart(
2616 2621 bundler, repo, source, bundlecaps=None, b2caps=None, heads=None, **kwargs
2617 2622 ):
2618 2623 """add phase heads part to the requested bundle"""
2619 2624 if kwargs.get('phases', False):
2620 2625 if not b2caps or b'heads' not in b2caps.get(b'phases'):
2621 2626 raise error.Abort(_(b'no common phases exchange method'))
2622 2627 if heads is None:
2623 2628 heads = repo.heads()
2624 2629
2625 2630 headsbyphase = collections.defaultdict(set)
2626 2631 if repo.publishing():
2627 2632 headsbyphase[phases.public] = heads
2628 2633 else:
2629 2634 # find the appropriate heads to move
2630 2635
2631 2636 phase = repo._phasecache.phase
2632 2637 node = repo.changelog.node
2633 2638 rev = repo.changelog.rev
2634 2639 for h in heads:
2635 2640 headsbyphase[phase(repo, rev(h))].add(h)
2636 2641 seenphases = list(headsbyphase.keys())
2637 2642
2638 2643         # We do not handle anything but public and draft phases for now
2639 2644 if seenphases:
2640 2645 assert max(seenphases) <= phases.draft
2641 2646
2642 2647 # if client is pulling non-public changesets, we need to find
2643 2648 # intermediate public heads.
2644 2649 draftheads = headsbyphase.get(phases.draft, set())
2645 2650 if draftheads:
2646 2651 publicheads = headsbyphase.get(phases.public, set())
2647 2652
2648 2653 revset = b'heads(only(%ln, %ln) and public())'
2649 2654 extraheads = repo.revs(revset, draftheads, publicheads)
2650 2655 for r in extraheads:
2651 2656 headsbyphase[phases.public].add(node(r))
2652 2657
2653 2658         # transform data into the format used by the encoding function
2654 2659 phasemapping = {
2655 2660 phase: sorted(headsbyphase[phase]) for phase in phases.allphases
2656 2661 }
2657 2662
2658 2663 # generate the actual part
2659 2664 phasedata = phases.binaryencode(phasemapping)
2660 2665 bundler.newpart(b'phase-heads', data=phasedata)
2661 2666
2662 2667
2663 2668 @getbundle2partsgenerator(b'hgtagsfnodes')
2664 2669 def _getbundletagsfnodes(
2665 2670 bundler,
2666 2671 repo,
2667 2672 source,
2668 2673 bundlecaps=None,
2669 2674 b2caps=None,
2670 2675 heads=None,
2671 2676 common=None,
2672 2677 **kwargs
2673 2678 ):
2674 2679 """Transfer the .hgtags filenodes mapping.
2675 2680
2676 2681 Only values for heads in this bundle will be transferred.
2677 2682
2678 2683 The part data consists of pairs of 20 byte changeset node and .hgtags
2679 2684 filenodes raw values.
2680 2685 """
2681 2686 # Don't send unless:
2682 2687     # - changesets are being exchanged,
2683 2688 # - the client supports it.
2684 2689 if not b2caps or not (kwargs.get('cg', True) and b'hgtagsfnodes' in b2caps):
2685 2690 return
2686 2691
2687 2692 outgoing = _computeoutgoing(repo, heads, common)
2688 2693 bundle2.addparttagsfnodescache(repo, bundler, outgoing)
2689 2694
2690 2695
2691 2696 @getbundle2partsgenerator(b'cache:rev-branch-cache')
2692 2697 def _getbundlerevbranchcache(
2693 2698 bundler,
2694 2699 repo,
2695 2700 source,
2696 2701 bundlecaps=None,
2697 2702 b2caps=None,
2698 2703 heads=None,
2699 2704 common=None,
2700 2705 **kwargs
2701 2706 ):
2702 2707 """Transfer the rev-branch-cache mapping
2703 2708
2704 2709 The payload is a series of data related to each branch
2705 2710
2706 2711 1) branch name length
2707 2712 2) number of open heads
2708 2713 3) number of closed heads
2709 2714 4) open heads nodes
2710 2715 5) closed heads nodes
2711 2716 """
2712 2717 # Don't send unless:
2713 2718     # - changesets are being exchanged,
2714 2719 # - the client supports it.
2715 2720 # - narrow bundle isn't in play (not currently compatible).
2716 2721 if (
2717 2722 not kwargs.get('cg', True)
2718 2723 or not b2caps
2719 2724 or b'rev-branch-cache' not in b2caps
2720 2725 or kwargs.get('narrow', False)
2721 2726 or repo.ui.has_section(_NARROWACL_SECTION)
2722 2727 ):
2723 2728 return
2724 2729
2725 2730 outgoing = _computeoutgoing(repo, heads, common)
2726 2731 bundle2.addpartrevbranchcache(repo, bundler, outgoing)
2727 2732
2728 2733
2729 2734 def check_heads(repo, their_heads, context):
2730 2735 """check if the heads of a repo have been modified
2731 2736
2732 2737 Used by peer for unbundling.
2733 2738 """
2734 2739 heads = repo.heads()
2735 2740 heads_hash = hashutil.sha1(b''.join(sorted(heads))).digest()
2736 2741 if not (
2737 2742 their_heads == [b'force']
2738 2743 or their_heads == heads
2739 2744 or their_heads == [b'hashed', heads_hash]
2740 2745 ):
2741 2746 # someone else committed/pushed/unbundled while we
2742 2747 # were transferring data
2743 2748 raise error.PushRaced(
2744 2749 b'repository changed while %s - please try again' % context
2745 2750 )
2746 2751
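
The `[b'hashed', heads_hash]` form accepted above lets a client assert the expected repository state without shipping every head node. A minimal sketch of the client-side counterpart (assuming `heads` is the list of 20-byte head nodes observed during discovery):

    from mercurial.utils import hashutil

    def hashed_heads_arg(heads):
        # must match check_heads(): sha1 over the sorted, concatenated nodes
        return [b'hashed', hashutil.sha1(b''.join(sorted(heads))).digest()]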
2747 2752
2748 2753 def unbundle(repo, cg, heads, source, url):
2749 2754 """Apply a bundle to a repo.
2750 2755
2751 2756     this function makes sure the repo is locked during the application and has a
2752 2757     mechanism to check that no push race occurred between the creation of the
2753 2758     bundle and its application.
2754 2759
2755 2760     If the push was raced, a PushRaced exception is raised."""
2756 2761 r = 0
2757 2762 # need a transaction when processing a bundle2 stream
2758 2763 # [wlock, lock, tr] - needs to be an array so nested functions can modify it
2759 2764 lockandtr = [None, None, None]
2760 2765 recordout = None
2761 2766 # quick fix for output mismatch with bundle2 in 3.4
2762 2767 captureoutput = repo.ui.configbool(
2763 2768 b'experimental', b'bundle2-output-capture'
2764 2769 )
2765 2770 if url.startswith(b'remote:http:') or url.startswith(b'remote:https:'):
2766 2771 captureoutput = True
2767 2772 try:
2768 2773 # note: outside bundle1, 'heads' is expected to be empty and this
2769 2774         # 'check_heads' call will be a no-op
2770 2775 check_heads(repo, heads, b'uploading changes')
2771 2776 # push can proceed
2772 2777 if not isinstance(cg, bundle2.unbundle20):
2773 2778 # legacy case: bundle1 (changegroup 01)
2774 2779 txnname = b"\n".join([source, urlutil.hidepassword(url)])
2775 2780 with repo.lock(), repo.transaction(txnname) as tr:
2776 2781 op = bundle2.applybundle(repo, cg, tr, source, url)
2777 2782 r = bundle2.combinechangegroupresults(op)
2778 2783 else:
2779 2784 r = None
2780 2785 try:
2781 2786
2782 2787 def gettransaction():
2783 2788 if not lockandtr[2]:
2784 2789 if not bookmod.bookmarksinstore(repo):
2785 2790 lockandtr[0] = repo.wlock()
2786 2791 lockandtr[1] = repo.lock()
2787 2792 lockandtr[2] = repo.transaction(source)
2788 2793 lockandtr[2].hookargs[b'source'] = source
2789 2794 lockandtr[2].hookargs[b'url'] = url
2790 2795 lockandtr[2].hookargs[b'bundle2'] = b'1'
2791 2796 return lockandtr[2]
2792 2797
2793 2798 # Do greedy locking by default until we're satisfied with lazy
2794 2799 # locking.
2795 2800 if not repo.ui.configbool(
2796 2801 b'experimental', b'bundle2lazylocking'
2797 2802 ):
2798 2803 gettransaction()
2799 2804
2800 2805 op = bundle2.bundleoperation(
2801 2806 repo,
2802 2807 gettransaction,
2803 2808 captureoutput=captureoutput,
2804 2809 source=b'push',
2805 2810 )
2806 2811 try:
2807 2812 op = bundle2.processbundle(repo, cg, op=op)
2808 2813 finally:
2809 2814 r = op.reply
2810 2815 if captureoutput and r is not None:
2811 2816 repo.ui.pushbuffer(error=True, subproc=True)
2812 2817
2813 2818 def recordout(output):
2814 2819 r.newpart(b'output', data=output, mandatory=False)
2815 2820
2816 2821 if lockandtr[2] is not None:
2817 2822 lockandtr[2].close()
2818 2823 except BaseException as exc:
2819 2824 exc.duringunbundle2 = True
2820 2825 if captureoutput and r is not None:
2821 2826 parts = exc._bundle2salvagedoutput = r.salvageoutput()
2822 2827
2823 2828 def recordout(output):
2824 2829 part = bundle2.bundlepart(
2825 2830 b'output', data=output, mandatory=False
2826 2831 )
2827 2832 parts.append(part)
2828 2833
2829 2834 raise
2830 2835 finally:
2831 2836 lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
2832 2837 if recordout is not None:
2833 2838 recordout(repo.ui.popbuffer())
2834 2839 return r
2835 2840
2836 2841
2837 2842 def _maybeapplyclonebundle(pullop):
2838 2843 """Apply a clone bundle from a remote, if possible."""
2839 2844
2840 2845 repo = pullop.repo
2841 2846 remote = pullop.remote
2842 2847
2843 2848 if not repo.ui.configbool(b'ui', b'clonebundles'):
2844 2849 return
2845 2850
2846 2851 # Only run if local repo is empty.
2847 2852 if len(repo):
2848 2853 return
2849 2854
2850 2855 if pullop.heads:
2851 2856 return
2852 2857
2853 2858 if not remote.capable(b'clonebundles'):
2854 2859 return
2855 2860
2856 2861 with remote.commandexecutor() as e:
2857 2862 res = e.callcommand(b'clonebundles', {}).result()
2858 2863
2859 2864 # If we call the wire protocol command, that's good enough to record the
2860 2865 # attempt.
2861 2866 pullop.clonebundleattempted = True
2862 2867
2863 2868 entries = bundlecaches.parseclonebundlesmanifest(repo, res)
2864 2869 if not entries:
2865 2870 repo.ui.note(
2866 2871 _(
2867 2872 b'no clone bundles available on remote; '
2868 2873 b'falling back to regular clone\n'
2869 2874 )
2870 2875 )
2871 2876 return
2872 2877
2873 2878 entries = bundlecaches.filterclonebundleentries(
2874 2879 repo, entries, streamclonerequested=pullop.streamclonerequested
2875 2880 )
2876 2881
2877 2882 if not entries:
2878 2883 # There is a thundering herd concern here. However, if a server
2879 2884 # operator doesn't advertise bundles appropriate for its clients,
2880 2885 # they deserve what's coming. Furthermore, from a client's
2881 2886 # perspective, no automatic fallback would mean not being able to
2882 2887 # clone!
2883 2888 repo.ui.warn(
2884 2889 _(
2885 2890 b'no compatible clone bundles available on server; '
2886 2891 b'falling back to regular clone\n'
2887 2892 )
2888 2893 )
2889 2894 repo.ui.warn(
2890 2895 _(b'(you may want to report this to the server operator)\n')
2891 2896 )
2892 2897 return
2893 2898
2894 2899 entries = bundlecaches.sortclonebundleentries(repo.ui, entries)
2895 2900
2896 2901 url = entries[0][b'URL']
2897 2902 repo.ui.status(_(b'applying clone bundle from %s\n') % url)
2898 2903 if trypullbundlefromurl(repo.ui, repo, url, remote):
2899 2904 repo.ui.status(_(b'finished applying clone bundle\n'))
2900 2905 # Bundle failed.
2901 2906 #
2902 2907 # We abort by default to avoid the thundering herd of
2903 2908 # clients flooding a server that was expecting expensive
2904 2909 # clone load to be offloaded.
2905 2910 elif repo.ui.configbool(b'ui', b'clonebundlefallback'):
2906 2911 repo.ui.warn(_(b'falling back to normal clone\n'))
2907 2912 else:
2908 2913 raise error.Abort(
2909 2914 _(b'error applying bundle'),
2910 2915 hint=_(
2911 2916 b'if this error persists, consider contacting '
2912 2917 b'the server operator or disable clone '
2913 2918 b'bundles via '
2914 2919 b'"--config ui.clonebundles=false"'
2915 2920 ),
2916 2921 )
2917 2922
2918 2923
2919 2924 def inline_clone_bundle_open(ui, url, peer):
2920 2925 if not peer:
2921 2926         raise error.Abort(_(b'no remote repository supplied for %s') % url)
2922 2927 clonebundleid = url[len(bundlecaches.CLONEBUNDLESCHEME) :]
2923 2928 peerclonebundle = peer.get_cached_bundle_inline(clonebundleid)
2924 2929 return util.chunkbuffer(peerclonebundle)
2925 2930
2926 2931
2927 2932 def trypullbundlefromurl(ui, repo, url, peer):
2928 2933 """Attempt to apply a bundle from a URL."""
2929 2934 with repo.lock(), repo.transaction(b'bundleurl') as tr:
2930 2935 try:
2931 2936 if url.startswith(bundlecaches.CLONEBUNDLESCHEME):
2932 2937 fh = inline_clone_bundle_open(ui, url, peer)
2933 2938 else:
2934 2939 fh = urlmod.open(ui, url)
2935 2940 cg = readbundle(ui, fh, b'stream')
2936 2941
2937 2942 if isinstance(cg, streamclone.streamcloneapplier):
2938 2943 cg.apply(repo)
2939 2944 else:
2940 2945 bundle2.applybundle(repo, cg, tr, b'clonebundles', url)
2941 2946 return True
2942 2947 except urlerr.httperror as e:
2943 2948 ui.warn(
2944 2949 _(b'HTTP error fetching bundle: %s\n')
2945 2950 % stringutil.forcebytestr(e)
2946 2951 )
2947 2952 except urlerr.urlerror as e:
2948 2953 ui.warn(
2949 2954 _(b'error fetching bundle: %s\n')
2950 2955 % stringutil.forcebytestr(e.reason)
2951 2956 )
2952 2957
2953 2958 return False
@@ -1,1155 +1,1176
1 1 # obsolete.py - obsolete markers handling
2 2 #
3 3 # Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
4 4 # Logilab SA <contact@logilab.fr>
5 5 #
6 6 # This software may be used and distributed according to the terms of the
7 7 # GNU General Public License version 2 or any later version.
8 8
9 9 """Obsolete marker handling
10 10
11 11 An obsolete marker maps an old changeset to a list of new
12 12 changesets. If the list of new changesets is empty, the old changeset
13 13 is said to be "killed". Otherwise, the old changeset is being
14 14 "replaced" by the new changesets.
15 15
16 16 Obsolete markers can be used to record and distribute changeset graph
17 17 transformations performed by history rewrite operations, and help
18 18 building new tools to reconcile conflicting rewrite actions. To
19 19 facilitate conflict resolution, markers include various annotations
20 20 besides old and news changeset identifiers, such as creation date or
21 21 author name.
22 22
23 23 The old obsoleted changeset is called a "predecessor" and possible
24 24 replacements are called "successors". Markers that used changeset X as
25 25 a predecessor are called "successor markers of X" because they hold
26 26 information about the successors of X. Markers that use changeset Y as
27 27 a successor are called "predecessor markers of Y" because they hold
28 28 information about the predecessors of Y.
29 29
30 30 Examples:
31 31
32 32 - When changeset A is replaced by changeset A', one marker is stored:
33 33
34 34 (A, (A',))
35 35
36 36 - When changesets A and B are folded into a new changeset C, two markers are
37 37 stored:
38 38
39 39 (A, (C,)) and (B, (C,))
40 40
41 41 - When changeset A is simply "pruned" from the graph, a marker is created:
42 42
43 43 (A, ())
44 44
45 45 - When changeset A is split into B and C, a single marker is used:
46 46
47 47 (A, (B, C))
48 48
49 49 We use a single marker to distinguish the "split" case from the "divergence"
50 50 case. If two independent operations rewrite the same changeset A into A' and
51 51 A'', we have an error case: divergent rewriting. We can detect it because
52 52 two markers will be created independently:
53 53
54 54 (A, (B,)) and (A, (C,))
55 55
56 56 Format
57 57 ------
58 58
59 59 Markers are stored in an append-only file stored in
60 60 '.hg/store/obsstore'.
61 61
62 62 The file starts with a version header:
63 63
64 64 - 1 unsigned byte: version number, starting at zero.
65 65
66 66 The header is followed by the markers. The marker format depends on the version. See
67 67 comment associated with each format for details.
68 68
69 69 """
70 70
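To make the abstract (predecessor, successors) notation above concrete, this is roughly what a single marker looks like in memory throughout this module (field order matches `obsstore.fields` further down; all values below are made up):

    marker = (
        b'\x11' * 20,            # prec: obsoleted changeset node
        (b'\x22' * 20,),         # succs: successor nodes, () means "pruned"
        0,                       # flag: bit field (see the flag aliases below)
        ((b'user', b'alice'),),  # meta: sorted (key, value) byte-string pairs
        (0.0, 0),                # date: (seconds since epoch, tz offset)
        None,                    # parents of the predecessor, or None if unrecorded
    )
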
71 71 import binascii
72 72 import struct
73 73 import weakref
74 74
75 75 from .i18n import _
76 76 from .node import (
77 77 bin,
78 78 hex,
79 79 )
80 80 from . import (
81 81 encoding,
82 82 error,
83 83 obsutil,
84 84 phases,
85 85 policy,
86 86 pycompat,
87 87 util,
88 88 )
89 89 from .utils import (
90 90 dateutil,
91 91 hashutil,
92 92 )
93 93
94 94 parsers = policy.importmod('parsers')
95 95
96 96 _pack = struct.pack
97 97 _unpack = struct.unpack
98 98 _calcsize = struct.calcsize
99 99 propertycache = util.propertycache
100 100
101 101 # Options for obsolescence
102 102 createmarkersopt = b'createmarkers'
103 103 allowunstableopt = b'allowunstable'
104 104 allowdivergenceopt = b'allowdivergence'
105 105 exchangeopt = b'exchange'
106 106
107 107
108 108 def _getoptionvalue(repo, option):
109 109 """Returns True if the given repository has the given obsolete option
110 110 enabled.
111 111 """
112 112 configkey = b'evolution.%s' % option
113 113 newconfig = repo.ui.configbool(b'experimental', configkey)
114 114
115 115 # Return the value only if defined
116 116 if newconfig is not None:
117 117 return newconfig
118 118
119 119 # Fallback on generic option
120 120 try:
121 121 return repo.ui.configbool(b'experimental', b'evolution')
122 122 except (error.ConfigError, AttributeError):
123 123         # Fall back on old-fashioned config
124 124 # inconsistent config: experimental.evolution
125 125 result = set(repo.ui.configlist(b'experimental', b'evolution'))
126 126
127 127 if b'all' in result:
128 128 return True
129 129
130 130 # Temporary hack for next check
131 131 newconfig = repo.ui.config(b'experimental', b'evolution.createmarkers')
132 132 if newconfig:
133 133 result.add(b'createmarkers')
134 134
135 135 return option in result
136 136
137 137
138 138 def getoptions(repo):
139 139 """Returns dicts showing state of obsolescence features."""
140 140
141 141 createmarkersvalue = _getoptionvalue(repo, createmarkersopt)
142 142 if createmarkersvalue:
143 143 unstablevalue = _getoptionvalue(repo, allowunstableopt)
144 144 divergencevalue = _getoptionvalue(repo, allowdivergenceopt)
145 145 exchangevalue = _getoptionvalue(repo, exchangeopt)
146 146 else:
147 147 # if we cannot create obsolescence markers, we shouldn't exchange them
148 148 # or perform operations that lead to instability or divergence
149 149 unstablevalue = False
150 150 divergencevalue = False
151 151 exchangevalue = False
152 152
153 153 return {
154 154 createmarkersopt: createmarkersvalue,
155 155 allowunstableopt: unstablevalue,
156 156 allowdivergenceopt: divergencevalue,
157 157 exchangeopt: exchangevalue,
158 158 }
159 159
160 160
161 161 def isenabled(repo, option):
162 162 """Returns True if the given repository has the given obsolete option
163 163 enabled.
164 164 """
165 165 return getoptions(repo)[option]
166 166
167 167
168 168 # Creating aliases for marker flags because evolve extension looks for
169 169 # bumpedfix in obsolete.py
170 170 bumpedfix = obsutil.bumpedfix
171 171 usingsha256 = obsutil.usingsha256
172 172
173 173 ## Parsing and writing of version "0"
174 174 #
175 175 # The header is followed by the markers. Each marker is made of:
176 176 #
177 177 # - 1 uint8 : number of new changesets "N", can be zero.
178 178 #
179 179 # - 1 uint32: metadata size "M" in bytes.
180 180 #
181 181 # - 1 byte: a bit field. It is reserved for flags used in common
182 182 # obsolete marker operations, to avoid repeated decoding of metadata
183 183 # entries.
184 184 #
185 185 # - 20 bytes: obsoleted changeset identifier.
186 186 #
187 187 # - N*20 bytes: new changesets identifiers.
188 188 #
189 189 # - M bytes: metadata as a sequence of nul-terminated strings. Each
190 190 # string contains a key and a value, separated by a colon ':', without
191 191 # additional encoding. Keys cannot contain '\0' or ':' and values
192 192 # cannot contain '\0'.
193 193 _fm0version = 0
194 194 _fm0fixed = b'>BIB20s'
195 195 _fm0node = b'20s'
196 196 _fm0fsize = _calcsize(_fm0fixed)
197 197 _fm0fnodesize = _calcsize(_fm0node)
198 198
199 199
200 200 def _fm0readmarkers(data, off, stop):
201 201 # Loop on markers
202 202 while off < stop:
203 203 # read fixed part
204 204 cur = data[off : off + _fm0fsize]
205 205 off += _fm0fsize
206 206 numsuc, mdsize, flags, pre = _unpack(_fm0fixed, cur)
207 207 # read replacement
208 208 sucs = ()
209 209 if numsuc:
210 210 s = _fm0fnodesize * numsuc
211 211 cur = data[off : off + s]
212 212 sucs = _unpack(_fm0node * numsuc, cur)
213 213 off += s
214 214 # read metadata
215 215 # (metadata will be decoded on demand)
216 216 metadata = data[off : off + mdsize]
217 217 if len(metadata) != mdsize:
218 218 raise error.Abort(
219 219 _(
220 220 b'parsing obsolete marker: metadata is too '
221 221 b'short, %d bytes expected, got %d'
222 222 )
223 223 % (mdsize, len(metadata))
224 224 )
225 225 off += mdsize
226 226 metadata = _fm0decodemeta(metadata)
227 227 try:
228 228 when, offset = metadata.pop(b'date', b'0 0').split(b' ')
229 229 date = float(when), int(offset)
230 230 except ValueError:
231 231 date = (0.0, 0)
232 232 parents = None
233 233 if b'p2' in metadata:
234 234 parents = (metadata.pop(b'p1', None), metadata.pop(b'p2', None))
235 235 elif b'p1' in metadata:
236 236 parents = (metadata.pop(b'p1', None),)
237 237 elif b'p0' in metadata:
238 238 parents = ()
239 239 if parents is not None:
240 240 try:
241 241 parents = tuple(bin(p) for p in parents)
242 242 # if parent content is not a nodeid, drop the data
243 243 for p in parents:
244 244 if len(p) != 20:
245 245 parents = None
246 246 break
247 247 except binascii.Error:
248 248 # if content cannot be translated to nodeid drop the data.
249 249 parents = None
250 250
251 251 metadata = tuple(sorted(metadata.items()))
252 252
253 253 yield (pre, sucs, flags, metadata, date, parents)
254 254
255 255
256 256 def _fm0encodeonemarker(marker):
257 257 pre, sucs, flags, metadata, date, parents = marker
258 258 if flags & usingsha256:
259 259 raise error.Abort(_(b'cannot handle sha256 with old obsstore format'))
260 260 metadata = dict(metadata)
261 261 time, tz = date
262 262 metadata[b'date'] = b'%r %i' % (time, tz)
263 263 if parents is not None:
264 264 if not parents:
265 265 # mark that we explicitly recorded no parents
266 266 metadata[b'p0'] = b''
267 267 for i, p in enumerate(parents, 1):
268 268 metadata[b'p%i' % i] = hex(p)
269 269 metadata = _fm0encodemeta(metadata)
270 270 numsuc = len(sucs)
271 271 format = _fm0fixed + (_fm0node * numsuc)
272 272 data = [numsuc, len(metadata), flags, pre]
273 273 data.extend(sucs)
274 274 return _pack(format, *data) + metadata
275 275
276 276
277 277 def _fm0encodemeta(meta):
278 278 """Return encoded metadata string to string mapping.
279 279
280 280     Assume no ':' in key and no '\0' in either key or value."""
281 281 for key, value in meta.items():
282 282 if b':' in key or b'\0' in key:
283 283             raise ValueError(b"':' and '\0' are forbidden in metadata key")
284 284 if b'\0' in value:
285 285             raise ValueError(b"'\0' is forbidden in metadata value")
286 286 return b'\0'.join([b'%s:%s' % (k, meta[k]) for k in sorted(meta)])
287 287
288 288
289 289 def _fm0decodemeta(data):
290 290 """Return string to string dictionary from encoded version."""
291 291 d = {}
292 292 for l in data.split(b'\0'):
293 293 if l:
294 294 key, value = l.split(b':', 1)
295 295 d[key] = value
296 296 return d
297 297
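
A quick round-trip through the two helpers above shows the version-0 metadata encoding in action (keys and values are arbitrary examples):

    from mercurial import obsolete

    meta = {b'user': b'alice', b'date': b'0 0'}
    blob = obsolete._fm0encodemeta(meta)   # b'date:0 0\x00user:alice'
    assert obsolete._fm0decodemeta(blob) == meta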
298 298
299 299 ## Parsing and writing of version "1"
300 300 #
301 301 # The header is followed by the markers. Each marker is made of:
302 302 #
303 303 # - uint32: total size of the marker (including this field)
304 304 #
305 305 # - float64: date in seconds since epoch
306 306 #
307 307 # - int16: timezone offset in minutes
308 308 #
309 309 # - uint16: a bit field. It is reserved for flags used in common
310 310 # obsolete marker operations, to avoid repeated decoding of metadata
311 311 # entries.
312 312 #
313 313 # - uint8: number of successors "N", can be zero.
314 314 #
315 315 # - uint8: number of parents "P", can be zero.
316 316 #
317 317 # 0: parents data stored but no parent,
318 318 # 1: one parent stored,
319 319 # 2: two parents stored,
320 320 # 3: no parent data stored
321 321 #
322 322 # - uint8: number of metadata entries M
323 323 #
324 324 # - 20 or 32 bytes: predecessor changeset identifier.
325 325 #
326 326 # - N*(20 or 32) bytes: successors changesets identifiers.
327 327 #
328 328 # - P*(20 or 32) bytes: parents of the predecessors changesets.
329 329 #
330 330 # - M*(uint8, uint8): size of all metadata entries (key and value)
331 331 #
332 332 # - remaining bytes: the metadata, each (key, value) pair after the other.
333 333 _fm1version = 1
334 334 _fm1fixed = b'>IdhHBBB'
335 335 _fm1nodesha1 = b'20s'
336 336 _fm1nodesha256 = b'32s'
337 337 _fm1nodesha1size = _calcsize(_fm1nodesha1)
338 338 _fm1nodesha256size = _calcsize(_fm1nodesha256)
339 339 _fm1fsize = _calcsize(_fm1fixed)
340 340 _fm1parentnone = 3
341 341 _fm1metapair = b'BB'
342 342 _fm1metapairsize = _calcsize(_fm1metapair)
343 343
344 344
345 345 def _fm1purereadmarkers(data, off, stop):
346 346 # make some global constants local for performance
347 347 noneflag = _fm1parentnone
348 348 sha2flag = usingsha256
349 349 sha1size = _fm1nodesha1size
350 350 sha2size = _fm1nodesha256size
351 351 sha1fmt = _fm1nodesha1
352 352 sha2fmt = _fm1nodesha256
353 353 metasize = _fm1metapairsize
354 354 metafmt = _fm1metapair
355 355 fsize = _fm1fsize
356 356 unpack = _unpack
357 357
358 358 # Loop on markers
359 359 ufixed = struct.Struct(_fm1fixed).unpack
360 360
361 361 while off < stop:
362 362 # read fixed part
363 363 o1 = off + fsize
364 364 t, secs, tz, flags, numsuc, numpar, nummeta = ufixed(data[off:o1])
365 365
366 366 if flags & sha2flag:
367 367 nodefmt = sha2fmt
368 368 nodesize = sha2size
369 369 else:
370 370 nodefmt = sha1fmt
371 371 nodesize = sha1size
372 372
373 373 (prec,) = unpack(nodefmt, data[o1 : o1 + nodesize])
374 374 o1 += nodesize
375 375
376 376 # read 0 or more successors
377 377 if numsuc == 1:
378 378 o2 = o1 + nodesize
379 379 sucs = (data[o1:o2],)
380 380 else:
381 381 o2 = o1 + nodesize * numsuc
382 382 sucs = unpack(nodefmt * numsuc, data[o1:o2])
383 383
384 384 # read parents
385 385 if numpar == noneflag:
386 386 o3 = o2
387 387 parents = None
388 388 elif numpar == 1:
389 389 o3 = o2 + nodesize
390 390 parents = (data[o2:o3],)
391 391 else:
392 392 o3 = o2 + nodesize * numpar
393 393 parents = unpack(nodefmt * numpar, data[o2:o3])
394 394
395 395 # read metadata
396 396 off = o3 + metasize * nummeta
397 397 metapairsize = unpack(b'>' + (metafmt * nummeta), data[o3:off])
398 398 metadata = []
399 399 for idx in range(0, len(metapairsize), 2):
400 400 o1 = off + metapairsize[idx]
401 401 o2 = o1 + metapairsize[idx + 1]
402 402 metadata.append((data[off:o1], data[o1:o2]))
403 403 off = o2
404 404
405 405 yield (prec, sucs, flags, tuple(metadata), (secs, tz * 60), parents)
406 406
407 407
408 408 def _fm1encodeonemarker(marker):
409 409 pre, sucs, flags, metadata, date, parents = marker
410 410 # determine node size
411 411 _fm1node = _fm1nodesha1
412 412 if flags & usingsha256:
413 413 _fm1node = _fm1nodesha256
414 414 numsuc = len(sucs)
415 415 numextranodes = 1 + numsuc
416 416 if parents is None:
417 417 numpar = _fm1parentnone
418 418 else:
419 419 numpar = len(parents)
420 420 numextranodes += numpar
421 421 formatnodes = _fm1node * numextranodes
422 422 formatmeta = _fm1metapair * len(metadata)
423 423 format = _fm1fixed + formatnodes + formatmeta
424 424 # tz is stored in minutes so we divide by 60
425 425 tz = date[1] // 60
426 426 data = [None, date[0], tz, flags, numsuc, numpar, len(metadata), pre]
427 427 data.extend(sucs)
428 428 if parents is not None:
429 429 data.extend(parents)
430 430 totalsize = _calcsize(format)
431 431 for key, value in metadata:
432 432 lk = len(key)
433 433 lv = len(value)
434 434 if lk > 255:
435 435 msg = (
436 436 b'obsstore metadata key cannot be longer than 255 bytes'
437 437 b' (key "%s" is %u bytes)'
438 438 ) % (key, lk)
439 439 raise error.ProgrammingError(msg)
440 440 if lv > 255:
441 441 msg = (
442 442 b'obsstore metadata value cannot be longer than 255 bytes'
443 443 b' (value "%s" for key "%s" is %u bytes)'
444 444 ) % (value, key, lv)
445 445 raise error.ProgrammingError(msg)
446 446 data.append(lk)
447 447 data.append(lv)
448 448 totalsize += lk + lv
449 449 data[0] = totalsize
450 450 data = [_pack(format, *data)]
451 451 for key, value in metadata:
452 452 data.append(key)
453 453 data.append(value)
454 454 return b''.join(data)
455 455
456 456
457 457 def _fm1readmarkers(data, off, stop):
458 458 native = getattr(parsers, 'fm1readmarkers', None)
459 459 if not native:
460 460 return _fm1purereadmarkers(data, off, stop)
461 461 return native(data, off, stop)
462 462
463 463
464 464 # mapping to read/write various marker formats
465 465 # <version> -> (decoder, encoder)
466 466 formats = {
467 467 _fm0version: (_fm0readmarkers, _fm0encodeonemarker),
468 468 _fm1version: (_fm1readmarkers, _fm1encodeonemarker),
469 469 }
470 470
471 471
472 472 def _readmarkerversion(data):
473 473 return _unpack(b'>B', data[0:1])[0]
474 474
475 475
476 476 @util.nogc
477 477 def _readmarkers(data, off=None, stop=None):
478 478 """Read and enumerate markers from raw data"""
479 479 diskversion = _readmarkerversion(data)
480 480 if not off:
481 481 off = 1 # skip 1 byte version number
482 482 if stop is None:
483 483 stop = len(data)
484 484 if diskversion not in formats:
485 485 msg = _(b'parsing obsolete marker: unknown version %r') % diskversion
486 486 raise error.UnknownVersion(msg, version=diskversion)
487 487 return diskversion, formats[diskversion][0](data, off, stop)
488 488
489 489
490 490 def encodeheader(version=_fm0version):
491 491 return _pack(b'>B', version)
492 492
493 493
494 494 def encodemarkers(markers, addheader=False, version=_fm0version):
495 495 # Kept separate from flushmarkers(), it will be reused for
496 496 # markers exchange.
497 497 encodeone = formats[version][1]
498 498 if addheader:
499 499 yield encodeheader(version)
500 500 for marker in markers:
501 501 yield encodeone(marker)
502 502
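
Putting the helpers above together, a marker can be encoded with a version header and parsed back. A minimal round-trip sketch (the marker tuple itself is made up):

    from mercurial import obsolete

    marker = (b'\x11' * 20, (b'\x22' * 20,), 0, (), (0.0, 0), None)
    data = b''.join(
        obsolete.encodemarkers([marker], addheader=True, version=obsolete._fm1version)
    )
    version, read = obsolete._readmarkers(data)
    assert version == obsolete._fm1version
    assert list(read) == [marker]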
503 503
504 504 @util.nogc
505 505 def _addsuccessors(successors, markers):
506 506 for mark in markers:
507 507 successors.setdefault(mark[0], set()).add(mark)
508 508
509 509
510 510 @util.nogc
511 511 def _addpredecessors(predecessors, markers):
512 512 for mark in markers:
513 513 for suc in mark[1]:
514 514 predecessors.setdefault(suc, set()).add(mark)
515 515
516 516
517 517 @util.nogc
518 518 def _addchildren(children, markers):
519 519 for mark in markers:
520 520 parents = mark[5]
521 521 if parents is not None:
522 522 for p in parents:
523 523 children.setdefault(p, set()).add(mark)
524 524
525 525
526 526 def _checkinvalidmarkers(repo, markers):
527 527     """search for markers with invalid data and raise an error if needed
528 528
529 529     Exists as a separate function to allow the evolve extension to implement
530 530     more subtle handling.
531 531 """
532 532 for mark in markers:
533 533 if repo.nullid in mark[1]:
534 534 raise error.Abort(
535 535 _(
536 536 b'bad obsolescence marker detected: '
537 537 b'invalid successors nullid'
538 538 )
539 539 )
540 540
541 541
542 542 class obsstore:
543 543 """Store obsolete markers
544 544
545 545     Markers can be accessed with three mappings:
546 546 - predecessors[x] -> set(markers on predecessors edges of x)
547 547 - successors[x] -> set(markers on successors edges of x)
548 548     - children[x] -> set(markers on predecessors edges of children(x))
549 549 """
550 550
551 551 fields = (b'prec', b'succs', b'flag', b'meta', b'date', b'parents')
552 552 # prec: nodeid, predecessors changesets
553 553 # succs: tuple of nodeid, successor changesets (0-N length)
554 554 # flag: integer, flag field carrying modifier for the markers (see doc)
555 555 # meta: binary blob in UTF-8, encoded metadata dictionary
556 556 # date: (float, int) tuple, date of marker creation
557 557 # parents: (tuple of nodeid) or None, parents of predecessors
558 558 # None is used when no data has been recorded
559 559
560 560 def __init__(self, repo, svfs, defaultformat=_fm1version, readonly=False):
561 561         # caches for various obsolescence related data
562 562 self.caches = {}
563 563 self.svfs = svfs
564 564 self._repo = weakref.ref(repo)
565 565 self._defaultformat = defaultformat
566 566 self._readonly = readonly
567 567
568 568 @property
569 569 def repo(self):
570 570 r = self._repo()
571 571 if r is None:
572 572 msg = "using the obsstore of a deallocated repo"
573 573 raise error.ProgrammingError(msg)
574 574 return r
575 575
576 576 def __iter__(self):
577 577 return iter(self._all)
578 578
579 579 def __len__(self):
580 580 return len(self._all)
581 581
582 582 def __nonzero__(self):
583 583 from . import statichttprepo
584 584
585 585 if isinstance(self.repo, statichttprepo.statichttprepository):
586 586 # If repo is accessed via static HTTP, then we can't use os.stat()
587 587 # to just peek at the file size.
588 588 return len(self._data) > 1
589 589 if not self._cached('_all'):
590 590 try:
591 591 return self.svfs.stat(b'obsstore').st_size > 1
592 592 except FileNotFoundError:
593 593 # just build an empty _all list if no obsstore exists, which
594 594 # avoids further stat() syscalls
595 595 pass
596 596 return bool(self._all)
597 597
598 598 __bool__ = __nonzero__
599 599
600 600 @property
601 601 def readonly(self):
602 602 """True if marker creation is disabled
603 603
604 604 Remove me in the future when obsolete marker is always on."""
605 605 return self._readonly
606 606
607 607 def create(
608 608 self,
609 609 transaction,
610 610 prec,
611 611 succs=(),
612 612 flag=0,
613 613 parents=None,
614 614 date=None,
615 615 metadata=None,
616 616 ui=None,
617 617 ):
618 618 """obsolete: add a new obsolete marker
619 619
620 620 * ensure it is hashable
621 621 * check mandatory metadata
622 622 * encode metadata
623 623 
624 624 If you are a human writing code that creates markers, you want to use the
625 625 `createmarkers` function in this module instead.
626 626 
627 627 return True if a new marker has been added, False if the marker
628 628 already existed (no op).
629 629 """
630 630 flag = int(flag)
631 631 if metadata is None:
632 632 metadata = {}
633 633 if date is None:
634 634 if b'date' in metadata:
635 635 # as a courtesy for out-of-tree extensions
636 636 date = dateutil.parsedate(metadata.pop(b'date'))
637 637 elif ui is not None:
638 638 date = ui.configdate(b'devel', b'default-date')
639 639 if date is None:
640 640 date = dateutil.makedate()
641 641 else:
642 642 date = dateutil.makedate()
643 643 if flag & usingsha256:
644 644 if len(prec) != 32:
645 645 raise ValueError(prec)
646 646 for succ in succs:
647 647 if len(succ) != 32:
648 648 raise ValueError(succ)
649 649 else:
650 650 if len(prec) != 20:
651 651 raise ValueError(prec)
652 652 for succ in succs:
653 653 if len(succ) != 20:
654 654 raise ValueError(succ)
655 655 if prec in succs:
656 656 raise ValueError('in-marker cycle with %s' % prec.hex())
657 657
658 658 metadata = tuple(sorted(metadata.items()))
659 659 for k, v in metadata:
660 660 try:
661 661 # might be better to reject non-ASCII keys
662 662 k.decode('utf-8')
663 663 v.decode('utf-8')
664 664 except UnicodeDecodeError:
665 665 raise error.ProgrammingError(
666 666 b'obsstore metadata must be valid UTF-8 sequence '
667 667 b'(key = %r, value = %r)'
668 668 % (pycompat.bytestr(k), pycompat.bytestr(v))
669 669 )
670 670
671 671 marker = (bytes(prec), tuple(succs), flag, metadata, date, parents)
672 672 return bool(self.add(transaction, [marker]))
673 673
674 674 def add(self, transaction, markers):
675 675 """Add new markers to the store
676 676
677 677 Takes care of filtering out duplicates.
678 678 Returns the number of new markers."""
679 679 if self._readonly:
680 680 raise error.Abort(
681 681 _(b'creating obsolete markers is not enabled on this repo')
682 682 )
683 683 known = set()
684 684 getsuccessors = self.successors.get
685 685 new = []
686 686 for m in markers:
687 687 if m not in getsuccessors(m[0], ()) and m not in known:
688 688 known.add(m)
689 689 new.append(m)
690 690 if new:
691 691 f = self.svfs(b'obsstore', b'ab')
692 692 try:
693 693 offset = f.tell()
694 694 transaction.add(b'obsstore', offset)
695 695 # offset == 0: new file - add the version header
696 696 data = b''.join(encodemarkers(new, offset == 0, self._version))
697 697 f.write(data)
698 698 finally:
699 699 # XXX: f.close() == filecache invalidation == obsstore rebuilt.
700 700 # call 'filecacheentry.refresh()' here
701 701 f.close()
702 702 addedmarkers = transaction.changes.get(b'obsmarkers')
703 703 if addedmarkers is not None:
704 704 addedmarkers.update(new)
705 705 self._addmarkers(new, data)
706 706 # new markers *may* have changed several sets. invalidate the caches.
707 707 self.caches.clear()
708 708 # records the number of new markers for the transaction hooks
709 709 previous = int(transaction.hookargs.get(b'new_obsmarkers', b'0'))
710 710 transaction.hookargs[b'new_obsmarkers'] = b'%d' % (previous + len(new))
711 711 return len(new)
712 712
713 713 def mergemarkers(self, transaction, data):
714 714 """merge a binary stream of markers inside the obsstore
715 715
716 716 Returns the number of new markers added."""
717 717 version, markers = _readmarkers(data)
718 718 return self.add(transaction, markers)
719 719
720 720 @propertycache
721 721 def _data(self):
722 722 return self.svfs.tryread(b'obsstore')
723 723
724 724 @propertycache
725 725 def _version(self):
726 726 if len(self._data) >= 1:
727 727 return _readmarkerversion(self._data)
728 728 else:
729 729 return self._defaultformat
730 730
731 731 @propertycache
732 732 def _all(self):
733 733 data = self._data
734 734 if not data:
735 735 return []
736 736 self._version, markers = _readmarkers(data)
737 737 markers = list(markers)
738 738 _checkinvalidmarkers(self.repo, markers)
739 739 return markers
740 740
741 741 @propertycache
742 742 def successors(self):
743 743 successors = {}
744 744 _addsuccessors(successors, self._all)
745 745 return successors
746 746
747 747 @propertycache
748 748 def predecessors(self):
749 749 predecessors = {}
750 750 _addpredecessors(predecessors, self._all)
751 751 return predecessors
752 752
753 753 @propertycache
754 754 def children(self):
755 755 children = {}
756 756 _addchildren(children, self._all)
757 757 return children
758 758
759 759 def _cached(self, attr):
760 760 return attr in self.__dict__
761 761
762 762 def _addmarkers(self, markers, rawdata):
763 763 markers = list(markers) # to allow repeated iteration
764 764 self._data = self._data + rawdata
765 765 self._all.extend(markers)
766 766 if self._cached('successors'):
767 767 _addsuccessors(self.successors, markers)
768 768 if self._cached('predecessors'):
769 769 _addpredecessors(self.predecessors, markers)
770 770 if self._cached('children'):
771 771 _addchildren(self.children, markers)
772 772 _checkinvalidmarkers(self.repo, markers)
773 773
774 def relevantmarkers(self, nodes):
775 """return a set of all obsolescence markers relevant to a set of nodes.
774 def relevantmarkers(self, nodes=None, revs=None):
775 """return a set of all obsolescence markers relevant to a set of
776 nodes or revisions.
776 777
777 "relevant" to a set of nodes mean:
778 "relevant" to a set of nodes or revisions mean:
778 779
779 780 - markers that use this changeset as a successor
780 781 - prune markers of direct children of this changeset
781 782 - recursive application of the two rules on predecessors of these
782 783 markers
783 784
784 785 It is a set so you cannot rely on order."""
786 if nodes is None:
787 nodes = set()
788 if revs is None:
789 revs = set()
785 790
786 pendingnodes = set(nodes)
787 seenmarkers = set()
788 seennodes = set(pendingnodes)
791 tonode = self.repo.unfiltered().changelog.node
792 pendingnodes = set()
789 793 precursorsmarkers = self.predecessors
790 794 succsmarkers = self.successors
791 795 children = self.children
796 for node in nodes:
797 if (
798 node in precursorsmarkers
799 or node in succsmarkers
800 or node in children
801 ):
802 pendingnodes.add(node)
803 for rev in revs:
804 node = tonode(rev)
805 if (
806 node in precursorsmarkers
807 or node in succsmarkers
808 or node in children
809 ):
810 pendingnodes.add(node)
811 seenmarkers = set()
812 seennodes = pendingnodes.copy()
792 813 while pendingnodes:
793 814 direct = set()
794 815 for current in pendingnodes:
795 816 direct.update(precursorsmarkers.get(current, ()))
796 817 pruned = [m for m in children.get(current, ()) if not m[1]]
797 818 direct.update(pruned)
798 819 pruned = [m for m in succsmarkers.get(current, ()) if not m[1]]
799 820 direct.update(pruned)
800 821 direct -= seenmarkers
801 822 pendingnodes = {m[0] for m in direct}
802 823 seenmarkers |= direct
803 824 pendingnodes -= seennodes
804 825 seennodes |= pendingnodes
805 826 return seenmarkers
806 827
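A hedged sketch of the two calling conventions the new signature supports; the `repo` object and the revset used here are assumptions for illustration, not part of this change:

    # existing callers keep passing node ids:
    nodes = [ctx.node() for ctx in repo.set(b'draft()')]
    markers = repo.obsstore.relevantmarkers(nodes=nodes)

    # callers that already hold revision numbers can now pass them directly
    # and let the obsstore do the rev -> node conversion on the unfiltered
    # changelog:
    markers = repo.obsstore.relevantmarkers(revs=repo.revs(b'draft()'))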
807 828
808 829 def makestore(ui, repo):
809 830 """Create an obsstore instance from a repo."""
810 831 # read default format for new obsstore.
811 832 # developer config: format.obsstore-version
812 833 defaultformat = ui.configint(b'format', b'obsstore-version')
813 834 # rely on obsstore class default when possible.
814 835 kwargs = {}
815 836 if defaultformat is not None:
816 837 kwargs['defaultformat'] = defaultformat
817 838 readonly = not isenabled(repo, createmarkersopt)
818 839 store = obsstore(repo, repo.svfs, readonly=readonly, **kwargs)
819 840 if store and readonly:
820 841 ui.warn(
821 842 _(b'"obsolete" feature not enabled but %i markers found!\n')
822 843 % len(list(store))
823 844 )
824 845 return store
825 846
826 847
827 848 def commonversion(versions):
828 849 """Return the newest version listed in both versions and our local formats.
829 850
830 851 Returns None if no common version exists.
831 852 """
832 853 versions.sort(reverse=True)
833 854 # search for highest version known on both side
834 855 for v in versions:
835 856 if v in formats:
836 857 return v
837 858 return None
838 859
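For example, assuming the local `formats` table only knows versions 0 and 1 (as the format-0/format-1 helpers above suggest), and keeping in mind that the argument list is sorted in place:

    commonversion([1, 0])    # -> 1, the newest mutually supported version
    commonversion([0])       # -> 0
    commonversion([42])      # -> None, no version in common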
839 860
840 861 # arbitrary picked to fit into 8K limit from HTTP server
841 862 # you have to take into account:
842 863 # - the version header
843 864 # - the base85 encoding
844 865 _maxpayload = 5300
845 866
846 867
847 868 def _pushkeyescape(markers):
848 869 """encode markers into a dict suitable for pushkey exchange
849 870
850 871 - binary data is base85 encoded
851 872 - split in chunks smaller than 5300 bytes"""
852 873 keys = {}
853 874 parts = []
854 875 currentlen = _maxpayload * 2 # ensure we create a new part
855 876 for marker in markers:
856 877 nextdata = _fm0encodeonemarker(marker)
857 878 if len(nextdata) + currentlen > _maxpayload:
858 879 currentpart = []
859 880 currentlen = 0
860 881 parts.append(currentpart)
861 882 currentpart.append(nextdata)
862 883 currentlen += len(nextdata)
863 884 for idx, part in enumerate(reversed(parts)):
864 885 data = b''.join([_pack(b'>B', _fm0version)] + part)
865 886 keys[b'dump%i' % idx] = util.b85encode(data)
866 887 return keys
867 888
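A hedged sketch of the resulting mapping, mirroring how `listmarkers` below uses it (the values are illustrative):

    keys = _pushkeyescape(sorted(repo.obsstore))   # `repo` assumed to exist
    # keys -> {b'dump0': b'<base85 chunk>', b'dump1': b'<base85 chunk>', ...}
    # with each decoded chunk staying under the ~5300 byte payload limit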
868 889
869 890 def listmarkers(repo):
870 891 """List markers over pushkey"""
871 892 if not repo.obsstore:
872 893 return {}
873 894 return _pushkeyescape(sorted(repo.obsstore))
874 895
875 896
876 897 def pushmarker(repo, key, old, new):
877 898 """Push markers over pushkey"""
878 899 if not key.startswith(b'dump'):
879 900 repo.ui.warn(_(b'unknown key: %r') % key)
880 901 return False
881 902 if old:
882 903 repo.ui.warn(_(b'unexpected old value for %r') % key)
883 904 return False
884 905 data = util.b85decode(new)
885 906 with repo.lock(), repo.transaction(b'pushkey: obsolete markers') as tr:
886 907 repo.obsstore.mergemarkers(tr, data)
887 908 repo.invalidatevolatilesets()
888 909 return True
889 910
890 911
891 912 # mapping of 'set-name' -> <function to compute this set>
892 913 cachefuncs = {}
893 914
894 915
895 916 def cachefor(name):
896 917 """Decorator to register a function as computing the cache for a set"""
897 918
898 919 def decorator(func):
899 920 if name in cachefuncs:
900 921 msg = b"duplicated registration for volatileset '%s' (existing: %r)"
901 922 raise error.ProgrammingError(msg % (name, cachefuncs[name]))
902 923 cachefuncs[name] = func
903 924 return func
904 925
905 926 return decorator
906 927
907 928
908 929 def getrevs(repo, name):
909 930 """Return the set of revision that belong to the <name> set
910 931
911 932 Such access may compute the set and cache it for future use"""
912 933 repo = repo.unfiltered()
913 934 with util.timedcm('getrevs %s', name):
914 935 if not repo.obsstore:
915 936 return frozenset()
916 937 if name not in repo.obsstore.caches:
917 938 repo.obsstore.caches[name] = cachefuncs[name](repo)
918 939 return repo.obsstore.caches[name]
919 940
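Typical lookups against the named sets registered below (a usage sketch; `repo` is assumed to exist in the calling code):

    obsolete_revs = getrevs(repo, b'obsolete')
    orphan_revs = getrevs(repo, b'orphan')
    # both are frozensets of revision numbers, cached in repo.obsstore.caches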
920 941
921 942 # To be simple we need to invalidate obsolescence cache when:
922 943 #
923 944 # - a new changeset is added
924 945 # - public phase is changed
925 946 # - obsolescence markers are added
926 947 # - strip is used on a repo
927 948 def clearobscaches(repo):
928 949 """Remove all obsolescence related cache from a repo
929 950
930 951 This remove all cache in obsstore is the obsstore already exist on the
931 952 repo.
932 953
933 954 (We could be smarter here given the exact event that trigger the cache
934 955 clearing)"""
935 956 # only clear cache is there is obsstore data in this repo
936 957 if b'obsstore' in repo._filecache:
937 958 repo.obsstore.caches.clear()
938 959
939 960
940 961 def _mutablerevs(repo):
941 962 """the set of mutable revision in the repository"""
942 963 return repo._phasecache.getrevset(repo, phases.relevant_mutable_phases)
943 964
944 965
945 966 @cachefor(b'obsolete')
946 967 def _computeobsoleteset(repo):
947 968 """the set of obsolete revisions"""
948 969 getnode = repo.changelog.node
949 970 notpublic = _mutablerevs(repo)
950 971 isobs = repo.obsstore.successors.__contains__
951 972 return frozenset(r for r in notpublic if isobs(getnode(r)))
952 973
953 974
954 975 @cachefor(b'orphan')
955 976 def _computeorphanset(repo):
956 977 """the set of non obsolete revisions with obsolete parents"""
957 978 pfunc = repo.changelog.parentrevs
958 979 mutable = _mutablerevs(repo)
959 980 obsolete = getrevs(repo, b'obsolete')
960 981 others = mutable - obsolete
961 982 unstable = set()
962 983 for r in sorted(others):
963 984 # A rev is unstable if one of its parent is obsolete or unstable
964 985 # this works since we traverse following growing rev order
965 986 for p in pfunc(r):
966 987 if p in obsolete or p in unstable:
967 988 unstable.add(r)
968 989 break
969 990 return frozenset(unstable)
970 991
971 992
972 993 @cachefor(b'suspended')
973 994 def _computesuspendedset(repo):
974 995 """the set of obsolete parents with non obsolete descendants"""
975 996 suspended = repo.changelog.ancestors(getrevs(repo, b'orphan'))
976 997 return frozenset(r for r in getrevs(repo, b'obsolete') if r in suspended)
977 998
978 999
979 1000 @cachefor(b'extinct')
980 1001 def _computeextinctset(repo):
981 1002 """the set of obsolete parents without non obsolete descendants"""
982 1003 return getrevs(repo, b'obsolete') - getrevs(repo, b'suspended')
983 1004
984 1005
985 1006 @cachefor(b'phasedivergent')
986 1007 def _computephasedivergentset(repo):
987 1008 """the set of revs trying to obsolete public revisions"""
988 1009 bumped = set()
989 1010 # util function (avoid attribute lookup in the loop)
990 1011 phase = repo._phasecache.phase # would be faster to grab the full list
991 1012 public = phases.public
992 1013 cl = repo.changelog
993 1014 torev = cl.index.get_rev
994 1015 tonode = cl.node
995 1016 obsstore = repo.obsstore
996 1017 candidates = sorted(_mutablerevs(repo) - getrevs(repo, b"obsolete"))
997 1018 for rev in candidates:
998 1019 # We only evaluate mutable, non-obsolete revisions
999 1020 node = tonode(rev)
1000 1021 # (future) A cache of predecessors may be worth it if split is very common
1001 1022 for pnode in obsutil.allpredecessors(
1002 1023 obsstore, [node], ignoreflags=bumpedfix
1003 1024 ):
1004 1025 prev = torev(pnode) # unfiltered! but so is phasecache
1005 1026 if (prev is not None) and (phase(repo, prev) <= public):
1006 1027 # we have a public predecessor
1007 1028 bumped.add(rev)
1008 1029 break # Next draft!
1009 1030 return frozenset(bumped)
1010 1031
1011 1032
1012 1033 @cachefor(b'contentdivergent')
1013 1034 def _computecontentdivergentset(repo):
1014 1035 """the set of rev that compete to be the final successors of some revision."""
1015 1036 divergent = set()
1016 1037 obsstore = repo.obsstore
1017 1038 newermap = {}
1018 1039 tonode = repo.changelog.node
1019 1040 candidates = sorted(_mutablerevs(repo) - getrevs(repo, b"obsolete"))
1020 1041 for rev in candidates:
1021 1042 node = tonode(rev)
1022 1043 mark = obsstore.predecessors.get(node, ())
1023 1044 toprocess = set(mark)
1024 1045 seen = set()
1025 1046 while toprocess:
1026 1047 prec = toprocess.pop()[0]
1027 1048 if prec in seen:
1028 1049 continue # emergency cycle hanging prevention
1029 1050 seen.add(prec)
1030 1051 if prec not in newermap:
1031 1052 obsutil.successorssets(repo, prec, cache=newermap)
1032 1053 newer = [n for n in newermap[prec] if n]
1033 1054 if len(newer) > 1:
1034 1055 divergent.add(rev)
1035 1056 break
1036 1057 toprocess.update(obsstore.predecessors.get(prec, ()))
1037 1058 return frozenset(divergent)
1038 1059
1039 1060
1040 1061 def makefoldid(relation, user):
1041 1062
1042 1063 folddigest = hashutil.sha1(user)
1043 1064 for p in relation[0] + relation[1]:
1044 1065 folddigest.update(b'%d' % p.rev())
1045 1066 folddigest.update(p.node())
1046 1067 # Since fold only has to compete against fold for the same successors, it
1047 1068 # seems fine to use a small ID. Smaller IDs save space.
1048 1069 return hex(folddigest.digest())[:8]
1049 1070
1050 1071
1051 1072 def createmarkers(
1052 1073 repo, relations, flag=0, date=None, metadata=None, operation=None
1053 1074 ):
1054 1075 """Add obsolete markers between changesets in a repo
1055 1076
1056 1077 <relations> must be an iterable of ((<old>,...), (<new>, ...)[,{metadata}])
1057 1078 tuple. `old` and `news` are changectx. metadata is an optional dictionary
1058 1079 containing metadata for this marker only. It is merged with the global
1059 1080 metadata specified through the `metadata` argument of this function.
1060 1081 Any string values in metadata must be UTF-8 bytes.
1061 1082
1062 1083 Trying to obsolete a public changeset will raise an exception.
1063 1084
1064 1085 Current user and date are used except if specified otherwise in the
1065 1086 metadata attribute.
1066 1087
1067 1088 This function operates within a transaction of its own, but does
1068 1089 not take any lock on the repo.
1069 1090 """
1070 1091 # prepare metadata
1071 1092 if metadata is None:
1072 1093 metadata = {}
1073 1094 if b'user' not in metadata:
1074 1095 luser = (
1075 1096 repo.ui.config(b'devel', b'user.obsmarker') or repo.ui.username()
1076 1097 )
1077 1098 metadata[b'user'] = encoding.fromlocal(luser)
1078 1099
1079 1100 # Operation metadata handling
1080 1101 useoperation = repo.ui.configbool(
1081 1102 b'experimental', b'evolution.track-operation'
1082 1103 )
1083 1104 if useoperation and operation:
1084 1105 metadata[b'operation'] = operation
1085 1106
1086 1107 # Effect flag metadata handling
1087 1108 saveeffectflag = repo.ui.configbool(
1088 1109 b'experimental', b'evolution.effect-flags'
1089 1110 )
1090 1111
1091 1112 with repo.transaction(b'add-obsolescence-marker') as tr:
1092 1113 markerargs = []
1093 1114 for rel in relations:
1094 1115 predecessors = rel[0]
1095 1116 if not isinstance(predecessors, tuple):
1096 1117 # preserve compat with old API until all caller are migrated
1097 1118 predecessors = (predecessors,)
1098 1119 if len(predecessors) > 1 and len(rel[1]) != 1:
1099 1120 msg = b'Fold markers can only have 1 successors, not %d'
1100 1121 raise error.ProgrammingError(msg % len(rel[1]))
1101 1122 foldid = None
1102 1123 foldsize = len(predecessors)
1103 1124 if 1 < foldsize:
1104 1125 foldid = makefoldid(rel, metadata[b'user'])
1105 1126 for foldidx, prec in enumerate(predecessors, 1):
1106 1127 sucs = rel[1]
1107 1128 localmetadata = metadata.copy()
1108 1129 if len(rel) > 2:
1109 1130 localmetadata.update(rel[2])
1110 1131 if foldid is not None:
1111 1132 localmetadata[b'fold-id'] = foldid
1112 1133 localmetadata[b'fold-idx'] = b'%d' % foldidx
1113 1134 localmetadata[b'fold-size'] = b'%d' % foldsize
1114 1135
1115 1136 if not prec.mutable():
1116 1137 raise error.Abort(
1117 1138 _(b"cannot obsolete public changeset: %s") % prec,
1118 1139 hint=b"see 'hg help phases' for details",
1119 1140 )
1120 1141 nprec = prec.node()
1121 1142 nsucs = tuple(s.node() for s in sucs)
1122 1143 npare = None
1123 1144 if not nsucs:
1124 1145 npare = tuple(p.node() for p in prec.parents())
1125 1146 if nprec in nsucs:
1126 1147 raise error.Abort(
1127 1148 _(b"changeset %s cannot obsolete itself") % prec
1128 1149 )
1129 1150
1130 1151 # Effect flag can be different by relation
1131 1152 if saveeffectflag:
1132 1153 # The effect flag is saved in a versioned field name for
1133 1154 # future evolution
1134 1155 effectflag = obsutil.geteffectflag(prec, sucs)
1135 1156 localmetadata[obsutil.EFFECTFLAGFIELD] = b"%d" % effectflag
1136 1157
1137 1158 # Creating the marker causes the hidden cache to become
1138 1159 # invalid, which causes recomputation when we ask for
1139 1160 # prec.parents() above. Resulting in n^2 behavior. So let's
1140 1161 # prepare all of the args first, then create the markers.
1141 1162 markerargs.append((nprec, nsucs, npare, localmetadata))
1142 1163
1143 1164 for args in markerargs:
1144 1165 nprec, nsucs, npare, localmetadata = args
1145 1166 repo.obsstore.create(
1146 1167 tr,
1147 1168 nprec,
1148 1169 nsucs,
1149 1170 flag,
1150 1171 parents=npare,
1151 1172 date=date,
1152 1173 metadata=localmetadata,
1153 1174 ui=repo.ui,
1154 1175 )
1155 1176 repo.filteredrevcache.clear()
@@ -1,1047 +1,1047
1 1 # obsutil.py - utility functions for obsolescence
2 2 #
3 3 # Copyright 2017 Boris Feld <boris.feld@octobus.net>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8
9 9 import re
10 10
11 11 from .i18n import _
12 12 from .node import (
13 13 hex,
14 14 short,
15 15 )
16 16 from . import (
17 17 diffutil,
18 18 encoding,
19 19 error,
20 20 phases,
21 21 util,
22 22 )
23 23 from .utils import dateutil
24 24
25 25 ### obsolescence marker flag
26 26
27 27 ## bumpedfix flag
28 28 #
29 29 # When a changeset A' succeeds a changeset A which became public, we call A'
30 30 # "bumped" because it's a successor of a public changeset
31 31 #
32 32 # o A' (bumped)
33 33 # |`:
34 34 # | o A
35 35 # |/
36 36 # o Z
37 37 #
38 38 # The way to solve this situation is to create a new changeset Ad as a child
39 39 # of A. This changeset has the same content as A'. So the diff from A to A'
40 40 # is the same as the diff from A to Ad. Ad is marked as a successor of A'
41 41 #
42 42 # o Ad
43 43 # |`:
44 44 # | x A'
45 45 # |'|
46 46 # o | A
47 47 # |/
48 48 # o Z
49 49 #
50 50 # But by transitivity Ad is also a successor of A. To avoid having Ad marked
51 51 # as bumped too, we add the `bumpedfix` flag to the marker <A', (Ad,)>.
52 52 # This flag means that the successor expresses the changes between the public
53 53 # and bumped versions and fixes the situation, breaking the transitivity of
54 54 # "bumped" here.
55 55 bumpedfix = 1
56 56 usingsha256 = 2
57 57
58 58
59 59 class marker:
60 60 """Wrap obsolete marker raw data"""
61 61
62 62 def __init__(self, repo, data):
63 63 # the repo argument will be used to create changectx in later version
64 64 self._repo = repo
65 65 self._data = data
66 66 self._decodedmeta = None
67 67
68 68 def __hash__(self):
69 69 return hash(self._data)
70 70
71 71 def __eq__(self, other):
72 72 if type(other) != type(self):
73 73 return False
74 74 return self._data == other._data
75 75
76 76 def prednode(self):
77 77 """Predecessor changeset node identifier"""
78 78 return self._data[0]
79 79
80 80 def succnodes(self):
81 81 """List of successor changesets node identifiers"""
82 82 return self._data[1]
83 83
84 84 def parentnodes(self):
85 85 """Parents of the predecessors (None if not recorded)"""
86 86 return self._data[5]
87 87
88 88 def metadata(self):
89 89 """Decoded metadata dictionary"""
90 90 return dict(self._data[3])
91 91
92 92 def date(self):
93 93 """Creation date as (unixtime, offset)"""
94 94 return self._data[4]
95 95
96 96 def flags(self):
97 97 """The flags field of the marker"""
98 98 return self._data[2]
99 99
100 100
101 101 def getmarkers(repo, nodes=None, exclusive=False):
102 102 """returns markers known in a repository
103 103
104 104 If <nodes> is specified, only markers "relevant" to those nodes are
105 105 returned"""
106 106 if nodes is None:
107 107 rawmarkers = repo.obsstore
108 108 elif exclusive:
109 109 rawmarkers = exclusivemarkers(repo, nodes)
110 110 else:
111 rawmarkers = repo.obsstore.relevantmarkers(nodes)
111 rawmarkers = repo.obsstore.relevantmarkers(nodes=nodes)
112 112
113 113 for markerdata in rawmarkers:
114 114 yield marker(repo, markerdata)
115 115
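A small sketch of consuming the wrapper objects yielded here; `repo` and `ctx` are assumed to exist in the calling code:

    for m in getmarkers(repo, nodes=[ctx.node()]):
        meta = m.metadata()
        summary = (short(m.prednode()),
                   [short(s) for s in m.succnodes()],
                   meta.get(b'user'))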
116 116
117 117 def sortedmarkers(markers):
118 118 # last item of marker tuple ('parents') may be None or a tuple
119 119 return sorted(markers, key=lambda m: m[:-1] + (m[-1] or (),))
120 120
121 121
122 122 def closestpredecessors(repo, nodeid):
123 123 """yield the list of next predecessors pointing on visible changectx nodes
124 124
125 125 This function respect the repoview filtering, filtered revision will be
126 126 considered missing.
127 127 """
128 128
129 129 precursors = repo.obsstore.predecessors
130 130 stack = [nodeid]
131 131 seen = set(stack)
132 132
133 133 while stack:
134 134 current = stack.pop()
135 135 currentpreccs = precursors.get(current, ())
136 136
137 137 for prec in currentpreccs:
138 138 precnodeid = prec[0]
139 139
140 140 # Basic cycle protection
141 141 if precnodeid in seen:
142 142 continue
143 143 seen.add(precnodeid)
144 144
145 145 if precnodeid in repo:
146 146 yield precnodeid
147 147 else:
148 148 stack.append(precnodeid)
149 149
150 150
151 151 def allpredecessors(obsstore, nodes, ignoreflags=0):
152 152 """Yield node for every precursors of <nodes>.
153 153
154 154 Some precursors may be unknown locally.
155 155
156 156 This is a linear yield unsuited to detecting folded changesets. It includes
157 157 initial nodes too."""
158 158
159 159 remaining = set(nodes)
160 160 seen = set(remaining)
161 161 prec = obsstore.predecessors.get
162 162 while remaining:
163 163 current = remaining.pop()
164 164 yield current
165 165 for mark in prec(current, ()):
166 166 # ignore marker flagged with specified flag
167 167 if mark[2] & ignoreflags:
168 168 continue
169 169 suc = mark[0]
170 170 if suc not in seen:
171 171 seen.add(suc)
172 172 remaining.add(suc)
173 173
174 174
175 175 def allsuccessors(obsstore, nodes, ignoreflags=0):
176 176 """Yield node for every successor of <nodes>.
177 177
178 178 Some successors may be unknown locally.
179 179
180 180 This is a linear yield unsuited to detecting split changesets. It includes
181 181 initial nodes too."""
182 182 remaining = set(nodes)
183 183 seen = set(remaining)
184 184 while remaining:
185 185 current = remaining.pop()
186 186 yield current
187 187 for mark in obsstore.successors.get(current, ()):
188 188 # ignore marker flagged with specified flag
189 189 if mark[2] & ignoreflags:
190 190 continue
191 191 for suc in mark[1]:
192 192 if suc not in seen:
193 193 seen.add(suc)
194 194 remaining.add(suc)
195 195
196 196
197 197 def _filterprunes(markers):
198 198 """return a set with no prune markers"""
199 199 return {m for m in markers if m[1]}
200 200
201 201
202 202 def exclusivemarkers(repo, nodes):
203 203 """set of markers relevant to "nodes" but no other locally-known nodes
204 204
205 205 This function computes the set of markers "exclusive" to a locally-known
206 206 node. This means we walk the markers starting from <nodes> until we reach a
207 207 locally-known precursor outside of <nodes>. Elements of <nodes> with
208 208 locally-known successors outside of <nodes> are ignored (since their
209 209 precursors markers are also relevant to these successors).
210 210
211 211 For example:
212 212
213 213 # (A0 rewritten as A1)
214 214 #
215 215 # A0 <-1- A1 # Marker "1" is exclusive to A1
216 216
217 217 or
218 218
219 219 # (A0 rewritten as AX; AX rewritten as A1; AX is unknown locally)
220 220 #
221 221 # <-1- A0 <-2- AX <-3- A1 # Marker "2,3" are exclusive to A1
222 222
223 223 or
224 224
225 225 # (A0 has unknown precursors, A0 rewritten as A1 and A2 (divergence))
226 226 #
227 227 # <-2- A1 # Marker "2" is exclusive to A0,A1
228 228 # /
229 229 # <-1- A0
230 230 # \
231 231 # <-3- A2 # Marker "3" is exclusive to A0,A2
232 232 #
233 233 # in addition:
234 234 #
235 235 # Markers "2,3" are exclusive to A1,A2
236 236 # Markers "1,2,3" are exclusive to A0,A1,A2
237 237
238 238 See test/test-obsolete-bundle-strip.t for more examples.
239 239
240 240 An example usage is strip. When stripping a changeset, we also want to
241 241 strip the markers exclusive to this changeset. Otherwise we would have
242 242 "dangling"" obsolescence markers from its precursors: Obsolescence markers
243 243 marking a node as obsolete without any successors available locally.
244 244
245 245 As for relevant markers, the prune markers for children will be followed.
246 246 Of course, they will only be followed if the pruned child is
247 247 locally-known, since the prune markers are relevant to the pruned node.
248 248 However, while prune markers are considered relevant to the parent of the
249 249 pruned changesets, prune markers for locally-known changeset (with no
250 250 successors) are considered exclusive to the pruned nodes. This allows
251 251 to strip the prune markers (with the rest of the exclusive chain) alongside
252 252 the pruned changesets.
253 253 """
254 254 # running on a filtered repository would be dangerous as markers could be
255 255 # reported as exclusive when they are relevant for other filtered nodes.
256 256 unfi = repo.unfiltered()
257 257
258 258 # shortcut to various useful item
259 259 has_node = unfi.changelog.index.has_node
260 260 precursorsmarkers = unfi.obsstore.predecessors
261 261 successormarkers = unfi.obsstore.successors
262 262 childrenmarkers = unfi.obsstore.children
263 263
264 264 # exclusive markers (return of the function)
265 265 exclmarkers = set()
266 266 # we need fast membership testing
267 267 nodes = set(nodes)
268 268 # looking for head in the obshistory
269 269 #
270 270 # XXX we are ignoring all issues in regard with cycle for now.
271 271 stack = [n for n in nodes if not _filterprunes(successormarkers.get(n, ()))]
272 272 stack.sort()
273 273 # nodes already stacked
274 274 seennodes = set(stack)
275 275 while stack:
276 276 current = stack.pop()
277 277 # fetch precursors markers
278 278 markers = list(precursorsmarkers.get(current, ()))
279 279 # extend the list with prune markers
280 280 for mark in successormarkers.get(current, ()):
281 281 if not mark[1]:
282 282 markers.append(mark)
283 283 # and markers from children (looking for prune)
284 284 for mark in childrenmarkers.get(current, ()):
285 285 if not mark[1]:
286 286 markers.append(mark)
287 287 # traverse the markers
288 288 for mark in markers:
289 289 if mark in exclmarkers:
290 290 # markers already selected
291 291 continue
292 292
293 293 # If the markers is about the current node, select it
294 294 #
295 295 # (this delay the addition of markers from children)
296 296 if mark[1] or mark[0] == current:
297 297 exclmarkers.add(mark)
298 298
299 299 # should we keep traversing through the precursors?
300 300 prec = mark[0]
301 301
302 302 # nodes in the stack or already processed
303 303 if prec in seennodes:
304 304 continue
305 305
306 306 # is this a locally known node ?
307 307 known = has_node(prec)
308 308 # if locally-known and not in the <nodes> set the traversal
309 309 # stop here.
310 310 if known and prec not in nodes:
311 311 continue
312 312
313 313 # do not keep going if there are unselected markers pointing to this
314 314 # nodes. If we end up traversing these unselected markers later the
315 315 # node will be taken care of at that point.
316 316 precmarkers = _filterprunes(successormarkers.get(prec))
317 317 if precmarkers.issubset(exclmarkers):
318 318 seennodes.add(prec)
319 319 stack.append(prec)
320 320
321 321 return exclmarkers
322 322
323 323
324 324 def foreground(repo, nodes):
325 325 """return all nodes in the "foreground" of other node
326 326
327 327 The foreground of a revision is anything reachable using parent -> children
328 328 or precursor -> successor relation. It is very similar to "descendant" but
329 329 augmented with obsolescence information.
330 330
331 331 Beware that obsolescence cycles may result in complex situations.
332 332 """
333 333 repo = repo.unfiltered()
334 334 foreground = set(repo.set(b'%ln::', nodes))
335 335 if repo.obsstore:
336 336 # We only need this complicated logic if there is obsolescence
337 337 # XXX will probably deserve an optimised revset.
338 338 has_node = repo.changelog.index.has_node
339 339 plen = -1
340 340 # compute the whole set of successors or descendants
341 341 while len(foreground) != plen:
342 342 plen = len(foreground)
343 343 succs = {c.node() for c in foreground}
344 344 mutable = [c.node() for c in foreground if c.mutable()]
345 345 succs.update(allsuccessors(repo.obsstore, mutable))
346 346 known = (n for n in succs if has_node(n))
347 347 foreground = set(repo.set(b'%ln::', known))
348 348 return {c.node() for c in foreground}
349 349
350 350
351 351 # effectflag field
352 352 #
353 353 # Effect-flag is a 1-byte bit field used to store what changed between a
354 354 # changeset and its successor(s).
355 355 #
356 356 # The effect flag is stored in obs-markers metadata while we iterate on the
357 357 # information design. That's why we have the EFFECTFLAGFIELD. If we come up
358 358 # with an incompatible design for effect flag, we can store a new design under
359 359 # another field name so we don't break readers. We plan to extend the existing
360 360 # obsmarkers bit-field when the effect flag design will be stabilized.
361 361 #
362 362 # The effect-flag is placed behind an experimental flag
363 363 # `effect-flags` set to off by default.
364 364 #
365 365
366 366 EFFECTFLAGFIELD = b"ef1"
367 367
368 368 DESCCHANGED = 1 << 0 # action changed the description
369 369 METACHANGED = 1 << 1 # action change the meta
370 370 DIFFCHANGED = 1 << 3 # action change diff introduced by the changeset
371 371 PARENTCHANGED = 1 << 2 # action change the parent
372 372 USERCHANGED = 1 << 4 # the user changed
373 373 DATECHANGED = 1 << 5 # the date changed
374 374 BRANCHCHANGED = 1 << 6 # the branch changed
375 375
376 376 METABLACKLIST = [
377 377 re.compile(b'^branch$'),
378 378 re.compile(b'^.*-source$'),
379 379 re.compile(b'^.*_source$'),
380 380 re.compile(b'^source$'),
381 381 ]
382 382
383 383
384 384 def metanotblacklisted(metaitem):
385 385 """Check that the key of a meta item (extrakey, extravalue) does not
386 386 match at least one of the blacklist pattern
387 387 """
388 388 metakey = metaitem[0]
389 389
390 390 return not any(pattern.match(metakey) for pattern in METABLACKLIST)
391 391
392 392
393 393 def _prepare_hunk(hunk):
394 394 """Drop all information but the username and patch"""
395 395 cleanhunk = []
396 396 for line in hunk.splitlines():
397 397 if line.startswith(b'# User') or not line.startswith(b'#'):
398 398 if line.startswith(b'@@'):
399 399 line = b'@@\n'
400 400 cleanhunk.append(line)
401 401 return cleanhunk
402 402
403 403
404 404 def _getdifflines(iterdiff):
405 405 """return a cleaned up lines"""
406 406 lines = next(iterdiff, None)
407 407
408 408 if lines is None:
409 409 return lines
410 410
411 411 return _prepare_hunk(lines)
412 412
413 413
414 414 def _cmpdiff(leftctx, rightctx):
415 415 """return True if both ctx introduce the "same diff"
416 416
417 417 This is a first and basic implementation, with many shortcomings.
418 418 """
419 419 diffopts = diffutil.diffallopts(leftctx.repo().ui, {b'git': True})
420 420
421 421 # Leftctx or right ctx might be filtered, so we need to use the contexts
422 422 # with an unfiltered repository to safely compute the diff
423 423
424 424 # leftctx and rightctx can be from different repository views in case of
425 425 # hgsubversion, so don't try to access them from the same repository
426 426 # rightctx.repo() and leftctx.repo() are not always the same
427 427 leftunfi = leftctx._repo.unfiltered()[leftctx.rev()]
428 428 leftdiff = leftunfi.diff(opts=diffopts)
429 429 rightunfi = rightctx._repo.unfiltered()[rightctx.rev()]
430 430 rightdiff = rightunfi.diff(opts=diffopts)
431 431
432 432 left, right = (0, 0)
433 433 while None not in (left, right):
434 434 left = _getdifflines(leftdiff)
435 435 right = _getdifflines(rightdiff)
436 436
437 437 if left != right:
438 438 return False
439 439 return True
440 440
441 441
442 442 def geteffectflag(source, successors):
443 443 """From an obs-marker relation, compute what changed between the
444 444 predecessor and the successor.
445 445 """
446 446 effects = 0
447 447
448 448 for changectx in successors:
449 449 # Check if description has changed
450 450 if changectx.description() != source.description():
451 451 effects |= DESCCHANGED
452 452
453 453 # Check if user has changed
454 454 if changectx.user() != source.user():
455 455 effects |= USERCHANGED
456 456
457 457 # Check if date has changed
458 458 if changectx.date() != source.date():
459 459 effects |= DATECHANGED
460 460
461 461 # Check if branch has changed
462 462 if changectx.branch() != source.branch():
463 463 effects |= BRANCHCHANGED
464 464
465 465 # Check if at least one of the parent has changed
466 466 if changectx.parents() != source.parents():
467 467 effects |= PARENTCHANGED
468 468
469 469 # Check if other meta has changed
470 470 changeextra = changectx.extra().items()
471 471 ctxmeta = sorted(filter(metanotblacklisted, changeextra))
472 472
473 473 sourceextra = source.extra().items()
474 474 srcmeta = sorted(filter(metanotblacklisted, sourceextra))
475 475
476 476 if ctxmeta != srcmeta:
477 477 effects |= METACHANGED
478 478
479 479 # Check if the diff has changed
480 480 if not _cmpdiff(source, changectx):
481 481 effects |= DIFFCHANGED
482 482
483 483 return effects
484 484
485 485
486 486 def getobsoleted(repo, tr=None, changes=None):
487 487 """return the set of pre-existing revisions obsoleted by a transaction
488 488
489 489 Either the transaction or changes item of the transaction (for hooks)
490 490 must be provided, but not both.
491 491 """
492 492 if (tr is None) == (changes is None):
493 493 e = b"exactly one of tr and changes must be provided"
494 494 raise error.ProgrammingError(e)
495 495 torev = repo.unfiltered().changelog.index.get_rev
496 496 phase = repo._phasecache.phase
497 497 succsmarkers = repo.obsstore.successors.get
498 498 public = phases.public
499 499 if changes is None:
500 500 changes = tr.changes
501 501 addedmarkers = changes[b'obsmarkers']
502 502 origrepolen = changes[b'origrepolen']
503 503 seenrevs = set()
504 504 obsoleted = set()
505 505 for mark in addedmarkers:
506 506 node = mark[0]
507 507 rev = torev(node)
508 508 if rev is None or rev in seenrevs or rev >= origrepolen:
509 509 continue
510 510 seenrevs.add(rev)
511 511 if phase(repo, rev) == public:
512 512 continue
513 513 if set(succsmarkers(node) or []).issubset(addedmarkers):
514 514 obsoleted.add(rev)
515 515 return obsoleted
516 516
517 517
518 518 class _succs(list):
519 519 """small class to represent a successors with some metadata about it"""
520 520
521 521 def __init__(self, *args, **kwargs):
522 522 super(_succs, self).__init__(*args, **kwargs)
523 523 self.markers = set()
524 524
525 525 def copy(self):
526 526 new = _succs(self)
527 527 new.markers = self.markers.copy()
528 528 return new
529 529
530 530 @util.propertycache
531 531 def _set(self):
532 532 # immutable
533 533 return set(self)
534 534
535 535 def canmerge(self, other):
536 536 return self._set.issubset(other._set)
537 537
538 538
539 539 def successorssets(repo, initialnode, closest=False, cache=None):
540 540 """Return set of all latest successors of initial nodes
541 541
542 542 The successors set of a changeset A are the group of revisions that succeed
543 543 A. It succeeds A as a consistent whole, each revision being only a partial
544 544 replacement. By default, the successors set contains non-obsolete
545 545 changesets only, walking the obsolescence graph until reaching a leaf. If
546 546 'closest' is set to True, closest successors-sets are return (the
547 547 obsolescence walk stops on known changesets).
548 548
549 549 This function returns the full list of successor sets which is why it
550 550 returns a list of tuples and not just a single tuple. Each tuple is a valid
551 551 successors set. Note that (A,) may be a valid successors set for changeset A
552 552 (see below).
553 553
554 554 In most cases, a changeset A will have a single element (e.g. the changeset
555 555 A is replaced by A') in its successors set. Though, it is also common for a
556 556 changeset A to have no elements in its successor set (e.g. the changeset
557 557 has been pruned). Therefore, the returned list of successors sets will be
558 558 [(A',)] or [], respectively.
559 559
560 560 When a changeset A is split into A' and B', however, it will result in a
561 561 successors set containing more than a single element, i.e. [(A',B')].
562 562 Divergent changesets will result in multiple successors sets, i.e. [(A',),
563 563 (A'')].
564 564
565 565 If a changeset A is not obsolete, then it will conceptually have no
566 566 successors set. To distinguish this from a pruned changeset, the successor
567 567 set will contain itself only, i.e. [(A,)].
568 568
569 569 Finally, final successors unknown locally are considered to be pruned
570 570 (pruned: obsoleted without any successors). (Final: successors not affected
571 571 by markers).
572 572
573 573 The 'closest' mode respects the repoview filtering. For example, without a
574 574 filter it will stop at the first locally known changeset; with the 'visible'
575 575 filter it will stop on visible changesets.
576 576
577 577 The optional `cache` parameter is a dictionary that may contain
578 578 precomputed successors sets. It is meant to reuse the computation of a
579 579 previous call to `successorssets` when multiple calls are made at the same
580 580 time. The cache dictionary is updated in place. The caller is responsible
581 581 for its life span. Code that makes multiple calls to `successorssets`
582 582 *should* use this cache mechanism or risk a performance hit.
583 583
584 584 Since results differ depending on the 'closest' mode, the same cache
585 585 cannot be reused for both modes.
586 586 """
587 587
588 588 succmarkers = repo.obsstore.successors
589 589
590 590 # Stack of nodes we search successors sets for
591 591 toproceed = [initialnode]
592 592 # set version of above list for fast loop detection
593 593 # element added to "toproceed" must be added here
594 594 stackedset = set(toproceed)
595 595 if cache is None:
596 596 cache = {}
597 597
598 598 # This while loop is the flattened version of a recursive search for
599 599 # successors sets
600 600 #
601 601 # def successorssets(x):
602 602 # successors = directsuccessors(x)
603 603 # ss = [[]]
604 604 # for succ in directsuccessors(x):
605 605 # # product as in itertools cartesian product
606 606 # ss = product(ss, successorssets(succ))
607 607 # return ss
608 608 #
609 609 # But we can not use plain recursive calls here:
610 610 # - that would blow the python call stack
611 611 # - obsolescence markers may have cycles, we need to handle them.
612 612 #
613 613 # The `toproceed` list acts as our call stack. Every node we search
614 614 # successors sets for is stacked there.
615 615 #
616 616 # The `stackedset` is the set version of this stack, used to check if a node
617 617 # is already stacked. This check is used to detect cycles and prevent an
618 618 # infinite loop.
619 619 #
620 620 # successors set of all nodes are stored in the `cache` dictionary.
621 621 #
622 622 # After this while loop ends we use the cache to return the successors sets
623 623 # for the node requested by the caller.
624 624 while toproceed:
625 625 # Every iteration tries to compute the successors sets of the topmost
626 626 # node of the stack: CURRENT.
627 627 #
628 628 # There are four possible outcomes:
629 629 #
630 630 # 1) We already know the successors sets of CURRENT:
631 631 # -> mission accomplished, pop it from the stack.
632 632 # 2) Stop the walk:
633 633 # default case: Node is not obsolete
634 634 # closest case: Node is known at this repo filter level
635 635 # -> the node is its own successors sets. Add it to the cache.
636 636 # 3) We do not know successors set of direct successors of CURRENT:
637 637 # -> We add those successors to the stack.
638 638 # 4) We know successors sets of all direct successors of CURRENT:
639 639 # -> We can compute CURRENT successors set and add it to the
640 640 # cache.
641 641 #
642 642 current = toproceed[-1]
643 643
644 644 # case 2 condition is a bit hairy because of closest,
645 645 # we compute it on its own
646 646 case2condition = (current not in succmarkers) or (
647 647 closest and current != initialnode and current in repo
648 648 )
649 649
650 650 if current in cache:
651 651 # case (1): We already know the successors sets
652 652 stackedset.remove(toproceed.pop())
653 653 elif case2condition:
654 654 # case (2): end of walk.
655 655 if current in repo:
656 656 # We have a valid successors.
657 657 cache[current] = [_succs((current,))]
658 658 else:
659 659 # Final obsolete version is unknown locally.
660 660 # Do not count that as a valid successors
661 661 cache[current] = []
662 662 else:
663 663 # cases (3) and (4)
664 664 #
665 665 # We proceed in two phases. Phase 1 aims to distinguish case (3)
666 666 # from case (4):
667 667 #
668 668 # For each direct successors of CURRENT, we check whether its
669 669 # successors sets are known. If they are not, we stack the
670 670 # unknown node and proceed to the next iteration of the while
671 671 # loop. (case 3)
672 672 #
673 673 # During this step, we may detect obsolescence cycles: a node
674 674 # with unknown successors sets but already in the call stack.
675 675 # In such a situation, we arbitrarily set the successors sets of
676 676 # the node to nothing (node pruned) to break the cycle.
677 677 #
678 678 # If no break was encountered we proceed to phase 2.
679 679 #
680 680 # Phase 2 computes successors sets of CURRENT (case 4); see details
681 681 # in phase 2 itself.
682 682 #
683 683 # Note the two levels of iteration in each phase.
684 684 # - The first one handles obsolescence markers using CURRENT as
685 685 # precursor (successors markers of CURRENT).
686 686 #
687 687 # Having multiple entry here means divergence.
688 688 #
689 689 # - The second one handles successors defined in each marker.
690 690 #
691 691 # Having none means pruned node, multiple successors means split,
692 692 # single successors are standard replacement.
693 693 #
694 694 for mark in sortedmarkers(succmarkers[current]):
695 695 for suc in mark[1]:
696 696 if suc not in cache:
697 697 if suc in stackedset:
698 698 # cycle breaking
699 699 cache[suc] = []
700 700 else:
701 701 # case (3) If we have not computed successors sets
702 702 # of one of those successors we add it to the
703 703 # `toproceed` stack and stop all work for this
704 704 # iteration.
705 705 toproceed.append(suc)
706 706 stackedset.add(suc)
707 707 break
708 708 else:
709 709 continue
710 710 break
711 711 else:
712 712 # case (4): we know all successors sets of all direct
713 713 # successors
714 714 #
715 715 # Successors set contributed by each marker depends on the
716 716 # successors sets of all its "successors" node.
717 717 #
718 718 # Each different marker is a divergence in the obsolescence
719 719 # history. It contributes successors sets distinct from other
720 720 # markers.
721 721 #
722 722 # Within a marker, a successor may have divergent successors
723 723 # sets. In such a case, the marker will contribute multiple
724 724 # divergent successors sets. If multiple successors have
725 725 # divergent successors sets, a Cartesian product is used.
726 726 #
727 727 # At the end we post-process successors sets to remove
728 728 # duplicated entry and successors set that are strict subset of
729 729 # another one.
730 730 succssets = []
731 731 for mark in sortedmarkers(succmarkers[current]):
732 732 # successors sets contributed by this marker
733 733 base = _succs()
734 734 base.markers.add(mark)
735 735 markss = [base]
736 736 for suc in mark[1]:
737 737 # cardinal product with previous successors
738 738 productresult = []
739 739 for prefix in markss:
740 740 for suffix in cache[suc]:
741 741 newss = prefix.copy()
742 742 newss.markers.update(suffix.markers)
743 743 for part in suffix:
744 744 # do not duplicate entries in successors set
745 745 # first entry wins.
746 746 if part not in newss:
747 747 newss.append(part)
748 748 productresult.append(newss)
749 749 if productresult:
750 750 markss = productresult
751 751 succssets.extend(markss)
752 752 # remove duplicated and subset
753 753 seen = []
754 754 final = []
755 755 candidates = sorted(
756 756 (s for s in succssets if s), key=len, reverse=True
757 757 )
758 758 for cand in candidates:
759 759 for seensuccs in seen:
760 760 if cand.canmerge(seensuccs):
761 761 seensuccs.markers.update(cand.markers)
762 762 break
763 763 else:
764 764 final.append(cand)
765 765 seen.append(cand)
766 766 final.reverse() # put small successors set first
767 767 cache[current] = final
768 768 return cache[initialnode]
769 769
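A usage sketch sharing one cache across several lookups; the node iterable is an assumption for illustration:

    cache = {}
    for node in nodes_of_interest:
        ssets = successorssets(repo, node, cache=cache)
        # ssets is [] for a pruned node, [(A',)] for a plain rewrite,
        # [(A', B')] for a split, and several entries for divergence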
770 770
771 771 def successorsandmarkers(repo, ctx):
772 772 """compute the raw data needed for computing obsfate
773 773 Returns a list of dict, one dict per successors set
774 774 """
775 775 if not ctx.obsolete():
776 776 return None
777 777
778 778 ssets = successorssets(repo, ctx.node(), closest=True)
779 779
780 780 # closestsuccessors returns an empty list for pruned revisions, remap it
781 781 # into a list containing an empty list for future processing
782 782 if ssets == []:
783 783 ssets = [_succs()]
784 784
785 785 # Try to recover pruned markers
786 786 succsmap = repo.obsstore.successors
787 787 fullsuccessorsets = [] # successor set + markers
788 788 for sset in ssets:
789 789 if sset:
790 790 fullsuccessorsets.append(sset)
791 791 else:
792 792 # successorssets returns an empty set() when ctx or one of its
793 793 # successors is pruned.
794 794 # In this case, walk the obs-markers tree again starting with ctx
795 795 # and find the relevant pruning obs-markers, the ones without
796 796 # successors.
797 797 # Having these markers allow us to compute some information about
798 798 # its fate, like who pruned this changeset and when.
799 799
800 800 # XXX we do not catch all prune markers (eg rewritten then pruned)
801 801 # (fix me later)
802 802 foundany = False
803 803 for mark in succsmap.get(ctx.node(), ()):
804 804 if not mark[1]:
805 805 foundany = True
806 806 sset = _succs()
807 807 sset.markers.add(mark)
808 808 fullsuccessorsets.append(sset)
809 809 if not foundany:
810 810 fullsuccessorsets.append(_succs())
811 811
812 812 values = []
813 813 for sset in fullsuccessorsets:
814 814 values.append({b'successors': sset, b'markers': sset.markers})
815 815
816 816 return values
817 817
818 818
819 819 def _getobsfate(successorssets):
820 820 """Compute a changeset obsolescence fate based on its successorssets.
821 821 Successors can be the tipmost ones or the immediate ones. This function's
822 822 return values are not meant to be shown directly to users; they are meant to
823 823 be used by internal functions only.
824 824 Returns one fate from the following values:
825 825 - pruned
826 826 - diverged
827 827 - superseded
828 828 - superseded_split
829 829 """
830 830
831 831 if len(successorssets) == 0:
832 832 # The commit has been pruned
833 833 return b'pruned'
834 834 elif len(successorssets) > 1:
835 835 return b'diverged'
836 836 else:
837 837 # No divergence, only one set of successors
838 838 successors = successorssets[0]
839 839
840 840 if len(successors) == 1:
841 841 return b'superseded'
842 842 else:
843 843 return b'superseded_split'
844 844
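For instance, fed with the shapes produced by `successorssets` (the names below are illustrative only):

    # _getobsfate([])                         -> b'pruned'
    # _getobsfate([sset_a, sset_b])           -> b'diverged'
    # _getobsfate([_succs((node_a,))])        -> b'superseded'
    # _getobsfate([_succs((node_a, node_b))]) -> b'superseded_split'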
845 845
846 846 def obsfateverb(successorset, markers):
847 847 """Return the verb summarizing the successorset and potentially using
848 848 information from the markers
849 849 """
850 850 if not successorset:
851 851 verb = b'pruned'
852 852 elif len(successorset) == 1:
853 853 verb = b'rewritten'
854 854 else:
855 855 verb = b'split'
856 856 return verb
857 857
858 858
859 859 def markersdates(markers):
860 860 """returns the list of dates for a list of markers"""
861 861 return [m[4] for m in markers]
862 862
863 863
864 864 def markersusers(markers):
865 865 """Returns a sorted list of markers users without duplicates"""
866 866 markersmeta = [dict(m[3]) for m in markers]
867 867 users = {
868 868 encoding.tolocal(meta[b'user'])
869 869 for meta in markersmeta
870 870 if meta.get(b'user')
871 871 }
872 872
873 873 return sorted(users)
874 874
875 875
876 876 def markersoperations(markers):
877 877 """Returns a sorted list of markers operations without duplicates"""
878 878 markersmeta = [dict(m[3]) for m in markers]
879 879 operations = {
880 880 meta.get(b'operation') for meta in markersmeta if meta.get(b'operation')
881 881 }
882 882
883 883 return sorted(operations)
884 884
885 885
886 886 def obsfateprinter(ui, repo, successors, markers, formatctx):
887 887 """Build a obsfate string for a single successorset using all obsfate
888 888 related function defined in obsutil
889 889 """
890 890 quiet = ui.quiet
891 891 verbose = ui.verbose
892 892 normal = not verbose and not quiet
893 893
894 894 line = []
895 895
896 896 # Verb
897 897 line.append(obsfateverb(successors, markers))
898 898
899 899 # Operations
900 900 operations = markersoperations(markers)
901 901 if operations:
902 902 line.append(b" using %s" % b", ".join(operations))
903 903
904 904 # Successors
905 905 if successors:
906 906 fmtsuccessors = [formatctx(repo[succ]) for succ in successors]
907 907 line.append(b" as %s" % b", ".join(fmtsuccessors))
908 908
909 909 # Users
910 910 users = markersusers(markers)
911 911 # Filter out the current user in non-verbose mode to reduce the amount of
912 912 # information
913 913 if not verbose:
914 914 currentuser = ui.username(acceptempty=True)
915 915 if len(users) == 1 and currentuser in users:
916 916 users = None
917 917
918 918 if (verbose or normal) and users:
919 919 line.append(b" by %s" % b", ".join(users))
920 920
921 921 # Date
922 922 dates = markersdates(markers)
923 923
924 924 if dates and verbose:
925 925 min_date = min(dates)
926 926 max_date = max(dates)
927 927
928 928 if min_date == max_date:
929 929 fmtmin_date = dateutil.datestr(min_date, b'%Y-%m-%d %H:%M %1%2')
930 930 line.append(b" (at %s)" % fmtmin_date)
931 931 else:
932 932 fmtmin_date = dateutil.datestr(min_date, b'%Y-%m-%d %H:%M %1%2')
933 933 fmtmax_date = dateutil.datestr(max_date, b'%Y-%m-%d %H:%M %1%2')
934 934 line.append(b" (between %s and %s)" % (fmtmin_date, fmtmax_date))
935 935
936 936 return b"".join(line)
937 937
938 938
939 939 filteredmsgtable = {
940 940 b"pruned": _(b"hidden revision '%s' is pruned"),
941 941 b"diverged": _(b"hidden revision '%s' has diverged"),
942 942 b"superseded": _(b"hidden revision '%s' was rewritten as: %s"),
943 943 b"superseded_split": _(b"hidden revision '%s' was split as: %s"),
944 944 b"superseded_split_several": _(
945 945 b"hidden revision '%s' was split as: %s and %d more"
946 946 ),
947 947 }
948 948
949 949
950 950 def _getfilteredreason(repo, changeid, ctx):
951 951 """return a human-friendly string on why a obsolete changeset is hidden"""
952 952 successors = successorssets(repo, ctx.node())
953 953 fate = _getobsfate(successors)
954 954
955 955 # Be more precise in case the revision is superseded
956 956 if fate == b'pruned':
957 957 return filteredmsgtable[b'pruned'] % changeid
958 958 elif fate == b'diverged':
959 959 return filteredmsgtable[b'diverged'] % changeid
960 960 elif fate == b'superseded':
961 961 single_successor = short(successors[0][0])
962 962 return filteredmsgtable[b'superseded'] % (changeid, single_successor)
963 963 elif fate == b'superseded_split':
964 964
965 965 succs = []
966 966 for node_id in successors[0]:
967 967 succs.append(short(node_id))
968 968
969 969 if len(succs) <= 2:
970 970 fmtsuccs = b', '.join(succs)
971 971 return filteredmsgtable[b'superseded_split'] % (changeid, fmtsuccs)
972 972 else:
973 973 firstsuccessors = b', '.join(succs[:2])
974 974 remainingnumber = len(succs) - 2
975 975
976 976 args = (changeid, firstsuccessors, remainingnumber)
977 977 return filteredmsgtable[b'superseded_split_several'] % args
978 978
979 979
980 980 def divergentsets(repo, ctx):
981 981 """Compute sets of commits divergent with a given one"""
982 982 cache = {}
983 983 base = {}
984 984 for n in allpredecessors(repo.obsstore, [ctx.node()]):
985 985 if n == ctx.node():
986 986 # a node can't be a base for divergence with itself
987 987 continue
988 988 nsuccsets = successorssets(repo, n, cache)
989 989 for nsuccset in nsuccsets:
990 990 if ctx.node() in nsuccset:
991 991 # we are only interested in *other* successor sets
992 992 continue
993 993 if tuple(nsuccset) in base:
994 994 # we already know the latest base for this divergence
995 995 continue
996 996 base[tuple(nsuccset)] = n
997 997 return [
998 998 {b'divergentnodes': divset, b'commonpredecessor': b}
999 999 for divset, b in base.items()
1000 1000 ]
1001 1001
1002 1002
1003 1003 def whyunstable(repo, ctx):
1004 1004 result = []
1005 1005 if ctx.orphan():
1006 1006 for parent in ctx.parents():
1007 1007 kind = None
1008 1008 if parent.orphan():
1009 1009 kind = b'orphan'
1010 1010 elif parent.obsolete():
1011 1011 kind = b'obsolete'
1012 1012 if kind is not None:
1013 1013 result.append(
1014 1014 {
1015 1015 b'instability': b'orphan',
1016 1016 b'reason': b'%s parent' % kind,
1017 1017 b'node': parent.hex(),
1018 1018 }
1019 1019 )
1020 1020 if ctx.phasedivergent():
1021 1021 predecessors = allpredecessors(
1022 1022 repo.obsstore, [ctx.node()], ignoreflags=bumpedfix
1023 1023 )
1024 1024 immutable = [
1025 1025 repo[p] for p in predecessors if p in repo and not repo[p].mutable()
1026 1026 ]
1027 1027 for predecessor in immutable:
1028 1028 result.append(
1029 1029 {
1030 1030 b'instability': b'phase-divergent',
1031 1031 b'reason': b'immutable predecessor',
1032 1032 b'node': predecessor.hex(),
1033 1033 }
1034 1034 )
1035 1035 if ctx.contentdivergent():
1036 1036 dsets = divergentsets(repo, ctx)
1037 1037 for dset in dsets:
1038 1038 divnodes = [repo[n] for n in dset[b'divergentnodes']]
1039 1039 result.append(
1040 1040 {
1041 1041 b'instability': b'content-divergent',
1042 1042 b'divergentnodes': divnodes,
1043 1043 b'reason': b'predecessor',
1044 1044 b'node': hex(dset[b'commonpredecessor']),
1045 1045 }
1046 1046 )
1047 1047 return result