cleanup: typos, formatting
Joerg Sonnenberger -
r51885:704c3d08 stable
@@ -1,2676 +1,2676 b''
1 1 # bundle2.py - generic container format to transmit arbitrary data.
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """Handling of the new bundle2 format
8 8
9 9 The goal of bundle2 is to act as an atomic packet to transmit a set of
10 10 payloads in an application agnostic way. It consists of a sequence of "parts"
11 11 that will be handed to and processed by the application layer.
12 12
13 13
14 14 General format architecture
15 15 ===========================
16 16
17 17 The format is architected as follows
18 18
19 19 - magic string
20 20 - stream level parameters
21 21 - payload parts (any number)
22 22 - end of stream marker.
23 23
24 24 The binary format
25 25 ============================
26 26
27 27 All numbers are unsigned and big-endian.
28 28
29 29 stream level parameters
30 30 ------------------------
31 31
32 32 The binary format is as follows
33 33
34 34 :params size: int32
35 35
36 36 The total number of Bytes used by the parameters
37 37
38 38 :params value: arbitrary number of Bytes
39 39
40 40 A blob of `params size` containing the serialized version of all stream level
41 41 parameters.
42 42
43 43 The blob contains a space separated list of parameters. Parameters with a value
44 44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45 45
46 46 Empty names are obviously forbidden.
47 47
48 48 Names MUST start with a letter. If the first letter is lower case, the
49 49 parameter is advisory and can be safely ignored. However, when the first
50 50 letter is capital, the parameter is mandatory and the bundling process MUST
51 51 stop if it is not able to process it.
52 52
53 53 Stream parameters use a simple textual format for two main reasons:
54 54
55 55 - Stream level parameters should remain simple and we want to discourage any
56 56 crazy usage.
57 57 - Textual data allows easy human inspection of a bundle2 header in case of
58 58 trouble.
59 59
60 60 Any application level options MUST go into a bundle2 part instead.
61 61
62 62 Payload part
63 63 ------------------------
64 64
65 65 The binary format is as follows
66 66
67 67 :header size: int32
68 68
69 69 The total number of Bytes used by the part header. When the header is empty
70 70 (size = 0) this is interpreted as the end of stream marker.
71 71
72 72 :header:
73 73
74 74 The header defines how to interpret the part. It contains two pieces of
75 75 data: the part type, and the part parameters.
76 76
77 77 The part type is used to route the part to an application level handler
78 78 that can interpret the payload.
79 79
80 80 Part parameters are passed to the application level handler. They are
81 81 meant to convey information that will help the application level object to
82 82 interpret the part payload.
83 83
84 84 The binary format of the header is as follows
85 85
86 86 :typesize: (one byte)
87 87
88 88 :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)
89 89
90 90 :partid: A 32-bit integer (unique in the bundle) that can be used to refer
91 91 to this part.
92 92
93 93 :parameters:
94 94
95 95 A part's parameters may have arbitrary content; the binary structure is::
96 96
97 97 <mandatory-count><advisory-count><param-sizes><param-data>
98 98
99 99 :mandatory-count: 1 byte, number of mandatory parameters
100 100
101 101 :advisory-count: 1 byte, number of advisory parameters
102 102
103 103 :param-sizes:
104 104
105 105 N couples of bytes, where N is the total number of parameters. Each
106 106 couple contains (<size-of-key>, <size-of-value>) for one parameter.
107 107
108 108 :param-data:
109 109
110 110 A blob of bytes from which each parameter key and value can be
111 111 retrieved using the list of size couples stored in the previous
112 112 field.
113 113
114 114 Mandatory parameters come first, then the advisory ones.
115 115
116 116 Each parameter's key MUST be unique within the part.
117 117
118 118 :payload:
119 119
120 120 payload is a series of `<chunksize><chunkdata>`.
121 121
122 122 `chunksize` is an int32, `chunkdata` are plain bytes (as many as
123 123 `chunksize` says). The payload part is concluded by a zero size chunk.
124 124
125 125 The current implementation always produces either zero or one chunk.
126 126 This is an implementation limitation that will ultimately be lifted.
127 127
128 128 `chunksize` can be negative to trigger special case processing. No such
129 129 processing is in place yet.
130 130
131 131 Bundle processing
132 132 ============================
133 133
134 134 Each part is processed in order using a "part handler". Handlers are registered
135 135 for a certain part type.
136 136
137 137 The matching of a part to its handler is case insensitive. The case of the
138 138 part type is used to know if a part is mandatory or advisory. If the Part type
139 139 contains any uppercase char it is considered mandatory. When no handler is
140 140 known for a Mandatory part, the process is aborted and an exception is raised.
141 141 If the part is advisory and no handler is known, the part is ignored. When the
142 142 process is aborted, the full bundle is still read from the stream to keep the
143 143 channel usable. But none of the parts read after an abort are processed. In the
144 144 future, dropping the stream may become an option for channels we do not care to
145 145 preserve.
146 146 """
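Editor's note: the stream-level parameter rules above (space separated, urlquoted, `<name>=<value>`, advisory vs. mandatory by case) are simple enough to exercise in isolation. The following standalone sketch only illustrates the documented encoding; the helper names are invented for this example and are not part of this module.

from urllib.parse import quote, unquote

def encode_stream_params(params):
    # params: list of (name, value-or-None) pairs, as described above
    blocks = []
    for name, value in params:
        block = quote(name)
        if value is not None:
            block = '%s=%s' % (block, quote(value))
        blocks.append(block)
    return ' '.join(blocks).encode('ascii')

def decode_stream_params(blob):
    # inverse operation: blob -> {name: value-or-None}
    params = {}
    for item in blob.decode('ascii').split(' '):
        if '=' in item:
            name, value = item.split('=', 1)
            params[unquote(name)] = unquote(value)
        else:
            params[unquote(item)] = None
    return params

blob = encode_stream_params([('Compression', 'BZ'), ('advisoryflag', None)])
assert blob == b'Compression=BZ advisoryflag'
assert decode_stream_params(blob) == {'Compression': 'BZ', 'advisoryflag': None}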
147 147
148 148
149 149 import collections
150 150 import errno
151 151 import os
152 152 import re
153 153 import string
154 154 import struct
155 155 import sys
156 156
157 157 from .i18n import _
158 158 from .node import (
159 159 hex,
160 160 short,
161 161 )
162 162 from . import (
163 163 bookmarks,
164 164 changegroup,
165 165 encoding,
166 166 error,
167 167 obsolete,
168 168 phases,
169 169 pushkey,
170 170 pycompat,
171 171 requirements,
172 172 scmutil,
173 173 streamclone,
174 174 tags,
175 175 url,
176 176 util,
177 177 )
178 178 from .utils import (
179 179 stringutil,
180 180 urlutil,
181 181 )
182 182 from .interfaces import repository
183 183
184 184 urlerr = util.urlerr
185 185 urlreq = util.urlreq
186 186
187 187 _pack = struct.pack
188 188 _unpack = struct.unpack
189 189
190 190 _fstreamparamsize = b'>i'
191 191 _fpartheadersize = b'>i'
192 192 _fparttypesize = b'>B'
193 193 _fpartid = b'>I'
194 194 _fpayloadsize = b'>i'
195 195 _fpartparamcount = b'>BB'
196 196
197 197 preferedchunksize = 32768
198 198
199 199 _parttypeforbidden = re.compile(b'[^a-zA-Z0-9_:-]')
200 200
201 201
202 202 def outdebug(ui, message):
203 203 """debug regarding output stream (bundling)"""
204 204 if ui.configbool(b'devel', b'bundle2.debug'):
205 205 ui.debug(b'bundle2-output: %s\n' % message)
206 206
207 207
208 208 def indebug(ui, message):
209 209 """debug on input stream (unbundling)"""
210 210 if ui.configbool(b'devel', b'bundle2.debug'):
211 211 ui.debug(b'bundle2-input: %s\n' % message)
212 212
213 213
214 214 def validateparttype(parttype):
215 215 """raise ValueError if a parttype contains an invalid character"""
216 216 if _parttypeforbidden.search(parttype):
217 217 raise ValueError(parttype)
218 218
219 219
220 220 def _makefpartparamsizes(nbparams):
221 221 """return a struct format to read part parameter sizes
222 222
223 223 The number of parameters is variable so we need to build that format
224 224 dynamically.
225 225 """
226 226 return b'>' + (b'BB' * nbparams)
227 227
228 228
229 229 parthandlermapping = {}
230 230
231 231
232 232 def parthandler(parttype, params=()):
233 233 """decorator that registers a function as a bundle2 part handler
234 234
235 235 eg::
236 236
237 237 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
238 238 def myparttypehandler(...):
239 239 '''process a part of type "my part".'''
240 240 ...
241 241 """
242 242 validateparttype(parttype)
243 243
244 244 def _decorator(func):
245 245 lparttype = parttype.lower() # enforce lower case matching.
246 246 assert lparttype not in parthandlermapping
247 247 parthandlermapping[lparttype] = func
248 248 func.params = frozenset(params)
249 249 return func
250 250
251 251 return _decorator
252 252
253 253
254 254 class unbundlerecords:
255 255 """keep record of what happens during an unbundle
256 256
257 257 New records are added using `records.add('cat', obj)`. Where 'cat' is a
258 258 category of record and obj is an arbitrary object.
259 259
260 260 `records['cat']` will return all entries of this category 'cat'.
261 261
262 262 Iterating on the object itself will yield `('category', obj)` tuples
263 263 for all entries.
264 264
265 265 All iterations happen in chronological order.
266 266 """
267 267
268 268 def __init__(self):
269 269 self._categories = {}
270 270 self._sequences = []
271 271 self._replies = {}
272 272
273 273 def add(self, category, entry, inreplyto=None):
274 274 """add a new record of a given category.
275 275
276 276 The entry can then be retrieved in the list returned by
277 277 self['category']."""
278 278 self._categories.setdefault(category, []).append(entry)
279 279 self._sequences.append((category, entry))
280 280 if inreplyto is not None:
281 281 self.getreplies(inreplyto).add(category, entry)
282 282
283 283 def getreplies(self, partid):
284 284 """get the records that are replies to a specific part"""
285 285 return self._replies.setdefault(partid, unbundlerecords())
286 286
287 287 def __getitem__(self, cat):
288 288 return tuple(self._categories.get(cat, ()))
289 289
290 290 def __iter__(self):
291 291 return iter(self._sequences)
292 292
293 293 def __len__(self):
294 294 return len(self._sequences)
295 295
296 296 def __nonzero__(self):
297 297 return bool(self._sequences)
298 298
299 299 __bool__ = __nonzero__
300 300
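Editor's note: a small sketch of the recording API described in the class docstring; it only exercises the methods defined above and is not part of the module.

records = unbundlerecords()
records.add(b'changegroup', {b'return': 1})
records.add(b'output', b'remote: ok\n', inreplyto=3)

assert records[b'changegroup'] == ({b'return': 1},)
assert list(records) == [
    (b'changegroup', {b'return': 1}),
    (b'output', b'remote: ok\n'),
]
assert records.getreplies(3)[b'output'] == (b'remote: ok\n',)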
301 301
302 302 class bundleoperation:
303 303 """an object that represents a single bundling process
304 304
305 305 Its purpose is to carry unbundle-related objects and states.
306 306
307 307 A new object should be created at the beginning of each bundle processing.
308 308 The object is to be returned by the processing function.
309 309
310 310 The object has very little content now; it will ultimately contain:
311 311 * an access to the repo the bundle is applied to,
312 312 * a ui object,
313 313 * a way to retrieve a transaction to add changes to the repo,
314 314 * a way to record the result of processing each part,
315 315 * a way to construct a bundle response when applicable.
316 316 """
317 317
318 318 def __init__(
319 319 self,
320 320 repo,
321 321 transactiongetter,
322 322 captureoutput=True,
323 323 source=b'',
324 324 remote=None,
325 325 ):
326 326 self.repo = repo
327 327 # the peer object who produced this bundle if available
328 328 self.remote = remote
329 329 self.ui = repo.ui
330 330 self.records = unbundlerecords()
331 331 self.reply = None
332 332 self.captureoutput = captureoutput
333 333 self.hookargs = {}
334 334 self._gettransaction = transactiongetter
335 335 # carries value that can modify part behavior
336 336 self.modes = {}
337 337 self.source = source
338 338
339 339 def gettransaction(self):
340 340 transaction = self._gettransaction()
341 341
342 342 if self.hookargs:
343 343 # the ones added to the transaction supersede those added
344 344 # to the operation.
345 345 self.hookargs.update(transaction.hookargs)
346 346 transaction.hookargs = self.hookargs
347 347
348 348 # mark the hookargs as flushed. further attempts to add to
349 349 # hookargs will result in an abort.
350 350 self.hookargs = None
351 351
352 352 return transaction
353 353
354 354 def addhookargs(self, hookargs):
355 355 if self.hookargs is None:
356 356 raise error.ProgrammingError(
357 357 b'attempted to add hookargs to '
358 358 b'operation after transaction started'
359 359 )
360 360 self.hookargs.update(hookargs)
361 361
362 362
363 363 class TransactionUnavailable(RuntimeError):
364 364 pass
365 365
366 366
367 367 def _notransaction():
368 368 """default method to get a transaction while processing a bundle
369 369
370 370 Raise an exception to highlight the fact that no transaction was expected
371 371 to be created"""
372 372 raise TransactionUnavailable()
373 373
374 374
375 375 def applybundle(repo, unbundler, tr, source, url=None, remote=None, **kwargs):
376 376 # transform me into unbundler.apply() as soon as the freeze is lifted
377 377 if isinstance(unbundler, unbundle20):
378 378 tr.hookargs[b'bundle2'] = b'1'
379 379 if source is not None and b'source' not in tr.hookargs:
380 380 tr.hookargs[b'source'] = source
381 381 if url is not None and b'url' not in tr.hookargs:
382 382 tr.hookargs[b'url'] = url
383 383 return processbundle(
384 384 repo, unbundler, lambda: tr, source=source, remote=remote
385 385 )
386 386 else:
387 387 # the transactiongetter won't be used, but we might as well set it
388 388 op = bundleoperation(repo, lambda: tr, source=source, remote=remote)
389 389 _processchangegroup(op, unbundler, tr, source, url, **kwargs)
390 390 return op
391 391
392 392
393 393 class partiterator:
394 394 def __init__(self, repo, op, unbundler):
395 395 self.repo = repo
396 396 self.op = op
397 397 self.unbundler = unbundler
398 398 self.iterator = None
399 399 self.count = 0
400 400 self.current = None
401 401
402 402 def __enter__(self):
403 403 def func():
404 404 itr = enumerate(self.unbundler.iterparts(), 1)
405 405 for count, p in itr:
406 406 self.count = count
407 407 self.current = p
408 408 yield p
409 409 p.consume()
410 410 self.current = None
411 411
412 412 self.iterator = func()
413 413 return self.iterator
414 414
415 415 def __exit__(self, type, exc, tb):
416 416 if not self.iterator:
417 417 return
418 418
419 419 # Only gracefully abort in a normal exception situation. User aborts
420 420 # like Ctrl+C throw a KeyboardInterrupt which is not a base Exception,
421 421 # and should not gracefully cleanup.
422 422 if isinstance(exc, Exception):
423 423 # Any exceptions seeking to the end of the bundle at this point are
424 424 # almost certainly related to the underlying stream being bad.
425 425 # And, chances are that the exception we're handling is related to
426 426 # getting in that bad state. So, we swallow the seeking error and
427 427 # re-raise the original error.
428 428 seekerror = False
429 429 try:
430 430 if self.current:
431 431 # consume the part content to not corrupt the stream.
432 432 self.current.consume()
433 433
434 434 for part in self.iterator:
435 435 # consume the bundle content
436 436 part.consume()
437 437 except Exception:
438 438 seekerror = True
439 439
440 440 # Small hack to let caller code distinguish exceptions from bundle2
441 441 # processing from processing the old format. This is mostly needed
442 442 # to handle different return codes to unbundle according to the type
443 443 # of bundle. We should probably clean up or drop this return code
444 444 # craziness in a future version.
445 445 exc.duringunbundle2 = True
446 446 salvaged = []
447 447 replycaps = None
448 448 if self.op.reply is not None:
449 449 salvaged = self.op.reply.salvageoutput()
450 450 replycaps = self.op.reply.capabilities
451 451 exc._replycaps = replycaps
452 452 exc._bundle2salvagedoutput = salvaged
453 453
454 454 # Re-raising from a variable loses the original stack. So only use
455 455 # that form if we need to.
456 456 if seekerror:
457 457 raise exc
458 458
459 459 self.repo.ui.debug(
460 460 b'bundle2-input-bundle: %i parts total\n' % self.count
461 461 )
462 462
463 463
464 464 def processbundle(
465 465 repo,
466 466 unbundler,
467 467 transactiongetter=None,
468 468 op=None,
469 469 source=b'',
470 470 remote=None,
471 471 ):
472 472 """This function processes a bundle, applying effects to/from a repo
473 473
474 474 It iterates over each part then searches for and uses the proper handling
475 475 code to process the part. Parts are processed in order.
476 476
477 477 An unknown Mandatory part will abort the process.
478 478
479 479 It is temporarily possible to provide a prebuilt bundleoperation to the
480 480 function. This is used to ensure output is properly propagated in case of
481 481 an error during the unbundling. This output capturing part will likely be
482 482 reworked and this ability will probably go away in the process.
483 483 """
484 484 if op is None:
485 485 if transactiongetter is None:
486 486 transactiongetter = _notransaction
487 487 op = bundleoperation(
488 488 repo,
489 489 transactiongetter,
490 490 source=source,
491 491 remote=remote,
492 492 )
493 493 # todo:
494 494 # - replace this with an init function soon.
495 495 # - exception catching
496 496 unbundler.params
497 497 if repo.ui.debugflag:
498 498 msg = [b'bundle2-input-bundle:']
499 499 if unbundler.params:
500 500 msg.append(b' %i params' % len(unbundler.params))
501 501 if op._gettransaction is None or op._gettransaction is _notransaction:
502 502 msg.append(b' no-transaction')
503 503 else:
504 504 msg.append(b' with-transaction')
505 505 msg.append(b'\n')
506 506 repo.ui.debug(b''.join(msg))
507 507
508 508 processparts(repo, op, unbundler)
509 509
510 510 return op
511 511
512 512
513 513 def processparts(repo, op, unbundler):
514 514 with partiterator(repo, op, unbundler) as parts:
515 515 for part in parts:
516 516 _processpart(op, part)
517 517
518 518
519 519 def _processchangegroup(op, cg, tr, source, url, **kwargs):
520 520 if op.remote is not None and op.remote.path is not None:
521 521 remote_path = op.remote.path
522 522 kwargs = kwargs.copy()
523 523 kwargs['delta_base_reuse_policy'] = remote_path.delta_reuse_policy
524 524 ret = cg.apply(op.repo, tr, source, url, **kwargs)
525 525 op.records.add(
526 526 b'changegroup',
527 527 {
528 528 b'return': ret,
529 529 },
530 530 )
531 531 return ret
532 532
533 533
534 534 def _gethandler(op, part):
535 535 status = b'unknown' # used by debug output
536 536 try:
537 537 handler = parthandlermapping.get(part.type)
538 538 if handler is None:
539 539 status = b'unsupported-type'
540 540 raise error.BundleUnknownFeatureError(parttype=part.type)
541 541 indebug(op.ui, b'found a handler for part %s' % part.type)
542 542 unknownparams = part.mandatorykeys - handler.params
543 543 if unknownparams:
544 544 unknownparams = list(unknownparams)
545 545 unknownparams.sort()
546 546 status = b'unsupported-params (%s)' % b', '.join(unknownparams)
547 547 raise error.BundleUnknownFeatureError(
548 548 parttype=part.type, params=unknownparams
549 549 )
550 550 status = b'supported'
551 551 except error.BundleUnknownFeatureError as exc:
552 552 if part.mandatory: # mandatory parts
553 553 raise
554 554 indebug(op.ui, b'ignoring unsupported advisory part %s' % exc)
555 555 return # skip to part processing
556 556 finally:
557 557 if op.ui.debugflag:
558 558 msg = [b'bundle2-input-part: "%s"' % part.type]
559 559 if not part.mandatory:
560 560 msg.append(b' (advisory)')
561 561 nbmp = len(part.mandatorykeys)
562 562 nbap = len(part.params) - nbmp
563 563 if nbmp or nbap:
564 564 msg.append(b' (params:')
565 565 if nbmp:
566 566 msg.append(b' %i mandatory' % nbmp)
567 567 if nbap:
568 568 msg.append(b' %i advisory' % nbap)
569 569 msg.append(b')')
570 570 msg.append(b' %s\n' % status)
571 571 op.ui.debug(b''.join(msg))
572 572
573 573 return handler
574 574
575 575
576 576 def _processpart(op, part):
577 577 """process a single part from a bundle
578 578
579 579 The part is guaranteed to have been fully consumed when the function exits
580 580 (even if an exception is raised)."""
581 581 handler = _gethandler(op, part)
582 582 if handler is None:
583 583 return
584 584
585 585 # handler is called outside the above try block so that we don't
586 586 # risk catching KeyErrors from anything other than the
587 587 # parthandlermapping lookup (any KeyError raised by handler()
588 588 # itself represents a defect of a different variety).
589 589 output = None
590 590 if op.captureoutput and op.reply is not None:
591 591 op.ui.pushbuffer(error=True, subproc=True)
592 592 output = b''
593 593 try:
594 594 handler(op, part)
595 595 finally:
596 596 if output is not None:
597 597 output = op.ui.popbuffer()
598 598 if output:
599 599 outpart = op.reply.newpart(b'output', data=output, mandatory=False)
600 600 outpart.addparam(
601 601 b'in-reply-to', pycompat.bytestr(part.id), mandatory=False
602 602 )
603 603
604 604
605 605 def decodecaps(blob):
606 606 """decode a bundle2 caps bytes blob into a dictionary
607 607
608 608 The blob is a list of capabilities (one per line)
609 609 Capabilities may have values using a line of the form::
610 610
611 611 capability=value1,value2,value3
612 612
613 613 The values are always a list."""
614 614 caps = {}
615 615 for line in blob.splitlines():
616 616 if not line:
617 617 continue
618 618 if b'=' not in line:
619 619 key, vals = line, ()
620 620 else:
621 621 key, vals = line.split(b'=', 1)
622 622 vals = vals.split(b',')
623 623 key = urlreq.unquote(key)
624 624 vals = [urlreq.unquote(v) for v in vals]
625 625 caps[key] = vals
626 626 return caps
627 627
628 628
629 629 def encodecaps(caps):
630 630 """encode a bundle2 caps dictionary into a bytes blob"""
631 631 chunks = []
632 632 for ca in sorted(caps):
633 633 vals = caps[ca]
634 634 ca = urlreq.quote(ca)
635 635 vals = [urlreq.quote(v) for v in vals]
636 636 if vals:
637 637 ca = b"%s=%s" % (ca, b','.join(vals))
638 638 chunks.append(ca)
639 639 return b'\n'.join(chunks)
640 640
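Editor's note: a minimal roundtrip through the two helpers above (illustration only; the capability names are arbitrary and this assumes the module is importable).

caps = {b'HG20': [], b'digests': [b'sha1', b'sha512']}
blob = encodecaps(caps)
assert blob == b'HG20\ndigests=sha1,sha512'
assert decodecaps(blob) == caps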
641 641
642 642 bundletypes = {
643 643 b"": (b"", b'UN'), # only when using unbundle on ssh and old http servers
644 644 # since the unification ssh accepts a header but there
645 645 # is no capability signaling it.
646 646 b"HG20": (), # special-cased below
647 647 b"HG10UN": (b"HG10UN", b'UN'),
648 648 b"HG10BZ": (b"HG10", b'BZ'),
649 649 b"HG10GZ": (b"HG10GZ", b'GZ'),
650 650 }
651 651
652 652 # hgweb uses this list to communicate its preferred type
653 653 bundlepriority = [b'HG10GZ', b'HG10BZ', b'HG10UN']
654 654
655 655
656 656 class bundle20:
657 657 """represent an outgoing bundle2 container
658 658
659 659 Use the `addparam` method to add a stream level parameter, and `newpart` to
660 660 populate it. Then call `getchunks` to retrieve all the binary chunks of
661 661 data that compose the bundle2 container."""
662 662
663 663 _magicstring = b'HG20'
664 664
665 665 def __init__(self, ui, capabilities=()):
666 666 self.ui = ui
667 667 self._params = []
668 668 self._parts = []
669 669 self.capabilities = dict(capabilities)
670 670 self._compengine = util.compengines.forbundletype(b'UN')
671 671 self._compopts = None
672 672 # If compression is being handled by a consumer of the raw
673 673 # data (e.g. the wire protocol), unsetting this flag tells
674 674 # consumers that the bundle is best left uncompressed.
675 675 self.prefercompressed = True
676 676
677 677 def setcompression(self, alg, compopts=None):
678 678 """setup core part compression to <alg>"""
679 679 if alg in (None, b'UN'):
680 680 return
681 681 assert not any(n.lower() == b'compression' for n, v in self._params)
682 682 self.addparam(b'Compression', alg)
683 683 self._compengine = util.compengines.forbundletype(alg)
684 684 self._compopts = compopts
685 685
686 686 @property
687 687 def nbparts(self):
688 688 """total number of parts added to the bundler"""
689 689 return len(self._parts)
690 690
691 691 # methods used to define the bundle2 content
692 692 def addparam(self, name, value=None):
693 693 """add a stream level parameter"""
694 694 if not name:
695 695 raise error.ProgrammingError(b'empty parameter name')
696 696 if name[0:1] not in pycompat.bytestr(
697 697 string.ascii_letters # pytype: disable=wrong-arg-types
698 698 ):
699 699 raise error.ProgrammingError(
700 700 b'non letter first character: %s' % name
701 701 )
702 702 self._params.append((name, value))
703 703
704 704 def addpart(self, part):
705 705 """add a new part to the bundle2 container
706 706
707 707 Parts contain the actual application payload."""
708 708 assert part.id is None
709 709 part.id = len(self._parts) # very cheap counter
710 710 self._parts.append(part)
711 711
712 712 def newpart(self, typeid, *args, **kwargs):
713 713 """create a new part and add it to the container
714 714
715 715 The part is directly added to the container. For now, this means
716 716 that any failure to properly initialize the part after calling
717 717 ``newpart`` should result in a failure of the whole bundling process.
718 718
719 719 You can still fall back to manually creating and adding one if you need better
720 720 control."""
721 721 part = bundlepart(typeid, *args, **kwargs)
722 722 self.addpart(part)
723 723 return part
724 724
725 725 # methods used to generate the bundle2 stream
726 726 def getchunks(self):
727 727 if self.ui.debugflag:
728 728 msg = [b'bundle2-output-bundle: "%s",' % self._magicstring]
729 729 if self._params:
730 730 msg.append(b' (%i params)' % len(self._params))
731 731 msg.append(b' %i parts total\n' % len(self._parts))
732 732 self.ui.debug(b''.join(msg))
733 733 outdebug(self.ui, b'start emission of %s stream' % self._magicstring)
734 734 yield self._magicstring
735 735 param = self._paramchunk()
736 736 outdebug(self.ui, b'bundle parameter: %s' % param)
737 737 yield _pack(_fstreamparamsize, len(param))
738 738 if param:
739 739 yield param
740 740 for chunk in self._compengine.compressstream(
741 741 self._getcorechunk(), self._compopts
742 742 ):
743 743 yield chunk
744 744
745 745 def _paramchunk(self):
746 746 """return an encoded version of all stream parameters"""
747 747 blocks = []
748 748 for par, value in self._params:
749 749 par = urlreq.quote(par)
750 750 if value is not None:
751 751 value = urlreq.quote(value)
752 752 par = b'%s=%s' % (par, value)
753 753 blocks.append(par)
754 754 return b' '.join(blocks)
755 755
756 756 def _getcorechunk(self):
757 757 """yield chunks for the core part of the bundle
758 758
759 759 (all but headers and parameters)"""
760 760 outdebug(self.ui, b'start of parts')
761 761 for part in self._parts:
762 762 outdebug(self.ui, b'bundle part: "%s"' % part.type)
763 763 for chunk in part.getchunks(ui=self.ui):
764 764 yield chunk
765 765 outdebug(self.ui, b'end of bundle')
766 766 yield _pack(_fpartheadersize, 0)
767 767
768 768 def salvageoutput(self):
769 769 """return a list with a copy of all output parts in the bundle
770 770
771 771 This is meant to be used during error handling to make sure we preserve
772 772 server output"""
773 773 salvaged = []
774 774 for part in self._parts:
775 775 if part.type.startswith(b'output'):
776 776 salvaged.append(part.copy())
777 777 return salvaged
778 778
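Editor's note: a minimal sketch of producing a bundle2 container with the class above. It assumes a `mercurial.ui.ui` instance named `ui`; the part type and output path are arbitrary choices for illustration.

bundler = bundle20(ui)
bundler.newpart(b'output', data=b'hello from bundle2', mandatory=False)
with open('example.hg20', 'wb') as fh:
    for chunk in bundler.getchunks():
        fh.write(chunk)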
779 779
780 780 class unpackermixin:
781 781 """A mixin to extract bytes and struct data from a stream"""
782 782
783 783 def __init__(self, fp):
784 784 self._fp = fp
785 785
786 786 def _unpack(self, format):
787 787 """unpack this struct format from the stream
788 788
789 789 This method is meant for internal usage by the bundle2 protocol only.
790 790 It directly manipulates the low level stream, including bundle2 level
791 791 instructions.
792 792
793 793 Do not use it to implement higher-level logic or methods."""
794 794 data = self._readexact(struct.calcsize(format))
795 795 return _unpack(format, data)
796 796
797 797 def _readexact(self, size):
798 798 """read exactly <size> bytes from the stream
799 799
800 800 This method is meant for internal usage by the bundle2 protocol only.
801 801 It directly manipulates the low level stream, including bundle2 level
802 802 instructions.
803 803
804 804 Do not use it to implement higher-level logic or methods."""
805 805 return changegroup.readexactly(self._fp, size)
806 806
807 807
808 808 def getunbundler(ui, fp, magicstring=None):
809 809 """return a valid unbundler object for a given magicstring"""
810 810 if magicstring is None:
811 811 magicstring = changegroup.readexactly(fp, 4)
812 812 magic, version = magicstring[0:2], magicstring[2:4]
813 813 if magic != b'HG':
814 814 ui.debug(
815 815 b"error: invalid magic: %r (version %r), should be 'HG'\n"
816 816 % (magic, version)
817 817 )
818 818 raise error.Abort(_(b'not a Mercurial bundle'))
819 819 unbundlerclass = formatmap.get(version)
820 820 if unbundlerclass is None:
821 821 raise error.Abort(_(b'unknown bundle version %s') % version)
822 822 unbundler = unbundlerclass(ui, fp)
823 823 indebug(ui, b'start processing of %s stream' % magicstring)
824 824 return unbundler
825 825
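Editor's note: the matching consumer side, reading back the file written in the previous sketch through `getunbundler` (same assumption about a `mercurial.ui.ui` instance named `ui`).

with open('example.hg20', 'rb') as fh:
    unbundler = getunbundler(ui, fh)
    for part in unbundler.iterparts():
        ui.write(b'%s: %s\n' % (part.type, part.read()))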
826 826
827 827 class unbundle20(unpackermixin):
828 828 """interpret a bundle2 stream
829 829
830 830 This class is fed with a binary stream and yields parts through its
831 831 `iterparts` method."""
832 832
833 833 _magicstring = b'HG20'
834 834
835 835 def __init__(self, ui, fp):
836 836 """If header is specified, we do not read it out of the stream."""
837 837 self.ui = ui
838 838 self._compengine = util.compengines.forbundletype(b'UN')
839 839 self._compressed = None
840 840 super(unbundle20, self).__init__(fp)
841 841
842 842 @util.propertycache
843 843 def params(self):
844 844 """dictionary of stream level parameters"""
845 845 indebug(self.ui, b'reading bundle2 stream parameters')
846 846 params = {}
847 847 paramssize = self._unpack(_fstreamparamsize)[0]
848 848 if paramssize < 0:
849 849 raise error.BundleValueError(
850 850 b'negative bundle param size: %i' % paramssize
851 851 )
852 852 if paramssize:
853 853 params = self._readexact(paramssize)
854 854 params = self._processallparams(params)
855 855 return params
856 856
857 857 def _processallparams(self, paramsblock):
858 858 """ """
859 859 params = util.sortdict()
860 860 for p in paramsblock.split(b' '):
861 861 p = p.split(b'=', 1)
862 862 p = [urlreq.unquote(i) for i in p]
863 863 if len(p) < 2:
864 864 p.append(None)
865 865 self._processparam(*p)
866 866 params[p[0]] = p[1]
867 867 return params
868 868
869 869 def _processparam(self, name, value):
870 870 """process a parameter, applying its effect if needed
871 871
872 872 Parameters starting with a lower case letter are advisory and will be
873 873 ignored when unknown. Those starting with an upper case letter are
874 874 mandatory, and this function will raise a KeyError when unknown.
875 875
876 876 Note: no options are currently supported. Any input will either be
877 877 ignored or will fail.
878 878 """
879 879 if not name:
880 880 raise ValueError('empty parameter name')
881 881 if name[0:1] not in pycompat.bytestr(
882 882 string.ascii_letters # pytype: disable=wrong-arg-types
883 883 ):
884 884 raise ValueError('non letter first character: %s' % name)
885 885 try:
886 886 handler = b2streamparamsmap[name.lower()]
887 887 except KeyError:
888 888 if name[0:1].islower():
889 889 indebug(self.ui, b"ignoring unknown parameter %s" % name)
890 890 else:
891 891 raise error.BundleUnknownFeatureError(params=(name,))
892 892 else:
893 893 handler(self, name, value)
894 894
895 895 def _forwardchunks(self):
896 896 """utility to transfer a bundle2 as binary
897 897
898 898 This is made necessary by the fact the 'getbundle' command over 'ssh'
899 have no way to know then the reply end, relying on the bundle to be
899 have no way to know when the reply ends, relying on the bundle to be
900 900 interpreted to know its end. This is terrible and we are sorry, but we
901 901 needed to move forward to get general delta enabled.
902 902 """
903 903 yield self._magicstring
904 904 assert 'params' not in vars(self)
905 905 paramssize = self._unpack(_fstreamparamsize)[0]
906 906 if paramssize < 0:
907 907 raise error.BundleValueError(
908 908 b'negative bundle param size: %i' % paramssize
909 909 )
910 910 if paramssize:
911 911 params = self._readexact(paramssize)
912 912 self._processallparams(params)
913 913 # The payload itself is decompressed below, so drop
914 914 # the compression parameter passed down to compensate.
915 915 outparams = []
916 916 for p in params.split(b' '):
917 917 k, v = p.split(b'=', 1)
918 918 if k.lower() != b'compression':
919 919 outparams.append(p)
920 920 outparams = b' '.join(outparams)
921 921 yield _pack(_fstreamparamsize, len(outparams))
922 922 yield outparams
923 923 else:
924 924 yield _pack(_fstreamparamsize, paramssize)
925 925 # From there, payload might need to be decompressed
926 926 self._fp = self._compengine.decompressorreader(self._fp)
927 927 emptycount = 0
928 928 while emptycount < 2:
929 929 # so we can brainlessly loop
930 930 assert _fpartheadersize == _fpayloadsize
931 931 size = self._unpack(_fpartheadersize)[0]
932 932 yield _pack(_fpartheadersize, size)
933 933 if size:
934 934 emptycount = 0
935 935 else:
936 936 emptycount += 1
937 937 continue
938 938 if size == flaginterrupt:
939 939 continue
940 940 elif size < 0:
941 941 raise error.BundleValueError(b'negative chunk size: %i' % size)
942 942 yield self._readexact(size)
943 943
944 944 def iterparts(self, seekable=False):
945 945 """yield all parts contained in the stream"""
946 946 cls = seekableunbundlepart if seekable else unbundlepart
947 947 # make sure params have been loaded
948 948 self.params
949 949 # From there, payload needs to be decompressed
950 950 self._fp = self._compengine.decompressorreader(self._fp)
951 951 indebug(self.ui, b'start extraction of bundle2 parts')
952 952 headerblock = self._readpartheader()
953 953 while headerblock is not None:
954 954 part = cls(self.ui, headerblock, self._fp)
955 955 yield part
956 956 # Ensure part is fully consumed so we can start reading the next
957 957 # part.
958 958 part.consume()
959 959
960 960 headerblock = self._readpartheader()
961 961 indebug(self.ui, b'end of bundle2 stream')
962 962
963 963 def _readpartheader(self):
964 964 """reads a part header size and return the bytes blob
965 965
966 966 returns None if empty"""
967 967 headersize = self._unpack(_fpartheadersize)[0]
968 968 if headersize < 0:
969 969 raise error.BundleValueError(
970 970 b'negative part header size: %i' % headersize
971 971 )
972 972 indebug(self.ui, b'part header size: %i' % headersize)
973 973 if headersize:
974 974 return self._readexact(headersize)
975 975 return None
976 976
977 977 def compressed(self):
978 978 self.params # load params
979 979 return self._compressed
980 980
981 981 def close(self):
982 982 """close underlying file"""
983 983 if util.safehasattr(self._fp, 'close'):
984 984 return self._fp.close()
985 985
986 986
987 987 formatmap = {b'20': unbundle20}
988 988
989 989 b2streamparamsmap = {}
990 990
991 991
992 992 def b2streamparamhandler(name):
993 993 """register a handler for a stream level parameter"""
994 994
995 995 def decorator(func):
996 996 assert name not in formatmap
997 997 b2streamparamsmap[name] = func
998 998 return func
999 999
1000 1000 return decorator
1001 1001
1002 1002
1003 1003 @b2streamparamhandler(b'compression')
1004 1004 def processcompression(unbundler, param, value):
1005 1005 """read compression parameter and install payload decompression"""
1006 1006 if value not in util.compengines.supportedbundletypes:
1007 1007 raise error.BundleUnknownFeatureError(params=(param,), values=(value,))
1008 1008 unbundler._compengine = util.compengines.forbundletype(value)
1009 1009 if value is not None:
1010 1010 unbundler._compressed = True
1011 1011
1012 1012
1013 1013 class bundlepart:
1014 1014 """A bundle2 part contains application level payload
1015 1015
1016 1016 The part `type` is used to route the part to the application level
1017 1017 handler.
1018 1018
1019 1019 The part payload is contained in ``part.data``. It could be raw bytes or a
1020 1020 generator of byte chunks.
1021 1021
1022 1022 You can add parameters to the part using the ``addparam`` method.
1023 1023 Parameters can be either mandatory (default) or advisory. Remote side
1024 1024 should be able to safely ignore the advisory ones.
1025 1025
1026 1026 Neither data nor parameters can be modified after the generation has begun.
1027 1027 """
1028 1028
1029 1029 def __init__(
1030 1030 self,
1031 1031 parttype,
1032 1032 mandatoryparams=(),
1033 1033 advisoryparams=(),
1034 1034 data=b'',
1035 1035 mandatory=True,
1036 1036 ):
1037 1037 validateparttype(parttype)
1038 1038 self.id = None
1039 1039 self.type = parttype
1040 1040 self._data = data
1041 1041 self._mandatoryparams = list(mandatoryparams)
1042 1042 self._advisoryparams = list(advisoryparams)
1043 1043 # checking for duplicated entries
1044 1044 self._seenparams = set()
1045 1045 for pname, __ in self._mandatoryparams + self._advisoryparams:
1046 1046 if pname in self._seenparams:
1047 1047 raise error.ProgrammingError(b'duplicated params: %s' % pname)
1048 1048 self._seenparams.add(pname)
1049 1049 # status of the part's generation:
1050 1050 # - None: not started,
1051 1051 # - False: currently generated,
1052 1052 # - True: generation done.
1053 1053 self._generated = None
1054 1054 self.mandatory = mandatory
1055 1055
1056 1056 def __repr__(self):
1057 1057 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
1058 1058 return '<%s object at %x; id: %s; type: %s; mandatory: %s>' % (
1059 1059 cls,
1060 1060 id(self),
1061 1061 self.id,
1062 1062 self.type,
1063 1063 self.mandatory,
1064 1064 )
1065 1065
1066 1066 def copy(self):
1067 1067 """return a copy of the part
1068 1068
1069 1069 The new part has the very same content but no partid assigned yet.
1070 1070 Parts with generated data cannot be copied."""
1071 1071 assert not util.safehasattr(self.data, 'next')
1072 1072 return self.__class__(
1073 1073 self.type,
1074 1074 self._mandatoryparams,
1075 1075 self._advisoryparams,
1076 1076 self._data,
1077 1077 self.mandatory,
1078 1078 )
1079 1079
1080 1080 # methods used to define the part content
1081 1081 @property
1082 1082 def data(self):
1083 1083 return self._data
1084 1084
1085 1085 @data.setter
1086 1086 def data(self, data):
1087 1087 if self._generated is not None:
1088 1088 raise error.ReadOnlyPartError(b'part is being generated')
1089 1089 self._data = data
1090 1090
1091 1091 @property
1092 1092 def mandatoryparams(self):
1093 1093 # make it an immutable tuple to force people through ``addparam``
1094 1094 return tuple(self._mandatoryparams)
1095 1095
1096 1096 @property
1097 1097 def advisoryparams(self):
1098 1098 # make it an immutable tuple to force people through ``addparam``
1099 1099 return tuple(self._advisoryparams)
1100 1100
1101 1101 def addparam(self, name, value=b'', mandatory=True):
1102 1102 """add a parameter to the part
1103 1103
1104 1104 If 'mandatory' is set to True, the remote handler must claim support
1105 1105 for this parameter or the unbundling will be aborted.
1106 1106
1107 1107 The 'name' and 'value' cannot exceed 255 bytes each.
1108 1108 """
1109 1109 if self._generated is not None:
1110 1110 raise error.ReadOnlyPartError(b'part is being generated')
1111 1111 if name in self._seenparams:
1112 1112 raise ValueError(b'duplicated params: %s' % name)
1113 1113 self._seenparams.add(name)
1114 1114 params = self._advisoryparams
1115 1115 if mandatory:
1116 1116 params = self._mandatoryparams
1117 1117 params.append((name, value))
1118 1118
1119 1119 # methods used to generate the bundle2 stream
1120 1120 def getchunks(self, ui):
1121 1121 if self._generated is not None:
1122 1122 raise error.ProgrammingError(b'part can only be consumed once')
1123 1123 self._generated = False
1124 1124
1125 1125 if ui.debugflag:
1126 1126 msg = [b'bundle2-output-part: "%s"' % self.type]
1127 1127 if not self.mandatory:
1128 1128 msg.append(b' (advisory)')
1129 1129 nbmp = len(self.mandatoryparams)
1130 1130 nbap = len(self.advisoryparams)
1131 1131 if nbmp or nbap:
1132 1132 msg.append(b' (params:')
1133 1133 if nbmp:
1134 1134 msg.append(b' %i mandatory' % nbmp)
1135 1135 if nbap:
1136 1136 msg.append(b' %i advisory' % nbap)
1137 1137 msg.append(b')')
1138 1138 if not self.data:
1139 1139 msg.append(b' empty payload')
1140 1140 elif util.safehasattr(self.data, 'next') or util.safehasattr(
1141 1141 self.data, b'__next__'
1142 1142 ):
1143 1143 msg.append(b' streamed payload')
1144 1144 else:
1145 1145 msg.append(b' %i bytes payload' % len(self.data))
1146 1146 msg.append(b'\n')
1147 1147 ui.debug(b''.join(msg))
1148 1148
1149 1149 #### header
1150 1150 if self.mandatory:
1151 1151 parttype = self.type.upper()
1152 1152 else:
1153 1153 parttype = self.type.lower()
1154 1154 outdebug(ui, b'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1155 1155 ## parttype
1156 1156 header = [
1157 1157 _pack(_fparttypesize, len(parttype)),
1158 1158 parttype,
1159 1159 _pack(_fpartid, self.id),
1160 1160 ]
1161 1161 ## parameters
1162 1162 # count
1163 1163 manpar = self.mandatoryparams
1164 1164 advpar = self.advisoryparams
1165 1165 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1166 1166 # size
1167 1167 parsizes = []
1168 1168 for key, value in manpar:
1169 1169 parsizes.append(len(key))
1170 1170 parsizes.append(len(value))
1171 1171 for key, value in advpar:
1172 1172 parsizes.append(len(key))
1173 1173 parsizes.append(len(value))
1174 1174 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1175 1175 header.append(paramsizes)
1176 1176 # key, value
1177 1177 for key, value in manpar:
1178 1178 header.append(key)
1179 1179 header.append(value)
1180 1180 for key, value in advpar:
1181 1181 header.append(key)
1182 1182 header.append(value)
1183 1183 ## finalize header
1184 1184 try:
1185 1185 headerchunk = b''.join(header)
1186 1186 except TypeError:
1187 1187 raise TypeError(
1188 1188 'Found a non-bytes trying to '
1189 1189 'build bundle part header: %r' % header
1190 1190 )
1191 1191 outdebug(ui, b'header chunk size: %i' % len(headerchunk))
1192 1192 yield _pack(_fpartheadersize, len(headerchunk))
1193 1193 yield headerchunk
1194 1194 ## payload
1195 1195 try:
1196 1196 for chunk in self._payloadchunks():
1197 1197 outdebug(ui, b'payload chunk size: %i' % len(chunk))
1198 1198 yield _pack(_fpayloadsize, len(chunk))
1199 1199 yield chunk
1200 1200 except GeneratorExit:
1201 1201 # GeneratorExit means that nobody is listening for our
1202 1202 # results anyway, so just bail quickly rather than trying
1203 1203 # to produce an error part.
1204 1204 ui.debug(b'bundle2-generatorexit\n')
1205 1205 raise
1206 1206 except BaseException as exc:
1207 1207 bexc = stringutil.forcebytestr(exc)
1208 1208 # backup exception data for later
1209 1209 ui.debug(
1210 1210 b'bundle2-input-stream-interrupt: encoding exception %s' % bexc
1211 1211 )
1212 1212 tb = sys.exc_info()[2]
1213 1213 msg = b'unexpected error: %s' % bexc
1214 1214 interpart = bundlepart(
1215 1215 b'error:abort', [(b'message', msg)], mandatory=False
1216 1216 )
1217 1217 interpart.id = 0
1218 1218 yield _pack(_fpayloadsize, -1)
1219 1219 for chunk in interpart.getchunks(ui=ui):
1220 1220 yield chunk
1221 1221 outdebug(ui, b'closing payload chunk')
1222 1222 # abort current part payload
1223 1223 yield _pack(_fpayloadsize, 0)
1224 1224 pycompat.raisewithtb(exc, tb)
1225 1225 # end of payload
1226 1226 outdebug(ui, b'closing payload chunk')
1227 1227 yield _pack(_fpayloadsize, 0)
1228 1228 self._generated = True
1229 1229
1230 1230 def _payloadchunks(self):
1231 1231 """yield chunks of the part payload
1232 1232
1233 1233 Exists to handle the different methods to provide data to a part."""
1234 1234 # we only support fixed size data now.
1235 1235 # This will be improved in the future.
1236 1236 if util.safehasattr(self.data, 'next') or util.safehasattr(
1237 1237 self.data, '__next__'
1238 1238 ):
1239 1239 buff = util.chunkbuffer(self.data)
1240 1240 chunk = buff.read(preferedchunksize)
1241 1241 while chunk:
1242 1242 yield chunk
1243 1243 chunk = buff.read(preferedchunksize)
1244 1244 elif len(self.data):
1245 1245 yield self.data
1246 1246
1247 1247
1248 1248 flaginterrupt = -1
1249 1249
1250 1250
1251 1251 class interrupthandler(unpackermixin):
1252 1252 """read one part and process it with restricted capability
1253 1253
1254 1254 This allows transmitting exceptions raised on the producer side during part
1255 1255 iteration while the consumer is reading a part.
1256 1256
1257 1257 Parts processed in this manner only have access to a ui object."""
1258 1258
1259 1259 def __init__(self, ui, fp):
1260 1260 super(interrupthandler, self).__init__(fp)
1261 1261 self.ui = ui
1262 1262
1263 1263 def _readpartheader(self):
1264 1264 """reads a part header size and return the bytes blob
1265 1265
1266 1266 returns None if empty"""
1267 1267 headersize = self._unpack(_fpartheadersize)[0]
1268 1268 if headersize < 0:
1269 1269 raise error.BundleValueError(
1270 1270 b'negative part header size: %i' % headersize
1271 1271 )
1272 1272 indebug(self.ui, b'part header size: %i\n' % headersize)
1273 1273 if headersize:
1274 1274 return self._readexact(headersize)
1275 1275 return None
1276 1276
1277 1277 def __call__(self):
1278 1278
1279 1279 self.ui.debug(
1280 1280 b'bundle2-input-stream-interrupt: opening out of band context\n'
1281 1281 )
1282 1282 indebug(self.ui, b'bundle2 stream interruption, looking for a part.')
1283 1283 headerblock = self._readpartheader()
1284 1284 if headerblock is None:
1285 1285 indebug(self.ui, b'no part found during interruption.')
1286 1286 return
1287 1287 part = unbundlepart(self.ui, headerblock, self._fp)
1288 1288 op = interruptoperation(self.ui)
1289 1289 hardabort = False
1290 1290 try:
1291 1291 _processpart(op, part)
1292 1292 except (SystemExit, KeyboardInterrupt):
1293 1293 hardabort = True
1294 1294 raise
1295 1295 finally:
1296 1296 if not hardabort:
1297 1297 part.consume()
1298 1298 self.ui.debug(
1299 1299 b'bundle2-input-stream-interrupt: closing out of band context\n'
1300 1300 )
1301 1301
1302 1302
1303 1303 class interruptoperation:
1304 1304 """A limited operation to be used by part handlers during interruption
1305 1305
1306 1306 It only has access to a ui object.
1307 1307 """
1308 1308
1309 1309 def __init__(self, ui):
1310 1310 self.ui = ui
1311 1311 self.reply = None
1312 1312 self.captureoutput = False
1313 1313
1314 1314 @property
1315 1315 def repo(self):
1316 1316 raise error.ProgrammingError(b'no repo access from stream interruption')
1317 1317
1318 1318 def gettransaction(self):
1319 1319 raise TransactionUnavailable(b'no repo access from stream interruption')
1320 1320
1321 1321
1322 1322 def decodepayloadchunks(ui, fh):
1323 1323 """Reads bundle2 part payload data into chunks.
1324 1324
1325 1325 Part payload data consists of framed chunks. This function takes
1326 1326 a file handle and emits those chunks.
1327 1327 """
1328 1328 dolog = ui.configbool(b'devel', b'bundle2.debug')
1329 1329 debug = ui.debug
1330 1330
1331 1331 headerstruct = struct.Struct(_fpayloadsize)
1332 1332 headersize = headerstruct.size
1333 1333 unpack = headerstruct.unpack
1334 1334
1335 1335 readexactly = changegroup.readexactly
1336 1336 read = fh.read
1337 1337
1338 1338 chunksize = unpack(readexactly(fh, headersize))[0]
1339 1339 indebug(ui, b'payload chunk size: %i' % chunksize)
1340 1340
1341 1341 # changegroup.readexactly() is inlined below for performance.
1342 1342 while chunksize:
1343 1343 if chunksize >= 0:
1344 1344 s = read(chunksize)
1345 1345 if len(s) < chunksize:
1346 1346 raise error.Abort(
1347 1347 _(
1348 1348 b'stream ended unexpectedly '
1349 1349 b' (got %d bytes, expected %d)'
1350 1350 )
1351 1351 % (len(s), chunksize)
1352 1352 )
1353 1353
1354 1354 yield s
1355 1355 elif chunksize == flaginterrupt:
1356 1356 # Interrupt "signal" detected. The regular stream is interrupted
1357 1357 # and a bundle2 part follows. Consume it.
1358 1358 interrupthandler(ui, fh)()
1359 1359 else:
1360 1360 raise error.BundleValueError(
1361 1361 b'negative payload chunk size: %s' % chunksize
1362 1362 )
1363 1363
1364 1364 s = read(headersize)
1365 1365 if len(s) < headersize:
1366 1366 raise error.Abort(
1367 1367 _(b'stream ended unexpectedly (got %d bytes, expected %d)')
1368 1368 % (len(s), chunksize)
1369 1369 )
1370 1370
1371 1371 chunksize = unpack(s)[0]
1372 1372
1373 1373 # indebug() inlined for performance.
1374 1374 if dolog:
1375 1375 debug(b'bundle2-input: payload chunk size: %i\n' % chunksize)
1376 1376
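Editor's note: decodepayloadchunks() can be exercised directly on an in-memory frame sequence (same assumption of a `mercurial.ui.ui` instance named `ui`); the framing is the same one sketched earlier in this file.

import io
import struct

framed = struct.pack('>i', 5) + b'hello' + struct.pack('>i', 0)
assert list(decodepayloadchunks(ui, io.BytesIO(framed))) == [b'hello']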
1377 1377
1378 1378 class unbundlepart(unpackermixin):
1379 1379 """a bundle part read from a bundle"""
1380 1380
1381 1381 def __init__(self, ui, header, fp):
1382 1382 super(unbundlepart, self).__init__(fp)
1383 1383 self._seekable = util.safehasattr(fp, 'seek') and util.safehasattr(
1384 1384 fp, 'tell'
1385 1385 )
1386 1386 self.ui = ui
1387 1387 # unbundle state attr
1388 1388 self._headerdata = header
1389 1389 self._headeroffset = 0
1390 1390 self._initialized = False
1391 1391 self.consumed = False
1392 1392 # part data
1393 1393 self.id = None
1394 1394 self.type = None
1395 1395 self.mandatoryparams = None
1396 1396 self.advisoryparams = None
1397 1397 self.params = None
1398 1398 self.mandatorykeys = ()
1399 1399 self._readheader()
1400 1400 self._mandatory = None
1401 1401 self._pos = 0
1402 1402
1403 1403 def _fromheader(self, size):
1404 1404 """return the next <size> byte from the header"""
1405 1405 offset = self._headeroffset
1406 1406 data = self._headerdata[offset : (offset + size)]
1407 1407 self._headeroffset = offset + size
1408 1408 return data
1409 1409
1410 1410 def _unpackheader(self, format):
1411 1411 """read given format from header
1412 1412
1413 1413 This automatically computes the size of the format to read."""
1414 1414 data = self._fromheader(struct.calcsize(format))
1415 1415 return _unpack(format, data)
1416 1416
1417 1417 def _initparams(self, mandatoryparams, advisoryparams):
1418 1418 """internal function to set up all logic-related parameters"""
1419 1419 # make it read only to prevent people touching it by mistake.
1420 1420 self.mandatoryparams = tuple(mandatoryparams)
1421 1421 self.advisoryparams = tuple(advisoryparams)
1422 1422 # user friendly UI
1423 1423 self.params = util.sortdict(self.mandatoryparams)
1424 1424 self.params.update(self.advisoryparams)
1425 1425 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1426 1426
1427 1427 def _readheader(self):
1428 1428 """read the header and setup the object"""
1429 1429 typesize = self._unpackheader(_fparttypesize)[0]
1430 1430 self.type = self._fromheader(typesize)
1431 1431 indebug(self.ui, b'part type: "%s"' % self.type)
1432 1432 self.id = self._unpackheader(_fpartid)[0]
1433 1433 indebug(self.ui, b'part id: "%s"' % pycompat.bytestr(self.id))
1434 1434 # extract mandatory bit from type
1435 1435 self.mandatory = self.type != self.type.lower()
1436 1436 self.type = self.type.lower()
1437 1437 ## reading parameters
1438 1438 # param count
1439 1439 mancount, advcount = self._unpackheader(_fpartparamcount)
1440 1440 indebug(self.ui, b'part parameters: %i' % (mancount + advcount))
1441 1441 # param size
1442 1442 fparamsizes = _makefpartparamsizes(mancount + advcount)
1443 1443 paramsizes = self._unpackheader(fparamsizes)
1444 1444 # make it a list of couples again
1445 1445 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1446 1446 # split mandatory from advisory
1447 1447 mansizes = paramsizes[:mancount]
1448 1448 advsizes = paramsizes[mancount:]
1449 1449 # retrieve param value
1450 1450 manparams = []
1451 1451 for key, value in mansizes:
1452 1452 manparams.append((self._fromheader(key), self._fromheader(value)))
1453 1453 advparams = []
1454 1454 for key, value in advsizes:
1455 1455 advparams.append((self._fromheader(key), self._fromheader(value)))
1456 1456 self._initparams(manparams, advparams)
1457 1457 ## part payload
1458 1458 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1459 1459 # we read the data, tell it
1460 1460 self._initialized = True
1461 1461
1462 1462 def _payloadchunks(self):
1463 1463 """Generator of decoded chunks in the payload."""
1464 1464 return decodepayloadchunks(self.ui, self._fp)
1465 1465
1466 1466 def consume(self):
1467 1467 """Read the part payload until completion.
1468 1468
1469 1469 By consuming the part data, the underlying stream read offset will
1470 1470 be advanced to the next part (or end of stream).
1471 1471 """
1472 1472 if self.consumed:
1473 1473 return
1474 1474
1475 1475 chunk = self.read(32768)
1476 1476 while chunk:
1477 1477 self._pos += len(chunk)
1478 1478 chunk = self.read(32768)
1479 1479
1480 1480 def read(self, size=None):
1481 1481 """read payload data"""
1482 1482 if not self._initialized:
1483 1483 self._readheader()
1484 1484 if size is None:
1485 1485 data = self._payloadstream.read()
1486 1486 else:
1487 1487 data = self._payloadstream.read(size)
1488 1488 self._pos += len(data)
1489 1489 if size is None or len(data) < size:
1490 1490 if not self.consumed and self._pos:
1491 1491 self.ui.debug(
1492 1492 b'bundle2-input-part: total payload size %i\n' % self._pos
1493 1493 )
1494 1494 self.consumed = True
1495 1495 return data
1496 1496
1497 1497
1498 1498 class seekableunbundlepart(unbundlepart):
1499 1499 """A bundle2 part in a bundle that is seekable.
1500 1500
1501 1501 Regular ``unbundlepart`` instances can only be read once. This class
1502 1502 extends ``unbundlepart`` to enable bi-directional seeking within the
1503 1503 part.
1504 1504
1505 1505 Bundle2 part data consists of framed chunks. Offsets when seeking
1506 1506 refer to the decoded data, not the offsets in the underlying bundle2
1507 1507 stream.
1508 1508
1509 1509 To facilitate quickly seeking within the decoded data, instances of this
1510 1510 class maintain a mapping between offsets in the underlying stream and
1511 1511 the decoded payload. This mapping will consume memory in proportion
1512 1512 to the number of chunks within the payload (which almost certainly
1513 1513 increases in proportion with the size of the part).
1514 1514 """
1515 1515
1516 1516 def __init__(self, ui, header, fp):
1517 1517 # (payload, file) offsets for chunk starts.
1518 1518 self._chunkindex = []
1519 1519
1520 1520 super(seekableunbundlepart, self).__init__(ui, header, fp)
1521 1521
1522 1522 def _payloadchunks(self, chunknum=0):
1523 1523 '''seek to specified chunk and start yielding data'''
1524 1524 if len(self._chunkindex) == 0:
1525 1525 assert chunknum == 0, b'Must start with chunk 0'
1526 1526 self._chunkindex.append((0, self._tellfp()))
1527 1527 else:
1528 1528 assert chunknum < len(self._chunkindex), (
1529 1529 b'Unknown chunk %d' % chunknum
1530 1530 )
1531 1531 self._seekfp(self._chunkindex[chunknum][1])
1532 1532
1533 1533 pos = self._chunkindex[chunknum][0]
1534 1534
1535 1535 for chunk in decodepayloadchunks(self.ui, self._fp):
1536 1536 chunknum += 1
1537 1537 pos += len(chunk)
1538 1538 if chunknum == len(self._chunkindex):
1539 1539 self._chunkindex.append((pos, self._tellfp()))
1540 1540
1541 1541 yield chunk
1542 1542
1543 1543 def _findchunk(self, pos):
1544 1544 '''for a given payload position, return a chunk number and offset'''
1545 1545 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1546 1546 if ppos == pos:
1547 1547 return chunk, 0
1548 1548 elif ppos > pos:
1549 1549 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1550 1550 raise ValueError(b'Unknown chunk')
1551 1551
1552 1552 def tell(self):
1553 1553 return self._pos
1554 1554
1555 1555 def seek(self, offset, whence=os.SEEK_SET):
1556 1556 if whence == os.SEEK_SET:
1557 1557 newpos = offset
1558 1558 elif whence == os.SEEK_CUR:
1559 1559 newpos = self._pos + offset
1560 1560 elif whence == os.SEEK_END:
1561 1561 if not self.consumed:
1562 1562 # Can't use self.consume() here because it advances self._pos.
1563 1563 chunk = self.read(32768)
1564 1564 while chunk:
1565 1565 chunk = self.read(32768)
1566 1566 newpos = self._chunkindex[-1][0] - offset
1567 1567 else:
1568 1568 raise ValueError(b'Unknown whence value: %r' % (whence,))
1569 1569
1570 1570 if newpos > self._chunkindex[-1][0] and not self.consumed:
1571 1571 # Can't use self.consume() here because it advances self._pos.
1572 1572 chunk = self.read(32768)
1573 1573 while chunk:
1574 1574 chunk = self.read(32768)
1575 1575
1576 1576 if not 0 <= newpos <= self._chunkindex[-1][0]:
1577 1577 raise ValueError(b'Offset out of range')
1578 1578
1579 1579 if self._pos != newpos:
1580 1580 chunk, internaloffset = self._findchunk(newpos)
1581 1581 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1582 1582 adjust = self.read(internaloffset)
1583 1583 if len(adjust) != internaloffset:
1584 1584 raise error.Abort(_(b'Seek failed\n'))
1585 1585 self._pos = newpos
1586 1586
1587 1587 def _seekfp(self, offset, whence=0):
1588 1588 """move the underlying file pointer
1589 1589
1590 1590 This method is meant for internal usage by the bundle2 protocol only.
1591 1591 It directly manipulates the low level stream, including bundle2 level
1592 1592 instructions.
1593 1593
1594 1594 Do not use it to implement higher-level logic or methods."""
1595 1595 if self._seekable:
1596 1596 return self._fp.seek(offset, whence)
1597 1597 else:
1598 1598 raise NotImplementedError(_(b'File pointer is not seekable'))
1599 1599
1600 1600 def _tellfp(self):
1601 1601 """return the file offset, or None if file is not seekable
1602 1602
1603 1603 This method is meant for internal usage by the bundle2 protocol only.
1604 1604         It directly manipulates the low-level stream, including bundle2-level
1605 1605         instructions.
1606 1606
1607 1607 Do not use it to implement higher-level logic or methods."""
1608 1608 if self._seekable:
1609 1609 try:
1610 1610 return self._fp.tell()
1611 1611 except IOError as e:
1612 1612 if e.errno == errno.ESPIPE:
1613 1613 self._seekable = False
1614 1614 else:
1615 1615 raise
1616 1616 return None
1617 1617
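# A small, self-contained illustration of the (payload offset, file offset)
# index maintained above; the file offsets are hypothetical and only the
# arithmetic mirrors seekableunbundlepart._findchunk().
_example_chunkindex = [(0, 100), (32768, 32912), (65536, 65724)]


def _example_findchunk(index, pos):
    # same lookup as _findchunk(): (chunk number, offset within that chunk)
    for chunk, (ppos, fpos) in enumerate(index):
        if ppos == pos:
            return chunk, 0
        elif ppos > pos:
            return chunk - 1, pos - index[chunk - 1][0]
    raise ValueError(b'Unknown chunk')


# Seeking to payload offset 40000 resolves to chunk 1 at internal offset
# 7232; seek() then positions the file at 32912 and discards 7232 bytes.
assert _example_findchunk(_example_chunkindex, 40000) == (1, 7232)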
1618 1618
1619 1619 # These are only the static capabilities.
1620 1620 # Check the 'getrepocaps' function for the rest.
1621 1621 capabilities = {
1622 1622 b'HG20': (),
1623 1623 b'bookmarks': (),
1624 1624 b'error': (b'abort', b'unsupportedcontent', b'pushraced', b'pushkey'),
1625 1625 b'listkeys': (),
1626 1626 b'pushkey': (),
1627 1627 b'digests': tuple(sorted(util.DIGESTS.keys())),
1628 1628 b'remote-changegroup': (b'http', b'https'),
1629 1629 b'hgtagsfnodes': (),
1630 1630 b'phases': (b'heads',),
1631 1631 b'stream': (b'v2',),
1632 1632 }
1633 1633
1634 1634
1635 1635 def getrepocaps(repo, allowpushback=False, role=None):
1636 1636 """return the bundle2 capabilities for a given repo
1637 1637
1638 1638 Exists to allow extensions (like evolution) to mutate the capabilities.
1639 1639
1640 1640 The returned value is used for servers advertising their capabilities as
1641 1641 well as clients advertising their capabilities to servers as part of
1642 1642 bundle2 requests. The ``role`` argument specifies which is which.
1643 1643 """
1644 1644 if role not in (b'client', b'server'):
1645 1645 raise error.ProgrammingError(b'role argument must be client or server')
1646 1646
1647 1647 caps = capabilities.copy()
1648 1648 caps[b'changegroup'] = tuple(
1649 1649 sorted(changegroup.supportedincomingversions(repo))
1650 1650 )
1651 1651 if obsolete.isenabled(repo, obsolete.exchangeopt):
1652 1652 supportedformat = tuple(b'V%i' % v for v in obsolete.formats)
1653 1653 caps[b'obsmarkers'] = supportedformat
1654 1654 if allowpushback:
1655 1655 caps[b'pushback'] = ()
1656 1656 cpmode = repo.ui.config(b'server', b'concurrent-push-mode')
1657 1657 if cpmode == b'check-related':
1658 1658 caps[b'checkheads'] = (b'related',)
1659 1659 if b'phases' in repo.ui.configlist(b'devel', b'legacy.exchange'):
1660 1660 caps.pop(b'phases')
1661 1661
1662 1662 # Don't advertise stream clone support in server mode if not configured.
1663 1663 if role == b'server':
1664 1664 streamsupported = repo.ui.configbool(
1665 1665 b'server', b'uncompressed', untrusted=True
1666 1666 )
1667 1667 featuresupported = repo.ui.configbool(b'server', b'bundle2.stream')
1668 1668
1669 1669 if not streamsupported or not featuresupported:
1670 1670 caps.pop(b'stream')
1671 1671 # Else always advertise support on client, because payload support
1672 1672 # should always be advertised.
1673 1673
1674 1674 if repo.ui.configbool(b'experimental', b'stream-v3'):
1675 1675 if b'stream' in caps:
1676 1676 caps[b'stream'] += (b'v3-exp',)
1677 1677
1678 1678     # b'rev-branch-cache' is no longer advertised, but still supported
1679 1679 # for legacy clients.
1680 1680
1681 1681 return caps
1682 1682
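# A minimal sketch of how these capabilities are typically consumed on the
# server side; the dict contents depend on repository configuration and the
# values shown in the comment are only indicative.
def _example_server_caps(repo):
    caps = getrepocaps(repo, role=b'server')
    # e.g. {b'HG20': (), b'changegroup': (b'01', b'02'), b'phases': (b'heads',), ...}
    return caps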
1683 1683
1684 1684 def bundle2caps(remote):
1685 1685 """return the bundle capabilities of a peer as dict"""
1686 1686 raw = remote.capable(b'bundle2')
1687 1687 if not raw and raw != b'':
1688 1688 return {}
1689 1689 capsblob = urlreq.unquote(remote.capable(b'bundle2'))
1690 1690 return decodecaps(capsblob)
1691 1691
1692 1692
1693 1693 def obsmarkersversion(caps):
1694 1694 """extract the list of supported obsmarkers versions from a bundle2caps dict"""
1695 1695 obscaps = caps.get(b'obsmarkers', ())
1696 1696 return [int(c[1:]) for c in obscaps if c.startswith(b'V')]
1697 1697
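# Worked example for the helper above: capability values such as b'V0' and
# b'V1' decode to the integer version list [0, 1], and a missing entry
# decodes to an empty list.
assert obsmarkersversion({b'obsmarkers': (b'V0', b'V1')}) == [0, 1]
assert obsmarkersversion({}) == []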
1698 1698
1699 1699 def writenewbundle(
1700 1700 ui,
1701 1701 repo,
1702 1702 source,
1703 1703 filename,
1704 1704 bundletype,
1705 1705 outgoing,
1706 1706 opts,
1707 1707 vfs=None,
1708 1708 compression=None,
1709 1709 compopts=None,
1710 1710 allow_internal=False,
1711 1711 ):
1712 1712 if bundletype.startswith(b'HG10'):
1713 1713 cg = changegroup.makechangegroup(repo, outgoing, b'01', source)
1714 1714 return writebundle(
1715 1715 ui,
1716 1716 cg,
1717 1717 filename,
1718 1718 bundletype,
1719 1719 vfs=vfs,
1720 1720 compression=compression,
1721 1721 compopts=compopts,
1722 1722 )
1723 1723 elif not bundletype.startswith(b'HG20'):
1724 1724 raise error.ProgrammingError(b'unknown bundle type: %s' % bundletype)
1725 1725
1726 1726     # enforce that no internal phases are to be bundled
1727 1727 bundled_internal = repo.revs(b"%ln and _internal()", outgoing.ancestorsof)
1728 1728 if bundled_internal and not allow_internal:
1729 1729 count = len(repo.revs(b'%ln and _internal()', outgoing.missing))
1730 1730         msg = "backup bundle would contain %d internal changesets"
1731 1731 msg %= count
1732 1732 raise error.ProgrammingError(msg)
1733 1733
1734 1734 caps = {}
1735 1735 if opts.get(b'obsolescence', False):
1736 1736 caps[b'obsmarkers'] = (b'V1',)
1737 1737 if opts.get(b'streamv2'):
1738 1738 caps[b'stream'] = [b'v2']
1739 1739 elif opts.get(b'streamv3-exp'):
1740 1740 caps[b'stream'] = [b'v3-exp']
1741 1741 bundle = bundle20(ui, caps)
1742 1742 bundle.setcompression(compression, compopts)
1743 1743 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1744 1744 chunkiter = bundle.getchunks()
1745 1745
1746 1746 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1747 1747
1748 1748
1749 1749 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1750 1750 # We should eventually reconcile this logic with the one behind
1751 1751 # 'exchange.getbundle2partsgenerator'.
1752 1752 #
1753 1753 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1754 1754 # different right now. So we keep them separated for now for the sake of
1755 1755 # simplicity.
1756 1756
1757 1757     # we might not always want a changegroup in such a bundle, for example in
1758 1758 # stream bundles
1759 1759 if opts.get(b'changegroup', True):
1760 1760 cgversion = opts.get(b'cg.version')
1761 1761 if cgversion is None:
1762 1762 cgversion = changegroup.safeversion(repo)
1763 1763 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1764 1764 part = bundler.newpart(b'changegroup', data=cg.getchunks())
1765 1765 part.addparam(b'version', cg.version)
1766 1766 if b'clcount' in cg.extras:
1767 1767 part.addparam(
1768 1768 b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
1769 1769 )
1770 1770 if opts.get(b'phases'):
1771 1771 target_phase = phases.draft
1772 1772 for head in outgoing.ancestorsof:
1773 1773 target_phase = max(target_phase, repo[head].phase())
1774 1774 if target_phase > phases.draft:
1775 1775 part.addparam(
1776 1776 b'targetphase',
1777 1777 b'%d' % target_phase,
1778 1778 mandatory=False,
1779 1779 )
1780 1780 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
1781 1781 part.addparam(b'exp-sidedata', b'1')
1782 1782
1783 1783 if opts.get(b'streamv2', False):
1784 1784 addpartbundlestream2(bundler, repo, stream=True)
1785 1785
1786 1786 if opts.get(b'streamv3-exp', False):
1787 1787 addpartbundlestream2(bundler, repo, stream=True)
1788 1788
1789 1789 if opts.get(b'tagsfnodescache', True):
1790 1790 addparttagsfnodescache(repo, bundler, outgoing)
1791 1791
1792 1792 if opts.get(b'revbranchcache', True):
1793 1793 addpartrevbranchcache(repo, bundler, outgoing)
1794 1794
1795 1795 if opts.get(b'obsolescence', False):
1796 1796 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1797 1797 buildobsmarkerspart(
1798 1798 bundler,
1799 1799 obsmarkers,
1800 1800 mandatory=opts.get(b'obsolescence-mandatory', True),
1801 1801 )
1802 1802
1803 1803 if opts.get(b'phases', False):
1804 1804 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1805 1805 phasedata = phases.binaryencode(headsbyphase)
1806 1806 bundler.newpart(b'phase-heads', data=phasedata)
1807 1807
1808 1808
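# A hedged sketch of the kind of `opts` mapping that drives the part
# selection above; the keys are the ones _addpartsfromopts() checks, the
# values are illustrative.
_example_bundle_opts = {
    b'changegroup': True,  # emit a 'changegroup' part
    b'cg.version': b'02',  # explicit changegroup version (else safeversion())
    b'phases': True,  # add a 'phase-heads' part
    b'obsolescence': False,  # skip the 'obsmarkers' part
    b'tagsfnodescache': True,  # add 'hgtagsfnodes' when entries exist
    b'revbranchcache': True,  # add 'cache:rev-branch-cache'
}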
1809 1809 def addparttagsfnodescache(repo, bundler, outgoing):
1810 1810 # we include the tags fnode cache for the bundle changeset
1811 1811     # (as an optional part)
1812 1812 cache = tags.hgtagsfnodescache(repo.unfiltered())
1813 1813 chunks = []
1814 1814
1815 1815 # .hgtags fnodes are only relevant for head changesets. While we could
1816 1816 # transfer values for all known nodes, there will likely be little to
1817 1817 # no benefit.
1818 1818 #
1819 1819 # We don't bother using a generator to produce output data because
1820 1820 # a) we only have 40 bytes per head and even esoteric numbers of heads
1821 1821 # consume little memory (1M heads is 40MB) b) we don't want to send the
1822 1822 # part if we don't have entries and knowing if we have entries requires
1823 1823 # cache lookups.
1824 1824 for node in outgoing.ancestorsof:
1825 1825 # Don't compute missing, as this may slow down serving.
1826 1826 fnode = cache.getfnode(node, computemissing=False)
1827 1827 if fnode:
1828 1828 chunks.extend([node, fnode])
1829 1829
1830 1830 if chunks:
1831 1831 bundler.newpart(b'hgtagsfnodes', data=b''.join(chunks))
1832 1832
1833 1833
1834 1834 def addpartrevbranchcache(repo, bundler, outgoing):
1835 1835 # we include the rev branch cache for the bundle changeset
1836 1836     # (as an optional part)
1837 1837 cache = repo.revbranchcache()
1838 1838 cl = repo.unfiltered().changelog
1839 1839 branchesdata = collections.defaultdict(lambda: (set(), set()))
1840 1840 for node in outgoing.missing:
1841 1841 branch, close = cache.branchinfo(cl.rev(node))
1842 1842 branchesdata[branch][close].add(node)
1843 1843
1844 1844 def generate():
1845 1845 for branch, (nodes, closed) in sorted(branchesdata.items()):
1846 1846 utf8branch = encoding.fromlocal(branch)
1847 1847 yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
1848 1848 yield utf8branch
1849 1849 for n in sorted(nodes):
1850 1850 yield n
1851 1851 for n in sorted(closed):
1852 1852 yield n
1853 1853
1854 1854 bundler.newpart(b'cache:rev-branch-cache', data=generate(), mandatory=False)
1855 1855
1856 1856
1857 1857 def _formatrequirementsspec(requirements):
1858 1858 requirements = [req for req in requirements if req != b"shared"]
1859 1859 return urlreq.quote(b','.join(sorted(requirements)))
1860 1860
1861 1861
1862 1862 def _formatrequirementsparams(requirements):
1863 1863 requirements = _formatrequirementsspec(requirements)
1864 1864 params = b"%s%s" % (urlreq.quote(b"requirements="), requirements)
1865 1865 return params
1866 1866
1867 1867
1868 1868 def format_remote_wanted_sidedata(repo):
1869 1869 """Formats a repo's wanted sidedata categories into a bytestring for
1870 1870 capabilities exchange."""
1871 1871 wanted = b""
1872 1872 if repo._wanted_sidedata:
1873 1873 wanted = b','.join(
1874 1874 pycompat.bytestr(c) for c in sorted(repo._wanted_sidedata)
1875 1875 )
1876 1876 return wanted
1877 1877
1878 1878
1879 1879 def read_remote_wanted_sidedata(remote):
1880 1880 sidedata_categories = remote.capable(b'exp-wanted-sidedata')
1881 1881 return read_wanted_sidedata(sidedata_categories)
1882 1882
1883 1883
1884 1884 def read_wanted_sidedata(formatted):
1885 1885 if formatted:
1886 1886 return set(formatted.split(b','))
1887 1887 return set()
1888 1888
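# Worked example: the wire format is a plain comma-separated byte string; the
# category names used here are illustrative.
assert read_wanted_sidedata(b'copies,phases') == {b'copies', b'phases'}
assert read_wanted_sidedata(b'') == set()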
1889 1889
1890 1890 def addpartbundlestream2(bundler, repo, **kwargs):
1891 1891 if not kwargs.get('stream', False):
1892 1892 return
1893 1893
1894 1894 if not streamclone.allowservergeneration(repo):
1895 1895 msg = _(b'stream data requested but server does not allow this feature')
1896 1896 hint = _(b'the client seems buggy')
1897 1897 raise error.Abort(msg, hint=hint)
1898 1898 if not (b'stream' in bundler.capabilities):
1899 1899 msg = _(
1900 1900 b'stream data requested but supported streaming clone versions were not specified'
1901 1901 )
1902 1902 hint = _(b'the client seems buggy')
1903 1903 raise error.Abort(msg, hint=hint)
1904 1904 client_supported = set(bundler.capabilities[b'stream'])
1905 1905 server_supported = set(getrepocaps(repo, role=b'client').get(b'stream', []))
1906 1906 common_supported = client_supported & server_supported
1907 1907 if not common_supported:
1908 1908 msg = _(b'no common supported version with the client: %s; %s')
1909 1909 str_server = b','.join(sorted(server_supported))
1910 1910 str_client = b','.join(sorted(client_supported))
1911 1911 msg %= (str_server, str_client)
1912 1912 raise error.Abort(msg)
1913 1913 version = max(common_supported)
1914 1914
1915 1915 # Stream clones don't compress well. And compression undermines a
1916 1916 # goal of stream clones, which is to be fast. Communicate the desire
1917 1917 # to avoid compression to consumers of the bundle.
1918 1918 bundler.prefercompressed = False
1919 1919
1920 1920 # get the includes and excludes
1921 1921 includepats = kwargs.get('includepats')
1922 1922 excludepats = kwargs.get('excludepats')
1923 1923
1924 1924 narrowstream = repo.ui.configbool(
1925 1925 b'experimental', b'server.stream-narrow-clones'
1926 1926 )
1927 1927
1928 1928 if (includepats or excludepats) and not narrowstream:
1929 1929 raise error.Abort(_(b'server does not support narrow stream clones'))
1930 1930
1931 1931 includeobsmarkers = False
1932 1932 if repo.obsstore:
1933 1933 remoteversions = obsmarkersversion(bundler.capabilities)
1934 1934 if not remoteversions:
1935 1935 raise error.Abort(
1936 1936 _(
1937 1937 b'server has obsolescence markers, but client '
1938 1938 b'cannot receive them via stream clone'
1939 1939 )
1940 1940 )
1941 1941 elif repo.obsstore._version in remoteversions:
1942 1942 includeobsmarkers = True
1943 1943
1944 1944 if version == b"v2":
1945 1945 filecount, bytecount, it = streamclone.generatev2(
1946 1946 repo, includepats, excludepats, includeobsmarkers
1947 1947 )
1948 1948 requirements = streamclone.streamed_requirements(repo)
1949 1949 requirements = _formatrequirementsspec(requirements)
1950 1950 part = bundler.newpart(b'stream2', data=it)
1951 1951 part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True)
1952 1952 part.addparam(b'filecount', b'%d' % filecount, mandatory=True)
1953 1953 part.addparam(b'requirements', requirements, mandatory=True)
1954 1954 elif version == b"v3-exp":
1955 1955 it = streamclone.generatev3(
1956 1956 repo, includepats, excludepats, includeobsmarkers
1957 1957 )
1958 1958 requirements = streamclone.streamed_requirements(repo)
1959 1959 requirements = _formatrequirementsspec(requirements)
1960 1960 part = bundler.newpart(b'stream3-exp', data=it)
1961 1961 part.addparam(b'requirements', requirements, mandatory=True)
1962 1962
1963 1963
1964 1964 def buildobsmarkerspart(bundler, markers, mandatory=True):
1965 1965 """add an obsmarker part to the bundler with <markers>
1966 1966
1967 1967 No part is created if markers is empty.
1968 1968 Raises ValueError if the bundler doesn't support any known obsmarker format.
1969 1969 """
1970 1970 if not markers:
1971 1971 return None
1972 1972
1973 1973 remoteversions = obsmarkersversion(bundler.capabilities)
1974 1974 version = obsolete.commonversion(remoteversions)
1975 1975 if version is None:
1976 1976 raise ValueError(b'bundler does not support common obsmarker format')
1977 1977 stream = obsolete.encodemarkers(markers, True, version=version)
1978 1978 return bundler.newpart(b'obsmarkers', data=stream, mandatory=mandatory)
1979 1979
1980 1980
1981 1981 def writebundle(
1982 1982 ui, cg, filename, bundletype, vfs=None, compression=None, compopts=None
1983 1983 ):
1984 1984 """Write a bundle file and return its filename.
1985 1985
1986 1986 Existing files will not be overwritten.
1987 1987 If no filename is specified, a temporary file is created.
1988 1988 bz2 compression can be turned off.
1989 1989 The bundle file will be deleted in case of errors.
1990 1990 """
1991 1991
1992 1992 if bundletype == b"HG20":
1993 1993 bundle = bundle20(ui)
1994 1994 bundle.setcompression(compression, compopts)
1995 1995 part = bundle.newpart(b'changegroup', data=cg.getchunks())
1996 1996 part.addparam(b'version', cg.version)
1997 1997 if b'clcount' in cg.extras:
1998 1998 part.addparam(
1999 1999 b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
2000 2000 )
2001 2001 chunkiter = bundle.getchunks()
2002 2002 else:
2003 2003 # compression argument is only for the bundle2 case
2004 2004 assert compression is None
2005 2005 if cg.version != b'01':
2006 2006 raise error.Abort(
2007 2007                 _(b'old bundle types only support v1 changegroups')
2008 2008 )
2009 2009
2010 2010 # HG20 is the case without 2 values to unpack, but is handled above.
2011 2011 # pytype: disable=bad-unpacking
2012 2012 header, comp = bundletypes[bundletype]
2013 2013 # pytype: enable=bad-unpacking
2014 2014
2015 2015 if comp not in util.compengines.supportedbundletypes:
2016 2016 raise error.Abort(_(b'unknown stream compression type: %s') % comp)
2017 2017 compengine = util.compengines.forbundletype(comp)
2018 2018
2019 2019 def chunkiter():
2020 2020 yield header
2021 2021 for chunk in compengine.compressstream(cg.getchunks(), compopts):
2022 2022 yield chunk
2023 2023
2024 2024 chunkiter = chunkiter()
2025 2025
2026 2026 # parse the changegroup data, otherwise we will block
2027 2027 # in case of sshrepo because we don't know the end of the stream
2028 2028 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
2029 2029
2030 2030
2031 2031 def combinechangegroupresults(op):
2032 2032 """logic to combine 0 or more addchangegroup results into one"""
2033 2033 results = [r.get(b'return', 0) for r in op.records[b'changegroup']]
2034 2034 changedheads = 0
2035 2035 result = 1
2036 2036 for ret in results:
2037 2037 # If any changegroup result is 0, return 0
2038 2038 if ret == 0:
2039 2039 result = 0
2040 2040 break
2041 2041 if ret < -1:
2042 2042 changedheads += ret + 1
2043 2043 elif ret > 1:
2044 2044 changedheads += ret - 1
2045 2045 if changedheads > 0:
2046 2046 result = 1 + changedheads
2047 2047 elif changedheads < 0:
2048 2048 result = -1 + changedheads
2049 2049 return result
2050 2050
2051 2051
2052 2052 @parthandler(
2053 2053 b'changegroup',
2054 2054 (
2055 2055 b'version',
2056 2056 b'nbchanges',
2057 2057 b'exp-sidedata',
2058 2058 b'exp-wanted-sidedata',
2059 2059 b'treemanifest',
2060 2060 b'targetphase',
2061 2061 ),
2062 2062 )
2063 2063 def handlechangegroup(op, inpart):
2064 2064 """apply a changegroup part on the repo"""
2065 2065 from . import localrepo
2066 2066
2067 2067 tr = op.gettransaction()
2068 2068 unpackerversion = inpart.params.get(b'version', b'01')
2069 2069 # We should raise an appropriate exception here
2070 2070 cg = changegroup.getunbundler(unpackerversion, inpart, None)
2071 2071 # the source and url passed here are overwritten by the one contained in
2072 2072 # the transaction.hookargs argument. So 'bundle2' is a placeholder
2073 2073 nbchangesets = None
2074 2074 if b'nbchanges' in inpart.params:
2075 2075 nbchangesets = int(inpart.params.get(b'nbchanges'))
2076 2076 if b'treemanifest' in inpart.params and not scmutil.istreemanifest(op.repo):
2077 2077 if len(op.repo.changelog) != 0:
2078 2078 raise error.Abort(
2079 2079 _(
2080 2080 b"bundle contains tree manifests, but local repo is "
2081 2081 b"non-empty and does not use tree manifests"
2082 2082 )
2083 2083 )
2084 2084 op.repo.requirements.add(requirements.TREEMANIFEST_REQUIREMENT)
2085 2085 op.repo.svfs.options = localrepo.resolvestorevfsoptions(
2086 2086 op.repo.ui, op.repo.requirements, op.repo.features
2087 2087 )
2088 2088 scmutil.writereporequirements(op.repo)
2089 2089
2090 2090 extrakwargs = {}
2091 2091 targetphase = inpart.params.get(b'targetphase')
2092 2092 if targetphase is not None:
2093 2093 extrakwargs['targetphase'] = int(targetphase)
2094 2094
2095 2095 remote_sidedata = inpart.params.get(b'exp-wanted-sidedata')
2096 2096 extrakwargs['sidedata_categories'] = read_wanted_sidedata(remote_sidedata)
2097 2097
2098 2098 ret = _processchangegroup(
2099 2099 op,
2100 2100 cg,
2101 2101 tr,
2102 2102 op.source,
2103 2103 b'bundle2',
2104 2104 expectedtotal=nbchangesets,
2105 2105 **extrakwargs
2106 2106 )
2107 2107 if op.reply is not None:
2108 2108 # This is definitely not the final form of this
2109 2109         # return. But one needs to start somewhere.
2110 2110 part = op.reply.newpart(b'reply:changegroup', mandatory=False)
2111 2111 part.addparam(
2112 2112 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2113 2113 )
2114 2114 part.addparam(b'return', b'%i' % ret, mandatory=False)
2115 2115 assert not inpart.read()
2116 2116
2117 2117
2118 2118 _remotechangegroupparams = tuple(
2119 2119 [b'url', b'size', b'digests']
2120 2120 + [b'digest:%s' % k for k in util.DIGESTS.keys()]
2121 2121 )
2122 2122
2123 2123
2124 2124 @parthandler(b'remote-changegroup', _remotechangegroupparams)
2125 2125 def handleremotechangegroup(op, inpart):
2126 2126     """apply a bundle10 on the repo, given a url and validation information
2127 2127
2128 2128     All the information about the remote bundle to import is given as
2129 2129 parameters. The parameters include:
2130 2130 - url: the url to the bundle10.
2131 2131 - size: the bundle10 file size. It is used to validate what was
2132 2132 retrieved by the client matches the server knowledge about the bundle.
2133 2133 - digests: a space separated list of the digest types provided as
2134 2134 parameters.
2135 2135 - digest:<digest-type>: the hexadecimal representation of the digest with
2136 2136 that name. Like the size, it is used to validate what was retrieved by
2137 2137 the client matches what the server knows about the bundle.
2138 2138
2139 2139 When multiple digest types are given, all of them are checked.
2140 2140 """
2141 2141 try:
2142 2142 raw_url = inpart.params[b'url']
2143 2143 except KeyError:
2144 2144 raise error.Abort(_(b'remote-changegroup: missing "%s" param') % b'url')
2145 2145 parsed_url = urlutil.url(raw_url)
2146 2146 if parsed_url.scheme not in capabilities[b'remote-changegroup']:
2147 2147 raise error.Abort(
2148 2148 _(b'remote-changegroup does not support %s urls')
2149 2149 % parsed_url.scheme
2150 2150 )
2151 2151
2152 2152 try:
2153 2153 size = int(inpart.params[b'size'])
2154 2154 except ValueError:
2155 2155 raise error.Abort(
2156 2156 _(b'remote-changegroup: invalid value for param "%s"') % b'size'
2157 2157 )
2158 2158 except KeyError:
2159 2159 raise error.Abort(
2160 2160 _(b'remote-changegroup: missing "%s" param') % b'size'
2161 2161 )
2162 2162
2163 2163 digests = {}
2164 2164 for typ in inpart.params.get(b'digests', b'').split():
2165 2165 param = b'digest:%s' % typ
2166 2166 try:
2167 2167 value = inpart.params[param]
2168 2168 except KeyError:
2169 2169 raise error.Abort(
2170 2170 _(b'remote-changegroup: missing "%s" param') % param
2171 2171 )
2172 2172 digests[typ] = value
2173 2173
2174 2174 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
2175 2175
2176 2176 tr = op.gettransaction()
2177 2177 from . import exchange
2178 2178
2179 2179 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
2180 2180 if not isinstance(cg, changegroup.cg1unpacker):
2181 2181 raise error.Abort(
2182 2182 _(b'%s: not a bundle version 1.0') % urlutil.hidepassword(raw_url)
2183 2183 )
2184 2184 ret = _processchangegroup(op, cg, tr, op.source, b'bundle2')
2185 2185 if op.reply is not None:
2186 2186 # This is definitely not the final form of this
2187 2187         # return. But one needs to start somewhere.
2188 2188 part = op.reply.newpart(b'reply:changegroup')
2189 2189 part.addparam(
2190 2190 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2191 2191 )
2192 2192 part.addparam(b'return', b'%i' % ret, mandatory=False)
2193 2193 try:
2194 2194 real_part.validate()
2195 2195 except error.Abort as e:
2196 2196 raise error.Abort(
2197 2197 _(b'bundle at %s is corrupted:\n%s')
2198 2198 % (urlutil.hidepassword(raw_url), e.message)
2199 2199 )
2200 2200 assert not inpart.read()
2201 2201
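# A hedged sketch of the emitting side for the part handled above; the URL,
# size and digest values are purely illustrative.
def _example_remote_changegroup_part(bundler):
    part = bundler.newpart(b'remote-changegroup')
    part.addparam(b'url', b'https://example.org/pull.hg')
    part.addparam(b'size', b'12345')
    part.addparam(b'digests', b'sha1')
    part.addparam(b'digest:sha1', b'0' * 40)
    return part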
2202 2202
2203 2203 @parthandler(b'reply:changegroup', (b'return', b'in-reply-to'))
2204 2204 def handlereplychangegroup(op, inpart):
2205 2205 ret = int(inpart.params[b'return'])
2206 2206 replyto = int(inpart.params[b'in-reply-to'])
2207 2207 op.records.add(b'changegroup', {b'return': ret}, replyto)
2208 2208
2209 2209
2210 2210 @parthandler(b'check:bookmarks')
2211 2211 def handlecheckbookmarks(op, inpart):
2212 2212 """check location of bookmarks
2213 2213
2214 2214     This part is to be used to detect push races regarding bookmarks: it
2215 2215     contains binary encoded (bookmark, node) tuples. If the local state does
2216 2216     not match the ones in the part, a PushRaced exception is raised.
2217 2217 """
2218 2218 bookdata = bookmarks.binarydecode(op.repo, inpart)
2219 2219
2220 2220 msgstandard = (
2221 2221 b'remote repository changed while pushing - please try again '
2222 2222 b'(bookmark "%s" move from %s to %s)'
2223 2223 )
2224 2224 msgmissing = (
2225 2225 b'remote repository changed while pushing - please try again '
2226 2226 b'(bookmark "%s" is missing, expected %s)'
2227 2227 )
2228 2228 msgexist = (
2229 2229 b'remote repository changed while pushing - please try again '
2230 2230 b'(bookmark "%s" set on %s, expected missing)'
2231 2231 )
2232 2232 for book, node in bookdata:
2233 2233 currentnode = op.repo._bookmarks.get(book)
2234 2234 if currentnode != node:
2235 2235 if node is None:
2236 2236 finalmsg = msgexist % (book, short(currentnode))
2237 2237 elif currentnode is None:
2238 2238 finalmsg = msgmissing % (book, short(node))
2239 2239 else:
2240 2240 finalmsg = msgstandard % (
2241 2241 book,
2242 2242 short(node),
2243 2243 short(currentnode),
2244 2244 )
2245 2245 raise error.PushRaced(finalmsg)
2246 2246
2247 2247
2248 2248 @parthandler(b'check:heads')
2249 2249 def handlecheckheads(op, inpart):
2250 2250     """check that the heads of the repo did not change
2251 2251
2252 2252 This is used to detect a push race when using unbundle.
2253 2253 This replaces the "heads" argument of unbundle."""
2254 2254 h = inpart.read(20)
2255 2255 heads = []
2256 2256 while len(h) == 20:
2257 2257 heads.append(h)
2258 2258 h = inpart.read(20)
2259 2259 assert not h
2260 2260 # Trigger a transaction so that we are guaranteed to have the lock now.
2261 2261 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2262 2262 op.gettransaction()
2263 2263 if sorted(heads) != sorted(op.repo.heads()):
2264 2264 raise error.PushRaced(
2265 2265 b'remote repository changed while pushing - please try again'
2266 2266 )
2267 2267
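# A minimal sketch of the pushing side for the part handled above: the
# payload is simply the concatenation of the 20-byte head nodes the client
# observed on the remote before building its push.
def _example_check_heads_part(bundler, remoteheads):
    return bundler.newpart(b'check:heads', data=b''.join(remoteheads))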
2268 2268
2269 2269 @parthandler(b'check:updated-heads')
2270 2270 def handlecheckupdatedheads(op, inpart):
2271 2271 """check for race on the heads touched by a push
2272 2272
2273 2273     This is similar to 'check:heads' but focuses on the heads actually updated
2274 2274     during the push. If other activities happen on unrelated heads, they are
2275 2275     ignored.
2276 2276
2277 2277     This allows servers with high traffic to avoid push contention as long as
2278 2278     only unrelated parts of the graph are involved.
2279 2279 h = inpart.read(20)
2280 2280 heads = []
2281 2281 while len(h) == 20:
2282 2282 heads.append(h)
2283 2283 h = inpart.read(20)
2284 2284 assert not h
2285 2285 # trigger a transaction so that we are guaranteed to have the lock now.
2286 2286 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2287 2287 op.gettransaction()
2288 2288
2289 2289 currentheads = set()
2290 2290 for ls in op.repo.branchmap().iterheads():
2291 2291 currentheads.update(ls)
2292 2292
2293 2293 for h in heads:
2294 2294 if h not in currentheads:
2295 2295 raise error.PushRaced(
2296 2296 b'remote repository changed while pushing - '
2297 2297 b'please try again'
2298 2298 )
2299 2299
2300 2300
2301 2301 @parthandler(b'check:phases')
2302 2302 def handlecheckphases(op, inpart):
2303 2303 """check that phase boundaries of the repository did not change
2304 2304
2305 2305 This is used to detect a push race.
2306 2306 """
2307 2307 phasetonodes = phases.binarydecode(inpart)
2308 2308 unfi = op.repo.unfiltered()
2309 2309 cl = unfi.changelog
2310 2310 phasecache = unfi._phasecache
2311 2311 msg = (
2312 2312 b'remote repository changed while pushing - please try again '
2313 2313 b'(%s is %s expected %s)'
2314 2314 )
2315 2315 for expectedphase, nodes in phasetonodes.items():
2316 2316 for n in nodes:
2317 2317 actualphase = phasecache.phase(unfi, cl.rev(n))
2318 2318 if actualphase != expectedphase:
2319 2319 finalmsg = msg % (
2320 2320 short(n),
2321 2321 phases.phasenames[actualphase],
2322 2322 phases.phasenames[expectedphase],
2323 2323 )
2324 2324 raise error.PushRaced(finalmsg)
2325 2325
2326 2326
2327 2327 @parthandler(b'output')
2328 2328 def handleoutput(op, inpart):
2329 2329 """forward output captured on the server to the client"""
2330 2330 for line in inpart.read().splitlines():
2331 2331 op.ui.status(_(b'remote: %s\n') % line)
2332 2332
2333 2333
2334 2334 @parthandler(b'replycaps')
2335 2335 def handlereplycaps(op, inpart):
2336 2336 """Notify that a reply bundle should be created
2337 2337
2338 2338 The payload contains the capabilities information for the reply"""
2339 2339 caps = decodecaps(inpart.read())
2340 2340 if op.reply is None:
2341 2341 op.reply = bundle20(op.ui, caps)
2342 2342
2343 2343
2344 2344 class AbortFromPart(error.Abort):
2345 2345 """Sub-class of Abort that denotes an error from a bundle2 part."""
2346 2346
2347 2347
2348 2348 @parthandler(b'error:abort', (b'message', b'hint'))
2349 2349 def handleerrorabort(op, inpart):
2350 2350 """Used to transmit abort error over the wire"""
2351 2351 raise AbortFromPart(
2352 2352 inpart.params[b'message'], hint=inpart.params.get(b'hint')
2353 2353 )
2354 2354
2355 2355
2356 2356 @parthandler(
2357 2357 b'error:pushkey',
2358 2358 (b'namespace', b'key', b'new', b'old', b'ret', b'in-reply-to'),
2359 2359 )
2360 2360 def handleerrorpushkey(op, inpart):
2361 2361 """Used to transmit failure of a mandatory pushkey over the wire"""
2362 2362 kwargs = {}
2363 2363 for name in (b'namespace', b'key', b'new', b'old', b'ret'):
2364 2364 value = inpart.params.get(name)
2365 2365 if value is not None:
2366 2366 kwargs[name] = value
2367 2367 raise error.PushkeyFailed(
2368 2368 inpart.params[b'in-reply-to'], **pycompat.strkwargs(kwargs)
2369 2369 )
2370 2370
2371 2371
2372 2372 @parthandler(b'error:unsupportedcontent', (b'parttype', b'params'))
2373 2373 def handleerrorunsupportedcontent(op, inpart):
2374 2374 """Used to transmit unknown content error over the wire"""
2375 2375 kwargs = {}
2376 2376 parttype = inpart.params.get(b'parttype')
2377 2377 if parttype is not None:
2378 2378 kwargs[b'parttype'] = parttype
2379 2379 params = inpart.params.get(b'params')
2380 2380 if params is not None:
2381 2381 kwargs[b'params'] = params.split(b'\0')
2382 2382
2383 2383 raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))
2384 2384
2385 2385
2386 2386 @parthandler(b'error:pushraced', (b'message',))
2387 2387 def handleerrorpushraced(op, inpart):
2388 2388 """Used to transmit push race error over the wire"""
2389 2389 raise error.ResponseError(_(b'push failed:'), inpart.params[b'message'])
2390 2390
2391 2391
2392 2392 @parthandler(b'listkeys', (b'namespace',))
2393 2393 def handlelistkeys(op, inpart):
2394 2394 """retrieve pushkey namespace content stored in a bundle2"""
2395 2395 namespace = inpart.params[b'namespace']
2396 2396 r = pushkey.decodekeys(inpart.read())
2397 2397 op.records.add(b'listkeys', (namespace, r))
2398 2398
2399 2399
2400 2400 @parthandler(b'pushkey', (b'namespace', b'key', b'old', b'new'))
2401 2401 def handlepushkey(op, inpart):
2402 2402 """process a pushkey request"""
2403 2403 dec = pushkey.decode
2404 2404 namespace = dec(inpart.params[b'namespace'])
2405 2405 key = dec(inpart.params[b'key'])
2406 2406 old = dec(inpart.params[b'old'])
2407 2407 new = dec(inpart.params[b'new'])
2408 2408 # Grab the transaction to ensure that we have the lock before performing the
2409 2409 # pushkey.
2410 2410 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2411 2411 op.gettransaction()
2412 2412 ret = op.repo.pushkey(namespace, key, old, new)
2413 2413 record = {b'namespace': namespace, b'key': key, b'old': old, b'new': new}
2414 2414 op.records.add(b'pushkey', record)
2415 2415 if op.reply is not None:
2416 2416 rpart = op.reply.newpart(b'reply:pushkey')
2417 2417 rpart.addparam(
2418 2418 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2419 2419 )
2420 2420 rpart.addparam(b'return', b'%i' % ret, mandatory=False)
2421 2421 if inpart.mandatory and not ret:
2422 2422 kwargs = {}
2423 2423 for key in (b'namespace', b'key', b'new', b'old', b'ret'):
2424 2424 if key in inpart.params:
2425 2425 kwargs[key] = inpart.params[key]
2426 2426 raise error.PushkeyFailed(
2427 2427 partid=b'%d' % inpart.id, **pycompat.strkwargs(kwargs)
2428 2428 )
2429 2429
2430 2430
2431 2431 @parthandler(b'bookmarks')
2432 2432 def handlebookmark(op, inpart):
2433 2433 """transmit bookmark information
2434 2434
2435 2435 The part contains binary encoded bookmark information.
2436 2436
2437 2437 The exact behavior of this part can be controlled by the 'bookmarks' mode
2438 2438 on the bundle operation.
2439 2439
2440 2440 When mode is 'apply' (the default) the bookmark information is applied as
2441 2441 is to the unbundling repository. Make sure a 'check:bookmarks' part is
2442 2442 issued earlier to check for push races in such update. This behavior is
2443 2443 suitable for pushing.
2444 2444
2445 2445 When mode is 'records', the information is recorded into the 'bookmarks'
2446 2446 records of the bundle operation. This behavior is suitable for pulling.
2447 2447 """
2448 2448 changes = bookmarks.binarydecode(op.repo, inpart)
2449 2449
2450 2450 pushkeycompat = op.repo.ui.configbool(
2451 2451 b'server', b'bookmarks-pushkey-compat'
2452 2452 )
2453 2453 bookmarksmode = op.modes.get(b'bookmarks', b'apply')
2454 2454
2455 2455 if bookmarksmode == b'apply':
2456 2456 tr = op.gettransaction()
2457 2457 bookstore = op.repo._bookmarks
2458 2458 if pushkeycompat:
2459 2459 allhooks = []
2460 2460 for book, node in changes:
2461 2461 hookargs = tr.hookargs.copy()
2462 2462 hookargs[b'pushkeycompat'] = b'1'
2463 2463 hookargs[b'namespace'] = b'bookmarks'
2464 2464 hookargs[b'key'] = book
2465 2465 hookargs[b'old'] = hex(bookstore.get(book, b''))
2466 2466 hookargs[b'new'] = hex(node if node is not None else b'')
2467 2467 allhooks.append(hookargs)
2468 2468
2469 2469 for hookargs in allhooks:
2470 2470 op.repo.hook(
2471 2471 b'prepushkey', throw=True, **pycompat.strkwargs(hookargs)
2472 2472 )
2473 2473
2474 2474 for book, node in changes:
2475 2475 if bookmarks.isdivergent(book):
2476 2476 msg = _(b'cannot accept divergent bookmark %s!') % book
2477 2477 raise error.Abort(msg)
2478 2478
2479 2479 bookstore.applychanges(op.repo, op.gettransaction(), changes)
2480 2480
2481 2481 if pushkeycompat:
2482 2482
2483 2483 def runhook(unused_success):
2484 2484 for hookargs in allhooks:
2485 2485 op.repo.hook(b'pushkey', **pycompat.strkwargs(hookargs))
2486 2486
2487 2487 op.repo._afterlock(runhook)
2488 2488
2489 2489 elif bookmarksmode == b'records':
2490 2490 for book, node in changes:
2491 2491 record = {b'bookmark': book, b'node': node}
2492 2492 op.records.add(b'bookmarks', record)
2493 2493 else:
2494 2494 raise error.ProgrammingError(
2495 2495 b'unknown bookmark mode: %s' % bookmarksmode
2496 2496 )
2497 2497
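# A hedged sketch of selecting the 'records' mode described above; how the
# bundle operation object is created depends on the caller, only the mode
# switch and record access are shown.
def _example_pull_bookmarks(op):
    op.modes[b'bookmarks'] = b'records'
    # ... process the incoming bundle ...
    return op.records[b'bookmarks']  # [{b'bookmark': ..., b'node': ...}, ...]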
2498 2498
2499 2499 @parthandler(b'phase-heads')
2500 2500 def handlephases(op, inpart):
2501 2501 """apply phases from bundle part to repo"""
2502 2502 headsbyphase = phases.binarydecode(inpart)
2503 2503 phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)
2504 2504
2505 2505
2506 2506 @parthandler(b'reply:pushkey', (b'return', b'in-reply-to'))
2507 2507 def handlepushkeyreply(op, inpart):
2508 2508 """retrieve the result of a pushkey request"""
2509 2509 ret = int(inpart.params[b'return'])
2510 2510 partid = int(inpart.params[b'in-reply-to'])
2511 2511 op.records.add(b'pushkey', {b'return': ret}, partid)
2512 2512
2513 2513
2514 2514 @parthandler(b'obsmarkers')
2515 2515 def handleobsmarker(op, inpart):
2516 2516 """add a stream of obsmarkers to the repo"""
2517 2517 tr = op.gettransaction()
2518 2518 markerdata = inpart.read()
2519 2519 if op.ui.config(b'experimental', b'obsmarkers-exchange-debug'):
2520 2520 op.ui.writenoi18n(
2521 2521 b'obsmarker-exchange: %i bytes received\n' % len(markerdata)
2522 2522 )
2523 2523 # The mergemarkers call will crash if marker creation is not enabled.
2524 2524 # we want to avoid this if the part is advisory.
2525 2525 if not inpart.mandatory and op.repo.obsstore.readonly:
2526 2526 op.repo.ui.debug(
2527 2527 b'ignoring obsolescence markers, feature not enabled\n'
2528 2528 )
2529 2529 return
2530 2530 new = op.repo.obsstore.mergemarkers(tr, markerdata)
2531 2531 op.repo.invalidatevolatilesets()
2532 2532 op.records.add(b'obsmarkers', {b'new': new})
2533 2533 if op.reply is not None:
2534 2534 rpart = op.reply.newpart(b'reply:obsmarkers')
2535 2535 rpart.addparam(
2536 2536 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2537 2537 )
2538 2538 rpart.addparam(b'new', b'%i' % new, mandatory=False)
2539 2539
2540 2540
2541 2541 @parthandler(b'reply:obsmarkers', (b'new', b'in-reply-to'))
2542 2542 def handleobsmarkerreply(op, inpart):
2543 2543     """retrieve the result of an obsmarkers request"""
2544 2544 ret = int(inpart.params[b'new'])
2545 2545 partid = int(inpart.params[b'in-reply-to'])
2546 2546 op.records.add(b'obsmarkers', {b'new': ret}, partid)
2547 2547
2548 2548
2549 2549 @parthandler(b'hgtagsfnodes')
2550 2550 def handlehgtagsfnodes(op, inpart):
2551 2551 """Applies .hgtags fnodes cache entries to the local repo.
2552 2552
2553 2553 Payload is pairs of 20 byte changeset nodes and filenodes.
2554 2554 """
2555 2555 # Grab the transaction so we ensure that we have the lock at this point.
2556 2556 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2557 2557 op.gettransaction()
2558 2558 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
2559 2559
2560 2560 count = 0
2561 2561 while True:
2562 2562 node = inpart.read(20)
2563 2563 fnode = inpart.read(20)
2564 2564 if len(node) < 20 or len(fnode) < 20:
2565 2565 op.ui.debug(b'ignoring incomplete received .hgtags fnodes data\n')
2566 2566 break
2567 2567 cache.setfnode(node, fnode)
2568 2568 count += 1
2569 2569
2570 2570 cache.write()
2571 2571 op.ui.debug(b'applied %i hgtags fnodes cache entries\n' % count)
2572 2572
2573 2573
2574 2574 rbcstruct = struct.Struct(b'>III')
2575 2575
2576 2576
2577 2577 @parthandler(b'cache:rev-branch-cache')
2578 2578 def handlerbc(op, inpart):
2579 2579 """Legacy part, ignored for compatibility with bundles from or
2580 2580 for Mercurial before 5.7. Newer Mercurial computes the cache
2581 2581 efficiently enough during unbundling that the additional transfer
2582 2582 is unnecessary."""
2583 2583
2584 2584
2585 2585 @parthandler(b'pushvars')
2586 2586 def bundle2getvars(op, part):
2587 2587 '''unbundle a bundle2 containing shellvars on the server'''
2588 2588 # An option to disable unbundling on server-side for security reasons
2589 2589 if op.ui.configbool(b'push', b'pushvars.server'):
2590 2590 hookargs = {}
2591 2591 for key, value in part.advisoryparams:
2592 2592 key = key.upper()
2593 2593 # We want pushed variables to have USERVAR_ prepended so we know
2594 2594 # they came from the --pushvar flag.
2595 2595 key = b"USERVAR_" + key
2596 2596 hookargs[key] = value
2597 2597 op.addhookargs(hookargs)
2598 2598
2599 2599
2600 2600 @parthandler(b'stream2', (b'requirements', b'filecount', b'bytecount'))
2601 2601 def handlestreamv2bundle(op, part):
2602 2602
2603 2603 requirements = urlreq.unquote(part.params[b'requirements'])
2604 2604 requirements = requirements.split(b',') if requirements else []
2605 2605 filecount = int(part.params[b'filecount'])
2606 2606 bytecount = int(part.params[b'bytecount'])
2607 2607
2608 2608 repo = op.repo
2609 2609 if len(repo):
2610 2610 msg = _(b'cannot apply stream clone to non empty repository')
2611 2611 raise error.Abort(msg)
2612 2612
2613 2613 repo.ui.debug(b'applying stream bundle\n')
2614 2614 streamclone.applybundlev2(repo, part, filecount, bytecount, requirements)
2615 2615
2616 2616
2617 2617 @parthandler(b'stream3-exp', (b'requirements',))
2618 2618 def handlestreamv3bundle(op, part):
2619 2619 requirements = urlreq.unquote(part.params[b'requirements'])
2620 2620 requirements = requirements.split(b',') if requirements else []
2621 2621
2622 2622 repo = op.repo
2623 2623 if len(repo):
2624 2624 msg = _(b'cannot apply stream clone to non empty repository')
2625 2625 raise error.Abort(msg)
2626 2626
2627 2627 repo.ui.debug(b'applying stream bundle\n')
2628 2628 streamclone.applybundlev3(repo, part, requirements)
2629 2629
2630 2630
2631 2631 def widen_bundle(
2632 2632 bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses
2633 2633 ):
2634 2634 """generates bundle2 for widening a narrow clone
2635 2635
2636 2636 bundler is the bundle to which data should be added
2637 2637 repo is the localrepository instance
2638 2638 oldmatcher matches what the client already has
2639 2639 newmatcher matches what the client needs (including what it already has)
2640 2640     common is the set of common heads between server and client
2641 2641     known is a set of revs known on the client side (used in ellipses)
2642 2642     cgversion is the changegroup version to send
2643 2643     ellipses is a boolean value telling whether to send ellipses data or not
2644 2644
2645 2645     returns a bundle2 of the data required for extending
2646 2646 """
2647 2647 commonnodes = set()
2648 2648 cl = repo.changelog
2649 2649 for r in repo.revs(b"::%ln", common):
2650 2650 commonnodes.add(cl.node(r))
2651 2651 if commonnodes:
2652 2652 packer = changegroup.getbundler(
2653 2653 cgversion,
2654 2654 repo,
2655 2655 oldmatcher=oldmatcher,
2656 2656 matcher=newmatcher,
2657 2657 fullnodes=commonnodes,
2658 2658 )
2659 2659 cgdata = packer.generate(
2660 2660 {repo.nullid},
2661 2661 list(commonnodes),
2662 2662 False,
2663 2663 b'narrow_widen',
2664 2664 changelog=False,
2665 2665 )
2666 2666
2667 2667 part = bundler.newpart(b'changegroup', data=cgdata)
2668 2668 part.addparam(b'version', cgversion)
2669 2669 if scmutil.istreemanifest(repo):
2670 2670 part.addparam(b'treemanifest', b'1')
2671 2671 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2672 2672 part.addparam(b'exp-sidedata', b'1')
2673 2673 wanted = format_remote_wanted_sidedata(repo)
2674 2674 part.addparam(b'exp-wanted-sidedata', wanted)
2675 2675
2676 2676 return bundler
@@ -1,1122 +1,1121 b''
1 1 # upgrade.py - functions for in place upgrade of Mercurial repository
2 2 #
3 3 # Copyright (c) 2016-present, Gregory Szorc
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8
9 9 from ..i18n import _
10 10 from .. import (
11 11 error,
12 12 localrepo,
13 13 pycompat,
14 14 requirements,
15 15 revlog,
16 16 util,
17 17 )
18 18
19 19 from ..utils import compression
20 20
21 21 if pycompat.TYPE_CHECKING:
22 22 from typing import (
23 23 List,
24 24 Type,
25 25 )
26 26
27 27
28 28 # list of requirements that request a clone of all revlog if added/removed
29 29 RECLONES_REQUIREMENTS = {
30 30 requirements.GENERALDELTA_REQUIREMENT,
31 31 requirements.SPARSEREVLOG_REQUIREMENT,
32 32 requirements.REVLOGV2_REQUIREMENT,
33 33 requirements.CHANGELOGV2_REQUIREMENT,
34 34 }
35 35
36 36
37 37 def preservedrequirements(repo):
38 38 preserved = {
39 39 requirements.SHARED_REQUIREMENT,
40 40 requirements.NARROW_REQUIREMENT,
41 41 }
42 42 return preserved & repo.requirements
43 43
44 44
45 45 FORMAT_VARIANT = b'deficiency'
46 46 OPTIMISATION = b'optimization'
47 47
48 48
49 49 class improvement:
50 50 """Represents an improvement that can be made as part of an upgrade."""
51 51
52 52 ### The following attributes should be defined for each subclass:
53 53
54 54 # Either ``FORMAT_VARIANT`` or ``OPTIMISATION``.
55 55 # A format variant is where we change the storage format. Not all format
56 56 # variant changes are an obvious problem.
57 57 # An optimization is an action (sometimes optional) that
58 58 # can be taken to further improve the state of the repository.
59 59 type = None
60 60
61 61 # machine-readable string uniquely identifying this improvement. it will be
62 62 # mapped to an action later in the upgrade process.
63 63 name = None
64 64
65 65 # message intended for humans explaining the improvement in more detail,
66 66     # including the implications of it; ``FORMAT_VARIANT`` descriptions
67 67     # should be worded
68 68     # in the present tense.
69 69 description = None
70 70
71 71 # message intended for humans explaining what an upgrade addressing this
72 72 # issue will do. should be worded in the future tense.
73 73 upgrademessage = None
74 74
75 75 # value of current Mercurial default for new repository
76 76 default = None
77 77
78 78 # Message intended for humans which will be shown post an upgrade
79 79 # operation when the improvement will be added
80 80     # operation in which this improvement was added
81 81
82 82 # Message intended for humans which will be shown post an upgrade
83 83 # operation in which this improvement was removed
84 84 postdowngrademessage = None
85 85
86 86 # By default we assume that every improvement touches requirements and all revlogs
87 87
88 88 # Whether this improvement touches filelogs
89 89 touches_filelogs = True
90 90
91 91 # Whether this improvement touches manifests
92 92 touches_manifests = True
93 93
94 94 # Whether this improvement touches changelog
95 95 touches_changelog = True
96 96
97 97 # Whether this improvement changes repository requirements
98 98 touches_requirements = True
99 99
100 100 # Whether this improvement touches the dirstate
101 101 touches_dirstate = False
102 102
103 103     # Can this action be run on a share instead of its main repository
104 104 compatible_with_share = False
105 105
106 106
107 107 allformatvariant = [] # type: List[Type['formatvariant']]
108 108
109 109
110 110 def registerformatvariant(cls):
111 111 allformatvariant.append(cls)
112 112 return cls
113 113
114 114
115 115 class formatvariant(improvement):
116 116 """an improvement subclass dedicated to repository format"""
117 117
118 118 type = FORMAT_VARIANT
119 119
120 120 @staticmethod
121 121 def fromrepo(repo):
122 122 """current value of the variant in the repository"""
123 123 raise NotImplementedError()
124 124
125 125 @staticmethod
126 126 def fromconfig(repo):
127 127 """current value of the variant in the configuration"""
128 128 raise NotImplementedError()
129 129
130 130
131 131 class requirementformatvariant(formatvariant):
132 132 """formatvariant based on a 'requirement' name.
133 133
134 134     Many format variants are controlled by a 'requirement'. We define a small
135 135 subclass to factor the code.
136 136 """
137 137
138 138 # the requirement that control this format variant
139 139 _requirement = None
140 140
141 141 @staticmethod
142 142 def _newreporequirements(ui):
143 143 return localrepo.newreporequirements(
144 144 ui, localrepo.defaultcreateopts(ui)
145 145 )
146 146
147 147 @classmethod
148 148 def fromrepo(cls, repo):
149 149 assert cls._requirement is not None
150 150 return cls._requirement in repo.requirements
151 151
152 152 @classmethod
153 153 def fromconfig(cls, repo):
154 154 assert cls._requirement is not None
155 155 return cls._requirement in cls._newreporequirements(repo.ui)
156 156
157 157
158 158 @registerformatvariant
159 159 class fncache(requirementformatvariant):
160 160 name = b'fncache'
161 161
162 162 _requirement = requirements.FNCACHE_REQUIREMENT
163 163
164 164 default = True
165 165
166 166 description = _(
167 167 b'long and reserved filenames may not work correctly; '
168 168 b'repository performance is sub-optimal'
169 169 )
170 170
171 171 upgrademessage = _(
172 172 b'repository will be more resilient to storing '
173 173 b'certain paths and performance of certain '
174 174 b'operations should be improved'
175 175 )
176 176
177 177
178 178 @registerformatvariant
179 179 class dirstatev2(requirementformatvariant):
180 180 name = b'dirstate-v2'
181 181 _requirement = requirements.DIRSTATE_V2_REQUIREMENT
182 182
183 183 default = False
184 184
185 185 description = _(
186 186 b'version 1 of the dirstate file format requires '
187 187 b'reading and parsing it all at once.\n'
188 188         b'Version 2 has a better structure, '
189 189         b'better information and a lighter update mechanism'
190 190 )
191 191
192 192 upgrademessage = _(b'"hg status" will be faster')
193 193
194 194 touches_filelogs = False
195 195 touches_manifests = False
196 196 touches_changelog = False
197 197 touches_requirements = True
198 198 touches_dirstate = True
199 199 compatible_with_share = True
200 200
201 201
202 202 @registerformatvariant
203 203 class dirstatetrackedkey(requirementformatvariant):
204 204 name = b'tracked-hint'
205 205 _requirement = requirements.DIRSTATE_TRACKED_HINT_V1
206 206
207 207 default = False
208 208
209 209 description = _(
210 210         b'Add a small file to help external tooling that watches the tracked set'
211 211 )
212 212
213 213 upgrademessage = _(
214 214         b'external tools will be informed of potential changes in the tracked set'
215 215 )
216 216
217 217 touches_filelogs = False
218 218 touches_manifests = False
219 219 touches_changelog = False
220 220 touches_requirements = True
221 221 touches_dirstate = True
222 222 compatible_with_share = True
223 223
224 224
225 225 @registerformatvariant
226 226 class dotencode(requirementformatvariant):
227 227 name = b'dotencode'
228 228
229 229 _requirement = requirements.DOTENCODE_REQUIREMENT
230 230
231 231 default = True
232 232
233 233 description = _(
234 234 b'storage of filenames beginning with a period or '
235 235 b'space may not work correctly'
236 236 )
237 237
238 238 upgrademessage = _(
239 239 b'repository will be better able to store files '
240 240 b'beginning with a space or period'
241 241 )
242 242
243 243
244 244 @registerformatvariant
245 245 class generaldelta(requirementformatvariant):
246 246 name = b'generaldelta'
247 247
248 248 _requirement = requirements.GENERALDELTA_REQUIREMENT
249 249
250 250 default = True
251 251
252 252 description = _(
253 253 b'deltas within internal storage are unable to '
254 254 b'choose optimal revisions; repository is larger and '
255 255 b'slower than it could be; interaction with other '
256 256 b'repositories may require extra network and CPU '
257 257 b'resources, making "hg push" and "hg pull" slower'
258 258 )
259 259
260 260 upgrademessage = _(
261 261 b'repository storage will be able to create '
262 262 b'optimal deltas; new repository data will be '
263 263 b'smaller and read times should decrease; '
264 264 b'interacting with other repositories using this '
265 265 b'storage model should require less network and '
266 266 b'CPU resources, making "hg push" and "hg pull" '
267 267 b'faster'
268 268 )
269 269
270 270
271 271 @registerformatvariant
272 272 class sharesafe(requirementformatvariant):
273 273 name = b'share-safe'
274 274 _requirement = requirements.SHARESAFE_REQUIREMENT
275 275
276 276 default = True
277 277
278 278 description = _(
279 279 b'old shared repositories do not share source repository '
280 280 b'requirements and config. This leads to various problems '
281 281 b'when the source repository format is upgraded or some new '
282 282 b'extensions are enabled.'
283 283 )
284 284
285 285 upgrademessage = _(
286 286 b'Upgrades a repository to share-safe format so that future '
287 287 b'shares of this repository share its requirements and configs.'
288 288 )
289 289
290 290 postdowngrademessage = _(
291 291 b'repository downgraded to not use share safe mode, '
292 b'existing shares will not work and needs to'
293 b' be reshared.'
292 b'existing shares will not work and need to be reshared.'
294 293 )
295 294
296 295 postupgrademessage = _(
297 296 b'repository upgraded to share safe mode, existing'
298 297 b' shares will still work in old non-safe mode. '
299 298         b'Re-share existing shares to use them in safe mode.'
300 299 b' New shares will be created in safe mode.'
301 300 )
302 301
303 302 # upgrade only needs to change the requirements
304 303 touches_filelogs = False
305 304 touches_manifests = False
306 305 touches_changelog = False
307 306 touches_requirements = True
308 307
309 308
310 309 @registerformatvariant
311 310 class sparserevlog(requirementformatvariant):
312 311 name = b'sparserevlog'
313 312
314 313 _requirement = requirements.SPARSEREVLOG_REQUIREMENT
315 314
316 315 default = True
317 316
318 317 description = _(
319 318 b'in order to limit disk reading and memory usage on older '
320 319         b'versions, the span of a delta chain from its root to its '
321 320         b'end is limited, whatever the relevant data in this span. '
322 321         b'This can severely limit the ability of Mercurial to build '
323 322         b'good chains of deltas, resulting in much more storage space '
324 323         b'being used and limiting the reusability of on-disk deltas '
325 324         b'during exchange.'
326 325 )
327 326
328 327 upgrademessage = _(
329 328         b'Revlog supports delta chains with more unused data '
330 329         b'between payloads. These gaps will be skipped at read '
331 330         b'time. This allows for better delta chains, making for '
332 331         b'better compression and faster exchange with the server.'
333 332 )
334 333
335 334
336 335 @registerformatvariant
337 336 class persistentnodemap(requirementformatvariant):
338 337 name = b'persistent-nodemap'
339 338
340 339 _requirement = requirements.NODEMAP_REQUIREMENT
341 340
342 341 default = False
343 342
344 343 description = _(
345 344 b'persist the node -> rev mapping on disk to speedup lookup'
346 345 )
347 346
348 347     upgrademessage = _(b'Speed up revision lookup by node id.')
349 348
350 349
351 350 @registerformatvariant
352 351 class copiessdc(requirementformatvariant):
353 352 name = b'copies-sdc'
354 353
355 354 _requirement = requirements.COPIESSDC_REQUIREMENT
356 355
357 356 default = False
358 357
359 358 description = _(b'Stores copies information alongside changesets.')
360 359
361 360 upgrademessage = _(
362 b'Allows to use more efficient algorithm to deal with ' b'copy tracing.'
361         b'Allows using a more efficient algorithm to deal with copy tracing.'
363 362 )
364 363
365 364 touches_filelogs = False
366 365 touches_manifests = False
367 366
368 367
369 368 @registerformatvariant
370 369 class revlogv2(requirementformatvariant):
371 370 name = b'revlog-v2'
372 371 _requirement = requirements.REVLOGV2_REQUIREMENT
373 372 default = False
374 373 description = _(b'Version 2 of the revlog.')
375 374 upgrademessage = _(b'very experimental')
376 375
377 376
378 377 @registerformatvariant
379 378 class changelogv2(requirementformatvariant):
380 379 name = b'changelog-v2'
381 380 _requirement = requirements.CHANGELOGV2_REQUIREMENT
382 381 default = False
383 382 description = _(b'An iteration of the revlog focussed on changelog needs.')
384 383 upgrademessage = _(b'quite experimental')
385 384
386 385 touches_filelogs = False
387 386 touches_manifests = False
388 387
389 388
390 389 @registerformatvariant
391 390 class removecldeltachain(formatvariant):
392 391 name = b'plain-cl-delta'
393 392
394 393 default = True
395 394
396 395 description = _(
397 396 b'changelog storage is using deltas instead of '
398 397 b'raw entries; changelog reading and any '
399 398 b'operation relying on changelog data are slower '
400 399 b'than they could be'
401 400 )
402 401
403 402 upgrademessage = _(
404 403         b'changelog storage will be reformatted to '
405 404 b'store raw entries; changelog reading will be '
406 405 b'faster; changelog size may be reduced'
407 406 )
408 407
409 408 @staticmethod
410 409 def fromrepo(repo):
411 410 # Mercurial 4.0 changed changelogs to not use delta chains. Search for
412 411 # changelogs with deltas.
413 412 cl = repo.changelog
414 413 chainbase = cl.chainbase
415 414 return all(rev == chainbase(rev) for rev in cl)
416 415
417 416 @staticmethod
418 417 def fromconfig(repo):
419 418 return True
420 419
421 420
422 421 _has_zstd = (
423 422 b'zstd' in util.compengines
424 423 and util.compengines[b'zstd'].available()
425 424 and util.compengines[b'zstd'].revlogheader()
426 425 )
427 426
428 427
429 428 @registerformatvariant
430 429 class compressionengine(formatvariant):
431 430 name = b'compression'
432 431
433 432 if _has_zstd:
434 433 default = b'zstd'
435 434 else:
436 435 default = b'zlib'
437 436
438 437 description = _(
439 438 b'Compression algorithm used to compress data. '
440 439 b'Some engines are faster than others'
441 440 )
442 441
443 442 upgrademessage = _(
444 443 b'revlog content will be recompressed with the new algorithm.'
445 444 )
446 445
447 446 @classmethod
448 447 def fromrepo(cls, repo):
449 448 # we allow multiple compression engine requirements to co-exist because,
450 449 # strictly speaking, revlog seems to support mixed compression styles.
451 450 #
452 451 # The compression used for new entries will be "the last one"
453 452 compression = b'zlib'
454 453 for req in repo.requirements:
455 454 prefix = req.startswith
456 455 if prefix(b'revlog-compression-') or prefix(b'exp-compression-'):
457 456 compression = req.split(b'-', 2)[2]
458 457 return compression
459 458
460 459 @classmethod
461 460 def fromconfig(cls, repo):
462 461 compengines = repo.ui.configlist(b'format', b'revlog-compression')
463 462 # return the first valid value as the selection code would do
464 463 for comp in compengines:
465 464 if comp in util.compengines:
466 465 e = util.compengines[comp]
467 466 if e.available() and e.revlogheader():
468 467 return comp
469 468
470 469 # no valid compression found; let's display them all for clarity
471 470 return b','.join(compengines)
472 471
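As a minimal illustration of the selection rule implemented by fromconfig above (editorial sketch, not part of the diff; engine names are hypothetical and the real usability check also requires available() and revlogheader()): the first configured engine that is usable wins, and if none is usable the whole configured list is reported.

    def pick_engine(configured, usable):
        # mirror of compressionengine.fromconfig: the first valid entry wins
        for comp in configured:
            if comp in usable:
                return comp
        # no valid compression found, report the whole configured list
        return b','.join(configured)

    assert pick_engine([b'zstd', b'zlib'], {b'zlib'}) == b'zlib'
    assert pick_engine([b'lz4'], {b'zlib'}) == b'lz4'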
473 472
474 473 @registerformatvariant
475 474 class compressionlevel(formatvariant):
476 475 name = b'compression-level'
477 476 default = b'default'
478 477
479 478 description = _(b'compression level')
480 479
481 480 upgrademessage = _(b'revlog content will be recompressed')
482 481
483 482 @classmethod
484 483 def fromrepo(cls, repo):
485 484 comp = compressionengine.fromrepo(repo)
486 485 level = None
487 486 if comp == b'zlib':
488 487 level = repo.ui.configint(b'storage', b'revlog.zlib.level')
489 488 elif comp == b'zstd':
490 489 level = repo.ui.configint(b'storage', b'revlog.zstd.level')
491 490 if level is None:
492 491 return b'default'
493 492 return bytes(level)
494 493
495 494 @classmethod
496 495 def fromconfig(cls, repo):
497 496 comp = compressionengine.fromconfig(repo)
498 497 level = None
499 498 if comp == b'zlib':
500 499 level = repo.ui.configint(b'storage', b'revlog.zlib.level')
501 500 elif comp == b'zstd':
502 501 level = repo.ui.configint(b'storage', b'revlog.zstd.level')
503 502 if level is None:
504 503 return b'default'
505 504 return bytes(level)
506 505
507 506
508 507 def find_format_upgrades(repo):
509 508 """returns a list of format upgrades which can be perform on the repo"""
510 509 upgrades = []
511 510
512 511 # We could detect lack of revlogv1 and store here, but they were added
513 512 # in 0.9.2 and we don't support upgrading repos without these
514 513 # requirements, so let's not bother.
515 514
516 515 for fv in allformatvariant:
517 516 if not fv.fromrepo(repo):
518 517 upgrades.append(fv)
519 518
520 519 return upgrades
521 520
522 521
523 522 def find_format_downgrades(repo):
524 523 """returns a list of format downgrades which will be performed on the repo
525 524 because their config options are disabled"""
526 525
527 526 downgrades = []
528 527
529 528 for fv in allformatvariant:
530 529 if fv.name == b'compression':
531 530 # If there is a compression change between repository
532 531 # and config, destination repository compression will change
533 532 # and current compression will be removed.
534 533 if fv.fromrepo(repo) != fv.fromconfig(repo):
535 534 downgrades.append(fv)
536 535 continue
538 537 # format variant exists in repo but does not exist in the new repository
538 537 # config
539 538 if fv.fromrepo(repo) and not fv.fromconfig(repo):
540 539 downgrades.append(fv)
541 540
542 541 return downgrades
543 542
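A minimal sketch of how the two functions above classify a variant, using a hypothetical variant object rather than the real formatvariant classes (editorial aside, not part of the diff): an upgrade is a variant missing from the repository, while a downgrade, for non-compression variants, is one present in the repository but absent from the configuration.

    class ToyVariant:
        name = b'toy-feature'

        @staticmethod
        def fromrepo(repo):      # is the feature currently present in the repo?
            return False

        @staticmethod
        def fromconfig(repo):    # is the feature requested by the configuration?
            return True

    # fromrepo() False                        -> listed by find_format_upgrades
    # fromrepo() True and fromconfig() False  -> listed by find_format_downgrades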
544 543
545 544 ALL_OPTIMISATIONS = []
546 545
547 546
548 547 def register_optimization(obj):
549 548 ALL_OPTIMISATIONS.append(obj)
550 549 return obj
551 550
552 551
553 552 class optimization(improvement):
554 553 """an improvement subclass dedicated to optimizations"""
555 554
556 555 type = OPTIMISATION
557 556
558 557
559 558 @register_optimization
560 559 class redeltaparents(optimization):
561 560 name = b're-delta-parent'
562 561
563 562 type = OPTIMISATION
564 563
565 564 description = _(
566 565 b'deltas within internal storage will be recalculated to '
567 566 b'choose an optimal base revision where this was not '
568 567 b'already done; the size of the repository may shrink and '
569 568 b'various operations may become faster; the first time '
570 569 b'this optimization is performed could slow down upgrade '
571 570 b'execution considerably; subsequent invocations should '
572 571 b'not run noticeably slower'
573 572 )
574 573
575 574 upgrademessage = _(
576 575 b'deltas within internal storage will choose a new '
577 576 b'base revision if needed'
578 577 )
579 578
580 579
581 580 @register_optimization
582 581 class redeltamultibase(optimization):
583 582 name = b're-delta-multibase'
584 583
585 584 type = OPTIMISATION
586 585
587 586 description = _(
588 587 b'deltas within internal storage will be recalculated '
589 588 b'against multiple base revision and the smallest '
590 589 b'difference will be used; the size of the repository may '
591 590 b'shrink significantly when there are many merges; this '
592 591 b'optimization will slow down execution in proportion to '
593 592 b'the number of merges in the repository and the amount '
594 593 b'of files in the repository; this slow down should not '
595 594 b'be significant unless there are tens of thousands of '
596 595 b'files and thousands of merges'
597 596 )
598 597
599 598 upgrademessage = _(
600 599 b'deltas within internal storage will choose an '
601 600 b'optimal delta by computing deltas against multiple '
602 601 b'parents; may slow down execution time '
603 602 b'significantly'
604 603 )
605 604
606 605
607 606 @register_optimization
608 607 class redeltaall(optimization):
609 608 name = b're-delta-all'
610 609
611 610 type = OPTIMISATION
612 611
613 612 description = _(
614 613 b'deltas within internal storage will always be '
615 614 b'recalculated without reusing prior deltas; this will '
616 615 b'likely make execution run several times slower; this '
617 616 b'optimization is typically not needed'
618 617 )
619 618
620 619 upgrademessage = _(
621 620 b'deltas within internal storage will be fully '
622 621 b'recomputed; this will likely drastically slow down '
623 622 b'execution time'
624 623 )
625 624
626 625
627 626 @register_optimization
628 627 class redeltafulladd(optimization):
629 628 name = b're-delta-fulladd'
630 629
631 630 type = OPTIMISATION
632 631
633 632 description = _(
634 633 b'every revision will be re-added as if it was new '
635 634 b'content. It will go through the full storage '
636 635 b'mechanism giving extensions a chance to process it '
637 636 b'(eg. lfs). This is similar to "re-delta-all" but even '
638 637 b'slower since more logic is involved.'
639 638 )
640 639
641 640 upgrademessage = _(
642 641 b'each revision will be added as new content to the '
643 642 b'internal storage; this will likely drastically slow '
644 643 b'down execution time, but some extensions might need '
645 644 b'it'
646 645 )
647 646
648 647
649 648 def findoptimizations(repo):
650 649 """Determine optimisation that could be used during upgrade"""
651 650 # These are unconditionally added. There is logic later that figures out
652 651 # which ones to apply.
653 652 return list(ALL_OPTIMISATIONS)
654 653
655 654
656 655 def determine_upgrade_actions(
657 656 repo, format_upgrades, optimizations, sourcereqs, destreqs
658 657 ):
659 658 """Determine upgrade actions that will be performed.
660 659
661 660 Given a list of improvements as returned by ``find_format_upgrades`` and
662 661 ``findoptimizations``, determine the list of upgrade actions that
663 662 will be performed.
664 663
665 664 The role of this function is to filter improvements if needed, apply
666 665 recommended optimizations from the improvements list that make sense,
667 666 etc.
668 667
669 668 Returns a list of action names.
670 669 """
671 670 newactions = []
672 671
673 672 for d in format_upgrades:
674 673 if util.safehasattr(d, '_requirement'):
675 674 name = d._requirement
676 675 else:
677 676 name = None
678 677
679 678 # If the action is a requirement that doesn't show up in the
680 679 # destination requirements, prune the action.
681 680 if name is not None and name not in destreqs:
682 681 continue
683 682
684 683 newactions.append(d)
685 684
686 685 newactions.extend(
687 686 o
688 687 for o in sorted(optimizations, key=(lambda x: x.name))
689 688 if o not in newactions
690 689 )
691 690
692 691 # FUTURE consider adding some optimizations here for certain transitions.
693 692 # e.g. adding generaldelta could schedule parent redeltas.
694 693
695 694 return newactions
696 695
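A minimal sketch of the pruning rule above, with hypothetical action objects (editorial aside, not part of the diff): a format upgrade tied to a requirement is dropped when that requirement is not part of the destination, while optimizations are appended in name order.

    class ToyAction:
        def __init__(self, name, requirement=None):
            self.name = name
            self._requirement = requirement

    format_upgrades = [ToyAction(b'share-safe', requirement=b'share-safe')]
    optimizations = [ToyAction(b're-delta-parent')]
    destreqs = {b'store', b'revlogv1'}   # 'share-safe' is absent from the destination

    kept = [
        a
        for a in format_upgrades
        if a._requirement is None or a._requirement in destreqs
    ]
    kept.extend(sorted(optimizations, key=lambda a: a.name))
    assert [a.name for a in kept] == [b're-delta-parent']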
697 696
698 697 class BaseOperation:
699 698 """base class that contains the minimum for an upgrade to work
700 699
701 700 (this might need to be extended as the usage for subclass alternative to
702 701 UpgradeOperation extends)
703 702 """
704 703
705 704 def __init__(
706 705 self,
707 706 new_requirements,
708 707 backup_store,
709 708 ):
710 709 self.new_requirements = new_requirements
711 710 # should this operation create a backup of the store
712 711 self.backup_store = backup_store
713 712
714 713
715 714 class UpgradeOperation(BaseOperation):
716 715 """represent the work to be done during an upgrade"""
717 716
718 717 def __init__(
719 718 self,
720 719 ui,
721 720 new_requirements,
722 721 current_requirements,
723 722 upgrade_actions,
724 723 removed_actions,
725 724 revlogs_to_process,
726 725 backup_store,
727 726 ):
728 727 super().__init__(
729 728 new_requirements,
730 729 backup_store,
731 730 )
732 731 self.ui = ui
733 732 self.current_requirements = current_requirements
734 733 # list of upgrade actions the operation will perform
735 734 self.upgrade_actions = upgrade_actions
736 735 self.removed_actions = removed_actions
737 736 self.revlogs_to_process = revlogs_to_process
738 737 # requirements which will be added by the operation
739 738 self._added_requirements = (
740 739 self.new_requirements - self.current_requirements
741 740 )
742 741 # requirements which will be removed by the operation
743 742 self._removed_requirements = (
744 743 self.current_requirements - self.new_requirements
745 744 )
746 745 # requirements which will be preserved by the operation
747 746 self._preserved_requirements = (
748 747 self.current_requirements & self.new_requirements
749 748 )
750 749 # optimizations which are not used but which it is recommended
751 750 # to use
752 751 all_optimizations = findoptimizations(None)
753 752 self.unused_optimizations = [
754 753 i for i in all_optimizations if i not in self.upgrade_actions
755 754 ]
756 755
757 756 # delta reuse mode of this upgrade operation
758 757 upgrade_actions_names = self.upgrade_actions_names
759 758 self.delta_reuse_mode = revlog.revlog.DELTAREUSEALWAYS
760 759 if b're-delta-all' in upgrade_actions_names:
761 760 self.delta_reuse_mode = revlog.revlog.DELTAREUSENEVER
762 761 elif b're-delta-parent' in upgrade_actions_names:
763 762 self.delta_reuse_mode = revlog.revlog.DELTAREUSESAMEREVS
764 763 elif b're-delta-multibase' in upgrade_actions_names:
765 764 self.delta_reuse_mode = revlog.revlog.DELTAREUSESAMEREVS
766 765 elif b're-delta-fulladd' in upgrade_actions_names:
767 766 self.delta_reuse_mode = revlog.revlog.DELTAREUSEFULLADD
768 767
769 768 # should this operation force re-delta of both parents
770 769 self.force_re_delta_both_parents = (
771 770 b're-delta-multibase' in upgrade_actions_names
772 771 )
773 772
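The if/elif chain above boils down to the following mapping from optimization name to revlog delta reuse mode (editorial summary, not code from the diff); when several are requested the first match in the order of the chain wins, and when none is selected the operation keeps DELTAREUSEALWAYS.

    DELTA_REUSE_BY_ACTION = {
        b're-delta-all': 'DELTAREUSENEVER',          # recompute every delta
        b're-delta-parent': 'DELTAREUSESAMEREVS',    # re-pick delta bases
        b're-delta-multibase': 'DELTAREUSESAMEREVS', # plus re-delta against both parents
        b're-delta-fulladd': 'DELTAREUSEFULLADD',    # re-add through the full storage path
    }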
774 773 @property
775 774 def upgrade_actions_names(self):
776 775 return set([a.name for a in self.upgrade_actions])
777 776
778 777 @property
779 778 def requirements_only(self):
781 780 # does the operation only touch repository requirements
781 780 return (
782 781 self.touches_requirements
783 782 and not self.touches_filelogs
784 783 and not self.touches_manifests
785 784 and not self.touches_changelog
786 785 and not self.touches_dirstate
787 786 )
788 787
789 788 @property
790 789 def touches_filelogs(self):
791 790 for a in self.upgrade_actions:
792 791 # in optimisations, we re-process the revlogs again
793 792 if a.type == OPTIMISATION:
794 793 return True
795 794 elif a.touches_filelogs:
796 795 return True
797 796 for a in self.removed_actions:
798 797 if a.touches_filelogs:
799 798 return True
800 799 return False
801 800
802 801 @property
803 802 def touches_manifests(self):
804 803 for a in self.upgrade_actions:
805 804 # in optimisations, we re-process the revlogs again
806 805 if a.type == OPTIMISATION:
807 806 return True
808 807 elif a.touches_manifests:
809 808 return True
810 809 for a in self.removed_actions:
811 810 if a.touches_manifests:
812 811 return True
813 812 return False
814 813
815 814 @property
816 815 def touches_changelog(self):
817 816 for a in self.upgrade_actions:
818 817 # in optimisations, we re-process the revlogs again
819 818 if a.type == OPTIMISATION:
820 819 return True
821 820 elif a.touches_changelog:
822 821 return True
823 822 for a in self.removed_actions:
824 823 if a.touches_changelog:
825 824 return True
826 825 return False
827 826
828 827 @property
829 828 def touches_requirements(self):
830 829 for a in self.upgrade_actions:
831 830 # optimisations are used to re-process revlogs and do not result
832 831 # in a requirement being added or removed
833 832 if a.type == OPTIMISATION:
834 833 pass
835 834 elif a.touches_requirements:
836 835 return True
837 836 for a in self.removed_actions:
838 837 if a.touches_requirements:
839 838 return True
840 839
841 840 @property
842 841 def touches_dirstate(self):
843 842 for a in self.upgrade_actions:
844 843 # revlog optimisations do not affect the dirstate
845 844 if a.type == OPTIMISATION:
846 845 pass
847 846 elif a.touches_dirstate:
848 847 return True
849 848 for a in self.removed_actions:
850 849 if a.touches_dirstate:
851 850 return True
852 851
853 852 return False
854 853
855 854 def _write_labeled(self, l, label: bytes):
856 855 """
857 856 Utility function to aid writing of a list under one label
858 857 """
859 858 first = True
860 859 for r in sorted(l):
861 860 if not first:
862 861 self.ui.write(b', ')
863 862 self.ui.write(r, label=label)
864 863 first = False
865 864
866 865 def print_requirements(self):
867 866 self.ui.write(_(b'requirements\n'))
868 867 self.ui.write(_(b' preserved: '))
869 868 self._write_labeled(
870 869 self._preserved_requirements, b"upgrade-repo.requirement.preserved"
871 870 )
872 871 self.ui.write((b'\n'))
873 872 if self._removed_requirements:
874 873 self.ui.write(_(b' removed: '))
875 874 self._write_labeled(
876 875 self._removed_requirements, b"upgrade-repo.requirement.removed"
877 876 )
878 877 self.ui.write((b'\n'))
879 878 if self._added_requirements:
880 879 self.ui.write(_(b' added: '))
881 880 self._write_labeled(
882 881 self._added_requirements, b"upgrade-repo.requirement.added"
883 882 )
884 883 self.ui.write((b'\n'))
885 884 self.ui.write(b'\n')
886 885
887 886 def print_optimisations(self):
888 887 optimisations = [
889 888 a for a in self.upgrade_actions if a.type == OPTIMISATION
890 889 ]
891 890 optimisations.sort(key=lambda a: a.name)
892 891 if optimisations:
893 892 self.ui.write(_(b'optimisations: '))
894 893 self._write_labeled(
895 894 [a.name for a in optimisations],
896 895 b"upgrade-repo.optimisation.performed",
897 896 )
898 897 self.ui.write(b'\n\n')
899 898
900 899 def print_upgrade_actions(self):
901 900 for a in self.upgrade_actions:
902 901 self.ui.status(b'%s\n %s\n\n' % (a.name, a.upgrademessage))
903 902
904 903 def print_affected_revlogs(self):
905 904 if not self.revlogs_to_process:
906 905 self.ui.write((b'no revlogs to process\n'))
907 906 else:
908 907 self.ui.write((b'processed revlogs:\n'))
909 908 for r in sorted(self.revlogs_to_process):
910 909 self.ui.write((b' - %s\n' % r))
911 910 self.ui.write((b'\n'))
912 911
913 912 def print_unused_optimizations(self):
914 913 for i in self.unused_optimizations:
915 914 self.ui.status(_(b'%s\n %s\n\n') % (i.name, i.description))
916 915
917 916 def has_upgrade_action(self, name):
918 917 """Check whether the upgrade operation will perform this action"""
919 918 return name in self._upgrade_actions_names
920 919
921 920 def print_post_op_messages(self):
922 921 """print post upgrade operation warning messages"""
923 922 for a in self.upgrade_actions:
924 923 if a.postupgrademessage is not None:
925 924 self.ui.warn(b'%s\n' % a.postupgrademessage)
926 925 for a in self.removed_actions:
927 926 if a.postdowngrademessage is not None:
928 927 self.ui.warn(b'%s\n' % a.postdowngrademessage)
929 928
930 929
931 930 ### Code checking if a repository can go through the upgrade process at all. #
932 931
933 932
934 933 def requiredsourcerequirements(repo):
935 934 """Obtain requirements required to be present to upgrade a repo.
936 935
937 936 An upgrade will not be allowed if the repository doesn't have the
938 937 requirements returned by this function.
939 938 """
940 939 return {
941 940 # Introduced in Mercurial 0.9.2.
942 941 requirements.STORE_REQUIREMENT,
943 942 }
944 943
945 944
946 945 def blocksourcerequirements(repo):
947 946 """Obtain requirements that will prevent an upgrade from occurring.
948 947
949 948 An upgrade cannot be performed if the source repository contains any
950 949 requirement in the returned set.
951 950 """
952 951 return {
953 952 # This was a precursor to generaldelta and was never enabled by default.
954 953 # It should (hopefully) not exist in the wild.
955 954 b'parentdelta',
956 955 }
957 956
958 957
959 958 def check_revlog_version(reqs):
960 959 """Check that the requirements contain at least one Revlog version"""
961 960 all_revlogs = {
962 961 requirements.REVLOGV1_REQUIREMENT,
963 962 requirements.REVLOGV2_REQUIREMENT,
964 963 }
965 964 if not all_revlogs.intersection(reqs):
966 965 msg = _(b'cannot upgrade repository; missing a revlog version')
967 966 raise error.Abort(msg)
968 967
969 968
970 969 def check_source_requirements(repo):
971 970 """Ensure that no existing requirements prevent the repository upgrade"""
972 971
973 972 check_revlog_version(repo.requirements)
974 973 required = requiredsourcerequirements(repo)
975 974 missingreqs = required - repo.requirements
976 975 if missingreqs:
977 976 msg = _(b'cannot upgrade repository; requirement missing: %s')
978 977 missingreqs = b', '.join(sorted(missingreqs))
979 978 raise error.Abort(msg % missingreqs)
980 979
981 980 blocking = blocksourcerequirements(repo)
982 981 blockingreqs = blocking & repo.requirements
983 982 if blockingreqs:
984 983 m = _(b'cannot upgrade repository; unsupported source requirement: %s')
985 984 blockingreqs = b', '.join(sorted(blockingreqs))
986 985 raise error.Abort(m % blockingreqs)
987 986 # Upgrade should operate on the actual store, not the shared link.
988 987
989 988 bad_share = (
990 989 requirements.SHARED_REQUIREMENT in repo.requirements
991 990 and requirements.SHARESAFE_REQUIREMENT not in repo.requirements
992 991 )
993 992 if bad_share:
994 993 m = _(b'cannot upgrade repository; share repository without share-safe')
995 994 h = _(b'check :hg:`help config.format.use-share-safe`')
996 995 raise error.Abort(m, hint=h)
997 996
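A minimal sketch of the final gate in check_source_requirements above (editorial aside, not part of the diff), using the literal requirement strings that appear in .hg/requires later in this change: a repository that is 'shared' without 'share-safe' cannot be upgraded directly.

    reqs = {b'shared', b'store', b'revlogv1'}          # hypothetical source requirements
    bad_share = b'shared' in reqs and b'share-safe' not in reqs
    # bad_share is True here, so the upgrade aborts and hints at
    # `hg help config.format.use-share-safe`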
998 997
999 998 ### Verify the validity of the planned requirement changes ####################
1000 999
1001 1000
1002 1001 def supportremovedrequirements(repo):
1003 1002 """Obtain requirements that can be removed during an upgrade.
1004 1003
1005 1004 If an upgrade were to create a repository that dropped a requirement,
1006 1005 the dropped requirement must appear in the returned set for the upgrade
1007 1006 to be allowed.
1008 1007 """
1009 1008 supported = {
1010 1009 requirements.SPARSEREVLOG_REQUIREMENT,
1011 1010 requirements.COPIESSDC_REQUIREMENT,
1012 1011 requirements.NODEMAP_REQUIREMENT,
1013 1012 requirements.SHARESAFE_REQUIREMENT,
1014 1013 requirements.REVLOGV2_REQUIREMENT,
1015 1014 requirements.CHANGELOGV2_REQUIREMENT,
1016 1015 requirements.REVLOGV1_REQUIREMENT,
1017 1016 requirements.DIRSTATE_TRACKED_HINT_V1,
1018 1017 requirements.DIRSTATE_V2_REQUIREMENT,
1019 1018 }
1020 1019 for name in compression.compengines:
1021 1020 engine = compression.compengines[name]
1022 1021 if engine.available() and engine.revlogheader():
1023 1022 supported.add(b'exp-compression-%s' % name)
1024 1023 if engine.name() == b'zstd':
1025 1024 supported.add(b'revlog-compression-zstd')
1026 1025 return supported
1027 1026
1028 1027
1029 1028 def supporteddestrequirements(repo):
1030 1029 """Obtain requirements that upgrade supports in the destination.
1031 1030
1032 1031 If the result of the upgrade would have requirements not in this set,
1033 1032 the upgrade is disallowed.
1034 1033
1035 1034 Extensions should monkeypatch this to add their custom requirements.
1036 1035 """
1037 1036 supported = {
1038 1037 requirements.CHANGELOGV2_REQUIREMENT,
1039 1038 requirements.COPIESSDC_REQUIREMENT,
1040 1039 requirements.DIRSTATE_TRACKED_HINT_V1,
1041 1040 requirements.DIRSTATE_V2_REQUIREMENT,
1042 1041 requirements.DOTENCODE_REQUIREMENT,
1043 1042 requirements.FNCACHE_REQUIREMENT,
1044 1043 requirements.GENERALDELTA_REQUIREMENT,
1045 1044 requirements.NODEMAP_REQUIREMENT,
1046 1045 requirements.REVLOGV1_REQUIREMENT, # allowed in case of downgrade
1047 1046 requirements.REVLOGV2_REQUIREMENT,
1048 1047 requirements.SHARED_REQUIREMENT,
1049 1048 requirements.SHARESAFE_REQUIREMENT,
1050 1049 requirements.SPARSEREVLOG_REQUIREMENT,
1051 1050 requirements.STORE_REQUIREMENT,
1052 1051 requirements.TREEMANIFEST_REQUIREMENT,
1053 1052 requirements.NARROW_REQUIREMENT,
1054 1053 }
1055 1054 for name in compression.compengines:
1056 1055 engine = compression.compengines[name]
1057 1056 if engine.available() and engine.revlogheader():
1058 1057 supported.add(b'exp-compression-%s' % name)
1059 1058 if engine.name() == b'zstd':
1060 1059 supported.add(b'revlog-compression-zstd')
1061 1060 return supported
1062 1061
1063 1062
1064 1063 def allowednewrequirements(repo):
1065 1064 """Obtain requirements that can be added to a repository during upgrade.
1066 1065
1067 1066 This is used to disallow proposed requirements from being added when
1068 1067 they weren't present before.
1069 1068
1070 1069 We use a list of allowed requirement additions instead of a list of known
1071 1070 bad additions because the whitelist approach is safer and will prevent
1072 1071 future, unknown requirements from accidentally being added.
1073 1072 """
1074 1073 supported = {
1075 1074 requirements.DOTENCODE_REQUIREMENT,
1076 1075 requirements.FNCACHE_REQUIREMENT,
1077 1076 requirements.GENERALDELTA_REQUIREMENT,
1078 1077 requirements.SPARSEREVLOG_REQUIREMENT,
1079 1078 requirements.COPIESSDC_REQUIREMENT,
1080 1079 requirements.NODEMAP_REQUIREMENT,
1081 1080 requirements.SHARESAFE_REQUIREMENT,
1082 1081 requirements.REVLOGV1_REQUIREMENT,
1083 1082 requirements.REVLOGV2_REQUIREMENT,
1084 1083 requirements.CHANGELOGV2_REQUIREMENT,
1085 1084 requirements.DIRSTATE_TRACKED_HINT_V1,
1086 1085 requirements.DIRSTATE_V2_REQUIREMENT,
1087 1086 }
1088 1087 for name in compression.compengines:
1089 1088 engine = compression.compengines[name]
1090 1089 if engine.available() and engine.revlogheader():
1091 1090 supported.add(b'exp-compression-%s' % name)
1092 1091 if engine.name() == b'zstd':
1093 1092 supported.add(b'revlog-compression-zstd')
1094 1093 return supported
1095 1094
1096 1095
1097 1096 def check_requirements_changes(repo, new_reqs):
1098 1097 old_reqs = repo.requirements
1099 1098 check_revlog_version(repo.requirements)
1100 1099 support_removal = supportremovedrequirements(repo)
1101 1100 no_remove_reqs = old_reqs - new_reqs - support_removal
1102 1101 if no_remove_reqs:
1103 1102 msg = _(b'cannot upgrade repository; requirement would be removed: %s')
1104 1103 no_remove_reqs = b', '.join(sorted(no_remove_reqs))
1105 1104 raise error.Abort(msg % no_remove_reqs)
1106 1105
1107 1106 support_addition = allowednewrequirements(repo)
1108 1107 no_add_reqs = new_reqs - old_reqs - support_addition
1109 1108 if no_add_reqs:
1110 1109 m = _(b'cannot upgrade repository; do not support adding requirement: ')
1111 1110 no_add_reqs = b', '.join(sorted(no_add_reqs))
1112 1111 raise error.Abort(m + no_add_reqs)
1113 1112
1114 1113 supported = supporteddestrequirements(repo)
1115 1114 unsupported_reqs = new_reqs - supported
1116 1115 if unsupported_reqs:
1117 1116 msg = _(
1118 1117 b'cannot upgrade repository; do not support destination '
1119 1118 b'requirement: %s'
1120 1119 )
1121 1120 unsupported_reqs = b', '.join(sorted(unsupported_reqs))
1122 1121 raise error.Abort(msg % unsupported_reqs)
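Summarising check_requirements_changes as three set comparisons, with hypothetical requirement sets (editorial sketch, not part of the diff): removals must be individually supported, additions must be explicitly allowed, and the final set must be supported as a destination.

    old_reqs = {b'store', b'revlogv1', b'dotencode'}
    new_reqs = {b'store', b'revlogv1', b'dotencode', b'share-safe'}

    removed = old_reqs - new_reqs  # every entry must be in supportremovedrequirements()
    added = new_reqs - old_reqs    # every entry must be in allowednewrequirements()
    # in addition, new_reqs as a whole must be a subset of supporteddestrequirements()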
@@ -1,655 +1,655 b''
1 1 setup
2 2
3 3 $ cat >> $HGRCPATH <<EOF
4 4 > [extensions]
5 5 > share =
6 6 > [format]
7 7 > use-share-safe = True
8 8 > [storage]
9 9 > revlog.persistent-nodemap.slow-path=allow
10 10 > # enforce zlib to ensure we can upgrade to zstd later
11 11 > [format]
12 12 > revlog-compression=zlib
13 13 > # we want to be able to enable it later
14 14 > use-persistent-nodemap=no
15 15 > EOF
16 16
17 17 prepare source repo
18 18
19 19 $ hg init source
20 20 $ cd source
21 21 $ cat .hg/requires
22 22 dirstate-v2 (dirstate-v2 !)
23 23 share-safe
24 24 $ cat .hg/store/requires
25 25 dotencode
26 26 fncache
27 27 generaldelta
28 28 revlogv1
29 29 sparserevlog
30 30 store
31 31 $ hg debugrequirements
32 32 dotencode
33 33 dirstate-v2 (dirstate-v2 !)
34 34 fncache
35 35 generaldelta
36 36 revlogv1
37 37 share-safe
38 38 sparserevlog
39 39 store
40 40
41 41 $ echo a > a
42 42 $ hg ci -Aqm "added a"
43 43 $ echo b > b
44 44 $ hg ci -Aqm "added b"
45 45
46 46 $ HGEDITOR=cat hg config --shared
47 47 abort: repository is not shared; can't use --shared
48 48 [10]
49 49 $ cd ..
50 50
51 51 Create a shared repo and check the requirements are shared and read correctly
52 52 $ hg share source shared1
53 53 updating working directory
54 54 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
55 55 $ cd shared1
56 56 $ cat .hg/requires
57 57 dirstate-v2 (dirstate-v2 !)
58 58 share-safe
59 59 shared
60 60
61 61 $ hg debugrequirements -R ../source
62 62 dotencode
63 63 dirstate-v2 (dirstate-v2 !)
64 64 fncache
65 65 generaldelta
66 66 revlogv1
67 67 share-safe
68 68 sparserevlog
69 69 store
70 70
71 71 $ hg debugrequirements
72 72 dotencode
73 73 dirstate-v2 (dirstate-v2 !)
74 74 fncache
75 75 generaldelta
76 76 revlogv1
77 77 share-safe
78 78 shared
79 79 sparserevlog
80 80 store
81 81
82 82 $ echo c > c
83 83 $ hg ci -Aqm "added c"
84 84
85 85 Check that config of the source repository is also loaded
86 86
87 87 $ hg showconfig ui.curses
88 88 [1]
89 89
90 90 $ echo "[ui]" >> ../source/.hg/hgrc
91 91 $ echo "curses=true" >> ../source/.hg/hgrc
92 92
93 93 $ hg showconfig ui.curses
94 94 true
95 95
96 96 Test that extensions of source repository are also loaded
97 97
98 98 $ hg debugextensions
99 99 share
100 100 $ hg extdiff -p echo
101 101 hg: unknown command 'extdiff'
102 102 'extdiff' is provided by the following extension:
103 103
104 104 extdiff command to allow external programs to compare revisions
105 105
106 106 (use 'hg help extensions' for information on enabling extensions)
107 107 [10]
108 108
109 109 $ echo "[extensions]" >> ../source/.hg/hgrc
110 110 $ echo "extdiff=" >> ../source/.hg/hgrc
111 111
112 112 $ hg debugextensions -R ../source
113 113 extdiff
114 114 share
115 115 $ hg extdiff -R ../source -p echo
116 116
117 117 BROKEN: the command below will not work if the config of the shared source is not
118 118 loaded on dispatch, but debugextensions says that the extension
119 119 is loaded
120 120 $ hg debugextensions
121 121 extdiff
122 122 share
123 123
124 124 $ hg extdiff -p echo
125 125
126 126 However, local .hg/hgrc should override the config set by share source
127 127
128 128 $ echo "[ui]" >> .hg/hgrc
129 129 $ echo "curses=false" >> .hg/hgrc
130 130
131 131 $ hg showconfig ui.curses
132 132 false
133 133
134 134 $ HGEDITOR=cat hg config --shared
135 135 [ui]
136 136 curses=true
137 137 [extensions]
138 138 extdiff=
139 139
140 140 $ HGEDITOR=cat hg config --local
141 141 [ui]
142 142 curses=false
143 143
144 144 Testing that hooks set in the source repository also run in the shared repo
145 145
146 146 $ cd ../source
147 147 $ cat <<EOF >> .hg/hgrc
148 148 > [extensions]
149 149 > hooklib=
150 150 > [hooks]
151 151 > pretxnchangegroup.reject_merge_commits = \
152 152 > python:hgext.hooklib.reject_merge_commits.hook
153 153 > EOF
154 154
155 155 $ cd ..
156 156 $ hg clone source cloned
157 157 updating to branch default
158 158 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
159 159 $ cd cloned
160 160 $ hg up 0
161 161 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
162 162 $ echo bar > bar
163 163 $ hg ci -Aqm "added bar"
164 164 $ hg merge
165 165 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
166 166 (branch merge, don't forget to commit)
167 167 $ hg ci -m "merge commit"
168 168
169 169 $ hg push ../source
170 170 pushing to ../source
171 171 searching for changes
172 172 adding changesets
173 173 adding manifests
174 174 adding file changes
175 175 error: pretxnchangegroup.reject_merge_commits hook failed: bcde3522682d rejected as merge on the same branch. Please consider rebase.
176 176 transaction abort!
177 177 rollback completed
178 178 abort: bcde3522682d rejected as merge on the same branch. Please consider rebase.
179 179 [255]
180 180
181 181 $ hg push ../shared1
182 182 pushing to ../shared1
183 183 searching for changes
184 184 adding changesets
185 185 adding manifests
186 186 adding file changes
187 187 error: pretxnchangegroup.reject_merge_commits hook failed: bcde3522682d rejected as merge on the same branch. Please consider rebase.
188 188 transaction abort!
189 189 rollback completed
190 190 abort: bcde3522682d rejected as merge on the same branch. Please consider rebase.
191 191 [255]
192 192
193 193 Test that if the share source config is untrusted, we don't read it
194 194
195 195 $ cd ../shared1
196 196
197 197 $ cat << EOF > $TESTTMP/untrusted.py
198 198 > from mercurial import scmutil, util
199 199 > def uisetup(ui):
200 200 > class untrustedui(ui.__class__):
201 201 > def _trusted(self, fp, f):
202 202 > if util.normpath(fp.name).endswith(b'source/.hg/hgrc'):
203 203 > return False
204 204 > return super(untrustedui, self)._trusted(fp, f)
205 205 > ui.__class__ = untrustedui
206 206 > EOF
207 207
208 208 $ hg showconfig hooks
209 209 hooks.pretxnchangegroup.reject_merge_commits=python:hgext.hooklib.reject_merge_commits.hook
210 210
211 211 $ hg showconfig hooks --config extensions.untrusted=$TESTTMP/untrusted.py
212 212 [1]
213 213
214 214 Update the source repository format and check that shared repo works
215 215
216 216 $ cd ../source
217 217
218 218 Disable zstd related tests because it's not present in the pure version
219 219 #if zstd
220 220 $ echo "[format]" >> .hg/hgrc
221 221 $ echo "revlog-compression=zstd" >> .hg/hgrc
222 222
223 223 $ hg debugupgraderepo --run -q
224 224 upgrade will perform the following actions:
225 225
226 226 requirements
227 227 preserved: dotencode, fncache, generaldelta, revlogv1, share-safe, sparserevlog, store (no-dirstate-v2 !)
228 228 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, share-safe, sparserevlog, store (dirstate-v2 !)
229 229 added: revlog-compression-zstd
230 230
231 231 processed revlogs:
232 232 - all-filelogs
233 233 - changelog
234 234 - manifest
235 235
236 236 $ hg log -r .
237 237 changeset: 1:5f6d8a4bf34a
238 238 user: test
239 239 date: Thu Jan 01 00:00:00 1970 +0000
240 240 summary: added b
241 241
242 242 #endif
243 243 $ echo "[format]" >> .hg/hgrc
244 244 $ echo "use-persistent-nodemap=True" >> .hg/hgrc
245 245
246 246 $ hg debugupgraderepo --run -q -R ../shared1
247 247 abort: cannot use these actions on a share repository: persistent-nodemap
248 248 (upgrade the main repository directly)
249 249 [255]
250 250
251 251 $ hg debugupgraderepo --run -q
252 252 upgrade will perform the following actions:
253 253
254 254 requirements
255 255 preserved: dotencode, fncache, generaldelta, revlogv1, share-safe, sparserevlog, store (no-zstd no-dirstate-v2 !)
256 256 preserved: dotencode, fncache, generaldelta, revlog-compression-zstd, revlogv1, share-safe, sparserevlog, store (zstd no-dirstate-v2 !)
257 257 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, share-safe, sparserevlog, store (no-zstd dirstate-v2 !)
258 258 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlog-compression-zstd, revlogv1, share-safe, sparserevlog, store (zstd dirstate-v2 !)
259 259 added: persistent-nodemap
260 260
261 261 processed revlogs:
262 262 - all-filelogs
263 263 - changelog
264 264 - manifest
265 265
266 266 $ hg log -r .
267 267 changeset: 1:5f6d8a4bf34a
268 268 user: test
269 269 date: Thu Jan 01 00:00:00 1970 +0000
270 270 summary: added b
271 271
272 272
273 273 Shared one should work
274 274 $ cd ../shared1
275 275 $ hg log -r .
276 276 changeset: 2:155349b645be
277 277 tag: tip
278 278 user: test
279 279 date: Thu Jan 01 00:00:00 1970 +0000
280 280 summary: added c
281 281
282 282
283 283 Testing that nonsharedrc is loaded for source and not shared
284 284
285 285 $ cd ../source
286 286 $ touch .hg/hgrc-not-shared
287 287 $ echo "[ui]" >> .hg/hgrc-not-shared
288 288 $ echo "traceback=true" >> .hg/hgrc-not-shared
289 289
290 290 $ hg showconfig ui.traceback
291 291 true
292 292
293 293 $ HGEDITOR=cat hg config --non-shared
294 294 [ui]
295 295 traceback=true
296 296
297 297 $ cd ../shared1
298 298 $ hg showconfig ui.traceback
299 299 [1]
300 300
301 301 Unsharing works
302 302
303 303 $ hg unshare
304 304
305 305 Test that source config is added to the shared one after unshare, and the config
306 306 of the current repo is still respected over the config which came from the source
307 307 $ cd ../cloned
308 308 $ hg push ../shared1
309 309 pushing to ../shared1
310 310 searching for changes
311 311 adding changesets
312 312 adding manifests
313 313 adding file changes
314 314 error: pretxnchangegroup.reject_merge_commits hook failed: bcde3522682d rejected as merge on the same branch. Please consider rebase.
315 315 transaction abort!
316 316 rollback completed
317 317 abort: bcde3522682d rejected as merge on the same branch. Please consider rebase.
318 318 [255]
319 319 $ hg showconfig ui.curses -R ../shared1
320 320 false
321 321
322 322 $ cd ../
323 323
324 324 Test that upgrading using debugupgraderepo works
325 325 =================================================
326 326
327 327 $ hg init non-share-safe --config format.use-share-safe=false
328 328 $ cd non-share-safe
329 329 $ hg debugrequirements
330 330 dotencode
331 331 dirstate-v2 (dirstate-v2 !)
332 332 fncache
333 333 generaldelta
334 334 revlogv1
335 335 sparserevlog
336 336 store
337 337 $ echo foo > foo
338 338 $ hg ci -Aqm 'added foo'
339 339 $ echo bar > bar
340 340 $ hg ci -Aqm 'added bar'
341 341
342 342 Create a share before upgrading
343 343
344 344 $ cd ..
345 345 $ hg share non-share-safe nss-share
346 346 updating working directory
347 347 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
348 348 $ hg debugrequirements -R nss-share
349 349 dotencode
350 350 dirstate-v2 (dirstate-v2 !)
351 351 fncache
352 352 generaldelta
353 353 revlogv1
354 354 shared
355 355 sparserevlog
356 356 store
357 357 $ cd non-share-safe
358 358
359 359 Upgrade
360 360
361 361 $ hg debugupgraderepo -q
362 362 requirements
363 363 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store (no-dirstate-v2 !)
364 364 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, sparserevlog, store (dirstate-v2 !)
365 365 added: share-safe
366 366
367 367 no revlogs to process
368 368
369 369 $ hg debugupgraderepo --run
370 370 upgrade will perform the following actions:
371 371
372 372 requirements
373 373 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store (no-dirstate-v2 !)
374 374 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, sparserevlog, store (dirstate-v2 !)
375 375 added: share-safe
376 376
377 377 share-safe
378 378 Upgrades a repository to share-safe format so that future shares of this repository share its requirements and configs.
379 379
380 380 no revlogs to process
381 381
382 382 beginning upgrade...
383 383 repository locked and read-only
384 384 creating temporary repository to stage upgraded data: $TESTTMP/non-share-safe/.hg/upgrade.* (glob)
385 385 (it is safe to interrupt this process any time before data migration completes)
386 386 upgrading repository requirements
387 387 removing temporary repository $TESTTMP/non-share-safe/.hg/upgrade.* (glob)
388 388 repository upgraded to share safe mode, existing shares will still work in old non-safe mode. Re-share existing shares to use them in safe mode New shares will be created in safe mode.
389 389
390 390 $ hg debugrequirements
391 391 dotencode
392 392 dirstate-v2 (dirstate-v2 !)
393 393 fncache
394 394 generaldelta
395 395 revlogv1
396 396 share-safe
397 397 sparserevlog
398 398 store
399 399
400 400 $ cat .hg/requires
401 401 dirstate-v2 (dirstate-v2 !)
402 402 share-safe
403 403
404 404 $ cat .hg/store/requires
405 405 dotencode
406 406 fncache
407 407 generaldelta
408 408 revlogv1
409 409 sparserevlog
410 410 store
411 411
412 412 $ hg log -GT "{node}: {desc}\n"
413 413 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
414 414 |
415 415 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
416 416
417 417
418 418 Make sure existing shares don't work with default config
419 419
420 420 $ hg log -GT "{node}: {desc}\n" -R ../nss-share
421 421 abort: version mismatch: source uses share-safe functionality while the current share does not
422 422 (see `hg help config.format.use-share-safe` for more information)
423 423 [255]
424 424
425 425
426 426 Create a safe share from the upgraded one
427 427
428 428 $ cd ..
429 429 $ hg share non-share-safe ss-share
430 430 updating working directory
431 431 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
432 432 $ cd ss-share
433 433 $ hg log -GT "{node}: {desc}\n"
434 434 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
435 435 |
436 436 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
437 437
438 438 $ cd ../non-share-safe
439 439
440 440 Test that downgrading works too
441 441
442 442 $ cat >> $HGRCPATH <<EOF
443 443 > [extensions]
444 444 > share =
445 445 > [format]
446 446 > use-share-safe = False
447 447 > EOF
448 448
449 449 $ hg debugupgraderepo -q
450 450 requirements
451 451 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store (no-dirstate-v2 !)
452 452 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, sparserevlog, store (dirstate-v2 !)
453 453 removed: share-safe
454 454
455 455 no revlogs to process
456 456
457 457 $ hg debugupgraderepo --run
458 458 upgrade will perform the following actions:
459 459
460 460 requirements
461 461 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store (no-dirstate-v2 !)
462 462 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, sparserevlog, store (dirstate-v2 !)
463 463 removed: share-safe
464 464
465 465 no revlogs to process
466 466
467 467 beginning upgrade...
468 468 repository locked and read-only
469 469 creating temporary repository to stage upgraded data: $TESTTMP/non-share-safe/.hg/upgrade.* (glob)
470 470 (it is safe to interrupt this process any time before data migration completes)
471 471 upgrading repository requirements
472 472 removing temporary repository $TESTTMP/non-share-safe/.hg/upgrade.* (glob)
473 repository downgraded to not use share safe mode, existing shares will not work and needs to be reshared.
473 repository downgraded to not use share safe mode, existing shares will not work and need to be reshared.
474 474
475 475 $ hg debugrequirements
476 476 dotencode
477 477 dirstate-v2 (dirstate-v2 !)
478 478 fncache
479 479 generaldelta
480 480 revlogv1
481 481 sparserevlog
482 482 store
483 483
484 484 $ cat .hg/requires
485 485 dotencode
486 486 dirstate-v2 (dirstate-v2 !)
487 487 fncache
488 488 generaldelta
489 489 revlogv1
490 490 sparserevlog
491 491 store
492 492
493 493 $ test -f .hg/store/requires
494 494 [1]
495 495
496 496 $ hg log -GT "{node}: {desc}\n"
497 497 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
498 498 |
499 499 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
500 500
501 501
502 502 Make sure existing shares still work
503 503
504 504 $ hg log -GT "{node}: {desc}\n" -R ../nss-share
505 505 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
506 506 |
507 507 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
508 508
509 509
510 510 $ hg log -GT "{node}: {desc}\n" -R ../ss-share
511 511 abort: share source does not support share-safe requirement
512 512 (see `hg help config.format.use-share-safe` for more information)
513 513 [255]
514 514
515 515 Testing automatic downgrade of shares when config is set
516 516
517 517 $ touch ../ss-share/.hg/wlock
518 518 $ hg log -GT "{node}: {desc}\n" -R ../ss-share --config share.safe-mismatch.source-not-safe=downgrade-abort
519 519 abort: failed to downgrade share, got error: Lock held
520 520 (see `hg help config.format.use-share-safe` for more information)
521 521 [255]
522 522 $ rm ../ss-share/.hg/wlock
523 523
524 524 $ cp -R ../ss-share ../ss-share-bck
525 525 $ hg log -GT "{node}: {desc}\n" -R ../ss-share --config share.safe-mismatch.source-not-safe=downgrade-abort
526 526 repository downgraded to not use share-safe mode
527 527 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
528 528 |
529 529 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
530 530
531 531 $ rm -rf ../ss-share
532 532 $ mv ../ss-share-bck ../ss-share
533 533
534 534 $ hg log -GT "{node}: {desc}\n" -R ../ss-share --config share.safe-mismatch.source-not-safe=downgrade-abort --config share.safe-mismatch.source-not-safe:verbose-upgrade=no
535 535 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
536 536 |
537 537 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
538 538
539 539
540 540 $ hg log -GT "{node}: {desc}\n" -R ../ss-share
541 541 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
542 542 |
543 543 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
544 544
545 545
546 546
547 547 Testing automatic upgrade of shares when config is set
548 548
549 549 $ hg debugupgraderepo -q --run --config format.use-share-safe=True
550 550 upgrade will perform the following actions:
551 551
552 552 requirements
553 553 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store (no-dirstate-v2 !)
554 554 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, sparserevlog, store (dirstate-v2 !)
555 555 added: share-safe
556 556
557 557 no revlogs to process
558 558
559 559 repository upgraded to share safe mode, existing shares will still work in old non-safe mode. Re-share existing shares to use them in safe mode New shares will be created in safe mode.
560 560 $ hg debugrequirements
561 561 dotencode
562 562 dirstate-v2 (dirstate-v2 !)
563 563 fncache
564 564 generaldelta
565 565 revlogv1
566 566 share-safe
567 567 sparserevlog
568 568 store
569 569 $ hg log -GT "{node}: {desc}\n" -R ../nss-share
570 570 abort: version mismatch: source uses share-safe functionality while the current share does not
571 571 (see `hg help config.format.use-share-safe` for more information)
572 572 [255]
573 573
574 574 Check that if the lock is taken, the upgrade fails but read operations are successful
575 575 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgra
576 576 abort: share-safe mismatch with source.
577 577 Unrecognized value 'upgra' of `share.safe-mismatch.source-safe` set.
578 578 (see `hg help config.format.use-share-safe` for more information)
579 579 [255]
580 580 $ touch ../nss-share/.hg/wlock
581 581 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgrade-allow
582 582 failed to upgrade share, got error: Lock held
583 583 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
584 584 |
585 585 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
586 586
587 587
588 588 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgrade-allow --config share.safe-mismatch.source-safe.warn=False
589 589 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
590 590 |
591 591 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
592 592
593 593
594 594 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgrade-abort
595 595 abort: failed to upgrade share, got error: Lock held
596 596 (see `hg help config.format.use-share-safe` for more information)
597 597 [255]
598 598
599 599 $ rm ../nss-share/.hg/wlock
600 600 $ cp -R ../nss-share ../nss-share-bck
601 601 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgrade-abort
602 602 repository upgraded to use share-safe mode
603 603 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
604 604 |
605 605 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
606 606
607 607 $ rm -rf ../nss-share
608 608 $ mv ../nss-share-bck ../nss-share
609 609 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgrade-abort --config share.safe-mismatch.source-safe:verbose-upgrade=no
610 610 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
611 611 |
612 612 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
613 613
614 614
615 615 Test that unshare works
616 616
617 617 $ hg unshare -R ../nss-share
618 618 $ hg log -GT "{node}: {desc}\n" -R ../nss-share
619 619 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
620 620 |
621 621 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
622 622
623 623
624 624 Test automatic upgrade/downgrade of the main repository
625 625 ------------------------------------------------------
626 626
627 627 create an initial repository
628 628
629 629 $ hg init auto-upgrade \
630 630 > --config format.use-share-safe=no
631 631 $ hg debugbuilddag -R auto-upgrade --new-file .+5
632 632 $ hg -R auto-upgrade update
633 633 6 files updated, 0 files merged, 0 files removed, 0 files unresolved
634 634 $ hg debugformat -R auto-upgrade | grep share-safe
635 635 share-safe: no
636 636
637 637 upgrade it to share-safe automatically
638 638
639 639 $ hg status -R auto-upgrade \
640 640 > --config format.use-share-safe.automatic-upgrade-of-mismatching-repositories=yes \
641 641 > --config format.use-share-safe=yes
642 642 automatically upgrading repository to the `share-safe` feature
643 643 (see `hg help config.format.use-share-safe` for details)
644 644 $ hg debugformat -R auto-upgrade | grep share-safe
645 645 share-safe: yes
646 646
647 647 downgrade it from share-safe automatically
648 648
649 649 $ hg status -R auto-upgrade \
650 650 > --config format.use-share-safe.automatic-upgrade-of-mismatching-repositories=yes \
651 651 > --config format.use-share-safe=no
652 652 automatically downgrading repository from the `share-safe` feature
653 653 (see `hg help config.format.use-share-safe` for details)
654 654 $ hg debugformat -R auto-upgrade | grep share-safe
655 655 share-safe: no