streamclone: add support for bundle2 based stream clone...
Boris Feld, r35781:7eedbd5d default
@@ -1,2144 +1,2147
1 1 # bundle2.py - generic container format to transmit arbitrary data.
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """Handling of the new bundle2 format
8 8
9 9 The goal of bundle2 is to act as an atomic packet to transmit a set of
10 10 payloads in an application agnostic way. It consists of a sequence of "parts"
11 11 that will be handed to and processed by the application layer.
12 12
13 13
14 14 General format architecture
15 15 ===========================
16 16
18 18 The format is architected as follows
18 18
19 19 - magic string
20 20 - stream level parameters
21 21 - payload parts (any number)
22 22 - end of stream marker.
23 23
24 24 The binary format
25 25 ============================
26 26
27 27 All numbers are unsigned and big-endian.
28 28
29 29 stream level parameters
30 30 ------------------------
31 31
32 32 The binary format is as follows
33 33
34 34 :params size: int32
35 35
36 36 The total number of bytes used by the parameters
37 37
38 38 :params value: arbitrary number of bytes
39 39
40 40 A blob of `params size` containing the serialized version of all stream level
41 41 parameters.
42 42
43 43 The blob contains a space separated list of parameters. Parameters with a
44 44 value are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45 45
46 46 Empty names are forbidden.
47 47
48 48 Names MUST start with a letter. If this first letter is lower case, the
49 49 parameter is advisory and can be safely ignored. However, when the first
50 50 letter is capital, the parameter is mandatory and the bundling process MUST
51 51 stop if it is not able to process it.
52 52
53 53 Stream parameters use a simple textual format for two main reasons:
54 54
55 55 - Stream level parameters should remain simple and we want to discourage any
56 56 crazy usage.
57 57 - Textual data allows easy human inspection of a bundle2 header in case of
58 58 troubles.
59 59
60 60 Any application level options MUST go into a bundle2 part instead.
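
For illustration (hypothetical values, not part of the specification), a
stream carrying the mandatory parameter `Compression=BZ` and the advisory
parameter `ack` would serialize its parameter blob as::

    Compression=BZ ack

preceded by an int32 size prefix of 18, the byte length of that blob.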
61 61
62 62 Payload part
63 63 ------------------------
64 64
65 65 The binary format is as follows
66 66
67 67 :header size: int32
68 68
69 69 The total number of bytes used by the part header. When the header is empty
70 70 (size = 0) this is interpreted as the end of stream marker.
71 71
72 72 :header:
73 73
74 74 The header defines how to interpret the part. It contains two pieces of
75 75 data: the part type, and the part parameters.
76 76
77 77 The part type is used to route to an application level handler that can
78 78 interpret the payload.
79 79
80 80 Part parameters are passed to the application level handler. They are
81 81 meant to convey information that will help the application level object to
82 82 interpret the part payload.
83 83
84 84 The binary format of the header is as follows
85 85
86 86 :typesize: (one byte)
87 87
88 88 :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)
89 89
90 90 :partid: A 32-bit integer (unique in the bundle) that can be used to refer
91 91 to this part.
92 92
93 93 :parameters:
94 94
95 95 The part's parameters may have arbitrary content; the binary structure is::
96 96
97 97 <mandatory-count><advisory-count><param-sizes><param-data>
98 98
99 99 :mandatory-count: 1 byte, number of mandatory parameters
100 100
101 101 :advisory-count: 1 byte, number of advisory parameters
102 102
103 103 :param-sizes:
104 104
105 105 N couples of bytes, where N is the total number of parameters. Each
106 106 couple contains (<size-of-key>, <size-of-value>) for one parameter.
107 107
108 108 :param-data:
109 109
110 110 A blob of bytes from which each parameter key and value can be
111 111 retrieved using the list of size couples stored in the previous
112 112 field.
113 113
114 114 Mandatory parameters come first, then the advisory ones.
115 115
116 116 Each parameter's key MUST be unique within the part.
117 117
118 118 :payload:
119 119
120 120 payload is a series of `<chunksize><chunkdata>`.
121 121
122 122 `chunksize` is an int32, `chunkdata` are plain bytes (as many as
123 123 `chunksize` says). The payload part is concluded by a zero size chunk.
124 124
125 125 The current implementation always produces either zero or one chunk.
126 126 This is an implementation limitation that will ultimately be lifted.
127 127
128 128 `chunksize` can be negative to trigger special case processing. No such
129 129 processing is in place yet.
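
A reader can therefore consume a part payload with a loop of this shape (a
minimal sketch under the framing above, not the actual implementation)::

    chunksize = struct.unpack('>i', read(4))[0]
    while chunksize > 0:
        process(read(chunksize))
        chunksize = struct.unpack('>i', read(4))[0]

with negative sizes (special case processing) checked before the plain read.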
130 130
131 131 Bundle processing
132 132 ============================
133 133
134 134 Each part is processed in order using a "part handler". Handlers are
135 135 registered for a certain part type.
136 136
137 137 The matching of a part to its handler is case insensitive. The case of the
138 138 part type is used to know if a part is mandatory or advisory. If the part
139 139 type contains any uppercase character it is considered mandatory. When no
140 140 handler is known for a mandatory part, the process is aborted and an
141 141 exception is raised. If the part is advisory and no handler is known, the
142 142 part is ignored. When the process is aborted, the full bundle is still read
143 143 from the stream to keep the channel usable. But none of the parts read
144 144 after an abort are processed. In the future, dropping the stream may become
145 145 an option for channels we do not care to preserve.
146 146 """
147 147
148 148 from __future__ import absolute_import, division
149 149
150 150 import errno
151 151 import os
152 152 import re
153 153 import string
154 154 import struct
155 155 import sys
156 156
157 157 from .i18n import _
158 158 from . import (
159 159 bookmarks,
160 160 changegroup,
161 161 error,
162 162 node as nodemod,
163 163 obsolete,
164 164 phases,
165 165 pushkey,
166 166 pycompat,
167 167 streamclone,
168 168 tags,
169 169 url,
170 170 util,
171 171 )
172 172
173 173 urlerr = util.urlerr
174 174 urlreq = util.urlreq
175 175
176 176 _pack = struct.pack
177 177 _unpack = struct.unpack
178 178
179 179 _fstreamparamsize = '>i'
180 180 _fpartheadersize = '>i'
181 181 _fparttypesize = '>B'
182 182 _fpartid = '>I'
183 183 _fpayloadsize = '>i'
184 184 _fpartparamcount = '>BB'
185 185
186 186 preferedchunksize = 4096
187 187
188 188 _parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')
189 189
190 190 def outdebug(ui, message):
191 191 """debug regarding output stream (bundling)"""
192 192 if ui.configbool('devel', 'bundle2.debug'):
193 193 ui.debug('bundle2-output: %s\n' % message)
194 194
195 195 def indebug(ui, message):
196 196 """debug on input stream (unbundling)"""
197 197 if ui.configbool('devel', 'bundle2.debug'):
198 198 ui.debug('bundle2-input: %s\n' % message)
199 199
200 200 def validateparttype(parttype):
201 201 """raise ValueError if a parttype contains invalid character"""
202 202 if _parttypeforbidden.search(parttype):
203 203 raise ValueError(parttype)
204 204
205 205 def _makefpartparamsizes(nbparams):
206 206 """return a struct format to read part parameter sizes
207 207
208 208 The number of parameters is variable so we need to build that format
209 209 dynamically.
210 210 """
211 211 return '>'+('BB'*nbparams)
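
# Illustration (not in the original source): _makefpartparamsizes(2) returns
# '>BBBB', letting a single struct.unpack call read the two
# (key size, value size) byte couples of a two-parameter part.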
212 212
213 213 parthandlermapping = {}
214 214
215 215 def parthandler(parttype, params=()):
216 216 """decorator that register a function as a bundle2 part handler
217 217
218 218 eg::
219 219
220 220 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
221 221 def myparttypehandler(...):
222 222 '''process a part of type "my part".'''
223 223 ...
224 224 """
225 225 validateparttype(parttype)
226 226 def _decorator(func):
227 227 lparttype = parttype.lower() # enforce lower case matching.
228 228 assert lparttype not in parthandlermapping
229 229 parthandlermapping[lparttype] = func
230 230 func.params = frozenset(params)
231 231 return func
232 232 return _decorator
233 233
234 234 class unbundlerecords(object):
235 235 """keep record of what happens during and unbundle
236 236
237 237 New records are added using `records.add('cat', obj)`, where 'cat' is a
238 238 category of record and obj is an arbitrary object.
239 239
240 240 `records['cat']` will return all entries of this category 'cat'.
241 241
242 242 Iterating on the object itself will yield `('category', obj)` tuples
243 243 for all entries.
244 244
245 245 All iterations happen in chronological order.
246 246 """
247 247
248 248 def __init__(self):
249 249 self._categories = {}
250 250 self._sequences = []
251 251 self._replies = {}
252 252
253 253 def add(self, category, entry, inreplyto=None):
254 254 """add a new record of a given category.
255 255
256 256 The entry can then be retrieved in the list returned by
257 257 self['category']."""
258 258 self._categories.setdefault(category, []).append(entry)
259 259 self._sequences.append((category, entry))
260 260 if inreplyto is not None:
261 261 self.getreplies(inreplyto).add(category, entry)
262 262
263 263 def getreplies(self, partid):
264 264 """get the records that are replies to a specific part"""
265 265 return self._replies.setdefault(partid, unbundlerecords())
266 266
267 267 def __getitem__(self, cat):
268 268 return tuple(self._categories.get(cat, ()))
269 269
270 270 def __iter__(self):
271 271 return iter(self._sequences)
272 272
273 273 def __len__(self):
274 274 return len(self._sequences)
275 275
276 276 def __nonzero__(self):
277 277 return bool(self._sequences)
278 278
279 279 __bool__ = __nonzero__
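
# Minimal usage sketch for the class above (hypothetical category and
# payload, for illustration only):
#
#   records = unbundlerecords()
#   records.add('changegroup', {'return': 1})
#   assert records['changegroup'] == ({'return': 1},)
#   assert list(records) == [('changegroup', {'return': 1})]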
280 280
281 281 class bundleoperation(object):
282 282 """an object that represents a single bundling process
283 283
284 284 Its purpose is to carry unbundle-related objects and states.
285 285
286 286 A new object should be created at the beginning of each bundle processing.
287 287 The object is to be returned by the processing function.
288 288
289 289 The object has very little content now; it will ultimately contain:
290 290 * an access to the repo the bundle is applied to,
291 291 * a ui object,
292 292 * a way to retrieve a transaction to add changes to the repo,
293 293 * a way to record the result of processing each part,
294 294 * a way to construct a bundle response when applicable.
295 295 """
296 296
297 297 def __init__(self, repo, transactiongetter, captureoutput=True):
298 298 self.repo = repo
299 299 self.ui = repo.ui
300 300 self.records = unbundlerecords()
301 301 self.reply = None
302 302 self.captureoutput = captureoutput
303 303 self.hookargs = {}
304 304 self._gettransaction = transactiongetter
305 305 # carries value that can modify part behavior
306 306 self.modes = {}
307 307
308 308 def gettransaction(self):
309 309 transaction = self._gettransaction()
310 310
311 311 if self.hookargs:
312 312 # the ones added to the transaction supersede those added
313 313 # to the operation.
314 314 self.hookargs.update(transaction.hookargs)
315 315 transaction.hookargs = self.hookargs
316 316
317 317 # mark the hookargs as flushed. further attempts to add to
318 318 # hookargs will result in an abort.
319 319 self.hookargs = None
320 320
321 321 return transaction
322 322
323 323 def addhookargs(self, hookargs):
324 324 if self.hookargs is None:
325 325 raise error.ProgrammingError('attempted to add hookargs to '
326 326 'operation after transaction started')
327 327 self.hookargs.update(hookargs)
328 328
329 329 class TransactionUnavailable(RuntimeError):
330 330 pass
331 331
332 332 def _notransaction():
333 333 """default method to get a transaction while processing a bundle
334 334
335 335 Raise an exception to highlight the fact that no transaction was expected
336 336 to be created"""
337 337 raise TransactionUnavailable()
338 338
339 339 def applybundle(repo, unbundler, tr, source=None, url=None, **kwargs):
340 340 # transform me into unbundler.apply() as soon as the freeze is lifted
341 341 if isinstance(unbundler, unbundle20):
342 342 tr.hookargs['bundle2'] = '1'
343 343 if source is not None and 'source' not in tr.hookargs:
344 344 tr.hookargs['source'] = source
345 345 if url is not None and 'url' not in tr.hookargs:
346 346 tr.hookargs['url'] = url
347 347 return processbundle(repo, unbundler, lambda: tr)
348 348 else:
349 349 # the transactiongetter won't be used, but we might as well set it
350 350 op = bundleoperation(repo, lambda: tr)
351 351 _processchangegroup(op, unbundler, tr, source, url, **kwargs)
352 352 return op
353 353
354 354 class partiterator(object):
355 355 def __init__(self, repo, op, unbundler):
356 356 self.repo = repo
357 357 self.op = op
358 358 self.unbundler = unbundler
359 359 self.iterator = None
360 360 self.count = 0
361 361 self.current = None
362 362
363 363 def __enter__(self):
364 364 def func():
365 365 itr = enumerate(self.unbundler.iterparts())
366 366 for count, p in itr:
367 367 self.count = count
368 368 self.current = p
369 369 yield p
370 370 p.consume()
371 371 self.current = None
372 372 self.iterator = func()
373 373 return self.iterator
374 374
375 375 def __exit__(self, type, exc, tb):
376 376 if not self.iterator:
377 377 return
378 378
379 379 # Only gracefully abort in a normal exception situation. User aborts
380 380 # like Ctrl+C throw a KeyboardInterrupt, which does not derive from
381 381 # Exception and should not be gracefully cleaned up.
382 382 if isinstance(exc, Exception):
383 383 # Any exceptions seeking to the end of the bundle at this point are
384 384 # almost certainly related to the underlying stream being bad.
385 385 # And, chances are that the exception we're handling is related to
386 386 # getting in that bad state. So, we swallow the seeking error and
387 387 # re-raise the original error.
388 388 seekerror = False
389 389 try:
390 390 if self.current:
391 391 # consume the part content to not corrupt the stream.
392 392 self.current.consume()
393 393
394 394 for part in self.iterator:
395 395 # consume the bundle content
396 396 part.consume()
397 397 except Exception:
398 398 seekerror = True
399 399
400 400 # Small hack to let caller code distinguish exceptions from bundle2
401 401 # processing from processing the old format. This is mostly needed
402 402 # to handle different return codes to unbundle according to the type
403 403 # of bundle. We should probably clean up or drop this return code
404 404 # craziness in a future version.
405 405 exc.duringunbundle2 = True
406 406 salvaged = []
407 407 replycaps = None
408 408 if self.op.reply is not None:
409 409 salvaged = self.op.reply.salvageoutput()
410 410 replycaps = self.op.reply.capabilities
411 411 exc._replycaps = replycaps
412 412 exc._bundle2salvagedoutput = salvaged
413 413
414 414 # Re-raising from a variable loses the original stack. So only use
415 415 # that form if we need to.
416 416 if seekerror:
417 417 raise exc
418 418
419 419 self.repo.ui.debug('bundle2-input-bundle: %i parts total\n' %
420 420 self.count)
421 421
422 422 def processbundle(repo, unbundler, transactiongetter=None, op=None):
423 423 """This function process a bundle, apply effect to/from a repo
424 424
425 425 It iterates over each part then searches for and uses the proper handling
426 426 code to process the part. Parts are processed in order.
427 427
428 428 An unknown mandatory part will abort the process.
429 429
430 430 It is temporarily possible to provide a prebuilt bundleoperation to the
431 431 function. This is used to ensure output is properly propagated in case of
432 432 an error during the unbundling. This output capturing part will likely be
433 433 reworked and this ability will probably go away in the process.
434 434 """
435 435 if op is None:
436 436 if transactiongetter is None:
437 437 transactiongetter = _notransaction
438 438 op = bundleoperation(repo, transactiongetter)
439 439 # todo:
440 440 # - replace this with an init function soon.
441 441 # - exception catching
442 442 unbundler.params
443 443 if repo.ui.debugflag:
444 444 msg = ['bundle2-input-bundle:']
445 445 if unbundler.params:
446 446 msg.append(' %i params' % len(unbundler.params))
447 447 if op._gettransaction is None or op._gettransaction is _notransaction:
448 448 msg.append(' no-transaction')
449 449 else:
450 450 msg.append(' with-transaction')
451 451 msg.append('\n')
452 452 repo.ui.debug(''.join(msg))
453 453
454 454 processparts(repo, op, unbundler)
455 455
456 456 return op
457 457
458 458 def processparts(repo, op, unbundler):
459 459 with partiterator(repo, op, unbundler) as parts:
460 460 for part in parts:
461 461 _processpart(op, part)
462 462
463 463 def _processchangegroup(op, cg, tr, source, url, **kwargs):
464 464 ret = cg.apply(op.repo, tr, source, url, **kwargs)
465 465 op.records.add('changegroup', {
466 466 'return': ret,
467 467 })
468 468 return ret
469 469
470 470 def _gethandler(op, part):
471 471 status = 'unknown' # used by debug output
472 472 try:
473 473 handler = parthandlermapping.get(part.type)
474 474 if handler is None:
475 475 status = 'unsupported-type'
476 476 raise error.BundleUnknownFeatureError(parttype=part.type)
477 477 indebug(op.ui, 'found a handler for part %s' % part.type)
478 478 unknownparams = part.mandatorykeys - handler.params
479 479 if unknownparams:
480 480 unknownparams = list(unknownparams)
481 481 unknownparams.sort()
482 482 status = 'unsupported-params (%s)' % ', '.join(unknownparams)
483 483 raise error.BundleUnknownFeatureError(parttype=part.type,
484 484 params=unknownparams)
485 485 status = 'supported'
486 486 except error.BundleUnknownFeatureError as exc:
487 487 if part.mandatory: # mandatory parts
488 488 raise
489 489 indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
490 490 return # skip to part processing
491 491 finally:
492 492 if op.ui.debugflag:
493 493 msg = ['bundle2-input-part: "%s"' % part.type]
494 494 if not part.mandatory:
495 495 msg.append(' (advisory)')
496 496 nbmp = len(part.mandatorykeys)
497 497 nbap = len(part.params) - nbmp
498 498 if nbmp or nbap:
499 499 msg.append(' (params:')
500 500 if nbmp:
501 501 msg.append(' %i mandatory' % nbmp)
502 502 if nbap:
503 503 msg.append(' %i advisory' % nbap)
504 504 msg.append(')')
505 505 msg.append(' %s\n' % status)
506 506 op.ui.debug(''.join(msg))
507 507
508 508 return handler
509 509
510 510 def _processpart(op, part):
511 511 """process a single part from a bundle
512 512
513 513 The part is guaranteed to have been fully consumed when the function exits
514 514 (even if an exception is raised)."""
515 515 handler = _gethandler(op, part)
516 516 if handler is None:
517 517 return
518 518
519 519 # handler is called outside the above try block so that we don't
520 520 # risk catching KeyErrors from anything other than the
521 521 # parthandlermapping lookup (any KeyError raised by handler()
522 522 # itself represents a defect of a different variety).
523 523 output = None
524 524 if op.captureoutput and op.reply is not None:
525 525 op.ui.pushbuffer(error=True, subproc=True)
526 526 output = ''
527 527 try:
528 528 handler(op, part)
529 529 finally:
530 530 if output is not None:
531 531 output = op.ui.popbuffer()
532 532 if output:
533 533 outpart = op.reply.newpart('output', data=output,
534 534 mandatory=False)
535 535 outpart.addparam(
536 536 'in-reply-to', pycompat.bytestr(part.id), mandatory=False)
537 537
538 538 def decodecaps(blob):
539 539 """decode a bundle2 caps bytes blob into a dictionary
540 540
541 541 The blob is a list of capabilities (one per line)
542 542 Capabilities may have values using a line of the form::
543 543
544 544 capability=value1,value2,value3
545 545
546 546 The values are always a list."""
547 547 caps = {}
548 548 for line in blob.splitlines():
549 549 if not line:
550 550 continue
551 551 if '=' not in line:
552 552 key, vals = line, ()
553 553 else:
554 554 key, vals = line.split('=', 1)
555 555 vals = vals.split(',')
556 556 key = urlreq.unquote(key)
557 557 vals = [urlreq.unquote(v) for v in vals]
558 558 caps[key] = vals
559 559 return caps
560 560
561 561 def encodecaps(caps):
562 562 """encode a bundle2 caps dictionary into a bytes blob"""
563 563 chunks = []
564 564 for ca in sorted(caps):
565 565 vals = caps[ca]
566 566 ca = urlreq.quote(ca)
567 567 vals = [urlreq.quote(v) for v in vals]
568 568 if vals:
569 569 ca = "%s=%s" % (ca, ','.join(vals))
570 570 chunks.append(ca)
571 571 return '\n'.join(chunks)
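
# Round-trip illustration (hypothetical capabilities): with
# caps = {'HG20': [], 'digests': ['sha1']}, encodecaps(caps) produces the
# blob 'HG20\ndigests=sha1', and decodecaps() of that blob rebuilds the same
# dictionary (values always come back as lists).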
572 572
573 573 bundletypes = {
574 574 "": ("", 'UN'), # only when using unbundle on ssh and old http servers
575 575 # since the unification ssh accepts a header but there
576 576 # is no capability signaling it.
577 577 "HG20": (), # special-cased below
578 578 "HG10UN": ("HG10UN", 'UN'),
579 579 "HG10BZ": ("HG10", 'BZ'),
580 580 "HG10GZ": ("HG10GZ", 'GZ'),
581 581 }
582 582
583 583 # hgweb uses this list to communicate its preferred type
584 584 bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']
585 585
586 586 class bundle20(object):
587 587 """represent an outgoing bundle2 container
588 588
589 589 Use the `addparam` method to add stream level parameters and `newpart` to
590 590 populate it. Then call `getchunks` to retrieve all the binary chunks of
591 591 data that compose the bundle2 container."""
592 592
593 593 _magicstring = 'HG20'
594 594
595 595 def __init__(self, ui, capabilities=()):
596 596 self.ui = ui
597 597 self._params = []
598 598 self._parts = []
599 599 self.capabilities = dict(capabilities)
600 600 self._compengine = util.compengines.forbundletype('UN')
601 601 self._compopts = None
602 602
603 603 def setcompression(self, alg, compopts=None):
604 604 """setup core part compression to <alg>"""
605 605 if alg in (None, 'UN'):
606 606 return
607 607 assert not any(n.lower() == 'compression' for n, v in self._params)
608 608 self.addparam('Compression', alg)
609 609 self._compengine = util.compengines.forbundletype(alg)
610 610 self._compopts = compopts
611 611
612 612 @property
613 613 def nbparts(self):
614 614 """total number of parts added to the bundler"""
615 615 return len(self._parts)
616 616
617 617 # methods used to define the bundle2 content
618 618 def addparam(self, name, value=None):
619 619 """add a stream level parameter"""
620 620 if not name:
621 621 raise ValueError(r'empty parameter name')
622 622 if name[0:1] not in pycompat.bytestr(string.ascii_letters):
623 623 raise ValueError(r'non letter first character: %s' % name)
624 624 self._params.append((name, value))
625 625
626 626 def addpart(self, part):
627 627 """add a new part to the bundle2 container
628 628
629 629 Parts contain the actual application payload."""
630 630 assert part.id is None
631 631 part.id = len(self._parts) # very cheap counter
632 632 self._parts.append(part)
633 633
634 634 def newpart(self, typeid, *args, **kwargs):
635 635 """create a new part and add it to the containers
636 636
637 637 As the part is directly added to the containers. For now, this means
638 638 that any failure to properly initialize the part after calling
639 639 ``newpart`` should result in a failure of the whole bundling process.
640 640
641 641 You can still fall back to manually create and add if you need better
642 642 control."""
643 643 part = bundlepart(typeid, *args, **kwargs)
644 644 self.addpart(part)
645 645 return part
646 646
647 647 # methods used to generate the bundle2 stream
648 648 def getchunks(self):
649 649 if self.ui.debugflag:
650 650 msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
651 651 if self._params:
652 652 msg.append(' (%i params)' % len(self._params))
653 653 msg.append(' %i parts total\n' % len(self._parts))
654 654 self.ui.debug(''.join(msg))
655 655 outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
656 656 yield self._magicstring
657 657 param = self._paramchunk()
658 658 outdebug(self.ui, 'bundle parameter: %s' % param)
659 659 yield _pack(_fstreamparamsize, len(param))
660 660 if param:
661 661 yield param
662 662 for chunk in self._compengine.compressstream(self._getcorechunk(),
663 663 self._compopts):
664 664 yield chunk
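
# Minimal emission sketch (assuming a ui object; the part type and data are
# hypothetical). The joined chunks form a complete bundle2 container:
#
#   bundler = bundle20(ui)
#   bundler.newpart('output', data='hello', mandatory=False)
#   blob = ''.join(bundler.getchunks())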
665 665
666 666 def _paramchunk(self):
667 667 """return a encoded version of all stream parameters"""
668 668 blocks = []
669 669 for par, value in self._params:
670 670 par = urlreq.quote(par)
671 671 if value is not None:
672 672 value = urlreq.quote(value)
673 673 par = '%s=%s' % (par, value)
674 674 blocks.append(par)
675 675 return ' '.join(blocks)
676 676
677 677 def _getcorechunk(self):
678 678 """yield chunk for the core part of the bundle
679 679
680 680 (all but headers and parameters)"""
681 681 outdebug(self.ui, 'start of parts')
682 682 for part in self._parts:
683 683 outdebug(self.ui, 'bundle part: "%s"' % part.type)
684 684 for chunk in part.getchunks(ui=self.ui):
685 685 yield chunk
686 686 outdebug(self.ui, 'end of bundle')
687 687 yield _pack(_fpartheadersize, 0)
688 688
689 689
690 690 def salvageoutput(self):
691 691 """return a list with a copy of all output parts in the bundle
692 692
693 693 This is meant to be used during error handling to make sure we preserve
694 694 server output"""
695 695 salvaged = []
696 696 for part in self._parts:
697 697 if part.type.startswith('output'):
698 698 salvaged.append(part.copy())
699 699 return salvaged
700 700
701 701
702 702 class unpackermixin(object):
703 703 """A mixin to extract bytes and struct data from a stream"""
704 704
705 705 def __init__(self, fp):
706 706 self._fp = fp
707 707
708 708 def _unpack(self, format):
709 709 """unpack this struct format from the stream
710 710
711 711 This method is meant for internal usage by the bundle2 protocol only.
712 712 It directly manipulates the low level stream, including bundle2 level
713 713 instructions.
714 714
715 715 Do not use it to implement higher-level logic or methods."""
716 716 data = self._readexact(struct.calcsize(format))
717 717 return _unpack(format, data)
718 718
719 719 def _readexact(self, size):
720 720 """read exactly <size> bytes from the stream
721 721
722 722 This method is meant for internal usage by the bundle2 protocol only.
723 723 It directly manipulates the low level stream, including bundle2 level
724 724 instructions.
725 725
726 726 Do not use it to implement higher-level logic or methods."""
727 727 return changegroup.readexactly(self._fp, size)
728 728
729 729 def getunbundler(ui, fp, magicstring=None):
730 730 """return a valid unbundler object for a given magicstring"""
731 731 if magicstring is None:
732 732 magicstring = changegroup.readexactly(fp, 4)
733 733 magic, version = magicstring[0:2], magicstring[2:4]
734 734 if magic != 'HG':
735 735 ui.debug(
736 736 "error: invalid magic: %r (version %r), should be 'HG'\n"
737 737 % (magic, version))
738 738 raise error.Abort(_('not a Mercurial bundle'))
739 739 unbundlerclass = formatmap.get(version)
740 740 if unbundlerclass is None:
741 741 raise error.Abort(_('unknown bundle version %s') % version)
742 742 unbundler = unbundlerclass(ui, fp)
743 743 indebug(ui, 'start processing of %s stream' % magicstring)
744 744 return unbundler
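
# Minimal consumption sketch (assuming `ui` and a binary stream `fp`
# positioned at the start of a bundle):
#
#   unbundler = getunbundler(ui, fp)
#   for part in unbundler.iterparts():
#       ...  # dispatch on part.type; each part is consumed automatically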
745 745
746 746 class unbundle20(unpackermixin):
747 747 """interpret a bundle2 stream
748 748
749 749 This class is fed with a binary stream and yields parts through its
750 750 `iterparts` method."""
751 751
752 752 _magicstring = 'HG20'
753 753
754 754 def __init__(self, ui, fp):
755 755 """If header is specified, we do not read it out of the stream."""
756 756 self.ui = ui
757 757 self._compengine = util.compengines.forbundletype('UN')
758 758 self._compressed = None
759 759 super(unbundle20, self).__init__(fp)
760 760
761 761 @util.propertycache
762 762 def params(self):
763 763 """dictionary of stream level parameters"""
764 764 indebug(self.ui, 'reading bundle2 stream parameters')
765 765 params = {}
766 766 paramssize = self._unpack(_fstreamparamsize)[0]
767 767 if paramssize < 0:
768 768 raise error.BundleValueError('negative bundle param size: %i'
769 769 % paramssize)
770 770 if paramssize:
771 771 params = self._readexact(paramssize)
772 772 params = self._processallparams(params)
773 773 return params
774 774
775 775 def _processallparams(self, paramsblock):
776 776 """"""
777 777 params = util.sortdict()
778 778 for p in paramsblock.split(' '):
779 779 p = p.split('=', 1)
780 780 p = [urlreq.unquote(i) for i in p]
781 781 if len(p) < 2:
782 782 p.append(None)
783 783 self._processparam(*p)
784 784 params[p[0]] = p[1]
785 785 return params
786 786
787 787
788 788 def _processparam(self, name, value):
789 789 """process a parameter, applying its effect if needed
790 790
791 791 Parameters starting with a lower case letter are advisory and will be
792 792 ignored when unknown. Those starting with an upper case letter are
793 793 mandatory; for these this function raises BundleUnknownFeatureError when unknown.
794 794
795 795 Note: no options are currently supported. Any input will either be
796 796 ignored or fail.
797 797 """
798 798 if not name:
799 799 raise ValueError(r'empty parameter name')
800 800 if name[0:1] not in pycompat.bytestr(string.ascii_letters):
801 801 raise ValueError(r'non letter first character: %s' % name)
802 802 try:
803 803 handler = b2streamparamsmap[name.lower()]
804 804 except KeyError:
805 805 if name[0:1].islower():
806 806 indebug(self.ui, "ignoring unknown parameter %s" % name)
807 807 else:
808 808 raise error.BundleUnknownFeatureError(params=(name,))
809 809 else:
810 810 handler(self, name, value)
811 811
812 812 def _forwardchunks(self):
813 813 """utility to transfer a bundle2 as binary
814 814
815 815 This is made necessary by the fact that the 'getbundle' command over 'ssh'
816 816 has no way to know when the reply ends, relying on the bundle to be
817 817 interpreted to find its end. This is terrible and we are sorry, but we
818 818 needed to move forward to get general delta enabled.
819 819 """
820 820 yield self._magicstring
821 821 assert 'params' not in vars(self)
822 822 paramssize = self._unpack(_fstreamparamsize)[0]
823 823 if paramssize < 0:
824 824 raise error.BundleValueError('negative bundle param size: %i'
825 825 % paramssize)
826 826 yield _pack(_fstreamparamsize, paramssize)
827 827 if paramssize:
828 828 params = self._readexact(paramssize)
829 829 self._processallparams(params)
830 830 yield params
831 831 assert self._compengine.bundletype == 'UN'
832 832 # From here, the payload might need to be decompressed
833 833 self._fp = self._compengine.decompressorreader(self._fp)
834 834 emptycount = 0
835 835 while emptycount < 2:
836 836 # so we can brainlessly loop
837 837 assert _fpartheadersize == _fpayloadsize
838 838 size = self._unpack(_fpartheadersize)[0]
839 839 yield _pack(_fpartheadersize, size)
840 840 if size:
841 841 emptycount = 0
842 842 else:
843 843 emptycount += 1
844 844 continue
845 845 if size == flaginterrupt:
846 846 continue
847 847 elif size < 0:
848 848 raise error.BundleValueError('negative chunk size: %i' % size)
849 849 yield self._readexact(size)
850 850
851 851
852 852 def iterparts(self, seekable=False):
853 853 """yield all parts contained in the stream"""
854 854 cls = seekableunbundlepart if seekable else unbundlepart
855 855 # make sure params have been loaded
856 856 self.params
857 857 # From here, the payload needs to be decompressed
858 858 self._fp = self._compengine.decompressorreader(self._fp)
859 859 indebug(self.ui, 'start extraction of bundle2 parts')
860 860 headerblock = self._readpartheader()
861 861 while headerblock is not None:
862 862 part = cls(self.ui, headerblock, self._fp)
863 863 yield part
864 864 # Ensure part is fully consumed so we can start reading the next
865 865 # part.
866 866 part.consume()
867 867
868 868 headerblock = self._readpartheader()
869 869 indebug(self.ui, 'end of bundle2 stream')
870 870
871 871 def _readpartheader(self):
872 872 """reads a part header size and return the bytes blob
873 873
874 874 returns None if empty"""
875 875 headersize = self._unpack(_fpartheadersize)[0]
876 876 if headersize < 0:
877 877 raise error.BundleValueError('negative part header size: %i'
878 878 % headersize)
879 879 indebug(self.ui, 'part header size: %i' % headersize)
880 880 if headersize:
881 881 return self._readexact(headersize)
882 882 return None
883 883
884 884 def compressed(self):
885 885 self.params # load params
886 886 return self._compressed
887 887
888 888 def close(self):
889 889 """close underlying file"""
890 890 if util.safehasattr(self._fp, 'close'):
891 891 return self._fp.close()
892 892
893 893 formatmap = {'20': unbundle20}
894 894
895 895 b2streamparamsmap = {}
896 896
897 897 def b2streamparamhandler(name):
898 898 """register a handler for a stream level parameter"""
899 899 def decorator(func):
900 900 assert name not in b2streamparamsmap
901 901 b2streamparamsmap[name] = func
902 902 return func
903 903 return decorator
904 904
905 905 @b2streamparamhandler('compression')
906 906 def processcompression(unbundler, param, value):
907 907 """read compression parameter and install payload decompression"""
908 908 if value not in util.compengines.supportedbundletypes:
909 909 raise error.BundleUnknownFeatureError(params=(param,),
910 910 values=(value,))
911 911 unbundler._compengine = util.compengines.forbundletype(value)
912 912 if value is not None:
913 913 unbundler._compressed = True
914 914
915 915 class bundlepart(object):
916 916 """A bundle2 part contains application level payload
917 917
918 918 The part `type` is used to route the part to the application level
919 919 handler.
920 920
921 921 The part payload is contained in ``part.data``. It could be raw bytes or a
922 922 generator of byte chunks.
923 923
924 924 You can add parameters to the part using the ``addparam`` method.
925 925 Parameters can be either mandatory (default) or advisory. Remote side
926 926 should be able to safely ignore the advisory ones.
927 927
928 928 Neither data nor parameters can be modified after generation has begun.
929 929 """
930 930
931 931 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
932 932 data='', mandatory=True):
933 933 validateparttype(parttype)
934 934 self.id = None
935 935 self.type = parttype
936 936 self._data = data
937 937 self._mandatoryparams = list(mandatoryparams)
938 938 self._advisoryparams = list(advisoryparams)
939 939 # checking for duplicated entries
940 940 self._seenparams = set()
941 941 for pname, __ in self._mandatoryparams + self._advisoryparams:
942 942 if pname in self._seenparams:
943 943 raise error.ProgrammingError('duplicated params: %s' % pname)
944 944 self._seenparams.add(pname)
945 945 # status of the part's generation:
946 946 # - None: not started,
947 947 # - False: currently generated,
948 948 # - True: generation done.
949 949 self._generated = None
950 950 self.mandatory = mandatory
951 951
952 952 def __repr__(self):
953 953 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
954 954 return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
955 955 % (cls, id(self), self.id, self.type, self.mandatory))
956 956
957 957 def copy(self):
958 958 """return a copy of the part
959 959
960 960 The new part has the very same content but no partid assigned yet.
961 961 Parts with generated data cannot be copied."""
962 962 assert not util.safehasattr(self.data, 'next')
963 963 return self.__class__(self.type, self._mandatoryparams,
964 964 self._advisoryparams, self._data, self.mandatory)
965 965
966 966 # methods used to define the part content
967 967 @property
968 968 def data(self):
969 969 return self._data
970 970
971 971 @data.setter
972 972 def data(self, data):
973 973 if self._generated is not None:
974 974 raise error.ReadOnlyPartError('part is being generated')
975 975 self._data = data
976 976
977 977 @property
978 978 def mandatoryparams(self):
979 979 # make it an immutable tuple to force people through ``addparam``
980 980 return tuple(self._mandatoryparams)
981 981
982 982 @property
983 983 def advisoryparams(self):
984 984 # make it an immutable tuple to force people through ``addparam``
985 985 return tuple(self._advisoryparams)
986 986
987 987 def addparam(self, name, value='', mandatory=True):
988 988 """add a parameter to the part
989 989
990 990 If 'mandatory' is set to True, the remote handler must claim support
991 991 for this parameter or the unbundling will be aborted.
992 992
993 993 The 'name' and 'value' cannot exceed 255 bytes each.
994 994 """
995 995 if self._generated is not None:
996 996 raise error.ReadOnlyPartError('part is being generated')
997 997 if name in self._seenparams:
998 998 raise ValueError('duplicated params: %s' % name)
999 999 self._seenparams.add(name)
1000 1000 params = self._advisoryparams
1001 1001 if mandatory:
1002 1002 params = self._mandatoryparams
1003 1003 params.append((name, value))
1004 1004
1005 1005 # methods used to generate the bundle2 stream
1006 1006 def getchunks(self, ui):
1007 1007 if self._generated is not None:
1008 1008 raise error.ProgrammingError('part can only be consumed once')
1009 1009 self._generated = False
1010 1010
1011 1011 if ui.debugflag:
1012 1012 msg = ['bundle2-output-part: "%s"' % self.type]
1013 1013 if not self.mandatory:
1014 1014 msg.append(' (advisory)')
1015 1015 nbmp = len(self.mandatoryparams)
1016 1016 nbap = len(self.advisoryparams)
1017 1017 if nbmp or nbap:
1018 1018 msg.append(' (params:')
1019 1019 if nbmp:
1020 1020 msg.append(' %i mandatory' % nbmp)
1021 1021 if nbap:
1022 1022 msg.append(' %i advisory' % nbap)
1023 1023 msg.append(')')
1024 1024 if not self.data:
1025 1025 msg.append(' empty payload')
1026 1026 elif (util.safehasattr(self.data, 'next')
1027 1027 or util.safehasattr(self.data, '__next__')):
1028 1028 msg.append(' streamed payload')
1029 1029 else:
1030 1030 msg.append(' %i bytes payload' % len(self.data))
1031 1031 msg.append('\n')
1032 1032 ui.debug(''.join(msg))
1033 1033
1034 1034 #### header
1035 1035 if self.mandatory:
1036 1036 parttype = self.type.upper()
1037 1037 else:
1038 1038 parttype = self.type.lower()
1039 1039 outdebug(ui, 'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1040 1040 ## parttype
1041 1041 header = [_pack(_fparttypesize, len(parttype)),
1042 1042 parttype, _pack(_fpartid, self.id),
1043 1043 ]
1044 1044 ## parameters
1045 1045 # count
1046 1046 manpar = self.mandatoryparams
1047 1047 advpar = self.advisoryparams
1048 1048 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1049 1049 # size
1050 1050 parsizes = []
1051 1051 for key, value in manpar:
1052 1052 parsizes.append(len(key))
1053 1053 parsizes.append(len(value))
1054 1054 for key, value in advpar:
1055 1055 parsizes.append(len(key))
1056 1056 parsizes.append(len(value))
1057 1057 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1058 1058 header.append(paramsizes)
1059 1059 # key, value
1060 1060 for key, value in manpar:
1061 1061 header.append(key)
1062 1062 header.append(value)
1063 1063 for key, value in advpar:
1064 1064 header.append(key)
1065 1065 header.append(value)
1066 1066 ## finalize header
1067 1067 try:
1068 1068 headerchunk = ''.join(header)
1069 1069 except TypeError:
1070 1070 raise TypeError(r'Found a non-bytes trying to '
1071 1071 r'build bundle part header: %r' % header)
1072 1072 outdebug(ui, 'header chunk size: %i' % len(headerchunk))
1073 1073 yield _pack(_fpartheadersize, len(headerchunk))
1074 1074 yield headerchunk
1075 1075 ## payload
1076 1076 try:
1077 1077 for chunk in self._payloadchunks():
1078 1078 outdebug(ui, 'payload chunk size: %i' % len(chunk))
1079 1079 yield _pack(_fpayloadsize, len(chunk))
1080 1080 yield chunk
1081 1081 except GeneratorExit:
1082 1082 # GeneratorExit means that nobody is listening for our
1083 1083 # results anyway, so just bail quickly rather than trying
1084 1084 # to produce an error part.
1085 1085 ui.debug('bundle2-generatorexit\n')
1086 1086 raise
1087 1087 except BaseException as exc:
1088 1088 bexc = util.forcebytestr(exc)
1089 1089 # backup exception data for later
1090 1090 ui.debug('bundle2-input-stream-interrupt: encoding exception %s\n'
1091 1091 % bexc)
1092 1092 tb = sys.exc_info()[2]
1093 1093 msg = 'unexpected error: %s' % bexc
1094 1094 interpart = bundlepart('error:abort', [('message', msg)],
1095 1095 mandatory=False)
1096 1096 interpart.id = 0
1097 1097 yield _pack(_fpayloadsize, -1)
1098 1098 for chunk in interpart.getchunks(ui=ui):
1099 1099 yield chunk
1100 1100 outdebug(ui, 'closing payload chunk')
1101 1101 # abort current part payload
1102 1102 yield _pack(_fpayloadsize, 0)
1103 1103 pycompat.raisewithtb(exc, tb)
1104 1104 # end of payload
1105 1105 outdebug(ui, 'closing payload chunk')
1106 1106 yield _pack(_fpayloadsize, 0)
1107 1107 self._generated = True
1108 1108
1109 1109 def _payloadchunks(self):
1110 1110 """yield chunks of a the part payload
1111 1111
1112 1112 Exists to handle the different methods to provide data to a part."""
1113 1113 # we only support fixed size data now.
1114 1114 # This will be improved in the future.
1115 1115 if (util.safehasattr(self.data, 'next')
1116 1116 or util.safehasattr(self.data, '__next__')):
1117 1117 buff = util.chunkbuffer(self.data)
1118 1118 chunk = buff.read(preferedchunksize)
1119 1119 while chunk:
1120 1120 yield chunk
1121 1121 chunk = buff.read(preferedchunksize)
1122 1122 elif len(self.data):
1123 1123 yield self.data
1124 1124
1125 1125
1126 1126 flaginterrupt = -1
1127 1127
1128 1128 class interrupthandler(unpackermixin):
1129 1129 """read one part and process it with restricted capability
1130 1130
1131 1131 This allows transmitting exceptions raised on the producer side during part
1132 1132 iteration while the consumer is reading a part.
1133 1133
1134 1134 Parts processed in this manner only have access to a ui object."""
1135 1135
1136 1136 def __init__(self, ui, fp):
1137 1137 super(interrupthandler, self).__init__(fp)
1138 1138 self.ui = ui
1139 1139
1140 1140 def _readpartheader(self):
1141 1141 """reads a part header size and return the bytes blob
1142 1142
1143 1143 returns None if empty"""
1144 1144 headersize = self._unpack(_fpartheadersize)[0]
1145 1145 if headersize < 0:
1146 1146 raise error.BundleValueError('negative part header size: %i'
1147 1147 % headersize)
1148 1148 indebug(self.ui, 'part header size: %i' % headersize)
1149 1149 if headersize:
1150 1150 return self._readexact(headersize)
1151 1151 return None
1152 1152
1153 1153 def __call__(self):
1154 1154
1155 1155 self.ui.debug('bundle2-input-stream-interrupt:'
1156 1156 ' opening out of band context\n')
1157 1157 indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
1158 1158 headerblock = self._readpartheader()
1159 1159 if headerblock is None:
1160 1160 indebug(self.ui, 'no part found during interruption.')
1161 1161 return
1162 1162 part = unbundlepart(self.ui, headerblock, self._fp)
1163 1163 op = interruptoperation(self.ui)
1164 1164 hardabort = False
1165 1165 try:
1166 1166 _processpart(op, part)
1167 1167 except (SystemExit, KeyboardInterrupt):
1168 1168 hardabort = True
1169 1169 raise
1170 1170 finally:
1171 1171 if not hardabort:
1172 1172 part.consume()
1173 1173 self.ui.debug('bundle2-input-stream-interrupt:'
1174 1174 ' closing out of band context\n')
1175 1175
1176 1176 class interruptoperation(object):
1177 1177 """A limited operation to be use by part handler during interruption
1178 1178
1179 1179 It only have access to an ui object.
1180 1180 """
1181 1181
1182 1182 def __init__(self, ui):
1183 1183 self.ui = ui
1184 1184 self.reply = None
1185 1185 self.captureoutput = False
1186 1186
1187 1187 @property
1188 1188 def repo(self):
1189 1189 raise error.ProgrammingError('no repo access from stream interruption')
1190 1190
1191 1191 def gettransaction(self):
1192 1192 raise TransactionUnavailable('no repo access from stream interruption')
1193 1193
1194 1194 def decodepayloadchunks(ui, fh):
1195 1195 """Reads bundle2 part payload data into chunks.
1196 1196
1197 1197 Part payload data consists of framed chunks. This function takes
1198 1198 a file handle and emits those chunks.
1199 1199 """
1200 1200 dolog = ui.configbool('devel', 'bundle2.debug')
1201 1201 debug = ui.debug
1202 1202
1203 1203 headerstruct = struct.Struct(_fpayloadsize)
1204 1204 headersize = headerstruct.size
1205 1205 unpack = headerstruct.unpack
1206 1206
1207 1207 readexactly = changegroup.readexactly
1208 1208 read = fh.read
1209 1209
1210 1210 chunksize = unpack(readexactly(fh, headersize))[0]
1211 1211 indebug(ui, 'payload chunk size: %i' % chunksize)
1212 1212
1213 1213 # changegroup.readexactly() is inlined below for performance.
1214 1214 while chunksize:
1215 1215 if chunksize >= 0:
1216 1216 s = read(chunksize)
1217 1217 if len(s) < chunksize:
1218 1218 raise error.Abort(_('stream ended unexpectedly'
1219 1219 ' (got %d bytes, expected %d)') %
1220 1220 (len(s), chunksize))
1221 1221
1222 1222 yield s
1223 1223 elif chunksize == flaginterrupt:
1224 1224 # Interrupt "signal" detected. The regular stream is interrupted
1225 1225 # and a bundle2 part follows. Consume it.
1226 1226 interrupthandler(ui, fh)()
1227 1227 else:
1228 1228 raise error.BundleValueError(
1229 1229 'negative payload chunk size: %s' % chunksize)
1230 1230
1231 1231 s = read(headersize)
1232 1232 if len(s) < headersize:
1233 1233 raise error.Abort(_('stream ended unexpectedly'
1234 1234 ' (got %d bytes, expected %d)') %
1235 1235 (len(s), headersize))
1236 1236
1237 1237 chunksize = unpack(s)[0]
1238 1238
1239 1239 # indebug() inlined for performance.
1240 1240 if dolog:
1241 1241 debug('bundle2-input: payload chunk size: %i\n' % chunksize)
1242 1242
1243 1243 class unbundlepart(unpackermixin):
1244 1244 """a bundle part read from a bundle"""
1245 1245
1246 1246 def __init__(self, ui, header, fp):
1247 1247 super(unbundlepart, self).__init__(fp)
1248 1248 self._seekable = (util.safehasattr(fp, 'seek') and
1249 1249 util.safehasattr(fp, 'tell'))
1250 1250 self.ui = ui
1251 1251 # unbundle state attr
1252 1252 self._headerdata = header
1253 1253 self._headeroffset = 0
1254 1254 self._initialized = False
1255 1255 self.consumed = False
1256 1256 # part data
1257 1257 self.id = None
1258 1258 self.type = None
1259 1259 self.mandatoryparams = None
1260 1260 self.advisoryparams = None
1261 1261 self.params = None
1262 1262 self.mandatorykeys = ()
1263 1263 self._readheader()
1264 1264 self._mandatory = None
1265 1265 self._pos = 0
1266 1266
1267 1267 def _fromheader(self, size):
1268 1268 """return the next <size> byte from the header"""
1269 1269 offset = self._headeroffset
1270 1270 data = self._headerdata[offset:(offset + size)]
1271 1271 self._headeroffset = offset + size
1272 1272 return data
1273 1273
1274 1274 def _unpackheader(self, format):
1275 1275 """read given format from header
1276 1276
1277 1277 This automatically computes the size of the format to read.
1278 1278 data = self._fromheader(struct.calcsize(format))
1279 1279 return _unpack(format, data)
1280 1280
1281 1281 def _initparams(self, mandatoryparams, advisoryparams):
1282 1282 """internal function to setup all logic related parameters"""
1283 1283 # make it read only to prevent people touching it by mistake.
1284 1284 self.mandatoryparams = tuple(mandatoryparams)
1285 1285 self.advisoryparams = tuple(advisoryparams)
1286 1286 # user friendly UI
1287 1287 self.params = util.sortdict(self.mandatoryparams)
1288 1288 self.params.update(self.advisoryparams)
1289 1289 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1290 1290
1291 1291 def _readheader(self):
1292 1292 """read the header and setup the object"""
1293 1293 typesize = self._unpackheader(_fparttypesize)[0]
1294 1294 self.type = self._fromheader(typesize)
1295 1295 indebug(self.ui, 'part type: "%s"' % self.type)
1296 1296 self.id = self._unpackheader(_fpartid)[0]
1297 1297 indebug(self.ui, 'part id: "%s"' % pycompat.bytestr(self.id))
1298 1298 # extract mandatory bit from type
1299 1299 self.mandatory = (self.type != self.type.lower())
1300 1300 self.type = self.type.lower()
1301 1301 ## reading parameters
1302 1302 # param count
1303 1303 mancount, advcount = self._unpackheader(_fpartparamcount)
1304 1304 indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
1305 1305 # param size
1306 1306 fparamsizes = _makefpartparamsizes(mancount + advcount)
1307 1307 paramsizes = self._unpackheader(fparamsizes)
1308 1308 # make it a list of couples again
1309 1309 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1310 1310 # split mandatory from advisory
1311 1311 mansizes = paramsizes[:mancount]
1312 1312 advsizes = paramsizes[mancount:]
1313 1313 # retrieve param value
1314 1314 manparams = []
1315 1315 for key, value in mansizes:
1316 1316 manparams.append((self._fromheader(key), self._fromheader(value)))
1317 1317 advparams = []
1318 1318 for key, value in advsizes:
1319 1319 advparams.append((self._fromheader(key), self._fromheader(value)))
1320 1320 self._initparams(manparams, advparams)
1321 1321 ## part payload
1322 1322 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1323 1323 # we read the data, tell it
1324 1324 self._initialized = True
1325 1325
1326 1326 def _payloadchunks(self):
1327 1327 """Generator of decoded chunks in the payload."""
1328 1328 return decodepayloadchunks(self.ui, self._fp)
1329 1329
1330 1330 def consume(self):
1331 1331 """Read the part payload until completion.
1332 1332
1333 1333 By consuming the part data, the underlying stream read offset will
1334 1334 be advanced to the next part (or end of stream).
1335 1335 """
1336 1336 if self.consumed:
1337 1337 return
1338 1338
1339 1339 chunk = self.read(32768)
1340 1340 while chunk:
1341 1341 self._pos += len(chunk)
1342 1342 chunk = self.read(32768)
1343 1343
1344 1344 def read(self, size=None):
1345 1345 """read payload data"""
1346 1346 if not self._initialized:
1347 1347 self._readheader()
1348 1348 if size is None:
1349 1349 data = self._payloadstream.read()
1350 1350 else:
1351 1351 data = self._payloadstream.read(size)
1352 1352 self._pos += len(data)
1353 1353 if size is None or len(data) < size:
1354 1354 if not self.consumed and self._pos:
1355 1355 self.ui.debug('bundle2-input-part: total payload size %i\n'
1356 1356 % self._pos)
1357 1357 self.consumed = True
1358 1358 return data
1359 1359
1360 1360 class seekableunbundlepart(unbundlepart):
1361 1361 """A bundle2 part in a bundle that is seekable.
1362 1362
1363 1363 Regular ``unbundlepart`` instances can only be read once. This class
1364 1364 extends ``unbundlepart`` to enable bi-directional seeking within the
1365 1365 part.
1366 1366
1367 1367 Bundle2 part data consists of framed chunks. Offsets when seeking
1368 1368 refer to the decoded data, not the offsets in the underlying bundle2
1369 1369 stream.
1370 1370
1371 1371 To facilitate quickly seeking within the decoded data, instances of this
1372 1372 class maintain a mapping between offsets in the underlying stream and
1373 1373 the decoded payload. This mapping will consume memory in proportion
1374 1374 to the number of chunks within the payload (which almost certainly
1375 1375 increases in proportion with the size of the part).
1376 1376 """
1377 1377 def __init__(self, ui, header, fp):
1378 1378 # (payload, file) offsets for chunk starts.
1379 1379 self._chunkindex = []
1380 1380
1381 1381 super(seekableunbundlepart, self).__init__(ui, header, fp)
1382 1382
1383 1383 def _payloadchunks(self, chunknum=0):
1384 1384 '''seek to specified chunk and start yielding data'''
1385 1385 if len(self._chunkindex) == 0:
1386 1386 assert chunknum == 0, 'Must start with chunk 0'
1387 1387 self._chunkindex.append((0, self._tellfp()))
1388 1388 else:
1389 1389 assert chunknum < len(self._chunkindex), \
1390 1390 'Unknown chunk %d' % chunknum
1391 1391 self._seekfp(self._chunkindex[chunknum][1])
1392 1392
1393 1393 pos = self._chunkindex[chunknum][0]
1394 1394
1395 1395 for chunk in decodepayloadchunks(self.ui, self._fp):
1396 1396 chunknum += 1
1397 1397 pos += len(chunk)
1398 1398 if chunknum == len(self._chunkindex):
1399 1399 self._chunkindex.append((pos, self._tellfp()))
1400 1400
1401 1401 yield chunk
1402 1402
1403 1403 def _findchunk(self, pos):
1404 1404 '''for a given payload position, return a chunk number and offset'''
1405 1405 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1406 1406 if ppos == pos:
1407 1407 return chunk, 0
1408 1408 elif ppos > pos:
1409 1409 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1410 1410 raise ValueError('Unknown chunk')
1411 1411
1412 1412 def tell(self):
1413 1413 return self._pos
1414 1414
1415 1415 def seek(self, offset, whence=os.SEEK_SET):
1416 1416 if whence == os.SEEK_SET:
1417 1417 newpos = offset
1418 1418 elif whence == os.SEEK_CUR:
1419 1419 newpos = self._pos + offset
1420 1420 elif whence == os.SEEK_END:
1421 1421 if not self.consumed:
1422 1422 # Can't use self.consume() here because it advances self._pos.
1423 1423 chunk = self.read(32768)
1424 1424 while chunk:
1425 1425 chunk = self.read(32768)
1426 1426 newpos = self._chunkindex[-1][0] - offset
1427 1427 else:
1428 1428 raise ValueError('Unknown whence value: %r' % (whence,))
1429 1429
1430 1430 if newpos > self._chunkindex[-1][0] and not self.consumed:
1431 1431 # Can't use self.consume() here because it advances self._pos.
1432 1432 chunk = self.read(32768)
1433 1433 while chunk:
1434 1434 chunk = self.read(32768)
1435 1435
1436 1436 if not 0 <= newpos <= self._chunkindex[-1][0]:
1437 1437 raise ValueError('Offset out of range')
1438 1438
1439 1439 if self._pos != newpos:
1440 1440 chunk, internaloffset = self._findchunk(newpos)
1441 1441 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1442 1442 adjust = self.read(internaloffset)
1443 1443 if len(adjust) != internaloffset:
1444 1444 raise error.Abort(_('Seek failed\n'))
1445 1445 self._pos = newpos
1446 1446
1447 1447 def _seekfp(self, offset, whence=0):
1448 1448 """move the underlying file pointer
1449 1449
1450 1450 This method is meant for internal usage by the bundle2 protocol only.
1451 1451 It directly manipulates the low level stream, including bundle2 level
1452 1452 instructions.
1453 1453
1454 1454 Do not use it to implement higher-level logic or methods."""
1455 1455 if self._seekable:
1456 1456 return self._fp.seek(offset, whence)
1457 1457 else:
1458 1458 raise NotImplementedError(_('File pointer is not seekable'))
1459 1459
1460 1460 def _tellfp(self):
1461 1461 """return the file offset, or None if file is not seekable
1462 1462
1463 1463 This method is meant for internal usage by the bundle2 protocol only.
1464 1464 It directly manipulates the low level stream, including bundle2 level
1465 1465 instructions.
1466 1466
1467 1467 Do not use it to implement higher-level logic or methods."""
1468 1468 if self._seekable:
1469 1469 try:
1470 1470 return self._fp.tell()
1471 1471 except IOError as e:
1472 1472 if e.errno == errno.ESPIPE:
1473 1473 self._seekable = False
1474 1474 else:
1475 1475 raise
1476 1476 return None
1477 1477
1478 1478 # These are only the static capabilities.
1479 1479 # Check the 'getrepocaps' function for the rest.
1480 1480 capabilities = {'HG20': (),
1481 1481 'bookmarks': (),
1482 1482 'error': ('abort', 'unsupportedcontent', 'pushraced',
1483 1483 'pushkey'),
1484 1484 'listkeys': (),
1485 1485 'pushkey': (),
1486 1486 'digests': tuple(sorted(util.DIGESTS.keys())),
1487 1487 'remote-changegroup': ('http', 'https'),
1488 1488 'hgtagsfnodes': (),
1489 1489 'phases': ('heads',),
1490 'stream': ('v2',),
1490 1491 }
1491 1492
1492 1493 def getrepocaps(repo, allowpushback=False):
1493 1494 """return the bundle2 capabilities for a given repo
1494 1495
1495 1496 Exists to allow extensions (like evolution) to mutate the capabilities.
1496 1497 """
1497 1498 caps = capabilities.copy()
1498 1499 caps['changegroup'] = tuple(sorted(
1499 1500 changegroup.supportedincomingversions(repo)))
1500 1501 if obsolete.isenabled(repo, obsolete.exchangeopt):
1501 1502 supportedformat = tuple('V%i' % v for v in obsolete.formats)
1502 1503 caps['obsmarkers'] = supportedformat
1503 1504 if allowpushback:
1504 1505 caps['pushback'] = ()
1505 1506 cpmode = repo.ui.config('server', 'concurrent-push-mode')
1506 1507 if cpmode == 'check-related':
1507 1508 caps['checkheads'] = ('related',)
1508 1509 if 'phases' in repo.ui.configlist('devel', 'legacy.exchange'):
1509 1510 caps.pop('phases')
1511 if not repo.ui.configbool('experimental', 'bundle2.stream'):
1512 caps.pop('stream')
1510 1513 return caps
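
# Illustrative, repository-dependent result (hedged): getrepocaps(repo) may
# return something like {'HG20': (), 'changegroup': ('01', '02'),
# 'stream': ('v2',), ...}, with 'stream' dropped unless the
# experimental.bundle2.stream config knob is enabled.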
1511 1514
1512 1515 def bundle2caps(remote):
1513 1516 """return the bundle capabilities of a peer as dict"""
1514 1517 raw = remote.capable('bundle2')
1515 1518 if not raw and raw != '':
1516 1519 return {}
1517 1520 capsblob = urlreq.unquote(remote.capable('bundle2'))
1518 1521 return decodecaps(capsblob)
1519 1522
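bundle2caps unquotes the blob advertised under the remote's 'bundle2' capability and hands it to decodecaps. A rough sketch of a decoder for that layout, assuming the newline-separated 'name=v1,v2' format with urlquoted names and values:

try:
    from urllib.parse import unquote    # Python 3
except ImportError:
    from urllib import unquote          # Python 2

def decodecapsblob(blob):
    caps = {}
    for line in blob.splitlines():
        if '=' not in line:
            key, values = line, ()
        else:
            key, vals = line.split('=', 1)
            values = tuple(unquote(v) for v in vals.split(','))
        caps[unquote(key)] = values
    return caps

# decodecapsblob('HG20\nchangegroup=01,02')
#   -> {'HG20': (), 'changegroup': ('01', '02')}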
1520 1523 def obsmarkersversion(caps):
1521 1524 """extract the list of supported obsmarkers versions from a bundle2caps dict
1522 1525 """
1523 1526 obscaps = caps.get('obsmarkers', ())
1524 1527 return [int(c[1:]) for c in obscaps if c.startswith('V')]
1525 1528
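A quick illustration of the extraction above:

# obsmarkersversion({'obsmarkers': ('V0', 'V1')}) == [0, 1]
# obsmarkersversion({})                           == []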
1526 1529 def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
1527 1530 vfs=None, compression=None, compopts=None):
1528 1531 if bundletype.startswith('HG10'):
1529 1532 cg = changegroup.makechangegroup(repo, outgoing, '01', source)
1530 1533 return writebundle(ui, cg, filename, bundletype, vfs=vfs,
1531 1534 compression=compression, compopts=compopts)
1532 1535 elif not bundletype.startswith('HG20'):
1533 1536 raise error.ProgrammingError('unknown bundle type: %s' % bundletype)
1534 1537
1535 1538 caps = {}
1536 1539 if 'obsolescence' in opts:
1537 1540 caps['obsmarkers'] = ('V1',)
1538 1541 bundle = bundle20(ui, caps)
1539 1542 bundle.setcompression(compression, compopts)
1540 1543 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1541 1544 chunkiter = bundle.getchunks()
1542 1545
1543 1546 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1544 1547
1545 1548 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1546 1549 # We should eventually reconcile this logic with the one behind
1547 1550 # 'exchange.getbundle2partsgenerator'.
1548 1551 #
1549 1552 # The types of input from 'getbundle' and 'writenewbundle' are a bit
1550 1553 # different right now. So we keep them separated for now for the sake of
1551 1554 # simplicity.
1552 1555
1553 1556 # we always want a changegroup in such bundle
1554 1557 cgversion = opts.get('cg.version')
1555 1558 if cgversion is None:
1556 1559 cgversion = changegroup.safeversion(repo)
1557 1560 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1558 1561 part = bundler.newpart('changegroup', data=cg.getchunks())
1559 1562 part.addparam('version', cg.version)
1560 1563 if 'clcount' in cg.extras:
1561 1564 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1562 1565 mandatory=False)
1563 1566 if opts.get('phases') and repo.revs('%ln and secret()',
1564 1567 outgoing.missingheads):
1565 1568 part.addparam('targetphase', '%d' % phases.secret, mandatory=False)
1566 1569
1567 1570 addparttagsfnodescache(repo, bundler, outgoing)
1568 1571
1569 1572 if opts.get('obsolescence', False):
1570 1573 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1571 1574 buildobsmarkerspart(bundler, obsmarkers)
1572 1575
1573 1576 if opts.get('phases', False):
1574 1577 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1575 1578 phasedata = phases.binaryencode(headsbyphase)
1576 1579 bundler.newpart('phase-heads', data=phasedata)
1577 1580
1578 1581 def addparttagsfnodescache(repo, bundler, outgoing):
1579 1582 # we include the tags fnode cache for the bundle changeset
1580 1583 # (as an optional part)
1581 1584 cache = tags.hgtagsfnodescache(repo.unfiltered())
1582 1585 chunks = []
1583 1586
1584 1587 # .hgtags fnodes are only relevant for head changesets. While we could
1585 1588 # transfer values for all known nodes, there will likely be little to
1586 1589 # no benefit.
1587 1590 #
1588 1591 # We don't bother using a generator to produce output data because
1589 1592 # a) we only have 40 bytes per head and even esoteric numbers of heads
1590 1593 # consume little memory (1M heads is 40MB) b) we don't want to send the
1591 1594 # part if we don't have entries and knowing if we have entries requires
1592 1595 # cache lookups.
1593 1596 for node in outgoing.missingheads:
1594 1597 # Don't compute missing, as this may slow down serving.
1595 1598 fnode = cache.getfnode(node, computemissing=False)
1596 1599 if fnode is not None:
1597 1600 chunks.extend([node, fnode])
1598 1601
1599 1602 if chunks:
1600 1603 bundler.newpart('hgtagsfnodes', data=''.join(chunks))
1601 1604
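The 'hgtagsfnodes' payload built above is a flat concatenation of fixed-width records, 40 bytes per head. A self-contained sketch of that framing (20-byte binary nodes assumed, as in Mercurial):

def encodefnodes(pairs):
    # Each record is <20-byte changeset node><20-byte .hgtags filenode>.
    chunks = []
    for node, fnode in pairs:
        assert len(node) == 20 and len(fnode) == 20
        chunks.extend([node, fnode])
    return b''.join(chunks)

def decodefnodes(data):
    # Inverse operation, as performed by the 'hgtagsfnodes' part handler.
    for i in range(0, len(data), 40):
        yield data[i:i + 20], data[i + 20:i + 40]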
1602 1605 def buildobsmarkerspart(bundler, markers):
1603 1606 """add an obsmarker part to the bundler with <markers>
1604 1607
1605 1608 No part is created if markers is empty.
1606 1609 Raises ValueError if the bundler doesn't support any known obsmarker format.
1607 1610 """
1608 1611 if not markers:
1609 1612 return None
1610 1613
1611 1614 remoteversions = obsmarkersversion(bundler.capabilities)
1612 1615 version = obsolete.commonversion(remoteversions)
1613 1616 if version is None:
1614 1617 raise ValueError('bundler does not support common obsmarker format')
1615 1618 stream = obsolete.encodemarkers(markers, True, version=version)
1616 1619 return bundler.newpart('obsmarkers', data=stream)
1617 1620
1618 1621 def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
1619 1622 compopts=None):
1620 1623 """Write a bundle file and return its filename.
1621 1624
1622 1625 Existing files will not be overwritten.
1623 1626 If no filename is specified, a temporary file is created.
1624 1627 bz2 compression can be turned off.
1625 1628 The bundle file will be deleted in case of errors.
1626 1629 """
1627 1630
1628 1631 if bundletype == "HG20":
1629 1632 bundle = bundle20(ui)
1630 1633 bundle.setcompression(compression, compopts)
1631 1634 part = bundle.newpart('changegroup', data=cg.getchunks())
1632 1635 part.addparam('version', cg.version)
1633 1636 if 'clcount' in cg.extras:
1634 1637 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1635 1638 mandatory=False)
1636 1639 chunkiter = bundle.getchunks()
1637 1640 else:
1638 1641 # compression argument is only for the bundle2 case
1639 1642 assert compression is None
1640 1643 if cg.version != '01':
1641 1644 raise error.Abort(_('old bundle types only support v1 '
1642 1645 'changegroups'))
1643 1646 header, comp = bundletypes[bundletype]
1644 1647 if comp not in util.compengines.supportedbundletypes:
1645 1648 raise error.Abort(_('unknown stream compression type: %s')
1646 1649 % comp)
1647 1650 compengine = util.compengines.forbundletype(comp)
1648 1651 def chunkiter():
1649 1652 yield header
1650 1653 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1651 1654 yield chunk
1652 1655 chunkiter = chunkiter()
1653 1656
1654 1657 # parse the changegroup data, otherwise we will block
1655 1658 # in case of sshrepo because we don't know the end of the stream
1656 1659 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1657 1660
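The legacy branch above streams the changegroup through a compression engine one chunk at a time, prefixed by a header naming the bundle type. A simplified sketch of that generator pattern, with zlib standing in for the engine (close to, but not guaranteed byte-exact with, real HG10GZ output):

import zlib

def legacychunks(cgchunks, header=b'HG10GZ'):
    # Emit the type header, then the compressed payload, incrementally,
    # so memory stays flat even for very large changegroups.
    yield header
    compressor = zlib.compressobj()
    for chunk in cgchunks:
        data = compressor.compress(chunk)
        if data:
            yield data
    yield compressor.flush()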
1658 1661 def combinechangegroupresults(op):
1659 1662 """logic to combine 0 or more addchangegroup results into one"""
1660 1663 results = [r.get('return', 0)
1661 1664 for r in op.records['changegroup']]
1662 1665 changedheads = 0
1663 1666 result = 1
1664 1667 for ret in results:
1665 1668 # If any changegroup result is 0, return 0
1666 1669 if ret == 0:
1667 1670 result = 0
1668 1671 break
1669 1672 if ret < -1:
1670 1673 changedheads += ret + 1
1671 1674 elif ret > 1:
1672 1675 changedheads += ret - 1
1673 1676 if changedheads > 0:
1674 1677 result = 1 + changedheads
1675 1678 elif changedheads < 0:
1676 1679 result = -1 + changedheads
1677 1680 return result
1678 1681
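The integer encoding combined above packs the head delta into one value: 0 means failure, 1 means success with no head change, 1+n means n heads added, and -1-n means n heads removed. A standalone restatement with a few worked cases:

def combine(results):
    changedheads = 0
    for ret in results:
        if ret == 0:
            return 0                 # any failed changegroup wins
        if ret < -1:
            changedheads += ret + 1  # ret = -1 - n  =>  n heads removed
        elif ret > 1:
            changedheads += ret - 1  # ret = 1 + n   =>  n heads added
    if changedheads > 0:
        return 1 + changedheads
    if changedheads < 0:
        return -1 + changedheads
    return 1

assert combine([3, -2]) == 2    # +2 heads then -1 head: net +1
assert combine([1, 1]) == 1     # success, heads unchanged
assert combine([3, 0]) == 0     # any zero result means failure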
1679 1682 @parthandler('changegroup', ('version', 'nbchanges', 'treemanifest',
1680 1683 'targetphase'))
1681 1684 def handlechangegroup(op, inpart):
1682 1685 """apply a changegroup part on the repo
1683 1686
1684 1687 This is a very early implementation that will see massive rework before
1685 1688 being inflicted on any end-user.
1686 1689 """
1687 1690 tr = op.gettransaction()
1688 1691 unpackerversion = inpart.params.get('version', '01')
1689 1692 # We should raise an appropriate exception here
1690 1693 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1691 1694 # the source and url passed here are overwritten by the ones contained
1692 1695 # in the transaction.hookargs argument. So 'bundle2' is a placeholder
1693 1696 nbchangesets = None
1694 1697 if 'nbchanges' in inpart.params:
1695 1698 nbchangesets = int(inpart.params.get('nbchanges'))
1696 1699 if ('treemanifest' in inpart.params and
1697 1700 'treemanifest' not in op.repo.requirements):
1698 1701 if len(op.repo.changelog) != 0:
1699 1702 raise error.Abort(_(
1700 1703 "bundle contains tree manifests, but local repo is "
1701 1704 "non-empty and does not use tree manifests"))
1702 1705 op.repo.requirements.add('treemanifest')
1703 1706 op.repo._applyopenerreqs()
1704 1707 op.repo._writerequirements()
1705 1708 extrakwargs = {}
1706 1709 targetphase = inpart.params.get('targetphase')
1707 1710 if targetphase is not None:
1708 1711 extrakwargs['targetphase'] = int(targetphase)
1709 1712 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2',
1710 1713 expectedtotal=nbchangesets, **extrakwargs)
1711 1714 if op.reply is not None:
1712 1715 # This is definitely not the final form of this
1713 1716 # return. But one needs to start somewhere.
1714 1717 part = op.reply.newpart('reply:changegroup', mandatory=False)
1715 1718 part.addparam(
1716 1719 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1717 1720 part.addparam('return', '%i' % ret, mandatory=False)
1718 1721 assert not inpart.read()
1719 1722
1720 1723 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1721 1724 ['digest:%s' % k for k in util.DIGESTS.keys()])
1722 1725 @parthandler('remote-changegroup', _remotechangegroupparams)
1723 1726 def handleremotechangegroup(op, inpart):
1724 1727 """apply a bundle10 on the repo, given an url and validation information
1725 1728
1726 1729 All the information about the remote bundle to import is given as
1727 1730 parameters. The parameters include:
1728 1731 - url: the url to the bundle10.
1729 1732 - size: the bundle10 file size. It is used to validate what was
1730 1733 retrieved by the client matches the server knowledge about the bundle.
1731 1734 - digests: a space separated list of the digest types provided as
1732 1735 parameters.
1733 1736 - digest:<digest-type>: the hexadecimal representation of the digest with
1734 1737 that name. Like the size, it is used to validate what was retrieved by
1735 1738 the client matches what the server knows about the bundle.
1736 1739
1737 1740 When multiple digest types are given, all of them are checked.
1738 1741 """
1739 1742 try:
1740 1743 raw_url = inpart.params['url']
1741 1744 except KeyError:
1742 1745 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1743 1746 parsed_url = util.url(raw_url)
1744 1747 if parsed_url.scheme not in capabilities['remote-changegroup']:
1745 1748 raise error.Abort(_('remote-changegroup does not support %s urls') %
1746 1749 parsed_url.scheme)
1747 1750
1748 1751 try:
1749 1752 size = int(inpart.params['size'])
1750 1753 except ValueError:
1751 1754 raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
1752 1755 % 'size')
1753 1756 except KeyError:
1754 1757 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1755 1758
1756 1759 digests = {}
1757 1760 for typ in inpart.params.get('digests', '').split():
1758 1761 param = 'digest:%s' % typ
1759 1762 try:
1760 1763 value = inpart.params[param]
1761 1764 except KeyError:
1762 1765 raise error.Abort(_('remote-changegroup: missing "%s" param') %
1763 1766 param)
1764 1767 digests[typ] = value
1765 1768
1766 1769 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1767 1770
1768 1771 tr = op.gettransaction()
1769 1772 from . import exchange
1770 1773 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1771 1774 if not isinstance(cg, changegroup.cg1unpacker):
1772 1775 raise error.Abort(_('%s: not a bundle version 1.0') %
1773 1776 util.hidepassword(raw_url))
1774 1777 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2')
1775 1778 if op.reply is not None:
1776 1779 # This is definitely not the final form of this
1777 1780 # return. But one needs to start somewhere.
1778 1781 part = op.reply.newpart('reply:changegroup')
1779 1782 part.addparam(
1780 1783 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1781 1784 part.addparam('return', '%i' % ret, mandatory=False)
1782 1785 try:
1783 1786 real_part.validate()
1784 1787 except error.Abort as e:
1785 1788 raise error.Abort(_('bundle at %s is corrupted:\n%s') %
1786 1789 (util.hidepassword(raw_url), str(e)))
1787 1790 assert not inpart.read()
1788 1791
1789 1792 @parthandler('reply:changegroup', ('return', 'in-reply-to'))
1790 1793 def handlereplychangegroup(op, inpart):
1791 1794 ret = int(inpart.params['return'])
1792 1795 replyto = int(inpart.params['in-reply-to'])
1793 1796 op.records.add('changegroup', {'return': ret}, replyto)
1794 1797
1795 1798 @parthandler('check:bookmarks')
1796 1799 def handlecheckbookmarks(op, inpart):
1797 1800 """check location of bookmarks
1798 1801
1799 1802 This part is to be used to detect push races regarding bookmarks. It
1800 1803 contains binary encoded (bookmark, node) tuples. If the local state does
1801 1804 not match the ones in the part, a PushRaced exception is raised.
1802 1805 """
1803 1806 bookdata = bookmarks.binarydecode(inpart)
1804 1807
1805 1808 msgstandard = ('repository changed while pushing - please try again '
1806 1809 '(bookmark "%s" move from %s to %s)')
1807 1810 msgmissing = ('repository changed while pushing - please try again '
1808 1811 '(bookmark "%s" is missing, expected %s)')
1809 1812 msgexist = ('repository changed while pushing - please try again '
1810 1813 '(bookmark "%s" set on %s, expected missing)')
1811 1814 for book, node in bookdata:
1812 1815 currentnode = op.repo._bookmarks.get(book)
1813 1816 if currentnode != node:
1814 1817 if node is None:
1815 1818 finalmsg = msgexist % (book, nodemod.short(currentnode))
1816 1819 elif currentnode is None:
1817 1820 finalmsg = msgmissing % (book, nodemod.short(node))
1818 1821 else:
1819 1822 finalmsg = msgstandard % (book, nodemod.short(node),
1820 1823 nodemod.short(currentnode))
1821 1824 raise error.PushRaced(finalmsg)
1822 1825
1823 1826 @parthandler('check:heads')
1824 1827 def handlecheckheads(op, inpart):
1825 1828 """check that head of the repo did not change
1826 1829
1827 1830 This is used to detect a push race when using unbundle.
1828 1831 This replaces the "heads" argument of unbundle."""
1829 1832 h = inpart.read(20)
1830 1833 heads = []
1831 1834 while len(h) == 20:
1832 1835 heads.append(h)
1833 1836 h = inpart.read(20)
1834 1837 assert not h
1835 1838 # Trigger a transaction so that we are guaranteed to have the lock now.
1836 1839 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1837 1840 op.gettransaction()
1838 1841 if sorted(heads) != sorted(op.repo.heads()):
1839 1842 raise error.PushRaced('repository changed while pushing - '
1840 1843 'please try again')
1841 1844
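The read loop above relies on the payload being a plain concatenation of 20-byte binary nodes; a short self-contained sketch:

import io

def readheads(fp):
    heads = []
    h = fp.read(20)
    while len(h) == 20:
        heads.append(h)
        h = fp.read(20)
    assert not h, 'payload must be a multiple of 20 bytes'
    return heads

payload = io.BytesIO(b'\x11' * 20 + b'\x22' * 20)
assert readheads(payload) == [b'\x11' * 20, b'\x22' * 20]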
1842 1845 @parthandler('check:updated-heads')
1843 1846 def handlecheckupdatedheads(op, inpart):
1844 1847 """check for race on the heads touched by a push
1845 1848
1846 1849 This is similar to 'check:heads' but focuses on the heads actually updated
1847 1850 during the push. If other activity happens on unrelated heads, it is
1848 1851 ignored.
1849 1852 
1850 1853 This allows servers with high traffic to avoid push contention as long as
1851 1854 only unrelated parts of the graph are involved.
1852 1855 h = inpart.read(20)
1853 1856 heads = []
1854 1857 while len(h) == 20:
1855 1858 heads.append(h)
1856 1859 h = inpart.read(20)
1857 1860 assert not h
1858 1861 # trigger a transaction so that we are guaranteed to have the lock now.
1859 1862 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1860 1863 op.gettransaction()
1861 1864
1862 1865 currentheads = set()
1863 1866 for ls in op.repo.branchmap().itervalues():
1864 1867 currentheads.update(ls)
1865 1868
1866 1869 for h in heads:
1867 1870 if h not in currentheads:
1868 1871 raise error.PushRaced('repository changed while pushing - '
1869 1872 'please try again')
1870 1873
1871 1874 @parthandler('check:phases')
1872 1875 def handlecheckphases(op, inpart):
1873 1876 """check that phase boundaries of the repository did not change
1874 1877
1875 1878 This is used to detect a push race.
1876 1879 """
1877 1880 phasetonodes = phases.binarydecode(inpart)
1878 1881 unfi = op.repo.unfiltered()
1879 1882 cl = unfi.changelog
1880 1883 phasecache = unfi._phasecache
1881 1884 msg = ('repository changed while pushing - please try again '
1882 1885 '(%s is %s expected %s)')
1883 1886 for expectedphase, nodes in enumerate(phasetonodes):
1884 1887 for n in nodes:
1885 1888 actualphase = phasecache.phase(unfi, cl.rev(n))
1886 1889 if actualphase != expectedphase:
1887 1890 finalmsg = msg % (nodemod.short(n),
1888 1891 phases.phasenames[actualphase],
1889 1892 phases.phasenames[expectedphase])
1890 1893 raise error.PushRaced(finalmsg)
1891 1894
1892 1895 @parthandler('output')
1893 1896 def handleoutput(op, inpart):
1894 1897 """forward output captured on the server to the client"""
1895 1898 for line in inpart.read().splitlines():
1896 1899 op.ui.status(_('remote: %s\n') % line)
1897 1900
1898 1901 @parthandler('replycaps')
1899 1902 def handlereplycaps(op, inpart):
1900 1903 """Notify that a reply bundle should be created
1901 1904
1902 1905 The payload contains the capabilities information for the reply"""
1903 1906 caps = decodecaps(inpart.read())
1904 1907 if op.reply is None:
1905 1908 op.reply = bundle20(op.ui, caps)
1906 1909
1907 1910 class AbortFromPart(error.Abort):
1908 1911 """Sub-class of Abort that denotes an error from a bundle2 part."""
1909 1912
1910 1913 @parthandler('error:abort', ('message', 'hint'))
1911 1914 def handleerrorabort(op, inpart):
1912 1915 """Used to transmit abort error over the wire"""
1913 1916 raise AbortFromPart(inpart.params['message'],
1914 1917 hint=inpart.params.get('hint'))
1915 1918
1916 1919 @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
1917 1920 'in-reply-to'))
1918 1921 def handleerrorpushkey(op, inpart):
1919 1922 """Used to transmit failure of a mandatory pushkey over the wire"""
1920 1923 kwargs = {}
1921 1924 for name in ('namespace', 'key', 'new', 'old', 'ret'):
1922 1925 value = inpart.params.get(name)
1923 1926 if value is not None:
1924 1927 kwargs[name] = value
1925 1928 raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)
1926 1929
1927 1930 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
1928 1931 def handleerrorunsupportedcontent(op, inpart):
1929 1932 """Used to transmit unknown content error over the wire"""
1930 1933 kwargs = {}
1931 1934 parttype = inpart.params.get('parttype')
1932 1935 if parttype is not None:
1933 1936 kwargs['parttype'] = parttype
1934 1937 params = inpart.params.get('params')
1935 1938 if params is not None:
1936 1939 kwargs['params'] = params.split('\0')
1937 1940
1938 1941 raise error.BundleUnknownFeatureError(**kwargs)
1939 1942
1940 1943 @parthandler('error:pushraced', ('message',))
1941 1944 def handleerrorpushraced(op, inpart):
1942 1945 """Used to transmit push race error over the wire"""
1943 1946 raise error.ResponseError(_('push failed:'), inpart.params['message'])
1944 1947
1945 1948 @parthandler('listkeys', ('namespace',))
1946 1949 def handlelistkeys(op, inpart):
1947 1950 """retrieve pushkey namespace content stored in a bundle2"""
1948 1951 namespace = inpart.params['namespace']
1949 1952 r = pushkey.decodekeys(inpart.read())
1950 1953 op.records.add('listkeys', (namespace, r))
1951 1954
1952 1955 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
1953 1956 def handlepushkey(op, inpart):
1954 1957 """process a pushkey request"""
1955 1958 dec = pushkey.decode
1956 1959 namespace = dec(inpart.params['namespace'])
1957 1960 key = dec(inpart.params['key'])
1958 1961 old = dec(inpart.params['old'])
1959 1962 new = dec(inpart.params['new'])
1960 1963 # Grab the transaction to ensure that we have the lock before performing the
1961 1964 # pushkey.
1962 1965 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1963 1966 op.gettransaction()
1964 1967 ret = op.repo.pushkey(namespace, key, old, new)
1965 1968 record = {'namespace': namespace,
1966 1969 'key': key,
1967 1970 'old': old,
1968 1971 'new': new}
1969 1972 op.records.add('pushkey', record)
1970 1973 if op.reply is not None:
1971 1974 rpart = op.reply.newpart('reply:pushkey')
1972 1975 rpart.addparam(
1973 1976 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1974 1977 rpart.addparam('return', '%i' % ret, mandatory=False)
1975 1978 if inpart.mandatory and not ret:
1976 1979 kwargs = {}
1977 1980 for key in ('namespace', 'key', 'new', 'old', 'ret'):
1978 1981 if key in inpart.params:
1979 1982 kwargs[key] = inpart.params[key]
1980 1983 raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)
1981 1984
1982 1985 @parthandler('bookmarks')
1983 1986 def handlebookmark(op, inpart):
1984 1987 """transmit bookmark information
1985 1988
1986 1989 The part contains binary encoded bookmark information.
1987 1990
1988 1991 The exact behavior of this part can be controlled by the 'bookmarks' mode
1989 1992 on the bundle operation.
1990 1993
1991 1994 When mode is 'apply' (the default) the bookmark information is applied as
1992 1995 is to the unbundling repository. Make sure a 'check:bookmarks' part is
1993 1996 issued earlier to check for push races in such updates. This behavior is
1994 1997 suitable for pushing.
1995 1998
1996 1999 When mode is 'records', the information is recorded into the 'bookmarks'
1997 2000 records of the bundle operation. This behavior is suitable for pulling.
1998 2001 """
1999 2002 changes = bookmarks.binarydecode(inpart)
2000 2003
2001 2004 pushkeycompat = op.repo.ui.configbool('server', 'bookmarks-pushkey-compat')
2002 2005 bookmarksmode = op.modes.get('bookmarks', 'apply')
2003 2006
2004 2007 if bookmarksmode == 'apply':
2005 2008 tr = op.gettransaction()
2006 2009 bookstore = op.repo._bookmarks
2007 2010 if pushkeycompat:
2008 2011 allhooks = []
2009 2012 for book, node in changes:
2010 2013 hookargs = tr.hookargs.copy()
2011 2014 hookargs['pushkeycompat'] = '1'
2012 2015 hookargs['namespace'] = 'bookmark'
2013 2016 hookargs['key'] = book
2014 2017 hookargs['old'] = nodemod.hex(bookstore.get(book, ''))
2015 2018 hookargs['new'] = nodemod.hex(node if node is not None else '')
2016 2019 allhooks.append(hookargs)
2017 2020
2018 2021 for hookargs in allhooks:
2019 2022 op.repo.hook('prepushkey', throw=True, **hookargs)
2020 2023
2021 2024 bookstore.applychanges(op.repo, op.gettransaction(), changes)
2022 2025
2023 2026 if pushkeycompat:
2024 2027 def runhook():
2025 2028 for hookargs in allhooks:
2026 2029 op.repo.hook('pushkey', **hookargs)
2027 2030 op.repo._afterlock(runhook)
2028 2031
2029 2032 elif bookmarksmode == 'records':
2030 2033 for book, node in changes:
2031 2034 record = {'bookmark': book, 'node': node}
2032 2035 op.records.add('bookmarks', record)
2033 2036 else:
2034 2037 raise error.ProgrammingError('unknown bookmark mode: %s' % bookmarksmode)
2035 2038
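A minimal sketch of the two modes handled above, with plain dicts and lists standing in for the repository's bookmark store and the operation records (hypothetical stand-ins, not the real APIs):

def applybookmarkpart(changes, store, records, mode='apply'):
    if mode == 'apply':
        for book, node in changes:
            if node is None:
                store.pop(book, None)   # a None node encodes deletion
            else:
                store[book] = node
    elif mode == 'records':
        for book, node in changes:
            records.append({'bookmark': book, 'node': node})
    else:
        raise ValueError('unknown bookmark mode: %s' % mode)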
2036 2039 @parthandler('phase-heads')
2037 2040 def handlephases(op, inpart):
2038 2041 """apply phases from bundle part to repo"""
2039 2042 headsbyphase = phases.binarydecode(inpart)
2040 2043 phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)
2041 2044
2042 2045 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
2043 2046 def handlepushkeyreply(op, inpart):
2044 2047 """retrieve the result of a pushkey request"""
2045 2048 ret = int(inpart.params['return'])
2046 2049 partid = int(inpart.params['in-reply-to'])
2047 2050 op.records.add('pushkey', {'return': ret}, partid)
2048 2051
2049 2052 @parthandler('obsmarkers')
2050 2053 def handleobsmarker(op, inpart):
2051 2054 """add a stream of obsmarkers to the repo"""
2052 2055 tr = op.gettransaction()
2053 2056 markerdata = inpart.read()
2054 2057 if op.ui.config('experimental', 'obsmarkers-exchange-debug'):
2055 2058 op.ui.write(('obsmarker-exchange: %i bytes received\n')
2056 2059 % len(markerdata))
2057 2060 # The mergemarkers call will crash if marker creation is not enabled.
2058 2061 # We want to avoid this if the part is advisory.
2059 2062 if not inpart.mandatory and op.repo.obsstore.readonly:
2060 2063 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
2061 2064 return
2062 2065 new = op.repo.obsstore.mergemarkers(tr, markerdata)
2063 2066 op.repo.invalidatevolatilesets()
2064 2067 if new:
2065 2068 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
2066 2069 op.records.add('obsmarkers', {'new': new})
2067 2070 if op.reply is not None:
2068 2071 rpart = op.reply.newpart('reply:obsmarkers')
2069 2072 rpart.addparam(
2070 2073 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
2071 2074 rpart.addparam('new', '%i' % new, mandatory=False)
2072 2075
2073 2076
2074 2077 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
2075 2078 def handleobsmarkerreply(op, inpart):
2076 2079 """retrieve the result of a pushkey request"""
2077 2080 ret = int(inpart.params['new'])
2078 2081 partid = int(inpart.params['in-reply-to'])
2079 2082 op.records.add('obsmarkers', {'new': ret}, partid)
2080 2083
2081 2084 @parthandler('hgtagsfnodes')
2082 2085 def handlehgtagsfnodes(op, inpart):
2083 2086 """Applies .hgtags fnodes cache entries to the local repo.
2084 2087
2085 2088 Payload is pairs of 20 byte changeset nodes and filenodes.
2086 2089 """
2087 2090 # Grab the transaction so we ensure that we have the lock at this point.
2088 2091 if op.ui.configbool('experimental', 'bundle2lazylocking'):
2089 2092 op.gettransaction()
2090 2093 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
2091 2094
2092 2095 count = 0
2093 2096 while True:
2094 2097 node = inpart.read(20)
2095 2098 fnode = inpart.read(20)
2096 2099 if len(node) < 20 or len(fnode) < 20:
2097 2100 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
2098 2101 break
2099 2102 cache.setfnode(node, fnode)
2100 2103 count += 1
2101 2104
2102 2105 cache.write()
2103 2106 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
2104 2107
2105 2108 @parthandler('pushvars')
2106 2109 def bundle2getvars(op, part):
2107 2110 '''unbundle a bundle2 containing shellvars on the server'''
2108 2111 # An option to disable unbundling on server-side for security reasons
2109 2112 if op.ui.configbool('push', 'pushvars.server'):
2110 2113 hookargs = {}
2111 2114 for key, value in part.advisoryparams:
2112 2115 key = key.upper()
2113 2116 # We want pushed variables to have USERVAR_ prepended so we know
2114 2117 # they came from the --pushvar flag.
2115 2118 key = "USERVAR_" + key
2116 2119 hookargs[key] = value
2117 2120 op.addhookargs(hookargs)
2118 2121
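The key normalization above is simple enough to show inline: advisory parameters become upper-cased hook arguments carrying a USERVAR_ prefix.

params = [('debug', '1'), ('reason', 'hotfix')]       # illustrative values
hookargs = {'USERVAR_' + k.upper(): v for k, v in params}
assert hookargs == {'USERVAR_DEBUG': '1', 'USERVAR_REASON': 'hotfix'}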
2119 2122 @parthandler('stream', ('requirements', 'filecount', 'bytecount', 'version'))
2120 2123 def handlestreambundle(op, part):
2121 2124
2122 2125 version = part.params['version']
2123 2126 if version != 'v2':
2124 2127 raise error.Abort(_('unknown stream bundle version %s') % version)
2125 2128 requirements = part.params['requirements'].split()
2126 2129 filecount = int(part.params['filecount'])
2127 2130 bytecount = int(part.params['bytecount'])
2128 2131
2129 2132 repo = op.repo
2130 2133 if len(repo):
2131 2134 msg = _('cannot apply stream clone to a non-empty repository')
2132 2135 raise error.Abort(msg)
2133 2136
2134 2137 repo.ui.debug('applying stream bundle\n')
2135 2138 streamclone.applybundlev2(repo, part, filecount, bytecount,
2136 2139 requirements)
2137 2140
2138 2141 # new requirements = old non-format requirements +
2139 2142 # new format-related requirements
2140 2143 # from the streamed-in repository
2141 2144 repo.requirements = set(requirements) | (
2142 2145 repo.requirements - repo.supportedformats)
2143 2146 repo._applyopenerreqs()
2144 2147 repo._writerequirements()
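A worked example of the requirements merge above, with real requirement names but an illustrative repository: format requirements are replaced wholesale by the streamed-in repository's, while non-format requirements survive.

supportedformats = {'revlogv1', 'generaldelta', 'treemanifest'}
localreqs    = {'revlogv1', 'store', 'fncache', 'dotencode'}
streamedreqs = {'revlogv1', 'generaldelta'}

newreqs = streamedreqs | (localreqs - supportedformats)
assert newreqs == {'revlogv1', 'generaldelta',
                   'store', 'fncache', 'dotencode'}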
@@ -1,1292 +1,1295
1 1 # configitems.py - centralized declaration of configuration option
2 2 #
3 3 # Copyright 2017 Pierre-Yves David <pierre-yves.david@octobus.net>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import functools
11 11 import re
12 12
13 13 from . import (
14 14 encoding,
15 15 error,
16 16 )
17 17
18 18 def loadconfigtable(ui, extname, configtable):
19 19 """update config item known to the ui with the extension ones"""
20 20 for section, items in configtable.items():
21 21 knownitems = ui._knownconfig.setdefault(section, itemregister())
22 22 knownkeys = set(knownitems)
23 23 newkeys = set(items)
24 24 for key in sorted(knownkeys & newkeys):
25 25 msg = "extension '%s' overwrite config item '%s.%s'"
26 26 msg %= (extname, section, key)
27 27 ui.develwarn(msg, config='warn-config')
28 28
29 29 knownitems.update(items)
30 30
31 31 class configitem(object):
32 32 """represent a known config item
33 33
34 34 :section: the official config section where to find this item,
35 35 :name: the official name within the section,
36 36 :default: default value for this item,
37 37 :alias: optional list of tuples as alternatives,
38 38 :generic: this is a generic definition; the name is matched using a regular expression.
39 39 """
40 40
41 41 def __init__(self, section, name, default=None, alias=(),
42 42 generic=False, priority=0):
43 43 self.section = section
44 44 self.name = name
45 45 self.default = default
46 46 self.alias = list(alias)
47 47 self.generic = generic
48 48 self.priority = priority
49 49 self._re = None
50 50 if generic:
51 51 self._re = re.compile(self.name)
52 52
53 53 class itemregister(dict):
54 54 """A specialized dictionary that can handle wild-card selection"""
55 55
56 56 def __init__(self):
57 57 super(itemregister, self).__init__()
58 58 self._generics = set()
59 59
60 60 def update(self, other):
61 61 super(itemregister, self).update(other)
62 62 self._generics.update(other._generics)
63 63
64 64 def __setitem__(self, key, item):
65 65 super(itemregister, self).__setitem__(key, item)
66 66 if item.generic:
67 67 self._generics.add(item)
68 68
69 69 def get(self, key):
70 70 baseitem = super(itemregister, self).get(key)
71 71 if baseitem is not None and not baseitem.generic:
72 72 return baseitem
73 73
74 74 # search for a matching generic item
75 75 generics = sorted(self._generics, key=(lambda x: (x.priority, x.name)))
76 76 for item in generics:
77 77 # we use 'match' instead of 'search' to make the matching simpler
78 78 # for people unfamiliar with regular expressions. Having the match
79 79 # rooted to the start of the string produces less surprising
80 80 # results for users writing simple regexes for sub-attributes.
81 81 #
82 82 # For example, using "color\..*" with match produces an unsurprising
83 83 # result, while using search could suddenly match apparently
84 84 # unrelated configuration that happens to contain "color."
85 85 # anywhere. This is a tradeoff where we favor requiring ".*" on
86 86 # some matches to avoid the need to prefix most patterns with "^".
87 87 # The "^" seems more error prone.
88 88 if item._re.match(key):
89 89 return item
90 90
91 91 return None
92 92
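The match-versus-search tradeoff described in the comment above, demonstrated in isolation:

import re

pattern = re.compile(r'color\..*')
assert pattern.match('color.mode')               # anchored at the start
assert not pattern.match('ui.color.mode')        # no accidental hit here...
assert re.search(r'color\..*', 'ui.color.mode')  # ...which search would give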
93 93 coreitems = {}
94 94
95 95 def _register(configtable, *args, **kwargs):
96 96 item = configitem(*args, **kwargs)
97 97 section = configtable.setdefault(item.section, itemregister())
98 98 if item.name in section:
99 99 msg = "duplicated config item registration for '%s.%s'"
100 100 raise error.ProgrammingError(msg % (item.section, item.name))
101 101 section[item.name] = item
102 102
103 103 # special value for case where the default is derived from other values
104 104 dynamicdefault = object()
105 105
106 106 # Registering actual config items
107 107
108 108 def getitemregister(configtable):
109 109 f = functools.partial(_register, configtable)
110 110 # export pseudo enum as configitem.*
111 111 f.dynamicdefault = dynamicdefault
112 112 return f
113 113
114 114 coreconfigitem = getitemregister(coreitems)
115 115
116 116 coreconfigitem('alias', '.*',
117 117 default=None,
118 118 generic=True,
119 119 )
120 120 coreconfigitem('annotate', 'nodates',
121 121 default=False,
122 122 )
123 123 coreconfigitem('annotate', 'showfunc',
124 124 default=False,
125 125 )
126 126 coreconfigitem('annotate', 'unified',
127 127 default=None,
128 128 )
129 129 coreconfigitem('annotate', 'git',
130 130 default=False,
131 131 )
132 132 coreconfigitem('annotate', 'ignorews',
133 133 default=False,
134 134 )
135 135 coreconfigitem('annotate', 'ignorewsamount',
136 136 default=False,
137 137 )
138 138 coreconfigitem('annotate', 'ignoreblanklines',
139 139 default=False,
140 140 )
141 141 coreconfigitem('annotate', 'ignorewseol',
142 142 default=False,
143 143 )
144 144 coreconfigitem('annotate', 'nobinary',
145 145 default=False,
146 146 )
147 147 coreconfigitem('annotate', 'noprefix',
148 148 default=False,
149 149 )
150 150 coreconfigitem('auth', 'cookiefile',
151 151 default=None,
152 152 )
153 153 # bookmarks.pushing: internal hack for discovery
154 154 coreconfigitem('bookmarks', 'pushing',
155 155 default=list,
156 156 )
157 157 # bundle.mainreporoot: internal hack for bundlerepo
158 158 coreconfigitem('bundle', 'mainreporoot',
159 159 default='',
160 160 )
161 161 # bundle.reorder: experimental config
162 162 coreconfigitem('bundle', 'reorder',
163 163 default='auto',
164 164 )
165 165 coreconfigitem('censor', 'policy',
166 166 default='abort',
167 167 )
168 168 coreconfigitem('chgserver', 'idletimeout',
169 169 default=3600,
170 170 )
171 171 coreconfigitem('chgserver', 'skiphash',
172 172 default=False,
173 173 )
174 174 coreconfigitem('cmdserver', 'log',
175 175 default=None,
176 176 )
177 177 coreconfigitem('color', '.*',
178 178 default=None,
179 179 generic=True,
180 180 )
181 181 coreconfigitem('color', 'mode',
182 182 default='auto',
183 183 )
184 184 coreconfigitem('color', 'pagermode',
185 185 default=dynamicdefault,
186 186 )
187 187 coreconfigitem('commands', 'show.aliasprefix',
188 188 default=list,
189 189 )
190 190 coreconfigitem('commands', 'status.relative',
191 191 default=False,
192 192 )
193 193 coreconfigitem('commands', 'status.skipstates',
194 194 default=[],
195 195 )
196 196 coreconfigitem('commands', 'status.verbose',
197 197 default=False,
198 198 )
199 199 coreconfigitem('commands', 'update.check',
200 200 default=None,
201 201 # Deprecated, remove after 4.4 release
202 202 alias=[('experimental', 'updatecheck')]
203 203 )
204 204 coreconfigitem('commands', 'update.requiredest',
205 205 default=False,
206 206 )
207 207 coreconfigitem('committemplate', '.*',
208 208 default=None,
209 209 generic=True,
210 210 )
211 211 coreconfigitem('convert', 'cvsps.cache',
212 212 default=True,
213 213 )
214 214 coreconfigitem('convert', 'cvsps.fuzz',
215 215 default=60,
216 216 )
217 217 coreconfigitem('convert', 'cvsps.logencoding',
218 218 default=None,
219 219 )
220 220 coreconfigitem('convert', 'cvsps.mergefrom',
221 221 default=None,
222 222 )
223 223 coreconfigitem('convert', 'cvsps.mergeto',
224 224 default=None,
225 225 )
226 226 coreconfigitem('convert', 'git.committeractions',
227 227 default=lambda: ['messagedifferent'],
228 228 )
229 229 coreconfigitem('convert', 'git.extrakeys',
230 230 default=list,
231 231 )
232 232 coreconfigitem('convert', 'git.findcopiesharder',
233 233 default=False,
234 234 )
235 235 coreconfigitem('convert', 'git.remoteprefix',
236 236 default='remote',
237 237 )
238 238 coreconfigitem('convert', 'git.renamelimit',
239 239 default=400,
240 240 )
241 241 coreconfigitem('convert', 'git.saverev',
242 242 default=True,
243 243 )
244 244 coreconfigitem('convert', 'git.similarity',
245 245 default=50,
246 246 )
247 247 coreconfigitem('convert', 'git.skipsubmodules',
248 248 default=False,
249 249 )
250 250 coreconfigitem('convert', 'hg.clonebranches',
251 251 default=False,
252 252 )
253 253 coreconfigitem('convert', 'hg.ignoreerrors',
254 254 default=False,
255 255 )
256 256 coreconfigitem('convert', 'hg.revs',
257 257 default=None,
258 258 )
259 259 coreconfigitem('convert', 'hg.saverev',
260 260 default=False,
261 261 )
262 262 coreconfigitem('convert', 'hg.sourcename',
263 263 default=None,
264 264 )
265 265 coreconfigitem('convert', 'hg.startrev',
266 266 default=None,
267 267 )
268 268 coreconfigitem('convert', 'hg.tagsbranch',
269 269 default='default',
270 270 )
271 271 coreconfigitem('convert', 'hg.usebranchnames',
272 272 default=True,
273 273 )
274 274 coreconfigitem('convert', 'ignoreancestorcheck',
275 275 default=False,
276 276 )
277 277 coreconfigitem('convert', 'localtimezone',
278 278 default=False,
279 279 )
280 280 coreconfigitem('convert', 'p4.encoding',
281 281 default=dynamicdefault,
282 282 )
283 283 coreconfigitem('convert', 'p4.startrev',
284 284 default=0,
285 285 )
286 286 coreconfigitem('convert', 'skiptags',
287 287 default=False,
288 288 )
289 289 coreconfigitem('convert', 'svn.debugsvnlog',
290 290 default=True,
291 291 )
292 292 coreconfigitem('convert', 'svn.trunk',
293 293 default=None,
294 294 )
295 295 coreconfigitem('convert', 'svn.tags',
296 296 default=None,
297 297 )
298 298 coreconfigitem('convert', 'svn.branches',
299 299 default=None,
300 300 )
301 301 coreconfigitem('convert', 'svn.startrev',
302 302 default=0,
303 303 )
304 304 coreconfigitem('debug', 'dirstate.delaywrite',
305 305 default=0,
306 306 )
307 307 coreconfigitem('defaults', '.*',
308 308 default=None,
309 309 generic=True,
310 310 )
311 311 coreconfigitem('devel', 'all-warnings',
312 312 default=False,
313 313 )
314 314 coreconfigitem('devel', 'bundle2.debug',
315 315 default=False,
316 316 )
317 317 coreconfigitem('devel', 'cache-vfs',
318 318 default=None,
319 319 )
320 320 coreconfigitem('devel', 'check-locks',
321 321 default=False,
322 322 )
323 323 coreconfigitem('devel', 'check-relroot',
324 324 default=False,
325 325 )
326 326 coreconfigitem('devel', 'default-date',
327 327 default=None,
328 328 )
329 329 coreconfigitem('devel', 'deprec-warn',
330 330 default=False,
331 331 )
332 332 coreconfigitem('devel', 'disableloaddefaultcerts',
333 333 default=False,
334 334 )
335 335 coreconfigitem('devel', 'warn-empty-changegroup',
336 336 default=False,
337 337 )
338 338 coreconfigitem('devel', 'legacy.exchange',
339 339 default=list,
340 340 )
341 341 coreconfigitem('devel', 'servercafile',
342 342 default='',
343 343 )
344 344 coreconfigitem('devel', 'serverexactprotocol',
345 345 default='',
346 346 )
347 347 coreconfigitem('devel', 'serverrequirecert',
348 348 default=False,
349 349 )
350 350 coreconfigitem('devel', 'strip-obsmarkers',
351 351 default=True,
352 352 )
353 353 coreconfigitem('devel', 'warn-config',
354 354 default=None,
355 355 )
356 356 coreconfigitem('devel', 'warn-config-default',
357 357 default=None,
358 358 )
359 359 coreconfigitem('devel', 'user.obsmarker',
360 360 default=None,
361 361 )
362 362 coreconfigitem('devel', 'warn-config-unknown',
363 363 default=None,
364 364 )
365 365 coreconfigitem('devel', 'debug.peer-request',
366 366 default=False,
367 367 )
368 368 coreconfigitem('diff', 'nodates',
369 369 default=False,
370 370 )
371 371 coreconfigitem('diff', 'showfunc',
372 372 default=False,
373 373 )
374 374 coreconfigitem('diff', 'unified',
375 375 default=None,
376 376 )
377 377 coreconfigitem('diff', 'git',
378 378 default=False,
379 379 )
380 380 coreconfigitem('diff', 'ignorews',
381 381 default=False,
382 382 )
383 383 coreconfigitem('diff', 'ignorewsamount',
384 384 default=False,
385 385 )
386 386 coreconfigitem('diff', 'ignoreblanklines',
387 387 default=False,
388 388 )
389 389 coreconfigitem('diff', 'ignorewseol',
390 390 default=False,
391 391 )
392 392 coreconfigitem('diff', 'nobinary',
393 393 default=False,
394 394 )
395 395 coreconfigitem('diff', 'noprefix',
396 396 default=False,
397 397 )
398 398 coreconfigitem('email', 'bcc',
399 399 default=None,
400 400 )
401 401 coreconfigitem('email', 'cc',
402 402 default=None,
403 403 )
404 404 coreconfigitem('email', 'charsets',
405 405 default=list,
406 406 )
407 407 coreconfigitem('email', 'from',
408 408 default=None,
409 409 )
410 410 coreconfigitem('email', 'method',
411 411 default='smtp',
412 412 )
413 413 coreconfigitem('email', 'reply-to',
414 414 default=None,
415 415 )
416 416 coreconfigitem('email', 'to',
417 417 default=None,
418 418 )
419 419 coreconfigitem('experimental', 'archivemetatemplate',
420 420 default=dynamicdefault,
421 421 )
422 422 coreconfigitem('experimental', 'bundle-phases',
423 423 default=False,
424 424 )
425 425 coreconfigitem('experimental', 'bundle2-advertise',
426 426 default=True,
427 427 )
428 428 coreconfigitem('experimental', 'bundle2-output-capture',
429 429 default=False,
430 430 )
431 431 coreconfigitem('experimental', 'bundle2.pushback',
432 432 default=False,
433 433 )
434 coreconfigitem('experimental', 'bundle2.stream',
435 default=False,
436 )
434 437 coreconfigitem('experimental', 'bundle2lazylocking',
435 438 default=False,
436 439 )
437 440 coreconfigitem('experimental', 'bundlecomplevel',
438 441 default=None,
439 442 )
440 443 coreconfigitem('experimental', 'changegroup3',
441 444 default=False,
442 445 )
443 446 coreconfigitem('experimental', 'clientcompressionengines',
444 447 default=list,
445 448 )
446 449 coreconfigitem('experimental', 'copytrace',
447 450 default='on',
448 451 )
449 452 coreconfigitem('experimental', 'copytrace.movecandidateslimit',
450 453 default=100,
451 454 )
452 455 coreconfigitem('experimental', 'copytrace.sourcecommitlimit',
453 456 default=100,
454 457 )
455 458 coreconfigitem('experimental', 'crecordtest',
456 459 default=None,
457 460 )
458 461 coreconfigitem('experimental', 'directaccess',
459 462 default=False,
460 463 )
461 464 coreconfigitem('experimental', 'directaccess.revnums',
462 465 default=False,
463 466 )
464 467 coreconfigitem('experimental', 'editortmpinhg',
465 468 default=False,
466 469 )
467 470 coreconfigitem('experimental', 'evolution',
468 471 default=list,
469 472 )
470 473 coreconfigitem('experimental', 'evolution.allowdivergence',
471 474 default=False,
472 475 alias=[('experimental', 'allowdivergence')]
473 476 )
474 477 coreconfigitem('experimental', 'evolution.allowunstable',
475 478 default=None,
476 479 )
477 480 coreconfigitem('experimental', 'evolution.createmarkers',
478 481 default=None,
479 482 )
480 483 coreconfigitem('experimental', 'evolution.effect-flags',
481 484 default=True,
482 485 alias=[('experimental', 'effect-flags')]
483 486 )
484 487 coreconfigitem('experimental', 'evolution.exchange',
485 488 default=None,
486 489 )
487 490 coreconfigitem('experimental', 'evolution.bundle-obsmarker',
488 491 default=False,
489 492 )
490 493 coreconfigitem('experimental', 'evolution.report-instabilities',
491 494 default=True,
492 495 )
493 496 coreconfigitem('experimental', 'evolution.track-operation',
494 497 default=True,
495 498 )
496 499 coreconfigitem('experimental', 'worddiff',
497 500 default=False,
498 501 )
499 502 coreconfigitem('experimental', 'maxdeltachainspan',
500 503 default=-1,
501 504 )
502 505 coreconfigitem('experimental', 'mmapindexthreshold',
503 506 default=None,
504 507 )
505 508 coreconfigitem('experimental', 'nonnormalparanoidcheck',
506 509 default=False,
507 510 )
508 511 coreconfigitem('experimental', 'exportableenviron',
509 512 default=list,
510 513 )
511 514 coreconfigitem('experimental', 'extendedheader.index',
512 515 default=None,
513 516 )
514 517 coreconfigitem('experimental', 'extendedheader.similarity',
515 518 default=False,
516 519 )
517 520 coreconfigitem('experimental', 'format.compression',
518 521 default='zlib',
519 522 )
520 523 coreconfigitem('experimental', 'graphshorten',
521 524 default=False,
522 525 )
523 526 coreconfigitem('experimental', 'graphstyle.parent',
524 527 default=dynamicdefault,
525 528 )
526 529 coreconfigitem('experimental', 'graphstyle.missing',
527 530 default=dynamicdefault,
528 531 )
529 532 coreconfigitem('experimental', 'graphstyle.grandparent',
530 533 default=dynamicdefault,
531 534 )
532 535 coreconfigitem('experimental', 'hook-track-tags',
533 536 default=False,
534 537 )
535 538 coreconfigitem('experimental', 'httppostargs',
536 539 default=False,
537 540 )
538 541 coreconfigitem('experimental', 'manifestv2',
539 542 default=False,
540 543 )
541 544 coreconfigitem('experimental', 'mergedriver',
542 545 default=None,
543 546 )
544 547 coreconfigitem('experimental', 'obsmarkers-exchange-debug',
545 548 default=False,
546 549 )
547 550 coreconfigitem('experimental', 'remotenames',
548 551 default=False,
549 552 )
550 553 coreconfigitem('experimental', 'revlogv2',
551 554 default=None,
552 555 )
553 556 coreconfigitem('experimental', 'single-head-per-branch',
554 557 default=False,
555 558 )
556 559 coreconfigitem('experimental', 'spacemovesdown',
557 560 default=False,
558 561 )
559 562 coreconfigitem('experimental', 'sparse-read',
560 563 default=False,
561 564 )
562 565 coreconfigitem('experimental', 'sparse-read.density-threshold',
563 566 default=0.25,
564 567 )
565 568 coreconfigitem('experimental', 'sparse-read.min-gap-size',
566 569 default='256K',
567 570 )
568 571 coreconfigitem('experimental', 'treemanifest',
569 572 default=False,
570 573 )
571 574 coreconfigitem('experimental', 'update.atomic-file',
572 575 default=False,
573 576 )
574 577 coreconfigitem('extensions', '.*',
575 578 default=None,
576 579 generic=True,
577 580 )
578 581 coreconfigitem('extdata', '.*',
579 582 default=None,
580 583 generic=True,
581 584 )
582 585 coreconfigitem('format', 'aggressivemergedeltas',
583 586 default=False,
584 587 )
585 588 coreconfigitem('format', 'chunkcachesize',
586 589 default=None,
587 590 )
588 591 coreconfigitem('format', 'dotencode',
589 592 default=True,
590 593 )
591 594 coreconfigitem('format', 'generaldelta',
592 595 default=False,
593 596 )
594 597 coreconfigitem('format', 'manifestcachesize',
595 598 default=None,
596 599 )
597 600 coreconfigitem('format', 'maxchainlen',
598 601 default=None,
599 602 )
600 603 coreconfigitem('format', 'obsstore-version',
601 604 default=None,
602 605 )
603 606 coreconfigitem('format', 'usefncache',
604 607 default=True,
605 608 )
606 609 coreconfigitem('format', 'usegeneraldelta',
607 610 default=True,
608 611 )
609 612 coreconfigitem('format', 'usestore',
610 613 default=True,
611 614 )
612 615 coreconfigitem('fsmonitor', 'warn_when_unused',
613 616 default=True,
614 617 )
615 618 coreconfigitem('fsmonitor', 'warn_update_file_count',
616 619 default=50000,
617 620 )
618 621 coreconfigitem('hooks', '.*',
619 622 default=dynamicdefault,
620 623 generic=True,
621 624 )
622 625 coreconfigitem('hgweb-paths', '.*',
623 626 default=list,
624 627 generic=True,
625 628 )
626 629 coreconfigitem('hostfingerprints', '.*',
627 630 default=list,
628 631 generic=True,
629 632 )
630 633 coreconfigitem('hostsecurity', 'ciphers',
631 634 default=None,
632 635 )
633 636 coreconfigitem('hostsecurity', 'disabletls10warning',
634 637 default=False,
635 638 )
636 639 coreconfigitem('hostsecurity', 'minimumprotocol',
637 640 default=dynamicdefault,
638 641 )
639 642 coreconfigitem('hostsecurity', '.*:minimumprotocol$',
640 643 default=dynamicdefault,
641 644 generic=True,
642 645 )
643 646 coreconfigitem('hostsecurity', '.*:ciphers$',
644 647 default=dynamicdefault,
645 648 generic=True,
646 649 )
647 650 coreconfigitem('hostsecurity', '.*:fingerprints$',
648 651 default=list,
649 652 generic=True,
650 653 )
651 654 coreconfigitem('hostsecurity', '.*:verifycertsfile$',
652 655 default=None,
653 656 generic=True,
654 657 )
655 658
656 659 coreconfigitem('http_proxy', 'always',
657 660 default=False,
658 661 )
659 662 coreconfigitem('http_proxy', 'host',
660 663 default=None,
661 664 )
662 665 coreconfigitem('http_proxy', 'no',
663 666 default=list,
664 667 )
665 668 coreconfigitem('http_proxy', 'passwd',
666 669 default=None,
667 670 )
668 671 coreconfigitem('http_proxy', 'user',
669 672 default=None,
670 673 )
671 674 coreconfigitem('logtoprocess', 'commandexception',
672 675 default=None,
673 676 )
674 677 coreconfigitem('logtoprocess', 'commandfinish',
675 678 default=None,
676 679 )
677 680 coreconfigitem('logtoprocess', 'command',
678 681 default=None,
679 682 )
680 683 coreconfigitem('logtoprocess', 'develwarn',
681 684 default=None,
682 685 )
683 686 coreconfigitem('logtoprocess', 'uiblocked',
684 687 default=None,
685 688 )
686 689 coreconfigitem('merge', 'checkunknown',
687 690 default='abort',
688 691 )
689 692 coreconfigitem('merge', 'checkignored',
690 693 default='abort',
691 694 )
692 695 coreconfigitem('experimental', 'merge.checkpathconflicts',
693 696 default=False,
694 697 )
695 698 coreconfigitem('merge', 'followcopies',
696 699 default=True,
697 700 )
698 701 coreconfigitem('merge', 'on-failure',
699 702 default='continue',
700 703 )
701 704 coreconfigitem('merge', 'preferancestor',
702 705 default=lambda: ['*'],
703 706 )
704 707 coreconfigitem('merge-tools', '.*',
705 708 default=None,
706 709 generic=True,
707 710 )
708 711 coreconfigitem('merge-tools', br'.*\.args$',
709 712 default="$local $base $other",
710 713 generic=True,
711 714 priority=-1,
712 715 )
713 716 coreconfigitem('merge-tools', br'.*\.binary$',
714 717 default=False,
715 718 generic=True,
716 719 priority=-1,
717 720 )
718 721 coreconfigitem('merge-tools', br'.*\.check$',
719 722 default=list,
720 723 generic=True,
721 724 priority=-1,
722 725 )
723 726 coreconfigitem('merge-tools', br'.*\.checkchanged$',
724 727 default=False,
725 728 generic=True,
726 729 priority=-1,
727 730 )
728 731 coreconfigitem('merge-tools', br'.*\.executable$',
729 732 default=dynamicdefault,
730 733 generic=True,
731 734 priority=-1,
732 735 )
733 736 coreconfigitem('merge-tools', br'.*\.fixeol$',
734 737 default=False,
735 738 generic=True,
736 739 priority=-1,
737 740 )
738 741 coreconfigitem('merge-tools', br'.*\.gui$',
739 742 default=False,
740 743 generic=True,
741 744 priority=-1,
742 745 )
743 746 coreconfigitem('merge-tools', br'.*\.priority$',
744 747 default=0,
745 748 generic=True,
746 749 priority=-1,
747 750 )
748 751 coreconfigitem('merge-tools', br'.*\.premerge$',
749 752 default=dynamicdefault,
750 753 generic=True,
751 754 priority=-1,
752 755 )
753 756 coreconfigitem('merge-tools', br'.*\.symlink$',
754 757 default=False,
755 758 generic=True,
756 759 priority=-1,
757 760 )
758 761 coreconfigitem('pager', 'attend-.*',
759 762 default=dynamicdefault,
760 763 generic=True,
761 764 )
762 765 coreconfigitem('pager', 'ignore',
763 766 default=list,
764 767 )
765 768 coreconfigitem('pager', 'pager',
766 769 default=dynamicdefault,
767 770 )
768 771 coreconfigitem('patch', 'eol',
769 772 default='strict',
770 773 )
771 774 coreconfigitem('patch', 'fuzz',
772 775 default=2,
773 776 )
774 777 coreconfigitem('paths', 'default',
775 778 default=None,
776 779 )
777 780 coreconfigitem('paths', 'default-push',
778 781 default=None,
779 782 )
780 783 coreconfigitem('paths', '.*',
781 784 default=None,
782 785 generic=True,
783 786 )
784 787 coreconfigitem('phases', 'checksubrepos',
785 788 default='follow',
786 789 )
787 790 coreconfigitem('phases', 'new-commit',
788 791 default='draft',
789 792 )
790 793 coreconfigitem('phases', 'publish',
791 794 default=True,
792 795 )
793 796 coreconfigitem('profiling', 'enabled',
794 797 default=False,
795 798 )
796 799 coreconfigitem('profiling', 'format',
797 800 default='text',
798 801 )
799 802 coreconfigitem('profiling', 'freq',
800 803 default=1000,
801 804 )
802 805 coreconfigitem('profiling', 'limit',
803 806 default=30,
804 807 )
805 808 coreconfigitem('profiling', 'nested',
806 809 default=0,
807 810 )
808 811 coreconfigitem('profiling', 'output',
809 812 default=None,
810 813 )
811 814 coreconfigitem('profiling', 'showmax',
812 815 default=0.999,
813 816 )
814 817 coreconfigitem('profiling', 'showmin',
815 818 default=dynamicdefault,
816 819 )
817 820 coreconfigitem('profiling', 'sort',
818 821 default='inlinetime',
819 822 )
820 823 coreconfigitem('profiling', 'statformat',
821 824 default='hotpath',
822 825 )
823 826 coreconfigitem('profiling', 'type',
824 827 default='stat',
825 828 )
826 829 coreconfigitem('progress', 'assume-tty',
827 830 default=False,
828 831 )
829 832 coreconfigitem('progress', 'changedelay',
830 833 default=1,
831 834 )
832 835 coreconfigitem('progress', 'clear-complete',
833 836 default=True,
834 837 )
835 838 coreconfigitem('progress', 'debug',
836 839 default=False,
837 840 )
838 841 coreconfigitem('progress', 'delay',
839 842 default=3,
840 843 )
841 844 coreconfigitem('progress', 'disable',
842 845 default=False,
843 846 )
844 847 coreconfigitem('progress', 'estimateinterval',
845 848 default=60.0,
846 849 )
847 850 coreconfigitem('progress', 'format',
848 851 default=lambda: ['topic', 'bar', 'number', 'estimate'],
849 852 )
850 853 coreconfigitem('progress', 'refresh',
851 854 default=0.1,
852 855 )
853 856 coreconfigitem('progress', 'width',
854 857 default=dynamicdefault,
855 858 )
856 859 coreconfigitem('push', 'pushvars.server',
857 860 default=False,
858 861 )
859 862 coreconfigitem('server', 'bookmarks-pushkey-compat',
860 863 default=True,
861 864 )
862 865 coreconfigitem('server', 'bundle1',
863 866 default=True,
864 867 )
865 868 coreconfigitem('server', 'bundle1gd',
866 869 default=None,
867 870 )
868 871 coreconfigitem('server', 'bundle1.pull',
869 872 default=None,
870 873 )
871 874 coreconfigitem('server', 'bundle1gd.pull',
872 875 default=None,
873 876 )
874 877 coreconfigitem('server', 'bundle1.push',
875 878 default=None,
876 879 )
877 880 coreconfigitem('server', 'bundle1gd.push',
878 881 default=None,
879 882 )
880 883 coreconfigitem('server', 'compressionengines',
881 884 default=list,
882 885 )
883 886 coreconfigitem('server', 'concurrent-push-mode',
884 887 default='strict',
885 888 )
886 889 coreconfigitem('server', 'disablefullbundle',
887 890 default=False,
888 891 )
889 892 coreconfigitem('server', 'maxhttpheaderlen',
890 893 default=1024,
891 894 )
892 895 coreconfigitem('server', 'preferuncompressed',
893 896 default=False,
894 897 )
895 898 coreconfigitem('server', 'uncompressed',
896 899 default=True,
897 900 )
898 901 coreconfigitem('server', 'uncompressedallowsecret',
899 902 default=False,
900 903 )
901 904 coreconfigitem('server', 'validate',
902 905 default=False,
903 906 )
904 907 coreconfigitem('server', 'zliblevel',
905 908 default=-1,
906 909 )
907 910 coreconfigitem('share', 'pool',
908 911 default=None,
909 912 )
910 913 coreconfigitem('share', 'poolnaming',
911 914 default='identity',
912 915 )
913 916 coreconfigitem('smtp', 'host',
914 917 default=None,
915 918 )
916 919 coreconfigitem('smtp', 'local_hostname',
917 920 default=None,
918 921 )
919 922 coreconfigitem('smtp', 'password',
920 923 default=None,
921 924 )
922 925 coreconfigitem('smtp', 'port',
923 926 default=dynamicdefault,
924 927 )
925 928 coreconfigitem('smtp', 'tls',
926 929 default='none',
927 930 )
928 931 coreconfigitem('smtp', 'username',
929 932 default=None,
930 933 )
931 934 coreconfigitem('sparse', 'missingwarning',
932 935 default=True,
933 936 )
934 937 coreconfigitem('subrepos', 'allowed',
935 938 default=dynamicdefault, # to make backporting simpler
936 939 )
937 940 coreconfigitem('subrepos', 'hg:allowed',
938 941 default=dynamicdefault,
939 942 )
940 943 coreconfigitem('subrepos', 'git:allowed',
941 944 default=dynamicdefault,
942 945 )
943 946 coreconfigitem('subrepos', 'svn:allowed',
944 947 default=dynamicdefault,
945 948 )
946 949 coreconfigitem('templates', '.*',
947 950 default=None,
948 951 generic=True,
949 952 )
950 953 coreconfigitem('trusted', 'groups',
951 954 default=list,
952 955 )
953 956 coreconfigitem('trusted', 'users',
954 957 default=list,
955 958 )
956 959 coreconfigitem('ui', '_usedassubrepo',
957 960 default=False,
958 961 )
959 962 coreconfigitem('ui', 'allowemptycommit',
960 963 default=False,
961 964 )
962 965 coreconfigitem('ui', 'archivemeta',
963 966 default=True,
964 967 )
965 968 coreconfigitem('ui', 'askusername',
966 969 default=False,
967 970 )
968 971 coreconfigitem('ui', 'clonebundlefallback',
969 972 default=False,
970 973 )
971 974 coreconfigitem('ui', 'clonebundleprefers',
972 975 default=list,
973 976 )
974 977 coreconfigitem('ui', 'clonebundles',
975 978 default=True,
976 979 )
977 980 coreconfigitem('ui', 'color',
978 981 default='auto',
979 982 )
980 983 coreconfigitem('ui', 'commitsubrepos',
981 984 default=False,
982 985 )
983 986 coreconfigitem('ui', 'debug',
984 987 default=False,
985 988 )
986 989 coreconfigitem('ui', 'debugger',
987 990 default=None,
988 991 )
989 992 coreconfigitem('ui', 'editor',
990 993 default=dynamicdefault,
991 994 )
992 995 coreconfigitem('ui', 'fallbackencoding',
993 996 default=None,
994 997 )
995 998 coreconfigitem('ui', 'forcecwd',
996 999 default=None,
997 1000 )
998 1001 coreconfigitem('ui', 'forcemerge',
999 1002 default=None,
1000 1003 )
1001 1004 coreconfigitem('ui', 'formatdebug',
1002 1005 default=False,
1003 1006 )
1004 1007 coreconfigitem('ui', 'formatjson',
1005 1008 default=False,
1006 1009 )
1007 1010 coreconfigitem('ui', 'formatted',
1008 1011 default=None,
1009 1012 )
1010 1013 coreconfigitem('ui', 'graphnodetemplate',
1011 1014 default=None,
1012 1015 )
1013 1016 coreconfigitem('ui', 'http2debuglevel',
1014 1017 default=None,
1015 1018 )
1016 1019 coreconfigitem('ui', 'interactive',
1017 1020 default=None,
1018 1021 )
1019 1022 coreconfigitem('ui', 'interface',
1020 1023 default=None,
1021 1024 )
1022 1025 coreconfigitem('ui', 'interface.chunkselector',
1023 1026 default=None,
1024 1027 )
1025 1028 coreconfigitem('ui', 'logblockedtimes',
1026 1029 default=False,
1027 1030 )
1028 1031 coreconfigitem('ui', 'logtemplate',
1029 1032 default=None,
1030 1033 )
1031 1034 coreconfigitem('ui', 'merge',
1032 1035 default=None,
1033 1036 )
1034 1037 coreconfigitem('ui', 'mergemarkers',
1035 1038 default='basic',
1036 1039 )
1037 1040 coreconfigitem('ui', 'mergemarkertemplate',
1038 1041 default=('{node|short} '
1039 1042 '{ifeq(tags, "tip", "", '
1040 1043 'ifeq(tags, "", "", "{tags} "))}'
1041 1044 '{if(bookmarks, "{bookmarks} ")}'
1042 1045 '{ifeq(branch, "default", "", "{branch} ")}'
1043 1046 '- {author|user}: {desc|firstline}')
1044 1047 )
1045 1048 coreconfigitem('ui', 'nontty',
1046 1049 default=False,
1047 1050 )
1048 1051 coreconfigitem('ui', 'origbackuppath',
1049 1052 default=None,
1050 1053 )
1051 1054 coreconfigitem('ui', 'paginate',
1052 1055 default=True,
1053 1056 )
1054 1057 coreconfigitem('ui', 'patch',
1055 1058 default=None,
1056 1059 )
1057 1060 coreconfigitem('ui', 'portablefilenames',
1058 1061 default='warn',
1059 1062 )
1060 1063 coreconfigitem('ui', 'promptecho',
1061 1064 default=False,
1062 1065 )
1063 1066 coreconfigitem('ui', 'quiet',
1064 1067 default=False,
1065 1068 )
1066 1069 coreconfigitem('ui', 'quietbookmarkmove',
1067 1070 default=False,
1068 1071 )
1069 1072 coreconfigitem('ui', 'remotecmd',
1070 1073 default='hg',
1071 1074 )
1072 1075 coreconfigitem('ui', 'report_untrusted',
1073 1076 default=True,
1074 1077 )
1075 1078 coreconfigitem('ui', 'rollback',
1076 1079 default=True,
1077 1080 )
1078 1081 coreconfigitem('ui', 'slash',
1079 1082 default=False,
1080 1083 )
1081 1084 coreconfigitem('ui', 'ssh',
1082 1085 default='ssh',
1083 1086 )
1084 1087 coreconfigitem('ui', 'ssherrorhint',
1085 1088 default=None,
1086 1089 )
1087 1090 coreconfigitem('ui', 'statuscopies',
1088 1091 default=False,
1089 1092 )
1090 1093 coreconfigitem('ui', 'strict',
1091 1094 default=False,
1092 1095 )
1093 1096 coreconfigitem('ui', 'style',
1094 1097 default='',
1095 1098 )
1096 1099 coreconfigitem('ui', 'supportcontact',
1097 1100 default=None,
1098 1101 )
1099 1102 coreconfigitem('ui', 'textwidth',
1100 1103 default=78,
1101 1104 )
1102 1105 coreconfigitem('ui', 'timeout',
1103 1106 default='600',
1104 1107 )
1105 1108 coreconfigitem('ui', 'timeout.warn',
1106 1109 default=0,
1107 1110 )
1108 1111 coreconfigitem('ui', 'traceback',
1109 1112 default=False,
1110 1113 )
1111 1114 coreconfigitem('ui', 'tweakdefaults',
1112 1115 default=False,
1113 1116 )
1114 1117 coreconfigitem('ui', 'usehttp2',
1115 1118 default=False,
1116 1119 )
1117 1120 coreconfigitem('ui', 'username',
1118 1121 alias=[('ui', 'user')]
1119 1122 )
1120 1123 coreconfigitem('ui', 'verbose',
1121 1124 default=False,
1122 1125 )
1123 1126 coreconfigitem('verify', 'skipflags',
1124 1127 default=None,
1125 1128 )
1126 1129 coreconfigitem('web', 'allowbz2',
1127 1130 default=False,
1128 1131 )
1129 1132 coreconfigitem('web', 'allowgz',
1130 1133 default=False,
1131 1134 )
1132 1135 coreconfigitem('web', 'allow-pull',
1133 1136 alias=[('web', 'allowpull')],
1134 1137 default=True,
1135 1138 )
1136 1139 coreconfigitem('web', 'allow-push',
1137 1140 alias=[('web', 'allow_push')],
1138 1141 default=list,
1139 1142 )
1140 1143 coreconfigitem('web', 'allowzip',
1141 1144 default=False,
1142 1145 )
1143 1146 coreconfigitem('web', 'archivesubrepos',
1144 1147 default=False,
1145 1148 )
1146 1149 coreconfigitem('web', 'cache',
1147 1150 default=True,
1148 1151 )
1149 1152 coreconfigitem('web', 'contact',
1150 1153 default=None,
1151 1154 )
1152 1155 coreconfigitem('web', 'deny_push',
1153 1156 default=list,
1154 1157 )
1155 1158 coreconfigitem('web', 'guessmime',
1156 1159 default=False,
1157 1160 )
1158 1161 coreconfigitem('web', 'hidden',
1159 1162 default=False,
1160 1163 )
1161 1164 coreconfigitem('web', 'labels',
1162 1165 default=list,
1163 1166 )
1164 1167 coreconfigitem('web', 'logoimg',
1165 1168 default='hglogo.png',
1166 1169 )
1167 1170 coreconfigitem('web', 'logourl',
1168 1171 default='https://mercurial-scm.org/',
1169 1172 )
1170 1173 coreconfigitem('web', 'accesslog',
1171 1174 default='-',
1172 1175 )
1173 1176 coreconfigitem('web', 'address',
1174 1177 default='',
1175 1178 )
1176 1179 coreconfigitem('web', 'allow_archive',
1177 1180 default=list,
1178 1181 )
1179 1182 coreconfigitem('web', 'allow_read',
1180 1183 default=list,
1181 1184 )
1182 1185 coreconfigitem('web', 'baseurl',
1183 1186 default=None,
1184 1187 )
1185 1188 coreconfigitem('web', 'cacerts',
1186 1189 default=None,
1187 1190 )
1188 1191 coreconfigitem('web', 'certificate',
1189 1192 default=None,
1190 1193 )
1191 1194 coreconfigitem('web', 'collapse',
1192 1195 default=False,
1193 1196 )
1194 1197 coreconfigitem('web', 'csp',
1195 1198 default=None,
1196 1199 )
1197 1200 coreconfigitem('web', 'deny_read',
1198 1201 default=list,
1199 1202 )
1200 1203 coreconfigitem('web', 'descend',
1201 1204 default=True,
1202 1205 )
1203 1206 coreconfigitem('web', 'description',
1204 1207 default="",
1205 1208 )
1206 1209 coreconfigitem('web', 'encoding',
1207 1210 default=lambda: encoding.encoding,
1208 1211 )
1209 1212 coreconfigitem('web', 'errorlog',
1210 1213 default='-',
1211 1214 )
1212 1215 coreconfigitem('web', 'ipv6',
1213 1216 default=False,
1214 1217 )
1215 1218 coreconfigitem('web', 'maxchanges',
1216 1219 default=10,
1217 1220 )
1218 1221 coreconfigitem('web', 'maxfiles',
1219 1222 default=10,
1220 1223 )
1221 1224 coreconfigitem('web', 'maxshortchanges',
1222 1225 default=60,
1223 1226 )
1224 1227 coreconfigitem('web', 'motd',
1225 1228 default='',
1226 1229 )
1227 1230 coreconfigitem('web', 'name',
1228 1231 default=dynamicdefault,
1229 1232 )
1230 1233 coreconfigitem('web', 'port',
1231 1234 default=8000,
1232 1235 )
1233 1236 coreconfigitem('web', 'prefix',
1234 1237 default='',
1235 1238 )
1236 1239 coreconfigitem('web', 'push_ssl',
1237 1240 default=True,
1238 1241 )
1239 1242 coreconfigitem('web', 'refreshinterval',
1240 1243 default=20,
1241 1244 )
1242 1245 coreconfigitem('web', 'staticurl',
1243 1246 default=None,
1244 1247 )
1245 1248 coreconfigitem('web', 'stripes',
1246 1249 default=1,
1247 1250 )
1248 1251 coreconfigitem('web', 'style',
1249 1252 default='paper',
1250 1253 )
1251 1254 coreconfigitem('web', 'templates',
1252 1255 default=None,
1253 1256 )
1254 1257 coreconfigitem('web', 'view',
1255 1258 default='served',
1256 1259 )
1257 1260 coreconfigitem('worker', 'backgroundclose',
1258 1261 default=dynamicdefault,
1259 1262 )
1260 1263 # Windows defaults to a limit of 512 open files. A buffer of 128
1261 1264 # should give us enough headway.
1262 1265 coreconfigitem('worker', 'backgroundclosemaxqueue',
1263 1266 default=384,
1264 1267 )
1265 1268 coreconfigitem('worker', 'backgroundcloseminfilecount',
1266 1269 default=2048,
1267 1270 )
1268 1271 coreconfigitem('worker', 'backgroundclosethreadcount',
1269 1272 default=4,
1270 1273 )
1271 1274 coreconfigitem('worker', 'enabled',
1272 1275 default=True,
1273 1276 )
1274 1277 coreconfigitem('worker', 'numcpus',
1275 1278 default=None,
1276 1279 )
1277 1280
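# Illustrative note (not from the source): a registered item's default is
# what the ui.config* family returns when no hgrc sets a value, e.g.::
#
#   repo.ui.configint('worker', 'backgroundclosemaxqueue')  # -> 384
#   repo.ui.configbool('worker', 'enabled')                 # -> True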
1278 1281 # Rebase related configuration moved to core because other extensions are doing
1279 1282 # strange things. For example, shelve imports the extension to reuse some bits
1280 1283 # without formally loading it.
1281 1284 coreconfigitem('commands', 'rebase.requiredest',
1282 1285 default=False,
1283 1286 )
1284 1287 coreconfigitem('experimental', 'rebaseskipobsolete',
1285 1288 default=True,
1286 1289 )
1287 1290 coreconfigitem('rebase', 'singletransaction',
1288 1291 default=False,
1289 1292 )
1290 1293 coreconfigitem('rebase', 'experimental.inmemory',
1291 1294 default=False,
1292 1295 )
@@ -1,2229 +1,2234
1 1 # exchange.py - utility to exchange data between repos.
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import collections
11 11 import errno
12 12 import hashlib
13 13
14 14 from .i18n import _
15 15 from .node import (
16 16 bin,
17 17 hex,
18 18 nullid,
19 19 )
20 20 from . import (
21 21 bookmarks as bookmod,
22 22 bundle2,
23 23 changegroup,
24 24 discovery,
25 25 error,
26 26 lock as lockmod,
27 27 logexchange,
28 28 obsolete,
29 29 phases,
30 30 pushkey,
31 31 pycompat,
32 32 scmutil,
33 33 sslutil,
34 34 streamclone,
35 35 url as urlmod,
36 36 util,
37 37 )
38 38
39 39 urlerr = util.urlerr
40 40 urlreq = util.urlreq
41 41
42 42 # Maps bundle version human names to changegroup versions.
43 43 _bundlespeccgversions = {'v1': '01',
44 44 'v2': '02',
45 45 'packed1': 's1',
46 46 'bundle2': '02', #legacy
47 47 }
48 48
49 49 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
50 50 _bundlespecv1compengines = {'gzip', 'bzip2', 'none'}
51 51
52 52 def parsebundlespec(repo, spec, strict=True, externalnames=False):
53 53 """Parse a bundle string specification into parts.
54 54
55 55 Bundle specifications denote a well-defined bundle/exchange format.
56 56 The content of a given specification should not change over time in
57 57 order to ensure that bundles produced by a newer version of Mercurial are
58 58 readable from an older version.
59 59
60 60 The string currently has the form:
61 61
62 62 <compression>-<type>[;<parameter0>[;<parameter1>]]
63 63
64 64 Where <compression> is one of the supported compression formats
65 65 and <type> is (currently) a version string. A ";" can follow the type and
66 66 all text afterwards is interpreted as URI encoded, ";" delimited key=value
67 67 pairs.
68 68
69 69 If ``strict`` is True (the default) <compression> is required. Otherwise,
70 70 it is optional.
71 71
72 72 If ``externalnames`` is False (the default), the human-centric names will
73 73 be converted to their internal representation.
74 74
75 75 Returns a 3-tuple of (compression, version, parameters). Compression will
76 76 be ``None`` if not in strict mode and a compression isn't defined.
77 77
78 78 An ``InvalidBundleSpecification`` is raised when the specification is
79 79 not syntactically well formed.
80 80
81 81 An ``UnsupportedBundleSpecification`` is raised when the compression or
82 82 bundle type/version is not recognized.
83 83
84 84 Note: this function will likely eventually return a more complex data
85 85 structure, including bundle2 part information.
86 86 """
87 87 def parseparams(s):
88 88 if ';' not in s:
89 89 return s, {}
90 90
91 91 params = {}
92 92 version, paramstr = s.split(';', 1)
93 93
94 94 for p in paramstr.split(';'):
95 95 if '=' not in p:
96 96 raise error.InvalidBundleSpecification(
97 97 _('invalid bundle specification: '
98 98 'missing "=" in parameter: %s') % p)
99 99
100 100 key, value = p.split('=', 1)
101 101 key = urlreq.unquote(key)
102 102 value = urlreq.unquote(value)
103 103 params[key] = value
104 104
105 105 return version, params
106 106
107 107
108 108 if strict and '-' not in spec:
109 109 raise error.InvalidBundleSpecification(
110 110 _('invalid bundle specification; '
111 111 'must be prefixed with compression: %s') % spec)
112 112
113 113 if '-' in spec:
114 114 compression, version = spec.split('-', 1)
115 115
116 116 if compression not in util.compengines.supportedbundlenames:
117 117 raise error.UnsupportedBundleSpecification(
118 118 _('%s compression is not supported') % compression)
119 119
120 120 version, params = parseparams(version)
121 121
122 122 if version not in _bundlespeccgversions:
123 123 raise error.UnsupportedBundleSpecification(
124 124 _('%s is not a recognized bundle version') % version)
125 125 else:
126 126 # Value could be just the compression or just the version, in which
127 127 # case some defaults are assumed (but only when not in strict mode).
128 128 assert not strict
129 129
130 130 spec, params = parseparams(spec)
131 131
132 132 if spec in util.compengines.supportedbundlenames:
133 133 compression = spec
134 134 version = 'v1'
135 135 # Generaldelta repos require v2.
136 136 if 'generaldelta' in repo.requirements:
137 137 version = 'v2'
138 138 # Modern compression engines require v2.
139 139 if compression not in _bundlespecv1compengines:
140 140 version = 'v2'
141 141 elif spec in _bundlespeccgversions:
142 142 if spec == 'packed1':
143 143 compression = 'none'
144 144 else:
145 145 compression = 'bzip2'
146 146 version = spec
147 147 else:
148 148 raise error.UnsupportedBundleSpecification(
149 149 _('%s is not a recognized bundle specification') % spec)
150 150
151 151 # Bundle version 1 only supports a known set of compression engines.
152 152 if version == 'v1' and compression not in _bundlespecv1compengines:
153 153 raise error.UnsupportedBundleSpecification(
154 154 _('compression engine %s is not supported on v1 bundles') %
155 155 compression)
156 156
157 157 # The specification for packed1 can optionally declare the data formats
158 158 # required to apply it. If we see this metadata, compare against what the
159 159 # repo supports and error if the bundle isn't compatible.
160 160 if version == 'packed1' and 'requirements' in params:
161 161 requirements = set(params['requirements'].split(','))
162 162 missingreqs = requirements - repo.supportedformats
163 163 if missingreqs:
164 164 raise error.UnsupportedBundleSpecification(
165 165 _('missing support for repository features: %s') %
166 166 ', '.join(sorted(missingreqs)))
167 167
168 168 if not externalnames:
169 169 engine = util.compengines.forbundlename(compression)
170 170 compression = engine.bundletype()[1]
171 171 version = _bundlespeccgversions[version]
172 172 return compression, version, params
173 173
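# A minimal standalone sketch (illustrative, not part of exchange.py) of the
# spec grammar documented in parsebundlespec above. The real function also
# urldecodes keys/values via urlreq and validates names against the tables
# defined at the top of this file.
def _parsebundlespecsketch(spec):
    # split '<compression>-<type>[;<key>=<value>...]' into its three parts
    compression, _, rest = spec.partition('-')
    version, _, paramstr = rest.partition(';')
    params = {}
    if paramstr:
        for p in paramstr.split(';'):
            key, _, value = p.partition('=')
            params[key] = value
    return compression, version, params

# _parsebundlespecsketch('gzip-v2;requirements=generaldelta')
# -> ('gzip', 'v2', {'requirements': 'generaldelta'})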
174 174 def readbundle(ui, fh, fname, vfs=None):
175 175 header = changegroup.readexactly(fh, 4)
176 176
177 177 alg = None
178 178 if not fname:
179 179 fname = "stream"
180 180 if not header.startswith('HG') and header.startswith('\0'):
181 181 fh = changegroup.headerlessfixup(fh, header)
182 182 header = "HG10"
183 183 alg = 'UN'
184 184 elif vfs:
185 185 fname = vfs.join(fname)
186 186
187 187 magic, version = header[0:2], header[2:4]
188 188
189 189 if magic != 'HG':
190 190 raise error.Abort(_('%s: not a Mercurial bundle') % fname)
191 191 if version == '10':
192 192 if alg is None:
193 193 alg = changegroup.readexactly(fh, 2)
194 194 return changegroup.cg1unpacker(fh, alg)
195 195 elif version.startswith('2'):
196 196 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
197 197 elif version == 'S1':
198 198 return streamclone.streamcloneapplier(fh)
199 199 else:
200 200 raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))
201 201
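# Hedged companion sketch (not part of exchange.py): the header dispatch
# performed by readbundle above, restated as a standalone file sniffer.
def _sniffbundlekind(path):
    with open(path, 'rb') as fh:
        header = fh.read(4)
    if header.startswith(b'\0'):
        # headerless changegroup; readbundle fixes it up as HG10 + 'UN'
        return 'headerless changegroup (HG10UN)'
    magic, version = header[0:2], header[2:4]
    if magic != b'HG':
        return 'not a Mercurial bundle'
    if version == b'10':
        return 'changegroup v1 (2-byte compression code follows)'
    if version.startswith(b'2'):
        return 'bundle2'
    if version == b'S1':
        return 'stream clone bundle (packed1)'
    return 'unknown bundle version %r' % version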
202 202 def getbundlespec(ui, fh):
203 203 """Infer the bundlespec from a bundle file handle.
204 204
205 205 The input file handle is seeked and the original seek position is not
206 206 restored.
207 207 """
208 208 def speccompression(alg):
209 209 try:
210 210 return util.compengines.forbundletype(alg).bundletype()[0]
211 211 except KeyError:
212 212 return None
213 213
214 214 b = readbundle(ui, fh, None)
215 215 if isinstance(b, changegroup.cg1unpacker):
216 216 alg = b._type
217 217 if alg == '_truncatedBZ':
218 218 alg = 'BZ'
219 219 comp = speccompression(alg)
220 220 if not comp:
221 221 raise error.Abort(_('unknown compression algorithm: %s') % alg)
222 222 return '%s-v1' % comp
223 223 elif isinstance(b, bundle2.unbundle20):
224 224 if 'Compression' in b.params:
225 225 comp = speccompression(b.params['Compression'])
226 226 if not comp:
227 227 raise error.Abort(_('unknown compression algorithm: %s') % comp)
228 228 else:
229 229 comp = 'none'
230 230
231 231 version = None
232 232 for part in b.iterparts():
233 233 if part.type == 'changegroup':
234 234 version = part.params['version']
235 235 if version in ('01', '02'):
236 236 version = 'v2'
237 237 else:
238 238 raise error.Abort(_('changegroup version %s does not have '
239 239 'a known bundlespec') % version,
240 240 hint=_('try upgrading your Mercurial '
241 241 'client'))
242 242
243 243 if not version:
244 244 raise error.Abort(_('could not identify changegroup version in '
245 245 'bundle'))
246 246
247 247 return '%s-%s' % (comp, version)
248 248 elif isinstance(b, streamclone.streamcloneapplier):
249 249 requirements = streamclone.readbundle1header(fh)[2]
250 250 params = 'requirements=%s' % ','.join(sorted(requirements))
251 251 return 'none-packed1;%s' % urlreq.quote(params)
252 252 else:
253 253 raise error.Abort(_('unknown bundle type: %s') % b)
254 254
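# Hedged usage note: the file handle given to getbundlespec is consumed
# (seeked) and should not be reused afterwards. Hypothetical call::
#
#   with open('some-bundle.hg', 'rb') as fh:
#       spec = getbundlespec(ui, fh)   # e.g. 'gzip-v1' or 'none-packed1;...'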
255 255 def _computeoutgoing(repo, heads, common):
256 256 """Computes which revs are outgoing given a set of common
257 257 and a set of heads.
258 258
259 259 This is a separate function so extensions can have access to
260 260 the logic.
261 261
262 262 Returns a discovery.outgoing object.
263 263 """
264 264 cl = repo.changelog
265 265 if common:
266 266 hasnode = cl.hasnode
267 267 common = [n for n in common if hasnode(n)]
268 268 else:
269 269 common = [nullid]
270 270 if not heads:
271 271 heads = cl.heads()
272 272 return discovery.outgoing(repo, common, heads)
273 273
274 274 def _forcebundle1(op):
275 275 """return true if a pull/push must use bundle1
276 276
277 277 This function is used to allow testing of the older bundle version"""
278 278 ui = op.repo.ui
279 279 forcebundle1 = False
280 280 # The goal of this config is to allow developers to choose the bundle
281 281 # version used during exchange. This is especially handy during tests.
282 282 # Value is a list of bundle versions to pick from; the highest supported
283 283 # version should be used.
284 284 #
285 285 # developer config: devel.legacy.exchange
286 286 exchange = ui.configlist('devel', 'legacy.exchange')
287 287 forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
288 288 return forcebundle1 or not op.remote.capable('bundle2')
289 289
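# Example developer configuration (hgrc syntax) for the knob read above,
# forcing bundle1 even when the peer supports bundle2:
#
#   [devel]
#   legacy.exchange = bundle1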
290 290 class pushoperation(object):
291 291 """A object that represent a single push operation
292 292
293 293 Its purpose is to carry push related state and very common operations.
294 294
295 295 A new pushoperation should be created at the beginning of each push and
296 296 discarded afterward.
297 297 """
298 298
299 299 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
300 300 bookmarks=(), pushvars=None):
301 301 # repo we push from
302 302 self.repo = repo
303 303 self.ui = repo.ui
304 304 # repo we push to
305 305 self.remote = remote
306 306 # force option provided
307 307 self.force = force
308 308 # revs to be pushed (None is "all")
309 309 self.revs = revs
310 310 # bookmarks explicitly pushed
311 311 self.bookmarks = bookmarks
312 312 # allow push of new branch
313 313 self.newbranch = newbranch
314 314 # steps already performed
315 315 # (used to check what steps have already been performed through bundle2)
316 316 self.stepsdone = set()
317 317 # Integer version of the changegroup push result
318 318 # - None means nothing to push
319 319 # - 0 means HTTP error
320 320 # - 1 means we pushed and remote head count is unchanged *or*
321 321 # we have outgoing changesets but refused to push
322 322 # - other values as described by addchangegroup()
323 323 self.cgresult = None
324 324 # Boolean value for the bookmark push
325 325 self.bkresult = None
326 326 # discovery.outgoing object (contains common and outgoing data)
327 327 self.outgoing = None
328 328 # all remote topological heads before the push
329 329 self.remoteheads = None
330 330 # Details of the remote branch pre and post push
331 331 #
332 332 # mapping: {'branch': ([remoteheads],
333 333 # [newheads],
334 334 # [unsyncedheads],
335 335 # [discardedheads])}
336 336 # - branch: the branch name
337 337 # - remoteheads: the list of remote heads known locally
338 338 # None if the branch is new
339 339 # - newheads: the new remote heads (known locally) with outgoing pushed
340 340 # - unsyncedheads: the list of remote heads unknown locally.
341 341 # - discardedheads: the list of remote heads made obsolete by the push
342 342 self.pushbranchmap = None
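# Illustrative shape of one entry (hypothetical node values):
#   {'default': ([headA, headB],  # remoteheads known locally
#                [headA, headC],  # newheads once outgoing is pushed
#                [],              # unsyncedheads (unknown locally)
#                [headB])}        # discardedheads (obsoleted by the push)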
343 343 # testable as a boolean indicating if any nodes are missing locally.
344 344 self.incoming = None
345 345 # summary of the remote phase situation
346 346 self.remotephases = None
347 347 # phase changes that must be pushed alongside the changesets
348 348 self.outdatedphases = None
349 349 # phase changes that must be pushed if the changeset push fails
350 350 self.fallbackoutdatedphases = None
351 351 # outgoing obsmarkers
352 352 self.outobsmarkers = set()
353 353 # outgoing bookmarks
354 354 self.outbookmarks = []
355 355 # transaction manager
356 356 self.trmanager = None
357 357 # map { pushkey partid -> callback handling failure}
358 358 # used to handle exceptions from mandatory pushkey part failures
359 359 self.pkfailcb = {}
360 360 # an iterable of pushvars or None
361 361 self.pushvars = pushvars
362 362
363 363 @util.propertycache
364 364 def futureheads(self):
365 365 """future remote heads if the changeset push succeeds"""
366 366 return self.outgoing.missingheads
367 367
368 368 @util.propertycache
369 369 def fallbackheads(self):
370 370 """future remote heads if the changeset push fails"""
371 371 if self.revs is None:
372 372 # no target to push, all common heads are relevant
373 373 return self.outgoing.commonheads
374 374 unfi = self.repo.unfiltered()
375 375 # I want cheads = heads(::missingheads and ::commonheads)
376 376 # (missingheads is revs with secret changesets filtered out)
377 377 #
378 378 # This can be expressed as:
379 379 # cheads = ( (missingheads and ::commonheads)
380 380 # + (commonheads and ::missingheads)
381 381 # )
382 382 #
383 383 # while trying to push we already computed the following:
384 384 # common = (::commonheads)
385 385 # missing = ((commonheads::missingheads) - commonheads)
386 386 #
387 387 # We can pick:
388 388 # * missingheads part of common (::commonheads)
389 389 common = self.outgoing.common
390 390 nm = self.repo.changelog.nodemap
391 391 cheads = [node for node in self.revs if nm[node] in common]
392 392 # and
393 393 # * commonheads parents on missing
394 394 revset = unfi.set('%ln and parents(roots(%ln))',
395 395 self.outgoing.commonheads,
396 396 self.outgoing.missing)
397 397 cheads.extend(c.node() for c in revset)
398 398 return cheads
399 399
400 400 @property
401 401 def commonheads(self):
402 402 """set of all common heads after changeset bundle push"""
403 403 if self.cgresult:
404 404 return self.futureheads
405 405 else:
406 406 return self.fallbackheads
407 407
408 408 # mapping of message used when pushing bookmark
409 409 bookmsgmap = {'update': (_("updating bookmark %s\n"),
410 410 _('updating bookmark %s failed!\n')),
411 411 'export': (_("exporting bookmark %s\n"),
412 412 _('exporting bookmark %s failed!\n')),
413 413 'delete': (_("deleting remote bookmark %s\n"),
414 414 _('deleting remote bookmark %s failed!\n')),
415 415 }
416 416
417 417
418 418 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
419 419 opargs=None):
420 420 '''Push outgoing changesets (limited by revs) from a local
421 421 repository to remote. Return an integer:
422 422 - None means nothing to push
423 423 - 0 means HTTP error
424 424 - 1 means we pushed and remote head count is unchanged *or*
425 425 we have outgoing changesets but refused to push
426 426 - other values as described by addchangegroup()
427 427 '''
428 428 if opargs is None:
429 429 opargs = {}
430 430 pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
431 431 **pycompat.strkwargs(opargs))
432 432 if pushop.remote.local():
433 433 missing = (set(pushop.repo.requirements)
434 434 - pushop.remote.local().supported)
435 435 if missing:
436 436 msg = _("required features are not"
437 437 " supported in the destination:"
438 438 " %s") % (', '.join(sorted(missing)))
439 439 raise error.Abort(msg)
440 440
441 441 if not pushop.remote.canpush():
442 442 raise error.Abort(_("destination does not support push"))
443 443
444 444 if not pushop.remote.capable('unbundle'):
445 445 raise error.Abort(_('cannot push: destination does not support the '
446 446 'unbundle wire protocol command'))
447 447
448 448 # get lock as we might write phase data
449 449 wlock = lock = None
450 450 try:
451 451 # bundle2 push may receive a reply bundle touching bookmarks or other
452 452 # things requiring the wlock. Take it now to ensure proper ordering.
453 453 maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
454 454 if (not _forcebundle1(pushop)) and maypushback:
455 455 wlock = pushop.repo.wlock()
456 456 lock = pushop.repo.lock()
457 457 pushop.trmanager = transactionmanager(pushop.repo,
458 458 'push-response',
459 459 pushop.remote.url())
460 460 except IOError as err:
461 461 if err.errno != errno.EACCES:
462 462 raise
463 463 # source repo cannot be locked.
464 464 # We do not abort the push, but just disable the local phase
465 465 # synchronisation.
466 466 msg = 'cannot lock source repository: %s\n' % err
467 467 pushop.ui.debug(msg)
468 468
469 469 with wlock or util.nullcontextmanager(), \
470 470 lock or util.nullcontextmanager(), \
471 471 pushop.trmanager or util.nullcontextmanager():
472 472 pushop.repo.checkpush(pushop)
473 473 _pushdiscovery(pushop)
474 474 if not _forcebundle1(pushop):
475 475 _pushbundle2(pushop)
476 476 _pushchangeset(pushop)
477 477 _pushsyncphase(pushop)
478 478 _pushobsolete(pushop)
479 479 _pushbookmark(pushop)
480 480
481 481 return pushop
482 482
483 483 # list of steps to perform discovery before push
484 484 pushdiscoveryorder = []
485 485
486 486 # Mapping between step name and function
487 487 #
488 488 # This exists to help extensions wrap steps if necessary
489 489 pushdiscoverymapping = {}
490 490
491 491 def pushdiscovery(stepname):
492 492 """decorator for function performing discovery before push
493 493
494 494 The function is added to the step -> function mapping and appended to the
495 495 list of steps. Beware that decorated functions will be added in order (this
496 496 may matter).
497 497
498 498 You can only use this decorator for a new step; if you want to wrap a step
499 499 from an extension, change the pushdiscoverymapping dictionary directly."""
500 500 def dec(func):
501 501 assert stepname not in pushdiscoverymapping
502 502 pushdiscoverymapping[stepname] = func
503 503 pushdiscoveryorder.append(stepname)
504 504 return func
505 505 return dec
506 506
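# Hypothetical extension-style discovery step (illustrative only) wired in
# through the decorator above; decorated steps run in registration order.
@pushdiscovery('example-step')
def _pushdiscoveryexample(pushop):
    pushop.ui.debug('example discovery step ran\n')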
507 507 def _pushdiscovery(pushop):
508 508 """Run all discovery steps"""
509 509 for stepname in pushdiscoveryorder:
510 510 step = pushdiscoverymapping[stepname]
511 511 step(pushop)
512 512
513 513 @pushdiscovery('changeset')
514 514 def _pushdiscoverychangeset(pushop):
515 515 """discover the changeset that need to be pushed"""
516 516 fci = discovery.findcommonincoming
517 517 if pushop.revs:
518 518 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force,
519 519 ancestorsof=pushop.revs)
520 520 else:
521 521 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
522 522 common, inc, remoteheads = commoninc
523 523 fco = discovery.findcommonoutgoing
524 524 outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
525 525 commoninc=commoninc, force=pushop.force)
526 526 pushop.outgoing = outgoing
527 527 pushop.remoteheads = remoteheads
528 528 pushop.incoming = inc
529 529
530 530 @pushdiscovery('phase')
531 531 def _pushdiscoveryphase(pushop):
532 532 """discover the phase that needs to be pushed
533 533
534 534 (computed for both success and failure case for changesets push)"""
535 535 outgoing = pushop.outgoing
536 536 unfi = pushop.repo.unfiltered()
537 537 remotephases = pushop.remote.listkeys('phases')
538 538 if (pushop.ui.configbool('ui', '_usedassubrepo')
539 539 and remotephases # server supports phases
540 540 and not pushop.outgoing.missing # no changesets to be pushed
541 541 and remotephases.get('publishing', False)):
542 542 # When:
543 543 # - this is a subrepo push
544 544 # - and the remote supports phases
545 545 # - and no changesets are to be pushed
546 546 # - and the remote is publishing
547 547 # We may be in the issue 3781 case!
548 548 # We drop the phase synchronisation usually done as a courtesy;
549 549 # it could wrongly publish changesets that are possibly still
550 550 # draft on the remote.
551 551 pushop.outdatedphases = []
552 552 pushop.fallbackoutdatedphases = []
553 553 return
554 554
555 555 pushop.remotephases = phases.remotephasessummary(pushop.repo,
556 556 pushop.fallbackheads,
557 557 remotephases)
558 558 droots = pushop.remotephases.draftroots
559 559
560 560 extracond = ''
561 561 if not pushop.remotephases.publishing:
562 562 extracond = ' and public()'
563 563 revset = 'heads((%%ln::%%ln) %s)' % extracond
564 564 # Get the list of all revs draft on remote but public here.
565 565 # XXX Beware that the revset breaks if droots is not strictly
566 566 # XXX roots; we may want to ensure it is, but that is costly
567 567 fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
568 568 if not outgoing.missing:
569 569 future = fallback
570 570 else:
571 571 # add the changesets we are going to push as draft
572 572 #
573 573 # should not be necessary for a publishing server, but because of an
574 574 # issue fixed in xxxxx we have to do it anyway.
575 575 fdroots = list(unfi.set('roots(%ln + %ln::)',
576 576 outgoing.missing, droots))
577 577 fdroots = [f.node() for f in fdroots]
578 578 future = list(unfi.set(revset, fdroots, pushop.futureheads))
579 579 pushop.outdatedphases = future
580 580 pushop.fallbackoutdatedphases = fallback
581 581
582 582 @pushdiscovery('obsmarker')
583 583 def _pushdiscoveryobsmarkers(pushop):
584 584 if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
585 585 and pushop.repo.obsstore
586 586 and 'obsolete' in pushop.remote.listkeys('namespaces')):
587 587 repo = pushop.repo
588 588 # very naive computation that can be quite expensive on big repos.
589 589 # However, evolution is currently slow on them anyway.
590 590 nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
591 591 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
592 592
593 593 @pushdiscovery('bookmarks')
594 594 def _pushdiscoverybookmarks(pushop):
595 595 ui = pushop.ui
596 596 repo = pushop.repo.unfiltered()
597 597 remote = pushop.remote
598 598 ui.debug("checking for updated bookmarks\n")
599 599 ancestors = ()
600 600 if pushop.revs:
601 601 revnums = map(repo.changelog.rev, pushop.revs)
602 602 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
603 603 remotebookmark = remote.listkeys('bookmarks')
604 604
605 605 explicit = set([repo._bookmarks.expandname(bookmark)
606 606 for bookmark in pushop.bookmarks])
607 607
608 608 remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
609 609 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
610 610
611 611 def safehex(x):
612 612 if x is None:
613 613 return x
614 614 return hex(x)
615 615
616 616 def hexifycompbookmarks(bookmarks):
617 617 for b, scid, dcid in bookmarks:
618 618 yield b, safehex(scid), safehex(dcid)
619 619
620 620 comp = [hexifycompbookmarks(marks) for marks in comp]
621 621 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
622 622
623 623 for b, scid, dcid in advsrc:
624 624 if b in explicit:
625 625 explicit.remove(b)
626 626 if not ancestors or repo[scid].rev() in ancestors:
627 627 pushop.outbookmarks.append((b, dcid, scid))
628 628 # search added bookmark
629 629 for b, scid, dcid in addsrc:
630 630 if b in explicit:
631 631 explicit.remove(b)
632 632 pushop.outbookmarks.append((b, '', scid))
633 633 # search for overwritten bookmark
634 634 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
635 635 if b in explicit:
636 636 explicit.remove(b)
637 637 pushop.outbookmarks.append((b, dcid, scid))
638 638 # search for bookmark to delete
639 639 for b, scid, dcid in adddst:
640 640 if b in explicit:
641 641 explicit.remove(b)
642 642 # treat as "deleted locally"
643 643 pushop.outbookmarks.append((b, dcid, ''))
644 644 # identical bookmarks shouldn't get reported
645 645 for b, scid, dcid in same:
646 646 if b in explicit:
647 647 explicit.remove(b)
648 648
649 649 if explicit:
650 650 explicit = sorted(explicit)
651 651 # we should probably list all of them
652 652 ui.warn(_('bookmark %s does not exist on the local '
653 653 'or remote repository!\n') % explicit[0])
654 654 pushop.bkresult = 2
655 655
656 656 pushop.outbookmarks.sort()
657 657
658 658 def _pushcheckoutgoing(pushop):
659 659 outgoing = pushop.outgoing
660 660 unfi = pushop.repo.unfiltered()
661 661 if not outgoing.missing:
662 662 # nothing to push
663 663 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
664 664 return False
665 665 # something to push
666 666 if not pushop.force:
667 667 # if repo.obsstore is empty --> no obsolete markers,
668 668 # so we can skip the iteration
669 669 if unfi.obsstore:
670 670 # these messages are defined here for 80-char-limit reasons
671 671 mso = _("push includes obsolete changeset: %s!")
672 672 mspd = _("push includes phase-divergent changeset: %s!")
673 673 mscd = _("push includes content-divergent changeset: %s!")
674 674 mst = {"orphan": _("push includes orphan changeset: %s!"),
675 675 "phase-divergent": mspd,
676 676 "content-divergent": mscd}
677 677 # If there is at least one obsolete or unstable
678 678 # changeset in missing, then at least one of the
679 679 # missing heads will be obsolete or unstable.
680 680 # So checking heads only is ok.
681 681 for node in outgoing.missingheads:
682 682 ctx = unfi[node]
683 683 if ctx.obsolete():
684 684 raise error.Abort(mso % ctx)
685 685 elif ctx.isunstable():
686 686 # TODO print more than one instability in the abort
687 687 # message
688 688 raise error.Abort(mst[ctx.instabilities()[0]] % ctx)
689 689
690 690 discovery.checkheads(pushop)
691 691 return True
692 692
693 693 # List of names of steps to perform for an outgoing bundle2, order matters.
694 694 b2partsgenorder = []
695 695
696 696 # Mapping between step name and function
697 697 #
698 698 # This exists to help extensions wrap steps if necessary
699 699 b2partsgenmapping = {}
700 700
701 701 def b2partsgenerator(stepname, idx=None):
702 702 """decorator for function generating bundle2 part
703 703
704 704 The function is added to the step -> function mapping and appended to the
705 705 list of steps. Beware that decorated functions will be added in order
706 706 (this may matter).
707 707
708 708 You can only use this decorator for new steps; if you want to wrap a step
709 709 from an extension, modify the b2partsgenmapping dictionary directly."""
710 710 def dec(func):
711 711 assert stepname not in b2partsgenmapping
712 712 b2partsgenmapping[stepname] = func
713 713 if idx is None:
714 714 b2partsgenorder.append(stepname)
715 715 else:
716 716 b2partsgenorder.insert(idx, stepname)
717 717 return func
718 718 return dec
719 719
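# Hypothetical part generator (illustrative only) registered through the
# decorator above; passing idx=0 would make it run first, as 'pushvars'
# does later in this file.
@b2partsgenerator('example-part')
def _pushb2example(pushop, bundler):
    if 'example-part' in pushop.stepsdone:
        return
    pushop.stepsdone.add('example-part')
    # an advisory part: receivers that do not know it can safely ignore it
    bundler.newpart('x-example', data='payload', mandatory=False)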
720 720 def _pushb2ctxcheckheads(pushop, bundler):
721 721 """Generate race condition checking parts
722 722
723 723 Exists as an independent function to aid extensions
724 724 """
725 725 # * 'force' does not check for push races,
726 726 # * if we don't push anything, there is nothing to check.
727 727 if not pushop.force and pushop.outgoing.missingheads:
728 728 allowunrelated = 'related' in bundler.capabilities.get('checkheads', ())
729 729 emptyremote = pushop.pushbranchmap is None
730 730 if not allowunrelated or emptyremote:
731 731 bundler.newpart('check:heads', data=iter(pushop.remoteheads))
732 732 else:
733 733 affected = set()
734 734 for branch, heads in pushop.pushbranchmap.iteritems():
735 735 remoteheads, newheads, unsyncedheads, discardedheads = heads
736 736 if remoteheads is not None:
737 737 remote = set(remoteheads)
738 738 affected |= set(discardedheads) & remote
739 739 affected |= remote - set(newheads)
740 740 if affected:
741 741 data = iter(sorted(affected))
742 742 bundler.newpart('check:updated-heads', data=data)
743 743
744 744 def _pushing(pushop):
745 745 """return True if we are pushing anything"""
746 746 return bool(pushop.outgoing.missing
747 747 or pushop.outdatedphases
748 748 or pushop.outobsmarkers
749 749 or pushop.outbookmarks)
750 750
751 751 @b2partsgenerator('check-bookmarks')
752 752 def _pushb2checkbookmarks(pushop, bundler):
753 753 """insert bookmark move checking"""
754 754 if not _pushing(pushop) or pushop.force:
755 755 return
756 756 b2caps = bundle2.bundle2caps(pushop.remote)
757 757 hasbookmarkcheck = 'bookmarks' in b2caps
758 758 if not (pushop.outbookmarks and hasbookmarkcheck):
759 759 return
760 760 data = []
761 761 for book, old, new in pushop.outbookmarks:
762 762 old = bin(old)
763 763 data.append((book, old))
764 764 checkdata = bookmod.binaryencode(data)
765 765 bundler.newpart('check:bookmarks', data=checkdata)
766 766
767 767 @b2partsgenerator('check-phases')
768 768 def _pushb2checkphases(pushop, bundler):
769 769 """insert phase move checking"""
770 770 if not _pushing(pushop) or pushop.force:
771 771 return
772 772 b2caps = bundle2.bundle2caps(pushop.remote)
773 773 hasphaseheads = 'heads' in b2caps.get('phases', ())
774 774 if pushop.remotephases is not None and hasphaseheads:
775 775 # check that the remote phase has not changed
776 776 checks = [[] for p in phases.allphases]
777 777 checks[phases.public].extend(pushop.remotephases.publicheads)
778 778 checks[phases.draft].extend(pushop.remotephases.draftroots)
779 779 if any(checks):
780 780 for nodes in checks:
781 781 nodes.sort()
782 782 checkdata = phases.binaryencode(checks)
783 783 bundler.newpart('check:phases', data=checkdata)
784 784
785 785 @b2partsgenerator('changeset')
786 786 def _pushb2ctx(pushop, bundler):
787 787 """handle changegroup push through bundle2
788 788
789 789 addchangegroup result is stored in the ``pushop.cgresult`` attribute.
790 790 """
791 791 if 'changesets' in pushop.stepsdone:
792 792 return
793 793 pushop.stepsdone.add('changesets')
794 794 # Send known heads to the server for race detection.
795 795 if not _pushcheckoutgoing(pushop):
796 796 return
797 797 pushop.repo.prepushoutgoinghooks(pushop)
798 798
799 799 _pushb2ctxcheckheads(pushop, bundler)
800 800
801 801 b2caps = bundle2.bundle2caps(pushop.remote)
802 802 version = '01'
803 803 cgversions = b2caps.get('changegroup')
804 804 if cgversions: # 3.1 and 3.2 ship with an empty value
805 805 cgversions = [v for v in cgversions
806 806 if v in changegroup.supportedoutgoingversions(
807 807 pushop.repo)]
808 808 if not cgversions:
809 809 raise ValueError(_('no common changegroup version'))
810 810 version = max(cgversions)
811 811 cgstream = changegroup.makestream(pushop.repo, pushop.outgoing, version,
812 812 'push')
813 813 cgpart = bundler.newpart('changegroup', data=cgstream)
814 814 if cgversions:
815 815 cgpart.addparam('version', version)
816 816 if 'treemanifest' in pushop.repo.requirements:
817 817 cgpart.addparam('treemanifest', '1')
818 818 def handlereply(op):
819 819 """extract addchangegroup returns from server reply"""
820 820 cgreplies = op.records.getreplies(cgpart.id)
821 821 assert len(cgreplies['changegroup']) == 1
822 822 pushop.cgresult = cgreplies['changegroup'][0]['return']
823 823 return handlereply
824 824
825 825 @b2partsgenerator('phase')
826 826 def _pushb2phases(pushop, bundler):
827 827 """handle phase push through bundle2"""
828 828 if 'phases' in pushop.stepsdone:
829 829 return
830 830 b2caps = bundle2.bundle2caps(pushop.remote)
831 831 ui = pushop.repo.ui
832 832
833 833 legacyphase = 'phases' in ui.configlist('devel', 'legacy.exchange')
834 834 haspushkey = 'pushkey' in b2caps
835 835 hasphaseheads = 'heads' in b2caps.get('phases', ())
836 836
837 837 if hasphaseheads and not legacyphase:
838 838 return _pushb2phaseheads(pushop, bundler)
839 839 elif haspushkey:
840 840 return _pushb2phasespushkey(pushop, bundler)
841 841
842 842 def _pushb2phaseheads(pushop, bundler):
843 843 """push phase information through a bundle2 - binary part"""
844 844 pushop.stepsdone.add('phases')
845 845 if pushop.outdatedphases:
846 846 updates = [[] for p in phases.allphases]
847 847 updates[0].extend(h.node() for h in pushop.outdatedphases)
848 848 phasedata = phases.binaryencode(updates)
849 849 bundler.newpart('phase-heads', data=phasedata)
850 850
851 851 def _pushb2phasespushkey(pushop, bundler):
852 852 """push phase information through a bundle2 - pushkey part"""
853 853 pushop.stepsdone.add('phases')
854 854 part2node = []
855 855
856 856 def handlefailure(pushop, exc):
857 857 targetid = int(exc.partid)
858 858 for partid, node in part2node:
859 859 if partid == targetid:
860 860 raise error.Abort(_('updating %s to public failed') % node)
861 861
862 862 enc = pushkey.encode
863 863 for newremotehead in pushop.outdatedphases:
864 864 part = bundler.newpart('pushkey')
865 865 part.addparam('namespace', enc('phases'))
866 866 part.addparam('key', enc(newremotehead.hex()))
867 867 part.addparam('old', enc('%d' % phases.draft))
868 868 part.addparam('new', enc('%d' % phases.public))
869 869 part2node.append((part.id, newremotehead))
870 870 pushop.pkfailcb[part.id] = handlefailure
871 871
872 872 def handlereply(op):
873 873 for partid, node in part2node:
874 874 partrep = op.records.getreplies(partid)
875 875 results = partrep['pushkey']
876 876 assert len(results) <= 1
877 877 msg = None
878 878 if not results:
879 879 msg = _('server ignored update of %s to public!\n') % node
880 880 elif not int(results[0]['return']):
881 881 msg = _('updating %s to public failed!\n') % node
882 882 if msg is not None:
883 883 pushop.ui.warn(msg)
884 884 return handlereply
885 885
886 886 @b2partsgenerator('obsmarkers')
887 887 def _pushb2obsmarkers(pushop, bundler):
888 888 if 'obsmarkers' in pushop.stepsdone:
889 889 return
890 890 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
891 891 if obsolete.commonversion(remoteversions) is None:
892 892 return
893 893 pushop.stepsdone.add('obsmarkers')
894 894 if pushop.outobsmarkers:
895 895 markers = sorted(pushop.outobsmarkers)
896 896 bundle2.buildobsmarkerspart(bundler, markers)
897 897
898 898 @b2partsgenerator('bookmarks')
899 899 def _pushb2bookmarks(pushop, bundler):
900 900 """handle bookmark push through bundle2"""
901 901 if 'bookmarks' in pushop.stepsdone:
902 902 return
903 903 b2caps = bundle2.bundle2caps(pushop.remote)
904 904
905 905 legacy = pushop.repo.ui.configlist('devel', 'legacy.exchange')
906 906 legacybooks = 'bookmarks' in legacy
907 907
908 908 if not legacybooks and 'bookmarks' in b2caps:
909 909 return _pushb2bookmarkspart(pushop, bundler)
910 910 elif 'pushkey' in b2caps:
911 911 return _pushb2bookmarkspushkey(pushop, bundler)
912 912
913 913 def _bmaction(old, new):
914 914 """small utility for bookmark pushing"""
915 915 if not old:
916 916 return 'export'
917 917 elif not new:
918 918 return 'delete'
919 919 return 'update'
920 920
921 921 def _pushb2bookmarkspart(pushop, bundler):
922 922 pushop.stepsdone.add('bookmarks')
923 923 if not pushop.outbookmarks:
924 924 return
925 925
926 926 allactions = []
927 927 data = []
928 928 for book, old, new in pushop.outbookmarks:
929 929 new = bin(new)
930 930 data.append((book, new))
931 931 allactions.append((book, _bmaction(old, new)))
932 932 checkdata = bookmod.binaryencode(data)
933 933 bundler.newpart('bookmarks', data=checkdata)
934 934
935 935 def handlereply(op):
936 936 ui = pushop.ui
937 937 # if success
938 938 for book, action in allactions:
939 939 ui.status(bookmsgmap[action][0] % book)
940 940
941 941 return handlereply
942 942
943 943 def _pushb2bookmarkspushkey(pushop, bundler):
944 944 pushop.stepsdone.add('bookmarks')
945 945 part2book = []
946 946 enc = pushkey.encode
947 947
948 948 def handlefailure(pushop, exc):
949 949 targetid = int(exc.partid)
950 950 for partid, book, action in part2book:
951 951 if partid == targetid:
952 952 raise error.Abort(bookmsgmap[action][1].rstrip() % book)
953 953 # we should not be called for a part we did not generate
954 954 assert False
955 955
956 956 for book, old, new in pushop.outbookmarks:
957 957 part = bundler.newpart('pushkey')
958 958 part.addparam('namespace', enc('bookmarks'))
959 959 part.addparam('key', enc(book))
960 960 part.addparam('old', enc(old))
961 961 part.addparam('new', enc(new))
962 962 action = 'update'
963 963 if not old:
964 964 action = 'export'
965 965 elif not new:
966 966 action = 'delete'
967 967 part2book.append((part.id, book, action))
968 968 pushop.pkfailcb[part.id] = handlefailure
969 969
970 970 def handlereply(op):
971 971 ui = pushop.ui
972 972 for partid, book, action in part2book:
973 973 partrep = op.records.getreplies(partid)
974 974 results = partrep['pushkey']
975 975 assert len(results) <= 1
976 976 if not results:
977 977 pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
978 978 else:
979 979 ret = int(results[0]['return'])
980 980 if ret:
981 981 ui.status(bookmsgmap[action][0] % book)
982 982 else:
983 983 ui.warn(bookmsgmap[action][1] % book)
984 984 if pushop.bkresult is not None:
985 985 pushop.bkresult = 1
986 986 return handlereply
987 987
988 988 @b2partsgenerator('pushvars', idx=0)
989 989 def _getbundlesendvars(pushop, bundler):
990 990 '''send shellvars via bundle2'''
991 991 pushvars = pushop.pushvars
992 992 if pushvars:
993 993 shellvars = {}
994 994 for raw in pushvars:
995 995 if '=' not in raw:
996 996 msg = ("unable to parse variable '%s', should follow "
997 997 "'KEY=VALUE' or 'KEY=' format")
998 998 raise error.Abort(msg % raw)
999 999 k, v = raw.split('=', 1)
1000 1000 shellvars[k] = v
1001 1001
1002 1002 part = bundler.newpart('pushvars')
1003 1003
1004 1004 for key, value in shellvars.iteritems():
1005 1005 part.addparam(key, value, mandatory=False)
1006 1006
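# Client-side usage matching the KEY=VALUE format enforced above, assuming
# the server enables pushvars; the --pushvars flag feeds pushop.pushvars,
# and server hooks then see the pairs as HG_USERVAR_* environment variables:
#
#   hg push --pushvars "DEBUG=1" --pushvars "BYPASS_REVIEW="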
1007 1007 def _pushbundle2(pushop):
1008 1008 """push data to the remote using bundle2
1009 1009
1010 1010 The only currently supported type of data is changegroup but this will
1011 1011 evolve in the future."""
1012 1012 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
1013 1013 pushback = (pushop.trmanager
1014 1014 and pushop.ui.configbool('experimental', 'bundle2.pushback'))
1015 1015
1016 1016 # create reply capability
1017 1017 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
1018 1018 allowpushback=pushback))
1019 1019 bundler.newpart('replycaps', data=capsblob)
1020 1020 replyhandlers = []
1021 1021 for partgenname in b2partsgenorder:
1022 1022 partgen = b2partsgenmapping[partgenname]
1023 1023 ret = partgen(pushop, bundler)
1024 1024 if callable(ret):
1025 1025 replyhandlers.append(ret)
1026 1026 # do not push if nothing to push
1027 1027 if bundler.nbparts <= 1:
1028 1028 return
1029 1029 stream = util.chunkbuffer(bundler.getchunks())
1030 1030 try:
1031 1031 try:
1032 1032 reply = pushop.remote.unbundle(
1033 1033 stream, ['force'], pushop.remote.url())
1034 1034 except error.BundleValueError as exc:
1035 1035 raise error.Abort(_('missing support for %s') % exc)
1036 1036 try:
1037 1037 trgetter = None
1038 1038 if pushback:
1039 1039 trgetter = pushop.trmanager.transaction
1040 1040 op = bundle2.processbundle(pushop.repo, reply, trgetter)
1041 1041 except error.BundleValueError as exc:
1042 1042 raise error.Abort(_('missing support for %s') % exc)
1043 1043 except bundle2.AbortFromPart as exc:
1044 1044 pushop.ui.status(_('remote: %s\n') % exc)
1045 1045 if exc.hint is not None:
1046 1046 pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
1047 1047 raise error.Abort(_('push failed on remote'))
1048 1048 except error.PushkeyFailed as exc:
1049 1049 partid = int(exc.partid)
1050 1050 if partid not in pushop.pkfailcb:
1051 1051 raise
1052 1052 pushop.pkfailcb[partid](pushop, exc)
1053 1053 for rephand in replyhandlers:
1054 1054 rephand(op)
1055 1055
1056 1056 def _pushchangeset(pushop):
1057 1057 """Make the actual push of changeset bundle to remote repo"""
1058 1058 if 'changesets' in pushop.stepsdone:
1059 1059 return
1060 1060 pushop.stepsdone.add('changesets')
1061 1061 if not _pushcheckoutgoing(pushop):
1062 1062 return
1063 1063
1064 1064 # Should have verified this in push().
1065 1065 assert pushop.remote.capable('unbundle')
1066 1066
1067 1067 pushop.repo.prepushoutgoinghooks(pushop)
1068 1068 outgoing = pushop.outgoing
1069 1069 # TODO: get bundlecaps from remote
1070 1070 bundlecaps = None
1071 1071 # create a changegroup from local
1072 1072 if pushop.revs is None and not (outgoing.excluded
1073 1073 or pushop.repo.changelog.filteredrevs):
1074 1074 # push everything,
1075 1075 # use the fast path, no race possible on push
1076 1076 cg = changegroup.makechangegroup(pushop.repo, outgoing, '01', 'push',
1077 1077 fastpath=True, bundlecaps=bundlecaps)
1078 1078 else:
1079 1079 cg = changegroup.makechangegroup(pushop.repo, outgoing, '01',
1080 1080 'push', bundlecaps=bundlecaps)
1081 1081
1082 1082 # apply changegroup to remote
1083 1083 # local repo finds heads on server, finds out what
1084 1084 # revs it must push. once revs transferred, if server
1085 1085 # finds it has different heads (someone else won
1086 1086 # commit/push race), server aborts.
1087 1087 if pushop.force:
1088 1088 remoteheads = ['force']
1089 1089 else:
1090 1090 remoteheads = pushop.remoteheads
1091 1091 # ssh: return remote's addchangegroup()
1092 1092 # http: return remote's addchangegroup() or 0 for error
1093 1093 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
1094 1094 pushop.repo.url())
1095 1095
1096 1096 def _pushsyncphase(pushop):
1097 1097 """synchronise phase information locally and remotely"""
1098 1098 cheads = pushop.commonheads
1099 1099 # even when we don't push, exchanging phase data is useful
1100 1100 remotephases = pushop.remote.listkeys('phases')
1101 1101 if (pushop.ui.configbool('ui', '_usedassubrepo')
1102 1102 and remotephases # server supports phases
1103 1103 and pushop.cgresult is None # nothing was pushed
1104 1104 and remotephases.get('publishing', False)):
1105 1105 # When:
1106 1106 # - this is a subrepo push
1107 1107 # - and the remote supports phases
1108 1108 # - and no changeset was pushed
1109 1109 # - and the remote is publishing
1110 1110 # We may be in the issue 3871 case!
1111 1111 # We drop the phase synchronisation usually done as a courtesy;
1112 1112 # it could wrongly publish changesets that are possibly still
1113 1113 # draft on the remote.
1114 1114 remotephases = {'publishing': 'True'}
1115 1115 if not remotephases: # old server or public-only reply from non-publishing
1116 1116 _localphasemove(pushop, cheads)
1117 1117 # don't push any phase data as there is nothing to push
1118 1118 else:
1119 1119 ana = phases.analyzeremotephases(pushop.repo, cheads,
1120 1120 remotephases)
1121 1121 pheads, droots = ana
1122 1122 ### Apply remote phase on local
1123 1123 if remotephases.get('publishing', False):
1124 1124 _localphasemove(pushop, cheads)
1125 1125 else: # publish = False
1126 1126 _localphasemove(pushop, pheads)
1127 1127 _localphasemove(pushop, cheads, phases.draft)
1128 1128 ### Apply local phase on remote
1129 1129
1130 1130 if pushop.cgresult:
1131 1131 if 'phases' in pushop.stepsdone:
1132 1132 # phases already pushed through bundle2
1133 1133 return
1134 1134 outdated = pushop.outdatedphases
1135 1135 else:
1136 1136 outdated = pushop.fallbackoutdatedphases
1137 1137
1138 1138 pushop.stepsdone.add('phases')
1139 1139
1140 1140 # filter heads already turned public by the push
1141 1141 outdated = [c for c in outdated if c.node() not in pheads]
1142 1142 # fallback to independent pushkey command
1143 1143 for newremotehead in outdated:
1144 1144 r = pushop.remote.pushkey('phases',
1145 1145 newremotehead.hex(),
1146 1146 str(phases.draft),
1147 1147 str(phases.public))
1148 1148 if not r:
1149 1149 pushop.ui.warn(_('updating %s to public failed!\n')
1150 1150 % newremotehead)
1151 1151
1152 1152 def _localphasemove(pushop, nodes, phase=phases.public):
1153 1153 """move <nodes> to <phase> in the local source repo"""
1154 1154 if pushop.trmanager:
1155 1155 phases.advanceboundary(pushop.repo,
1156 1156 pushop.trmanager.transaction(),
1157 1157 phase,
1158 1158 nodes)
1159 1159 else:
1160 1160 # repo is not locked, do not change any phases!
1161 1161 # Informs the user that phases should have been moved when
1162 1162 # applicable.
1163 1163 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
1164 1164 phasestr = phases.phasenames[phase]
1165 1165 if actualmoves:
1166 1166 pushop.ui.status(_('cannot lock source repo, skipping '
1167 1167 'local %s phase update\n') % phasestr)
1168 1168
1169 1169 def _pushobsolete(pushop):
1170 1170 """utility function to push obsolete markers to a remote"""
1171 1171 if 'obsmarkers' in pushop.stepsdone:
1172 1172 return
1173 1173 repo = pushop.repo
1174 1174 remote = pushop.remote
1175 1175 pushop.stepsdone.add('obsmarkers')
1176 1176 if pushop.outobsmarkers:
1177 1177 pushop.ui.debug('try to push obsolete markers to remote\n')
1178 1178 rslts = []
1179 1179 remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
1180 1180 for key in sorted(remotedata, reverse=True):
1181 1181 # reverse sort to ensure we end with dump0
1182 1182 data = remotedata[key]
1183 1183 rslts.append(remote.pushkey('obsolete', key, '', data))
1184 1184 if [r for r in rslts if not r]:
1185 1185 msg = _('failed to push some obsolete markers!\n')
1186 1186 repo.ui.warn(msg)
1187 1187
1188 1188 def _pushbookmark(pushop):
1189 1189 """Update bookmark position on remote"""
1190 1190 if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
1191 1191 return
1192 1192 pushop.stepsdone.add('bookmarks')
1193 1193 ui = pushop.ui
1194 1194 remote = pushop.remote
1195 1195
1196 1196 for b, old, new in pushop.outbookmarks:
1197 1197 action = 'update'
1198 1198 if not old:
1199 1199 action = 'export'
1200 1200 elif not new:
1201 1201 action = 'delete'
1202 1202 if remote.pushkey('bookmarks', b, old, new):
1203 1203 ui.status(bookmsgmap[action][0] % b)
1204 1204 else:
1205 1205 ui.warn(bookmsgmap[action][1] % b)
1206 1206 # discovery can have set the value from an invalid entry
1207 1207 if pushop.bkresult is not None:
1208 1208 pushop.bkresult = 1
1209 1209
1210 1210 class pulloperation(object):
1211 1211 """A object that represent a single pull operation
1212 1212
1213 1213 It purpose is to carry pull related state and very common operation.
1214 1214
1215 1215 A new should be created at the beginning of each pull and discarded
1216 1216 afterward.
1217 1217 """
1218 1218
1219 1219 def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
1220 1220 remotebookmarks=None, streamclonerequested=None):
1221 1221 # repo we pull into
1222 1222 self.repo = repo
1223 1223 # repo we pull from
1224 1224 self.remote = remote
1225 1225 # revisions we try to pull (None is "all")
1226 1226 self.heads = heads
1227 1227 # bookmarks pulled explicitly
1228 1228 self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
1229 1229 for bookmark in bookmarks]
1230 1230 # do we force pull?
1231 1231 self.force = force
1232 1232 # whether a streaming clone was requested
1233 1233 self.streamclonerequested = streamclonerequested
1234 1234 # transaction manager
1235 1235 self.trmanager = None
1236 1236 # set of common changesets between local and remote before pull
1237 1237 self.common = None
1238 1238 # set of pulled heads
1239 1239 self.rheads = None
1240 1240 # list of missing changesets to fetch remotely
1241 1241 self.fetch = None
1242 1242 # remote bookmarks data
1243 1243 self.remotebookmarks = remotebookmarks
1244 1244 # result of changegroup pulling (used as return code by pull)
1245 1245 self.cgresult = None
1246 1246 # list of steps already done
1247 1247 self.stepsdone = set()
1248 1248 # Whether we attempted a clone from pre-generated bundles.
1249 1249 self.clonebundleattempted = False
1250 1250
1251 1251 @util.propertycache
1252 1252 def pulledsubset(self):
1253 1253 """heads of the set of changeset target by the pull"""
1254 1254 # compute target subset
1255 1255 if self.heads is None:
1256 1256 # We pulled every thing possible
1257 1257 # sync on everything common
1258 1258 c = set(self.common)
1259 1259 ret = list(self.common)
1260 1260 for n in self.rheads:
1261 1261 if n not in c:
1262 1262 ret.append(n)
1263 1263 return ret
1264 1264 else:
1265 1265 # We pulled a specific subset
1266 1266 # sync on this subset
1267 1267 return self.heads
1268 1268
1269 1269 @util.propertycache
1270 1270 def canusebundle2(self):
1271 1271 return not _forcebundle1(self)
1272 1272
1273 1273 @util.propertycache
1274 1274 def remotebundle2caps(self):
1275 1275 return bundle2.bundle2caps(self.remote)
1276 1276
1277 1277 def gettransaction(self):
1278 1278 # deprecated; talk to trmanager directly
1279 1279 return self.trmanager.transaction()
1280 1280
1281 1281 class transactionmanager(util.transactional):
1282 1282 """An object to manage the life cycle of a transaction
1283 1283
1284 1284 It creates the transaction on demand and calls the appropriate hooks when
1285 1285 closing the transaction."""
1286 1286 def __init__(self, repo, source, url):
1287 1287 self.repo = repo
1288 1288 self.source = source
1289 1289 self.url = url
1290 1290 self._tr = None
1291 1291
1292 1292 def transaction(self):
1293 1293 """Return an open transaction object, constructing if necessary"""
1294 1294 if not self._tr:
1295 1295 trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
1296 1296 self._tr = self.repo.transaction(trname)
1297 1297 self._tr.hookargs['source'] = self.source
1298 1298 self._tr.hookargs['url'] = self.url
1299 1299 return self._tr
1300 1300
1301 1301 def close(self):
1302 1302 """close transaction if created"""
1303 1303 if self._tr is not None:
1304 1304 self._tr.close()
1305 1305
1306 1306 def release(self):
1307 1307 """release transaction if created"""
1308 1308 if self._tr is not None:
1309 1309 self._tr.release()
1310 1310
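Because ``transactionmanager`` subclasses ``util.transactional``, it can drive a
``with`` block directly: a transaction is only opened if some step actually asks
for one, a clean exit calls ``close()``, and ``release()`` always runs. A minimal
sketch of the intended usage, assuming hypothetical ``repo`` and ``remote``
objects (mirroring what ``pull()`` below does)::

    trmanager = transactionmanager(repo, 'pull', remote.url())
    with trmanager:
        # steps request the transaction lazily; if none do, nothing is opened
        tr = trmanager.transaction()
        # ... apply pulled data under ``tr`` ...
    # on normal exit close() commits; release() then cleans up (a no-op if
    # the transaction was never created)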
1311 1311 def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
1312 1312 streamclonerequested=None):
1313 1313 """Fetch repository data from a remote.
1314 1314
1315 1315 This is the main function used to retrieve data from a remote repository.
1316 1316
1317 1317 ``repo`` is the local repository to clone into.
1318 1318 ``remote`` is a peer instance.
1319 1319 ``heads`` is an iterable of revisions we want to pull. ``None`` (the
1320 1320 default) means to pull everything from the remote.
1321 1321 ``bookmarks`` is an iterable of bookmarks requested to be pulled. By
1322 1322 default, all remote bookmarks are pulled.
1323 1323 ``opargs`` are additional keyword arguments to pass to ``pulloperation``
1324 1324 initialization.
1325 1325 ``streamclonerequested`` is a boolean indicating whether a "streaming
1326 1326 clone" is requested. A "streaming clone" is essentially a raw file copy
1327 1327 of revlogs from the server. This only works when the local repository is
1328 1328 empty. The default value of ``None`` means to respect the server
1329 1329 configuration for preferring stream clones.
1330 1330
1331 1331 Returns the ``pulloperation`` created for this pull.
1332 1332 """
1333 1333 if opargs is None:
1334 1334 opargs = {}
1335 1335 pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
1336 1336 streamclonerequested=streamclonerequested,
1337 1337 **pycompat.strkwargs(opargs))
1338 1338
1339 1339 peerlocal = pullop.remote.local()
1340 1340 if peerlocal:
1341 1341 missing = set(peerlocal.requirements) - pullop.repo.supported
1342 1342 if missing:
1343 1343 msg = _("required features are not"
1344 1344 " supported in the destination:"
1345 1345 " %s") % (', '.join(sorted(missing)))
1346 1346 raise error.Abort(msg)
1347 1347
1348 1348 pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
1349 1349 with repo.wlock(), repo.lock(), pullop.trmanager:
1350 1350 # This should ideally be in _pullbundle2(). However, it needs to run
1351 1351 # before discovery to avoid extra work.
1352 1352 _maybeapplyclonebundle(pullop)
1353 1353 streamclone.maybeperformlegacystreamclone(pullop)
1354 1354 _pulldiscovery(pullop)
1355 1355 if pullop.canusebundle2:
1356 1356 _pullbundle2(pullop)
1357 1357 _pullchangeset(pullop)
1358 1358 _pullphase(pullop)
1359 1359 _pullbookmarks(pullop)
1360 1360 _pullobsolete(pullop)
1361 1361
1362 1362 # storing remotenames
1363 1363 if repo.ui.configbool('experimental', 'remotenames'):
1364 1364 logexchange.pullremotenames(repo, remote)
1365 1365
1366 1366 return pullop
1367 1367
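As a rough sketch of how a caller drives this function, assuming ``repo`` is an
existing local repository and using an illustrative URL::

    from mercurial import hg, exchange

    peer = hg.peer(repo, {}, 'https://hg.example.com/project')
    pullop = exchange.pull(repo, peer, heads=None)  # None pulls everything
    if pullop.cgresult == 0:
        repo.ui.status('no changes found\n')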
1368 1368 # list of steps to perform discovery before pull
1369 1369 pulldiscoveryorder = []
1370 1370
1371 1371 # Mapping between step name and function
1372 1372 #
1373 1373 # This exists to help extensions wrap steps if necessary
1374 1374 pulldiscoverymapping = {}
1375 1375
1376 1376 def pulldiscovery(stepname):
1377 1377 """decorator for function performing discovery before pull
1378 1378
1379 1379 The function is added to the step -> function mapping and appended to the
1380 1380 list of steps. Beware that decorated functions will be added in order (this
1381 1381 may matter).
1382 1382
1383 1383 You can only use this decorator for a new step; if you want to wrap a step
1384 1384 from an extension, change the pulldiscoverymapping dictionary directly."""
1385 1385 def dec(func):
1386 1386 assert stepname not in pulldiscoverymapping
1387 1387 pulldiscoverymapping[stepname] = func
1388 1388 pulldiscoveryorder.append(stepname)
1389 1389 return func
1390 1390 return dec
1391 1391
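For illustration, a hypothetical extension-defined step (the name ``example`` is
made up) would be registered like this::

    @pulldiscovery('example')
    def _pulldiscoveryexample(pullop):
        """toy step: record that discovery ran"""
        pullop.repo.ui.debug('example discovery step\n')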
1392 1392 def _pulldiscovery(pullop):
1393 1393 """Run all discovery steps"""
1394 1394 for stepname in pulldiscoveryorder:
1395 1395 step = pulldiscoverymapping[stepname]
1396 1396 step(pullop)
1397 1397
1398 1398 @pulldiscovery('b1:bookmarks')
1399 1399 def _pullbookmarkbundle1(pullop):
1400 1400 """fetch bookmark data in bundle1 case
1401 1401
1402 1402 If not using bundle2, we have to fetch bookmarks before changeset
1403 1403 discovery to reduce the chance and impact of race conditions."""
1404 1404 if pullop.remotebookmarks is not None:
1405 1405 return
1406 1406 if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
1407 1407 # all known bundle2 servers now support listkeys, but let's be nice with
1408 1408 # new implementations.
1409 1409 return
1410 1410 books = pullop.remote.listkeys('bookmarks')
1411 1411 pullop.remotebookmarks = bookmod.unhexlifybookmarks(books)
1412 1412
1413 1413
1414 1414 @pulldiscovery('changegroup')
1415 1415 def _pulldiscoverychangegroup(pullop):
1416 1416 """discovery phase for the pull
1417 1417
1418 1418 Currently handles changeset discovery only; will change to handle all
1419 1419 discovery at some point."""
1420 1420 tmp = discovery.findcommonincoming(pullop.repo,
1421 1421 pullop.remote,
1422 1422 heads=pullop.heads,
1423 1423 force=pullop.force)
1424 1424 common, fetch, rheads = tmp
1425 1425 nm = pullop.repo.unfiltered().changelog.nodemap
1426 1426 if fetch and rheads:
1427 1427 # If a remote head is filtered locally, put it back in common.
1428 1428 #
1429 1429 # This is a hackish solution to catch most "common but locally
1430 1430 # hidden" situations. We do not perform discovery on the unfiltered
1431 1431 # repository because it ends up doing a pathological number of round
1432 1432 # trips for a huge number of changesets we do not care about.
1433 1433 #
1434 1434 # If a set of such "common but filtered" changesets exists on the server
1435 1435 # but does not include a remote head, we will not be able to detect it.
1436 1436 scommon = set(common)
1437 1437 for n in rheads:
1438 1438 if n in nm:
1439 1439 if n not in scommon:
1440 1440 common.append(n)
1441 1441 if set(rheads).issubset(set(common)):
1442 1442 fetch = []
1443 1443 pullop.common = common
1444 1444 pullop.fetch = fetch
1445 1445 pullop.rheads = rheads
1446 1446
1447 1447 def _pullbundle2(pullop):
1448 1448 """pull data using bundle2
1449 1449
1450 1450 For now, the supported payloads are changegroups and stream clone data."""
1451 1451 kwargs = {'bundlecaps': caps20to10(pullop.repo)}
1452 1452
1453 1453 # make ui easier to access
1454 1454 ui = pullop.repo.ui
1455 1455
1456 1456 # Check whether this pull can be served as a bundle2 stream clone;
1457 1457 # both ends must opt in (see streamclone.canperformstreamclone).
1458 streaming = False
1458 streaming = streamclone.canperformstreamclone(pullop, bundle2=True)[0]
1459 1459
1460 1460 # declare pull parameters
1461 1461 kwargs['common'] = pullop.common
1462 1462 kwargs['heads'] = pullop.heads or pullop.rheads
1463 1463
1464 if True:
1464 if streaming:
1465 kwargs['cg'] = False
1466 kwargs['stream'] = True
1467 pullop.stepsdone.add('changegroup')
1468
1469 else:
1465 1470 # pulling changegroup
1466 1471 pullop.stepsdone.add('changegroup')
1467 1472
1468 1473 kwargs['cg'] = pullop.fetch
1469 1474
1470 1475 legacyphase = 'phases' in ui.configlist('devel', 'legacy.exchange')
1471 1476 hasbinaryphase = 'heads' in pullop.remotebundle2caps.get('phases', ())
1472 1477 if (not legacyphase and hasbinaryphase):
1473 1478 kwargs['phases'] = True
1474 1479 pullop.stepsdone.add('phases')
1475 1480
1476 1481 if 'listkeys' in pullop.remotebundle2caps:
1477 1482 if 'phases' not in pullop.stepsdone:
1478 1483 kwargs['listkeys'] = ['phases']
1479 1484
1480 1485 bookmarksrequested = False
1481 1486 legacybookmark = 'bookmarks' in ui.configlist('devel', 'legacy.exchange')
1482 1487 hasbinarybook = 'bookmarks' in pullop.remotebundle2caps
1483 1488
1484 1489 if pullop.remotebookmarks is not None:
1485 1490 pullop.stepsdone.add('request-bookmarks')
1486 1491
1487 1492 if ('request-bookmarks' not in pullop.stepsdone
1488 1493 and pullop.remotebookmarks is None
1489 1494 and not legacybookmark and hasbinarybook):
1490 1495 kwargs['bookmarks'] = True
1491 1496 bookmarksrequested = True
1492 1497
1493 1498 if 'listkeys' in pullop.remotebundle2caps:
1494 1499 if 'request-bookmarks' not in pullop.stepsdone:
1495 1500 # make sure to always include bookmark data when migrating
1496 1501 # `hg incoming --bundle` to using this function.
1497 1502 pullop.stepsdone.add('request-bookmarks')
1498 1503 kwargs.setdefault('listkeys', []).append('bookmarks')
1499 1504
1500 1505 # If this is a full pull / clone and the server supports the clone bundles
1501 1506 # feature, tell the server whether we attempted a clone bundle. The
1502 1507 # presence of this flag indicates the client supports clone bundles. This
1503 1508 # will enable the server to treat clients that support clone bundles
1504 1509 # differently from those that don't.
1505 1510 if (pullop.remote.capable('clonebundles')
1506 1511 and pullop.heads is None and list(pullop.common) == [nullid]):
1507 1512 kwargs['cbattempted'] = pullop.clonebundleattempted
1508 1513
1509 1514 if streaming:
1510 1515 pullop.repo.ui.status(_('streaming all changes\n'))
1511 1516 elif not pullop.fetch:
1512 1517 pullop.repo.ui.status(_("no changes found\n"))
1513 1518 pullop.cgresult = 0
1514 1519 else:
1515 1520 if pullop.heads is None and list(pullop.common) == [nullid]:
1516 1521 pullop.repo.ui.status(_("requesting all changes\n"))
1517 1522 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1518 1523 remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
1519 1524 if obsolete.commonversion(remoteversions) is not None:
1520 1525 kwargs['obsmarkers'] = True
1521 1526 pullop.stepsdone.add('obsmarkers')
1522 1527 _pullbundle2extraprepare(pullop, kwargs)
1523 1528 bundle = pullop.remote.getbundle('pull', **pycompat.strkwargs(kwargs))
1524 1529 try:
1525 1530 op = bundle2.bundleoperation(pullop.repo, pullop.gettransaction)
1526 1531 op.modes['bookmarks'] = 'records'
1527 1532 bundle2.processbundle(pullop.repo, bundle, op=op)
1528 1533 except bundle2.AbortFromPart as exc:
1529 1534 pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
1530 1535 raise error.Abort(_('pull failed on remote'), hint=exc.hint)
1531 1536 except error.BundleValueError as exc:
1532 1537 raise error.Abort(_('missing support for %s') % exc)
1533 1538
1534 1539 if pullop.fetch:
1535 1540 pullop.cgresult = bundle2.combinechangegroupresults(op)
1536 1541
1537 1542 # processing phases change
1538 1543 for namespace, value in op.records['listkeys']:
1539 1544 if namespace == 'phases':
1540 1545 _pullapplyphases(pullop, value)
1541 1546
1542 1547 # processing bookmark update
1543 1548 if bookmarksrequested:
1544 1549 books = {}
1545 1550 for record in op.records['bookmarks']:
1546 1551 books[record['bookmark']] = record["node"]
1547 1552 pullop.remotebookmarks = books
1548 1553 else:
1549 1554 for namespace, value in op.records['listkeys']:
1550 1555 if namespace == 'bookmarks':
1551 1556 pullop.remotebookmarks = bookmod.unhexlifybookmarks(value)
1552 1557
1553 1558 # bookmark data were either already there or pulled in the bundle
1554 1559 if pullop.remotebookmarks is not None:
1555 1560 _pullbookmarks(pullop)
1556 1561
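Putting the pieces together: when the server grants a bundle2 stream clone, the
``getbundle`` call above ends up carrying roughly the following arguments
(values shown for an empty local repository; a sketch, not an exhaustive
list)::

    kwargs = {
        'bundlecaps': caps20to10(pullop.repo),  # advertises HG20 + bundle2 caps
        'common': [nullid],                     # empty local repo
        'heads': pullop.rheads,                 # everything the server has
        'cg': False,                            # no changegroup part wanted
        'stream': True,                         # request a 'stream' part instead
    }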
1557 1562 def _pullbundle2extraprepare(pullop, kwargs):
1558 1563 """hook function so that extensions can extend the getbundle call"""
1559 1564
1560 1565 def _pullchangeset(pullop):
1561 1566 """pull changeset from unbundle into the local repo"""
1562 1567 # We delay opening the transaction as long as possible so we
1563 1568 # don't open a transaction for nothing and don't break future useful
1564 1569 # rollback calls
1565 1570 if 'changegroup' in pullop.stepsdone:
1566 1571 return
1567 1572 pullop.stepsdone.add('changegroup')
1568 1573 if not pullop.fetch:
1569 1574 pullop.repo.ui.status(_("no changes found\n"))
1570 1575 pullop.cgresult = 0
1571 1576 return
1572 1577 tr = pullop.gettransaction()
1573 1578 if pullop.heads is None and list(pullop.common) == [nullid]:
1574 1579 pullop.repo.ui.status(_("requesting all changes\n"))
1575 1580 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
1576 1581 # issue1320, avoid a race if remote changed after discovery
1577 1582 pullop.heads = pullop.rheads
1578 1583
1579 1584 if pullop.remote.capable('getbundle'):
1580 1585 # TODO: get bundlecaps from remote
1581 1586 cg = pullop.remote.getbundle('pull', common=pullop.common,
1582 1587 heads=pullop.heads or pullop.rheads)
1583 1588 elif pullop.heads is None:
1584 1589 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
1585 1590 elif not pullop.remote.capable('changegroupsubset'):
1586 1591 raise error.Abort(_("partial pull cannot be done because "
1587 1592 "other repository doesn't support "
1588 1593 "changegroupsubset."))
1589 1594 else:
1590 1595 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
1591 1596 bundleop = bundle2.applybundle(pullop.repo, cg, tr, 'pull',
1592 1597 pullop.remote.url())
1593 1598 pullop.cgresult = bundle2.combinechangegroupresults(bundleop)
1594 1599
1595 1600 def _pullphase(pullop):
1596 1601 # Get remote phases data from remote
1597 1602 if 'phases' in pullop.stepsdone:
1598 1603 return
1599 1604 remotephases = pullop.remote.listkeys('phases')
1600 1605 _pullapplyphases(pullop, remotephases)
1601 1606
1602 1607 def _pullapplyphases(pullop, remotephases):
1603 1608 """apply phase movement from observed remote state"""
1604 1609 if 'phases' in pullop.stepsdone:
1605 1610 return
1606 1611 pullop.stepsdone.add('phases')
1607 1612 publishing = bool(remotephases.get('publishing', False))
1608 1613 if remotephases and not publishing:
1609 1614 # remote is new and non-publishing
1610 1615 pheads, _dr = phases.analyzeremotephases(pullop.repo,
1611 1616 pullop.pulledsubset,
1612 1617 remotephases)
1613 1618 dheads = pullop.pulledsubset
1614 1619 else:
1615 1620 # Remote is old or publishing; all common changesets
1616 1621 # should be seen as public
1617 1622 pheads = pullop.pulledsubset
1618 1623 dheads = []
1619 1624 unfi = pullop.repo.unfiltered()
1620 1625 phase = unfi._phasecache.phase
1621 1626 rev = unfi.changelog.nodemap.get
1622 1627 public = phases.public
1623 1628 draft = phases.draft
1624 1629
1625 1630 # exclude changesets already public locally and update the others
1626 1631 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
1627 1632 if pheads:
1628 1633 tr = pullop.gettransaction()
1629 1634 phases.advanceboundary(pullop.repo, tr, public, pheads)
1630 1635
1631 1636 # exclude changesets already draft locally and update the others
1632 1637 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
1633 1638 if dheads:
1634 1639 tr = pullop.gettransaction()
1635 1640 phases.advanceboundary(pullop.repo, tr, draft, dheads)
1636 1641
1637 1642 def _pullbookmarks(pullop):
1638 1643 """process the remote bookmark information to update the local one"""
1639 1644 if 'bookmarks' in pullop.stepsdone:
1640 1645 return
1641 1646 pullop.stepsdone.add('bookmarks')
1642 1647 repo = pullop.repo
1643 1648 remotebookmarks = pullop.remotebookmarks
1644 1649 bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
1645 1650 pullop.remote.url(),
1646 1651 pullop.gettransaction,
1647 1652 explicit=pullop.explicitbookmarks)
1648 1653
1649 1654 def _pullobsolete(pullop):
1650 1655 """utility function to pull obsolete markers from a remote
1651 1656
1652 1657 The `gettransaction` argument is a function that returns the pull
1653 1658 transaction, creating one if necessary. We return the transaction to inform
1654 1659 the calling code that a new transaction has been created (when applicable).
1655 1660
1656 1661 Exists mostly to allow overriding for experimentation purposes"""
1657 1662 if 'obsmarkers' in pullop.stepsdone:
1658 1663 return
1659 1664 pullop.stepsdone.add('obsmarkers')
1660 1665 tr = None
1661 1666 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1662 1667 pullop.repo.ui.debug('fetching remote obsolete markers\n')
1663 1668 remoteobs = pullop.remote.listkeys('obsolete')
1664 1669 if 'dump0' in remoteobs:
1665 1670 tr = pullop.gettransaction()
1666 1671 markers = []
1667 1672 for key in sorted(remoteobs, reverse=True):
1668 1673 if key.startswith('dump'):
1669 1674 data = util.b85decode(remoteobs[key])
1670 1675 version, newmarks = obsolete._readmarkers(data)
1671 1676 markers += newmarks
1672 1677 if markers:
1673 1678 pullop.repo.obsstore.add(tr, markers)
1674 1679 pullop.repo.invalidatevolatilesets()
1675 1680 return tr
1676 1681
1677 1682 def caps20to10(repo):
1678 1683 """return a set with appropriate options to use bundle20 during getbundle"""
1679 1684 caps = {'HG20'}
1680 1685 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
1681 1686 caps.add('bundle2=' + urlreq.quote(capsblob))
1682 1687 return caps
1683 1688
1684 1689 # List of names of steps to perform for a bundle2 for getbundle, order matters.
1685 1690 getbundle2partsorder = []
1686 1691
1687 1692 # Mapping between step name and function
1688 1693 #
1689 1694 # This exists to help extensions wrap steps if necessary
1690 1695 getbundle2partsmapping = {}
1691 1696
1692 1697 def getbundle2partsgenerator(stepname, idx=None):
1693 1698 """decorator for function generating bundle2 part for getbundle
1694 1699
1695 1700 The function is added to the step -> function mapping and appended to the
1696 1701 list of steps. Beware that decorated functions will be added in order
1697 1702 (this may matter).
1698 1703
1699 1704 You can only use this decorator for new steps; if you want to wrap a step
1700 1705 from an extension, change the getbundle2partsmapping dictionary directly."""
1701 1706 def dec(func):
1702 1707 assert stepname not in getbundle2partsmapping
1703 1708 getbundle2partsmapping[stepname] = func
1704 1709 if idx is None:
1705 1710 getbundle2partsorder.append(stepname)
1706 1711 else:
1707 1712 getbundle2partsorder.insert(idx, stepname)
1708 1713 return func
1709 1714 return dec
1710 1715
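A hypothetical server-side extension could register its own part generator the
same way; passing ``idx`` controls where the part is emitted relative to the
others::

    @getbundle2partsgenerator('example', idx=0)  # emit before all other parts
    def _getbundleexamplepart(bundler, repo, source, bundlecaps=None,
                              b2caps=None, **kwargs):
        """emit an advisory 'example' part (made-up part type)"""
        if 'example' in b2caps:
            bundler.newpart('example', data='hello', mandatory=False)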
1711 1716 def bundle2requested(bundlecaps):
1712 1717 if bundlecaps is not None:
1713 1718 return any(cap.startswith('HG2') for cap in bundlecaps)
1714 1719 return False
1715 1720
1716 1721 def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
1717 1722 **kwargs):
1718 1723 """Return chunks constituting a bundle's raw data.
1719 1724
1720 1725 Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
1721 1726 passed.
1722 1727
1723 1728 Returns an iterator over raw chunks (of varying sizes).
1724 1729 """
1725 1730 kwargs = pycompat.byteskwargs(kwargs)
1726 1731 usebundle2 = bundle2requested(bundlecaps)
1727 1732 # bundle10 case
1728 1733 if not usebundle2:
1729 1734 if bundlecaps and not kwargs.get('cg', True):
1730 1735 raise ValueError(_('request for bundle10 must include changegroup'))
1731 1736
1732 1737 if kwargs:
1733 1738 raise ValueError(_('unsupported getbundle arguments: %s')
1734 1739 % ', '.join(sorted(kwargs.keys())))
1735 1740 outgoing = _computeoutgoing(repo, heads, common)
1736 1741 return changegroup.makestream(repo, outgoing, '01', source,
1737 1742 bundlecaps=bundlecaps)
1738 1743
1739 1744 # bundle20 case
1740 1745 b2caps = {}
1741 1746 for bcaps in bundlecaps:
1742 1747 if bcaps.startswith('bundle2='):
1743 1748 blob = urlreq.unquote(bcaps[len('bundle2='):])
1744 1749 b2caps.update(bundle2.decodecaps(blob))
1745 1750 bundler = bundle2.bundle20(repo.ui, b2caps)
1746 1751
1747 1752 kwargs['heads'] = heads
1748 1753 kwargs['common'] = common
1749 1754
1750 1755 for name in getbundle2partsorder:
1751 1756 func = getbundle2partsmapping[name]
1752 1757 func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
1753 1758 **pycompat.strkwargs(kwargs))
1754 1759
1755 1760 return bundler.getchunks()
1756 1761
1757 1762 @getbundle2partsgenerator('stream')
1758 1763 def _getbundlestream(bundler, repo, source, bundlecaps=None,
1759 1764 b2caps=None, heads=None, common=None, **kwargs):
1760 1765 if not kwargs.get('stream', False):
1761 1766 return
1762 1767 filecount, bytecount, it = streamclone.generatev2(repo)
1763 1768 requirements = ' '.join(repo.requirements)
1764 1769 part = bundler.newpart('stream', data=it)
1765 1770 part.addparam('bytecount', '%d' % bytecount, mandatory=True)
1766 1771 part.addparam('filecount', '%d' % filecount, mandatory=True)
1767 1772 part.addparam('requirements', requirements, mandatory=True)
1768 1773 part.addparam('version', 'v2', mandatory=True)
1769 1774
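``streamclone.generatev2()`` returns the file count, the total byte count, and
an iterator of raw chunks; the counts travel as mandatory part parameters so
the receiver can report progress and so that a client which does not understand
the part aborts instead of producing a broken clone. A rough sketch of the
triple the generator above consumes (assuming an open ``repo``)::

    filecount, bytecount, it = streamclone.generatev2(repo)
    repo.ui.write('%d files, %d bytes expected\n' % (filecount, bytecount))
    raw = ''.join(it)  # raw stream payload (normally streamed, not buffered)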
1770 1775 @getbundle2partsgenerator('changegroup')
1771 1776 def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
1772 1777 b2caps=None, heads=None, common=None, **kwargs):
1773 1778 """add a changegroup part to the requested bundle"""
1774 1779 cgstream = None
1775 1780 if kwargs.get(r'cg', True):
1776 1781 # build changegroup bundle here.
1777 1782 version = '01'
1778 1783 cgversions = b2caps.get('changegroup')
1779 1784 if cgversions: # 3.1 and 3.2 ship with an empty value
1780 1785 cgversions = [v for v in cgversions
1781 1786 if v in changegroup.supportedoutgoingversions(repo)]
1782 1787 if not cgversions:
1783 1788 raise ValueError(_('no common changegroup version'))
1784 1789 version = max(cgversions)
1785 1790 outgoing = _computeoutgoing(repo, heads, common)
1786 1791 if outgoing.missing:
1787 1792 cgstream = changegroup.makestream(repo, outgoing, version, source,
1788 1793 bundlecaps=bundlecaps)
1789 1794
1790 1795 if cgstream:
1791 1796 part = bundler.newpart('changegroup', data=cgstream)
1792 1797 if cgversions:
1793 1798 part.addparam('version', version)
1794 1799 part.addparam('nbchanges', '%d' % len(outgoing.missing),
1795 1800 mandatory=False)
1796 1801 if 'treemanifest' in repo.requirements:
1797 1802 part.addparam('treemanifest', '1')
1798 1803
1799 1804 @getbundle2partsgenerator('bookmarks')
1800 1805 def _getbundlebookmarkpart(bundler, repo, source, bundlecaps=None,
1801 1806 b2caps=None, **kwargs):
1802 1807 """add a bookmark part to the requested bundle"""
1803 1808 if not kwargs.get(r'bookmarks', False):
1804 1809 return
1805 1810 if 'bookmarks' not in b2caps:
1806 1811 raise ValueError(_('no common bookmarks exchange method'))
1807 1812 books = bookmod.listbinbookmarks(repo)
1808 1813 data = bookmod.binaryencode(books)
1809 1814 if data:
1810 1815 bundler.newpart('bookmarks', data=data)
1811 1816
1812 1817 @getbundle2partsgenerator('listkeys')
1813 1818 def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
1814 1819 b2caps=None, **kwargs):
1815 1820 """add parts containing listkeys namespaces to the requested bundle"""
1816 1821 listkeys = kwargs.get(r'listkeys', ())
1817 1822 for namespace in listkeys:
1818 1823 part = bundler.newpart('listkeys')
1819 1824 part.addparam('namespace', namespace)
1820 1825 keys = repo.listkeys(namespace).items()
1821 1826 part.data = pushkey.encodekeys(keys)
1822 1827
1823 1828 @getbundle2partsgenerator('obsmarkers')
1824 1829 def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
1825 1830 b2caps=None, heads=None, **kwargs):
1826 1831 """add an obsolescence markers part to the requested bundle"""
1827 1832 if kwargs.get(r'obsmarkers', False):
1828 1833 if heads is None:
1829 1834 heads = repo.heads()
1830 1835 subset = [c.node() for c in repo.set('::%ln', heads)]
1831 1836 markers = repo.obsstore.relevantmarkers(subset)
1832 1837 markers = sorted(markers)
1833 1838 bundle2.buildobsmarkerspart(bundler, markers)
1834 1839
1835 1840 @getbundle2partsgenerator('phases')
1836 1841 def _getbundlephasespart(bundler, repo, source, bundlecaps=None,
1837 1842 b2caps=None, heads=None, **kwargs):
1838 1843 """add phase heads part to the requested bundle"""
1839 1844 if kwargs.get(r'phases', False):
1840 1845 if 'heads' not in b2caps.get('phases'):
1841 1846 raise ValueError(_('no common phases exchange method'))
1842 1847 if heads is None:
1843 1848 heads = repo.heads()
1844 1849
1845 1850 headsbyphase = collections.defaultdict(set)
1846 1851 if repo.publishing():
1847 1852 headsbyphase[phases.public] = heads
1848 1853 else:
1849 1854 # find the appropriate heads to move
1850 1855
1851 1856 phase = repo._phasecache.phase
1852 1857 node = repo.changelog.node
1853 1858 rev = repo.changelog.rev
1854 1859 for h in heads:
1855 1860 headsbyphase[phase(repo, rev(h))].add(h)
1856 1861 seenphases = list(headsbyphase.keys())
1857 1862
1858 1863 # We do not handle anything but public and draft phases for now
1859 1864 if seenphases:
1860 1865 assert max(seenphases) <= phases.draft
1861 1866
1862 1867 # if client is pulling non-public changesets, we need to find
1863 1868 # intermediate public heads.
1864 1869 draftheads = headsbyphase.get(phases.draft, set())
1865 1870 if draftheads:
1866 1871 publicheads = headsbyphase.get(phases.public, set())
1867 1872
1868 1873 revset = 'heads(only(%ln, %ln) and public())'
1869 1874 extraheads = repo.revs(revset, draftheads, publicheads)
1870 1875 for r in extraheads:
1871 1876 headsbyphase[phases.public].add(node(r))
1872 1877
1873 1878 # transform data in a format used by the encoding function
1874 1879 phasemapping = []
1875 1880 for phase in phases.allphases:
1876 1881 phasemapping.append(sorted(headsbyphase[phase]))
1877 1882
1878 1883 # generate the actual part
1879 1884 phasedata = phases.binaryencode(phasemapping)
1880 1885 bundler.newpart('phase-heads', data=phasedata)
1881 1886
1882 1887 @getbundle2partsgenerator('hgtagsfnodes')
1883 1888 def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
1884 1889 b2caps=None, heads=None, common=None,
1885 1890 **kwargs):
1886 1891 """Transfer the .hgtags filenodes mapping.
1887 1892
1888 1893 Only values for heads in this bundle will be transferred.
1889 1894
1890 1895 The part data consists of pairs of a 20-byte changeset node and the raw
1891 1896 .hgtags filenode value.
1892 1897 """
1893 1898 # Don't send unless:
1894 1899 # - changesets are being exchanged,
1895 1900 # - the client supports it.
1896 1901 if not (kwargs.get(r'cg', True) and 'hgtagsfnodes' in b2caps):
1897 1902 return
1898 1903
1899 1904 outgoing = _computeoutgoing(repo, heads, common)
1900 1905 bundle2.addparttagsfnodescache(repo, bundler, outgoing)
1901 1906
1902 1907 def check_heads(repo, their_heads, context):
1903 1908 """check if the heads of a repo have been modified
1904 1909
1905 1910 Used by peer for unbundling.
1906 1911 """
1907 1912 heads = repo.heads()
1908 1913 heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
1909 1914 if not (their_heads == ['force'] or their_heads == heads or
1910 1915 their_heads == ['hashed', heads_hash]):
1911 1916 # someone else committed/pushed/unbundled while we
1912 1917 # were transferring data
1913 1918 raise error.PushRaced('repository changed while %s - '
1914 1919 'please try again' % context)
1915 1920
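The ``hashed`` form lets a client commit to the exact remote state it observed
during discovery without shipping every head node. A sketch of the digest the
client computes, using illustrative 20-byte node values::

    import hashlib

    observed_heads = ['\x11' * 20, '\x22' * 20]  # illustrative node ids
    heads_hash = hashlib.sha1(''.join(sorted(observed_heads))).digest()
    their_heads = ['hashed', heads_hash]         # what unbundle() checks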
1916 1921 def unbundle(repo, cg, heads, source, url):
1917 1922 """Apply a bundle to a repo.
1918 1923
1919 1924 This function makes sure the repo is locked during the application and has
1920 1925 a mechanism to check that no push race occurred between the creation of the
1921 1926 bundle and its application.
1922 1927
1923 1928 If the push was raced, a PushRaced exception is raised."""
1924 1929 r = 0
1925 1930 # need a transaction when processing a bundle2 stream
1926 1931 # [wlock, lock, tr] - needs to be an array so nested functions can modify it
1927 1932 lockandtr = [None, None, None]
1928 1933 recordout = None
1929 1934 # quick fix for output mismatch with bundle2 in 3.4
1930 1935 captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture')
1931 1936 if url.startswith('remote:http:') or url.startswith('remote:https:'):
1932 1937 captureoutput = True
1933 1938 try:
1934 1939 # note: outside bundle1, 'heads' is expected to be empty and this
1935 1940 # 'check_heads' call will be a no-op
1936 1941 check_heads(repo, heads, 'uploading changes')
1937 1942 # push can proceed
1938 1943 if not isinstance(cg, bundle2.unbundle20):
1939 1944 # legacy case: bundle1 (changegroup 01)
1940 1945 txnname = "\n".join([source, util.hidepassword(url)])
1941 1946 with repo.lock(), repo.transaction(txnname) as tr:
1942 1947 op = bundle2.applybundle(repo, cg, tr, source, url)
1943 1948 r = bundle2.combinechangegroupresults(op)
1944 1949 else:
1945 1950 r = None
1946 1951 try:
1947 1952 def gettransaction():
1948 1953 if not lockandtr[2]:
1949 1954 lockandtr[0] = repo.wlock()
1950 1955 lockandtr[1] = repo.lock()
1951 1956 lockandtr[2] = repo.transaction(source)
1952 1957 lockandtr[2].hookargs['source'] = source
1953 1958 lockandtr[2].hookargs['url'] = url
1954 1959 lockandtr[2].hookargs['bundle2'] = '1'
1955 1960 return lockandtr[2]
1956 1961
1957 1962 # Do greedy locking by default until we're satisfied with lazy
1958 1963 # locking.
1959 1964 if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
1960 1965 gettransaction()
1961 1966
1962 1967 op = bundle2.bundleoperation(repo, gettransaction,
1963 1968 captureoutput=captureoutput)
1964 1969 try:
1965 1970 op = bundle2.processbundle(repo, cg, op=op)
1966 1971 finally:
1967 1972 r = op.reply
1968 1973 if captureoutput and r is not None:
1969 1974 repo.ui.pushbuffer(error=True, subproc=True)
1970 1975 def recordout(output):
1971 1976 r.newpart('output', data=output, mandatory=False)
1972 1977 if lockandtr[2] is not None:
1973 1978 lockandtr[2].close()
1974 1979 except BaseException as exc:
1975 1980 exc.duringunbundle2 = True
1976 1981 if captureoutput and r is not None:
1977 1982 parts = exc._bundle2salvagedoutput = r.salvageoutput()
1978 1983 def recordout(output):
1979 1984 part = bundle2.bundlepart('output', data=output,
1980 1985 mandatory=False)
1981 1986 parts.append(part)
1982 1987 raise
1983 1988 finally:
1984 1989 lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
1985 1990 if recordout is not None:
1986 1991 recordout(repo.ui.popbuffer())
1987 1992 return r
1988 1993
1989 1994 def _maybeapplyclonebundle(pullop):
1990 1995 """Apply a clone bundle from a remote, if possible."""
1991 1996
1992 1997 repo = pullop.repo
1993 1998 remote = pullop.remote
1994 1999
1995 2000 if not repo.ui.configbool('ui', 'clonebundles'):
1996 2001 return
1997 2002
1998 2003 # Only run if local repo is empty.
1999 2004 if len(repo):
2000 2005 return
2001 2006
2002 2007 if pullop.heads:
2003 2008 return
2004 2009
2005 2010 if not remote.capable('clonebundles'):
2006 2011 return
2007 2012
2008 2013 res = remote._call('clonebundles')
2009 2014
2010 2015 # If we call the wire protocol command, that's good enough to record the
2011 2016 # attempt.
2012 2017 pullop.clonebundleattempted = True
2013 2018
2014 2019 entries = parseclonebundlesmanifest(repo, res)
2015 2020 if not entries:
2016 2021 repo.ui.note(_('no clone bundles available on remote; '
2017 2022 'falling back to regular clone\n'))
2018 2023 return
2019 2024
2020 2025 entries = filterclonebundleentries(
2021 2026 repo, entries, streamclonerequested=pullop.streamclonerequested)
2022 2027
2023 2028 if not entries:
2024 2029 # There is a thundering herd concern here. However, if a server
2025 2030 # operator doesn't advertise bundles appropriate for its clients,
2026 2031 # they deserve what's coming. Furthermore, from a client's
2027 2032 # perspective, no automatic fallback would mean not being able to
2028 2033 # clone!
2029 2034 repo.ui.warn(_('no compatible clone bundles available on server; '
2030 2035 'falling back to regular clone\n'))
2031 2036 repo.ui.warn(_('(you may want to report this to the server '
2032 2037 'operator)\n'))
2033 2038 return
2034 2039
2035 2040 entries = sortclonebundleentries(repo.ui, entries)
2036 2041
2037 2042 url = entries[0]['URL']
2038 2043 repo.ui.status(_('applying clone bundle from %s\n') % url)
2039 2044 if trypullbundlefromurl(repo.ui, repo, url):
2040 2045 repo.ui.status(_('finished applying clone bundle\n'))
2041 2046 # Bundle failed.
2042 2047 #
2043 2048 # We abort by default to avoid the thundering herd of
2044 2049 # clients flooding a server that was expecting expensive
2045 2050 # clone load to be offloaded.
2046 2051 elif repo.ui.configbool('ui', 'clonebundlefallback'):
2047 2052 repo.ui.warn(_('falling back to normal clone\n'))
2048 2053 else:
2049 2054 raise error.Abort(_('error applying bundle'),
2050 2055 hint=_('if this error persists, consider contacting '
2051 2056 'the server operator or disable clone '
2052 2057 'bundles via '
2053 2058 '"--config ui.clonebundles=false"'))
2054 2059
2055 2060 def parseclonebundlesmanifest(repo, s):
2056 2061 """Parses the raw text of a clone bundles manifest.
2057 2062
2058 2063 Returns a list of dicts. The dicts have a ``URL`` key corresponding
2059 2064 to the URL and other keys are the attributes for the entry.
2060 2065 """
2061 2066 m = []
2062 2067 for line in s.splitlines():
2063 2068 fields = line.split()
2064 2069 if not fields:
2065 2070 continue
2066 2071 attrs = {'URL': fields[0]}
2067 2072 for rawattr in fields[1:]:
2068 2073 key, value = rawattr.split('=', 1)
2069 2074 key = urlreq.unquote(key)
2070 2075 value = urlreq.unquote(value)
2071 2076 attrs[key] = value
2072 2077
2073 2078 # Parse BUNDLESPEC into components. This makes client-side
2074 2079 # preferences easier to specify since you can prefer a single
2075 2080 # component of the BUNDLESPEC.
2076 2081 if key == 'BUNDLESPEC':
2077 2082 try:
2078 2083 comp, version, params = parsebundlespec(repo, value,
2079 2084 externalnames=True)
2080 2085 attrs['COMPRESSION'] = comp
2081 2086 attrs['VERSION'] = version
2082 2087 except error.InvalidBundleSpecification:
2083 2088 pass
2084 2089 except error.UnsupportedBundleSpecification:
2085 2090 pass
2086 2091
2087 2092 m.append(attrs)
2088 2093
2089 2094 return m
2090 2095
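For example, a manifest line such as the following (URL and attributes are
illustrative)::

    https://hg.example.com/full.hg BUNDLESPEC=gzip-v2 REQUIRESNI=true

parses into::

    {'URL': 'https://hg.example.com/full.hg',
     'BUNDLESPEC': 'gzip-v2',
     'COMPRESSION': 'gzip',
     'VERSION': 'v2',
     'REQUIRESNI': 'true'}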
2091 2096 def filterclonebundleentries(repo, entries, streamclonerequested=False):
2092 2097 """Remove incompatible clone bundle manifest entries.
2093 2098
2094 2099 Accepts a list of entries parsed with ``parseclonebundlesmanifest``
2095 2100 and returns a new list consisting of only the entries that this client
2096 2101 should be able to apply.
2097 2102
2098 2103 There is no guarantee we'll be able to apply all returned entries because
2099 2104 the metadata we use to filter on may be missing or wrong.
2100 2105 """
2101 2106 newentries = []
2102 2107 for entry in entries:
2103 2108 spec = entry.get('BUNDLESPEC')
2104 2109 if spec:
2105 2110 try:
2106 2111 comp, version, params = parsebundlespec(repo, spec, strict=True)
2107 2112
2108 2113 # If a stream clone was requested, filter out non-streamclone
2109 2114 # entries.
2110 2115 if streamclonerequested and (comp != 'UN' or version != 's1'):
2111 2116 repo.ui.debug('filtering %s because not a stream clone\n' %
2112 2117 entry['URL'])
2113 2118 continue
2114 2119
2115 2120 except error.InvalidBundleSpecification as e:
2116 2121 repo.ui.debug(str(e) + '\n')
2117 2122 continue
2118 2123 except error.UnsupportedBundleSpecification as e:
2119 2124 repo.ui.debug('filtering %s because unsupported bundle '
2120 2125 'spec: %s\n' % (entry['URL'], str(e)))
2121 2126 continue
2122 2127 # If we don't have a spec and requested a stream clone, we don't know
2123 2128 # what the entry is so don't attempt to apply it.
2124 2129 elif streamclonerequested:
2125 2130 repo.ui.debug('filtering %s because cannot determine if a stream '
2126 2131 'clone bundle\n' % entry['URL'])
2127 2132 continue
2128 2133
2129 2134 if 'REQUIRESNI' in entry and not sslutil.hassni:
2130 2135 repo.ui.debug('filtering %s because SNI not supported\n' %
2131 2136 entry['URL'])
2132 2137 continue
2133 2138
2134 2139 newentries.append(entry)
2135 2140
2136 2141 return newentries
2137 2142
2138 2143 class clonebundleentry(object):
2139 2144 """Represents an item in a clone bundles manifest.
2140 2145
2141 2146 This rich class is needed to support sorting since sorted() in Python 3
2142 2147 doesn't support ``cmp`` and our comparison is complex enough that ``key=``
2143 2148 won't work.
2144 2149 """
2145 2150
2146 2151 def __init__(self, value, prefers):
2147 2152 self.value = value
2148 2153 self.prefers = prefers
2149 2154
2150 2155 def _cmp(self, other):
2151 2156 for prefkey, prefvalue in self.prefers:
2152 2157 avalue = self.value.get(prefkey)
2153 2158 bvalue = other.value.get(prefkey)
2154 2159
2155 2160 # Special case: b is missing the attribute and a matches exactly.
2156 2161 if avalue is not None and bvalue is None and avalue == prefvalue:
2157 2162 return -1
2158 2163
2159 2164 # Special case: a is missing the attribute and b matches exactly.
2160 2165 if bvalue is not None and avalue is None and bvalue == prefvalue:
2161 2166 return 1
2162 2167
2163 2168 # We can't compare unless attribute present on both.
2164 2169 if avalue is None or bvalue is None:
2165 2170 continue
2166 2171
2167 2172 # Same values should fall back to next attribute.
2168 2173 if avalue == bvalue:
2169 2174 continue
2170 2175
2171 2176 # Exact matches come first.
2172 2177 if avalue == prefvalue:
2173 2178 return -1
2174 2179 if bvalue == prefvalue:
2175 2180 return 1
2176 2181
2177 2182 # Fall back to next attribute.
2178 2183 continue
2179 2184
2180 2185 # If we got here we couldn't sort by attributes and prefers. Fall
2181 2186 # back to index order.
2182 2187 return 0
2183 2188
2184 2189 def __lt__(self, other):
2185 2190 return self._cmp(other) < 0
2186 2191
2187 2192 def __gt__(self, other):
2188 2193 return self._cmp(other) > 0
2189 2194
2190 2195 def __eq__(self, other):
2191 2196 return self._cmp(other) == 0
2192 2197
2193 2198 def __le__(self, other):
2194 2199 return self._cmp(other) <= 0
2195 2200
2196 2201 def __ge__(self, other):
2197 2202 return self._cmp(other) >= 0
2198 2203
2199 2204 def __ne__(self, other):
2200 2205 return self._cmp(other) != 0
2201 2206
2202 2207 def sortclonebundleentries(ui, entries):
2203 2208 prefers = ui.configlist('ui', 'clonebundleprefers')
2204 2209 if not prefers:
2205 2210 return list(entries)
2206 2211
2207 2212 prefers = [p.split('=', 1) for p in prefers]
2208 2213
2209 2214 items = sorted(clonebundleentry(v, prefers) for v in entries)
2210 2215 return [i.value for i in items]
2211 2216
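For instance, with the hypothetical preference list
``ui.clonebundleprefers=VERSION=v2,COMPRESSION=gzip``, the comparison walks the
preferences in order and the exact ``VERSION=v2`` match wins::

    prefers = [['VERSION', 'v2'], ['COMPRESSION', 'gzip']]
    entries = [{'URL': 'a.hg', 'VERSION': 'v1', 'COMPRESSION': 'gzip'},
               {'URL': 'b.hg', 'VERSION': 'v2', 'COMPRESSION': 'bzip2'}]
    items = sorted(clonebundleentry(v, prefers) for v in entries)
    [i.value['URL'] for i in items]  # -> ['b.hg', 'a.hg']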
2212 2217 def trypullbundlefromurl(ui, repo, url):
2213 2218 """Attempt to apply a bundle from a URL."""
2214 2219 with repo.lock(), repo.transaction('bundleurl') as tr:
2215 2220 try:
2216 2221 fh = urlmod.open(ui, url)
2217 2222 cg = readbundle(ui, fh, 'stream')
2218 2223
2219 2224 if isinstance(cg, streamclone.streamcloneapplier):
2220 2225 cg.apply(repo)
2221 2226 else:
2222 2227 bundle2.applybundle(repo, cg, tr, 'clonebundles', url)
2223 2228 return True
2224 2229 except urlerr.httperror as e:
2225 2230 ui.warn(_('HTTP error fetching bundle: %s\n') % str(e))
2226 2231 except urlerr.urlerror as e:
2227 2232 ui.warn(_('error fetching bundle: %s\n') % e.reason)
2228 2233
2229 2234 return False
@@ -1,198 +1,264
1 1 #require serve
2 2
3 #testcases stream-legacy stream-bundle2
4
5 #if stream-bundle2
6 $ cat << EOF >> $HGRCPATH
7 > [experimental]
8 > bundle2.stream = yes
9 > EOF
10 #endif
11
3 12 Initialize repository
4 13 the status call is to check for issue5130
5 14
6 15 $ hg init server
7 16 $ cd server
8 17 $ touch foo
9 18 $ hg -q commit -A -m initial
10 19 >>> for i in range(1024):
11 20 ... with open(str(i), 'wb') as fh:
12 21 ... fh.write(str(i))
13 22 $ hg -q commit -A -m 'add a lot of files'
14 23 $ hg st
15 24 $ hg serve -p $HGPORT -d --pid-file=hg.pid
16 25 $ cat hg.pid >> $DAEMON_PIDS
17 26 $ cd ..
18 27
19 28 Basic clone
20 29
30 #if stream-legacy
21 31 $ hg clone --stream -U http://localhost:$HGPORT clone1
22 32 streaming all changes
23 33 1027 files to transfer, 96.3 KB of data
24 34 transferred 96.3 KB in * seconds (*/sec) (glob)
25 35 searching for changes
26 36 no changes found
37 #endif
38 #if stream-bundle2
39 $ hg clone --stream -U http://localhost:$HGPORT clone1
40 streaming all changes
41 1027 files to transfer, 96.3 KB of data
42 transferred 96.3 KB in * seconds (* */sec) (glob)
43 #endif
27 44
28 45 --uncompressed is an alias to --stream
29 46
47 #if stream-legacy
30 48 $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
31 49 streaming all changes
32 50 1027 files to transfer, 96.3 KB of data
33 51 transferred 96.3 KB in * seconds (*/sec) (glob)
34 52 searching for changes
35 53 no changes found
54 #endif
55 #if stream-bundle2
56 $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
57 streaming all changes
58 1027 files to transfer, 96.3 KB of data
59 transferred 96.3 KB in * seconds (* */sec) (glob)
60 #endif
36 61
37 62 Clone with background file closing enabled
38 63
64 #if stream-legacy
39 65 $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
40 66 using http://localhost:$HGPORT/
41 67 sending capabilities command
42 68 sending branchmap command
43 69 streaming all changes
44 70 sending stream_out command
45 71 1027 files to transfer, 96.3 KB of data
46 72 starting 4 threads for background file closing
47 73 transferred 96.3 KB in * seconds (*/sec) (glob)
48 74 query 1; heads
49 75 sending batch command
50 76 searching for changes
51 77 all remote heads known locally
52 78 no changes found
53 79 sending getbundle command
54 80 bundle2-input-bundle: with-transaction
55 81 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
56 82 bundle2-input-part: "phase-heads" supported
57 83 bundle2-input-part: total payload size 24
58 84 bundle2-input-bundle: 1 parts total
59 85 checking for updated bookmarks
86 #endif
87 #if stream-bundle2
88 $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
89 using http://localhost:$HGPORT/
90 sending capabilities command
91 query 1; heads
92 sending batch command
93 streaming all changes
94 sending getbundle command
95 bundle2-input-bundle: with-transaction
96 bundle2-input-part: "stream" (params: 4 mandatory) supported
97 applying stream bundle
98 1027 files to transfer, 96.3 KB of data
99 starting 4 threads for background file closing
100 transferred 96.3 KB in * seconds (* */sec) (glob)
101 bundle2-input-part: total payload size 110887
102 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
103 bundle2-input-part: "phase-heads" supported
104 bundle2-input-part: total payload size 24
105 bundle2-input-bundle: 2 parts total
106 checking for updated bookmarks
107 #endif
60 108
61 109 Cannot stream clone when there are secret changesets
62 110
63 111 $ hg -R server phase --force --secret -r tip
64 112 $ hg clone --stream -U http://localhost:$HGPORT secret-denied
65 113 warning: stream clone requested but server has them disabled
66 114 requesting all changes
67 115 adding changesets
68 116 adding manifests
69 117 adding file changes
70 118 added 1 changesets with 1 changes to 1 files
71 119 new changesets 96ee1d7354c4
72 120
73 121 $ killdaemons.py
74 122
75 123 Streaming of secrets can be overridden by server config
76 124
77 125 $ cd server
78 126 $ hg serve --config server.uncompressedallowsecret=true -p $HGPORT -d --pid-file=hg.pid
79 127 $ cat hg.pid > $DAEMON_PIDS
80 128 $ cd ..
81 129
130 #if stream-legacy
82 131 $ hg clone --stream -U http://localhost:$HGPORT secret-allowed
83 132 streaming all changes
84 133 1027 files to transfer, 96.3 KB of data
85 134 transferred 96.3 KB in * seconds (*/sec) (glob)
86 135 searching for changes
87 136 no changes found
137 #endif
138 #if stream-bundle2
139 $ hg clone --stream -U http://localhost:$HGPORT secret-allowed
140 streaming all changes
141 1027 files to transfer, 96.3 KB of data
142 transferred 96.3 KB in * seconds (* */sec) (glob)
143 #endif
88 144
89 145 $ killdaemons.py
90 146
91 147 Verify interaction between preferuncompressed and secret presence
92 148
93 149 $ cd server
94 150 $ hg serve --config server.preferuncompressed=true -p $HGPORT -d --pid-file=hg.pid
95 151 $ cat hg.pid > $DAEMON_PIDS
96 152 $ cd ..
97 153
98 154 $ hg clone -U http://localhost:$HGPORT preferuncompressed-secret
99 155 requesting all changes
100 156 adding changesets
101 157 adding manifests
102 158 adding file changes
103 159 added 1 changesets with 1 changes to 1 files
104 160 new changesets 96ee1d7354c4
105 161
106 162 $ killdaemons.py
107 163
108 164 Clone not allowed when full bundles disabled and can't serve secrets
109 165
110 166 $ cd server
111 167 $ hg serve --config server.disablefullbundle=true -p $HGPORT -d --pid-file=hg.pid
112 168 $ cat hg.pid > $DAEMON_PIDS
113 169 $ cd ..
114 170
115 171 $ hg clone --stream http://localhost:$HGPORT secret-full-disabled
116 172 warning: stream clone requested but server has them disabled
117 173 requesting all changes
118 174 remote: abort: server has pull-based clones disabled
119 175 abort: pull failed on remote
120 176 (remove --pull if specified or upgrade Mercurial)
121 177 [255]
122 178
123 179 Local stream clone with secrets involved
124 180 (This is just a test over behavior: if you have access to the repo's files,
125 181 there is no security so it isn't important to prevent a clone here.)
126 182
127 183 $ hg clone -U --stream server local-secret
128 184 warning: stream clone requested but server has them disabled
129 185 requesting all changes
130 186 adding changesets
131 187 adding manifests
132 188 adding file changes
133 189 added 1 changesets with 1 changes to 1 files
134 190 new changesets 96ee1d7354c4
135 191
136 192 Stream clone while repo is changing:
137 193
138 194 $ mkdir changing
139 195 $ cd changing
140 196
141 197 extension for delaying the server process so we can reliably modify the repo
142 198 while cloning
143 199
144 200 $ cat > delayer.py <<EOF
145 201 > import time
146 202 > from mercurial import extensions, vfs
147 203 > def __call__(orig, self, path, *args, **kwargs):
148 204 > if path == 'data/f1.i':
149 205 > time.sleep(2)
150 206 > return orig(self, path, *args, **kwargs)
151 207 > extensions.wrapfunction(vfs.vfs, '__call__', __call__)
152 208 > EOF
153 209
154 210 prepare repo with small and big file to cover both code paths in emitrevlogdata
155 211
156 212 $ hg init repo
157 213 $ touch repo/f1
158 214 $ $TESTDIR/seq.py 50000 > repo/f2
159 215 $ hg -R repo ci -Aqm "0"
160 216 $ hg serve -R repo -p $HGPORT1 -d --pid-file=hg.pid --config extensions.delayer=delayer.py
161 217 $ cat hg.pid >> $DAEMON_PIDS
162 218
163 219 clone while modifying the repo between stat'ing files under the write lock
164 220 and actually serving file content
165 221
166 222 $ hg clone -q --stream -U http://localhost:$HGPORT1 clone &
167 223 $ sleep 1
168 224 $ echo >> repo/f1
169 225 $ echo >> repo/f2
170 226 $ hg -R repo ci -m "1"
171 227 $ wait
172 228 $ hg -R clone id
173 229 000000000000
174 230 $ cd ..
175 231
176 232 Stream repository with bookmarks
177 233 --------------------------------
178 234
179 235 (revert introduction of secret changeset)
180 236
181 237 $ hg -R server phase --draft 'secret()'
182 238
183 239 add a bookmark
184 240
185 241 $ hg -R server bookmark -r tip some-bookmark
186 242
187 243 clone it
188 244
245 #if stream-legacy
189 246 $ hg clone --stream http://localhost:$HGPORT with-bookmarks
190 247 streaming all changes
191 248 1027 files to transfer, 96.3 KB of data
192 249 transferred 96.3 KB in * seconds (*) (glob)
193 250 searching for changes
194 251 no changes found
195 252 updating to branch default
196 253 1025 files updated, 0 files merged, 0 files removed, 0 files unresolved
254 #endif
255 #if stream-bundle2
256 $ hg clone --stream http://localhost:$HGPORT with-bookmarks
257 streaming all changes
258 1027 files to transfer, 96.3 KB of data
259 transferred 96.3 KB in * seconds (* */sec) (glob)
260 updating to branch default
261 1025 files updated, 0 files merged, 0 files removed, 0 files unresolved
262 #endif
197 263 $ hg -R with-bookmarks bookmarks
198 264 some-bookmark 1:c17445101a72