push: add a way to allow concurrent pushes on unrelated heads...
marmoute
r32709:16ada4cb default
@@ -1,1755 +1,1786 b''
1 1 # bundle2.py - generic container format to transmit arbitrary data.
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """Handling of the new bundle2 format
8 8
9 9 The goal of bundle2 is to act as an atomic packet to transmit a set of
10 10 payloads in an application-agnostic way. It consists of a sequence of "parts"
11 11 that will be handed to and processed by the application layer.
12 12
13 13
14 14 General format architecture
15 15 ===========================
16 16
17 17 The format is structured as follows:
18 18
19 19 - magic string
20 20 - stream level parameters
21 21 - payload parts (any number)
22 22 - end of stream marker.
23 23
24 24 The binary format
25 25 ============================
26 26
27 27 All numbers are unsigned and big-endian.
28 28
29 29 stream level parameters
30 30 ------------------------
31 31
32 32 The binary format is as follows:
33 33
34 34 :params size: int32
35 35
36 36 The total number of bytes used by the parameters.
37 37 
38 38 :params value: arbitrary number of bytes
39 39 
40 40 A blob of `params size` bytes containing the serialized version of all
41 41 stream level parameters.
42 42
43 43 The blob contains a space separated list of parameters. Parameters with value
44 44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45 45
46 46 Empty names are obviously forbidden.
47 47
48 48 Names MUST start with a letter. If the first letter is lower case, the
49 49 parameter is advisory and can be safely ignored. However, when the first
50 50 letter is upper case, the parameter is mandatory and the bundling process
51 51 MUST stop if it is unable to process it.
52 52
53 53 Stream parameters use a simple textual format for two main reasons:
54 54
55 55 - Stream level parameters should remain simple and we want to discourage any
56 56 crazy usage.
57 57 - Textual data allows easy human inspection of a bundle2 header in case of
58 58 trouble.
59 59
60 60 Any application-level options MUST go into a bundle2 part instead.
61 61
62 62 Payload part
63 63 ------------------------
64 64
65 65 The binary format is as follows:
66 66
67 67 :header size: int32
68 68
69 69 The total number of bytes used by the part header. When the header is empty
70 70 (size = 0) this is interpreted as the end-of-stream marker.
71 71
72 72 :header:
73 73
74 74 The header defines how to interpret the part. It contains two pieces of
75 75 data: the part type and the part parameters.
76 76 
77 77 The part type is used to route the part to an application-level handler
78 78 that can interpret the payload.
79 79 
80 80 Part parameters are passed to the application-level handler. They are
81 81 meant to convey information that will help the application-level object
82 82 interpret the part payload.
83 83
84 84 The binary format of the header is as follows:
85 85
86 86 :typesize: (one byte)
87 87
88 88 :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)
89 89
90 90 :partid: A 32-bit integer (unique in the bundle) that can be used to refer
91 91 to this part.
92 92
93 93 :parameters:
94 94
95 95 Part parameters may have arbitrary content; the binary structure is::
96 96
97 97 <mandatory-count><advisory-count><param-sizes><param-data>
98 98
99 99 :mandatory-count: 1 byte, number of mandatory parameters
100 100
101 101 :advisory-count: 1 byte, number of advisory parameters
102 102
103 103 :param-sizes:
104 104
105 105 N pairs of bytes, where N is the total number of parameters. Each
106 106 pair contains (<size-of-key>, <size-of-value>) for one parameter.
107 107
108 108 :param-data:
109 109
110 110 A blob of bytes from which each parameter key and value can be
111 111 retrieved using the list of size couples stored in the previous
112 112 field.
113 113
114 114 Mandatory parameters come first, then the advisory ones.
115 115
116 116 Each parameter's key MUST be unique within the part.
117 117
118 118 :payload:
119 119
120 120 payload is a series of `<chunksize><chunkdata>`.
121 121
122 122 `chunksize` is an int32; `chunkdata` is `chunksize` plain bytes. The
123 123 payload part is concluded by a zero-size chunk.
124 124
125 125 The current implementation always produces either zero or one chunk.
126 126 This is an implementation limitation that will ultimately be lifted.
127 127
128 128 `chunksize` can be negative to trigger special-case processing. Currently,
129 129 a size of -1 signals an interruption (see `flaginterrupt` below).
130 130
131 131 Bundle processing
132 132 ============================
133 133
134 134 Each part is processed in order using a "part handler". Handlers are
135 135 registered for a certain part type.
136 136
137 137 The matching of a part to its handler is case insensitive. The case of the
138 138 part type is used to know if a part is mandatory or advisory. If the part type
139 139 contains any uppercase character it is considered mandatory. When no handler is
140 140 known for a mandatory part, the process is aborted and an exception is raised.
141 141 If the part is advisory and no handler is known, the part is ignored. When the
142 142 process is aborted, the full bundle is still read from the stream to keep the
143 143 channel usable. But none of the parts read after an abort are processed. In the
144 144 future, dropping the stream may become an option for channels we do not care to
145 145 preserve.
146 146 """
147 147
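For orientation, here is a minimal sketch (illustrative only, not part of this module) that hand-assembles the smallest valid HG20 stream described above: the magic string, a zero-length stream-parameter block, and the end-of-stream marker.

    import struct

    def tiny_bundle():
        # magic string, int32 params size (0), then an empty part header
        # (size 0), which doubles as the end-of-stream marker
        return b'HG20' + struct.pack('>i', 0) + struct.pack('>i', 0)

    assert tiny_bundle() == b'HG20' + b'\x00' * 8
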
148 148 from __future__ import absolute_import
149 149
150 150 import errno
151 151 import re
152 152 import string
153 153 import struct
154 154 import sys
155 155
156 156 from .i18n import _
157 157 from . import (
158 158 changegroup,
159 159 error,
160 160 obsolete,
161 161 pushkey,
162 162 pycompat,
163 163 tags,
164 164 url,
165 165 util,
166 166 )
167 167
168 168 urlerr = util.urlerr
169 169 urlreq = util.urlreq
170 170
171 171 _pack = struct.pack
172 172 _unpack = struct.unpack
173 173
174 174 _fstreamparamsize = '>i'
175 175 _fpartheadersize = '>i'
176 176 _fparttypesize = '>B'
177 177 _fpartid = '>I'
178 178 _fpayloadsize = '>i'
179 179 _fpartparamcount = '>BB'
180 180
181 181 preferedchunksize = 4096
182 182
183 183 _parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')
184 184
185 185 def outdebug(ui, message):
186 186 """debug regarding output stream (bundling)"""
187 187 if ui.configbool('devel', 'bundle2.debug', False):
188 188 ui.debug('bundle2-output: %s\n' % message)
189 189
190 190 def indebug(ui, message):
191 191 """debug on input stream (unbundling)"""
192 192 if ui.configbool('devel', 'bundle2.debug', False):
193 193 ui.debug('bundle2-input: %s\n' % message)
194 194
195 195 def validateparttype(parttype):
196 196 """raise ValueError if a parttype contains invalid character"""
197 197 if _parttypeforbidden.search(parttype):
198 198 raise ValueError(parttype)
199 199
200 200 def _makefpartparamsizes(nbparams):
201 201 """return a struct format to read part parameter sizes
202 202
203 203 The number of parameters is variable, so we need to build the format
204 204 dynamically.
205 205 """
206 206 return '>'+('BB'*nbparams)
207 207
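For illustration, each parameter contributes one (key-size, value-size) byte pair to the generated format:

    # illustrative sketch using the helper above
    import struct
    assert _makefpartparamsizes(2) == '>BBBB'
    assert struct.calcsize(_makefpartparamsizes(2)) == 4  # four one-byte sizes
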
208 208 parthandlermapping = {}
209 209
210 210 def parthandler(parttype, params=()):
211 211 """decorator that register a function as a bundle2 part handler
212 212
213 213 eg::
214 214
215 215 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
216 216 def myparttypehandler(...):
217 217 '''process a part of type "my part".'''
218 218 ...
219 219 """
220 220 validateparttype(parttype)
221 221 def _decorator(func):
222 222 lparttype = parttype.lower() # enforce lower case matching.
223 223 assert lparttype not in parthandlermapping
224 224 parthandlermapping[lparttype] = func
225 225 func.params = frozenset(params)
226 226 return func
227 227 return _decorator
228 228
229 229 class unbundlerecords(object):
230 230 """keep record of what happens during and unbundle
231 231
232 232 New records are added using `records.add('cat', obj)`, where 'cat' is a
233 233 category of record and obj is an arbitrary object.
234 234
235 235 `records['cat']` will return all entries of this category 'cat'.
236 236
237 237 Iterating on the object itself will yield `('category', obj)` tuples
238 238 for all entries.
239 239
240 240 All iteration happens in chronological order.
241 241 """
242 242
243 243 def __init__(self):
244 244 self._categories = {}
245 245 self._sequences = []
246 246 self._replies = {}
247 247
248 248 def add(self, category, entry, inreplyto=None):
249 249 """add a new record of a given category.
250 250
251 251 The entry can then be retrieved in the list returned by
252 252 self['category']."""
253 253 self._categories.setdefault(category, []).append(entry)
254 254 self._sequences.append((category, entry))
255 255 if inreplyto is not None:
256 256 self.getreplies(inreplyto).add(category, entry)
257 257
258 258 def getreplies(self, partid):
259 259 """get the records that are replies to a specific part"""
260 260 return self._replies.setdefault(partid, unbundlerecords())
261 261
262 262 def __getitem__(self, cat):
263 263 return tuple(self._categories.get(cat, ()))
264 264
265 265 def __iter__(self):
266 266 return iter(self._sequences)
267 267
268 268 def __len__(self):
269 269 return len(self._sequences)
270 270
271 271 def __nonzero__(self):
272 272 return bool(self._sequences)
273 273
274 274 __bool__ = __nonzero__
275 275
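A brief illustrative sketch of the recording API described in the class docstring:

    records = unbundlerecords()
    records.add('changegroup', {'return': 1})
    assert records['changegroup'] == ({'return': 1},)
    assert list(records) == [('changegroup', {'return': 1})]
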
276 276 class bundleoperation(object):
277 277 """an object that represents a single bundling process
278 278
279 279 Its purpose is to carry unbundle-related objects and states.
280 280
281 281 A new object should be created at the beginning of each bundle processing.
282 282 The object is to be returned by the processing function.
283 283
284 284 The object has very little content now; it will ultimately contain:
285 285 * an access to the repo the bundle is applied to,
286 286 * a ui object,
287 287 * a way to retrieve a transaction to add changes to the repo,
288 288 * a way to record the result of processing each part,
289 289 * a way to construct a bundle response when applicable.
290 290 """
291 291
292 292 def __init__(self, repo, transactiongetter, captureoutput=True):
293 293 self.repo = repo
294 294 self.ui = repo.ui
295 295 self.records = unbundlerecords()
296 296 self.gettransaction = transactiongetter
297 297 self.reply = None
298 298 self.captureoutput = captureoutput
299 299
300 300 class TransactionUnavailable(RuntimeError):
301 301 pass
302 302
303 303 def _notransaction():
304 304 """default method to get a transaction while processing a bundle
305 305
306 306 Raise an exception to highlight the fact that no transaction was expected
307 307 to be created"""
308 308 raise TransactionUnavailable()
309 309
310 310 def applybundle(repo, unbundler, tr, source=None, url=None, op=None):
311 311 # transform me into unbundler.apply() as soon as the freeze is lifted
312 312 tr.hookargs['bundle2'] = '1'
313 313 if source is not None and 'source' not in tr.hookargs:
314 314 tr.hookargs['source'] = source
315 315 if url is not None and 'url' not in tr.hookargs:
316 316 tr.hookargs['url'] = url
317 317 return processbundle(repo, unbundler, lambda: tr, op=op)
318 318
319 319 def processbundle(repo, unbundler, transactiongetter=None, op=None):
320 320 """This function process a bundle, apply effect to/from a repo
321 321
322 322 It iterates over each part then searches for and uses the proper handling
323 323 code to process the part. Parts are processed in order.
324 324
325 325 An unknown mandatory part will abort the process.
326 326
327 327 It is temporarily possible to provide a prebuilt bundleoperation to the
328 328 function. This is used to ensure output is properly propagated in case of
329 329 an error during the unbundling. This output capturing part will likely be
330 330 reworked and this ability will probably go away in the process.
331 331 """
332 332 if op is None:
333 333 if transactiongetter is None:
334 334 transactiongetter = _notransaction
335 335 op = bundleoperation(repo, transactiongetter)
336 336 # todo:
337 337 # - replace this with an init function soon.
338 338 # - exception catching
339 339 unbundler.params # make sure stream-level params are loaded and processed
340 340 if repo.ui.debugflag:
341 341 msg = ['bundle2-input-bundle:']
342 342 if unbundler.params:
343 343 msg.append(' %i params' % len(unbundler.params))
344 344 if op.gettransaction is None:
345 345 msg.append(' no-transaction')
346 346 else:
347 347 msg.append(' with-transaction')
348 348 msg.append('\n')
349 349 repo.ui.debug(''.join(msg))
350 350 iterparts = enumerate(unbundler.iterparts())
351 351 part = None
352 352 nbpart = 0
353 353 try:
354 354 for nbpart, part in iterparts:
355 355 _processpart(op, part)
356 356 except Exception as exc:
357 357 # Any exceptions seeking to the end of the bundle at this point are
358 358 # almost certainly related to the underlying stream being bad.
359 359 # And, chances are that the exception we're handling is related to
360 360 # getting in that bad state. So, we swallow the seeking error and
361 361 # re-raise the original error.
362 362 seekerror = False
363 363 try:
364 364 for nbpart, part in iterparts:
365 365 # consume the bundle content
366 366 part.seek(0, 2)
367 367 except Exception:
368 368 seekerror = True
369 369
370 370 # Small hack to let caller code distinguish exceptions from bundle2
371 371 # processing from processing the old format. This is mostly
372 372 # needed to handle different return codes to unbundle according to the
373 373 # type of bundle. We should probably clean up or drop this return code
374 374 # craziness in a future version.
375 375 exc.duringunbundle2 = True
376 376 salvaged = []
377 377 replycaps = None
378 378 if op.reply is not None:
379 379 salvaged = op.reply.salvageoutput()
380 380 replycaps = op.reply.capabilities
381 381 exc._replycaps = replycaps
382 382 exc._bundle2salvagedoutput = salvaged
383 383
384 384 # Re-raising from a variable loses the original stack. So only use
385 385 # that form if we need to.
386 386 if seekerror:
387 387 raise exc
388 388 else:
389 389 raise
390 390 finally:
391 391 repo.ui.debug('bundle2-input-bundle: %i parts total\n' % nbpart)
392 392
393 393 return op
394 394
395 395 def _processpart(op, part):
396 396 """process a single part from a bundle
397 397
398 398 The part is guaranteed to have been fully consumed when the function exits
399 399 (even if an exception is raised)."""
400 400 status = 'unknown' # used by debug output
401 401 hardabort = False
402 402 try:
403 403 try:
404 404 handler = parthandlermapping.get(part.type)
405 405 if handler is None:
406 406 status = 'unsupported-type'
407 407 raise error.BundleUnknownFeatureError(parttype=part.type)
408 408 indebug(op.ui, 'found a handler for part %r' % part.type)
409 409 unknownparams = part.mandatorykeys - handler.params
410 410 if unknownparams:
411 411 unknownparams = list(unknownparams)
412 412 unknownparams.sort()
413 413 status = 'unsupported-params (%s)' % unknownparams
414 414 raise error.BundleUnknownFeatureError(parttype=part.type,
415 415 params=unknownparams)
416 416 status = 'supported'
417 417 except error.BundleUnknownFeatureError as exc:
418 418 if part.mandatory: # mandatory parts
419 419 raise
420 420 indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
421 421 return # skip to part processing
422 422 finally:
423 423 if op.ui.debugflag:
424 424 msg = ['bundle2-input-part: "%s"' % part.type]
425 425 if not part.mandatory:
426 426 msg.append(' (advisory)')
427 427 nbmp = len(part.mandatorykeys)
428 428 nbap = len(part.params) - nbmp
429 429 if nbmp or nbap:
430 430 msg.append(' (params:')
431 431 if nbmp:
432 432 msg.append(' %i mandatory' % nbmp)
433 433 if nbap:
434 434 msg.append(' %i advisory' % nbap)
435 435 msg.append(')')
436 436 msg.append(' %s\n' % status)
437 437 op.ui.debug(''.join(msg))
438 438
439 439 # handler is called outside the above try block so that we don't
440 440 # risk catching KeyErrors from anything other than the
441 441 # parthandlermapping lookup (any KeyError raised by handler()
442 442 # itself represents a defect of a different variety).
443 443 output = None
444 444 if op.captureoutput and op.reply is not None:
445 445 op.ui.pushbuffer(error=True, subproc=True)
446 446 output = ''
447 447 try:
448 448 handler(op, part)
449 449 finally:
450 450 if output is not None:
451 451 output = op.ui.popbuffer()
452 452 if output:
453 453 outpart = op.reply.newpart('output', data=output,
454 454 mandatory=False)
455 455 outpart.addparam('in-reply-to', str(part.id), mandatory=False)
456 456 # If exiting or interrupted, do not attempt to seek the stream in the
457 457 # finally block below. This makes abort faster.
458 458 except (SystemExit, KeyboardInterrupt):
459 459 hardabort = True
460 460 raise
461 461 finally:
462 462 # consume the part content to not corrupt the stream.
463 463 if not hardabort:
464 464 part.seek(0, 2)
465 465
466 466
467 467 def decodecaps(blob):
468 468 """decode a bundle2 caps bytes blob into a dictionary
469 469
470 470 The blob is a list of capabilities (one per line)
471 471 Capabilities may have values using a line of the form::
472 472
473 473 capability=value1,value2,value3
474 474
475 475 The values are always a list."""
476 476 caps = {}
477 477 for line in blob.splitlines():
478 478 if not line:
479 479 continue
480 480 if '=' not in line:
481 481 key, vals = line, ()
482 482 else:
483 483 key, vals = line.split('=', 1)
484 484 vals = vals.split(',')
485 485 key = urlreq.unquote(key)
486 486 vals = [urlreq.unquote(v) for v in vals]
487 487 caps[key] = vals
488 488 return caps
489 489
490 490 def encodecaps(caps):
491 491 """encode a bundle2 caps dictionary into a bytes blob"""
492 492 chunks = []
493 493 for ca in sorted(caps):
494 494 vals = caps[ca]
495 495 ca = urlreq.quote(ca)
496 496 vals = [urlreq.quote(v) for v in vals]
497 497 if vals:
498 498 ca = "%s=%s" % (ca, ','.join(vals))
499 499 chunks.append(ca)
500 500 return '\n'.join(chunks)
501 501
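An illustrative round trip through the two helpers above:

    blob = encodecaps({'HG20': [], 'digests': ['md5', 'sha1']})
    assert blob == 'HG20\ndigests=md5,sha1'
    assert decodecaps(blob) == {'HG20': [], 'digests': ['md5', 'sha1']}
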
502 502 bundletypes = {
503 503 "": ("", 'UN'), # only when using unbundle on ssh and old http servers
504 504 # since the unification ssh accepts a header but there
505 505 # is no capability signaling it.
506 506 "HG20": (), # special-cased below
507 507 "HG10UN": ("HG10UN", 'UN'),
508 508 "HG10BZ": ("HG10", 'BZ'),
509 509 "HG10GZ": ("HG10GZ", 'GZ'),
510 510 }
511 511
512 512 # hgweb uses this list to communicate its preferred type
513 513 bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']
514 514
515 515 class bundle20(object):
516 516 """represent an outgoing bundle2 container
517 517
518 518 Use the `addparam` method to add stream level parameters and `newpart` to
519 519 populate it. Then call `getchunks` to retrieve all the binary chunks of
520 520 data that compose the bundle2 container."""
521 521
522 522 _magicstring = 'HG20'
523 523
524 524 def __init__(self, ui, capabilities=()):
525 525 self.ui = ui
526 526 self._params = []
527 527 self._parts = []
528 528 self.capabilities = dict(capabilities)
529 529 self._compengine = util.compengines.forbundletype('UN')
530 530 self._compopts = None
531 531
532 532 def setcompression(self, alg, compopts=None):
533 533 """setup core part compression to <alg>"""
534 534 if alg in (None, 'UN'):
535 535 return
536 536 assert not any(n.lower() == 'compression' for n, v in self._params)
537 537 self.addparam('Compression', alg)
538 538 self._compengine = util.compengines.forbundletype(alg)
539 539 self._compopts = compopts
540 540
541 541 @property
542 542 def nbparts(self):
543 543 """total number of parts added to the bundler"""
544 544 return len(self._parts)
545 545
546 546 # methods used to define the bundle2 content
547 547 def addparam(self, name, value=None):
548 548 """add a stream level parameter"""
549 549 if not name:
550 550 raise ValueError('empty parameter name')
551 551 if name[0] not in string.letters:
552 552 raise ValueError('non letter first character: %r' % name)
553 553 self._params.append((name, value))
554 554
555 555 def addpart(self, part):
556 556 """add a new part to the bundle2 container
557 557
558 558 Parts contain the actual application payload."""
559 559 assert part.id is None
560 560 part.id = len(self._parts) # very cheap counter
561 561 self._parts.append(part)
562 562
563 563 def newpart(self, typeid, *args, **kwargs):
564 564 """create a new part and add it to the containers
565 565
566 566 As the part is directly added to the containers. For now, this means
567 567 that any failure to properly initialize the part after calling
568 568 ``newpart`` should result in a failure of the whole bundling process.
569 569
570 570 You can still fall back to manually create and add if you need better
571 571 control."""
572 572 part = bundlepart(typeid, *args, **kwargs)
573 573 self.addpart(part)
574 574 return part
575 575
576 576 # methods used to generate the bundle2 stream
577 577 def getchunks(self):
578 578 if self.ui.debugflag:
579 579 msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
580 580 if self._params:
581 581 msg.append(' (%i params)' % len(self._params))
582 582 msg.append(' %i parts total\n' % len(self._parts))
583 583 self.ui.debug(''.join(msg))
584 584 outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
585 585 yield self._magicstring
586 586 param = self._paramchunk()
587 587 outdebug(self.ui, 'bundle parameter: %s' % param)
588 588 yield _pack(_fstreamparamsize, len(param))
589 589 if param:
590 590 yield param
591 591 for chunk in self._compengine.compressstream(self._getcorechunk(),
592 592 self._compopts):
593 593 yield chunk
594 594
595 595 def _paramchunk(self):
596 596 """return a encoded version of all stream parameters"""
597 597 blocks = []
598 598 for par, value in self._params:
599 599 par = urlreq.quote(par)
600 600 if value is not None:
601 601 value = urlreq.quote(value)
602 602 par = '%s=%s' % (par, value)
603 603 blocks.append(par)
604 604 return ' '.join(blocks)
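For illustration, a parameter list such as [('Compression', 'BZ'), ('simple', None)] encodes to the blob 'Compression=BZ simple': url-quoted names and values, space separated, with valueless parameters emitted as a bare name.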
605 605
606 606 def _getcorechunk(self):
607 607 """yield chunk for the core part of the bundle
608 608
609 609 (all but headers and parameters)"""
610 610 outdebug(self.ui, 'start of parts')
611 611 for part in self._parts:
612 612 outdebug(self.ui, 'bundle part: "%s"' % part.type)
613 613 for chunk in part.getchunks(ui=self.ui):
614 614 yield chunk
615 615 outdebug(self.ui, 'end of bundle')
616 616 yield _pack(_fpartheadersize, 0)
617 617
618 618
619 619 def salvageoutput(self):
620 620 """return a list with a copy of all output parts in the bundle
621 621
622 622 This is meant to be used during error handling to make sure we preserve
623 623 server output"""
624 624 salvaged = []
625 625 for part in self._parts:
626 626 if part.type.startswith('output'):
627 627 salvaged.append(part.copy())
628 628 return salvaged
629 629
630 630
631 631 class unpackermixin(object):
632 632 """A mixin to extract bytes and struct data from a stream"""
633 633
634 634 def __init__(self, fp):
635 635 self._fp = fp
636 636
637 637 def _unpack(self, format):
638 638 """unpack this struct format from the stream
639 639
640 640 This method is meant for internal usage by the bundle2 protocol only.
641 641 It directly manipulates the low-level stream, including bundle2-level
642 642 instructions.
643 643
644 644 Do not use it to implement higher-level logic or methods."""
645 645 data = self._readexact(struct.calcsize(format))
646 646 return _unpack(format, data)
647 647
648 648 def _readexact(self, size):
649 649 """read exactly <size> bytes from the stream
650 650
651 651 This method is meant for internal usage by the bundle2 protocol only.
652 652 It directly manipulates the low-level stream, including bundle2-level
653 653 instructions.
654 654
655 655 Do not use it to implement higher-level logic or methods."""
656 656 return changegroup.readexactly(self._fp, size)
657 657
658 658 def getunbundler(ui, fp, magicstring=None):
659 659 """return a valid unbundler object for a given magicstring"""
660 660 if magicstring is None:
661 661 magicstring = changegroup.readexactly(fp, 4)
662 662 magic, version = magicstring[0:2], magicstring[2:4]
663 663 if magic != 'HG':
664 664 raise error.Abort(_('not a Mercurial bundle'))
665 665 unbundlerclass = formatmap.get(version)
666 666 if unbundlerclass is None:
667 667 raise error.Abort(_('unknown bundle version %s') % version)
668 668 unbundler = unbundlerclass(ui, fp)
669 669 indebug(ui, 'start processing of %s stream' % magicstring)
670 670 return unbundler
671 671
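A hedged usage sketch; `someui` and `fp` are stand-ins (assumptions for illustration) for an initialized ui object and an open binary stream positioned at the start of a bundle:

    unbundler = getunbundler(someui, fp)  # reads and validates the magic string
    for part in unbundler.iterparts():
        someui.debug('part %s: %s\n' % (part.id, part.type))
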
672 672 class unbundle20(unpackermixin):
673 673 """interpret a bundle2 stream
674 674
675 675 This class is fed with a binary stream and yields parts through its
676 676 `iterparts` method."""
677 677
678 678 _magicstring = 'HG20'
679 679
680 680 def __init__(self, ui, fp):
681 681 """If header is specified, we do not read it out of the stream."""
682 682 self.ui = ui
683 683 self._compengine = util.compengines.forbundletype('UN')
684 684 self._compressed = None
685 685 super(unbundle20, self).__init__(fp)
686 686
687 687 @util.propertycache
688 688 def params(self):
689 689 """dictionary of stream level parameters"""
690 690 indebug(self.ui, 'reading bundle2 stream parameters')
691 691 params = {}
692 692 paramssize = self._unpack(_fstreamparamsize)[0]
693 693 if paramssize < 0:
694 694 raise error.BundleValueError('negative bundle param size: %i'
695 695 % paramssize)
696 696 if paramssize:
697 697 params = self._readexact(paramssize)
698 698 params = self._processallparams(params)
699 699 return params
700 700
701 701 def _processallparams(self, paramsblock):
702 702 """"""
703 703 params = util.sortdict()
704 704 for p in paramsblock.split(' '):
705 705 p = p.split('=', 1)
706 706 p = [urlreq.unquote(i) for i in p]
707 707 if len(p) < 2:
708 708 p.append(None)
709 709 self._processparam(*p)
710 710 params[p[0]] = p[1]
711 711 return params
712 712
713 713
714 714 def _processparam(self, name, value):
715 715 """process a parameter, applying its effect if needed
716 716
717 717 Parameters starting with a lower case letter are advisory and will be
718 718 ignored when unknown. Those starting with an upper case letter are
719 719 mandatory; for those, this function will raise an error when unknown.
720 720 
721 721 Note: no options are currently supported. Any input will be either
722 722 ignored or rejected.
723 723 """
724 724 if not name:
725 725 raise ValueError('empty parameter name')
726 726 if name[0] not in string.letters:
727 727 raise ValueError('non letter first character: %r' % name)
728 728 try:
729 729 handler = b2streamparamsmap[name.lower()]
730 730 except KeyError:
731 731 if name[0].islower():
732 732 indebug(self.ui, "ignoring unknown parameter %r" % name)
733 733 else:
734 734 raise error.BundleUnknownFeatureError(params=(name,))
735 735 else:
736 736 handler(self, name, value)
737 737
738 738 def _forwardchunks(self):
739 739 """utility to transfer a bundle2 as binary
740 740
741 741 This is made necessary by the fact that the 'getbundle' command over 'ssh'
742 742 has no way to know when the reply ends, relying on the bundle to be
743 743 interpreted to find its end. This is terrible and we are sorry, but we
744 744 needed to move forward to get general delta enabled.
745 745 """
746 746 yield self._magicstring
747 747 assert 'params' not in vars(self)
748 748 paramssize = self._unpack(_fstreamparamsize)[0]
749 749 if paramssize < 0:
750 750 raise error.BundleValueError('negative bundle param size: %i'
751 751 % paramssize)
752 752 yield _pack(_fstreamparamsize, paramssize)
753 753 if paramssize:
754 754 params = self._readexact(paramssize)
755 755 self._processallparams(params)
756 756 yield params
757 757 assert self._compengine.bundletype == 'UN'
758 758 # From there, the payload might need to be decompressed
759 759 self._fp = self._compengine.decompressorreader(self._fp)
760 760 emptycount = 0
761 761 while emptycount < 2:
762 762 # so we can brainlessly loop
763 763 assert _fpartheadersize == _fpayloadsize
764 764 size = self._unpack(_fpartheadersize)[0]
765 765 yield _pack(_fpartheadersize, size)
766 766 if size:
767 767 emptycount = 0
768 768 else:
769 769 emptycount += 1
770 770 continue
771 771 if size == flaginterrupt:
772 772 continue
773 773 elif size < 0:
774 774 raise error.BundleValueError('negative chunk size: %i' % size)
775 775 yield self._readexact(size)
776 776
777 777
778 778 def iterparts(self):
779 779 """yield all parts contained in the stream"""
780 780 # make sure params have been loaded
781 781 self.params
782 782 # From there, the payload needs to be decompressed
783 783 self._fp = self._compengine.decompressorreader(self._fp)
784 784 indebug(self.ui, 'start extraction of bundle2 parts')
785 785 headerblock = self._readpartheader()
786 786 while headerblock is not None:
787 787 part = unbundlepart(self.ui, headerblock, self._fp)
788 788 yield part
789 789 part.seek(0, 2)
790 790 headerblock = self._readpartheader()
791 791 indebug(self.ui, 'end of bundle2 stream')
792 792
793 793 def _readpartheader(self):
794 794 """reads a part header size and return the bytes blob
795 795
796 796 returns None if empty"""
797 797 headersize = self._unpack(_fpartheadersize)[0]
798 798 if headersize < 0:
799 799 raise error.BundleValueError('negative part header size: %i'
800 800 % headersize)
801 801 indebug(self.ui, 'part header size: %i' % headersize)
802 802 if headersize:
803 803 return self._readexact(headersize)
804 804 return None
805 805
806 806 def compressed(self):
807 807 self.params # load params
808 808 return self._compressed
809 809
810 810 def close(self):
811 811 """close underlying file"""
812 812 if util.safehasattr(self._fp, 'close'):
813 813 return self._fp.close()
814 814
815 815 formatmap = {'20': unbundle20}
816 816
817 817 b2streamparamsmap = {}
818 818
819 819 def b2streamparamhandler(name):
820 820 """register a handler for a stream level parameter"""
821 821 def decorator(func):
822 822 assert name not in b2streamparamsmap
823 823 b2streamparamsmap[name] = func
824 824 return func
825 825 return decorator
826 826
827 827 @b2streamparamhandler('compression')
828 828 def processcompression(unbundler, param, value):
829 829 """read compression parameter and install payload decompression"""
830 830 if value not in util.compengines.supportedbundletypes:
831 831 raise error.BundleUnknownFeatureError(params=(param,),
832 832 values=(value,))
833 833 unbundler._compengine = util.compengines.forbundletype(value)
834 834 if value is not None:
835 835 unbundler._compressed = True
836 836
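As a hedged sketch, an extension could register an additional stream-level parameter handler through the same decorator; the 'totalsize' parameter name here is invented purely for illustration:

    @b2streamparamhandler('totalsize')
    def processtotalsize(unbundler, param, value):
        # hypothetical advisory parameter: just surface it in debug output
        indebug(unbundler.ui, 'advertised bundle size: %s' % value)
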
837 837 class bundlepart(object):
838 838 """A bundle2 part contains application level payload
839 839
840 840 The part `type` is used to route the part to the application level
841 841 handler.
842 842
843 843 The part payload is contained in ``part.data``. It could be raw bytes or a
844 844 generator of byte chunks.
845 845
846 846 You can add parameters to the part using the ``addparam`` method.
847 847 Parameters can be either mandatory (default) or advisory. The remote side
848 848 should be able to safely ignore the advisory ones.
849 849 
850 850 Neither data nor parameters can be modified after generation has begun.
851 851 """
852 852
853 853 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
854 854 data='', mandatory=True):
855 855 validateparttype(parttype)
856 856 self.id = None
857 857 self.type = parttype
858 858 self._data = data
859 859 self._mandatoryparams = list(mandatoryparams)
860 860 self._advisoryparams = list(advisoryparams)
861 861 # checking for duplicated entries
862 862 self._seenparams = set()
863 863 for pname, __ in self._mandatoryparams + self._advisoryparams:
864 864 if pname in self._seenparams:
865 865 raise error.ProgrammingError('duplicated params: %s' % pname)
866 866 self._seenparams.add(pname)
867 867 # status of the part's generation:
868 868 # - None: not started,
869 869 # - False: currently being generated,
870 870 # - True: generation done.
871 871 self._generated = None
872 872 self.mandatory = mandatory
873 873
874 874 def __repr__(self):
875 875 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
876 876 return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
877 877 % (cls, id(self), self.id, self.type, self.mandatory))
878 878
879 879 def copy(self):
880 880 """return a copy of the part
881 881
882 882 The new part has the very same content but no partid assigned yet.
883 883 Parts with generated data cannot be copied."""
884 884 assert not util.safehasattr(self.data, 'next')
885 885 return self.__class__(self.type, self._mandatoryparams,
886 886 self._advisoryparams, self._data, self.mandatory)
887 887
888 888 # methods used to define the part content
889 889 @property
890 890 def data(self):
891 891 return self._data
892 892
893 893 @data.setter
894 894 def data(self, data):
895 895 if self._generated is not None:
896 896 raise error.ReadOnlyPartError('part is being generated')
897 897 self._data = data
898 898
899 899 @property
900 900 def mandatoryparams(self):
901 901 # make it an immutable tuple to force people through ``addparam``
902 902 return tuple(self._mandatoryparams)
903 903
904 904 @property
905 905 def advisoryparams(self):
906 906 # make it an immutable tuple to force people through ``addparam``
907 907 return tuple(self._advisoryparams)
908 908
909 909 def addparam(self, name, value='', mandatory=True):
910 910 """add a parameter to the part
911 911
912 912 If 'mandatory' is set to True, the remote handler must claim support
913 913 for this parameter or the unbundling will be aborted.
914 914
915 915 The 'name' and 'value' cannot exceed 255 bytes each.
916 916 """
917 917 if self._generated is not None:
918 918 raise error.ReadOnlyPartError('part is being generated')
919 919 if name in self._seenparams:
920 920 raise ValueError('duplicated params: %s' % name)
921 921 self._seenparams.add(name)
922 922 params = self._advisoryparams
923 923 if mandatory:
924 924 params = self._mandatoryparams
925 925 params.append((name, value))
926 926
927 927 # methods used to generate the bundle2 stream
928 928 def getchunks(self, ui):
929 929 if self._generated is not None:
930 930 raise error.ProgrammingError('part can only be consumed once')
931 931 self._generated = False
932 932
933 933 if ui.debugflag:
934 934 msg = ['bundle2-output-part: "%s"' % self.type]
935 935 if not self.mandatory:
936 936 msg.append(' (advisory)')
937 937 nbmp = len(self.mandatoryparams)
938 938 nbap = len(self.advisoryparams)
939 939 if nbmp or nbap:
940 940 msg.append(' (params:')
941 941 if nbmp:
942 942 msg.append(' %i mandatory' % nbmp)
943 943 if nbap:
944 944 msg.append(' %i advisory' % nbap)
945 945 msg.append(')')
946 946 if not self.data:
947 947 msg.append(' empty payload')
948 948 elif util.safehasattr(self.data, 'next'):
949 949 msg.append(' streamed payload')
950 950 else:
951 951 msg.append(' %i bytes payload' % len(self.data))
952 952 msg.append('\n')
953 953 ui.debug(''.join(msg))
954 954
955 955 #### header
956 956 if self.mandatory:
957 957 parttype = self.type.upper()
958 958 else:
959 959 parttype = self.type.lower()
960 960 outdebug(ui, 'part %s: "%s"' % (self.id, parttype))
961 961 ## parttype
962 962 header = [_pack(_fparttypesize, len(parttype)),
963 963 parttype, _pack(_fpartid, self.id),
964 964 ]
965 965 ## parameters
966 966 # count
967 967 manpar = self.mandatoryparams
968 968 advpar = self.advisoryparams
969 969 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
970 970 # size
971 971 parsizes = []
972 972 for key, value in manpar:
973 973 parsizes.append(len(key))
974 974 parsizes.append(len(value))
975 975 for key, value in advpar:
976 976 parsizes.append(len(key))
977 977 parsizes.append(len(value))
978 978 paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
979 979 header.append(paramsizes)
980 980 # key, value
981 981 for key, value in manpar:
982 982 header.append(key)
983 983 header.append(value)
984 984 for key, value in advpar:
985 985 header.append(key)
986 986 header.append(value)
987 987 ## finalize header
988 988 headerchunk = ''.join(header)
989 989 outdebug(ui, 'header chunk size: %i' % len(headerchunk))
990 990 yield _pack(_fpartheadersize, len(headerchunk))
991 991 yield headerchunk
992 992 ## payload
993 993 try:
994 994 for chunk in self._payloadchunks():
995 995 outdebug(ui, 'payload chunk size: %i' % len(chunk))
996 996 yield _pack(_fpayloadsize, len(chunk))
997 997 yield chunk
998 998 except GeneratorExit:
999 999 # GeneratorExit means that nobody is listening for our
1000 1000 # results anyway, so just bail quickly rather than trying
1001 1001 # to produce an error part.
1002 1002 ui.debug('bundle2-generatorexit\n')
1003 1003 raise
1004 1004 except BaseException as exc:
1005 1005 # backup exception data for later
1006 1006 ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
1007 1007 % exc)
1008 1008 tb = sys.exc_info()[2]
1009 1009 msg = 'unexpected error: %s' % exc
1010 1010 interpart = bundlepart('error:abort', [('message', msg)],
1011 1011 mandatory=False)
1012 1012 interpart.id = 0
1013 1013 yield _pack(_fpayloadsize, -1)
1014 1014 for chunk in interpart.getchunks(ui=ui):
1015 1015 yield chunk
1016 1016 outdebug(ui, 'closing payload chunk')
1017 1017 # abort current part payload
1018 1018 yield _pack(_fpayloadsize, 0)
1019 1019 pycompat.raisewithtb(exc, tb)
1020 1020 # end of payload
1021 1021 outdebug(ui, 'closing payload chunk')
1022 1022 yield _pack(_fpayloadsize, 0)
1023 1023 self._generated = True
1024 1024
1025 1025 def _payloadchunks(self):
1026 1026 """yield chunks of a the part payload
1027 1027
1028 1028 Exists to handle the different methods to provide data to a part."""
1029 1029 # we only support fixed size data now.
1030 1030 # This will be improved in the future.
1031 1031 if util.safehasattr(self.data, 'next'):
1032 1032 buff = util.chunkbuffer(self.data)
1033 1033 chunk = buff.read(preferedchunksize)
1034 1034 while chunk:
1035 1035 yield chunk
1036 1036 chunk = buff.read(preferedchunksize)
1037 1037 elif len(self.data):
1038 1038 yield self.data
1039 1039
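A minimal sketch (illustrative only) of the `<chunksize><chunkdata>` framing that `getchunks` applies to these payload chunks, per the module docstring; `_frameillustration` is a made-up name:

    def _frameillustration(chunks):
        # frame each chunk with its int32 size, then close the payload
        # with a zero-size chunk
        for chunk in chunks:
            yield _pack(_fpayloadsize, len(chunk))
            yield chunk
        yield _pack(_fpayloadsize, 0)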
1040 1040
1041 1041 flaginterrupt = -1
1042 1042
1043 1043 class interrupthandler(unpackermixin):
1044 1044 """read one part and process it with restricted capability
1045 1045
1046 1046 This allows to transmit exception raised on the producer size during part
1047 1047 iteration while the consumer is reading a part.
1048 1048
1049 1049 Part processed in this manner only have access to a ui object,"""
1050 1050
1051 1051 def __init__(self, ui, fp):
1052 1052 super(interrupthandler, self).__init__(fp)
1053 1053 self.ui = ui
1054 1054
1055 1055 def _readpartheader(self):
1056 1056 """reads a part header size and return the bytes blob
1057 1057
1058 1058 returns None if empty"""
1059 1059 headersize = self._unpack(_fpartheadersize)[0]
1060 1060 if headersize < 0:
1061 1061 raise error.BundleValueError('negative part header size: %i'
1062 1062 % headersize)
1063 1063 indebug(self.ui, 'part header size: %i' % headersize)
1064 1064 if headersize:
1065 1065 return self._readexact(headersize)
1066 1066 return None
1067 1067
1068 1068 def __call__(self):
1069 1069
1070 1070 self.ui.debug('bundle2-input-stream-interrupt:'
1071 1071 ' opening out of band context\n')
1072 1072 indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
1073 1073 headerblock = self._readpartheader()
1074 1074 if headerblock is None:
1075 1075 indebug(self.ui, 'no part found during interruption.')
1076 1076 return
1077 1077 part = unbundlepart(self.ui, headerblock, self._fp)
1078 1078 op = interruptoperation(self.ui)
1079 1079 _processpart(op, part)
1080 1080 self.ui.debug('bundle2-input-stream-interrupt:'
1081 1081 ' closing out of band context\n')
1082 1082
1083 1083 class interruptoperation(object):
1084 1084 """A limited operation to be use by part handler during interruption
1085 1085
1086 1086 It only has access to a ui object.
1087 1087 """
1088 1088
1089 1089 def __init__(self, ui):
1090 1090 self.ui = ui
1091 1091 self.reply = None
1092 1092 self.captureoutput = False
1093 1093
1094 1094 @property
1095 1095 def repo(self):
1096 1096 raise error.ProgrammingError('no repo access from stream interruption')
1097 1097
1098 1098 def gettransaction(self):
1099 1099 raise TransactionUnavailable('no repo access from stream interruption')
1100 1100
1101 1101 class unbundlepart(unpackermixin):
1102 1102 """a bundle part read from a bundle"""
1103 1103
1104 1104 def __init__(self, ui, header, fp):
1105 1105 super(unbundlepart, self).__init__(fp)
1106 1106 self._seekable = (util.safehasattr(fp, 'seek') and
1107 1107 util.safehasattr(fp, 'tell'))
1108 1108 self.ui = ui
1109 1109 # unbundle state attr
1110 1110 self._headerdata = header
1111 1111 self._headeroffset = 0
1112 1112 self._initialized = False
1113 1113 self.consumed = False
1114 1114 # part data
1115 1115 self.id = None
1116 1116 self.type = None
1117 1117 self.mandatoryparams = None
1118 1118 self.advisoryparams = None
1119 1119 self.params = None
1120 1120 self.mandatorykeys = ()
1121 1121 self._payloadstream = None
1122 1122 self._readheader()
1123 1123 self._mandatory = None
1124 1124 self._chunkindex = [] #(payload, file) position tuples for chunk starts
1125 1125 self._pos = 0
1126 1126
1127 1127 def _fromheader(self, size):
1128 1128 """return the next <size> byte from the header"""
1129 1129 offset = self._headeroffset
1130 1130 data = self._headerdata[offset:(offset + size)]
1131 1131 self._headeroffset = offset + size
1132 1132 return data
1133 1133
1134 1134 def _unpackheader(self, format):
1135 1135 """read given format from header
1136 1136
1137 1137 This automatically computes the size of the format to read."""
1138 1138 data = self._fromheader(struct.calcsize(format))
1139 1139 return _unpack(format, data)
1140 1140
1141 1141 def _initparams(self, mandatoryparams, advisoryparams):
1142 1142 """internal function to setup all logic related parameters"""
1143 1143 # make it read only to prevent people touching it by mistake.
1144 1144 self.mandatoryparams = tuple(mandatoryparams)
1145 1145 self.advisoryparams = tuple(advisoryparams)
1146 1146 # user friendly UI
1147 1147 self.params = util.sortdict(self.mandatoryparams)
1148 1148 self.params.update(self.advisoryparams)
1149 1149 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1150 1150
1151 1151 def _payloadchunks(self, chunknum=0):
1152 1152 '''seek to specified chunk and start yielding data'''
1153 1153 if len(self._chunkindex) == 0:
1154 1154 assert chunknum == 0, 'Must start with chunk 0'
1155 1155 self._chunkindex.append((0, self._tellfp()))
1156 1156 else:
1157 1157 assert chunknum < len(self._chunkindex), \
1158 1158 'Unknown chunk %d' % chunknum
1159 1159 self._seekfp(self._chunkindex[chunknum][1])
1160 1160
1161 1161 pos = self._chunkindex[chunknum][0]
1162 1162 payloadsize = self._unpack(_fpayloadsize)[0]
1163 1163 indebug(self.ui, 'payload chunk size: %i' % payloadsize)
1164 1164 while payloadsize:
1165 1165 if payloadsize == flaginterrupt:
1166 1166 # interruption detection, the handler will now read a
1167 1167 # single part and process it.
1168 1168 interrupthandler(self.ui, self._fp)()
1169 1169 elif payloadsize < 0:
1170 1170 msg = 'negative payload chunk size: %i' % payloadsize
1171 1171 raise error.BundleValueError(msg)
1172 1172 else:
1173 1173 result = self._readexact(payloadsize)
1174 1174 chunknum += 1
1175 1175 pos += payloadsize
1176 1176 if chunknum == len(self._chunkindex):
1177 1177 self._chunkindex.append((pos, self._tellfp()))
1178 1178 yield result
1179 1179 payloadsize = self._unpack(_fpayloadsize)[0]
1180 1180 indebug(self.ui, 'payload chunk size: %i' % payloadsize)
1181 1181
1182 1182 def _findchunk(self, pos):
1183 1183 '''for a given payload position, return a chunk number and offset'''
1184 1184 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1185 1185 if ppos == pos:
1186 1186 return chunk, 0
1187 1187 elif ppos > pos:
1188 1188 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1189 1189 raise ValueError('Unknown chunk')
1190 1190
1191 1191 def _readheader(self):
1192 1192 """read the header and setup the object"""
1193 1193 typesize = self._unpackheader(_fparttypesize)[0]
1194 1194 self.type = self._fromheader(typesize)
1195 1195 indebug(self.ui, 'part type: "%s"' % self.type)
1196 1196 self.id = self._unpackheader(_fpartid)[0]
1197 1197 indebug(self.ui, 'part id: "%s"' % self.id)
1198 1198 # extract mandatory bit from type
1199 1199 self.mandatory = (self.type != self.type.lower())
1200 1200 self.type = self.type.lower()
1201 1201 ## reading parameters
1202 1202 # param count
1203 1203 mancount, advcount = self._unpackheader(_fpartparamcount)
1204 1204 indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
1205 1205 # param size
1206 1206 fparamsizes = _makefpartparamsizes(mancount + advcount)
1207 1207 paramsizes = self._unpackheader(fparamsizes)
1208 1208 # make it a list of couple again
1209 1209 paramsizes = zip(paramsizes[::2], paramsizes[1::2])
1210 1210 # split mandatory from advisory
1211 1211 mansizes = paramsizes[:mancount]
1212 1212 advsizes = paramsizes[mancount:]
1213 1213 # retrieve param value
1214 1214 manparams = []
1215 1215 for key, value in mansizes:
1216 1216 manparams.append((self._fromheader(key), self._fromheader(value)))
1217 1217 advparams = []
1218 1218 for key, value in advsizes:
1219 1219 advparams.append((self._fromheader(key), self._fromheader(value)))
1220 1220 self._initparams(manparams, advparams)
1221 1221 ## part payload
1222 1222 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1223 1223 # the header has been read; mark the part as initialized
1224 1224 self._initialized = True
1225 1225
1226 1226 def read(self, size=None):
1227 1227 """read payload data"""
1228 1228 if not self._initialized:
1229 1229 self._readheader()
1230 1230 if size is None:
1231 1231 data = self._payloadstream.read()
1232 1232 else:
1233 1233 data = self._payloadstream.read(size)
1234 1234 self._pos += len(data)
1235 1235 if size is None or len(data) < size:
1236 1236 if not self.consumed and self._pos:
1237 1237 self.ui.debug('bundle2-input-part: total payload size %i\n'
1238 1238 % self._pos)
1239 1239 self.consumed = True
1240 1240 return data
1241 1241
1242 1242 def tell(self):
1243 1243 return self._pos
1244 1244
1245 1245 def seek(self, offset, whence=0):
1246 1246 if whence == 0:
1247 1247 newpos = offset
1248 1248 elif whence == 1:
1249 1249 newpos = self._pos + offset
1250 1250 elif whence == 2:
1251 1251 if not self.consumed:
1252 1252 self.read()
1253 1253 newpos = self._chunkindex[-1][0] - offset
1254 1254 else:
1255 1255 raise ValueError('Unknown whence value: %r' % (whence,))
1256 1256
1257 1257 if newpos > self._chunkindex[-1][0] and not self.consumed:
1258 1258 self.read()
1259 1259 if not 0 <= newpos <= self._chunkindex[-1][0]:
1260 1260 raise ValueError('Offset out of range')
1261 1261
1262 1262 if self._pos != newpos:
1263 1263 chunk, internaloffset = self._findchunk(newpos)
1264 1264 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1265 1265 adjust = self.read(internaloffset)
1266 1266 if len(adjust) != internaloffset:
1267 1267 raise error.Abort(_('Seek failed\n'))
1268 1268 self._pos = newpos
1269 1269
1270 1270 def _seekfp(self, offset, whence=0):
1271 1271 """move the underlying file pointer
1272 1272
1273 1273 This method is meant for internal usage by the bundle2 protocol only.
1274 1274 It directly manipulates the low-level stream, including bundle2-level
1275 1275 instructions.
1276 1276
1277 1277 Do not use it to implement higher-level logic or methods."""
1278 1278 if self._seekable:
1279 1279 return self._fp.seek(offset, whence)
1280 1280 else:
1281 1281 raise NotImplementedError(_('File pointer is not seekable'))
1282 1282
1283 1283 def _tellfp(self):
1284 1284 """return the file offset, or None if file is not seekable
1285 1285
1286 1286 This method is meant for internal usage by the bundle2 protocol only.
1287 1287 It directly manipulates the low-level stream, including bundle2-level
1288 1288 instructions.
1289 1289
1290 1290 Do not use it to implement higher-level logic or methods."""
1291 1291 if self._seekable:
1292 1292 try:
1293 1293 return self._fp.tell()
1294 1294 except IOError as e:
1295 1295 if e.errno == errno.ESPIPE:
1296 1296 self._seekable = False
1297 1297 else:
1298 1298 raise
1299 1299 return None
1300 1300
1301 1301 # These are only the static capabilities.
1302 1302 # Check the 'getrepocaps' function for the rest.
1303 1303 capabilities = {'HG20': (),
1304 1304 'error': ('abort', 'unsupportedcontent', 'pushraced',
1305 1305 'pushkey'),
1306 1306 'listkeys': (),
1307 1307 'pushkey': (),
1308 1308 'digests': tuple(sorted(util.DIGESTS.keys())),
1309 1309 'remote-changegroup': ('http', 'https'),
1310 1310 'hgtagsfnodes': (),
1311 1311 }
1312 1312
1313 1313 def getrepocaps(repo, allowpushback=False):
1314 1314 """return the bundle2 capabilities for a given repo
1315 1315
1316 1316 Exists to allow extensions (like evolution) to mutate the capabilities.
1317 1317 """
1318 1318 caps = capabilities.copy()
1319 1319 caps['changegroup'] = tuple(sorted(
1320 1320 changegroup.supportedincomingversions(repo)))
1321 1321 if obsolete.isenabled(repo, obsolete.exchangeopt):
1322 1322 supportedformat = tuple('V%i' % v for v in obsolete.formats)
1323 1323 caps['obsmarkers'] = supportedformat
1324 1324 if allowpushback:
1325 1325 caps['pushback'] = ()
1326 if not repo.ui.configbool('experimental', 'checkheads-strict', True):
1327 caps['checkheads'] = ('related',)
1326 1328 return caps
1327 1329
1328 1330 def bundle2caps(remote):
1329 1331 """return the bundle capabilities of a peer as dict"""
1330 1332 raw = remote.capable('bundle2')
1331 1333 if not raw and raw != '':
1332 1334 return {}
1333 1335 capsblob = urlreq.unquote(remote.capable('bundle2'))
1334 1336 return decodecaps(capsblob)
1335 1337
1336 1338 def obsmarkersversion(caps):
1337 1339 """extract the list of supported obsmarkers versions from a bundle2caps dict
1338 1340 """
1339 1341 obscaps = caps.get('obsmarkers', ())
1340 1342 return [int(c[1:]) for c in obscaps if c.startswith('V')]
1341 1343
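For illustration:

    assert obsmarkersversion({'obsmarkers': ('V1', 'V2')}) == [1, 2]
    assert obsmarkersversion({}) == []
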
1342 1344 def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
1343 1345 vfs=None, compression=None, compopts=None):
1344 1346 if bundletype.startswith('HG10'):
1345 1347 cg = changegroup.getchangegroup(repo, source, outgoing, version='01')
1346 1348 return writebundle(ui, cg, filename, bundletype, vfs=vfs,
1347 1349 compression=compression, compopts=compopts)
1348 1350 elif not bundletype.startswith('HG20'):
1349 1351 raise error.ProgrammingError('unknown bundle type: %s' % bundletype)
1350 1352
1351 1353 caps = {}
1352 1354 if 'obsolescence' in opts:
1353 1355 caps['obsmarkers'] = ('V1',)
1354 1356 bundle = bundle20(ui, caps)
1355 1357 bundle.setcompression(compression, compopts)
1356 1358 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1357 1359 chunkiter = bundle.getchunks()
1358 1360
1359 1361 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1360 1362
1361 1363 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1362 1364 # We should eventually reconcile this logic with the one behind
1363 1365 # 'exchange.getbundle2partsgenerator'.
1364 1366 #
1365 1367 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1366 1368 # different right now. So we keep them separated for now for the sake of
1367 1369 # simplicity.
1368 1370
1369 1371 # we always want a changegroup in such bundle
1370 1372 cgversion = opts.get('cg.version')
1371 1373 if cgversion is None:
1372 1374 cgversion = changegroup.safeversion(repo)
1373 1375 cg = changegroup.getchangegroup(repo, source, outgoing,
1374 1376 version=cgversion)
1375 1377 part = bundler.newpart('changegroup', data=cg.getchunks())
1376 1378 part.addparam('version', cg.version)
1377 1379 if 'clcount' in cg.extras:
1378 1380 part.addparam('nbchanges', str(cg.extras['clcount']),
1379 1381 mandatory=False)
1380 1382
1381 1383 addparttagsfnodescache(repo, bundler, outgoing)
1382 1384
1383 1385 if opts.get('obsolescence', False):
1384 1386 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1385 1387 buildobsmarkerspart(bundler, obsmarkers)
1386 1388
1387 1389 def addparttagsfnodescache(repo, bundler, outgoing):
1388 1390 # we include the tags fnode cache for the bundle changeset
1389 1391 # (as an optional part)
1390 1392 cache = tags.hgtagsfnodescache(repo.unfiltered())
1391 1393 chunks = []
1392 1394
1393 1395 # .hgtags fnodes are only relevant for head changesets. While we could
1394 1396 # transfer values for all known nodes, there will likely be little to
1395 1397 # no benefit.
1396 1398 #
1397 1399 # We don't bother using a generator to produce output data because
1398 1400 # a) we only have 40 bytes per head and even esoteric numbers of heads
1399 1401 # consume little memory (1M heads is 40MB) b) we don't want to send the
1400 1402 # part if we don't have entries and knowing if we have entries requires
1401 1403 # cache lookups.
1402 1404 for node in outgoing.missingheads:
1403 1405 # Don't compute missing, as this may slow down serving.
1404 1406 fnode = cache.getfnode(node, computemissing=False)
1405 1407 if fnode is not None:
1406 1408 chunks.extend([node, fnode])
1407 1409
1408 1410 if chunks:
1409 1411 bundler.newpart('hgtagsfnodes', data=''.join(chunks))
1410 1412
1411 1413 def buildobsmarkerspart(bundler, markers):
1412 1414 """add an obsmarker part to the bundler with <markers>
1413 1415
1414 1416 No part is created if markers is empty.
1415 1417 Raises ValueError if the bundler doesn't support any known obsmarker format.
1416 1418 """
1417 1419 if not markers:
1418 1420 return None
1419 1421
1420 1422 remoteversions = obsmarkersversion(bundler.capabilities)
1421 1423 version = obsolete.commonversion(remoteversions)
1422 1424 if version is None:
1423 1425 raise ValueError('bundler does not support common obsmarker format')
1424 1426 stream = obsolete.encodemarkers(markers, True, version=version)
1425 1427 return bundler.newpart('obsmarkers', data=stream)
1426 1428
1427 1429 def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
1428 1430 compopts=None):
1429 1431 """Write a bundle file and return its filename.
1430 1432
1431 1433 Existing files will not be overwritten.
1432 1434 If no filename is specified, a temporary file is created.
1433 1435 bz2 compression can be turned off.
1434 1436 The bundle file will be deleted in case of errors.
1435 1437 """
1436 1438
1437 1439 if bundletype == "HG20":
1438 1440 bundle = bundle20(ui)
1439 1441 bundle.setcompression(compression, compopts)
1440 1442 part = bundle.newpart('changegroup', data=cg.getchunks())
1441 1443 part.addparam('version', cg.version)
1442 1444 if 'clcount' in cg.extras:
1443 1445 part.addparam('nbchanges', str(cg.extras['clcount']),
1444 1446 mandatory=False)
1445 1447 chunkiter = bundle.getchunks()
1446 1448 else:
1447 1449 # compression argument is only for the bundle2 case
1448 1450 assert compression is None
1449 1451 if cg.version != '01':
1450 1452 raise error.Abort(_('old bundle types only support v1 '
1451 1453 'changegroups'))
1452 1454 header, comp = bundletypes[bundletype]
1453 1455 if comp not in util.compengines.supportedbundletypes:
1454 1456 raise error.Abort(_('unknown stream compression type: %s')
1455 1457 % comp)
1456 1458 compengine = util.compengines.forbundletype(comp)
1457 1459 def chunkiter():
1458 1460 yield header
1459 1461 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1460 1462 yield chunk
1461 1463 chunkiter = chunkiter()
1462 1464
1463 1465 # parse the changegroup data, otherwise we will block
1464 1466 # in case of sshrepo because we don't know the end of the stream
1465 1467 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1466 1468
1467 1469 @parthandler('changegroup', ('version', 'nbchanges', 'treemanifest'))
1468 1470 def handlechangegroup(op, inpart):
1469 1471 """apply a changegroup part on the repo
1470 1472
1471 1473 This is a very early implementation that will be massively reworked before
1472 1474 being inflicted on any end-user.
1473 1475 """
1474 1476 # Make sure we trigger a transaction creation
1475 1477 #
1476 1478 # The addchangegroup function will get a transaction object by itself, but
1477 1479 # we need to make sure we trigger the creation of a transaction object used
1478 1480 # for the whole processing scope.
1479 1481 op.gettransaction()
1480 1482 unpackerversion = inpart.params.get('version', '01')
1481 1483 # We should raise an appropriate exception here
1482 1484 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1483 1485 # the source and url passed here are overwritten by the ones contained in
1484 1486 # the transaction.hookargs argument. So 'bundle2' is a placeholder
1485 1487 nbchangesets = None
1486 1488 if 'nbchanges' in inpart.params:
1487 1489 nbchangesets = int(inpart.params.get('nbchanges'))
1488 1490 if ('treemanifest' in inpart.params and
1489 1491 'treemanifest' not in op.repo.requirements):
1490 1492 if len(op.repo.changelog) != 0:
1491 1493 raise error.Abort(_(
1492 1494 "bundle contains tree manifests, but local repo is "
1493 1495 "non-empty and does not use tree manifests"))
1494 1496 op.repo.requirements.add('treemanifest')
1495 1497 op.repo._applyopenerreqs()
1496 1498 op.repo._writerequirements()
1497 1499 ret = cg.apply(op.repo, 'bundle2', 'bundle2', expectedtotal=nbchangesets)
1498 1500 op.records.add('changegroup', {'return': ret})
1499 1501 if op.reply is not None:
1500 1502 # This is definitely not the final form of this
1501 1503 # return. But one needs to start somewhere.
1502 1504 part = op.reply.newpart('reply:changegroup', mandatory=False)
1503 1505 part.addparam('in-reply-to', str(inpart.id), mandatory=False)
1504 1506 part.addparam('return', '%i' % ret, mandatory=False)
1505 1507 assert not inpart.read()
1506 1508
1507 1509 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1508 1510 ['digest:%s' % k for k in util.DIGESTS.keys()])
1509 1511 @parthandler('remote-changegroup', _remotechangegroupparams)
1510 1512 def handleremotechangegroup(op, inpart):
1511 1513 """apply a bundle10 on the repo, given an url and validation information
1512 1514
1513 1515 All the information about the remote bundle to import is given as
1514 1516 parameters. The parameters include:
1515 1517 - url: the url to the bundle10.
1516 1518 - size: the bundle10 file size. It is used to validate that what was
1517 1519 retrieved by the client matches the server's knowledge about the bundle.
1518 1520 - digests: a space separated list of the digest types provided as
1519 1521 parameters.
1520 1522 - digest:<digest-type>: the hexadecimal representation of the digest with
1521 1523 that name. Like the size, it is used to validate that what was retrieved
1522 1524 by the client matches what the server knows about the bundle.
1523 1525
1524 1526 When multiple digest types are given, all of them are checked.
1525 1527 """
1526 1528 try:
1527 1529 raw_url = inpart.params['url']
1528 1530 except KeyError:
1529 1531 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1530 1532 parsed_url = util.url(raw_url)
1531 1533 if parsed_url.scheme not in capabilities['remote-changegroup']:
1532 1534 raise error.Abort(_('remote-changegroup does not support %s urls') %
1533 1535 parsed_url.scheme)
1534 1536
1535 1537 try:
1536 1538 size = int(inpart.params['size'])
1537 1539 except ValueError:
1538 1540 raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
1539 1541 % 'size')
1540 1542 except KeyError:
1541 1543 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1542 1544
1543 1545 digests = {}
1544 1546 for typ in inpart.params.get('digests', '').split():
1545 1547 param = 'digest:%s' % typ
1546 1548 try:
1547 1549 value = inpart.params[param]
1548 1550 except KeyError:
1549 1551 raise error.Abort(_('remote-changegroup: missing "%s" param') %
1550 1552 param)
1551 1553 digests[typ] = value
1552 1554
1553 1555 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1554 1556
1555 1557 # Make sure we trigger a transaction creation
1556 1558 #
1557 1559 # The addchangegroup function will get a transaction object by itself, but
1558 1560 # we need to make sure we trigger the creation of a transaction object used
1559 1561 # for the whole processing scope.
1560 1562 op.gettransaction()
1561 1563 from . import exchange
1562 1564 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1563 1565 if not isinstance(cg, changegroup.cg1unpacker):
1564 1566 raise error.Abort(_('%s: not a bundle version 1.0') %
1565 1567 util.hidepassword(raw_url))
1566 1568 ret = cg.apply(op.repo, 'bundle2', 'bundle2')
1567 1569 op.records.add('changegroup', {'return': ret})
1568 1570 if op.reply is not None:
1569 1571 # This is definitely not the final form of this
1570 1572 # return. But one needs to start somewhere.
1571 1573 part = op.reply.newpart('reply:changegroup')
1572 1574 part.addparam('in-reply-to', str(inpart.id), mandatory=False)
1573 1575 part.addparam('return', '%i' % ret, mandatory=False)
1574 1576 try:
1575 1577 real_part.validate()
1576 1578 except error.Abort as e:
1577 1579 raise error.Abort(_('bundle at %s is corrupted:\n%s') %
1578 1580 (util.hidepassword(raw_url), str(e)))
1579 1581 assert not inpart.read()
1580 1582
1581 1583 @parthandler('reply:changegroup', ('return', 'in-reply-to'))
1582 1584 def handlereplychangegroup(op, inpart):
1583 1585 ret = int(inpart.params['return'])
1584 1586 replyto = int(inpart.params['in-reply-to'])
1585 1587 op.records.add('changegroup', {'return': ret}, replyto)
1586 1588
1587 1589 @parthandler('check:heads')
1588 1590 def handlecheckheads(op, inpart):
1589 1591 """check that head of the repo did not change
1590 1592
1591 1593 This is used to detect a push race when using unbundle.
1592 1594 This replaces the "heads" argument of unbundle."""
1593 1595 h = inpart.read(20)
1594 1596 heads = []
1595 1597 while len(h) == 20:
1596 1598 heads.append(h)
1597 1599 h = inpart.read(20)
1598 1600 assert not h
1599 1601 # Trigger a transaction so that we are guaranteed to have the lock now.
1600 1602 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1601 1603 op.gettransaction()
1602 1604 if sorted(heads) != sorted(op.repo.heads()):
1603 1605 raise error.PushRaced('repository changed while pushing - '
1604 1606 'please try again')
1605 1607
1608 @parthandler('check:updated-heads')
1609 def handlecheckupdatedheads(op, inpart):
1610 """check for race on the heads touched by a push
1611
1612 This is similar to 'check:heads' but focuses on the heads actually updated
1613 during the push. If other activity happens on unrelated heads, it is
1614 ignored.
1615
1616 This allows servers with high traffic to avoid push contention as long as
1617 only unrelated parts of the graph are involved.
1618 h = inpart.read(20)
1619 heads = []
1620 while len(h) == 20:
1621 heads.append(h)
1622 h = inpart.read(20)
1623 assert not h
1624 # Trigger a transaction so that we are guaranteed to have the lock now.
1625 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1626 op.gettransaction()
1627
1628 currentheads = set()
1629 for ls in op.repo.branchmap().itervalues():
1630 currentheads.update(ls)
1631
1632 for h in heads:
1633 if h not in currentheads:
1634 raise error.PushRaced('repository changed while pushing - '
1635 'please try again')
1636
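Both check:* parts share a trivial framing: the payload is just the concatenation of 20-byte binary nodes, with no count or separator. A hedged sketch of the sending side, assuming 'bundler' and a list 'heads' of binary nodes:

    bundler.newpart('check:updated-heads', data=''.join(heads))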
1606 1637 @parthandler('output')
1607 1638 def handleoutput(op, inpart):
1608 1639 """forward output captured on the server to the client"""
1609 1640 for line in inpart.read().splitlines():
1610 1641 op.ui.status(_('remote: %s\n') % line)
1611 1642
1612 1643 @parthandler('replycaps')
1613 1644 def handlereplycaps(op, inpart):
1614 1645 """Notify that a reply bundle should be created
1615 1646
1616 1647 The payload contains the capabilities information for the reply"""
1617 1648 caps = decodecaps(inpart.read())
1618 1649 if op.reply is None:
1619 1650 op.reply = bundle20(op.ui, caps)
1620 1651
1621 1652 class AbortFromPart(error.Abort):
1622 1653 """Sub-class of Abort that denotes an error from a bundle2 part."""
1623 1654
1624 1655 @parthandler('error:abort', ('message', 'hint'))
1625 1656 def handleerrorabort(op, inpart):
1626 1657 """Used to transmit abort error over the wire"""
1627 1658 raise AbortFromPart(inpart.params['message'],
1628 1659 hint=inpart.params.get('hint'))
1629 1660
1630 1661 @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
1631 1662 'in-reply-to'))
1632 1663 def handleerrorpushkey(op, inpart):
1633 1664 """Used to transmit failure of a mandatory pushkey over the wire"""
1634 1665 kwargs = {}
1635 1666 for name in ('namespace', 'key', 'new', 'old', 'ret'):
1636 1667 value = inpart.params.get(name)
1637 1668 if value is not None:
1638 1669 kwargs[name] = value
1639 1670 raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)
1640 1671
1641 1672 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
1642 1673 def handleerrorunsupportedcontent(op, inpart):
1643 1674 """Used to transmit unknown content error over the wire"""
1644 1675 kwargs = {}
1645 1676 parttype = inpart.params.get('parttype')
1646 1677 if parttype is not None:
1647 1678 kwargs['parttype'] = parttype
1648 1679 params = inpart.params.get('params')
1649 1680 if params is not None:
1650 1681 kwargs['params'] = params.split('\0')
1651 1682
1652 1683 raise error.BundleUnknownFeatureError(**kwargs)
1653 1684
1654 1685 @parthandler('error:pushraced', ('message',))
1655 1686 def handleerrorpushraced(op, inpart):
1656 1687 """Used to transmit push race error over the wire"""
1657 1688 raise error.ResponseError(_('push failed:'), inpart.params['message'])
1658 1689
1659 1690 @parthandler('listkeys', ('namespace',))
1660 1691 def handlelistkeys(op, inpart):
1661 1692 """retrieve pushkey namespace content stored in a bundle2"""
1662 1693 namespace = inpart.params['namespace']
1663 1694 r = pushkey.decodekeys(inpart.read())
1664 1695 op.records.add('listkeys', (namespace, r))
1665 1696
1666 1697 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
1667 1698 def handlepushkey(op, inpart):
1668 1699 """process a pushkey request"""
1669 1700 dec = pushkey.decode
1670 1701 namespace = dec(inpart.params['namespace'])
1671 1702 key = dec(inpart.params['key'])
1672 1703 old = dec(inpart.params['old'])
1673 1704 new = dec(inpart.params['new'])
1674 1705 # Grab the transaction to ensure that we have the lock before performing the
1675 1706 # pushkey.
1676 1707 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1677 1708 op.gettransaction()
1678 1709 ret = op.repo.pushkey(namespace, key, old, new)
1679 1710 record = {'namespace': namespace,
1680 1711 'key': key,
1681 1712 'old': old,
1682 1713 'new': new}
1683 1714 op.records.add('pushkey', record)
1684 1715 if op.reply is not None:
1685 1716 rpart = op.reply.newpart('reply:pushkey')
1686 1717 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1687 1718 rpart.addparam('return', '%i' % ret, mandatory=False)
1688 1719 if inpart.mandatory and not ret:
1689 1720 kwargs = {}
1690 1721 for key in ('namespace', 'key', 'new', 'old', 'ret'):
1691 1722 if key in inpart.params:
1692 1723 kwargs[key] = inpart.params[key]
1693 1724 raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)
1694 1725
1695 1726 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
1696 1727 def handlepushkeyreply(op, inpart):
1697 1728 """retrieve the result of a pushkey request"""
1698 1729 ret = int(inpart.params['return'])
1699 1730 partid = int(inpart.params['in-reply-to'])
1700 1731 op.records.add('pushkey', {'return': ret}, partid)
1701 1732
1702 1733 @parthandler('obsmarkers')
1703 1734 def handleobsmarker(op, inpart):
1704 1735 """add a stream of obsmarkers to the repo"""
1705 1736 tr = op.gettransaction()
1706 1737 markerdata = inpart.read()
1707 1738 if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
1708 1739 op.ui.write(('obsmarker-exchange: %i bytes received\n')
1709 1740 % len(markerdata))
1710 1741 # The mergemarkers call will crash if marker creation is not enabled.
1711 1742 # We want to avoid this if the part is advisory.
1712 1743 if not inpart.mandatory and op.repo.obsstore.readonly:
1713 1744 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
1714 1745 return
1715 1746 new = op.repo.obsstore.mergemarkers(tr, markerdata)
1716 1747 op.repo.invalidatevolatilesets()
1717 1748 if new:
1718 1749 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
1719 1750 op.records.add('obsmarkers', {'new': new})
1720 1751 if op.reply is not None:
1721 1752 rpart = op.reply.newpart('reply:obsmarkers')
1722 1753 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1723 1754 rpart.addparam('new', '%i' % new, mandatory=False)
1724 1755
1725 1756
1726 1757 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
1727 1758 def handleobsmarkerreply(op, inpart):
1728 1759 """retrieve the result of a pushkey request"""
1729 1760 ret = int(inpart.params['new'])
1730 1761 partid = int(inpart.params['in-reply-to'])
1731 1762 op.records.add('obsmarkers', {'new': ret}, partid)
1732 1763
1733 1764 @parthandler('hgtagsfnodes')
1734 1765 def handlehgtagsfnodes(op, inpart):
1735 1766 """Applies .hgtags fnodes cache entries to the local repo.
1736 1767
1737 1768 Payload is pairs of 20 byte changeset nodes and filenodes.
1738 1769 """
1739 1770 # Grab the transaction so we ensure that we have the lock at this point.
1740 1771 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1741 1772 op.gettransaction()
1742 1773 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
1743 1774
1744 1775 count = 0
1745 1776 while True:
1746 1777 node = inpart.read(20)
1747 1778 fnode = inpart.read(20)
1748 1779 if len(node) < 20 or len(fnode) < 20:
1749 1780 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
1750 1781 break
1751 1782 cache.setfnode(node, fnode)
1752 1783 count += 1
1753 1784
1754 1785 cache.write()
1755 1786 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
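Mirroring the reader above, a sketch of how a sender lays out this part: changeset node and .hgtags filenode are interleaved as raw 20-byte values; 'entries' is an illustrative name for the (node, fnode) pairs to send:

    chunks = []
    for node, fnode in entries:
        chunks.extend([node, fnode])      # two 20-byte values per entry
    bundler.newpart('hgtagsfnodes', data=''.join(chunks))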
@@ -1,526 +1,527 b''
1 1 # discovery.py - protocol changeset discovery functions
2 2 #
3 3 # Copyright 2010 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import functools
11 11
12 12 from .i18n import _
13 13 from .node import (
14 14 hex,
15 15 nullid,
16 16 short,
17 17 )
18 18
19 19 from . import (
20 20 bookmarks,
21 21 branchmap,
22 22 error,
23 23 phases,
24 24 setdiscovery,
25 25 treediscovery,
26 26 util,
27 27 )
28 28
29 29 def findcommonincoming(repo, remote, heads=None, force=False):
30 30 """Return a tuple (common, anyincoming, heads) used to identify the common
31 31 subset of nodes between repo and remote.
32 32
33 33 "common" is a list of (at least) the heads of the common subset.
34 34 "anyincoming" is testable as a boolean indicating if any nodes are missing
35 35 locally. If remote does not support getbundle, this actually is a list of
36 36 roots of the nodes that would be incoming, to be supplied to
37 37 changegroupsubset. No code except for pull should be relying on this fact
38 38 any longer.
39 39 "heads" is either the supplied heads, or else the remote's heads.
40 40
41 41 If you pass heads and they are all known locally, the response lists just
42 42 these heads in "common" and in "heads".
43 43
44 44 Please use findcommonoutgoing to compute the set of outgoing nodes to give
45 45 extensions a good hook into outgoing.
46 46 """
47 47
48 48 if not remote.capable('getbundle'):
49 49 return treediscovery.findcommonincoming(repo, remote, heads, force)
50 50
51 51 if heads:
52 52 allknown = True
53 53 knownnode = repo.changelog.hasnode # no nodemap until it is filtered
54 54 for h in heads:
55 55 if not knownnode(h):
56 56 allknown = False
57 57 break
58 58 if allknown:
59 59 return (heads, False, heads)
60 60
61 61 res = setdiscovery.findcommonheads(repo.ui, repo, remote,
62 62 abortwhenunrelated=not force)
63 63 common, anyinc, srvheads = res
64 64 return (list(common), anyinc, heads or list(srvheads))
65 65
66 66 class outgoing(object):
67 67 '''Represents the set of nodes present in a local repo but not in a
68 68 (possibly) remote one.
69 69
70 70 Members:
71 71
72 72 missing is a list of all nodes present in local but not in remote.
73 73 common is a list of all nodes shared between the two repos.
74 74 excluded is the list of missing changesets that shouldn't be sent remotely.
75 75 missingheads is the list of heads of missing.
76 76 commonheads is the list of heads of common.
77 77
78 78 The sets are computed on demand from the heads, unless provided upfront
79 79 by discovery.'''
80 80
81 81 def __init__(self, repo, commonheads=None, missingheads=None,
82 82 missingroots=None):
83 83 # at least one of them must not be set
84 84 assert None in (commonheads, missingroots)
85 85 cl = repo.changelog
86 86 if missingheads is None:
87 87 missingheads = cl.heads()
88 88 if missingroots:
89 89 discbases = []
90 90 for n in missingroots:
91 91 discbases.extend([p for p in cl.parents(n) if p != nullid])
92 92 # TODO remove call to nodesbetween.
93 93 # TODO populate attributes on outgoing instance instead of setting
94 94 # discbases.
95 95 csets, roots, heads = cl.nodesbetween(missingroots, missingheads)
96 96 included = set(csets)
97 97 missingheads = heads
98 98 commonheads = [n for n in discbases if n not in included]
99 99 elif not commonheads:
100 100 commonheads = [nullid]
101 101 self.commonheads = commonheads
102 102 self.missingheads = missingheads
103 103 self._revlog = cl
104 104 self._common = None
105 105 self._missing = None
106 106 self.excluded = []
107 107
108 108 def _computecommonmissing(self):
109 109 sets = self._revlog.findcommonmissing(self.commonheads,
110 110 self.missingheads)
111 111 self._common, self._missing = sets
112 112
113 113 @util.propertycache
114 114 def common(self):
115 115 if self._common is None:
116 116 self._computecommonmissing()
117 117 return self._common
118 118
119 119 @util.propertycache
120 120 def missing(self):
121 121 if self._missing is None:
122 122 self._computecommonmissing()
123 123 return self._missing
124 124
125 125 def findcommonoutgoing(repo, other, onlyheads=None, force=False,
126 126 commoninc=None, portable=False):
127 127 '''Return an outgoing instance to identify the nodes present in repo but
128 128 not in other.
129 129
130 130 If onlyheads is given, only nodes ancestral to nodes in onlyheads
131 131 (inclusive) are included. If you already know the local repo's heads,
132 132 passing them in onlyheads is faster than letting them be recomputed here.
133 133
134 134 If commoninc is given, it must be the result of a prior call to
135 135 findcommonincoming(repo, other, force) to avoid recomputing it here.
136 136
137 137 If portable is given, compute more conservative common and missingheads,
138 138 to make bundles created from the instance more portable.'''
139 139 # declare an empty outgoing object to be filled later
140 140 og = outgoing(repo, None, None)
141 141
142 142 # get common set if not provided
143 143 if commoninc is None:
144 144 commoninc = findcommonincoming(repo, other, force=force)
145 145 og.commonheads, _any, _hds = commoninc
146 146
147 147 # compute outgoing
148 148 mayexclude = (repo._phasecache.phaseroots[phases.secret] or repo.obsstore)
149 149 if not mayexclude:
150 150 og.missingheads = onlyheads or repo.heads()
151 151 elif onlyheads is None:
152 152 # use visible heads as it should be cached
153 153 og.missingheads = repo.filtered("served").heads()
154 154 og.excluded = [ctx.node() for ctx in repo.set('secret() or extinct()')]
155 155 else:
156 156 # compute common, missing and exclude secret stuff
157 157 sets = repo.changelog.findcommonmissing(og.commonheads, onlyheads)
158 158 og._common, allmissing = sets
159 159 og._missing = missing = []
160 160 og.excluded = excluded = []
161 161 for node in allmissing:
162 162 ctx = repo[node]
163 163 if ctx.phase() >= phases.secret or ctx.extinct():
164 164 excluded.append(node)
165 165 else:
166 166 missing.append(node)
167 167 if len(missing) == len(allmissing):
168 168 missingheads = onlyheads
169 169 else: # update missing heads
170 170 missingheads = phases.newheads(repo, onlyheads, excluded)
171 171 og.missingheads = missingheads
172 172 if portable:
173 173 # recompute common and missingheads as if -r<rev> had been given for
174 174 # each head of missing, and --base <rev> for each head of the proper
175 175 # ancestors of missing
176 176 og._computecommonmissing()
177 177 cl = repo.changelog
178 178 missingrevs = set(cl.rev(n) for n in og._missing)
179 179 og._common = set(cl.ancestors(missingrevs)) - missingrevs
180 180 commonheads = set(og.commonheads)
181 181 og.missingheads = [h for h in og.missingheads if h not in commonheads]
182 182
183 183 return og
184 184
185 185 def _headssummary(pushop):
186 186 """compute a summary of branch and heads status before and after push
187 187
188 188 return {'branch': ([remoteheads], [newheads],
189 189 [unsyncedheads], [discardedheads])} mapping
190 190
191 191 - branch: the branch name,
192 192 - remoteheads: the list of remote heads known locally
193 193 None if the branch is new,
194 194 - newheads: the new remote heads (known locally) with outgoing pushed,
195 195 - unsyncedheads: the list of remote heads unknown locally,
196 196 - discardedheads: the list of heads made obsolete by the push.
197 197 """
198 198 repo = pushop.repo.unfiltered()
199 199 remote = pushop.remote
200 200 outgoing = pushop.outgoing
201 201 cl = repo.changelog
202 202 headssum = {}
203 203 # A. Create set of branches involved in the push.
204 204 branches = set(repo[n].branch() for n in outgoing.missing)
205 205 remotemap = remote.branchmap()
206 206 newbranches = branches - set(remotemap)
207 207 branches.difference_update(newbranches)
208 208
209 209 # B. Register remote heads
210 210 remotebranches = set()
211 211 for branch, heads in remote.branchmap().iteritems():
212 212 remotebranches.add(branch)
213 213 known = []
214 214 unsynced = []
215 215 knownnode = cl.hasnode # do not use nodemap until it is filtered
216 216 for h in heads:
217 217 if knownnode(h):
218 218 known.append(h)
219 219 else:
220 220 unsynced.append(h)
221 221 headssum[branch] = (known, list(known), unsynced)
222 222 # C. Add new branch data
223 223 missingctx = list(repo[n] for n in outgoing.missing)
224 224 touchedbranches = set()
225 225 for ctx in missingctx:
226 226 branch = ctx.branch()
227 227 touchedbranches.add(branch)
228 228 if branch not in headssum:
229 229 headssum[branch] = (None, [], [])
230 230
231 231 # D. Drop data about untouched branches:
232 232 for branch in remotebranches - touchedbranches:
233 233 del headssum[branch]
234 234
235 235 # E. Update newmap with outgoing changes.
236 236 # This will possibly add new heads and remove existing ones.
237 237 newmap = branchmap.branchcache((branch, heads[1])
238 238 for branch, heads in headssum.iteritems()
239 239 if heads[0] is not None)
240 240 newmap.update(repo, (ctx.rev() for ctx in missingctx))
241 241 for branch, newheads in newmap.iteritems():
242 242 headssum[branch][1][:] = newheads
243 243 for branch, items in headssum.iteritems():
244 244 for l in items:
245 245 if l is not None:
246 246 l.sort()
247 247 headssum[branch] = items + ([],)
248 248
249 249 # If there is no obsstore, no post processing is needed.
250 250 if repo.obsstore:
251 251 allmissing = set(outgoing.missing)
252 252 cctx = repo.set('%ld', outgoing.common)
253 253 allfuturecommon = set(c.node() for c in cctx)
254 254 allfuturecommon.update(allmissing)
255 255 for branch, heads in sorted(headssum.iteritems()):
256 256 remoteheads, newheads, unsyncedheads, placeholder = heads
257 257 result = _postprocessobsolete(pushop, allfuturecommon, newheads)
258 258 headssum[branch] = (remoteheads, sorted(result[0]), unsyncedheads,
259 259 sorted(result[1]))
260 260 return headssum
261 261
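To make the mapping concrete, a hypothetical summary for a push that adds one head on 'default' and creates branch 'feature' (h1..h3 stand for 20-byte binary nodes):

    {'default': ([h1], [h1, h2], [], []),   # existing branch gaining a head
     'feature': (None, [h3], [], [])}       # branch new to the remote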
262 262 def _oldheadssummary(repo, remoteheads, outgoing, inc=False):
263 263 """Compute branchmapsummary for repo without branchmap support"""
264 264
265 265 # 1-4b. old servers: Check for new topological heads.
266 266 # Construct {old,new}map with branch = None (topological branch).
267 267 # (code based on update)
268 268 knownnode = repo.changelog.hasnode # no nodemap until it is filtered
269 269 oldheads = sorted(h for h in remoteheads if knownnode(h))
270 270 # all nodes in outgoing.missing are children of either:
271 271 # - an element of oldheads
272 272 # - another element of outgoing.missing
273 273 # - nullrev
274 274 # This explains why the new heads are very simple to compute.
275 275 r = repo.set('heads(%ln + %ln)', oldheads, outgoing.missing)
276 276 newheads = sorted(c.node() for c in r)
277 277 # set some unsynced head to issue the "unsynced changes" warning
278 278 if inc:
279 279 unsynced = [None]
280 280 else:
281 281 unsynced = []
282 282 return {None: (oldheads, newheads, unsynced, [])}
283 283
284 284 def _nowarnheads(pushop):
285 285 # Compute newly pushed bookmarks. We don't warn about bookmarked heads.
286 286 repo = pushop.repo.unfiltered()
287 287 remote = pushop.remote
288 288 localbookmarks = repo._bookmarks
289 289 remotebookmarks = remote.listkeys('bookmarks')
290 290 bookmarkedheads = set()
291 291
292 292 # internal config: bookmarks.pushing
293 293 newbookmarks = [localbookmarks.expandname(b)
294 294 for b in pushop.ui.configlist('bookmarks', 'pushing')]
295 295
296 296 for bm in localbookmarks:
297 297 rnode = remotebookmarks.get(bm)
298 298 if rnode and rnode in repo:
299 299 lctx, rctx = repo[bm], repo[rnode]
300 300 if bookmarks.validdest(repo, rctx, lctx):
301 301 bookmarkedheads.add(lctx.node())
302 302 else:
303 303 if bm in newbookmarks and bm not in remotebookmarks:
304 304 bookmarkedheads.add(repo[bm].node())
305 305
306 306 return bookmarkedheads
307 307
308 308 def checkheads(pushop):
309 309 """Check that a push won't add any outgoing head
310 310
311 311 Raise an Abort error and display a ui message as needed.
312 312 """
313 313
314 314 repo = pushop.repo.unfiltered()
315 315 remote = pushop.remote
316 316 outgoing = pushop.outgoing
317 317 remoteheads = pushop.remoteheads
318 318 newbranch = pushop.newbranch
319 319 inc = bool(pushop.incoming)
320 320
321 321 # Check for each named branch if we're creating new remote heads.
322 322 # To be a remote head after push, node must be either:
323 323 # - unknown locally
324 324 # - a local outgoing head descended from update
325 325 # - a remote head that's known locally and not
326 326 # ancestral to an outgoing head
327 327 if remoteheads == [nullid]:
328 328 # remote is empty, nothing to check.
329 329 return
330 330
331 331 if remote.capable('branchmap'):
332 332 headssum = _headssummary(pushop)
333 333 else:
334 334 headssum = _oldheadssummary(repo, remoteheads, outgoing, inc)
335 pushop.pushbranchmap = headssum
335 336 newbranches = [branch for branch, heads in headssum.iteritems()
336 337 if heads[0] is None]
337 338 # 1. Check for new branches on the remote.
338 339 if newbranches and not newbranch: # new branch requires --new-branch
339 340 branchnames = ', '.join(sorted(newbranches))
340 341 raise error.Abort(_("push creates new remote branches: %s!")
341 342 % branchnames,
342 343 hint=_("use 'hg push --new-branch' to create"
343 344 " new remote branches"))
344 345
345 346 # 2. Find heads that we need not warn about
346 347 nowarnheads = _nowarnheads(pushop)
347 348
348 349 # 3. Check for new heads.
349 350 # If there are more heads after the push than before, a suitable
350 351 # error message, depending on unsynced status, is displayed.
351 352 errormsg = None
352 353 for branch, heads in sorted(headssum.iteritems()):
353 354 remoteheads, newheads, unsyncedheads, discardedheads = heads
354 355 # add unsynced data
355 356 if remoteheads is None:
356 357 oldhs = set()
357 358 else:
358 359 oldhs = set(remoteheads)
359 360 oldhs.update(unsyncedheads)
360 361 dhs = None # delta heads, the new heads on branch
361 362 newhs = set(newheads)
362 363 newhs.update(unsyncedheads)
363 364 if unsyncedheads:
364 365 if None in unsyncedheads:
365 366 # old remote, no heads data
366 367 heads = None
367 368 elif len(unsyncedheads) <= 4 or repo.ui.verbose:
368 369 heads = ' '.join(short(h) for h in unsyncedheads)
369 370 else:
370 371 heads = (' '.join(short(h) for h in unsyncedheads[:4]) +
371 372 ' ' + _("and %s others") % (len(unsyncedheads) - 4))
372 373 if heads is None:
373 374 repo.ui.status(_("remote has heads that are "
374 375 "not known locally\n"))
375 376 elif branch is None:
376 377 repo.ui.status(_("remote has heads that are "
377 378 "not known locally: %s\n") % heads)
378 379 else:
379 380 repo.ui.status(_("remote has heads on branch '%s' that are "
380 381 "not known locally: %s\n") % (branch, heads))
381 382 if remoteheads is None:
382 383 if len(newhs) > 1:
383 384 dhs = list(newhs)
384 385 if errormsg is None:
385 386 errormsg = (_("push creates new branch '%s' "
386 387 "with multiple heads") % (branch))
387 388 hint = _("merge or"
388 389 " see 'hg help push' for details about"
389 390 " pushing new heads")
390 391 elif len(newhs) > len(oldhs):
391 392 # remove bookmarked or existing remote heads from the new heads list
392 393 dhs = sorted(newhs - nowarnheads - oldhs)
393 394 if dhs:
394 395 if errormsg is None:
395 396 if branch not in ('default', None):
396 397 errormsg = _("push creates new remote head %s "
397 398 "on branch '%s'!") % (short(dhs[0]), branch)
398 399 elif repo[dhs[0]].bookmarks():
399 400 errormsg = _("push creates new remote head %s "
400 401 "with bookmark '%s'!") % (
401 402 short(dhs[0]), repo[dhs[0]].bookmarks()[0])
402 403 else:
403 404 errormsg = _("push creates new remote head %s!"
404 405 ) % short(dhs[0])
405 406 if unsyncedheads:
406 407 hint = _("pull and merge or"
407 408 " see 'hg help push' for details about"
408 409 " pushing new heads")
409 410 else:
410 411 hint = _("merge or"
411 412 " see 'hg help push' for details about"
412 413 " pushing new heads")
413 414 if branch is None:
414 415 repo.ui.note(_("new remote heads:\n"))
415 416 else:
416 417 repo.ui.note(_("new remote heads on branch '%s':\n") % branch)
417 418 for h in dhs:
418 419 repo.ui.note((" %s\n") % short(h))
419 420 if errormsg:
420 421 raise error.Abort(errormsg, hint=hint)
421 422
422 423 def _postprocessobsolete(pushop, futurecommon, candidate_newhs):
423 424 """post process the list of new heads with obsolescence information
424 425
425 426 Exists as a sub-function to contain the complexity and allow extensions to
426 427 experiment with smarter logic.
427 428
428 429 Returns (newheads, discarded_heads) tuple
429 430 """
430 431 # known issue
431 432 #
432 433 # * We "silently" skip processing on all changesets unknown locally
433 434 #
434 435 # * if <nh> is public on the remote, it won't be affected by obsolete
435 436 # markers and a new head is created
436 437
437 438 # define various utilities and containers
438 439 repo = pushop.repo
439 440 unfi = repo.unfiltered()
440 441 tonode = unfi.changelog.node
441 442 torev = unfi.changelog.rev
442 443 public = phases.public
443 444 getphase = unfi._phasecache.phase
444 445 ispublic = (lambda r: getphase(unfi, r) == public)
445 446 hasoutmarker = functools.partial(pushingmarkerfor, unfi.obsstore,
446 447 futurecommon)
447 448 successorsmarkers = unfi.obsstore.successors
448 449 newhs = set() # final set of new heads
449 450 discarded = set() # new head of fully replaced branch
450 451
451 452 localcandidate = set() # candidate heads known locally
452 453 unknownheads = set() # candidate heads unknown locally
453 454 for h in candidate_newhs:
454 455 if h in unfi:
455 456 localcandidate.add(h)
456 457 else:
457 458 if successorsmarkers.get(h) is not None:
458 459 msg = ('checkheads: remote head unknown locally has'
459 460 ' local marker: %s\n')
460 461 repo.ui.debug(msg % hex(h))
461 462 unknownheads.add(h)
462 463
463 464 # fast path the simple case
464 465 if len(localcandidate) == 1:
465 466 return unknownheads | set(candidate_newhs), set()
466 467
467 468 # actually process branch replacement
468 469 while localcandidate:
469 470 nh = localcandidate.pop()
470 471 # run this check early to skip the evaluation of the whole branch
471 472 if (nh in futurecommon or ispublic(torev(nh))):
472 473 newhs.add(nh)
473 474 continue
474 475
475 476 # Get all revs/nodes on the branch exclusive to this head
476 477 # (already filtered heads are "ignored")
477 478 branchrevs = unfi.revs('only(%n, (%ln+%ln))',
478 479 nh, localcandidate, newhs)
479 480 branchnodes = [tonode(r) for r in branchrevs]
480 481
481 482 # The branch won't be hidden on the remote if
482 483 # * any part of it is public,
483 484 # * any part of it is considered part of the result by previous logic,
484 485 # * if we have no markers to push to obsolete it.
485 486 if (any(ispublic(r) for r in branchrevs)
486 487 or any(n in futurecommon for n in branchnodes)
487 488 or any(not hasoutmarker(n) for n in branchnodes)):
488 489 newhs.add(nh)
489 490 else:
490 491 # note: there is a corner case if there is a merge in the branch:
491 492 # we might end up with -more- heads. However, these heads are not
492 493 # "added" by the push, but more by the "removal" on the remote, so I
493 494 # think it is okay to ignore them.
494 495 discarded.add(nh)
495 496 newhs |= unknownheads
496 497 return newhs, discarded
497 498
498 499 def pushingmarkerfor(obsstore, pushset, node):
499 500 """true if some markers are to be pushed for node
500 501
501 502 We cannot just look into the pushed obsmarkers from the pushop because
502 503 discovery might have filtered relevant markers. In addition, listing all
503 504 markers relevant to all changesets in the pushed set would be too expensive
504 505 (O(len(repo)))
505 506
506 507 (note: there are caching opportunities in this function, but they would
507 508 require a two-dimensional stack.)
508 509 """
509 510 successorsmarkers = obsstore.successors
510 511 stack = [node]
511 512 seen = set(stack)
512 513 while stack:
513 514 current = stack.pop()
514 515 if current in pushset:
515 516 return True
516 517 markers = successorsmarkers.get(current, ())
517 518 # markers fields = ('prec', 'succs', 'flag', 'meta', 'date', 'parents')
518 519 for m in markers:
519 520 nexts = m[1] # successors
520 521 if not nexts: # this is a prune marker
521 522 nexts = m[5] or () # parents
522 523 for n in nexts:
523 524 if n not in seen:
524 525 seen.add(n)
525 526 stack.append(n)
526 527 return False
@@ -1,1989 +1,2017 b''
1 1 # exchange.py - utility to exchange data between repos.
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import errno
11 11 import hashlib
12 12
13 13 from .i18n import _
14 14 from .node import (
15 15 hex,
16 16 nullid,
17 17 )
18 18 from . import (
19 19 bookmarks as bookmod,
20 20 bundle2,
21 21 changegroup,
22 22 discovery,
23 23 error,
24 24 lock as lockmod,
25 25 obsolete,
26 26 phases,
27 27 pushkey,
28 28 scmutil,
29 29 sslutil,
30 30 streamclone,
31 31 url as urlmod,
32 32 util,
33 33 )
34 34
35 35 urlerr = util.urlerr
36 36 urlreq = util.urlreq
37 37
38 38 # Maps bundle version human names to changegroup versions.
39 39 _bundlespeccgversions = {'v1': '01',
40 40 'v2': '02',
41 41 'packed1': 's1',
42 42 'bundle2': '02', #legacy
43 43 }
44 44
45 45 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
46 46 _bundlespecv1compengines = {'gzip', 'bzip2', 'none'}
47 47
48 48 def parsebundlespec(repo, spec, strict=True, externalnames=False):
49 49 """Parse a bundle string specification into parts.
50 50
51 51 Bundle specifications denote a well-defined bundle/exchange format.
52 52 The content of a given specification should not change over time in
53 53 order to ensure that bundles produced by a newer version of Mercurial are
54 54 readable from an older version.
55 55
56 56 The string currently has the form:
57 57
58 58 <compression>-<type>[;<parameter0>[;<parameter1>]]
59 59
60 60 Where <compression> is one of the supported compression formats
61 61 and <type> is (currently) a version string. A ";" can follow the type and
62 62 all text afterwards is interpreted as URI encoded, ";" delimited key=value
63 63 pairs.
64 64
65 65 If ``strict`` is True (the default) <compression> is required. Otherwise,
66 66 it is optional.
67 67
68 68 If ``externalnames`` is False (the default), the human-centric names will
69 69 be converted to their internal representation.
70 70
71 71 Returns a 3-tuple of (compression, version, parameters). Compression will
72 72 be ``None`` if not in strict mode and a compression isn't defined.
73 73
74 74 An ``InvalidBundleSpecification`` is raised when the specification is
75 75 not syntactically well formed.
76 76
77 77 An ``UnsupportedBundleSpecification`` is raised when the compression or
78 78 bundle type/version is not recognized.
79 79
80 80 Note: this function will likely eventually return a more complex data
81 81 structure, including bundle2 part information.
82 82 """
83 83 def parseparams(s):
84 84 if ';' not in s:
85 85 return s, {}
86 86
87 87 params = {}
88 88 version, paramstr = s.split(';', 1)
89 89
90 90 for p in paramstr.split(';'):
91 91 if '=' not in p:
92 92 raise error.InvalidBundleSpecification(
93 93 _('invalid bundle specification: '
94 94 'missing "=" in parameter: %s') % p)
95 95
96 96 key, value = p.split('=', 1)
97 97 key = urlreq.unquote(key)
98 98 value = urlreq.unquote(value)
99 99 params[key] = value
100 100
101 101 return version, params
102 102
103 103
104 104 if strict and '-' not in spec:
105 105 raise error.InvalidBundleSpecification(
106 106 _('invalid bundle specification; '
107 107 'must be prefixed with compression: %s') % spec)
108 108
109 109 if '-' in spec:
110 110 compression, version = spec.split('-', 1)
111 111
112 112 if compression not in util.compengines.supportedbundlenames:
113 113 raise error.UnsupportedBundleSpecification(
114 114 _('%s compression is not supported') % compression)
115 115
116 116 version, params = parseparams(version)
117 117
118 118 if version not in _bundlespeccgversions:
119 119 raise error.UnsupportedBundleSpecification(
120 120 _('%s is not a recognized bundle version') % version)
121 121 else:
122 122 # Value could be just the compression or just the version, in which
123 123 # case some defaults are assumed (but only when not in strict mode).
124 124 assert not strict
125 125
126 126 spec, params = parseparams(spec)
127 127
128 128 if spec in util.compengines.supportedbundlenames:
129 129 compression = spec
130 130 version = 'v1'
131 131 # Generaldelta repos require v2.
132 132 if 'generaldelta' in repo.requirements:
133 133 version = 'v2'
134 134 # Modern compression engines require v2.
135 135 if compression not in _bundlespecv1compengines:
136 136 version = 'v2'
137 137 elif spec in _bundlespeccgversions:
138 138 if spec == 'packed1':
139 139 compression = 'none'
140 140 else:
141 141 compression = 'bzip2'
142 142 version = spec
143 143 else:
144 144 raise error.UnsupportedBundleSpecification(
145 145 _('%s is not a recognized bundle specification') % spec)
146 146
147 147 # Bundle version 1 only supports a known set of compression engines.
148 148 if version == 'v1' and compression not in _bundlespecv1compengines:
149 149 raise error.UnsupportedBundleSpecification(
150 150 _('compression engine %s is not supported on v1 bundles') %
151 151 compression)
152 152
153 153 # The specification for packed1 can optionally declare the data formats
154 154 # required to apply it. If we see this metadata, compare against what the
155 155 # repo supports and error if the bundle isn't compatible.
156 156 if version == 'packed1' and 'requirements' in params:
157 157 requirements = set(params['requirements'].split(','))
158 158 missingreqs = requirements - repo.supportedformats
159 159 if missingreqs:
160 160 raise error.UnsupportedBundleSpecification(
161 161 _('missing support for repository features: %s') %
162 162 ', '.join(sorted(missingreqs)))
163 163
164 164 if not externalnames:
165 165 engine = util.compengines.forbundlename(compression)
166 166 compression = engine.bundletype()[1]
167 167 version = _bundlespeccgversions[version]
168 168 return compression, version, params
169 169
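For example, assuming a repo that supports the listed requirements; the internal names in the results come from the compression engines and _bundlespeccgversions:

    parsebundlespec(repo, 'gzip-v2')
    # -> ('GZ', '02', {})
    parsebundlespec(repo, 'none-packed1;requirements=revlogv1',
                    externalnames=True)
    # -> ('none', 'packed1', {'requirements': 'revlogv1'})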
170 170 def readbundle(ui, fh, fname, vfs=None):
171 171 header = changegroup.readexactly(fh, 4)
172 172
173 173 alg = None
174 174 if not fname:
175 175 fname = "stream"
176 176 if not header.startswith('HG') and header.startswith('\0'):
177 177 fh = changegroup.headerlessfixup(fh, header)
178 178 header = "HG10"
179 179 alg = 'UN'
180 180 elif vfs:
181 181 fname = vfs.join(fname)
182 182
183 183 magic, version = header[0:2], header[2:4]
184 184
185 185 if magic != 'HG':
186 186 raise error.Abort(_('%s: not a Mercurial bundle') % fname)
187 187 if version == '10':
188 188 if alg is None:
189 189 alg = changegroup.readexactly(fh, 2)
190 190 return changegroup.cg1unpacker(fh, alg)
191 191 elif version.startswith('2'):
192 192 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
193 193 elif version == 'S1':
194 194 return streamclone.streamcloneapplier(fh)
195 195 else:
196 196 raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))
197 197
198 198 def getbundlespec(ui, fh):
199 199 """Infer the bundlespec from a bundle file handle.
200 200
201 201 The input file handle is seeked and the original seek position is not
202 202 restored.
203 203 """
204 204 def speccompression(alg):
205 205 try:
206 206 return util.compengines.forbundletype(alg).bundletype()[0]
207 207 except KeyError:
208 208 return None
209 209
210 210 b = readbundle(ui, fh, None)
211 211 if isinstance(b, changegroup.cg1unpacker):
212 212 alg = b._type
213 213 if alg == '_truncatedBZ':
214 214 alg = 'BZ'
215 215 comp = speccompression(alg)
216 216 if not comp:
217 217 raise error.Abort(_('unknown compression algorithm: %s') % alg)
218 218 return '%s-v1' % comp
219 219 elif isinstance(b, bundle2.unbundle20):
220 220 if 'Compression' in b.params:
221 221 comp = speccompression(b.params['Compression'])
222 222 if not comp:
223 223 raise error.Abort(_('unknown compression algorithm: %s') % comp)
224 224 else:
225 225 comp = 'none'
226 226
227 227 version = None
228 228 for part in b.iterparts():
229 229 if part.type == 'changegroup':
230 230 version = part.params['version']
231 231 if version in ('01', '02'):
232 232 version = 'v2'
233 233 else:
234 234 raise error.Abort(_('changegroup version %s does not have '
235 235 'a known bundlespec') % version,
236 236 hint=_('try upgrading your Mercurial '
237 237 'client'))
238 238
239 239 if not version:
240 240 raise error.Abort(_('could not identify changegroup version in '
241 241 'bundle'))
242 242
243 243 return '%s-%s' % (comp, version)
244 244 elif isinstance(b, streamclone.streamcloneapplier):
245 245 requirements = streamclone.readbundle1header(fh)[2]
246 246 params = 'requirements=%s' % ','.join(sorted(requirements))
247 247 return 'none-packed1;%s' % urlreq.quote(params)
248 248 else:
249 249 raise error.Abort(_('unknown bundle type: %s') % b)
250 250
251 251 def _computeoutgoing(repo, heads, common):
252 252 """Computes which revs are outgoing given a set of common
253 253 and a set of heads.
254 254
255 255 This is a separate function so extensions can have access to
256 256 the logic.
257 257
258 258 Returns a discovery.outgoing object.
259 259 """
260 260 cl = repo.changelog
261 261 if common:
262 262 hasnode = cl.hasnode
263 263 common = [n for n in common if hasnode(n)]
264 264 else:
265 265 common = [nullid]
266 266 if not heads:
267 267 heads = cl.heads()
268 268 return discovery.outgoing(repo, common, heads)
269 269
270 270 def _forcebundle1(op):
271 271 """return true if a pull/push must use bundle1
272 272
273 273 This function is used to allow testing of the older bundle version"""
274 274 ui = op.repo.ui
275 275 forcebundle1 = False
276 276 # The goal of this config is to allow developers to choose the bundle
277 277 # version used during exchange. This is especially handy during tests.
278 278 # Value is a list of bundle versions to be picked from; the highest
279 279 # version should be used.
280 280 #
281 281 # developer config: devel.legacy.exchange
282 282 exchange = ui.configlist('devel', 'legacy.exchange')
283 283 forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
284 284 return forcebundle1 or not op.remote.capable('bundle2')
285 285
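A hedged example of that developer knob in a test hgrc:

    [devel]
    legacy.exchange = bundle1

With only 'bundle1' listed, _forcebundle1 returns True even against a bundle2-capable peer.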
286 286 class pushoperation(object):
287 287 """A object that represent a single push operation
288 288
289 289 Its purpose is to carry push related state and very common operations.
290 290
291 291 A new pushoperation should be created at the beginning of each push and
292 292 discarded afterward.
293 293 """
294 294
295 295 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
296 296 bookmarks=()):
297 297 # repo we push from
298 298 self.repo = repo
299 299 self.ui = repo.ui
300 300 # repo we push to
301 301 self.remote = remote
302 302 # force option provided
303 303 self.force = force
304 304 # revs to be pushed (None is "all")
305 305 self.revs = revs
307 307 # bookmarks explicitly pushed
307 307 self.bookmarks = bookmarks
308 308 # allow push of new branch
309 309 self.newbranch = newbranch
310 310 # did a local lock get acquired?
311 311 self.locallocked = None
312 312 # step already performed
313 313 # (used to check what steps have been already performed through bundle2)
314 314 self.stepsdone = set()
315 315 # Integer version of the changegroup push result
316 316 # - None means nothing to push
317 317 # - 0 means HTTP error
318 318 # - 1 means we pushed and remote head count is unchanged *or*
319 319 # we have outgoing changesets but refused to push
320 320 # - other values as described by addchangegroup()
321 321 self.cgresult = None
322 322 # Boolean value for the bookmark push
323 323 self.bkresult = None
324 324 # discovery.outgoing object (contains common and outgoing data)
325 325 self.outgoing = None
326 # all remote heads before the push
326 # all remote topological heads before the push
327 327 self.remoteheads = None
328 # Details of the remote branch pre and post push
329 #
330 # mapping: {'branch': ([remoteheads],
331 # [newheads],
332 # [unsyncedheads],
333 # [discardedheads])}
334 # - branch: the branch name
335 # - remoteheads: the list of remote heads known locally
336 # None if the branch is new
337 # - newheads: the new remote heads (known locally) with outgoing pushed
338 # - unsyncedheads: the list of remote heads unknown locally.
339 # - discardedheads: the list of remote heads made obsolete by the push
340 self.pushbranchmap = None
328 341 # testable as a boolean indicating if any nodes are missing locally.
329 342 self.incoming = None
330 343 # phase changes that must be pushed alongside the changesets
331 344 self.outdatedphases = None
332 345 # phase changes that must be pushed if the changeset push fails
333 346 self.fallbackoutdatedphases = None
334 347 # outgoing obsmarkers
335 348 self.outobsmarkers = set()
336 349 # outgoing bookmarks
337 350 self.outbookmarks = []
338 351 # transaction manager
339 352 self.trmanager = None
340 353 # map { pushkey partid -> callback handling failure}
341 354 # used to handle exception from mandatory pushkey part failure
342 355 self.pkfailcb = {}
343 356
344 357 @util.propertycache
345 358 def futureheads(self):
346 359 """future remote heads if the changeset push succeeds"""
347 360 return self.outgoing.missingheads
348 361
349 362 @util.propertycache
350 363 def fallbackheads(self):
351 364 """future remote heads if the changeset push fails"""
352 365 if self.revs is None:
353 366 # no target to push, all common heads are relevant
354 367 return self.outgoing.commonheads
355 368 unfi = self.repo.unfiltered()
356 369 # I want cheads = heads(::missingheads and ::commonheads)
357 370 # (missingheads is revs with secret changeset filtered out)
358 371 #
359 372 # This can be expressed as:
360 373 # cheads = ( (missingheads and ::commonheads)
361 374 # + (commonheads and ::missingheads))"
362 375 # )
363 376 #
364 377 # while trying to push we already computed the following:
365 378 # common = (::commonheads)
366 379 # missing = ((commonheads::missingheads) - commonheads)
367 380 #
368 381 # We can pick:
369 382 # * missingheads part of common (::commonheads)
370 383 common = self.outgoing.common
371 384 nm = self.repo.changelog.nodemap
372 385 cheads = [node for node in self.revs if nm[node] in common]
373 386 # and
374 387 # * commonheads parents on missing
375 388 revset = unfi.set('%ln and parents(roots(%ln))',
376 389 self.outgoing.commonheads,
377 390 self.outgoing.missing)
378 391 cheads.extend(c.node() for c in revset)
379 392 return cheads
380 393
381 394 @property
382 395 def commonheads(self):
383 396 """set of all common heads after changeset bundle push"""
384 397 if self.cgresult:
385 398 return self.futureheads
386 399 else:
387 400 return self.fallbackheads
388 401
389 402 # mapping of message used when pushing bookmark
390 403 bookmsgmap = {'update': (_("updating bookmark %s\n"),
391 404 _('updating bookmark %s failed!\n')),
392 405 'export': (_("exporting bookmark %s\n"),
393 406 _('exporting bookmark %s failed!\n')),
394 407 'delete': (_("deleting remote bookmark %s\n"),
395 408 _('deleting remote bookmark %s failed!\n')),
396 409 }
397 410
398 411
399 412 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
400 413 opargs=None):
401 414 '''Push outgoing changesets (limited by revs) from a local
402 415 repository to remote. Return an integer:
403 416 - None means nothing to push
404 417 - 0 means HTTP error
405 418 - 1 means we pushed and remote head count is unchanged *or*
406 419 we have outgoing changesets but refused to push
407 420 - other values as described by addchangegroup()
408 421 '''
409 422 if opargs is None:
410 423 opargs = {}
411 424 pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
412 425 **opargs)
413 426 if pushop.remote.local():
414 427 missing = (set(pushop.repo.requirements)
415 428 - pushop.remote.local().supported)
416 429 if missing:
417 430 msg = _("required features are not"
418 431 " supported in the destination:"
419 432 " %s") % (', '.join(sorted(missing)))
420 433 raise error.Abort(msg)
421 434
422 435 # there are two ways to push to remote repo:
423 436 #
424 437 # addchangegroup assumes local user can lock remote
425 438 # repo (local filesystem, old ssh servers).
426 439 #
427 440 # unbundle assumes local user cannot lock remote repo (new ssh
428 441 # servers, http servers).
429 442
430 443 if not pushop.remote.canpush():
431 444 raise error.Abort(_("destination does not support push"))
432 445 # get local lock as we might write phase data
433 446 localwlock = locallock = None
434 447 try:
435 448 # bundle2 push may receive a reply bundle touching bookmarks or other
436 449 # things requiring the wlock. Take it now to ensure proper ordering.
437 450 maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
438 451 if (not _forcebundle1(pushop)) and maypushback:
439 452 localwlock = pushop.repo.wlock()
440 453 locallock = pushop.repo.lock()
441 454 pushop.locallocked = True
442 455 except IOError as err:
443 456 pushop.locallocked = False
444 457 if err.errno != errno.EACCES:
445 458 raise
446 459 # source repo cannot be locked.
447 460 # We do not abort the push, but just disable the local phase
448 461 # synchronisation.
449 462 msg = 'cannot lock source repository: %s\n' % err
450 463 pushop.ui.debug(msg)
451 464 try:
452 465 if pushop.locallocked:
453 466 pushop.trmanager = transactionmanager(pushop.repo,
454 467 'push-response',
455 468 pushop.remote.url())
456 469 pushop.repo.checkpush(pushop)
457 470 lock = None
458 471 unbundle = pushop.remote.capable('unbundle')
459 472 if not unbundle:
460 473 lock = pushop.remote.lock()
461 474 try:
462 475 _pushdiscovery(pushop)
463 476 if not _forcebundle1(pushop):
464 477 _pushbundle2(pushop)
465 478 _pushchangeset(pushop)
466 479 _pushsyncphase(pushop)
467 480 _pushobsolete(pushop)
468 481 _pushbookmark(pushop)
469 482 finally:
470 483 if lock is not None:
471 484 lock.release()
472 485 if pushop.trmanager:
473 486 pushop.trmanager.close()
474 487 finally:
475 488 if pushop.trmanager:
476 489 pushop.trmanager.release()
477 490 if locallock is not None:
478 491 locallock.release()
479 492 if localwlock is not None:
480 493 localwlock.release()
481 494
482 495 return pushop
483 496
484 497 # list of steps to perform discovery before push
485 498 pushdiscoveryorder = []
486 499
487 500 # Mapping between step name and function
488 501 #
489 502 # This exists to help extensions wrap steps if necessary
490 503 pushdiscoverymapping = {}
491 504
492 505 def pushdiscovery(stepname):
493 506 """decorator for function performing discovery before push
494 507
495 508 The function is added to the step -> function mapping and appended to the
496 509 list of steps. Beware that decorated functions will be added in order (this
497 510 may matter).
498 511
499 512 You can only use this decorator for a new step, if you want to wrap a step
500 513 from an extension, change the pushdiscovery dictionary directly."""
501 514 def dec(func):
502 515 assert stepname not in pushdiscoverymapping
503 516 pushdiscoverymapping[stepname] = func
504 517 pushdiscoveryorder.append(stepname)
505 518 return func
506 519 return dec
507 520
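A sketch of an extra step registered through this decorator; the step name and body are purely illustrative:

    @pushdiscovery('example')
    def _pushdiscoveryexample(pushop):
        pushop.ui.debug('example discovery step ran\n')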
508 521 def _pushdiscovery(pushop):
509 522 """Run all discovery steps"""
510 523 for stepname in pushdiscoveryorder:
511 524 step = pushdiscoverymapping[stepname]
512 525 step(pushop)
513 526
514 527 @pushdiscovery('changeset')
515 528 def _pushdiscoverychangeset(pushop):
516 529 """discover the changeset that need to be pushed"""
517 530 fci = discovery.findcommonincoming
518 531 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
519 532 common, inc, remoteheads = commoninc
520 533 fco = discovery.findcommonoutgoing
521 534 outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
522 535 commoninc=commoninc, force=pushop.force)
523 536 pushop.outgoing = outgoing
524 537 pushop.remoteheads = remoteheads
525 538 pushop.incoming = inc
526 539
527 540 @pushdiscovery('phase')
528 541 def _pushdiscoveryphase(pushop):
529 542 """discover the phase that needs to be pushed
530 543
531 544 (computed for both the success and failure cases of the changesets push)"""
532 545 outgoing = pushop.outgoing
533 546 unfi = pushop.repo.unfiltered()
534 547 remotephases = pushop.remote.listkeys('phases')
535 548 publishing = remotephases.get('publishing', False)
536 549 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
537 550 and remotephases # server supports phases
538 551 and not pushop.outgoing.missing # no changesets to be pushed
539 552 and publishing):
540 553 # When:
541 554 # - this is a subrepo push
542 555 # - and the remote supports phases
543 556 # - and no changesets are to be pushed
544 557 # - and remote is publishing
545 558 # We may be in issue 3871 case!
546 559 # We drop the phase synchronisation that would otherwise be
547 560 # done by courtesy, as it could publish changesets that are
548 561 # still draft on the remote.
549 562 remotephases = {'publishing': 'True'}
550 563 ana = phases.analyzeremotephases(pushop.repo,
551 564 pushop.fallbackheads,
552 565 remotephases)
553 566 pheads, droots = ana
554 567 extracond = ''
555 568 if not publishing:
556 569 extracond = ' and public()'
557 570 revset = 'heads((%%ln::%%ln) %s)' % extracond
558 571 # Get the list of all revs draft on remote by public here.
559 572 # XXX Beware that the revset breaks if droots is not strictly
560 573 # XXX roots; we may want to ensure it is, but that is costly
561 574 fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
562 575 if not outgoing.missing:
563 576 future = fallback
564 577 else:
565 578 # adds changeset we are going to push as draft
566 579 #
567 580 # should not be necessary for a publishing server, but because of an
568 581 # issue fixed in xxxxx we have to do it anyway.
569 582 fdroots = list(unfi.set('roots(%ln + %ln::)',
570 583 outgoing.missing, droots))
571 584 fdroots = [f.node() for f in fdroots]
572 585 future = list(unfi.set(revset, fdroots, pushop.futureheads))
573 586 pushop.outdatedphases = future
574 587 pushop.fallbackoutdatedphases = fallback
575 588
576 589 @pushdiscovery('obsmarker')
577 590 def _pushdiscoveryobsmarkers(pushop):
578 591 if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
579 592 and pushop.repo.obsstore
580 593 and 'obsolete' in pushop.remote.listkeys('namespaces')):
581 594 repo = pushop.repo
582 595 # very naive computation, which can be quite expensive on a big repo.
583 596 # However: evolution is currently slow on them anyway.
584 597 nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
585 598 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
586 599
587 600 @pushdiscovery('bookmarks')
588 601 def _pushdiscoverybookmarks(pushop):
589 602 ui = pushop.ui
590 603 repo = pushop.repo.unfiltered()
591 604 remote = pushop.remote
592 605 ui.debug("checking for updated bookmarks\n")
593 606 ancestors = ()
594 607 if pushop.revs:
595 608 revnums = map(repo.changelog.rev, pushop.revs)
596 609 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
597 610 remotebookmark = remote.listkeys('bookmarks')
598 611
599 612 explicit = set([repo._bookmarks.expandname(bookmark)
600 613 for bookmark in pushop.bookmarks])
601 614
602 615 remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
603 616 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
604 617
605 618 def safehex(x):
606 619 if x is None:
607 620 return x
608 621 return hex(x)
609 622
610 623 def hexifycompbookmarks(bookmarks):
611 624 for b, scid, dcid in bookmarks:
612 625 yield b, safehex(scid), safehex(dcid)
613 626
614 627 comp = [hexifycompbookmarks(marks) for marks in comp]
615 628 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
616 629
617 630 for b, scid, dcid in advsrc:
618 631 if b in explicit:
619 632 explicit.remove(b)
620 633 if not ancestors or repo[scid].rev() in ancestors:
621 634 pushop.outbookmarks.append((b, dcid, scid))
622 635 # search for added bookmarks
623 636 for b, scid, dcid in addsrc:
624 637 if b in explicit:
625 638 explicit.remove(b)
626 639 pushop.outbookmarks.append((b, '', scid))
627 640 # search for overwritten bookmarks
628 641 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
629 642 if b in explicit:
630 643 explicit.remove(b)
631 644 pushop.outbookmarks.append((b, dcid, scid))
632 645 # search for bookmarks to delete
633 646 for b, scid, dcid in adddst:
634 647 if b in explicit:
635 648 explicit.remove(b)
636 649 # treat as "deleted locally"
637 650 pushop.outbookmarks.append((b, dcid, ''))
638 651 # identical bookmarks shouldn't get reported
639 652 for b, scid, dcid in same:
640 653 if b in explicit:
641 654 explicit.remove(b)
642 655
643 656 if explicit:
644 657 explicit = sorted(explicit)
645 658 # we should probably list all of them
646 659 ui.warn(_('bookmark %s does not exist on the local '
647 660 'or remote repository!\n') % explicit[0])
648 661 pushop.bkresult = 2
649 662
650 663 pushop.outbookmarks.sort()
651 664
652 665 def _pushcheckoutgoing(pushop):
653 666 outgoing = pushop.outgoing
654 667 unfi = pushop.repo.unfiltered()
655 668 if not outgoing.missing:
656 669 # nothing to push
657 670 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
658 671 return False
659 672 # something to push
660 673 if not pushop.force:
661 674 # if repo.obsstore is empty there is nothing obsolete,
662 675 # so we can skip the iteration
663 676 if unfi.obsstore:
664 677 # these messages are defined here to stay within the 80-char limit
665 678 mso = _("push includes obsolete changeset: %s!")
666 679 mst = {"unstable": _("push includes unstable changeset: %s!"),
667 680 "bumped": _("push includes bumped changeset: %s!"),
668 681 "divergent": _("push includes divergent changeset: %s!")}
669 682 # If we are pushing and there is at least one
670 683 # obsolete or unstable changeset in missing, at
671 684 # least one of the missingheads will be obsolete or
672 685 # unstable. So checking heads only is ok
673 686 for node in outgoing.missingheads:
674 687 ctx = unfi[node]
675 688 if ctx.obsolete():
676 689 raise error.Abort(mso % ctx)
677 690 elif ctx.troubled():
678 691 raise error.Abort(mst[ctx.troubles()[0]] % ctx)
679 692
680 693 discovery.checkheads(pushop)
681 694 return True
682 695
683 696 # List of names of steps to perform for an outgoing bundle2, order matters.
684 697 b2partsgenorder = []
685 698
686 699 # Mapping between step name and function
687 700 #
688 701 # This exists to help extensions wrap steps if necessary
689 702 b2partsgenmapping = {}
690 703
691 704 def b2partsgenerator(stepname, idx=None):
692 705 """decorator for function generating bundle2 part
693 706
694 707 The function is added to the step -> function mapping and appended to the
695 708 list of steps. Beware that decorated functions will be added in order
696 709 (this may matter).
697 710
698 711 You can only use this decorator for new steps; if you want to wrap a step
699 712 from an extension, change the b2partsgenmapping dictionary directly."""
700 713 def dec(func):
701 714 assert stepname not in b2partsgenmapping
702 715 b2partsgenmapping[stepname] = func
703 716 if idx is None:
704 717 b2partsgenorder.append(stepname)
705 718 else:
706 719 b2partsgenorder.insert(idx, stepname)
707 720 return func
708 721 return dec
709 722
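# Illustrative sketch (not part of this module): as the docstring above
# suggests, an extension wrapping an existing part generator changes the
# mapping directly rather than re-decorating::
#
#     origphases = b2partsgenmapping['phase']
#     def wrappedphases(pushop, bundler):
#         pushop.ui.debug('generating phase parts\n')
#         return origphases(pushop, bundler)
#     b2partsgenmapping['phase'] = wrappedphases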
710 723 def _pushb2ctxcheckheads(pushop, bundler):
711 724 """Generate race condition checking parts
712 725
713 726 Exists as an independent function to aid extensions
714 727 """
715 if not pushop.force:
716 bundler.newpart('check:heads', data=iter(pushop.remoteheads))
728 # * 'force' does not check for push races,
729 # * if we don't push anything, there is nothing to check.
730 if not pushop.force and pushop.outgoing.missingheads:
731 allowunrelated = 'related' in bundler.capabilities.get('checkheads', ())
732 if not allowunrelated:
733 bundler.newpart('check:heads', data=iter(pushop.remoteheads))
734 else:
735 affected = set()
736 for branch, heads in pushop.pushbranchmap.iteritems():
737 remoteheads, newheads, unsyncedheads, discardedheads = heads
738 if remoteheads is not None:
739 remote = set(remoteheads)
740 affected |= set(discardedheads) & remote
741 affected |= remote - set(newheads)
742 if affected:
743 data = iter(sorted(affected))
744 bundler.newpart('check:updated-heads', data=data)
717 745
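# Worked example (illustrative): suppose pushop.pushbranchmap maps
# 'default' to remoteheads=[a, b], newheads=[b, c], with no unsynced or
# discarded heads. Then affected = {a, b} - {b, c} = {a}, so only head
# 'a' is listed in the 'check:updated-heads' part; a concurrent push
# creating an unrelated head no longer makes this push abort.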
718 746 @b2partsgenerator('changeset')
719 747 def _pushb2ctx(pushop, bundler):
720 748 """handle changegroup push through bundle2
721 749
722 750 addchangegroup result is stored in the ``pushop.cgresult`` attribute.
723 751 """
724 752 if 'changesets' in pushop.stepsdone:
725 753 return
726 754 pushop.stepsdone.add('changesets')
727 755 # Send known heads to the server for race detection.
728 756 if not _pushcheckoutgoing(pushop):
729 757 return
730 758 pushop.repo.prepushoutgoinghooks(pushop)
731 759
732 760 _pushb2ctxcheckheads(pushop, bundler)
733 761
734 762 b2caps = bundle2.bundle2caps(pushop.remote)
735 763 version = '01'
736 764 cgversions = b2caps.get('changegroup')
737 765 if cgversions: # 3.1 and 3.2 ship with an empty value
738 766 cgversions = [v for v in cgversions
739 767 if v in changegroup.supportedoutgoingversions(
740 768 pushop.repo)]
741 769 if not cgversions:
742 770 raise ValueError(_('no common changegroup version'))
743 771 version = max(cgversions)
744 772 cg = changegroup.getlocalchangegroupraw(pushop.repo, 'push',
745 773 pushop.outgoing,
746 774 version=version)
747 775 cgpart = bundler.newpart('changegroup', data=cg)
748 776 if cgversions:
749 777 cgpart.addparam('version', version)
750 778 if 'treemanifest' in pushop.repo.requirements:
751 779 cgpart.addparam('treemanifest', '1')
752 780 def handlereply(op):
753 781 """extract addchangegroup returns from server reply"""
754 782 cgreplies = op.records.getreplies(cgpart.id)
755 783 assert len(cgreplies['changegroup']) == 1
756 784 pushop.cgresult = cgreplies['changegroup'][0]['return']
757 785 return handlereply
758 786
759 787 @b2partsgenerator('phase')
760 788 def _pushb2phases(pushop, bundler):
761 789 """handle phase push through bundle2"""
762 790 if 'phases' in pushop.stepsdone:
763 791 return
764 792 b2caps = bundle2.bundle2caps(pushop.remote)
765 793 if not 'pushkey' in b2caps:
766 794 return
767 795 pushop.stepsdone.add('phases')
768 796 part2node = []
769 797
770 798 def handlefailure(pushop, exc):
771 799 targetid = int(exc.partid)
772 800 for partid, node in part2node:
773 801 if partid == targetid:
774 802 raise error.Abort(_('updating %s to public failed') % node)
775 803
776 804 enc = pushkey.encode
777 805 for newremotehead in pushop.outdatedphases:
778 806 part = bundler.newpart('pushkey')
779 807 part.addparam('namespace', enc('phases'))
780 808 part.addparam('key', enc(newremotehead.hex()))
781 809 part.addparam('old', enc(str(phases.draft)))
782 810 part.addparam('new', enc(str(phases.public)))
783 811 part2node.append((part.id, newremotehead))
784 812 pushop.pkfailcb[part.id] = handlefailure
785 813
786 814 def handlereply(op):
787 815 for partid, node in part2node:
788 816 partrep = op.records.getreplies(partid)
789 817 results = partrep['pushkey']
790 818 assert len(results) <= 1
791 819 msg = None
792 820 if not results:
793 821 msg = _('server ignored update of %s to public!\n') % node
794 822 elif not int(results[0]['return']):
795 823 msg = _('updating %s to public failed!\n') % node
796 824 if msg is not None:
797 825 pushop.ui.warn(msg)
798 826 return handlereply
799 827
800 828 @b2partsgenerator('obsmarkers')
801 829 def _pushb2obsmarkers(pushop, bundler):
802 830 if 'obsmarkers' in pushop.stepsdone:
803 831 return
804 832 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
805 833 if obsolete.commonversion(remoteversions) is None:
806 834 return
807 835 pushop.stepsdone.add('obsmarkers')
808 836 if pushop.outobsmarkers:
809 837 markers = sorted(pushop.outobsmarkers)
810 838 bundle2.buildobsmarkerspart(bundler, markers)
811 839
812 840 @b2partsgenerator('bookmarks')
813 841 def _pushb2bookmarks(pushop, bundler):
814 842 """handle bookmark push through bundle2"""
815 843 if 'bookmarks' in pushop.stepsdone:
816 844 return
817 845 b2caps = bundle2.bundle2caps(pushop.remote)
818 846 if 'pushkey' not in b2caps:
819 847 return
820 848 pushop.stepsdone.add('bookmarks')
821 849 part2book = []
822 850 enc = pushkey.encode
823 851
824 852 def handlefailure(pushop, exc):
825 853 targetid = int(exc.partid)
826 854 for partid, book, action in part2book:
827 855 if partid == targetid:
828 856 raise error.Abort(bookmsgmap[action][1].rstrip() % book)
829 857 # we should not be called for parts we did not generate
830 858 assert False
831 859
832 860 for book, old, new in pushop.outbookmarks:
833 861 part = bundler.newpart('pushkey')
834 862 part.addparam('namespace', enc('bookmarks'))
835 863 part.addparam('key', enc(book))
836 864 part.addparam('old', enc(old))
837 865 part.addparam('new', enc(new))
838 866 action = 'update'
839 867 if not old:
840 868 action = 'export'
841 869 elif not new:
842 870 action = 'delete'
843 871 part2book.append((part.id, book, action))
844 872 pushop.pkfailcb[part.id] = handlefailure
845 873
846 874 def handlereply(op):
847 875 ui = pushop.ui
848 876 for partid, book, action in part2book:
849 877 partrep = op.records.getreplies(partid)
850 878 results = partrep['pushkey']
851 879 assert len(results) <= 1
852 880 if not results:
853 881 pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
854 882 else:
855 883 ret = int(results[0]['return'])
856 884 if ret:
857 885 ui.status(bookmsgmap[action][0] % book)
858 886 else:
859 887 ui.warn(bookmsgmap[action][1] % book)
860 888 if pushop.bkresult is not None:
861 889 pushop.bkresult = 1
862 890 return handlereply
863 891
864 892
865 893 def _pushbundle2(pushop):
866 894 """push data to the remote using bundle2
867 895
868 896 The only currently supported type of data is changegroup but this will
869 897 evolve in the future."""
870 898 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
871 899 pushback = (pushop.trmanager
872 900 and pushop.ui.configbool('experimental', 'bundle2.pushback'))
873 901
874 902 # create reply capability
875 903 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
876 904 allowpushback=pushback))
877 905 bundler.newpart('replycaps', data=capsblob)
878 906 replyhandlers = []
879 907 for partgenname in b2partsgenorder:
880 908 partgen = b2partsgenmapping[partgenname]
881 909 ret = partgen(pushop, bundler)
882 910 if callable(ret):
883 911 replyhandlers.append(ret)
884 912 # do not push if nothing to push
885 913 if bundler.nbparts <= 1:
886 914 return
887 915 stream = util.chunkbuffer(bundler.getchunks())
888 916 try:
889 917 try:
890 918 reply = pushop.remote.unbundle(
891 919 stream, ['force'], pushop.remote.url())
892 920 except error.BundleValueError as exc:
893 921 raise error.Abort(_('missing support for %s') % exc)
894 922 try:
895 923 trgetter = None
896 924 if pushback:
897 925 trgetter = pushop.trmanager.transaction
898 926 op = bundle2.processbundle(pushop.repo, reply, trgetter)
899 927 except error.BundleValueError as exc:
900 928 raise error.Abort(_('missing support for %s') % exc)
901 929 except bundle2.AbortFromPart as exc:
902 930 pushop.ui.status(_('remote: %s\n') % exc)
903 931 if exc.hint is not None:
904 932 pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
905 933 raise error.Abort(_('push failed on remote'))
906 934 except error.PushkeyFailed as exc:
907 935 partid = int(exc.partid)
908 936 if partid not in pushop.pkfailcb:
909 937 raise
910 938 pushop.pkfailcb[partid](pushop, exc)
911 939 for rephand in replyhandlers:
912 940 rephand(op)
913 941
914 942 def _pushchangeset(pushop):
915 943 """Make the actual push of changeset bundle to remote repo"""
916 944 if 'changesets' in pushop.stepsdone:
917 945 return
918 946 pushop.stepsdone.add('changesets')
919 947 if not _pushcheckoutgoing(pushop):
920 948 return
921 949 pushop.repo.prepushoutgoinghooks(pushop)
922 950 outgoing = pushop.outgoing
923 951 unbundle = pushop.remote.capable('unbundle')
924 952 # TODO: get bundlecaps from remote
925 953 bundlecaps = None
926 954 # create a changegroup from local
927 955 if pushop.revs is None and not (outgoing.excluded
928 956 or pushop.repo.changelog.filteredrevs):
929 957 # push everything,
930 958 # use the fast path, no race possible on push
931 959 bundler = changegroup.cg1packer(pushop.repo, bundlecaps)
932 960 cg = changegroup.getsubset(pushop.repo,
933 961 outgoing,
934 962 bundler,
935 963 'push',
936 964 fastpath=True)
937 965 else:
938 966 cg = changegroup.getchangegroup(pushop.repo, 'push', outgoing,
939 967 bundlecaps=bundlecaps)
940 968
941 969 # apply changegroup to remote
942 970 if unbundle:
943 971 # local repo finds heads on server, finds out what
944 972 # revs it must push. once revs transferred, if server
945 973 # finds it has different heads (someone else won
946 974 # commit/push race), server aborts.
947 975 if pushop.force:
948 976 remoteheads = ['force']
949 977 else:
950 978 remoteheads = pushop.remoteheads
951 979 # ssh: return remote's addchangegroup()
952 980 # http: return remote's addchangegroup() or 0 for error
953 981 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
954 982 pushop.repo.url())
955 983 else:
956 984 # we return an integer indicating remote head count
957 985 # change
958 986 pushop.cgresult = pushop.remote.addchangegroup(cg, 'push',
959 987 pushop.repo.url())
960 988
961 989 def _pushsyncphase(pushop):
962 990 """synchronise phase information locally and remotely"""
963 991 cheads = pushop.commonheads
964 992 # even when we don't push, exchanging phase data is useful
965 993 remotephases = pushop.remote.listkeys('phases')
966 994 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
967 995 and remotephases # server supports phases
968 996 and pushop.cgresult is None # nothing was pushed
969 997 and remotephases.get('publishing', False)):
970 998 # When:
971 999 # - this is a subrepo push
972 1000 # - and the remote supports phases
973 1001 # - and no changesets were pushed
974 1002 # - and the remote is publishing
975 1003 # We may be in the issue 3871 case!
976 1004 # We drop the phase synchronisation usually done as a courtesy
977 1005 # to publish changesets that are possibly still draft
978 1006 # on the remote.
979 1007 remotephases = {'publishing': 'True'}
980 1008 if not remotephases: # old server, or public-only reply from a non-publishing one
981 1009 _localphasemove(pushop, cheads)
982 1010 # don't push any phase data as there is nothing to push
983 1011 else:
984 1012 ana = phases.analyzeremotephases(pushop.repo, cheads,
985 1013 remotephases)
986 1014 pheads, droots = ana
987 1015 ### Apply remote phase on local
988 1016 if remotephases.get('publishing', False):
989 1017 _localphasemove(pushop, cheads)
990 1018 else: # publish = False
991 1019 _localphasemove(pushop, pheads)
992 1020 _localphasemove(pushop, cheads, phases.draft)
993 1021 ### Apply local phase on remote
994 1022
995 1023 if pushop.cgresult:
996 1024 if 'phases' in pushop.stepsdone:
997 1025 # phases already pushed through bundle2
998 1026 return
999 1027 outdated = pushop.outdatedphases
1000 1028 else:
1001 1029 outdated = pushop.fallbackoutdatedphases
1002 1030
1003 1031 pushop.stepsdone.add('phases')
1004 1032
1005 1033 # filter heads already turned public by the push
1006 1034 outdated = [c for c in outdated if c.node() not in pheads]
1007 1035 # fallback to independent pushkey command
1008 1036 for newremotehead in outdated:
1009 1037 r = pushop.remote.pushkey('phases',
1010 1038 newremotehead.hex(),
1011 1039 str(phases.draft),
1012 1040 str(phases.public))
1013 1041 if not r:
1014 1042 pushop.ui.warn(_('updating %s to public failed!\n')
1015 1043 % newremotehead)
1016 1044
1017 1045 def _localphasemove(pushop, nodes, phase=phases.public):
1018 1046 """move <nodes> to <phase> in the local source repo"""
1019 1047 if pushop.trmanager:
1020 1048 phases.advanceboundary(pushop.repo,
1021 1049 pushop.trmanager.transaction(),
1022 1050 phase,
1023 1051 nodes)
1024 1052 else:
1025 1053 # repo is not locked, do not change any phases!
1026 1054 # Inform the user that phases should have been moved when
1027 1055 # applicable.
1028 1056 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
1029 1057 phasestr = phases.phasenames[phase]
1030 1058 if actualmoves:
1031 1059 pushop.ui.status(_('cannot lock source repo, skipping '
1032 1060 'local %s phase update\n') % phasestr)
1033 1061
1034 1062 def _pushobsolete(pushop):
1035 1063 """utility function to push obsolete markers to a remote"""
1036 1064 if 'obsmarkers' in pushop.stepsdone:
1037 1065 return
1038 1066 repo = pushop.repo
1039 1067 remote = pushop.remote
1040 1068 pushop.stepsdone.add('obsmarkers')
1041 1069 if pushop.outobsmarkers:
1042 1070 pushop.ui.debug('try to push obsolete markers to remote\n')
1043 1071 rslts = []
1044 1072 remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
1045 1073 for key in sorted(remotedata, reverse=True):
1046 1074 # reverse sort to ensure we end with dump0
1047 1075 data = remotedata[key]
1048 1076 rslts.append(remote.pushkey('obsolete', key, '', data))
1049 1077 if [r for r in rslts if not r]:
1050 1078 msg = _('failed to push some obsolete markers!\n')
1051 1079 repo.ui.warn(msg)
1052 1080
1053 1081 def _pushbookmark(pushop):
1054 1082 """Update bookmark position on remote"""
1055 1083 if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
1056 1084 return
1057 1085 pushop.stepsdone.add('bookmarks')
1058 1086 ui = pushop.ui
1059 1087 remote = pushop.remote
1060 1088
1061 1089 for b, old, new in pushop.outbookmarks:
1062 1090 action = 'update'
1063 1091 if not old:
1064 1092 action = 'export'
1065 1093 elif not new:
1066 1094 action = 'delete'
1067 1095 if remote.pushkey('bookmarks', b, old, new):
1068 1096 ui.status(bookmsgmap[action][0] % b)
1069 1097 else:
1070 1098 ui.warn(bookmsgmap[action][1] % b)
1071 1099 # discovery can have set the value from an invalid entry
1072 1100 if pushop.bkresult is not None:
1073 1101 pushop.bkresult = 1
1074 1102
1075 1103 class pulloperation(object):
1076 1104 """A object that represent a single pull operation
1077 1105
1078 1106 It purpose is to carry pull related state and very common operation.
1079 1107
1080 1108 A new should be created at the beginning of each pull and discarded
1081 1109 afterward.
1082 1110 """
1083 1111
1084 1112 def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
1085 1113 remotebookmarks=None, streamclonerequested=None):
1086 1114 # repo we pull into
1087 1115 self.repo = repo
1088 1116 # repo we pull from
1089 1117 self.remote = remote
1090 1118 # revisions we try to pull (None means "all")
1091 1119 self.heads = heads
1092 1120 # bookmarks pulled explicitly
1093 1121 self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
1094 1122 for bookmark in bookmarks]
1095 1123 # do we force pull?
1096 1124 self.force = force
1097 1125 # whether a streaming clone was requested
1098 1126 self.streamclonerequested = streamclonerequested
1099 1127 # transaction manager
1100 1128 self.trmanager = None
1101 1129 # set of common changesets between local and remote before pull
1102 1130 self.common = None
1103 1131 # set of pulled heads
1104 1132 self.rheads = None
1105 1133 # list of missing changesets to fetch remotely
1106 1134 self.fetch = None
1107 1135 # remote bookmarks data
1108 1136 self.remotebookmarks = remotebookmarks
1109 1137 # result of changegroup pulling (used as return code by pull)
1110 1138 self.cgresult = None
1111 1139 # list of steps already done
1112 1140 self.stepsdone = set()
1113 1141 # Whether we attempted a clone from pre-generated bundles.
1114 1142 self.clonebundleattempted = False
1115 1143
1116 1144 @util.propertycache
1117 1145 def pulledsubset(self):
1118 1146 """heads of the set of changeset target by the pull"""
1119 1147 # compute target subset
1120 1148 if self.heads is None:
1121 1149 # We pulled everything possible
1122 1150 # sync on everything common
1123 1151 c = set(self.common)
1124 1152 ret = list(self.common)
1125 1153 for n in self.rheads:
1126 1154 if n not in c:
1127 1155 ret.append(n)
1128 1156 return ret
1129 1157 else:
1130 1158 # We pulled a specific subset
1131 1159 # sync on this subset
1132 1160 return self.heads
1133 1161
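# Worked example (illustrative): with heads=None, common=[c] and
# rheads=[c, h], pulledsubset evaluates to [c, h]; with heads=[x]
# it is simply [x].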
1134 1162 @util.propertycache
1135 1163 def canusebundle2(self):
1136 1164 return not _forcebundle1(self)
1137 1165
1138 1166 @util.propertycache
1139 1167 def remotebundle2caps(self):
1140 1168 return bundle2.bundle2caps(self.remote)
1141 1169
1142 1170 def gettransaction(self):
1143 1171 # deprecated; talk to trmanager directly
1144 1172 return self.trmanager.transaction()
1145 1173
1146 1174 class transactionmanager(object):
1147 1175 """An object to manage the life cycle of a transaction
1148 1176
1149 1177 It creates the transaction on demand and calls the appropriate hooks when
1150 1178 closing the transaction."""
1151 1179 def __init__(self, repo, source, url):
1152 1180 self.repo = repo
1153 1181 self.source = source
1154 1182 self.url = url
1155 1183 self._tr = None
1156 1184
1157 1185 def transaction(self):
1158 1186 """Return an open transaction object, constructing if necessary"""
1159 1187 if not self._tr:
1160 1188 trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
1161 1189 self._tr = self.repo.transaction(trname)
1162 1190 self._tr.hookargs['source'] = self.source
1163 1191 self._tr.hookargs['url'] = self.url
1164 1192 return self._tr
1165 1193
1166 1194 def close(self):
1167 1195 """close transaction if created"""
1168 1196 if self._tr is not None:
1169 1197 self._tr.close()
1170 1198
1171 1199 def release(self):
1172 1200 """release transaction if created"""
1173 1201 if self._tr is not None:
1174 1202 self._tr.release()
1175 1203
1176 1204 def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
1177 1205 streamclonerequested=None):
1178 1206 """Fetch repository data from a remote.
1179 1207
1180 1208 This is the main function used to retrieve data from a remote repository.
1181 1209
1182 1210 ``repo`` is the local repository to clone into.
1183 1211 ``remote`` is a peer instance.
1184 1212 ``heads`` is an iterable of revisions we want to pull. ``None`` (the
1185 1213 default) means to pull everything from the remote.
1186 1214 ``bookmarks`` is an iterable of bookmarks requesting to be pulled. By
1187 1215 default, all remote bookmarks are pulled.
1188 1216 ``opargs`` are additional keyword arguments to pass to ``pulloperation``
1189 1217 initialization.
1190 1218 ``streamclonerequested`` is a boolean indicating whether a "streaming
1191 1219 clone" is requested. A "streaming clone" is essentially a raw file copy
1192 1220 of revlogs from the server. This only works when the local repository is
1193 1221 empty. The default value of ``None`` means to respect the server
1194 1222 configuration for preferring stream clones.
1195 1223
1196 1224 Returns the ``pulloperation`` created for this pull.
1197 1225 """
1198 1226 if opargs is None:
1199 1227 opargs = {}
1200 1228 pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
1201 1229 streamclonerequested=streamclonerequested, **opargs)
1202 1230 if pullop.remote.local():
1203 1231 missing = set(pullop.remote.requirements) - pullop.repo.supported
1204 1232 if missing:
1205 1233 msg = _("required features are not"
1206 1234 " supported in the destination:"
1207 1235 " %s") % (', '.join(sorted(missing)))
1208 1236 raise error.Abort(msg)
1209 1237
1210 1238 wlock = lock = None
1211 1239 try:
1212 1240 wlock = pullop.repo.wlock()
1213 1241 lock = pullop.repo.lock()
1214 1242 pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
1215 1243 streamclone.maybeperformlegacystreamclone(pullop)
1216 1244 # This should ideally be in _pullbundle2(). However, it needs to run
1217 1245 # before discovery to avoid extra work.
1218 1246 _maybeapplyclonebundle(pullop)
1219 1247 _pulldiscovery(pullop)
1220 1248 if pullop.canusebundle2:
1221 1249 _pullbundle2(pullop)
1222 1250 _pullchangeset(pullop)
1223 1251 _pullphase(pullop)
1224 1252 _pullbookmarks(pullop)
1225 1253 _pullobsolete(pullop)
1226 1254 pullop.trmanager.close()
1227 1255 finally:
1228 1256 lockmod.release(pullop.trmanager, lock, wlock)
1229 1257
1230 1258 return pullop
1231 1259
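# Illustrative usage sketch (assumes a reachable peer URL): a minimal
# programmatic pull of everything from a remote::
#
#     from mercurial import exchange, hg
#     other = hg.peer(repo, {}, 'https://example.com/repo')
#     pullop = exchange.pull(repo, other)
#     print pullop.cgresult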
1232 1260 # list of steps to perform discovery before pull
1233 1261 pulldiscoveryorder = []
1234 1262
1235 1263 # Mapping between step name and function
1236 1264 #
1237 1265 # This exists to help extensions wrap steps if necessary
1238 1266 pulldiscoverymapping = {}
1239 1267
1240 1268 def pulldiscovery(stepname):
1241 1269 """decorator for function performing discovery before pull
1242 1270
1243 1271 The function is added to the step -> function mapping and appended to the
1244 1272 list of steps. Beware that decorated functions will be added in order (this
1245 1273 may matter).
1246 1274
1247 1275 You can only use this decorator for a new step; if you want to wrap a step
1248 1276 from an extension, change the pulldiscoverymapping dictionary directly."""
1249 1277 def dec(func):
1250 1278 assert stepname not in pulldiscoverymapping
1251 1279 pulldiscoverymapping[stepname] = func
1252 1280 pulldiscoveryorder.append(stepname)
1253 1281 return func
1254 1282 return dec
1255 1283
1256 1284 def _pulldiscovery(pullop):
1257 1285 """Run all discovery steps"""
1258 1286 for stepname in pulldiscoveryorder:
1259 1287 step = pulldiscoverymapping[stepname]
1260 1288 step(pullop)
1261 1289
1262 1290 @pulldiscovery('b1:bookmarks')
1263 1291 def _pullbookmarkbundle1(pullop):
1264 1292 """fetch bookmark data in bundle1 case
1265 1293
1266 1294 If not using bundle2, we have to fetch bookmarks before changeset
1267 1295 discovery to reduce the chance and impact of race conditions."""
1268 1296 if pullop.remotebookmarks is not None:
1269 1297 return
1270 1298 if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
1271 1299 # all known bundle2 servers now support listkeys, but let's be nice to
1272 1300 # new implementations.
1273 1301 return
1274 1302 pullop.remotebookmarks = pullop.remote.listkeys('bookmarks')
1275 1303
1276 1304
1277 1305 @pulldiscovery('changegroup')
1278 1306 def _pulldiscoverychangegroup(pullop):
1279 1307 """discovery phase for the pull
1280 1308
1281 1309 Currently handles changeset discovery only; will change to handle all
1282 1310 discovery at some point."""
1283 1311 tmp = discovery.findcommonincoming(pullop.repo,
1284 1312 pullop.remote,
1285 1313 heads=pullop.heads,
1286 1314 force=pullop.force)
1287 1315 common, fetch, rheads = tmp
1288 1316 nm = pullop.repo.unfiltered().changelog.nodemap
1289 1317 if fetch and rheads:
1290 1318 # If a remote head is filtered locally, let's drop it from the unknown
1291 1319 # remote heads and put it back in common.
1292 1320 #
1293 1321 # This is a hackish solution to catch most of the "common but locally
1294 1322 # hidden" situations. We do not perform discovery on the unfiltered
1295 1323 # repository because it ends up doing a pathological amount of round
1296 1324 # trips for a huge amount of changesets we do not care about.
1297 1325 #
1298 1326 # If a set of such "common but filtered" changesets exists on the server
1299 1327 # but does not include a remote head, we will not be able to detect it.
1300 1328 scommon = set(common)
1301 1329 filteredrheads = []
1302 1330 for n in rheads:
1303 1331 if n in nm:
1304 1332 if n not in scommon:
1305 1333 common.append(n)
1306 1334 else:
1307 1335 filteredrheads.append(n)
1308 1336 if not filteredrheads:
1309 1337 fetch = []
1310 1338 rheads = filteredrheads
1311 1339 pullop.common = common
1312 1340 pullop.fetch = fetch
1313 1341 pullop.rheads = rheads
1314 1342
1315 1343 def _pullbundle2(pullop):
1316 1344 """pull data using bundle2
1317 1345
1318 1346 For now, the only supported data are changegroup."""
1319 1347 kwargs = {'bundlecaps': caps20to10(pullop.repo)}
1320 1348
1321 1349 # At the moment we don't do stream clones over bundle2. If that is
1322 1350 # implemented then here's where the check for that will go.
1323 1351 streaming = False
1324 1352
1325 1353 # pulling changegroup
1326 1354 pullop.stepsdone.add('changegroup')
1327 1355
1328 1356 kwargs['common'] = pullop.common
1329 1357 kwargs['heads'] = pullop.heads or pullop.rheads
1330 1358 kwargs['cg'] = pullop.fetch
1331 1359 if 'listkeys' in pullop.remotebundle2caps:
1332 1360 kwargs['listkeys'] = ['phases']
1333 1361 if pullop.remotebookmarks is None:
1334 1362 # make sure to always include bookmark data when migrating
1335 1363 # `hg incoming --bundle` to using this function.
1336 1364 kwargs['listkeys'].append('bookmarks')
1337 1365
1338 1366 # If this is a full pull / clone and the server supports the clone bundles
1339 1367 # feature, tell the server whether we attempted a clone bundle. The
1340 1368 # presence of this flag indicates the client supports clone bundles. This
1341 1369 # will enable the server to treat clients that support clone bundles
1342 1370 # differently from those that don't.
1343 1371 if (pullop.remote.capable('clonebundles')
1344 1372 and pullop.heads is None and list(pullop.common) == [nullid]):
1345 1373 kwargs['cbattempted'] = pullop.clonebundleattempted
1346 1374
1347 1375 if streaming:
1348 1376 pullop.repo.ui.status(_('streaming all changes\n'))
1349 1377 elif not pullop.fetch:
1350 1378 pullop.repo.ui.status(_("no changes found\n"))
1351 1379 pullop.cgresult = 0
1352 1380 else:
1353 1381 if pullop.heads is None and list(pullop.common) == [nullid]:
1354 1382 pullop.repo.ui.status(_("requesting all changes\n"))
1355 1383 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1356 1384 remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
1357 1385 if obsolete.commonversion(remoteversions) is not None:
1358 1386 kwargs['obsmarkers'] = True
1359 1387 pullop.stepsdone.add('obsmarkers')
1360 1388 _pullbundle2extraprepare(pullop, kwargs)
1361 1389 bundle = pullop.remote.getbundle('pull', **kwargs)
1362 1390 try:
1363 1391 op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
1364 1392 except bundle2.AbortFromPart as exc:
1365 1393 pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
1366 1394 raise error.Abort(_('pull failed on remote'), hint=exc.hint)
1367 1395 except error.BundleValueError as exc:
1368 1396 raise error.Abort(_('missing support for %s') % exc)
1369 1397
1370 1398 if pullop.fetch:
1371 1399 results = [cg['return'] for cg in op.records['changegroup']]
1372 1400 pullop.cgresult = changegroup.combineresults(results)
1373 1401
1374 1402 # processing phases change
1375 1403 for namespace, value in op.records['listkeys']:
1376 1404 if namespace == 'phases':
1377 1405 _pullapplyphases(pullop, value)
1378 1406
1379 1407 # processing bookmark update
1380 1408 for namespace, value in op.records['listkeys']:
1381 1409 if namespace == 'bookmarks':
1382 1410 pullop.remotebookmarks = value
1383 1411
1384 1412 # bookmark data were either already there or pulled in the bundle
1385 1413 if pullop.remotebookmarks is not None:
1386 1414 _pullbookmarks(pullop)
1387 1415
1388 1416 def _pullbundle2extraprepare(pullop, kwargs):
1389 1417 """hook function so that extensions can extend the getbundle call"""
1390 1418 pass
1391 1419
1392 1420 def _pullchangeset(pullop):
1393 1421 """pull changeset from unbundle into the local repo"""
1394 1422 # We delay opening the transaction as late as possible so we
1395 1423 # don't open a transaction for nothing and don't break future useful
1396 1424 # rollback calls
1397 1425 if 'changegroup' in pullop.stepsdone:
1398 1426 return
1399 1427 pullop.stepsdone.add('changegroup')
1400 1428 if not pullop.fetch:
1401 1429 pullop.repo.ui.status(_("no changes found\n"))
1402 1430 pullop.cgresult = 0
1403 1431 return
1404 1432 pullop.gettransaction()
1405 1433 if pullop.heads is None and list(pullop.common) == [nullid]:
1406 1434 pullop.repo.ui.status(_("requesting all changes\n"))
1407 1435 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
1408 1436 # issue1320, avoid a race if remote changed after discovery
1409 1437 pullop.heads = pullop.rheads
1410 1438
1411 1439 if pullop.remote.capable('getbundle'):
1412 1440 # TODO: get bundlecaps from remote
1413 1441 cg = pullop.remote.getbundle('pull', common=pullop.common,
1414 1442 heads=pullop.heads or pullop.rheads)
1415 1443 elif pullop.heads is None:
1416 1444 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
1417 1445 elif not pullop.remote.capable('changegroupsubset'):
1418 1446 raise error.Abort(_("partial pull cannot be done because "
1419 1447 "other repository doesn't support "
1420 1448 "changegroupsubset."))
1421 1449 else:
1422 1450 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
1423 1451 pullop.cgresult = cg.apply(pullop.repo, 'pull', pullop.remote.url())
1424 1452
1425 1453 def _pullphase(pullop):
1426 1454 # Get remote phases data from remote
1427 1455 if 'phases' in pullop.stepsdone:
1428 1456 return
1429 1457 remotephases = pullop.remote.listkeys('phases')
1430 1458 _pullapplyphases(pullop, remotephases)
1431 1459
1432 1460 def _pullapplyphases(pullop, remotephases):
1433 1461 """apply phase movement from observed remote state"""
1434 1462 if 'phases' in pullop.stepsdone:
1435 1463 return
1436 1464 pullop.stepsdone.add('phases')
1437 1465 publishing = bool(remotephases.get('publishing', False))
1438 1466 if remotephases and not publishing:
1439 1467 # remote is new and non-publishing
1440 1468 pheads, _dr = phases.analyzeremotephases(pullop.repo,
1441 1469 pullop.pulledsubset,
1442 1470 remotephases)
1443 1471 dheads = pullop.pulledsubset
1444 1472 else:
1445 1473 # Remote is old or publishing; all common changesets
1446 1474 # should be seen as public
1447 1475 pheads = pullop.pulledsubset
1448 1476 dheads = []
1449 1477 unfi = pullop.repo.unfiltered()
1450 1478 phase = unfi._phasecache.phase
1451 1479 rev = unfi.changelog.nodemap.get
1452 1480 public = phases.public
1453 1481 draft = phases.draft
1454 1482
1455 1483 # exclude changesets already public locally and update the others
1456 1484 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
1457 1485 if pheads:
1458 1486 tr = pullop.gettransaction()
1459 1487 phases.advanceboundary(pullop.repo, tr, public, pheads)
1460 1488
1461 1489 # exclude changesets already draft locally and update the others
1462 1490 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
1463 1491 if dheads:
1464 1492 tr = pullop.gettransaction()
1465 1493 phases.advanceboundary(pullop.repo, tr, draft, dheads)
1466 1494
1467 1495 def _pullbookmarks(pullop):
1468 1496 """process the remote bookmark information to update the local one"""
1469 1497 if 'bookmarks' in pullop.stepsdone:
1470 1498 return
1471 1499 pullop.stepsdone.add('bookmarks')
1472 1500 repo = pullop.repo
1473 1501 remotebookmarks = pullop.remotebookmarks
1474 1502 remotebookmarks = bookmod.unhexlifybookmarks(remotebookmarks)
1475 1503 bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
1476 1504 pullop.remote.url(),
1477 1505 pullop.gettransaction,
1478 1506 explicit=pullop.explicitbookmarks)
1479 1507
1480 1508 def _pullobsolete(pullop):
1481 1509 """utility function to pull obsolete markers from a remote
1482 1510
1483 1511 `gettransaction` is a function that returns the pull transaction, creating
1484 1512 one if necessary. We return the transaction to inform the calling code that
1485 1513 a new transaction has been created (when applicable).
1486 1514
1487 1515 Exists mostly to allow overriding for experimentation purposes"""
1488 1516 if 'obsmarkers' in pullop.stepsdone:
1489 1517 return
1490 1518 pullop.stepsdone.add('obsmarkers')
1491 1519 tr = None
1492 1520 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1493 1521 pullop.repo.ui.debug('fetching remote obsolete markers\n')
1494 1522 remoteobs = pullop.remote.listkeys('obsolete')
1495 1523 if 'dump0' in remoteobs:
1496 1524 tr = pullop.gettransaction()
1497 1525 markers = []
1498 1526 for key in sorted(remoteobs, reverse=True):
1499 1527 if key.startswith('dump'):
1500 1528 data = util.b85decode(remoteobs[key])
1501 1529 version, newmarks = obsolete._readmarkers(data)
1502 1530 markers += newmarks
1503 1531 if markers:
1504 1532 pullop.repo.obsstore.add(tr, markers)
1505 1533 pullop.repo.invalidatevolatilesets()
1506 1534 return tr
1507 1535
1508 1536 def caps20to10(repo):
1509 1537 """return a set with appropriate options to use bundle20 during getbundle"""
1510 1538 caps = {'HG20'}
1511 1539 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
1512 1540 caps.add('bundle2=' + urlreq.quote(capsblob))
1513 1541 return caps
1514 1542
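# Illustrative example: the returned set looks roughly like
# {'HG20', 'bundle2=HG20%0Achangegroup%3D01%2C02...'}, where the second
# element is the urlquoted blob produced by bundle2.encodecaps().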
1515 1543 # List of names of steps to perform for a bundle2 for getbundle, order matters.
1516 1544 getbundle2partsorder = []
1517 1545
1518 1546 # Mapping between step name and function
1519 1547 #
1520 1548 # This exists to help extensions wrap steps if necessary
1521 1549 getbundle2partsmapping = {}
1522 1550
1523 1551 def getbundle2partsgenerator(stepname, idx=None):
1524 1552 """decorator for function generating bundle2 part for getbundle
1525 1553
1526 1554 The function is added to the step -> function mapping and appended to the
1527 1555 list of steps. Beware that decorated functions will be added in order
1528 1556 (this may matter).
1529 1557
1530 1558 You can only use this decorator for new steps; if you want to wrap a step
1531 1559 from an extension, change the getbundle2partsmapping dictionary directly."""
1532 1560 def dec(func):
1533 1561 assert stepname not in getbundle2partsmapping
1534 1562 getbundle2partsmapping[stepname] = func
1535 1563 if idx is None:
1536 1564 getbundle2partsorder.append(stepname)
1537 1565 else:
1538 1566 getbundle2partsorder.insert(idx, stepname)
1539 1567 return func
1540 1568 return dec
1541 1569
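# Illustrative sketch (not part of this module): the optional ``idx``
# argument lets a new part generator run before the existing ones, e.g.
# a hypothetical step inserted at the front of the order::
#
#     @getbundle2partsgenerator('mycustompart', idx=0)
#     def _getbundlemycustompart(bundler, repo, source, **kwargs):
#         pass  # would add parts to ``bundler`` here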
1542 1570 def bundle2requested(bundlecaps):
1543 1571 if bundlecaps is not None:
1544 1572 return any(cap.startswith('HG2') for cap in bundlecaps)
1545 1573 return False
1546 1574
1547 1575 def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
1548 1576 **kwargs):
1549 1577 """Return chunks constituting a bundle's raw data.
1550 1578
1551 1579 Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
1552 1580 passed.
1553 1581
1554 1582 Returns an iterator over raw chunks (of varying sizes).
1555 1583 """
1556 1584 usebundle2 = bundle2requested(bundlecaps)
1557 1585 # bundle10 case
1558 1586 if not usebundle2:
1559 1587 if bundlecaps and not kwargs.get('cg', True):
1560 1588 raise ValueError(_('request for bundle10 must include changegroup'))
1561 1589
1562 1590 if kwargs:
1563 1591 raise ValueError(_('unsupported getbundle arguments: %s')
1564 1592 % ', '.join(sorted(kwargs.keys())))
1565 1593 outgoing = _computeoutgoing(repo, heads, common)
1566 1594 bundler = changegroup.getbundler('01', repo, bundlecaps)
1567 1595 return changegroup.getsubsetraw(repo, outgoing, bundler, source)
1568 1596
1569 1597 # bundle20 case
1570 1598 b2caps = {}
1571 1599 for bcaps in bundlecaps:
1572 1600 if bcaps.startswith('bundle2='):
1573 1601 blob = urlreq.unquote(bcaps[len('bundle2='):])
1574 1602 b2caps.update(bundle2.decodecaps(blob))
1575 1603 bundler = bundle2.bundle20(repo.ui, b2caps)
1576 1604
1577 1605 kwargs['heads'] = heads
1578 1606 kwargs['common'] = common
1579 1607
1580 1608 for name in getbundle2partsorder:
1581 1609 func = getbundle2partsmapping[name]
1582 1610 func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
1583 1611 **kwargs)
1584 1612
1585 1613 return bundler.getchunks()
1586 1614
1587 1615 @getbundle2partsgenerator('changegroup')
1588 1616 def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
1589 1617 b2caps=None, heads=None, common=None, **kwargs):
1590 1618 """add a changegroup part to the requested bundle"""
1591 1619 cg = None
1592 1620 if kwargs.get('cg', True):
1593 1621 # build changegroup bundle here.
1594 1622 version = '01'
1595 1623 cgversions = b2caps.get('changegroup')
1596 1624 if cgversions: # 3.1 and 3.2 ship with an empty value
1597 1625 cgversions = [v for v in cgversions
1598 1626 if v in changegroup.supportedoutgoingversions(repo)]
1599 1627 if not cgversions:
1600 1628 raise ValueError(_('no common changegroup version'))
1601 1629 version = max(cgversions)
1602 1630 outgoing = _computeoutgoing(repo, heads, common)
1603 1631 cg = changegroup.getlocalchangegroupraw(repo, source, outgoing,
1604 1632 bundlecaps=bundlecaps,
1605 1633 version=version)
1606 1634
1607 1635 if cg:
1608 1636 part = bundler.newpart('changegroup', data=cg)
1609 1637 if cgversions:
1610 1638 part.addparam('version', version)
1611 1639 part.addparam('nbchanges', str(len(outgoing.missing)), mandatory=False)
1612 1640 if 'treemanifest' in repo.requirements:
1613 1641 part.addparam('treemanifest', '1')
1614 1642
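# Worked example (illustrative): if the client advertises
# b2caps = {'changegroup': ['01', '02']} and the repo supports both,
# version = max(['01', '02']) = '02' is used; an empty advertised value
# (as shipped by 3.1/3.2) keeps the default '01'.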
1615 1643 @getbundle2partsgenerator('listkeys')
1616 1644 def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
1617 1645 b2caps=None, **kwargs):
1618 1646 """add parts containing listkeys namespaces to the requested bundle"""
1619 1647 listkeys = kwargs.get('listkeys', ())
1620 1648 for namespace in listkeys:
1621 1649 part = bundler.newpart('listkeys')
1622 1650 part.addparam('namespace', namespace)
1623 1651 keys = repo.listkeys(namespace).items()
1624 1652 part.data = pushkey.encodekeys(keys)
1625 1653
1626 1654 @getbundle2partsgenerator('obsmarkers')
1627 1655 def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
1628 1656 b2caps=None, heads=None, **kwargs):
1629 1657 """add an obsolescence markers part to the requested bundle"""
1630 1658 if kwargs.get('obsmarkers', False):
1631 1659 if heads is None:
1632 1660 heads = repo.heads()
1633 1661 subset = [c.node() for c in repo.set('::%ln', heads)]
1634 1662 markers = repo.obsstore.relevantmarkers(subset)
1635 1663 markers = sorted(markers)
1636 1664 bundle2.buildobsmarkerspart(bundler, markers)
1637 1665
1638 1666 @getbundle2partsgenerator('hgtagsfnodes')
1639 1667 def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
1640 1668 b2caps=None, heads=None, common=None,
1641 1669 **kwargs):
1642 1670 """Transfer the .hgtags filenodes mapping.
1643 1671
1644 1672 Only values for heads in this bundle will be transferred.
1645 1673
1646 1674 The part data consists of pairs of 20 byte changeset node and .hgtags
1647 1675 filenodes raw values.
1648 1676 """
1649 1677 # Don't send unless:
1650 1678 # - changesets are being exchanged,
1651 1679 # - the client supports it.
1652 1680 if not (kwargs.get('cg', True) and 'hgtagsfnodes' in b2caps):
1653 1681 return
1654 1682
1655 1683 outgoing = _computeoutgoing(repo, heads, common)
1656 1684 bundle2.addparttagsfnodescache(repo, bundler, outgoing)
1657 1685
1658 1686 def _getbookmarks(repo, **kwargs):
1659 1687 """Returns bookmark to node mapping.
1660 1688
1661 1689 This function is primarily used to generate `bookmarks` bundle2 part.
1662 1690 It is a separate function in order to make it easy to wrap it
1663 1691 in extensions. Passing `kwargs` to the function makes it easy to
1664 1692 add new parameters in extensions.
1665 1693 """
1666 1694
1667 1695 return dict(bookmod.listbinbookmarks(repo))
1668 1696
1669 1697 def check_heads(repo, their_heads, context):
1670 1698 """check if the heads of a repo have been modified
1671 1699
1672 1700 Used by peer for unbundling.
1673 1701 """
1674 1702 heads = repo.heads()
1675 1703 heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
1676 1704 if not (their_heads == ['force'] or their_heads == heads or
1677 1705 their_heads == ['hashed', heads_hash]):
1678 1706 # someone else committed/pushed/unbundled while we
1679 1707 # were transferring data
1680 1708 raise error.PushRaced('repository changed while %s - '
1681 1709 'please try again' % context)
1682 1710
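# Illustrative sketch: the pushing client computes the same fingerprint
# before uploading, so an unchanged server passes the check::
#
#     expected = hashlib.sha1(''.join(sorted(repo.heads()))).digest()
#     their_heads = ['hashed', expected]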
1683 1711 def unbundle(repo, cg, heads, source, url):
1684 1712 """Apply a bundle to a repo.
1685 1713
1686 1714 This function makes sure the repo is locked during the application and has
1687 1715 a mechanism to check that no push race occurred between the creation of the
1688 1716 bundle and its application.
1689 1717
1690 1718 If the push was raced, a PushRaced exception is raised."""
1691 1719 r = 0
1692 1720 # need a transaction when processing a bundle2 stream
1693 1721 # [wlock, lock, tr] - needs to be an array so nested functions can modify it
1694 1722 lockandtr = [None, None, None]
1695 1723 recordout = None
1696 1724 # quick fix for output mismatch with bundle2 in 3.4
1697 1725 captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture',
1698 1726 False)
1699 1727 if url.startswith('remote:http:') or url.startswith('remote:https:'):
1700 1728 captureoutput = True
1701 1729 try:
1702 1730 # note: outside bundle1, 'heads' is expected to be empty and this
1703 1731 # 'check_heads' call will be a no-op
1704 1732 check_heads(repo, heads, 'uploading changes')
1705 1733 # push can proceed
1706 1734 if not util.safehasattr(cg, 'params'):
1707 1735 # legacy case: bundle1 (changegroup 01)
1708 1736 lockandtr[1] = repo.lock()
1709 1737 r = cg.apply(repo, source, url)
1710 1738 else:
1711 1739 r = None
1712 1740 try:
1713 1741 def gettransaction():
1714 1742 if not lockandtr[2]:
1715 1743 lockandtr[0] = repo.wlock()
1716 1744 lockandtr[1] = repo.lock()
1717 1745 lockandtr[2] = repo.transaction(source)
1718 1746 lockandtr[2].hookargs['source'] = source
1719 1747 lockandtr[2].hookargs['url'] = url
1720 1748 lockandtr[2].hookargs['bundle2'] = '1'
1721 1749 return lockandtr[2]
1722 1750
1723 1751 # Do greedy locking by default until we're satisfied with lazy
1724 1752 # locking.
1725 1753 if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
1726 1754 gettransaction()
1727 1755
1728 1756 op = bundle2.bundleoperation(repo, gettransaction,
1729 1757 captureoutput=captureoutput)
1730 1758 try:
1731 1759 op = bundle2.processbundle(repo, cg, op=op)
1732 1760 finally:
1733 1761 r = op.reply
1734 1762 if captureoutput and r is not None:
1735 1763 repo.ui.pushbuffer(error=True, subproc=True)
1736 1764 def recordout(output):
1737 1765 r.newpart('output', data=output, mandatory=False)
1738 1766 if lockandtr[2] is not None:
1739 1767 lockandtr[2].close()
1740 1768 except BaseException as exc:
1741 1769 exc.duringunbundle2 = True
1742 1770 if captureoutput and r is not None:
1743 1771 parts = exc._bundle2salvagedoutput = r.salvageoutput()
1744 1772 def recordout(output):
1745 1773 part = bundle2.bundlepart('output', data=output,
1746 1774 mandatory=False)
1747 1775 parts.append(part)
1748 1776 raise
1749 1777 finally:
1750 1778 lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
1751 1779 if recordout is not None:
1752 1780 recordout(repo.ui.popbuffer())
1753 1781 return r
1754 1782
1755 1783 def _maybeapplyclonebundle(pullop):
1756 1784 """Apply a clone bundle from a remote, if possible."""
1757 1785
1758 1786 repo = pullop.repo
1759 1787 remote = pullop.remote
1760 1788
1761 1789 if not repo.ui.configbool('ui', 'clonebundles', True):
1762 1790 return
1763 1791
1764 1792 # Only run if local repo is empty.
1765 1793 if len(repo):
1766 1794 return
1767 1795
1768 1796 if pullop.heads:
1769 1797 return
1770 1798
1771 1799 if not remote.capable('clonebundles'):
1772 1800 return
1773 1801
1774 1802 res = remote._call('clonebundles')
1775 1803
1776 1804 # If we call the wire protocol command, that's good enough to record the
1777 1805 # attempt.
1778 1806 pullop.clonebundleattempted = True
1779 1807
1780 1808 entries = parseclonebundlesmanifest(repo, res)
1781 1809 if not entries:
1782 1810 repo.ui.note(_('no clone bundles available on remote; '
1783 1811 'falling back to regular clone\n'))
1784 1812 return
1785 1813
1786 1814 entries = filterclonebundleentries(repo, entries)
1787 1815 if not entries:
1788 1816 # There is a thundering herd concern here. However, if a server
1789 1817 # operator doesn't advertise bundles appropriate for its clients,
1790 1818 # they deserve what's coming. Furthermore, from a client's
1791 1819 # perspective, no automatic fallback would mean not being able to
1792 1820 # clone!
1793 1821 repo.ui.warn(_('no compatible clone bundles available on server; '
1794 1822 'falling back to regular clone\n'))
1795 1823 repo.ui.warn(_('(you may want to report this to the server '
1796 1824 'operator)\n'))
1797 1825 return
1798 1826
1799 1827 entries = sortclonebundleentries(repo.ui, entries)
1800 1828
1801 1829 url = entries[0]['URL']
1802 1830 repo.ui.status(_('applying clone bundle from %s\n') % url)
1803 1831 if trypullbundlefromurl(repo.ui, repo, url):
1804 1832 repo.ui.status(_('finished applying clone bundle\n'))
1805 1833 # Bundle failed.
1806 1834 #
1807 1835 # We abort by default to avoid the thundering herd of
1808 1836 # clients flooding a server that was expecting expensive
1809 1837 # clone load to be offloaded.
1810 1838 elif repo.ui.configbool('ui', 'clonebundlefallback', False):
1811 1839 repo.ui.warn(_('falling back to normal clone\n'))
1812 1840 else:
1813 1841 raise error.Abort(_('error applying bundle'),
1814 1842 hint=_('if this error persists, consider contacting '
1815 1843 'the server operator or disable clone '
1816 1844 'bundles via '
1817 1845 '"--config ui.clonebundles=false"'))
1818 1846
1819 1847 def parseclonebundlesmanifest(repo, s):
1820 1848 """Parses the raw text of a clone bundles manifest.
1821 1849
1822 1850 Returns a list of dicts. The dicts have a ``URL`` key corresponding
1823 1851 to the URL and other keys are the attributes for the entry.
1824 1852 """
1825 1853 m = []
1826 1854 for line in s.splitlines():
1827 1855 fields = line.split()
1828 1856 if not fields:
1829 1857 continue
1830 1858 attrs = {'URL': fields[0]}
1831 1859 for rawattr in fields[1:]:
1832 1860 key, value = rawattr.split('=', 1)
1833 1861 key = urlreq.unquote(key)
1834 1862 value = urlreq.unquote(value)
1835 1863 attrs[key] = value
1836 1864
1837 1865 # Parse BUNDLESPEC into components. This makes client-side
1838 1866 # preferences easier to specify since you can prefer a single
1839 1867 # component of the BUNDLESPEC.
1840 1868 if key == 'BUNDLESPEC':
1841 1869 try:
1842 1870 comp, version, params = parsebundlespec(repo, value,
1843 1871 externalnames=True)
1844 1872 attrs['COMPRESSION'] = comp
1845 1873 attrs['VERSION'] = version
1846 1874 except error.InvalidBundleSpecification:
1847 1875 pass
1848 1876 except error.UnsupportedBundleSpecification:
1849 1877 pass
1850 1878
1851 1879 m.append(attrs)
1852 1880
1853 1881 return m
1854 1882
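# Illustrative example: a manifest line such as
#
#     https://example.com/full.hg BUNDLESPEC=gzip-v2 REQUIRESNI=true
#
# (hypothetical URL) parses to {'URL': 'https://example.com/full.hg',
# 'BUNDLESPEC': 'gzip-v2', 'COMPRESSION': 'gzip', 'VERSION': 'v2',
# 'REQUIRESNI': 'true'}.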
1855 1883 def filterclonebundleentries(repo, entries):
1856 1884 """Remove incompatible clone bundle manifest entries.
1857 1885
1858 1886 Accepts a list of entries parsed with ``parseclonebundlesmanifest``
1859 1887 and returns a new list consisting of only the entries that this client
1860 1888 should be able to apply.
1861 1889
1862 1890 There is no guarantee we'll be able to apply all returned entries because
1863 1891 the metadata we use to filter on may be missing or wrong.
1864 1892 """
1865 1893 newentries = []
1866 1894 for entry in entries:
1867 1895 spec = entry.get('BUNDLESPEC')
1868 1896 if spec:
1869 1897 try:
1870 1898 parsebundlespec(repo, spec, strict=True)
1871 1899 except error.InvalidBundleSpecification as e:
1872 1900 repo.ui.debug(str(e) + '\n')
1873 1901 continue
1874 1902 except error.UnsupportedBundleSpecification as e:
1875 1903 repo.ui.debug('filtering %s because unsupported bundle '
1876 1904 'spec: %s\n' % (entry['URL'], str(e)))
1877 1905 continue
1878 1906
1879 1907 if 'REQUIRESNI' in entry and not sslutil.hassni:
1880 1908 repo.ui.debug('filtering %s because SNI not supported\n' %
1881 1909 entry['URL'])
1882 1910 continue
1883 1911
1884 1912 newentries.append(entry)
1885 1913
1886 1914 return newentries
1887 1915
1888 1916 class clonebundleentry(object):
1889 1917 """Represents an item in a clone bundles manifest.
1890 1918
1891 1919 This rich class is needed to support sorting since sorted() in Python 3
1892 1920 doesn't support ``cmp`` and our comparison is complex enough that ``key=``
1893 1921 won't work.
1894 1922 """
1895 1923
1896 1924 def __init__(self, value, prefers):
1897 1925 self.value = value
1898 1926 self.prefers = prefers
1899 1927
1900 1928 def _cmp(self, other):
1901 1929 for prefkey, prefvalue in self.prefers:
1902 1930 avalue = self.value.get(prefkey)
1903 1931 bvalue = other.value.get(prefkey)
1904 1932
1905 1933 # Special case when b is missing the attribute and a matches exactly.
1906 1934 if avalue is not None and bvalue is None and avalue == prefvalue:
1907 1935 return -1
1908 1936
1909 1937 # Special case when a is missing the attribute and b matches exactly.
1910 1938 if bvalue is not None and avalue is None and bvalue == prefvalue:
1911 1939 return 1
1912 1940
1913 1941 # We can't compare unless the attribute is present on both.
1914 1942 if avalue is None or bvalue is None:
1915 1943 continue
1916 1944
1917 1945 # Same values should fall back to next attribute.
1918 1946 if avalue == bvalue:
1919 1947 continue
1920 1948
1921 1949 # Exact matches come first.
1922 1950 if avalue == prefvalue:
1923 1951 return -1
1924 1952 if bvalue == prefvalue:
1925 1953 return 1
1926 1954
1927 1955 # Fall back to next attribute.
1928 1956 continue
1929 1957
1930 1958 # If we got here we couldn't sort by attributes and prefers. Fall
1931 1959 # back to index order.
1932 1960 return 0
1933 1961
1934 1962 def __lt__(self, other):
1935 1963 return self._cmp(other) < 0
1936 1964
1937 1965 def __gt__(self, other):
1938 1966 return self._cmp(other) > 0
1939 1967
1940 1968 def __eq__(self, other):
1941 1969 return self._cmp(other) == 0
1942 1970
1943 1971 def __le__(self, other):
1944 1972 return self._cmp(other) <= 0
1945 1973
1946 1974 def __ge__(self, other):
1947 1975 return self._cmp(other) >= 0
1948 1976
1949 1977 def __ne__(self, other):
1950 1978 return self._cmp(other) != 0
1951 1979
1952 1980 def sortclonebundleentries(ui, entries):
1953 1981 prefers = ui.configlist('ui', 'clonebundleprefers', default=[])
1954 1982 if not prefers:
1955 1983 return list(entries)
1956 1984
1957 1985 prefers = [p.split('=', 1) for p in prefers]
1958 1986
1959 1987 items = sorted(clonebundleentry(v, prefers) for v in entries)
1960 1988 return [i.value for i in items]
1961 1989
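# Illustrative example: with the (hypothetical) configuration
#
#     [ui]
#     clonebundleprefers = VERSION=v2, COMPRESSION=gzip
#
# prefers becomes [['VERSION', 'v2'], ['COMPRESSION', 'gzip']], so
# entries whose VERSION is 'v2' sort first, ties broken by COMPRESSION.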
1962 1990 def trypullbundlefromurl(ui, repo, url):
1963 1991 """Attempt to apply a bundle from a URL."""
1964 1992 lock = repo.lock()
1965 1993 try:
1966 1994 tr = repo.transaction('bundleurl')
1967 1995 try:
1968 1996 try:
1969 1997 fh = urlmod.open(ui, url)
1970 1998 cg = readbundle(ui, fh, 'stream')
1971 1999
1972 2000 if isinstance(cg, bundle2.unbundle20):
1973 2001 bundle2.processbundle(repo, cg, lambda: tr)
1974 2002 elif isinstance(cg, streamclone.streamcloneapplier):
1975 2003 cg.apply(repo)
1976 2004 else:
1977 2005 cg.apply(repo, 'clonebundles', url)
1978 2006 tr.close()
1979 2007 return True
1980 2008 except urlerr.httperror as e:
1981 2009 ui.warn(_('HTTP error fetching bundle: %s\n') % str(e))
1982 2010 except urlerr.urlerror as e:
1983 2011 ui.warn(_('error fetching bundle: %s\n') % e.reason)
1984 2012
1985 2013 return False
1986 2014 finally:
1987 2015 tr.release()
1988 2016 finally:
1989 2017 lock.release()
@@ -1,1625 +1,1839 b''
1 1 ============================================================================================
2 2 Test cases where there are race condition between two clients pushing to the same repository
3 3 ============================================================================================
4 4
5 5 This file tests cases where two clients push to a server at the same time. The
6 6 "raced" client is done preparing it push bundle when the "racing" client
7 7 perform its push. The "raced" client starts its actual push after the "racing"
8 8 client push is fully complete.
9 9
10 10 A small extension and a set of shell functions ensure this scheduling.
11 11
12 12 $ cat >> delaypush.py << EOF
13 13 > """small extension orchestrate push race
14 14 >
15 15 > A client with this extension will create a file when ready, then block until
16 16 > another file is created."""
17 17 >
18 18 > import atexit
19 19 > import errno
20 20 > import os
21 21 > import time
22 22 >
23 23 > from mercurial import (
24 24 > exchange,
25 25 > extensions,
26 26 > )
27 27 >
28 28 > def delaypush(orig, pushop):
29 29 > # notify we are done preparing
30 30 > readypath = pushop.repo.ui.config('delaypush', 'ready-path', None)
31 31 > if readypath is not None:
32 32 > with open(readypath, 'w') as r:
33 33 > r.write('foo')
34 34 > pushop.repo.ui.status('wrote ready: %s\n' % readypath)
35 35 > # now wait for the other process to be done
36 36 > watchpath = pushop.repo.ui.config('delaypush', 'release-path', None)
37 37 > if watchpath is not None:
38 38 > pushop.repo.ui.status('waiting on: %s\n' % watchpath)
39 39 > limit = 100
40 40 > while 0 < limit and not os.path.exists(watchpath):
41 41 > limit -= 1
42 42 > time.sleep(0.1)
43 43 > if limit <= 0:
44 44 > pushop.repo.ui.warn('exiting without watchfile: %s' % watchpath)
45 45 > else:
46 46 > # delete the file at the end of the push
47 47 > def delete():
48 48 > try:
49 49 > os.unlink(watchpath)
50 50 > except OSError as exc:
51 51 > if exc.errno != errno.ENOENT:
52 52 > raise
53 53 > atexit.register(delete)
54 54 > return orig(pushop)
55 55 >
56 56 > def uisetup(ui):
57 57 > extensions.wrapfunction(exchange, '_pushbundle2', delaypush)
58 58 > EOF
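
The extension hooks in via extensions.wrapfunction, which replaces a function on a module with a wrapper that receives the original callable as its first argument. A generic illustrative sketch of the same pattern (the status message is made up; only the wrapping mechanism is the point):

    from mercurial import exchange, extensions

    def notingpush(orig, pushop):
        # runs in place of exchange._pushbundle2; `orig` is the original
        pushop.repo.ui.status('about to send the push bundle\n')
        return orig(pushop)

    def uisetup(ui):
        extensions.wrapfunction(exchange, '_pushbundle2', notingpush)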
59 59
60 60 $ waiton () {
61 61 > # wait for a file to be created (then delete it)
62 62 > count=100
63 63 > while [ ! -f $1 ] ;
64 64 > do
65 65 > sleep 0.1;
66 66 > count=`expr $count - 1`;
67 67 > if [ $count -lt 0 ];
68 68 > then
69 69 > break
70 70 > fi;
71 71 > done
72 72 > [ -f $1 ] || echo "ready file still missing: $1"
73 73 > rm -f $1
74 74 > }
75 75
76 76 $ release () {
77 77 > # create a file and wait for it to be deleted
78 78 > count=100
79 79 > touch $1
80 80 > while [ -f $1 ] ;
81 81 > do
82 82 > sleep 0.1;
83 83 > count=`expr $count - 1`;
84 84 > if [ $count -lt 0 ];
85 85 > then
86 86 > break
87 87 > fi;
88 88 > done
89 89 > [ ! -f $1 ] || echo "delay file still exists: $1"
90 90 > }
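
Together with the extension above, these helpers implement a file-based rendezvous: the racy client writes ready-path (consumed by `waiton`), the harness then runs the racing push, `release` creates release-path to unblock the racy client, and the client's atexit hook deletes that file, which in turn lets `release` return. An equivalent Python sketch of the two helpers (illustrative only, not used by the test):

    import os
    import time

    def waiton_py(path, tries=100, interval=0.1):
        # wait for `path` to appear, then delete it (mirrors waiton)
        while tries > 0 and not os.path.exists(path):
            tries -= 1
            time.sleep(interval)
        if not os.path.exists(path):
            print('ready file still missing: %s' % path)
            return
        os.unlink(path)

    def release_py(path, tries=100, interval=0.1):
        # create `path`, then wait for the other side to delete it (mirrors release)
        open(path, 'w').close()
        while tries > 0 and os.path.exists(path):
            tries -= 1
            time.sleep(interval)
        if os.path.exists(path):
            print('delay file still exists: %s' % path)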
91 91
92 92 $ cat >> $HGRCPATH << EOF
93 93 > [ui]
94 94 > ssh = python "$TESTDIR/dummyssh"
95 95 > # simplify output
96 96 > logtemplate = {node|short} {desc} ({branch})
97 97 > [phases]
98 98 > publish = no
99 99 > [experimental]
100 100 > evolution = all
101 101 > [alias]
102 102 > graph = log -G --rev 'sort(all(), "topo")'
103 103 > EOF
104 104
105 We test multiple cases:
106 * strict: no race detected,
107 * unrelated: races on unrelated heads are allowed.
108
109 #testcases strict unrelated
110
111 #if unrelated
112
113 $ cat >> $HGRCPATH << EOF
114 > [experimental]
115 > checkheads-strict = no
116 > EOF
117
118 #endif
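
The checkheads-strict = no knob enables the behavior this changeset adds: rather than aborting whenever the remote head set changed between discovery and the actual push, the push only aborts when a head it builds on has changed. A conceptual sketch of the relaxed check (the function and parameter names are illustrative, not the actual code in mercurial/exchange.py):

    def checkheads_sketch(expected, current, touched, strict=True):
        # expected: remote heads seen at discovery time
        # current:  remote heads at the time the push lands
        # touched:  heads this push replaces or extends
        if strict:
            # historical behavior: any change in the head set is a race
            return set(expected) == set(current)
        # relaxed behavior: unrelated new heads are tolerated as long as
        # every head we build on is still present unchanged
        return set(touched) <= set(current)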
119
105 120 Setup
106 121 -----
107 122
108 123 create a repo with one root
109 124
110 125 $ hg init server
111 126 $ cd server
112 127 $ echo root > root
113 128 $ hg ci -Am "C-ROOT"
114 129 adding root
115 130 $ cd ..
116 131
117 132 clone it in two clients
118 133
119 134 $ hg clone ssh://user@dummy/server client-racy
120 135 requesting all changes
121 136 adding changesets
122 137 adding manifests
123 138 adding file changes
124 139 added 1 changesets with 1 changes to 1 files
125 140 updating to branch default
126 141 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
127 142 $ hg clone ssh://user@dummy/server client-other
128 143 requesting all changes
129 144 adding changesets
130 145 adding manifests
131 146 adding file changes
132 147 added 1 changesets with 1 changes to 1 files
133 148 updating to branch default
134 149 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
135 150
136 151 set up one client to allow a race on push
137 152
138 153 $ cat >> client-racy/.hg/hgrc << EOF
139 154 > [extensions]
140 155 > delaypush = $TESTTMP/delaypush.py
141 156 > [delaypush]
142 157 > ready-path = $TESTTMP/readyfile
143 158 > release-path = $TESTTMP/watchfile
144 159 > EOF
145 160
146 161 Simple race, both try to push to the server at the same time
147 162 ------------------------------------------------------------
148 163
149 164 Both try to replace the same head
150 165
151 166 # a
152 167 # | b
153 168 # |/
154 169 # *
155 170
156 171 Creating changesets
157 172
158 173 $ echo b > client-other/a
159 174 $ hg -R client-other/ add client-other/a
160 175 $ hg -R client-other/ commit -m "C-A"
161 176 $ echo b > client-racy/b
162 177 $ hg -R client-racy/ add client-racy/b
163 178 $ hg -R client-racy/ commit -m "C-B"
164 179
165 180 Pushing
166 181
167 182 $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &
168 183
169 184 $ waiton $TESTTMP/readyfile
170 185
171 186 $ hg -R client-other push -r 'tip'
172 187 pushing to ssh://user@dummy/server
173 188 searching for changes
174 189 remote: adding changesets
175 190 remote: adding manifests
176 191 remote: adding file changes
177 192 remote: added 1 changesets with 1 changes to 1 files
178 193
179 194 $ release $TESTTMP/watchfile
180 195
181 196 Check the result of the push
182 197
183 198 $ cat ./push-log
184 199 pushing to ssh://user@dummy/server
185 200 searching for changes
186 201 wrote ready: $TESTTMP/readyfile
187 202 waiting on: $TESTTMP/watchfile
188 203 abort: push failed:
189 204 'repository changed while pushing - please try again'
190 205
191 206 $ hg -R server graph
192 207 o 98217d5a1659 C-A (default)
193 208 |
194 209 @ 842e2fac6304 C-ROOT (default)
195 210
196 211
197 212 Pushing on two different heads
198 213 ------------------------------
199 214
200 215 Both try to replace a different head
201 216
202 217 # a b
203 218 # | |
204 219 # * *
205 220 # |/
206 221 # *
207 222
208 223 (resync-all)
209 224
210 225 $ hg -R ./server pull ./client-racy
211 226 pulling from ./client-racy
212 227 searching for changes
213 228 adding changesets
214 229 adding manifests
215 230 adding file changes
216 231 added 1 changesets with 1 changes to 1 files (+1 heads)
217 232 (run 'hg heads' to see heads, 'hg merge' to merge)
218 233 $ hg -R ./client-other pull
219 234 pulling from ssh://user@dummy/server
220 235 searching for changes
221 236 adding changesets
222 237 adding manifests
223 238 adding file changes
224 239 added 1 changesets with 1 changes to 1 files (+1 heads)
225 240 (run 'hg heads' to see heads, 'hg merge' to merge)
226 241 $ hg -R ./client-racy pull
227 242 pulling from ssh://user@dummy/server
228 243 searching for changes
229 244 adding changesets
230 245 adding manifests
231 246 adding file changes
232 247 added 1 changesets with 1 changes to 1 files (+1 heads)
233 248 (run 'hg heads' to see heads, 'hg merge' to merge)
234 249
235 250 $ hg -R server graph
236 251 o a9149a1428e2 C-B (default)
237 252 |
238 253 | o 98217d5a1659 C-A (default)
239 254 |/
240 255 @ 842e2fac6304 C-ROOT (default)
241 256
242 257
243 258 Creating changesets
244 259
245 260 $ echo aa >> client-other/a
246 261 $ hg -R client-other/ commit -m "C-C"
247 262 $ echo bb >> client-racy/b
248 263 $ hg -R client-racy/ commit -m "C-D"
249 264
250 265 Pushing
251 266
252 267 $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &
253 268
254 269 $ waiton $TESTTMP/readyfile
255 270
256 271 $ hg -R client-other push -r 'tip'
257 272 pushing to ssh://user@dummy/server
258 273 searching for changes
259 274 remote: adding changesets
260 275 remote: adding manifests
261 276 remote: adding file changes
262 277 remote: added 1 changesets with 1 changes to 1 files
263 278
264 279 $ release $TESTTMP/watchfile
265 280
266 281 Check the result of the push
267 282
283 #if strict
268 284 $ cat ./push-log
269 285 pushing to ssh://user@dummy/server
270 286 searching for changes
271 287 wrote ready: $TESTTMP/readyfile
272 288 waiting on: $TESTTMP/watchfile
273 289 abort: push failed:
274 290 'repository changed while pushing - please try again'
275 291
276 292 $ hg -R server graph
277 293 o 51c544a58128 C-C (default)
278 294 |
279 295 o 98217d5a1659 C-A (default)
280 296 |
281 297 | o a9149a1428e2 C-B (default)
282 298 |/
283 299 @ 842e2fac6304 C-ROOT (default)
284 300
301 #endif
302 #if unrelated
303
304 (The two heads are unrelated, push should be allowed)
305
306 $ cat ./push-log
307 pushing to ssh://user@dummy/server
308 searching for changes
309 wrote ready: $TESTTMP/readyfile
310 waiting on: $TESTTMP/watchfile
311 remote: adding changesets
312 remote: adding manifests
313 remote: adding file changes
314 remote: added 1 changesets with 1 changes to 1 files
315
316 $ hg -R server graph
317 o 59e76faf78bd C-D (default)
318 |
319 o a9149a1428e2 C-B (default)
320 |
321 | o 51c544a58128 C-C (default)
322 | |
323 | o 98217d5a1659 C-A (default)
324 |/
325 @ 842e2fac6304 C-ROOT (default)
326
327 #endif
328
285 329 Pushing while someone creates a new head
286 330 -----------------------------------------
287 331
288 332 Pushing a new changeset while someone creates a new branch.
289 333
290 334 # a (raced)
291 335 # |
292 336 # * b
293 337 # |/
294 338 # *
295 339
296 340 (resync-all)
297 341
342 #if strict
343
298 344 $ hg -R ./server pull ./client-racy
299 345 pulling from ./client-racy
300 346 searching for changes
301 347 adding changesets
302 348 adding manifests
303 349 adding file changes
304 350 added 1 changesets with 1 changes to 1 files
305 351 (run 'hg update' to get a working copy)
352
353 #endif
354 #if unrelated
355
356 $ hg -R ./server pull ./client-racy
357 pulling from ./client-racy
358 searching for changes
359 no changes found
360
361 #endif
362
306 363 $ hg -R ./client-other pull
307 364 pulling from ssh://user@dummy/server
308 365 searching for changes
309 366 adding changesets
310 367 adding manifests
311 368 adding file changes
312 369 added 1 changesets with 1 changes to 1 files
313 370 (run 'hg update' to get a working copy)
314 371 $ hg -R ./client-racy pull
315 372 pulling from ssh://user@dummy/server
316 373 searching for changes
317 374 adding changesets
318 375 adding manifests
319 376 adding file changes
320 377 added 1 changesets with 1 changes to 1 files
321 378 (run 'hg update' to get a working copy)
322 379
323 380 $ hg -R server graph
324 381 o 59e76faf78bd C-D (default)
325 382 |
326 383 o a9149a1428e2 C-B (default)
327 384 |
328 385 | o 51c544a58128 C-C (default)
329 386 | |
330 387 | o 98217d5a1659 C-A (default)
331 388 |/
332 389 @ 842e2fac6304 C-ROOT (default)
333 390
334 391
335 392 Creating changesets
336 393
337 394 (new head)
338 395
339 396 $ hg -R client-other/ up 'desc("C-A")'
340 397 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
341 398 $ echo aaa >> client-other/a
342 399 $ hg -R client-other/ commit -m "C-E"
343 400 created new head
344 401
345 402 (child of an existing head)
346 403
347 404 $ hg -R client-racy/ up 'desc("C-C")'
348 405 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
349 406 $ echo bbb >> client-racy/a
350 407 $ hg -R client-racy/ commit -m "C-F"
351 408
352 409 Pushing
353 410
354 411 $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &
355 412
356 413 $ waiton $TESTTMP/readyfile
357 414
358 415 $ hg -R client-other push -fr 'tip'
359 416 pushing to ssh://user@dummy/server
360 417 searching for changes
361 418 remote: adding changesets
362 419 remote: adding manifests
363 420 remote: adding file changes
364 421 remote: added 1 changesets with 1 changes to 1 files (+1 heads)
365 422
366 423 $ release $TESTTMP/watchfile
367 424
368 425 Check the result of the push
369 426
427 #if strict
428
370 429 $ cat ./push-log
371 430 pushing to ssh://user@dummy/server
372 431 searching for changes
373 432 wrote ready: $TESTTMP/readyfile
374 433 waiting on: $TESTTMP/watchfile
375 434 abort: push failed:
376 435 'repository changed while pushing - please try again'
377 436
378 437 $ hg -R server graph
379 438 o d603e2c0cdd7 C-E (default)
380 439 |
381 440 | o 51c544a58128 C-C (default)
382 441 |/
383 442 o 98217d5a1659 C-A (default)
384 443 |
385 444 | o 59e76faf78bd C-D (default)
386 445 | |
387 446 | o a9149a1428e2 C-B (default)
388 447 |/
389 448 @ 842e2fac6304 C-ROOT (default)
390 449
391 450
451 #endif
452
453 #if unrelated
454
455 (The racing new head does not affect existing heads, push should go through)
456
457 $ cat ./push-log
458 pushing to ssh://user@dummy/server
459 searching for changes
460 wrote ready: $TESTTMP/readyfile
461 waiting on: $TESTTMP/watchfile
462 remote: adding changesets
463 remote: adding manifests
464 remote: adding file changes
465 remote: added 1 changesets with 1 changes to 1 files
466
467 $ hg -R server graph
468 o d9e379a8c432 C-F (default)
469 |
470 o 51c544a58128 C-C (default)
471 |
472 | o d603e2c0cdd7 C-E (default)
473 |/
474 o 98217d5a1659 C-A (default)
475 |
476 | o 59e76faf78bd C-D (default)
477 | |
478 | o a9149a1428e2 C-B (default)
479 |/
480 @ 842e2fac6304 C-ROOT (default)
481
482 #endif
483
392 484 Pushing touching a different named branch (same topo): new branch raced
393 485 ---------------------------------------------------------------------
394 486
395 487 Pushing two children of the same head, one on a different named branch
396 488
397 489 # a (raced, branch-a)
398 490 # |
399 491 # | b (default branch)
400 492 # |/
401 493 # *
402 494
403 495 (resync-all)
404 496
497 #if strict
498
405 499 $ hg -R ./server pull ./client-racy
406 500 pulling from ./client-racy
407 501 searching for changes
408 502 adding changesets
409 503 adding manifests
410 504 adding file changes
411 505 added 1 changesets with 1 changes to 1 files
412 506 (run 'hg update' to get a working copy)
507
508 #endif
509 #if unrelated
510
511 $ hg -R ./server pull ./client-racy
512 pulling from ./client-racy
513 searching for changes
514 no changes found
515
516 #endif
517
413 518 $ hg -R ./client-other pull
414 519 pulling from ssh://user@dummy/server
415 520 searching for changes
416 521 adding changesets
417 522 adding manifests
418 523 adding file changes
419 524 added 1 changesets with 1 changes to 1 files
420 525 (run 'hg update' to get a working copy)
421 526 $ hg -R ./client-racy pull
422 527 pulling from ssh://user@dummy/server
423 528 searching for changes
424 529 adding changesets
425 530 adding manifests
426 531 adding file changes
427 532 added 1 changesets with 1 changes to 1 files (+1 heads)
428 533 (run 'hg heads .' to see heads, 'hg merge' to merge)
429 534
430 535 $ hg -R server graph
431 536 o d9e379a8c432 C-F (default)
432 537 |
433 538 o 51c544a58128 C-C (default)
434 539 |
435 540 | o d603e2c0cdd7 C-E (default)
436 541 |/
437 542 o 98217d5a1659 C-A (default)
438 543 |
439 544 | o 59e76faf78bd C-D (default)
440 545 | |
441 546 | o a9149a1428e2 C-B (default)
442 547 |/
443 548 @ 842e2fac6304 C-ROOT (default)
444 549
445 550
446 551 Creating changesets
447 552
448 553 (update existing head)
449 554
450 555 $ hg -R client-other/ up 'desc("C-F")'
451 556 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
452 557 $ echo aaa >> client-other/a
453 558 $ hg -R client-other/ commit -m "C-G"
454 559
455 560 (new named branch from that existing head)
456 561
457 562 $ hg -R client-racy/ up 'desc("C-F")'
458 563 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
459 564 $ echo bbb >> client-racy/a
460 565 $ hg -R client-racy/ branch my-first-test-branch
461 566 marked working directory as branch my-first-test-branch
462 567 (branches are permanent and global, did you want a bookmark?)
463 568 $ hg -R client-racy/ commit -m "C-H"
464 569
465 570 Pushing
466 571
467 572 $ hg -R client-racy push -r 'tip' --new-branch > ./push-log 2>&1 &
468 573
469 574 $ waiton $TESTTMP/readyfile
470 575
471 576 $ hg -R client-other push -fr 'tip'
472 577 pushing to ssh://user@dummy/server
473 578 searching for changes
474 579 remote: adding changesets
475 580 remote: adding manifests
476 581 remote: adding file changes
477 582 remote: added 1 changesets with 1 changes to 1 files
478 583
479 584 $ release $TESTTMP/watchfile
480 585
481 586 Check the result of the push
482 587
588 #if strict
483 589 $ cat ./push-log
484 590 pushing to ssh://user@dummy/server
485 591 searching for changes
486 592 wrote ready: $TESTTMP/readyfile
487 593 waiting on: $TESTTMP/watchfile
488 594 abort: push failed:
489 595 'repository changed while pushing - please try again'
490 596
491 597 $ hg -R server graph
492 598 o 75d69cba5402 C-G (default)
493 599 |
494 600 o d9e379a8c432 C-F (default)
495 601 |
496 602 o 51c544a58128 C-C (default)
497 603 |
498 604 | o d603e2c0cdd7 C-E (default)
499 605 |/
500 606 o 98217d5a1659 C-A (default)
501 607 |
502 608 | o 59e76faf78bd C-D (default)
503 609 | |
504 610 | o a9149a1428e2 C-B (default)
505 611 |/
506 612 @ 842e2fac6304 C-ROOT (default)
507 613
614 #endif
615 #if unrelated
616
617 (heads on different named branches are considered unrelated, push should go through)
618
619 $ cat ./push-log
620 pushing to ssh://user@dummy/server
621 searching for changes
622 wrote ready: $TESTTMP/readyfile
623 waiting on: $TESTTMP/watchfile
624 remote: adding changesets
625 remote: adding manifests
626 remote: adding file changes
627 remote: added 1 changesets with 1 changes to 1 files (+1 heads)
628
629 $ hg -R server graph
630 o 833be552cfe6 C-H (my-first-test-branch)
631 |
632 | o 75d69cba5402 C-G (default)
633 |/
634 o d9e379a8c432 C-F (default)
635 |
636 o 51c544a58128 C-C (default)
637 |
638 | o d603e2c0cdd7 C-E (default)
639 |/
640 o 98217d5a1659 C-A (default)
641 |
642 | o 59e76faf78bd C-D (default)
643 | |
644 | o a9149a1428e2 C-B (default)
645 |/
646 @ 842e2fac6304 C-ROOT (default)
647
648 #endif
649
650 The racing new head does not affect existing heads, push should go through
508 651
509 652 pushing touching different named branch (same topo): old branch raced
510 653 ---------------------------------------------------------------------
511 654
512 655 Pushing two children of the same head, one on a different named branch
513 656
514 657 # a (raced, default-branch)
515 658 # |
516 659 # | b (new branch)
517 660 # |/
518 661 # * (default-branch)
519 662
520 663 (resync-all)
521 664
665 #if strict
666
522 667 $ hg -R ./server pull ./client-racy
523 668 pulling from ./client-racy
524 669 searching for changes
525 670 adding changesets
526 671 adding manifests
527 672 adding file changes
528 673 added 1 changesets with 1 changes to 1 files (+1 heads)
529 674 (run 'hg heads .' to see heads, 'hg merge' to merge)
675
676 #endif
677 #if unrelated
678
679 $ hg -R ./server pull ./client-racy
680 pulling from ./client-racy
681 searching for changes
682 no changes found
683
684 #endif
685
530 686 $ hg -R ./client-other pull
531 687 pulling from ssh://user@dummy/server
532 688 searching for changes
533 689 adding changesets
534 690 adding manifests
535 691 adding file changes
536 692 added 1 changesets with 1 changes to 1 files (+1 heads)
537 693 (run 'hg heads .' to see heads, 'hg merge' to merge)
538 694 $ hg -R ./client-racy pull
539 695 pulling from ssh://user@dummy/server
540 696 searching for changes
541 697 adding changesets
542 698 adding manifests
543 699 adding file changes
544 700 added 1 changesets with 1 changes to 1 files (+1 heads)
545 701 (run 'hg heads' to see heads)
546 702
547 703 $ hg -R server graph
548 704 o 833be552cfe6 C-H (my-first-test-branch)
549 705 |
550 706 | o 75d69cba5402 C-G (default)
551 707 |/
552 708 o d9e379a8c432 C-F (default)
553 709 |
554 710 o 51c544a58128 C-C (default)
555 711 |
556 712 | o d603e2c0cdd7 C-E (default)
557 713 |/
558 714 o 98217d5a1659 C-A (default)
559 715 |
560 716 | o 59e76faf78bd C-D (default)
561 717 | |
562 718 | o a9149a1428e2 C-B (default)
563 719 |/
564 720 @ 842e2fac6304 C-ROOT (default)
565 721
566 722
567 723 Creating changesets
568 724
569 725 (new named branch from one head)
570 726
571 727 $ hg -R client-other/ up 'desc("C-G")'
572 728 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
573 729 $ echo aaa >> client-other/a
574 730 $ hg -R client-other/ branch my-second-test-branch
575 731 marked working directory as branch my-second-test-branch
576 732 $ hg -R client-other/ commit -m "C-I"
577 733
578 734 (child "updating" that same head)
579 735
580 736 $ hg -R client-racy/ up 'desc("C-G")'
581 737 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
582 738 $ echo bbb >> client-racy/a
583 739 $ hg -R client-racy/ commit -m "C-J"
584 740
585 741 Pushing
586 742
587 743 $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &
588 744
589 745 $ waiton $TESTTMP/readyfile
590 746
591 747 $ hg -R client-other push -fr 'tip' --new-branch
592 748 pushing to ssh://user@dummy/server
593 749 searching for changes
594 750 remote: adding changesets
595 751 remote: adding manifests
596 752 remote: adding file changes
597 753 remote: added 1 changesets with 1 changes to 1 files
598 754
599 755 $ release $TESTTMP/watchfile
600 756
601 757 Check the result of the push
602 758
759 #if strict
760
603 761 $ cat ./push-log
604 762 pushing to ssh://user@dummy/server
605 763 searching for changes
606 764 wrote ready: $TESTTMP/readyfile
607 765 waiting on: $TESTTMP/watchfile
608 766 abort: push failed:
609 767 'repository changed while pushing - please try again'
610 768
611 769 $ hg -R server graph
612 770 o b35ed749f288 C-I (my-second-test-branch)
613 771 |
614 772 o 75d69cba5402 C-G (default)
615 773 |
616 774 | o 833be552cfe6 C-H (my-first-test-branch)
617 775 |/
618 776 o d9e379a8c432 C-F (default)
619 777 |
620 778 o 51c544a58128 C-C (default)
621 779 |
622 780 | o d603e2c0cdd7 C-E (default)
623 781 |/
624 782 o 98217d5a1659 C-A (default)
625 783 |
626 784 | o 59e76faf78bd C-D (default)
627 785 | |
628 786 | o a9149a1428e2 C-B (default)
629 787 |/
630 788 @ 842e2fac6304 C-ROOT (default)
631 789
632 790
791 #endif
792
793 #if unrelated
794
795 (heads on different named branches are considered unrelated, push should go through)
796
797 $ cat ./push-log
798 pushing to ssh://user@dummy/server
799 searching for changes
800 wrote ready: $TESTTMP/readyfile
801 waiting on: $TESTTMP/watchfile
802 remote: adding changesets
803 remote: adding manifests
804 remote: adding file changes
805 remote: added 1 changesets with 1 changes to 1 files (+1 heads)
806
807 $ hg -R server graph
808 o 89420bf00fae C-J (default)
809 |
810 | o b35ed749f288 C-I (my-second-test-branch)
811 |/
812 o 75d69cba5402 C-G (default)
813 |
814 | o 833be552cfe6 C-H (my-first-test-branch)
815 |/
816 o d9e379a8c432 C-F (default)
817 |
818 o 51c544a58128 C-C (default)
819 |
820 | o d603e2c0cdd7 C-E (default)
821 |/
822 o 98217d5a1659 C-A (default)
823 |
824 | o 59e76faf78bd C-D (default)
825 | |
826 | o a9149a1428e2 C-B (default)
827 |/
828 @ 842e2fac6304 C-ROOT (default)
829
830
831 #endif
832
633 833 Pushing: the racing push touches multiple heads
634 834 ----------------------------------------
635 835
636 836 There are multiple heads, but the racing push touches all of them
637 837
638 838 # a (raced)
639 839 # | b
640 840 # |/|
641 841 # * *
642 842 # |/
643 843 # *
644 844
645 845 (resync-all)
646 846
847 #if strict
848
647 849 $ hg -R ./server pull ./client-racy
648 850 pulling from ./client-racy
649 851 searching for changes
650 852 adding changesets
651 853 adding manifests
652 854 adding file changes
653 855 added 1 changesets with 1 changes to 1 files (+1 heads)
654 856 (run 'hg heads .' to see heads, 'hg merge' to merge)
857
858 #endif
859
860 #if unrelated
861
862 $ hg -R ./server pull ./client-racy
863 pulling from ./client-racy
864 searching for changes
865 no changes found
866
867 #endif
868
655 869 $ hg -R ./client-other pull
656 870 pulling from ssh://user@dummy/server
657 871 searching for changes
658 872 adding changesets
659 873 adding manifests
660 874 adding file changes
661 875 added 1 changesets with 1 changes to 1 files (+1 heads)
662 876 (run 'hg heads' to see heads)
663 877 $ hg -R ./client-racy pull
664 878 pulling from ssh://user@dummy/server
665 879 searching for changes
666 880 adding changesets
667 881 adding manifests
668 882 adding file changes
669 883 added 1 changesets with 1 changes to 1 files (+1 heads)
670 884 (run 'hg heads .' to see heads, 'hg merge' to merge)
671 885
672 886 $ hg -R server graph
673 887 o 89420bf00fae C-J (default)
674 888 |
675 889 | o b35ed749f288 C-I (my-second-test-branch)
676 890 |/
677 891 o 75d69cba5402 C-G (default)
678 892 |
679 893 | o 833be552cfe6 C-H (my-first-test-branch)
680 894 |/
681 895 o d9e379a8c432 C-F (default)
682 896 |
683 897 o 51c544a58128 C-C (default)
684 898 |
685 899 | o d603e2c0cdd7 C-E (default)
686 900 |/
687 901 o 98217d5a1659 C-A (default)
688 902 |
689 903 | o 59e76faf78bd C-D (default)
690 904 | |
691 905 | o a9149a1428e2 C-B (default)
692 906 |/
693 907 @ 842e2fac6304 C-ROOT (default)
694 908
695 909
696 910 Creating changesets
697 911
698 912 (merges heads)
699 913
700 914 $ hg -R client-other/ up 'desc("C-E")'
701 915 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
702 916 $ hg -R client-other/ merge 'desc("C-D")'
703 917 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
704 918 (branch merge, don't forget to commit)
705 919 $ hg -R client-other/ commit -m "C-K"
706 920
707 921 (update one head)
708 922
709 923 $ hg -R client-racy/ up 'desc("C-D")'
710 924 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
711 925 $ echo bbb >> client-racy/b
712 926 $ hg -R client-racy/ commit -m "C-L"
713 927
714 928 Pushing
715 929
716 930 $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &
717 931
718 932 $ waiton $TESTTMP/readyfile
719 933
720 934 $ hg -R client-other push -fr 'tip' --new-branch
721 935 pushing to ssh://user@dummy/server
722 936 searching for changes
723 937 remote: adding changesets
724 938 remote: adding manifests
725 939 remote: adding file changes
726 940 remote: added 1 changesets with 0 changes to 0 files (-1 heads)
727 941
728 942 $ release $TESTTMP/watchfile
729 943
730 944 Check the result of the push
731 945
732 946 $ cat ./push-log
733 947 pushing to ssh://user@dummy/server
734 948 searching for changes
735 949 wrote ready: $TESTTMP/readyfile
736 950 waiting on: $TESTTMP/watchfile
737 951 abort: push failed:
738 952 'repository changed while pushing - please try again'
739 953
740 954 $ hg -R server graph
741 955 o be705100c623 C-K (default)
742 956 |\
743 957 | o d603e2c0cdd7 C-E (default)
744 958 | |
745 959 o | 59e76faf78bd C-D (default)
746 960 | |
747 961 | | o 89420bf00fae C-J (default)
748 962 | | |
749 963 | | | o b35ed749f288 C-I (my-second-test-branch)
750 964 | | |/
751 965 | | o 75d69cba5402 C-G (default)
752 966 | | |
753 967 | | | o 833be552cfe6 C-H (my-first-test-branch)
754 968 | | |/
755 969 | | o d9e379a8c432 C-F (default)
756 970 | | |
757 971 | | o 51c544a58128 C-C (default)
758 972 | |/
759 973 o | a9149a1428e2 C-B (default)
760 974 | |
761 975 | o 98217d5a1659 C-A (default)
762 976 |/
763 977 @ 842e2fac6304 C-ROOT (default)
764 978
765 979
766 980 Pushing: the raced push touches multiple heads
767 981 ---------------------------------------
768 982
769 983 There are multiple heads, and the raced push touches all of them
770 984
771 985 # b
772 986 # | a (raced)
773 987 # |/|
774 988 # * *
775 989 # |/
776 990 # *
777 991
778 992 (resync-all)
779 993
780 994 $ hg -R ./server pull ./client-racy
781 995 pulling from ./client-racy
782 996 searching for changes
783 997 adding changesets
784 998 adding manifests
785 999 adding file changes
786 1000 added 1 changesets with 1 changes to 1 files (+1 heads)
787 1001 (run 'hg heads .' to see heads, 'hg merge' to merge)
788 1002 $ hg -R ./client-other pull
789 1003 pulling from ssh://user@dummy/server
790 1004 searching for changes
791 1005 adding changesets
792 1006 adding manifests
793 1007 adding file changes
794 1008 added 1 changesets with 1 changes to 1 files (+1 heads)
795 1009 (run 'hg heads .' to see heads, 'hg merge' to merge)
796 1010 $ hg -R ./client-racy pull
797 1011 pulling from ssh://user@dummy/server
798 1012 searching for changes
799 1013 adding changesets
800 1014 adding manifests
801 1015 adding file changes
802 1016 added 1 changesets with 0 changes to 0 files
803 1017 (run 'hg update' to get a working copy)
804 1018
805 1019 $ hg -R server graph
806 1020 o cac2cead0ff0 C-L (default)
807 1021 |
808 1022 | o be705100c623 C-K (default)
809 1023 |/|
810 1024 | o d603e2c0cdd7 C-E (default)
811 1025 | |
812 1026 o | 59e76faf78bd C-D (default)
813 1027 | |
814 1028 | | o 89420bf00fae C-J (default)
815 1029 | | |
816 1030 | | | o b35ed749f288 C-I (my-second-test-branch)
817 1031 | | |/
818 1032 | | o 75d69cba5402 C-G (default)
819 1033 | | |
820 1034 | | | o 833be552cfe6 C-H (my-first-test-branch)
821 1035 | | |/
822 1036 | | o d9e379a8c432 C-F (default)
823 1037 | | |
824 1038 | | o 51c544a58128 C-C (default)
825 1039 | |/
826 1040 o | a9149a1428e2 C-B (default)
827 1041 | |
828 1042 | o 98217d5a1659 C-A (default)
829 1043 |/
830 1044 @ 842e2fac6304 C-ROOT (default)
831 1045
832 1046
833 1047 Creating changesets
834 1048
835 1049 (update existing head)
836 1050
837 1051 $ echo aaa >> client-other/a
838 1052 $ hg -R client-other/ commit -m "C-M"
839 1053
840 1054 (merge heads)
841 1055
842 1056 $ hg -R client-racy/ merge 'desc("C-K")'
843 1057 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
844 1058 (branch merge, don't forget to commit)
845 1059 $ hg -R client-racy/ commit -m "C-N"
846 1060
847 1061 Pushing
848 1062
849 1063 $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &
850 1064
851 1065 $ waiton $TESTTMP/readyfile
852 1066
853 1067 $ hg -R client-other push -fr 'tip' --new-branch
854 1068 pushing to ssh://user@dummy/server
855 1069 searching for changes
856 1070 remote: adding changesets
857 1071 remote: adding manifests
858 1072 remote: adding file changes
859 1073 remote: added 1 changesets with 1 changes to 1 files
860 1074
861 1075 $ release $TESTTMP/watchfile
862 1076
863 1077 Check the result of the push
864 1078
865 1079 $ cat ./push-log
866 1080 pushing to ssh://user@dummy/server
867 1081 searching for changes
868 1082 wrote ready: $TESTTMP/readyfile
869 1083 waiting on: $TESTTMP/watchfile
870 1084 abort: push failed:
871 1085 'repository changed while pushing - please try again'
872 1086
873 1087 $ hg -R server graph
874 1088 o 6fd3090135df C-M (default)
875 1089 |
876 1090 o be705100c623 C-K (default)
877 1091 |\
878 1092 | o d603e2c0cdd7 C-E (default)
879 1093 | |
880 1094 +---o cac2cead0ff0 C-L (default)
881 1095 | |
882 1096 o | 59e76faf78bd C-D (default)
883 1097 | |
884 1098 | | o 89420bf00fae C-J (default)
885 1099 | | |
886 1100 | | | o b35ed749f288 C-I (my-second-test-branch)
887 1101 | | |/
888 1102 | | o 75d69cba5402 C-G (default)
889 1103 | | |
890 1104 | | | o 833be552cfe6 C-H (my-first-test-branch)
891 1105 | | |/
892 1106 | | o d9e379a8c432 C-F (default)
893 1107 | | |
894 1108 | | o 51c544a58128 C-C (default)
895 1109 | |/
896 1110 o | a9149a1428e2 C-B (default)
897 1111 | |
898 1112 | o 98217d5a1659 C-A (default)
899 1113 |/
900 1114 @ 842e2fac6304 C-ROOT (default)
901 1115
902 1116
903 1117 Racing push creates a new head behind another named branch
904 1118 ---------------------------------------------------------
905 1119
906 1120 Non-contiguous branches are a valid case; we test for them.
907 1121
908 1122 # b (branch default)
909 1123 # |
910 1124 # o (branch foo)
911 1125 # |
912 1126 # | a (raced, branch default)
913 1127 # |/
914 1128 # * (branch foo)
915 1129 # |
916 1130 # * (branch default)
917 1131
918 1132 (resync-all + other branch)
919 1133
920 1134 $ hg -R ./server pull ./client-racy
921 1135 pulling from ./client-racy
922 1136 searching for changes
923 1137 adding changesets
924 1138 adding manifests
925 1139 adding file changes
926 1140 added 1 changesets with 0 changes to 0 files
927 1141 (run 'hg update' to get a working copy)
928 1142
929 1143 (creates named branch on head)
930 1144
931 1145 $ hg -R ./server/ up 'desc("C-N")'
932 1146 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
933 1147 $ hg -R ./server/ branch other
934 1148 marked working directory as branch other
935 1149 $ hg -R ./server/ ci -m "C-Z"
936 1150 $ hg -R ./server/ up null
937 1151 0 files updated, 0 files merged, 3 files removed, 0 files unresolved
938 1152
939 1153 (sync client)
940 1154
941 1155 $ hg -R ./client-other pull
942 1156 pulling from ssh://user@dummy/server
943 1157 searching for changes
944 1158 adding changesets
945 1159 adding manifests
946 1160 adding file changes
947 1161 added 2 changesets with 0 changes to 0 files
948 1162 (run 'hg update' to get a working copy)
949 1163 $ hg -R ./client-racy pull
950 1164 pulling from ssh://user@dummy/server
951 1165 searching for changes
952 1166 adding changesets
953 1167 adding manifests
954 1168 adding file changes
955 1169 added 2 changesets with 1 changes to 1 files (+1 heads)
956 1170 (run 'hg heads .' to see heads, 'hg merge' to merge)
957 1171
958 1172 $ hg -R server graph
959 1173 o 55a6f1c01b48 C-Z (other)
960 1174 |
961 1175 o 866a66e18630 C-N (default)
962 1176 |\
963 1177 +---o 6fd3090135df C-M (default)
964 1178 | |
965 1179 | o cac2cead0ff0 C-L (default)
966 1180 | |
967 1181 o | be705100c623 C-K (default)
968 1182 |\|
969 1183 o | d603e2c0cdd7 C-E (default)
970 1184 | |
971 1185 | o 59e76faf78bd C-D (default)
972 1186 | |
973 1187 | | o 89420bf00fae C-J (default)
974 1188 | | |
975 1189 | | | o b35ed749f288 C-I (my-second-test-branch)
976 1190 | | |/
977 1191 | | o 75d69cba5402 C-G (default)
978 1192 | | |
979 1193 | | | o 833be552cfe6 C-H (my-first-test-branch)
980 1194 | | |/
981 1195 | | o d9e379a8c432 C-F (default)
982 1196 | | |
983 1197 +---o 51c544a58128 C-C (default)
984 1198 | |
985 1199 | o a9149a1428e2 C-B (default)
986 1200 | |
987 1201 o | 98217d5a1659 C-A (default)
988 1202 |/
989 1203 o 842e2fac6304 C-ROOT (default)
990 1204
991 1205
992 1206 Creating changesets
993 1207
994 1208 (update default head through another named branch one)
995 1209
996 1210 $ hg -R client-other/ up 'desc("C-Z")'
997 1211 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
998 1212 $ echo aaa >> client-other/a
999 1213 $ hg -R client-other/ commit -m "C-O"
1000 1214 $ echo aaa >> client-other/a
1001 1215 $ hg -R client-other/ branch --force default
1002 1216 marked working directory as branch default
1003 1217 $ hg -R client-other/ commit -m "C-P"
1004 1218 created new head
1005 1219
1006 1220 (update default head)
1007 1221
1008 1222 $ hg -R client-racy/ up 'desc("C-Z")'
1009 1223 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
1010 1224 $ echo bbb >> client-other/a
1011 1225 $ hg -R client-racy/ branch --force default
1012 1226 marked working directory as branch default
1013 1227 $ hg -R client-racy/ commit -m "C-Q"
1014 1228 created new head
1015 1229
1016 1230 Pushing
1017 1231
1018 1232 $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &
1019 1233
1020 1234 $ waiton $TESTTMP/readyfile
1021 1235
1022 1236 $ hg -R client-other push -fr 'tip' --new-branch
1023 1237 pushing to ssh://user@dummy/server
1024 1238 searching for changes
1025 1239 remote: adding changesets
1026 1240 remote: adding manifests
1027 1241 remote: adding file changes
1028 1242 remote: added 2 changesets with 1 changes to 1 files
1029 1243
1030 1244 $ release $TESTTMP/watchfile
1031 1245
1032 1246 Check the result of the push
1033 1247
1034 1248 $ cat ./push-log
1035 1249 pushing to ssh://user@dummy/server
1036 1250 searching for changes
1037 1251 wrote ready: $TESTTMP/readyfile
1038 1252 waiting on: $TESTTMP/watchfile
1039 1253 abort: push failed:
1040 1254 'repository changed while pushing - please try again'
1041 1255
1042 1256 $ hg -R server graph
1043 1257 o 1b58ee3f79e5 C-P (default)
1044 1258 |
1045 1259 o d0a85b2252a9 C-O (other)
1046 1260 |
1047 1261 o 55a6f1c01b48 C-Z (other)
1048 1262 |
1049 1263 o 866a66e18630 C-N (default)
1050 1264 |\
1051 1265 +---o 6fd3090135df C-M (default)
1052 1266 | |
1053 1267 | o cac2cead0ff0 C-L (default)
1054 1268 | |
1055 1269 o | be705100c623 C-K (default)
1056 1270 |\|
1057 1271 o | d603e2c0cdd7 C-E (default)
1058 1272 | |
1059 1273 | o 59e76faf78bd C-D (default)
1060 1274 | |
1061 1275 | | o 89420bf00fae C-J (default)
1062 1276 | | |
1063 1277 | | | o b35ed749f288 C-I (my-second-test-branch)
1064 1278 | | |/
1065 1279 | | o 75d69cba5402 C-G (default)
1066 1280 | | |
1067 1281 | | | o 833be552cfe6 C-H (my-first-test-branch)
1068 1282 | | |/
1069 1283 | | o d9e379a8c432 C-F (default)
1070 1284 | | |
1071 1285 +---o 51c544a58128 C-C (default)
1072 1286 | |
1073 1287 | o a9149a1428e2 C-B (default)
1074 1288 | |
1075 1289 o | 98217d5a1659 C-A (default)
1076 1290 |/
1077 1291 o 842e2fac6304 C-ROOT (default)
1078 1292
1079 1293
1080 1294 Raced push creates a new head behind another named branch
1081 1295 ---------------------------------------------------------
1082 1296
1083 1297 Non-contiguous branches are a valid case; we test for them.
1084 1298
1085 1299 # b (raced branch default)
1086 1300 # |
1087 1301 # o (branch foo)
1088 1302 # |
1089 1303 # | a (branch default)
1090 1304 # |/
1091 1305 # * (branch foo)
1092 1306 # |
1093 1307 # * (branch default)
1094 1308
1095 1309 (resync-all)
1096 1310
1097 1311 $ hg -R ./server pull ./client-racy
1098 1312 pulling from ./client-racy
1099 1313 searching for changes
1100 1314 adding changesets
1101 1315 adding manifests
1102 1316 adding file changes
1103 1317 added 1 changesets with 0 changes to 0 files (+1 heads)
1104 1318 (run 'hg heads .' to see heads, 'hg merge' to merge)
1105 1319 $ hg -R ./client-other pull
1106 1320 pulling from ssh://user@dummy/server
1107 1321 searching for changes
1108 1322 adding changesets
1109 1323 adding manifests
1110 1324 adding file changes
1111 1325 added 1 changesets with 0 changes to 0 files (+1 heads)
1112 1326 (run 'hg heads .' to see heads, 'hg merge' to merge)
1113 1327 $ hg -R ./client-racy pull
1114 1328 pulling from ssh://user@dummy/server
1115 1329 searching for changes
1116 1330 adding changesets
1117 1331 adding manifests
1118 1332 adding file changes
1119 1333 added 2 changesets with 1 changes to 1 files (+1 heads)
1120 1334 (run 'hg heads .' to see heads, 'hg merge' to merge)
1121 1335
1122 1336 $ hg -R server graph
1123 1337 o b0ee3d6f51bc C-Q (default)
1124 1338 |
1125 1339 | o 1b58ee3f79e5 C-P (default)
1126 1340 | |
1127 1341 | o d0a85b2252a9 C-O (other)
1128 1342 |/
1129 1343 o 55a6f1c01b48 C-Z (other)
1130 1344 |
1131 1345 o 866a66e18630 C-N (default)
1132 1346 |\
1133 1347 +---o 6fd3090135df C-M (default)
1134 1348 | |
1135 1349 | o cac2cead0ff0 C-L (default)
1136 1350 | |
1137 1351 o | be705100c623 C-K (default)
1138 1352 |\|
1139 1353 o | d603e2c0cdd7 C-E (default)
1140 1354 | |
1141 1355 | o 59e76faf78bd C-D (default)
1142 1356 | |
1143 1357 | | o 89420bf00fae C-J (default)
1144 1358 | | |
1145 1359 | | | o b35ed749f288 C-I (my-second-test-branch)
1146 1360 | | |/
1147 1361 | | o 75d69cba5402 C-G (default)
1148 1362 | | |
1149 1363 | | | o 833be552cfe6 C-H (my-first-test-branch)
1150 1364 | | |/
1151 1365 | | o d9e379a8c432 C-F (default)
1152 1366 | | |
1153 1367 +---o 51c544a58128 C-C (default)
1154 1368 | |
1155 1369 | o a9149a1428e2 C-B (default)
1156 1370 | |
1157 1371 o | 98217d5a1659 C-A (default)
1158 1372 |/
1159 1373 o 842e2fac6304 C-ROOT (default)
1160 1374
1161 1375
1162 1376 Creating changesets
1163 1377
1164 1378 (update 'other' named branch head)
1165 1379
1166 1380 $ hg -R client-other/ up 'desc("C-P")'
1167 1381 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
1168 1382 $ echo aaa >> client-other/a
1169 1383 $ hg -R client-other/ branch --force other
1170 1384 marked working directory as branch other
1171 1385 $ hg -R client-other/ commit -m "C-R"
1172 1386 created new head
1173 1387
1174 1388 (update the 'other' named branch through a 'default' changeset)
1175 1389
1176 1390 $ hg -R client-racy/ up 'desc("C-P")'
1177 1391 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
1178 1392 $ echo bbb >> client-racy/a
1179 1393 $ hg -R client-racy/ commit -m "C-S"
1180 1394 $ echo bbb >> client-racy/a
1181 1395 $ hg -R client-racy/ branch --force other
1182 1396 marked working directory as branch other
1183 1397 $ hg -R client-racy/ commit -m "C-T"
1184 1398 created new head
1185 1399
1186 1400 Pushing
1187 1401
1188 1402 $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &
1189 1403
1190 1404 $ waiton $TESTTMP/readyfile
1191 1405
1192 1406 $ hg -R client-other push -fr 'tip' --new-branch
1193 1407 pushing to ssh://user@dummy/server
1194 1408 searching for changes
1195 1409 remote: adding changesets
1196 1410 remote: adding manifests
1197 1411 remote: adding file changes
1198 1412 remote: added 1 changesets with 1 changes to 1 files
1199 1413
1200 1414 $ release $TESTTMP/watchfile
1201 1415
1202 1416 Check the result of the push
1203 1417
1204 1418 $ cat ./push-log
1205 1419 pushing to ssh://user@dummy/server
1206 1420 searching for changes
1207 1421 wrote ready: $TESTTMP/readyfile
1208 1422 waiting on: $TESTTMP/watchfile
1209 1423 abort: push failed:
1210 1424 'repository changed while pushing - please try again'
1211 1425
1212 1426 $ hg -R server graph
1213 1427 o de7b9e2ba3f6 C-R (other)
1214 1428 |
1215 1429 o 1b58ee3f79e5 C-P (default)
1216 1430 |
1217 1431 o d0a85b2252a9 C-O (other)
1218 1432 |
1219 1433 | o b0ee3d6f51bc C-Q (default)
1220 1434 |/
1221 1435 o 55a6f1c01b48 C-Z (other)
1222 1436 |
1223 1437 o 866a66e18630 C-N (default)
1224 1438 |\
1225 1439 +---o 6fd3090135df C-M (default)
1226 1440 | |
1227 1441 | o cac2cead0ff0 C-L (default)
1228 1442 | |
1229 1443 o | be705100c623 C-K (default)
1230 1444 |\|
1231 1445 o | d603e2c0cdd7 C-E (default)
1232 1446 | |
1233 1447 | o 59e76faf78bd C-D (default)
1234 1448 | |
1235 1449 | | o 89420bf00fae C-J (default)
1236 1450 | | |
1237 1451 | | | o b35ed749f288 C-I (my-second-test-branch)
1238 1452 | | |/
1239 1453 | | o 75d69cba5402 C-G (default)
1240 1454 | | |
1241 1455 | | | o 833be552cfe6 C-H (my-first-test-branch)
1242 1456 | | |/
1243 1457 | | o d9e379a8c432 C-F (default)
1244 1458 | | |
1245 1459 +---o 51c544a58128 C-C (default)
1246 1460 | |
1247 1461 | o a9149a1428e2 C-B (default)
1248 1462 | |
1249 1463 o | 98217d5a1659 C-A (default)
1250 1464 |/
1251 1465 o 842e2fac6304 C-ROOT (default)
1252 1466
1253 1467
1254 1468 Raced push creates a new head obsoleting the one touched by the racing push
1255 1469 --------------------------------------------------------------------------
1256 1470
1257 1471 # b (racing)
1258 1472 # |
1259 1473 # ΓΈβ‡ β—” a (raced)
1260 1474 # |/
1261 1475 # *
1262 1476
1263 1477 (resync-all)
1264 1478
1265 1479 $ hg -R ./server pull ./client-racy
1266 1480 pulling from ./client-racy
1267 1481 searching for changes
1268 1482 adding changesets
1269 1483 adding manifests
1270 1484 adding file changes
1271 1485 added 2 changesets with 2 changes to 1 files (+1 heads)
1272 1486 (run 'hg heads .' to see heads, 'hg merge' to merge)
1273 1487 $ hg -R ./client-other pull
1274 1488 pulling from ssh://user@dummy/server
1275 1489 searching for changes
1276 1490 adding changesets
1277 1491 adding manifests
1278 1492 adding file changes
1279 1493 added 2 changesets with 2 changes to 1 files (+1 heads)
1280 1494 (run 'hg heads' to see heads, 'hg merge' to merge)
1281 1495 $ hg -R ./client-racy pull
1282 1496 pulling from ssh://user@dummy/server
1283 1497 searching for changes
1284 1498 adding changesets
1285 1499 adding manifests
1286 1500 adding file changes
1287 1501 added 1 changesets with 1 changes to 1 files (+1 heads)
1288 1502 (run 'hg heads' to see heads, 'hg merge' to merge)
1289 1503
1290 1504 $ hg -R server graph
1291 1505 o 3d57ed3c1091 C-T (other)
1292 1506 |
1293 1507 o 2efd43f7b5ba C-S (default)
1294 1508 |
1295 1509 | o de7b9e2ba3f6 C-R (other)
1296 1510 |/
1297 1511 o 1b58ee3f79e5 C-P (default)
1298 1512 |
1299 1513 o d0a85b2252a9 C-O (other)
1300 1514 |
1301 1515 | o b0ee3d6f51bc C-Q (default)
1302 1516 |/
1303 1517 o 55a6f1c01b48 C-Z (other)
1304 1518 |
1305 1519 o 866a66e18630 C-N (default)
1306 1520 |\
1307 1521 +---o 6fd3090135df C-M (default)
1308 1522 | |
1309 1523 | o cac2cead0ff0 C-L (default)
1310 1524 | |
1311 1525 o | be705100c623 C-K (default)
1312 1526 |\|
1313 1527 o | d603e2c0cdd7 C-E (default)
1314 1528 | |
1315 1529 | o 59e76faf78bd C-D (default)
1316 1530 | |
1317 1531 | | o 89420bf00fae C-J (default)
1318 1532 | | |
1319 1533 | | | o b35ed749f288 C-I (my-second-test-branch)
1320 1534 | | |/
1321 1535 | | o 75d69cba5402 C-G (default)
1322 1536 | | |
1323 1537 | | | o 833be552cfe6 C-H (my-first-test-branch)
1324 1538 | | |/
1325 1539 | | o d9e379a8c432 C-F (default)
1326 1540 | | |
1327 1541 +---o 51c544a58128 C-C (default)
1328 1542 | |
1329 1543 | o a9149a1428e2 C-B (default)
1330 1544 | |
1331 1545 o | 98217d5a1659 C-A (default)
1332 1546 |/
1333 1547 o 842e2fac6304 C-ROOT (default)
1334 1548
1335 1549
1336 1550 Creating changesets and markers
1337 1551
1338 1552 (continue existing head)
1339 1553
1340 1554 $ hg -R client-other/ up 'desc("C-Q")'
1341 1555 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
1342 1556 $ echo aaa >> client-other/a
1343 1557 $ hg -R client-other/ commit -m "C-U"
1344 1558
1345 1559 (new topo branch obsoleting that same head)
1346 1560
1347 1561 $ hg -R client-racy/ up 'desc("C-Z")'
1348 1562 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
1349 1563 $ echo bbb >> client-racy/a
1350 1564 $ hg -R client-racy/ branch --force default
1351 1565 marked working directory as branch default
1352 1566 $ hg -R client-racy/ commit -m "C-V"
1353 1567 created new head
1354 1568 $ ID_Q=`hg -R client-racy log -T '{node}\n' -r 'desc("C-Q")'`
1355 1569 $ ID_V=`hg -R client-racy log -T '{node}\n' -r 'desc("C-V")'`
1356 1570 $ hg -R client-racy debugobsolete $ID_Q $ID_V
1357 1571
1358 1572 Pushing
1359 1573
1360 1574 $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &
1361 1575
1362 1576 $ waiton $TESTTMP/readyfile
1363 1577
1364 1578 $ hg -R client-other push -fr 'tip' --new-branch
1365 1579 pushing to ssh://user@dummy/server
1366 1580 searching for changes
1367 1581 remote: adding changesets
1368 1582 remote: adding manifests
1369 1583 remote: adding file changes
1370 1584 remote: added 1 changesets with 0 changes to 0 files
1371 1585
1372 1586 $ release $TESTTMP/watchfile
1373 1587
1374 1588 Check the result of the push
1375 1589
1376 1590 $ cat ./push-log
1377 1591 pushing to ssh://user@dummy/server
1378 1592 searching for changes
1379 1593 wrote ready: $TESTTMP/readyfile
1380 1594 waiting on: $TESTTMP/watchfile
1381 1595 abort: push failed:
1382 1596 'repository changed while pushing - please try again'
1383 1597
1384 1598 $ hg -R server debugobsolete
1385 1599 $ hg -R server graph
1386 1600 o a98a47d8b85b C-U (default)
1387 1601 |
1388 1602 o b0ee3d6f51bc C-Q (default)
1389 1603 |
1390 1604 | o 3d57ed3c1091 C-T (other)
1391 1605 | |
1392 1606 | o 2efd43f7b5ba C-S (default)
1393 1607 | |
1394 1608 | | o de7b9e2ba3f6 C-R (other)
1395 1609 | |/
1396 1610 | o 1b58ee3f79e5 C-P (default)
1397 1611 | |
1398 1612 | o d0a85b2252a9 C-O (other)
1399 1613 |/
1400 1614 o 55a6f1c01b48 C-Z (other)
1401 1615 |
1402 1616 o 866a66e18630 C-N (default)
1403 1617 |\
1404 1618 +---o 6fd3090135df C-M (default)
1405 1619 | |
1406 1620 | o cac2cead0ff0 C-L (default)
1407 1621 | |
1408 1622 o | be705100c623 C-K (default)
1409 1623 |\|
1410 1624 o | d603e2c0cdd7 C-E (default)
1411 1625 | |
1412 1626 | o 59e76faf78bd C-D (default)
1413 1627 | |
1414 1628 | | o 89420bf00fae C-J (default)
1415 1629 | | |
1416 1630 | | | o b35ed749f288 C-I (my-second-test-branch)
1417 1631 | | |/
1418 1632 | | o 75d69cba5402 C-G (default)
1419 1633 | | |
1420 1634 | | | o 833be552cfe6 C-H (my-first-test-branch)
1421 1635 | | |/
1422 1636 | | o d9e379a8c432 C-F (default)
1423 1637 | | |
1424 1638 +---o 51c544a58128 C-C (default)
1425 1639 | |
1426 1640 | o a9149a1428e2 C-B (default)
1427 1641 | |
1428 1642 o | 98217d5a1659 C-A (default)
1429 1643 |/
1430 1644 o 842e2fac6304 C-ROOT (default)
1431 1645
1432 1646
1433 1647 Racing push creates a new head obsoleting the one touched by the raced push
1434 1648 --------------------------------------------------------------------------
1435 1649
1436 1650 (mirror test case of the previous one)
1437 1651
1438 1652 # a (raced branch default)
1439 1653 # |
1440 1654 # ΓΈβ‡ β—” b (racing)
1441 1655 # |/
1442 1656 # *
1443 1657
1444 1658 (resync-all)
1445 1659
1446 1660 $ hg -R ./server pull ./client-racy
1447 1661 pulling from ./client-racy
1448 1662 searching for changes
1449 1663 adding changesets
1450 1664 adding manifests
1451 1665 adding file changes
1452 1666 added 1 changesets with 1 changes to 1 files (+1 heads)
1453 1667 1 new obsolescence markers
1454 1668 (run 'hg heads .' to see heads, 'hg merge' to merge)
1455 1669 $ hg -R ./client-other pull
1456 1670 pulling from ssh://user@dummy/server
1457 1671 searching for changes
1458 1672 adding changesets
1459 1673 adding manifests
1460 1674 adding file changes
1461 1675 added 1 changesets with 1 changes to 1 files (+1 heads)
1462 1676 1 new obsolescence markers
1463 1677 (run 'hg heads .' to see heads, 'hg merge' to merge)
1464 1678 $ hg -R ./client-racy pull
1465 1679 pulling from ssh://user@dummy/server
1466 1680 searching for changes
1467 1681 adding changesets
1468 1682 adding manifests
1469 1683 adding file changes
1470 1684 added 1 changesets with 0 changes to 0 files
1471 1685 (run 'hg update' to get a working copy)
1472 1686
1473 1687 $ hg -R server debugobsolete
1474 1688 b0ee3d6f51bc4c0ca6d4f2907708027a6c376233 720c5163ecf64dcc6216bee2d62bf3edb1882499 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
1475 1689 $ hg -R server graph
1476 1690 o 720c5163ecf6 C-V (default)
1477 1691 |
1478 1692 | o a98a47d8b85b C-U (default)
1479 1693 | |
1480 1694 | x b0ee3d6f51bc C-Q (default)
1481 1695 |/
1482 1696 | o 3d57ed3c1091 C-T (other)
1483 1697 | |
1484 1698 | o 2efd43f7b5ba C-S (default)
1485 1699 | |
1486 1700 | | o de7b9e2ba3f6 C-R (other)
1487 1701 | |/
1488 1702 | o 1b58ee3f79e5 C-P (default)
1489 1703 | |
1490 1704 | o d0a85b2252a9 C-O (other)
1491 1705 |/
1492 1706 o 55a6f1c01b48 C-Z (other)
1493 1707 |
1494 1708 o 866a66e18630 C-N (default)
1495 1709 |\
1496 1710 +---o 6fd3090135df C-M (default)
1497 1711 | |
1498 1712 | o cac2cead0ff0 C-L (default)
1499 1713 | |
1500 1714 o | be705100c623 C-K (default)
1501 1715 |\|
1502 1716 o | d603e2c0cdd7 C-E (default)
1503 1717 | |
1504 1718 | o 59e76faf78bd C-D (default)
1505 1719 | |
1506 1720 | | o 89420bf00fae C-J (default)
1507 1721 | | |
1508 1722 | | | o b35ed749f288 C-I (my-second-test-branch)
1509 1723 | | |/
1510 1724 | | o 75d69cba5402 C-G (default)
1511 1725 | | |
1512 1726 | | | o 833be552cfe6 C-H (my-first-test-branch)
1513 1727 | | |/
1514 1728 | | o d9e379a8c432 C-F (default)
1515 1729 | | |
1516 1730 +---o 51c544a58128 C-C (default)
1517 1731 | |
1518 1732 | o a9149a1428e2 C-B (default)
1519 1733 | |
1520 1734 o | 98217d5a1659 C-A (default)
1521 1735 |/
1522 1736 o 842e2fac6304 C-ROOT (default)
1523 1737
1524 1738
1525 1739 Creating changesets and markers
1526 1740
1527 1741 (new topo branch obsoleting that same head)
1528 1742
1529 1743 $ hg -R client-other/ up 'desc("C-Q")'
1530 1744 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
1531 1745 $ echo bbb >> client-other/a
1532 1746 $ hg -R client-other/ branch --force default
1533 1747 marked working directory as branch default
1534 1748 $ hg -R client-other/ commit -m "C-W"
1535 1749 created new head
1536 1750 $ ID_V=`hg -R client-other log -T '{node}\n' -r 'desc("C-V")'`
1537 1751 $ ID_W=`hg -R client-other log -T '{node}\n' -r 'desc("C-W")'`
1538 1752 $ hg -R client-other debugobsolete $ID_V $ID_W
1539 1753
1540 1754 (continue the same head)
1541 1755
1542 1756 $ echo aaa >> client-racy/a
1543 1757 $ hg -R client-racy/ commit -m "C-X"
1544 1758
1545 1759 Pushing
1546 1760
1547 1761 $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &
1548 1762
1549 1763 $ waiton $TESTTMP/readyfile
1550 1764
1551 1765 $ hg -R client-other push -fr 'tip' --new-branch
1552 1766 pushing to ssh://user@dummy/server
1553 1767 searching for changes
1554 1768 remote: adding changesets
1555 1769 remote: adding manifests
1556 1770 remote: adding file changes
1557 1771 remote: added 1 changesets with 0 changes to 1 files (+1 heads)
1558 1772 remote: 1 new obsolescence markers
1559 1773
1560 1774 $ release $TESTTMP/watchfile
1561 1775
1562 1776 Check the result of the push
1563 1777
1564 1778 $ cat ./push-log
1565 1779 pushing to ssh://user@dummy/server
1566 1780 searching for changes
1567 1781 wrote ready: $TESTTMP/readyfile
1568 1782 waiting on: $TESTTMP/watchfile
1569 1783 abort: push failed:
1570 1784 'repository changed while pushing - please try again'
1571 1785
1572 1786 $ hg -R server debugobsolete
1573 1787 b0ee3d6f51bc4c0ca6d4f2907708027a6c376233 720c5163ecf64dcc6216bee2d62bf3edb1882499 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
1574 1788 720c5163ecf64dcc6216bee2d62bf3edb1882499 39bc0598afe90ab18da460bafecc0fa953b77596 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
1575 1789 $ hg -R server graph --hidden
1576 1790 o 39bc0598afe9 C-W (default)
1577 1791 |
1578 1792 | o a98a47d8b85b C-U (default)
1579 1793 |/
1580 1794 x b0ee3d6f51bc C-Q (default)
1581 1795 |
1582 1796 | o 3d57ed3c1091 C-T (other)
1583 1797 | |
1584 1798 | o 2efd43f7b5ba C-S (default)
1585 1799 | |
1586 1800 | | o de7b9e2ba3f6 C-R (other)
1587 1801 | |/
1588 1802 | o 1b58ee3f79e5 C-P (default)
1589 1803 | |
1590 1804 | o d0a85b2252a9 C-O (other)
1591 1805 |/
1592 1806 | x 720c5163ecf6 C-V (default)
1593 1807 |/
1594 1808 o 55a6f1c01b48 C-Z (other)
1595 1809 |
1596 1810 o 866a66e18630 C-N (default)
1597 1811 |\
1598 1812 +---o 6fd3090135df C-M (default)
1599 1813 | |
1600 1814 | o cac2cead0ff0 C-L (default)
1601 1815 | |
1602 1816 o | be705100c623 C-K (default)
1603 1817 |\|
1604 1818 o | d603e2c0cdd7 C-E (default)
1605 1819 | |
1606 1820 | o 59e76faf78bd C-D (default)
1607 1821 | |
1608 1822 | | o 89420bf00fae C-J (default)
1609 1823 | | |
1610 1824 | | | o b35ed749f288 C-I (my-second-test-branch)
1611 1825 | | |/
1612 1826 | | o 75d69cba5402 C-G (default)
1613 1827 | | |
1614 1828 | | | o 833be552cfe6 C-H (my-first-test-branch)
1615 1829 | | |/
1616 1830 | | o d9e379a8c432 C-F (default)
1617 1831 | | |
1618 1832 +---o 51c544a58128 C-C (default)
1619 1833 | |
1620 1834 | o a9149a1428e2 C-B (default)
1621 1835 | |
1622 1836 o | 98217d5a1659 C-A (default)
1623 1837 |/
1624 1838 o 842e2fac6304 C-ROOT (default)
1625 1839