bundle2: also save output when error happens during part processing...
Pierre-Yves David
r24851:df0ce98c stable
mercurial/bundle2.py
@@ -1,1264 +1,1270 @@
1 1 # bundle2.py - generic container format to transmit arbitrary data.
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """Handling of the new bundle2 format
8 8
9 9 The goal of bundle2 is to act as an atomic packet transmitting a set of
10 10 payloads in an application-agnostic way. It consists of a sequence of "parts"
11 11 that are handed to and processed by the application layer.
12 12
13 13
14 14 General format architecture
15 15 ===========================
16 16
17 17 The format is structured as follows:
18 18
19 19 - magic string
20 20 - stream level parameters
21 21 - payload parts (any number)
22 22 - end of stream marker.
23 23
24 24 The binary format
25 25 ============================
26 26
27 27 All numbers are unsigned and big-endian.
28 28
29 29 stream level parameters
30 30 ------------------------
31 31
32 32 The binary format is as follows:
33 33
34 34 :params size: int32
35 35
36 36 The total number of bytes used by the parameters
37 37
38 38 :params value: arbitrary number of bytes
39 39
40 40 A blob of `params size` containing the serialized version of all stream level
41 41 parameters.
42 42
43 43 The blob contains a space separated list of parameters. Parameters with value
44 44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45 45
46 46 Empty names are forbidden.
47 47
48 48 Names MUST start with a letter. If the first letter is lower case, the
49 49 parameter is advisory and can be safely ignored. However, when the first
50 50 letter is upper case, the parameter is mandatory and the bundling process
51 51 MUST stop if it is not able to process it.
52 52
53 53 Stream parameters use a simple textual format for two main reasons:
54 54
55 55 - Stream level parameters should remain simple and we want to discourage any
56 56 crazy usage.
57 57 - Textual data allow easy human inspection of a bundle2 header in case of
58 58 troubles.
59 59
60 60 Any application-level options MUST go into a bundle2 part instead.
61 61
62 62 Payload part
63 63 ------------------------
64 64
65 65 The binary format is as follows:
66 66
67 67 :header size: int32
68 68
69 69 The total number of bytes used by the part header. When the header is empty
70 70 (size = 0) this is interpreted as the end of stream marker.
71 71
72 72 :header:
73 73
74 74 The header defines how to interpret the part. It contains two pieces of
75 75 data: the part type, and the part parameters.
76 76
77 77 The part type is used to route the part to an application-level handler
78 78 that can interpret the payload.
79 79
80 80 Part parameters are passed to the application level handler. They are
81 81 meant to convey information that will help the application level object to
82 82 interpret the part payload.
83 83
84 84 The binary format of the header is as follows:
85 85
86 86 :typesize: (one byte)
87 87
88 88 :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)
89 89
90 90 :partid: A 32-bit integer (unique in the bundle) that can be used to refer
91 91 to this part.
92 92
93 93 :parameters:
94 94
95 95 A part's parameters may have arbitrary content; the binary structure is::
96 96
97 97 <mandatory-count><advisory-count><param-sizes><param-data>
98 98
99 99 :mandatory-count: 1 byte, number of mandatory parameters
100 100
101 101 :advisory-count: 1 byte, number of advisory parameters
102 102
103 103 :param-sizes:
104 104
105 105 N pairs of bytes, where N is the total number of parameters. Each
106 106 pair contains (<size-of-key>, <size-of-value>) for one parameter.
107 107
108 108 :param-data:
109 109
110 110 A blob of bytes from which each parameter key and value can be
111 111 retrieved using the list of size couples stored in the previous
112 112 field.
113 113
114 114 Mandatory parameters come first, then the advisory ones.
115 115
116 116 Each parameter's key MUST be unique within the part.
117 117
118 118 :payload:
119 119
120 120 payload is a series of `<chunksize><chunkdata>`.
121 121
122 122 `chunksize` is an int32; `chunkdata` is plain bytes (as many as
123 123 `chunksize` says). The payload is concluded by a zero-size chunk.
124 124
125 125 The current implementation always produces either zero or one chunk.
126 126 This is an implementation limitation that will ultimately be lifted.
127 127
128 128 `chunksize` can be negative to trigger special case processing. No such
129 129 processing is in place yet.
130 130
131 131 Bundle processing
132 132 ============================
133 133
134 134 Each part is processed in order using a "part handler". Handlers are
135 135 registered for a certain part type.
136 136
137 137 The matching of a part to its handler is case insensitive. The case of the
138 138 part type is used to know if a part is mandatory or advisory. If the part type
139 139 contains any uppercase character it is considered mandatory. When no handler is
140 140 known for a mandatory part, the process is aborted and an exception is raised.
141 141 If the part is advisory and no handler is known, the part is ignored. When the
142 142 process is aborted, the full bundle is still read from the stream to keep the
143 143 channel usable. But none of the parts read after an abort are processed. In the
144 144 future, dropping the stream may become an option for channels we do not care to
145 145 preserve.
146 146 """
147 147
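A hedged sketch of reading the stream-level format described above; the helper name and the ``fp`` argument are illustrative, not part of this module::

    import struct
    import urllib

    def readstreamheader(fp):
        magic = fp.read(4)                         # e.g. 'HG20'
        size = struct.unpack('>i', fp.read(4))[0]  # :params size: int32
        params = {}
        if size:
            # space separated, urlquoted '<name>=<value>' entries
            for entry in fp.read(size).split(' '):
                pieces = entry.split('=', 1)
                name = urllib.unquote(pieces[0])
                value = urllib.unquote(pieces[1]) if len(pieces) > 1 else None
                params[name] = value
        return magic, params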
148 148 import errno
149 149 import sys
150 150 import util
151 151 import struct
152 152 import urllib
153 153 import string
154 154 import obsolete
155 155 import pushkey
156 156 import url
157 157 import re
158 158
159 159 import changegroup, error
160 160 from i18n import _
161 161
162 162 _pack = struct.pack
163 163 _unpack = struct.unpack
164 164
165 165 _fstreamparamsize = '>i'
166 166 _fpartheadersize = '>i'
167 167 _fparttypesize = '>B'
168 168 _fpartid = '>I'
169 169 _fpayloadsize = '>i'
170 170 _fpartparamcount = '>BB'
171 171
172 172 preferedchunksize = 4096
173 173
174 174 _parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')
175 175
176 176 def validateparttype(parttype):
177 177 """raise ValueError if a parttype contains invalid character"""
178 178 if _parttypeforbidden.search(parttype):
179 179 raise ValueError(parttype)
180 180
181 181 def _makefpartparamsizes(nbparams):
182 182 """return a struct format to read part parameter sizes
183 183
184 184 The number parameters is variable so we need to build that format
185 185 dynamically.
186 186 """
187 187 return '>'+('BB'*nbparams)
188 188
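A quick sketch of what the dynamic format looks like for two parameters::

    fmt = _makefpartparamsizes(2)  # '>BBBB'
    struct.calcsize(fmt)           # 4: one size byte per key and per value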
189 189 parthandlermapping = {}
190 190
191 191 def parthandler(parttype, params=()):
192 192 """decorator that register a function as a bundle2 part handler
193 193
194 194 eg::
195 195
196 196 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
197 197 def myparttypehandler(...):
198 198 '''process a part of type "my part".'''
199 199 ...
200 200 """
201 201 validateparttype(parttype)
202 202 def _decorator(func):
203 203 lparttype = parttype.lower() # enforce lower case matching.
204 204 assert lparttype not in parthandlermapping
205 205 parthandlermapping[lparttype] = func
206 206 func.params = frozenset(params)
207 207 return func
208 208 return _decorator
209 209
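As a hedged usage sketch (the part type and handler below are hypothetical), a handler declares its known mandatory parameters; any undeclared mandatory parameter on an incoming part is rejected by ``_processpart`` further down::

    @parthandler('myext:echo', ('message',))
    def handleecho(op, part):
        # 'message' is declared above, so a mandatory 'message' parameter
        # is accepted; any other mandatory parameter aborts the part.
        op.ui.write('%s\n' % part.params['message'])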
210 210 class unbundlerecords(object):
211 211 """keep record of what happens during and unbundle
212 212
213 213 New records are added using `records.add('cat', obj)`. Where 'cat' is a
214 214 category of record and obj is an arbitrary object.
215 215
216 216 `records['cat']` will return all entries of this category 'cat'.
217 217
218 218 Iterating on the object itself will yield `('category', obj)` tuples
219 219 for all entries.
220 220
221 221 All iterations happen in chronological order.
222 222 """
223 223
224 224 def __init__(self):
225 225 self._categories = {}
226 226 self._sequences = []
227 227 self._replies = {}
228 228
229 229 def add(self, category, entry, inreplyto=None):
230 230 """add a new record of a given category.
231 231
232 232 The entry can then be retrieved in the list returned by
233 233 self['category']."""
234 234 self._categories.setdefault(category, []).append(entry)
235 235 self._sequences.append((category, entry))
236 236 if inreplyto is not None:
237 237 self.getreplies(inreplyto).add(category, entry)
238 238
239 239 def getreplies(self, partid):
240 240 """get the records that are replies to a specific part"""
241 241 return self._replies.setdefault(partid, unbundlerecords())
242 242
243 243 def __getitem__(self, cat):
244 244 return tuple(self._categories.get(cat, ()))
245 245
246 246 def __iter__(self):
247 247 return iter(self._sequences)
248 248
249 249 def __len__(self):
250 250 return len(self._sequences)
251 251
252 252 def __nonzero__(self):
253 253 return bool(self._sequences)
254 254
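A small usage sketch of the record container, following its docstring::

    records = unbundlerecords()
    records.add('changegroup', {'return': 1})
    records.add('changegroup', {'return': 0}, inreplyto=7)
    records['changegroup']       # ({'return': 1}, {'return': 0})
    list(records.getreplies(7))  # [('changegroup', {'return': 0})]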
255 255 class bundleoperation(object):
256 256 """an object that represents a single bundling process
257 257
258 258 Its purpose is to carry unbundle-related objects and states.
259 259
260 260 A new object should be created at the beginning of each bundle processing.
261 261 The object is to be returned by the processing function.
262 262
263 263 The object has very little content now; it will ultimately contain:
264 264 * an access to the repo the bundle is applied to,
265 265 * a ui object,
266 266 * a way to retrieve a transaction to add changes to the repo,
267 267 * a way to record the result of processing each part,
268 268 * a way to construct a bundle response when applicable.
269 269 """
270 270
271 271 def __init__(self, repo, transactiongetter):
272 272 self.repo = repo
273 273 self.ui = repo.ui
274 274 self.records = unbundlerecords()
275 275 self.gettransaction = transactiongetter
276 276 self.reply = None
277 277
278 278 class TransactionUnavailable(RuntimeError):
279 279 pass
280 280
281 281 def _notransaction():
282 282 """default method to get a transaction while processing a bundle
283 283
284 284 Raise an exception to highlight the fact that no transaction was expected
285 285 to be created"""
286 286 raise TransactionUnavailable()
287 287
288 def processbundle(repo, unbundler, transactiongetter=None):
288 def processbundle(repo, unbundler, transactiongetter=None, op=None):
289 289 """This function process a bundle, apply effect to/from a repo
290 290
291 291 It iterates over each part then searches for and uses the proper handling
292 292 code to process the part. Parts are processed in order.
293 293
294 294 This is a very early version of this function that will be strongly reworked
295 295 before final usage.
296 296
297 297 An unknown mandatory part will abort the process.
298
299 It is temporarily possible to provide a prebuilt bundleoperation to the
300 function. This is used to ensure output is properly propagated in case of
301 an error during the unbundling. This output-capturing logic will likely be
302 reworked, and this ability will probably go away in the process.
298 303 """
304 if op is None:
299 305 if transactiongetter is None:
300 306 transactiongetter = _notransaction
301 307 op = bundleoperation(repo, transactiongetter)
302 308 # todo:
303 309 # - replace this with an init function soon.
304 310 # - exception catching
305 311 unbundler.params
306 312 iterparts = unbundler.iterparts()
307 313 part = None
308 314 try:
309 315 for part in iterparts:
310 316 _processpart(op, part)
311 317 except Exception, exc:
312 318 for part in iterparts:
313 319 # consume the bundle content
314 320 part.seek(0, 2)
315 321 # Small hack to let caller code distinguish exceptions from bundle2
316 322 # processing from processing the old format. This is mostly
317 323 # needed to handle different return codes to unbundle according to the
318 324 # type of bundle. We should probably clean up or drop this return code
319 325 # craziness in a future version.
320 326 exc.duringunbundle2 = True
321 327 salvaged = []
322 328 if op.reply is not None:
323 329 salvaged = op.reply.salvageoutput()
324 330 exc._bundle2salvagedoutput = salvaged
325 331 raise
326 332 return op
327 333
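A hedged sketch of the prebuilt-operation usage this change enables: the caller constructs the operation (and its reply bundle) up front so that output parts recorded before a failure can still be salvaged. ``repo``, ``unbundler``, and ``transactiongetter`` are assumed to exist in the caller::

    op = bundleoperation(repo, transactiongetter)
    op.reply = bundle20(repo.ui)  # capture handler output as 'output' parts
    try:
        processbundle(repo, unbundler, op=op)
    except Exception:
        # 'output' parts recorded on op.reply before the failure were
        # copied onto the exception as exc._bundle2salvagedoutput above
        raise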
328 334 def _processpart(op, part):
329 335 """process a single part from a bundle
330 336
331 337 The part is guaranteed to have been fully consumed when the function exits
332 338 (even if an exception is raised)."""
333 339 try:
334 340 try:
335 341 handler = parthandlermapping.get(part.type)
336 342 if handler is None:
337 343 raise error.UnsupportedPartError(parttype=part.type)
338 344 op.ui.debug('found a handler for part %r\n' % part.type)
339 345 unknownparams = part.mandatorykeys - handler.params
340 346 if unknownparams:
341 347 unknownparams = list(unknownparams)
342 348 unknownparams.sort()
343 349 raise error.UnsupportedPartError(parttype=part.type,
344 350 params=unknownparams)
345 351 except error.UnsupportedPartError, exc:
346 352 if part.mandatory: # mandatory parts
347 353 raise
348 354 op.ui.debug('ignoring unsupported advisory part %s\n' % exc)
349 355 return # skip to part processing
350 356
351 357 # handler is called outside the above try block so that we don't
352 358 # risk catching KeyErrors from anything other than the
353 359 # parthandlermapping lookup (any KeyError raised by handler()
354 360 # itself represents a defect of a different variety).
355 361 output = None
356 362 if op.reply is not None:
357 363 op.ui.pushbuffer(error=True, subproc=True)
358 364 output = ''
359 365 try:
360 366 handler(op, part)
361 367 finally:
362 368 if output is not None:
363 369 output = op.ui.popbuffer()
364 370 if output:
365 371 outpart = op.reply.newpart('output', data=output,
366 372 mandatory=False)
367 373 outpart.addparam('in-reply-to', str(part.id), mandatory=False)
368 374 finally:
369 375 # consume the part content to not corrupt the stream.
370 376 part.seek(0, 2)
371 377
372 378
373 379 def decodecaps(blob):
374 380 """decode a bundle2 caps bytes blob into a dictionary
375 381
376 382 The blob is a list of capabilities (one per line)
377 383 Capabilities may have values using a line of the form::
378 384
379 385 capability=value1,value2,value3
380 386
381 387 The values are always a list."""
382 388 caps = {}
383 389 for line in blob.splitlines():
384 390 if not line:
385 391 continue
386 392 if '=' not in line:
387 393 key, vals = line, ()
388 394 else:
389 395 key, vals = line.split('=', 1)
390 396 vals = vals.split(',')
391 397 key = urllib.unquote(key)
392 398 vals = [urllib.unquote(v) for v in vals]
393 399 caps[key] = vals
394 400 return caps
395 401
396 402 def encodecaps(caps):
397 403 """encode a bundle2 caps dictionary into a bytes blob"""
398 404 chunks = []
399 405 for ca in sorted(caps):
400 406 vals = caps[ca]
401 407 ca = urllib.quote(ca)
402 408 vals = [urllib.quote(v) for v in vals]
403 409 if vals:
404 410 ca = "%s=%s" % (ca, ','.join(vals))
405 411 chunks.append(ca)
406 412 return '\n'.join(chunks)
407 413
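A quick round-trip sketch of the two helpers above::

    caps = {'HG20': [], 'digests': ['md5', 'sha1']}
    blob = encodecaps(caps)   # 'HG20\ndigests=md5,sha1'
    decodecaps(blob) == caps  # True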
408 414 class bundle20(object):
409 415 """represent an outgoing bundle2 container
410 416
411 417 Use the `addparam` method to add stream level parameters and `newpart` to
412 418 populate it with parts. Then call `getchunks` to retrieve all the binary
413 419 chunks of data that compose the bundle2 container.
414 420
415 421 _magicstring = 'HG20'
416 422
417 423 def __init__(self, ui, capabilities=()):
418 424 self.ui = ui
419 425 self._params = []
420 426 self._parts = []
421 427 self.capabilities = dict(capabilities)
422 428
423 429 @property
424 430 def nbparts(self):
425 431 """total number of parts added to the bundler"""
426 432 return len(self._parts)
427 433
428 434 # methods used to define the bundle2 content
429 435 def addparam(self, name, value=None):
430 436 """add a stream level parameter"""
431 437 if not name:
432 438 raise ValueError('empty parameter name')
433 439 if name[0] not in string.letters:
434 440 raise ValueError('non letter first character: %r' % name)
435 441 self._params.append((name, value))
436 442
437 443 def addpart(self, part):
438 444 """add a new part to the bundle2 container
439 445
440 446 Parts contain the actual application payload.
441 447 assert part.id is None
442 448 part.id = len(self._parts) # very cheap counter
443 449 self._parts.append(part)
444 450
445 451 def newpart(self, typeid, *args, **kwargs):
446 452 """create a new part and add it to the containers
447 453
448 454 As the part is directly added to the containers. For now, this means
449 455 that any failure to properly initialize the part after calling
450 456 ``newpart`` should result in a failure of the whole bundling process.
451 457
452 458 You can still fall back to manually create and add if you need better
453 459 control."""
454 460 part = bundlepart(typeid, *args, **kwargs)
455 461 self.addpart(part)
456 462 return part
457 463
458 464 # methods used to generate the bundle2 stream
459 465 def getchunks(self):
460 466 self.ui.debug('start emission of %s stream\n' % self._magicstring)
461 467 yield self._magicstring
462 468 param = self._paramchunk()
463 469 self.ui.debug('bundle parameter: %s\n' % param)
464 470 yield _pack(_fstreamparamsize, len(param))
465 471 if param:
466 472 yield param
467 473
468 474 self.ui.debug('start of parts\n')
469 475 for part in self._parts:
470 476 self.ui.debug('bundle part: "%s"\n' % part.type)
471 477 for chunk in part.getchunks():
472 478 yield chunk
473 479 self.ui.debug('end of bundle\n')
474 480 yield _pack(_fpartheadersize, 0)
475 481
476 482 def _paramchunk(self):
477 483 """return a encoded version of all stream parameters"""
478 484 blocks = []
479 485 for par, value in self._params:
480 486 par = urllib.quote(par)
481 487 if value is not None:
482 488 value = urllib.quote(value)
483 489 par = '%s=%s' % (par, value)
484 490 blocks.append(par)
485 491 return ' '.join(blocks)
486 492
487 493 def salvageoutput(self):
488 494 """return a list with a copy of all output parts in the bundle
489 495
490 496 This is meant to be used during error handling to make sure we preserve
491 497 server output"""
492 498 salvaged = []
493 499 for part in self._parts:
494 500 if part.type.startswith('output'):
495 501 salvaged.append(part.copy())
496 502 return salvaged
497 503
498 504
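A hedged emission sketch for the container above; ``ui`` and the output file name are assumed::

    bundler = bundle20(ui)
    bundler.newpart('output', data='hello from the server\n',
                    mandatory=False)
    fp = open('dump.hg', 'wb')
    for chunk in bundler.getchunks():
        fp.write(chunk)
    fp.close()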
499 505 class unpackermixin(object):
500 506 """A mixin to extract bytes and struct data from a stream"""
501 507
502 508 def __init__(self, fp):
503 509 self._fp = fp
504 510 self._seekable = (util.safehasattr(fp, 'seek') and
505 511 util.safehasattr(fp, 'tell'))
506 512
507 513 def _unpack(self, format):
508 514 """unpack this struct format from the stream"""
509 515 data = self._readexact(struct.calcsize(format))
510 516 return _unpack(format, data)
511 517
512 518 def _readexact(self, size):
513 519 """read exactly <size> bytes from the stream"""
514 520 return changegroup.readexactly(self._fp, size)
515 521
516 522 def seek(self, offset, whence=0):
517 523 """move the underlying file pointer"""
518 524 if self._seekable:
519 525 return self._fp.seek(offset, whence)
520 526 else:
521 527 raise NotImplementedError(_('File pointer is not seekable'))
522 528
523 529 def tell(self):
524 530 """return the file offset, or None if file is not seekable"""
525 531 if self._seekable:
526 532 try:
527 533 return self._fp.tell()
528 534 except IOError, e:
529 535 if e.errno == errno.ESPIPE:
530 536 self._seekable = False
531 537 else:
532 538 raise
533 539 return None
534 540
535 541 def close(self):
536 542 """close underlying file"""
537 543 if util.safehasattr(self._fp, 'close'):
538 544 return self._fp.close()
539 545
540 546 def getunbundler(ui, fp, header=None):
541 547 """return a valid unbundler object for a given header"""
542 548 if header is None:
543 549 header = changegroup.readexactly(fp, 4)
544 550 magic, version = header[0:2], header[2:4]
545 551 if magic != 'HG':
546 552 raise util.Abort(_('not a Mercurial bundle'))
547 553 unbundlerclass = formatmap.get(version)
548 554 if unbundlerclass is None:
549 555 raise util.Abort(_('unknown bundle version %s') % version)
550 556 unbundler = unbundlerclass(ui, fp)
551 557 ui.debug('start processing of %s stream\n' % header)
552 558 return unbundler
553 559
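A hedged reading sketch matching the emission example above; ``ui`` is assumed::

    fp = open('dump.hg', 'rb')
    unbundler = getunbundler(ui, fp)
    for part in unbundler.iterparts():
        ui.write('part %s (%i params)\n' % (part.type, len(part.params)))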
554 560 class unbundle20(unpackermixin):
555 561 """interpret a bundle2 stream
556 562
557 563 This class is fed with a binary stream and yields parts through its
558 564 `iterparts` method."""
559 565
560 566 def __init__(self, ui, fp):
561 567 """If header is specified, we do not read it out of the stream."""
562 568 self.ui = ui
563 569 super(unbundle20, self).__init__(fp)
564 570
565 571 @util.propertycache
566 572 def params(self):
567 573 """dictionary of stream level parameters"""
568 574 self.ui.debug('reading bundle2 stream parameters\n')
569 575 params = {}
570 576 paramssize = self._unpack(_fstreamparamsize)[0]
571 577 if paramssize < 0:
572 578 raise error.BundleValueError('negative bundle param size: %i'
573 579 % paramssize)
574 580 if paramssize:
575 581 for p in self._readexact(paramssize).split(' '):
576 582 p = p.split('=', 1)
577 583 p = [urllib.unquote(i) for i in p]
578 584 if len(p) < 2:
579 585 p.append(None)
580 586 self._processparam(*p)
581 587 params[p[0]] = p[1]
582 588 return params
583 589
584 590 def _processparam(self, name, value):
585 591 """process a parameter, applying its effect if needed
586 592
587 593 Parameters starting with a lower case letter are advisory and will be
588 594 ignored when unknown. For those starting with an upper case letter,
589 595 this function will raise an error when unknown.
590 596
591 597 Note: no options are currently supported. Any input will either be
592 598 ignored or fail.
593 599 """
594 600 if not name:
595 601 raise ValueError('empty parameter name')
596 602 if name[0] not in string.letters:
597 603 raise ValueError('non letter first character: %r' % name)
598 604 # Some logic will be later added here to try to process the option for
599 605 # a dict of known parameter.
600 606 if name[0].islower():
601 607 self.ui.debug("ignoring unknown parameter %r\n" % name)
602 608 else:
603 609 raise error.UnsupportedPartError(params=(name,))
604 610
605 611
606 612 def iterparts(self):
607 613 """yield all parts contained in the stream"""
608 614 # make sure params have been loaded
609 615 self.params
610 616 self.ui.debug('start extraction of bundle2 parts\n')
611 617 headerblock = self._readpartheader()
612 618 while headerblock is not None:
613 619 part = unbundlepart(self.ui, headerblock, self._fp)
614 620 yield part
615 621 part.seek(0, 2)
616 622 headerblock = self._readpartheader()
617 623 self.ui.debug('end of bundle2 stream\n')
618 624
619 625 def _readpartheader(self):
620 626 """reads a part header size and return the bytes blob
621 627
622 628 returns None if empty"""
623 629 headersize = self._unpack(_fpartheadersize)[0]
624 630 if headersize < 0:
625 631 raise error.BundleValueError('negative part header size: %i'
626 632 % headersize)
627 633 self.ui.debug('part header size: %i\n' % headersize)
628 634 if headersize:
629 635 return self._readexact(headersize)
630 636 return None
631 637
632 638 def compressed(self):
633 639 return False
634 640
635 641 formatmap = {'20': unbundle20}
636 642
637 643 class bundlepart(object):
638 644 """A bundle2 part contains application level payload
639 645
640 646 The part `type` is used to route the part to the application level
641 647 handler.
642 648
643 649 The part payload is contained in ``part.data``. It could be raw bytes or a
644 650 generator of byte chunks.
645 651
646 652 You can add parameters to the part using the ``addparam`` method.
647 653 Parameters can be either mandatory (default) or advisory. The remote side
648 654 should be able to safely ignore the advisory ones.
649 655
650 656 Neither data nor parameters can be modified after the generation has begun.
651 657 """
652 658
653 659 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
654 660 data='', mandatory=True):
655 661 validateparttype(parttype)
656 662 self.id = None
657 663 self.type = parttype
658 664 self._data = data
659 665 self._mandatoryparams = list(mandatoryparams)
660 666 self._advisoryparams = list(advisoryparams)
661 667 # checking for duplicated entries
662 668 self._seenparams = set()
663 669 for pname, __ in self._mandatoryparams + self._advisoryparams:
664 670 if pname in self._seenparams:
665 671 raise RuntimeError('duplicated params: %s' % pname)
666 672 self._seenparams.add(pname)
667 673 # status of the part's generation:
668 674 # - None: not started,
669 675 # - False: currently generated,
670 676 # - True: generation done.
671 677 self._generated = None
672 678 self.mandatory = mandatory
673 679
674 680 def copy(self):
675 681 """return a copy of the part
676 682
677 683 The new part has the very same content but no partid assigned yet.
678 684 Parts with generated data cannot be copied."""
679 685 assert not util.safehasattr(self.data, 'next')
680 686 return self.__class__(self.type, self._mandatoryparams,
681 687 self._advisoryparams, self._data, self.mandatory)
682 688
683 689 # methods used to define the part content
684 690 def __setdata(self, data):
685 691 if self._generated is not None:
686 692 raise error.ReadOnlyPartError('part is being generated')
687 693 self._data = data
688 694 def __getdata(self):
689 695 return self._data
690 696 data = property(__getdata, __setdata)
691 697
692 698 @property
693 699 def mandatoryparams(self):
694 700 # make it an immutable tuple to force people through ``addparam``
695 701 return tuple(self._mandatoryparams)
696 702
697 703 @property
698 704 def advisoryparams(self):
699 705 # make it an immutable tuple to force people through ``addparam``
700 706 return tuple(self._advisoryparams)
701 707
702 708 def addparam(self, name, value='', mandatory=True):
703 709 if self._generated is not None:
704 710 raise error.ReadOnlyPartError('part is being generated')
705 711 if name in self._seenparams:
706 712 raise ValueError('duplicated params: %s' % name)
707 713 self._seenparams.add(name)
708 714 params = self._advisoryparams
709 715 if mandatory:
710 716 params = self._mandatoryparams
711 717 params.append((name, value))
712 718
713 719 # methods used to generate the bundle2 stream
714 720 def getchunks(self):
715 721 if self._generated is not None:
716 722 raise RuntimeError('part can only be consumed once')
717 723 self._generated = False
718 724 #### header
719 725 if self.mandatory:
720 726 parttype = self.type.upper()
721 727 else:
722 728 parttype = self.type.lower()
723 729 ## parttype
724 730 header = [_pack(_fparttypesize, len(parttype)),
725 731 parttype, _pack(_fpartid, self.id),
726 732 ]
727 733 ## parameters
728 734 # count
729 735 manpar = self.mandatoryparams
730 736 advpar = self.advisoryparams
731 737 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
732 738 # size
733 739 parsizes = []
734 740 for key, value in manpar:
735 741 parsizes.append(len(key))
736 742 parsizes.append(len(value))
737 743 for key, value in advpar:
738 744 parsizes.append(len(key))
739 745 parsizes.append(len(value))
740 746 paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
741 747 header.append(paramsizes)
742 748 # key, value
743 749 for key, value in manpar:
744 750 header.append(key)
745 751 header.append(value)
746 752 for key, value in advpar:
747 753 header.append(key)
748 754 header.append(value)
749 755 ## finalize header
750 756 headerchunk = ''.join(header)
751 757 yield _pack(_fpartheadersize, len(headerchunk))
752 758 yield headerchunk
753 759 ## payload
754 760 try:
755 761 for chunk in self._payloadchunks():
756 762 yield _pack(_fpayloadsize, len(chunk))
757 763 yield chunk
758 764 except Exception, exc:
759 765 # backup exception data for later
760 766 exc_info = sys.exc_info()
761 767 msg = 'unexpected error: %s' % exc
762 768 interpart = bundlepart('error:abort', [('message', msg)],
763 769 mandatory=False)
764 770 interpart.id = 0
765 771 yield _pack(_fpayloadsize, -1)
766 772 for chunk in interpart.getchunks():
767 773 yield chunk
768 774 # abort current part payload
769 775 yield _pack(_fpayloadsize, 0)
770 776 raise exc_info[0], exc_info[1], exc_info[2]
771 777 # end of payload
772 778 yield _pack(_fpayloadsize, 0)
773 779 self._generated = True
774 780
775 781 def _payloadchunks(self):
776 782 """yield chunks of a the part payload
777 783
778 784 Exists to handle the different methods to provide data to a part."""
779 785 # we only support fixed size data now.
780 786 # This will be improved in the future.
781 787 if util.safehasattr(self.data, 'next'):
782 788 buff = util.chunkbuffer(self.data)
783 789 chunk = buff.read(preferedchunksize)
784 790 while chunk:
785 791 yield chunk
786 792 chunk = buff.read(preferedchunksize)
787 793 elif len(self.data):
788 794 yield self.data
789 795
790 796
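Both data forms the method above supports, as a sketch (the part type and contents are arbitrary)::

    fixed = bundlepart('output', data='a fixed byte string', mandatory=False)

    def gen():
        yield 'first chunk'
        yield 'second chunk'
    streamed = bundlepart('output', data=gen(), mandatory=False)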
791 797 flaginterrupt = -1
792 798
793 799 class interrupthandler(unpackermixin):
794 800 """read one part and process it with restricted capability
795 801
796 802 This allows transmitting exceptions raised on the producer side during part
797 803 iteration while the consumer is reading a part.
798 804
799 805 Parts processed in this manner only have access to a ui object."""
800 806
801 807 def __init__(self, ui, fp):
802 808 super(interrupthandler, self).__init__(fp)
803 809 self.ui = ui
804 810
805 811 def _readpartheader(self):
806 812 """reads a part header size and return the bytes blob
807 813
808 814 returns None if empty"""
809 815 headersize = self._unpack(_fpartheadersize)[0]
810 816 if headersize < 0:
811 817 raise error.BundleValueError('negative part header size: %i'
812 818 % headersize)
813 819 self.ui.debug('part header size: %i\n' % headersize)
814 820 if headersize:
815 821 return self._readexact(headersize)
816 822 return None
817 823
818 824 def __call__(self):
819 825 self.ui.debug('bundle2 stream interruption, looking for a part.\n')
820 826 headerblock = self._readpartheader()
821 827 if headerblock is None:
822 828 self.ui.debug('no part found during interruption.\n')
823 829 return
824 830 part = unbundlepart(self.ui, headerblock, self._fp)
825 831 op = interruptoperation(self.ui)
826 832 _processpart(op, part)
827 833
828 834 class interruptoperation(object):
829 835 """A limited operation to be use by part handler during interruption
830 836
831 837 It only have access to an ui object.
832 838 """
833 839
834 840 def __init__(self, ui):
835 841 self.ui = ui
836 842 self.reply = None
837 843
838 844 @property
839 845 def repo(self):
840 846 raise RuntimeError('no repo access from stream interruption')
841 847
842 848 def gettransaction(self):
843 849 raise TransactionUnavailable('no repo access from stream interruption')
844 850
845 851 class unbundlepart(unpackermixin):
846 852 """a bundle part read from a bundle"""
847 853
848 854 def __init__(self, ui, header, fp):
849 855 super(unbundlepart, self).__init__(fp)
850 856 self.ui = ui
851 857 # unbundle state attr
852 858 self._headerdata = header
853 859 self._headeroffset = 0
854 860 self._initialized = False
855 861 self.consumed = False
856 862 # part data
857 863 self.id = None
858 864 self.type = None
859 865 self.mandatoryparams = None
860 866 self.advisoryparams = None
861 867 self.params = None
862 868 self.mandatorykeys = ()
863 869 self._payloadstream = None
864 870 self._readheader()
865 871 self._mandatory = None
866 872 self._chunkindex = [] #(payload, file) position tuples for chunk starts
867 873 self._pos = 0
868 874
869 875 def _fromheader(self, size):
870 876 """return the next <size> byte from the header"""
871 877 offset = self._headeroffset
872 878 data = self._headerdata[offset:(offset + size)]
873 879 self._headeroffset = offset + size
874 880 return data
875 881
876 882 def _unpackheader(self, format):
877 883 """read given format from header
878 884
879 885 This automatically computes the size of the format to read.
880 886 data = self._fromheader(struct.calcsize(format))
881 887 return _unpack(format, data)
882 888
883 889 def _initparams(self, mandatoryparams, advisoryparams):
884 890 """internal function to setup all logic related parameters"""
885 891 # make it read only to prevent people touching it by mistake.
886 892 self.mandatoryparams = tuple(mandatoryparams)
887 893 self.advisoryparams = tuple(advisoryparams)
888 894 # user friendly UI
889 895 self.params = dict(self.mandatoryparams)
890 896 self.params.update(dict(self.advisoryparams))
891 897 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
892 898
893 899 def _payloadchunks(self, chunknum=0):
894 900 '''seek to specified chunk and start yielding data'''
895 901 if len(self._chunkindex) == 0:
896 902 assert chunknum == 0, 'Must start with chunk 0'
897 903 self._chunkindex.append((0, super(unbundlepart, self).tell()))
898 904 else:
899 905 assert chunknum < len(self._chunkindex), \
900 906 'Unknown chunk %d' % chunknum
901 907 super(unbundlepart, self).seek(self._chunkindex[chunknum][1])
902 908
903 909 pos = self._chunkindex[chunknum][0]
904 910 payloadsize = self._unpack(_fpayloadsize)[0]
905 911 self.ui.debug('payload chunk size: %i\n' % payloadsize)
906 912 while payloadsize:
907 913 if payloadsize == flaginterrupt:
908 914 # interruption detection, the handler will now read a
909 915 # single part and process it.
910 916 interrupthandler(self.ui, self._fp)()
911 917 elif payloadsize < 0:
912 918 msg = 'negative payload chunk size: %i' % payloadsize
913 919 raise error.BundleValueError(msg)
914 920 else:
915 921 result = self._readexact(payloadsize)
916 922 chunknum += 1
917 923 pos += payloadsize
918 924 if chunknum == len(self._chunkindex):
919 925 self._chunkindex.append((pos,
920 926 super(unbundlepart, self).tell()))
921 927 yield result
922 928 payloadsize = self._unpack(_fpayloadsize)[0]
923 929 self.ui.debug('payload chunk size: %i\n' % payloadsize)
924 930
925 931 def _findchunk(self, pos):
926 932 '''for a given payload position, return a chunk number and offset'''
927 933 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
928 934 if ppos == pos:
929 935 return chunk, 0
930 936 elif ppos > pos:
931 937 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
932 938 raise ValueError('Unknown chunk')
933 939
934 940 def _readheader(self):
935 941 """read the header and setup the object"""
936 942 typesize = self._unpackheader(_fparttypesize)[0]
937 943 self.type = self._fromheader(typesize)
938 944 self.ui.debug('part type: "%s"\n' % self.type)
939 945 self.id = self._unpackheader(_fpartid)[0]
940 946 self.ui.debug('part id: "%s"\n' % self.id)
941 947 # extract mandatory bit from type
942 948 self.mandatory = (self.type != self.type.lower())
943 949 self.type = self.type.lower()
944 950 ## reading parameters
945 951 # param count
946 952 mancount, advcount = self._unpackheader(_fpartparamcount)
947 953 self.ui.debug('part parameters: %i\n' % (mancount + advcount))
948 954 # param size
949 955 fparamsizes = _makefpartparamsizes(mancount + advcount)
950 956 paramsizes = self._unpackheader(fparamsizes)
951 957 # make it a list of pairs again
952 958 paramsizes = zip(paramsizes[::2], paramsizes[1::2])
953 959 # split mandatory from advisory
954 960 mansizes = paramsizes[:mancount]
955 961 advsizes = paramsizes[mancount:]
956 962 # retrieve param value
957 963 manparams = []
958 964 for key, value in mansizes:
959 965 manparams.append((self._fromheader(key), self._fromheader(value)))
960 966 advparams = []
961 967 for key, value in advsizes:
962 968 advparams.append((self._fromheader(key), self._fromheader(value)))
963 969 self._initparams(manparams, advparams)
964 970 ## part payload
965 971 self._payloadstream = util.chunkbuffer(self._payloadchunks())
966 972 # we read the data, tell it
967 973 self._initialized = True
968 974
969 975 def read(self, size=None):
970 976 """read payload data"""
971 977 if not self._initialized:
972 978 self._readheader()
973 979 if size is None:
974 980 data = self._payloadstream.read()
975 981 else:
976 982 data = self._payloadstream.read(size)
977 983 if size is None or len(data) < size:
978 984 self.consumed = True
979 985 self._pos += len(data)
980 986 return data
981 987
982 988 def tell(self):
983 989 return self._pos
984 990
985 991 def seek(self, offset, whence=0):
986 992 if whence == 0:
987 993 newpos = offset
988 994 elif whence == 1:
989 995 newpos = self._pos + offset
990 996 elif whence == 2:
991 997 if not self.consumed:
992 998 self.read()
993 999 newpos = self._chunkindex[-1][0] - offset
994 1000 else:
995 1001 raise ValueError('Unknown whence value: %r' % (whence,))
996 1002
997 1003 if newpos > self._chunkindex[-1][0] and not self.consumed:
998 1004 self.read()
999 1005 if not 0 <= newpos <= self._chunkindex[-1][0]:
1000 1006 raise ValueError('Offset out of range')
1001 1007
1002 1008 if self._pos != newpos:
1003 1009 chunk, internaloffset = self._findchunk(newpos)
1004 1010 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1005 1011 adjust = self.read(internaloffset)
1006 1012 if len(adjust) != internaloffset:
1007 1013 raise util.Abort(_('Seek failed\n'))
1008 1014 self._pos = newpos
1009 1015
1010 1016 capabilities = {'HG20': (),
1011 1017 'listkeys': (),
1012 1018 'pushkey': (),
1013 1019 'digests': tuple(sorted(util.DIGESTS.keys())),
1014 1020 'remote-changegroup': ('http', 'https'),
1015 1021 }
1016 1022
1017 1023 def getrepocaps(repo, allowpushback=False):
1018 1024 """return the bundle2 capabilities for a given repo
1019 1025
1020 1026 Exists to allow extensions (like evolution) to mutate the capabilities.
1021 1027 """
1022 1028 caps = capabilities.copy()
1023 1029 caps['changegroup'] = tuple(sorted(changegroup.packermap.keys()))
1024 1030 if obsolete.isenabled(repo, obsolete.exchangeopt):
1025 1031 supportedformat = tuple('V%i' % v for v in obsolete.formats)
1026 1032 caps['obsmarkers'] = supportedformat
1027 1033 if allowpushback:
1028 1034 caps['pushback'] = ()
1029 1035 return caps
1030 1036
1031 1037 def bundle2caps(remote):
1032 1038 """return the bundle capabilities of a peer as dict"""
1033 1039 raw = remote.capable('bundle2')
1034 1040 if not raw and raw != '':
1035 1041 return {}
1036 1042 capsblob = urllib.unquote(remote.capable('bundle2'))
1037 1043 return decodecaps(capsblob)
1038 1044
1039 1045 def obsmarkersversion(caps):
1040 1046 """extract the list of supported obsmarkers versions from a bundle2caps dict
1041 1047 """
1042 1048 obscaps = caps.get('obsmarkers', ())
1043 1049 return [int(c[1:]) for c in obscaps if c.startswith('V')]
1044 1050
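A small sketch tying the two helpers above together; ``remote`` is an assumed peer object::

    caps = bundle2caps(remote)  # e.g. {'obsmarkers': ['V0', 'V1']}
    obsmarkersversion(caps)     # [0, 1]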
1045 1051 @parthandler('changegroup', ('version',))
1046 1052 def handlechangegroup(op, inpart):
1047 1053 """apply a changegroup part on the repo
1048 1054
1049 1055 This is a very early implementation that will be massively reworked before
1050 1056 being inflicted on any end-user.
1051 1057 """
1052 1058 # Make sure we trigger a transaction creation
1053 1059 #
1054 1060 # The addchangegroup function will get a transaction object by itself, but
1055 1061 # we need to make sure we trigger the creation of a transaction object used
1056 1062 # for the whole processing scope.
1057 1063 op.gettransaction()
1058 1064 unpackerversion = inpart.params.get('version', '01')
1059 1065 # We should raise an appropriate exception here
1060 1066 unpacker = changegroup.packermap[unpackerversion][1]
1061 1067 cg = unpacker(inpart, 'UN')
1062 1068 # the source and url passed here are overwritten by the one contained in
1063 1069 # the transaction.hookargs argument. So 'bundle2' is a placeholder
1064 1070 ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
1065 1071 op.records.add('changegroup', {'return': ret})
1066 1072 if op.reply is not None:
1067 1073 # This is definitely not the final form of this
1068 1074 # return. But one needs to start somewhere.
1069 1075 part = op.reply.newpart('reply:changegroup', mandatory=False)
1070 1076 part.addparam('in-reply-to', str(inpart.id), mandatory=False)
1071 1077 part.addparam('return', '%i' % ret, mandatory=False)
1072 1078 assert not inpart.read()
1073 1079
1074 1080 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1075 1081 ['digest:%s' % k for k in util.DIGESTS.keys()])
1076 1082 @parthandler('remote-changegroup', _remotechangegroupparams)
1077 1083 def handleremotechangegroup(op, inpart):
1078 1084 """apply a bundle10 on the repo, given an url and validation information
1079 1085
1080 1086 All the information about the remote bundle to import is given as
1081 1087 parameters. The parameters include:
1082 1088 - url: the url to the bundle10.
1083 1089 - size: the bundle10 file size. It is used to validate what was
1084 1090 retrieved by the client matches the server knowledge about the bundle.
1085 1091 - digests: a space separated list of the digest types provided as
1086 1092 parameters.
1087 1093 - digest:<digest-type>: the hexadecimal representation of the digest with
1088 1094 that name. Like the size, it is used to validate what was retrieved by
1089 1095 the client matches what the server knows about the bundle.
1090 1096
1091 1097 When multiple digest types are given, all of them are checked.
1092 1098 """
1093 1099 try:
1094 1100 raw_url = inpart.params['url']
1095 1101 except KeyError:
1096 1102 raise util.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1097 1103 parsed_url = util.url(raw_url)
1098 1104 if parsed_url.scheme not in capabilities['remote-changegroup']:
1099 1105 raise util.Abort(_('remote-changegroup does not support %s urls') %
1100 1106 parsed_url.scheme)
1101 1107
1102 1108 try:
1103 1109 size = int(inpart.params['size'])
1104 1110 except ValueError:
1105 1111 raise util.Abort(_('remote-changegroup: invalid value for param "%s"')
1106 1112 % 'size')
1107 1113 except KeyError:
1108 1114 raise util.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1109 1115
1110 1116 digests = {}
1111 1117 for typ in inpart.params.get('digests', '').split():
1112 1118 param = 'digest:%s' % typ
1113 1119 try:
1114 1120 value = inpart.params[param]
1115 1121 except KeyError:
1116 1122 raise util.Abort(_('remote-changegroup: missing "%s" param') %
1117 1123 param)
1118 1124 digests[typ] = value
1119 1125
1120 1126 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1121 1127
1122 1128 # Make sure we trigger a transaction creation
1123 1129 #
1124 1130 # The addchangegroup function will get a transaction object by itself, but
1125 1131 # we need to make sure we trigger the creation of a transaction object used
1126 1132 # for the whole processing scope.
1127 1133 op.gettransaction()
1128 1134 import exchange
1129 1135 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1130 1136 if not isinstance(cg, changegroup.cg1unpacker):
1131 1137 raise util.Abort(_('%s: not a bundle version 1.0') %
1132 1138 util.hidepassword(raw_url))
1133 1139 ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
1134 1140 op.records.add('changegroup', {'return': ret})
1135 1141 if op.reply is not None:
1136 1142 # This is definitely not the final form of this
1137 1143 # return. But one needs to start somewhere.
1138 1144 part = op.reply.newpart('reply:changegroup')
1139 1145 part.addparam('in-reply-to', str(inpart.id), mandatory=False)
1140 1146 part.addparam('return', '%i' % ret, mandatory=False)
1141 1147 try:
1142 1148 real_part.validate()
1143 1149 except util.Abort, e:
1144 1150 raise util.Abort(_('bundle at %s is corrupted:\n%s') %
1145 1151 (util.hidepassword(raw_url), str(e)))
1146 1152 assert not inpart.read()
1147 1153
1148 1154 @parthandler('reply:changegroup', ('return', 'in-reply-to'))
1149 1155 def handlereplychangegroup(op, inpart):
1150 1156 ret = int(inpart.params['return'])
1151 1157 replyto = int(inpart.params['in-reply-to'])
1152 1158 op.records.add('changegroup', {'return': ret}, replyto)
1153 1159
1154 1160 @parthandler('check:heads')
1155 1161 def handlecheckheads(op, inpart):
1156 1162 """check that head of the repo did not change
1157 1163
1158 1164 This is used to detect a push race when using unbundle.
1159 1165 This replaces the "heads" argument of unbundle."""
1160 1166 h = inpart.read(20)
1161 1167 heads = []
1162 1168 while len(h) == 20:
1163 1169 heads.append(h)
1164 1170 h = inpart.read(20)
1165 1171 assert not h
1166 1172 if heads != op.repo.heads():
1167 1173 raise error.PushRaced('repository changed while pushing - '
1168 1174 'please try again')
1169 1175
1170 1176 @parthandler('output')
1171 1177 def handleoutput(op, inpart):
1172 1178 """forward output captured on the server to the client"""
1173 1179 for line in inpart.read().splitlines():
1174 1180 op.ui.status(('remote: %s\n' % line))
1175 1181
1176 1182 @parthandler('replycaps')
1177 1183 def handlereplycaps(op, inpart):
1178 1184 """Notify that a reply bundle should be created
1179 1185
1180 1186 The payload contains the capabilities information for the reply"""
1181 1187 caps = decodecaps(inpart.read())
1182 1188 if op.reply is None:
1183 1189 op.reply = bundle20(op.ui, caps)
1184 1190
1185 1191 @parthandler('error:abort', ('message', 'hint'))
1186 1192 def handleerrorabort(op, inpart):
1187 1193 """Used to transmit abort error over the wire"""
1188 1194 raise util.Abort(inpart.params['message'], hint=inpart.params.get('hint'))
1189 1195
1190 1196 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
1191 1197 def handleerrorunsupportedcontent(op, inpart):
1192 1198 """Used to transmit unknown content error over the wire"""
1193 1199 kwargs = {}
1194 1200 parttype = inpart.params.get('parttype')
1195 1201 if parttype is not None:
1196 1202 kwargs['parttype'] = parttype
1197 1203 params = inpart.params.get('params')
1198 1204 if params is not None:
1199 1205 kwargs['params'] = params.split('\0')
1200 1206
1201 1207 raise error.UnsupportedPartError(**kwargs)
1202 1208
1203 1209 @parthandler('error:pushraced', ('message',))
1204 1210 def handleerrorpushraced(op, inpart):
1205 1211 """Used to transmit push race error over the wire"""
1206 1212 raise error.ResponseError(_('push failed:'), inpart.params['message'])
1207 1213
1208 1214 @parthandler('listkeys', ('namespace',))
1209 1215 def handlelistkeys(op, inpart):
1210 1216 """retrieve pushkey namespace content stored in a bundle2"""
1211 1217 namespace = inpart.params['namespace']
1212 1218 r = pushkey.decodekeys(inpart.read())
1213 1219 op.records.add('listkeys', (namespace, r))
1214 1220
1215 1221 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
1216 1222 def handlepushkey(op, inpart):
1217 1223 """process a pushkey request"""
1218 1224 dec = pushkey.decode
1219 1225 namespace = dec(inpart.params['namespace'])
1220 1226 key = dec(inpart.params['key'])
1221 1227 old = dec(inpart.params['old'])
1222 1228 new = dec(inpart.params['new'])
1223 1229 ret = op.repo.pushkey(namespace, key, old, new)
1224 1230 record = {'namespace': namespace,
1225 1231 'key': key,
1226 1232 'old': old,
1227 1233 'new': new}
1228 1234 op.records.add('pushkey', record)
1229 1235 if op.reply is not None:
1230 1236 rpart = op.reply.newpart('reply:pushkey')
1231 1237 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1232 1238 rpart.addparam('return', '%i' % ret, mandatory=False)
1233 1239
1234 1240 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
1235 1241 def handlepushkeyreply(op, inpart):
1236 1242 """retrieve the result of a pushkey request"""
1237 1243 ret = int(inpart.params['return'])
1238 1244 partid = int(inpart.params['in-reply-to'])
1239 1245 op.records.add('pushkey', {'return': ret}, partid)
1240 1246
1241 1247 @parthandler('obsmarkers')
1242 1248 def handleobsmarker(op, inpart):
1243 1249 """add a stream of obsmarkers to the repo"""
1244 1250 tr = op.gettransaction()
1245 1251 markerdata = inpart.read()
1246 1252 if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
1247 1253 op.ui.write(('obsmarker-exchange: %i bytes received\n')
1248 1254 % len(markerdata))
1249 1255 new = op.repo.obsstore.mergemarkers(tr, markerdata)
1250 1256 if new:
1251 1257 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
1252 1258 op.records.add('obsmarkers', {'new': new})
1253 1259 if op.reply is not None:
1254 1260 rpart = op.reply.newpart('reply:obsmarkers')
1255 1261 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1256 1262 rpart.addparam('new', '%i' % new, mandatory=False)
1257 1263
1258 1264
1259 1265 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
1260 1266 def handlepushkeyreply(op, inpart):
1261 1267 """retrieve the result of a pushkey request"""
1262 1268 ret = int(inpart.params['new'])
1263 1269 partid = int(inpart.params['in-reply-to'])
1264 1270 op.records.add('obsmarkers', {'new': ret}, partid)
mercurial/exchange.py
@@ -1,1322 +1,1326 @@
1 1 # exchange.py - utility to exchange data between repos.
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from i18n import _
9 9 from node import hex, nullid
10 10 import errno, urllib
11 11 import util, scmutil, changegroup, base85, error
12 12 import discovery, phases, obsolete, bookmarks as bookmod, bundle2, pushkey
13 13 import lock as lockmod
14 14
15 15 def readbundle(ui, fh, fname, vfs=None):
16 16 header = changegroup.readexactly(fh, 4)
17 17
18 18 alg = None
19 19 if not fname:
20 20 fname = "stream"
21 21 if not header.startswith('HG') and header.startswith('\0'):
22 22 fh = changegroup.headerlessfixup(fh, header)
23 23 header = "HG10"
24 24 alg = 'UN'
25 25 elif vfs:
26 26 fname = vfs.join(fname)
27 27
28 28 magic, version = header[0:2], header[2:4]
29 29
30 30 if magic != 'HG':
31 31 raise util.Abort(_('%s: not a Mercurial bundle') % fname)
32 32 if version == '10':
33 33 if alg is None:
34 34 alg = changegroup.readexactly(fh, 2)
35 35 return changegroup.cg1unpacker(fh, alg)
36 36 elif version.startswith('2'):
37 37 return bundle2.getunbundler(ui, fh, header=magic + version)
38 38 else:
39 39 raise util.Abort(_('%s: unknown bundle version %s') % (fname, version))
40 40
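A hedged dispatch sketch for the reader above; ``ui`` and the bundle file are assumed::

    fh = open('bundle.hg', 'rb')
    cg = readbundle(ui, fh, 'bundle.hg')
    # cg is a changegroup.cg1unpacker for HG10 streams, or a bundle2
    # unbundler for HG2x streams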
41 41 def buildobsmarkerspart(bundler, markers):
42 42 """add an obsmarker part to the bundler with <markers>
43 43
44 44 No part is created if markers is empty.
45 45 Raises ValueError if the bundler doesn't support any known obsmarker format.
46 46 """
47 47 if markers:
48 48 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
49 49 version = obsolete.commonversion(remoteversions)
50 50 if version is None:
51 51 raise ValueError('bundler does not support common obsmarker format')
52 52 stream = obsolete.encodemarkers(markers, True, version=version)
53 53 return bundler.newpart('obsmarkers', data=stream)
54 54 return None
55 55
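A hedged usage sketch; ``repo`` and ``markers`` are assumed, and the bundler must advertise an obsmarkers capability for a part to be created::

    bundler = bundle2.bundle20(repo.ui, bundle2.getrepocaps(repo))
    part = buildobsmarkerspart(bundler, markers)
    if part is None:
        repo.ui.debug('no obsmarkers to send\n')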
56 56 def _canusebundle2(op):
57 57 """return true if a pull/push can use bundle2
58 58
59 59 Feel free to nuke this function when we drop the experimental option"""
60 60 return (op.repo.ui.configbool('experimental', 'bundle2-exp', False)
61 61 and op.remote.capable('bundle2'))
62 62
63 63
64 64 class pushoperation(object):
65 65 """A object that represent a single push operation
66 66
67 67 It purpose is to carry push related state and very common operation.
68 68
69 69 A new should be created at the beginning of each push and discarded
70 70 afterward.
71 71 """
72 72
73 73 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
74 74 bookmarks=()):
75 75 # repo we push from
76 76 self.repo = repo
77 77 self.ui = repo.ui
78 78 # repo we push to
79 79 self.remote = remote
80 80 # force option provided
81 81 self.force = force
82 82 # revs to be pushed (None is "all")
83 83 self.revs = revs
84 84 # bookmarks explicitly pushed
85 85 self.bookmarks = bookmarks
86 86 # allow push of new branch
87 87 self.newbranch = newbranch
88 88 # did a local lock get acquired?
89 89 self.locallocked = None
90 90 # steps already performed
91 91 # (used to check what steps have been already performed through bundle2)
92 92 self.stepsdone = set()
93 93 # Integer version of the changegroup push result
94 94 # - None means nothing to push
95 95 # - 0 means HTTP error
96 96 # - 1 means we pushed and remote head count is unchanged *or*
97 97 # we have outgoing changesets but refused to push
98 98 # - other values as described by addchangegroup()
99 99 self.cgresult = None
100 100 # Boolean value for the bookmark push
101 101 self.bkresult = None
102 102 # discover.outgoing object (contains common and outgoing data)
103 103 self.outgoing = None
104 104 # all remote heads before the push
105 105 self.remoteheads = None
106 106 # testable as a boolean indicating if any nodes are missing locally.
107 107 self.incoming = None
108 108 # phases changes that must be pushed along side the changesets
109 109 self.outdatedphases = None
110 110 # phases changes that must be pushed if changeset push fails
111 111 self.fallbackoutdatedphases = None
112 112 # outgoing obsmarkers
113 113 self.outobsmarkers = set()
114 114 # outgoing bookmarks
115 115 self.outbookmarks = []
116 116 # transaction manager
117 117 self.trmanager = None
118 118
119 119 @util.propertycache
120 120 def futureheads(self):
121 121 """future remote heads if the changeset push succeeds"""
122 122 return self.outgoing.missingheads
123 123
124 124 @util.propertycache
125 125 def fallbackheads(self):
126 126 """future remote heads if the changeset push fails"""
127 127 if self.revs is None:
128 128 # no target to push, all common heads are relevant
129 129 return self.outgoing.commonheads
130 130 unfi = self.repo.unfiltered()
131 131 # I want cheads = heads(::missingheads and ::commonheads)
132 132 # (missingheads is revs with secret changeset filtered out)
133 133 #
134 134 # This can be expressed as:
135 135 # cheads = ( (missingheads and ::commonheads)
136 136 # + (commonheads and ::missingheads))"
137 137 # )
138 138 #
139 139 # while trying to push we already computed the following:
140 140 # common = (::commonheads)
141 141 # missing = ((commonheads::missingheads) - commonheads)
142 142 #
143 143 # We can pick:
144 144 # * missingheads part of common (::commonheads)
145 145 common = set(self.outgoing.common)
146 146 nm = self.repo.changelog.nodemap
147 147 cheads = [node for node in self.revs if nm[node] in common]
148 148 # and
149 149 # * commonheads parents on missing
150 150 revset = unfi.set('%ln and parents(roots(%ln))',
151 151 self.outgoing.commonheads,
152 152 self.outgoing.missing)
153 153 cheads.extend(c.node() for c in revset)
154 154 return cheads
155 155
156 156 @property
157 157 def commonheads(self):
158 158 """set of all common heads after changeset bundle push"""
159 159 if self.cgresult:
160 160 return self.futureheads
161 161 else:
162 162 return self.fallbackheads
163 163
164 164 # mapping of message used when pushing bookmark
165 165 bookmsgmap = {'update': (_("updating bookmark %s\n"),
166 166 _('updating bookmark %s failed!\n')),
167 167 'export': (_("exporting bookmark %s\n"),
168 168 _('exporting bookmark %s failed!\n')),
169 169 'delete': (_("deleting remote bookmark %s\n"),
170 170 _('deleting remote bookmark %s failed!\n')),
171 171 }
172 172
173 173
174 174 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=()):
175 175 '''Push outgoing changesets (limited by revs) from a local
176 176 repository to remote. Return an integer:
177 177 - None means nothing to push
178 178 - 0 means HTTP error
179 179 - 1 means we pushed and remote head count is unchanged *or*
180 180 we have outgoing changesets but refused to push
181 181 - other values as described by addchangegroup()
182 182 '''
183 183 pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks)
184 184 if pushop.remote.local():
185 185 missing = (set(pushop.repo.requirements)
186 186 - pushop.remote.local().supported)
187 187 if missing:
188 188 msg = _("required features are not"
189 189 " supported in the destination:"
190 190 " %s") % (', '.join(sorted(missing)))
191 191 raise util.Abort(msg)
192 192
193 193 # there are two ways to push to remote repo:
194 194 #
195 195 # addchangegroup assumes local user can lock remote
196 196 # repo (local filesystem, old ssh servers).
197 197 #
198 198 # unbundle assumes local user cannot lock remote repo (new ssh
199 199 # servers, http servers).
200 200
201 201 if not pushop.remote.canpush():
202 202 raise util.Abort(_("destination does not support push"))
203 203 # get local lock as we might write phase data
204 204 localwlock = locallock = None
205 205 try:
206 206 # bundle2 push may receive a reply bundle touching bookmarks or other
207 207 # things requiring the wlock. Take it now to ensure proper ordering.
208 208 maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
209 209 if _canusebundle2(pushop) and maypushback:
210 210 localwlock = pushop.repo.wlock()
211 211 locallock = pushop.repo.lock()
212 212 pushop.locallocked = True
213 213 except IOError, err:
214 214 pushop.locallocked = False
215 215 if err.errno != errno.EACCES:
216 216 raise
217 217 # source repo cannot be locked.
218 218 # We do not abort the push, but just disable the local phase
219 219 # synchronisation.
220 220 msg = 'cannot lock source repository: %s\n' % err
221 221 pushop.ui.debug(msg)
222 222 try:
223 223 if pushop.locallocked:
224 224 pushop.trmanager = transactionmanager(repo,
225 225 'push-response',
226 226 pushop.remote.url())
227 227 pushop.repo.checkpush(pushop)
228 228 lock = None
229 229 unbundle = pushop.remote.capable('unbundle')
230 230 if not unbundle:
231 231 lock = pushop.remote.lock()
232 232 try:
233 233 _pushdiscovery(pushop)
234 234 if _canusebundle2(pushop):
235 235 _pushbundle2(pushop)
236 236 _pushchangeset(pushop)
237 237 _pushsyncphase(pushop)
238 238 _pushobsolete(pushop)
239 239 _pushbookmark(pushop)
240 240 finally:
241 241 if lock is not None:
242 242 lock.release()
243 243 if pushop.trmanager:
244 244 pushop.trmanager.close()
245 245 finally:
246 246 if pushop.trmanager:
247 247 pushop.trmanager.release()
248 248 if locallock is not None:
249 249 locallock.release()
250 250 if localwlock is not None:
251 251 localwlock.release()
252 252
253 253 return pushop
254 254
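# Illustrative sketch (assumption: 'dest' is a hypothetical path or
# URL): a typical caller wires push() up roughly as follows.
#
#     from mercurial import hg
#     other = hg.peer(repo, {}, dest)
#     pushop = push(repo, other, revs=[repo['tip'].node()])
#     if pushop.cgresult == 0:
#         repo.ui.warn('push failed\n')
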
255 255 # list of steps to perform discovery before push
256 256 pushdiscoveryorder = []
257 257
258 258 # Mapping between step name and function
259 259 #
260 260 # This exists to help extensions wrap steps if necessary
261 261 pushdiscoverymapping = {}
262 262
263 263 def pushdiscovery(stepname):
264 264 """decorator for function performing discovery before push
265 265
266 266 The function is added to the step -> function mapping and appended to the
267 267 list of steps. Beware that decorated functions will be added in order (this
268 268 may matter).
269 269
270 270 You can only use this decorator for a new step; if you want to wrap a step
271 271 from an extension, change the pushdiscoverymapping dictionary directly."""
272 272 def dec(func):
273 273 assert stepname not in pushdiscoverymapping
274 274 pushdiscoverymapping[stepname] = func
275 275 pushdiscoveryorder.append(stepname)
276 276 return func
277 277 return dec
278 278
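# Illustrative sketch: an extension could register an extra discovery
# step this way; the step name and body below are hypothetical.
#
#     @pushdiscovery('example')
#     def _pushdiscoveryexample(pushop):
#         pushop.ui.debug('running example discovery step\n')
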
279 279 def _pushdiscovery(pushop):
280 280 """Run all discovery steps"""
281 281 for stepname in pushdiscoveryorder:
282 282 step = pushdiscoverymapping[stepname]
283 283 step(pushop)
284 284
285 285 @pushdiscovery('changeset')
286 286 def _pushdiscoverychangeset(pushop):
287 287 """discover the changeset that need to be pushed"""
288 288 fci = discovery.findcommonincoming
289 289 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
290 290 common, inc, remoteheads = commoninc
291 291 fco = discovery.findcommonoutgoing
292 292 outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
293 293 commoninc=commoninc, force=pushop.force)
294 294 pushop.outgoing = outgoing
295 295 pushop.remoteheads = remoteheads
296 296 pushop.incoming = inc
297 297
298 298 @pushdiscovery('phase')
299 299 def _pushdiscoveryphase(pushop):
300 300 """discover the phase that needs to be pushed
301 301
302 302 (computed for both the success and failure cases of the changeset push)"""
303 303 outgoing = pushop.outgoing
304 304 unfi = pushop.repo.unfiltered()
305 305 remotephases = pushop.remote.listkeys('phases')
306 306 publishing = remotephases.get('publishing', False)
307 307 ana = phases.analyzeremotephases(pushop.repo,
308 308 pushop.fallbackheads,
309 309 remotephases)
310 310 pheads, droots = ana
311 311 extracond = ''
312 312 if not publishing:
313 313 extracond = ' and public()'
314 314 revset = 'heads((%%ln::%%ln) %s)' % extracond
315 315 # Get the list of all revs that are draft on the remote but public here.
316 316 # XXX Beware that this revset breaks if droots is not strictly
317 317 # XXX roots; we may want to ensure it is, but that is costly
318 318 fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
319 319 if not outgoing.missing:
320 320 future = fallback
321 321 else:
322 322 # add the changesets we are going to push as draft
323 323 #
324 324 # should not be necessary for a publishing server, but because of an
325 325 # issue fixed in xxxxx we have to do it anyway.
326 326 fdroots = list(unfi.set('roots(%ln + %ln::)',
327 327 outgoing.missing, droots))
328 328 fdroots = [f.node() for f in fdroots]
329 329 future = list(unfi.set(revset, fdroots, pushop.futureheads))
330 330 pushop.outdatedphases = future
331 331 pushop.fallbackoutdatedphases = fallback
332 332
333 333 @pushdiscovery('obsmarker')
334 334 def _pushdiscoveryobsmarkers(pushop):
335 335 if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
336 336 and pushop.repo.obsstore
337 337 and 'obsolete' in pushop.remote.listkeys('namespaces')):
338 338 repo = pushop.repo
339 339 # very naive computation, which can be quite expensive on a big repo.
340 340 # However, evolution is currently slow on big repos anyway.
341 341 nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
342 342 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
343 343
344 344 @pushdiscovery('bookmarks')
345 345 def _pushdiscoverybookmarks(pushop):
346 346 ui = pushop.ui
347 347 repo = pushop.repo.unfiltered()
348 348 remote = pushop.remote
349 349 ui.debug("checking for updated bookmarks\n")
350 350 ancestors = ()
351 351 if pushop.revs:
352 352 revnums = map(repo.changelog.rev, pushop.revs)
353 353 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
354 354 remotebookmark = remote.listkeys('bookmarks')
355 355
356 356 explicit = set(pushop.bookmarks)
357 357
358 358 comp = bookmod.compare(repo, repo._bookmarks, remotebookmark, srchex=hex)
359 359 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
360 360 for b, scid, dcid in advsrc:
361 361 if b in explicit:
362 362 explicit.remove(b)
363 363 if not ancestors or repo[scid].rev() in ancestors:
364 364 pushop.outbookmarks.append((b, dcid, scid))
366 366 # search for added bookmarks
366 366 for b, scid, dcid in addsrc:
367 367 if b in explicit:
368 368 explicit.remove(b)
369 369 pushop.outbookmarks.append((b, '', scid))
370 370 # search for overwritten bookmark
371 371 for b, scid, dcid in advdst + diverge + differ:
372 372 if b in explicit:
373 373 explicit.remove(b)
374 374 pushop.outbookmarks.append((b, dcid, scid))
375 375 # search for bookmark to delete
376 376 for b, scid, dcid in adddst:
377 377 if b in explicit:
378 378 explicit.remove(b)
379 379 # treat as "deleted locally"
380 380 pushop.outbookmarks.append((b, dcid, ''))
381 381 # identical bookmarks shouldn't get reported
382 382 for b, scid, dcid in same:
383 383 if b in explicit:
384 384 explicit.remove(b)
385 385
386 386 if explicit:
387 387 explicit = sorted(explicit)
388 388 # we should probably list all of them
389 389 ui.warn(_('bookmark %s does not exist on the local '
390 390 'or remote repository!\n') % explicit[0])
391 391 pushop.bkresult = 2
392 392
393 393 pushop.outbookmarks.sort()
394 394
395 395 def _pushcheckoutgoing(pushop):
396 396 outgoing = pushop.outgoing
397 397 unfi = pushop.repo.unfiltered()
398 398 if not outgoing.missing:
399 399 # nothing to push
400 400 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
401 401 return False
402 402 # something to push
403 403 if not pushop.force:
404 404 # if repo.obsstore is false --> no obsolete markers,
405 405 # so we can skip the iteration
406 406 if unfi.obsstore:
407 407 # these messages are defined here to stay within the 80-char limit
408 408 mso = _("push includes obsolete changeset: %s!")
409 409 mst = {"unstable": _("push includes unstable changeset: %s!"),
410 410 "bumped": _("push includes bumped changeset: %s!"),
411 411 "divergent": _("push includes divergent changeset: %s!")}
412 412 # If we are to push and there is at least one
413 413 # obsolete or unstable changeset in missing, at
414 414 # least one of the missing heads will be obsolete or
415 415 # unstable. So checking heads only is ok
416 416 for node in outgoing.missingheads:
417 417 ctx = unfi[node]
418 418 if ctx.obsolete():
419 419 raise util.Abort(mso % ctx)
420 420 elif ctx.troubled():
421 421 raise util.Abort(mst[ctx.troubles()[0]] % ctx)
422 422 newbm = pushop.ui.configlist('bookmarks', 'pushing')
423 423 discovery.checkheads(unfi, pushop.remote, outgoing,
424 424 pushop.remoteheads,
425 425 pushop.newbranch,
426 426 bool(pushop.incoming),
427 427 newbm)
428 428 return True
429 429
430 430 # List of names of steps to perform for an outgoing bundle2, order matters.
431 431 b2partsgenorder = []
432 432
433 433 # Mapping between step name and function
434 434 #
435 435 # This exists to help extensions wrap steps if necessary
436 436 b2partsgenmapping = {}
437 437
438 438 def b2partsgenerator(stepname, idx=None):
439 439 """decorator for function generating bundle2 part
440 440
441 441 The function is added to the step -> function mapping and appended to the
442 442 list of steps. Beware that decorated functions will be added in order
443 443 (this may matter).
444 444
445 445 You can only use this decorator for new steps; if you want to wrap a step
446 446 from an extension, change the b2partsgenmapping dictionary directly."""
447 447 def dec(func):
448 448 assert stepname not in b2partsgenmapping
449 449 b2partsgenmapping[stepname] = func
450 450 if idx is None:
451 451 b2partsgenorder.append(stepname)
452 452 else:
453 453 b2partsgenorder.insert(idx, stepname)
454 454 return func
455 455 return dec
456 456
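# Illustrative sketch: wrapping an existing step from an extension by
# replacing its entry in the mapping; the wrapper below is hypothetical.
#
#     origpart = b2partsgenmapping['changeset']
#     def wrappedpart(pushop, bundler):
#         pushop.ui.debug('about to generate the changeset part\n')
#         return origpart(pushop, bundler)
#     b2partsgenmapping['changeset'] = wrappedpart
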
457 457 @b2partsgenerator('changeset')
458 458 def _pushb2ctx(pushop, bundler):
459 459 """handle changegroup push through bundle2
460 460
461 461 addchangegroup result is stored in the ``pushop.cgresult`` attribute.
462 462 """
463 463 if 'changesets' in pushop.stepsdone:
464 464 return
465 465 pushop.stepsdone.add('changesets')
466 466 # Send known heads to the server for race detection.
467 467 if not _pushcheckoutgoing(pushop):
468 468 return
469 469 pushop.repo.prepushoutgoinghooks(pushop.repo,
470 470 pushop.remote,
471 471 pushop.outgoing)
472 472 if not pushop.force:
473 473 bundler.newpart('check:heads', data=iter(pushop.remoteheads))
474 474 b2caps = bundle2.bundle2caps(pushop.remote)
475 475 version = None
476 476 cgversions = b2caps.get('changegroup')
477 477 if not cgversions: # 3.1 and 3.2 ship with an empty value
478 478 cg = changegroup.getlocalchangegroupraw(pushop.repo, 'push',
479 479 pushop.outgoing)
480 480 else:
481 481 cgversions = [v for v in cgversions if v in changegroup.packermap]
482 482 if not cgversions:
483 483 raise ValueError(_('no common changegroup version'))
484 484 version = max(cgversions)
485 485 cg = changegroup.getlocalchangegroupraw(pushop.repo, 'push',
486 486 pushop.outgoing,
487 487 version=version)
488 488 cgpart = bundler.newpart('changegroup', data=cg)
489 489 if version is not None:
490 490 cgpart.addparam('version', version)
491 491 def handlereply(op):
492 492 """extract addchangegroup returns from server reply"""
493 493 cgreplies = op.records.getreplies(cgpart.id)
494 494 assert len(cgreplies['changegroup']) == 1
495 495 pushop.cgresult = cgreplies['changegroup'][0]['return']
496 496 return handlereply
497 497
498 498 @b2partsgenerator('phase')
499 499 def _pushb2phases(pushop, bundler):
500 500 """handle phase push through bundle2"""
501 501 if 'phases' in pushop.stepsdone:
502 502 return
503 503 b2caps = bundle2.bundle2caps(pushop.remote)
504 504 if not 'pushkey' in b2caps:
505 505 return
506 506 pushop.stepsdone.add('phases')
507 507 part2node = []
508 508 enc = pushkey.encode
509 509 for newremotehead in pushop.outdatedphases:
510 510 part = bundler.newpart('pushkey')
511 511 part.addparam('namespace', enc('phases'))
512 512 part.addparam('key', enc(newremotehead.hex()))
513 513 part.addparam('old', enc(str(phases.draft)))
514 514 part.addparam('new', enc(str(phases.public)))
515 515 part2node.append((part.id, newremotehead))
516 516 def handlereply(op):
517 517 for partid, node in part2node:
518 518 partrep = op.records.getreplies(partid)
519 519 results = partrep['pushkey']
520 520 assert len(results) <= 1
521 521 msg = None
522 522 if not results:
523 523 msg = _('server ignored update of %s to public!\n') % node
524 524 elif not int(results[0]['return']):
525 525 msg = _('updating %s to public failed!\n') % node
526 526 if msg is not None:
527 527 pushop.ui.warn(msg)
528 528 return handlereply
529 529
530 530 @b2partsgenerator('obsmarkers')
531 531 def _pushb2obsmarkers(pushop, bundler):
532 532 if 'obsmarkers' in pushop.stepsdone:
533 533 return
534 534 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
535 535 if obsolete.commonversion(remoteversions) is None:
536 536 return
537 537 pushop.stepsdone.add('obsmarkers')
538 538 if pushop.outobsmarkers:
539 539 buildobsmarkerspart(bundler, pushop.outobsmarkers)
540 540
541 541 @b2partsgenerator('bookmarks')
542 542 def _pushb2bookmarks(pushop, bundler):
543 543 """handle phase push through bundle2"""
544 544 if 'bookmarks' in pushop.stepsdone:
545 545 return
546 546 b2caps = bundle2.bundle2caps(pushop.remote)
547 547 if 'pushkey' not in b2caps:
548 548 return
549 549 pushop.stepsdone.add('bookmarks')
550 550 part2book = []
551 551 enc = pushkey.encode
552 552 for book, old, new in pushop.outbookmarks:
553 553 part = bundler.newpart('pushkey')
554 554 part.addparam('namespace', enc('bookmarks'))
555 555 part.addparam('key', enc(book))
556 556 part.addparam('old', enc(old))
557 557 part.addparam('new', enc(new))
558 558 action = 'update'
559 559 if not old:
560 560 action = 'export'
561 561 elif not new:
562 562 action = 'delete'
563 563 part2book.append((part.id, book, action))
564 564
565 565
566 566 def handlereply(op):
567 567 ui = pushop.ui
568 568 for partid, book, action in part2book:
569 569 partrep = op.records.getreplies(partid)
570 570 results = partrep['pushkey']
571 571 assert len(results) <= 1
572 572 if not results:
573 573 pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
574 574 else:
575 575 ret = int(results[0]['return'])
576 576 if ret:
577 577 ui.status(bookmsgmap[action][0] % book)
578 578 else:
579 579 ui.warn(bookmsgmap[action][1] % book)
580 580 if pushop.bkresult is not None:
581 581 pushop.bkresult = 1
582 582 return handlereply
583 583
584 584
585 585 def _pushbundle2(pushop):
586 586 """push data to the remote using bundle2
587 587
588 588 The only currently supported type of data is changegroup but this will
589 589 evolve in the future."""
590 590 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
591 591 pushback = (pushop.trmanager
592 592 and pushop.ui.configbool('experimental', 'bundle2.pushback'))
593 593
594 594 # create reply capability
595 595 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
596 596 allowpushback=pushback))
597 597 bundler.newpart('replycaps', data=capsblob)
598 598 replyhandlers = []
599 599 for partgenname in b2partsgenorder:
600 600 partgen = b2partsgenmapping[partgenname]
601 601 ret = partgen(pushop, bundler)
602 602 if callable(ret):
603 603 replyhandlers.append(ret)
604 604 # do not push if nothing to push
605 605 if bundler.nbparts <= 1:
606 606 return
607 607 stream = util.chunkbuffer(bundler.getchunks())
608 608 try:
609 609 reply = pushop.remote.unbundle(stream, ['force'], 'push')
610 610 except error.BundleValueError, exc:
611 611 raise util.Abort('missing support for %s' % exc)
612 612 try:
613 613 trgetter = None
614 614 if pushback:
615 615 trgetter = pushop.trmanager.transaction
616 616 op = bundle2.processbundle(pushop.repo, reply, trgetter)
617 617 except error.BundleValueError, exc:
618 618 raise util.Abort('missing support for %s' % exc)
619 619 for rephand in replyhandlers:
620 620 rephand(op)
621 621
622 622 def _pushchangeset(pushop):
623 623 """Make the actual push of changeset bundle to remote repo"""
624 624 if 'changesets' in pushop.stepsdone:
625 625 return
626 626 pushop.stepsdone.add('changesets')
627 627 if not _pushcheckoutgoing(pushop):
628 628 return
629 629 pushop.repo.prepushoutgoinghooks(pushop.repo,
630 630 pushop.remote,
631 631 pushop.outgoing)
632 632 outgoing = pushop.outgoing
633 633 unbundle = pushop.remote.capable('unbundle')
634 634 # TODO: get bundlecaps from remote
635 635 bundlecaps = None
636 636 # create a changegroup from local
637 637 if pushop.revs is None and not (outgoing.excluded
638 638 or pushop.repo.changelog.filteredrevs):
639 639 # push everything,
640 640 # use the fast path, no race possible on push
641 641 bundler = changegroup.cg1packer(pushop.repo, bundlecaps)
642 642 cg = changegroup.getsubset(pushop.repo,
643 643 outgoing,
644 644 bundler,
645 645 'push',
646 646 fastpath=True)
647 647 else:
648 648 cg = changegroup.getlocalchangegroup(pushop.repo, 'push', outgoing,
649 649 bundlecaps)
650 650
651 651 # apply changegroup to remote
652 652 if unbundle:
653 653 # local repo finds heads on server, finds out what
654 654 # revs it must push. Once revs are transferred, if the
655 655 # server finds it has different heads (someone else won a
656 656 # commit/push race), the server aborts.
657 657 if pushop.force:
658 658 remoteheads = ['force']
659 659 else:
660 660 remoteheads = pushop.remoteheads
661 661 # ssh: return remote's addchangegroup()
662 662 # http: return remote's addchangegroup() or 0 for error
663 663 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
664 664 pushop.repo.url())
665 665 else:
666 666 # we return an integer indicating remote head count
667 667 # change
668 668 pushop.cgresult = pushop.remote.addchangegroup(cg, 'push',
669 669 pushop.repo.url())
670 670
671 671 def _pushsyncphase(pushop):
672 672 """synchronise phase information locally and remotely"""
673 673 cheads = pushop.commonheads
674 674 # even when we don't push, exchanging phase data is useful
675 675 remotephases = pushop.remote.listkeys('phases')
676 676 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
677 677 and remotephases # server supports phases
678 678 and pushop.cgresult is None # nothing was pushed
679 679 and remotephases.get('publishing', False)):
680 680 # When:
681 681 # - this is a subrepo push
682 682 # - and the remote supports phases
683 683 # - and no changeset was pushed
684 684 # - and the remote is publishing
685 685 # then we may be in the issue 3871 case!
686 686 # We replace the courtesy phase synchronisation with a forced
687 687 # publishing state, so that changesets that are draft locally
688 688 # but public on the remote get turned public here too.
689 689 remotephases = {'publishing': 'True'}
690 690 if not remotephases: # old server, or a public-only reply from a non-publishing one
691 691 _localphasemove(pushop, cheads)
692 692 # don't push any phase data as there is nothing to push
693 693 else:
694 694 ana = phases.analyzeremotephases(pushop.repo, cheads,
695 695 remotephases)
696 696 pheads, droots = ana
697 697 ### Apply remote phase on local
698 698 if remotephases.get('publishing', False):
699 699 _localphasemove(pushop, cheads)
700 700 else: # publish = False
701 701 _localphasemove(pushop, pheads)
702 702 _localphasemove(pushop, cheads, phases.draft)
703 703 ### Apply local phase on remote
704 704
705 705 if pushop.cgresult:
706 706 if 'phases' in pushop.stepsdone:
707 707 # phases already pushed through bundle2
708 708 return
709 709 outdated = pushop.outdatedphases
710 710 else:
711 711 outdated = pushop.fallbackoutdatedphases
712 712
713 713 pushop.stepsdone.add('phases')
714 714
715 715 # filter heads already turned public by the push
716 716 outdated = [c for c in outdated if c.node() not in pheads]
717 717 # fallback to independent pushkey command
718 718 for newremotehead in outdated:
719 719 r = pushop.remote.pushkey('phases',
720 720 newremotehead.hex(),
721 721 str(phases.draft),
722 722 str(phases.public))
723 723 if not r:
724 724 pushop.ui.warn(_('updating %s to public failed!\n')
725 725 % newremotehead)
726 726
727 727 def _localphasemove(pushop, nodes, phase=phases.public):
728 728 """move <nodes> to <phase> in the local source repo"""
729 729 if pushop.trmanager:
730 730 phases.advanceboundary(pushop.repo,
731 731 pushop.trmanager.transaction(),
732 732 phase,
733 733 nodes)
734 734 else:
735 735 # repo is not locked, do not change any phases!
736 736 # Inform the user that phases should have been moved when
737 737 # applicable.
738 738 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
739 739 phasestr = phases.phasenames[phase]
740 740 if actualmoves:
741 741 pushop.ui.status(_('cannot lock source repo, skipping '
742 742 'local %s phase update\n') % phasestr)
743 743
744 744 def _pushobsolete(pushop):
745 745 """utility function to push obsolete markers to a remote"""
746 746 if 'obsmarkers' in pushop.stepsdone:
747 747 return
748 748 pushop.ui.debug('try to push obsolete markers to remote\n')
749 749 repo = pushop.repo
750 750 remote = pushop.remote
751 751 pushop.stepsdone.add('obsmarkers')
752 752 if pushop.outobsmarkers:
753 753 rslts = []
754 754 remotedata = obsolete._pushkeyescape(pushop.outobsmarkers)
755 755 for key in sorted(remotedata, reverse=True):
756 756 # reverse sort to ensure we end with dump0
757 757 data = remotedata[key]
758 758 rslts.append(remote.pushkey('obsolete', key, '', data))
759 759 if [r for r in rslts if not r]:
760 760 msg = _('failed to push some obsolete markers!\n')
761 761 repo.ui.warn(msg)
762 762
763 763 def _pushbookmark(pushop):
764 764 """Update bookmark position on remote"""
765 765 if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
766 766 return
767 767 pushop.stepsdone.add('bookmarks')
768 768 ui = pushop.ui
769 769 remote = pushop.remote
770 770
771 771 for b, old, new in pushop.outbookmarks:
772 772 action = 'update'
773 773 if not old:
774 774 action = 'export'
775 775 elif not new:
776 776 action = 'delete'
777 777 if remote.pushkey('bookmarks', b, old, new):
778 778 ui.status(bookmsgmap[action][0] % b)
779 779 else:
780 780 ui.warn(bookmsgmap[action][1] % b)
781 781 # discovery can have set the value from an invalid entry
782 782 if pushop.bkresult is not None:
783 783 pushop.bkresult = 1
784 784
785 785 class pulloperation(object):
786 786 """A object that represent a single pull operation
787 787
788 788 It purpose is to carry pull related state and very common operation.
789 789
790 790 A new should be created at the beginning of each pull and discarded
791 791 afterward.
792 792 """
793 793
794 794 def __init__(self, repo, remote, heads=None, force=False, bookmarks=()):
795 795 # repo we pull into
796 796 self.repo = repo
797 797 # repo we pull from
798 798 self.remote = remote
799 799 # revision we try to pull (None is "all")
800 800 self.heads = heads
801 801 # bookmark pulled explicitly
802 802 self.explicitbookmarks = bookmarks
803 803 # do we force pull?
804 804 self.force = force
805 805 # transaction manager
806 806 self.trmanager = None
807 807 # set of common changesets between local and remote before pull
808 808 self.common = None
809 809 # set of pulled heads
810 810 self.rheads = None
811 811 # list of missing changesets to fetch remotely
812 812 self.fetch = None
813 813 # remote bookmarks data
814 814 self.remotebookmarks = None
815 815 # result of changegroup pulling (used as return code by pull)
816 816 self.cgresult = None
817 817 # list of steps already done
818 818 self.stepsdone = set()
819 819
820 820 @util.propertycache
821 821 def pulledsubset(self):
822 822 """heads of the set of changeset target by the pull"""
823 823 # compute target subset
824 824 if self.heads is None:
825 825 # We pulled everything possible
826 826 # sync on everything common
827 827 c = set(self.common)
828 828 ret = list(self.common)
829 829 for n in self.rheads:
830 830 if n not in c:
831 831 ret.append(n)
832 832 return ret
833 833 else:
834 834 # We pulled a specific subset
835 835 # sync on this subset
836 836 return self.heads
837 837
838 838 def gettransaction(self):
839 839 # deprecated; talk to trmanager directly
840 840 return self.trmanager.transaction()
841 841
842 842 class transactionmanager(object):
843 843 """An object to manage the life cycle of a transaction
844 844
845 845 It creates the transaction on demand and calls the appropriate hooks when
846 846 closing the transaction."""
847 847 def __init__(self, repo, source, url):
848 848 self.repo = repo
849 849 self.source = source
850 850 self.url = url
851 851 self._tr = None
852 852
853 853 def transaction(self):
854 854 """Return an open transaction object, constructing if necessary"""
855 855 if not self._tr:
856 856 trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
857 857 self._tr = self.repo.transaction(trname)
858 858 self._tr.hookargs['source'] = self.source
859 859 self._tr.hookargs['url'] = self.url
860 860 return self._tr
861 861
862 862 def close(self):
863 863 """close transaction if created"""
864 864 if self._tr is not None:
865 865 self._tr.close()
866 866
867 867 def release(self):
868 868 """release transaction if created"""
869 869 if self._tr is not None:
870 870 self._tr.release()
871 871
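# Illustrative sketch of the expected life cycle (mirroring what pull()
# does below):
#
#     trmanager = transactionmanager(repo, 'pull', remote.url())
#     try:
#         tr = trmanager.transaction()  # opened lazily, on first use
#         # ... apply incoming data within tr ...
#         trmanager.close()
#     finally:
#         trmanager.release()  # no-op when close() already ran
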
872 872 def pull(repo, remote, heads=None, force=False, bookmarks=()):
873 873 pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks)
874 874 if pullop.remote.local():
875 875 missing = set(pullop.remote.requirements) - pullop.repo.supported
876 876 if missing:
877 877 msg = _("required features are not"
878 878 " supported in the destination:"
879 879 " %s") % (', '.join(sorted(missing)))
880 880 raise util.Abort(msg)
881 881
882 882 pullop.remotebookmarks = remote.listkeys('bookmarks')
883 883 lock = pullop.repo.lock()
884 884 try:
885 885 pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
886 886 _pulldiscovery(pullop)
887 887 if _canusebundle2(pullop):
888 888 _pullbundle2(pullop)
889 889 _pullchangeset(pullop)
890 890 _pullphase(pullop)
891 891 _pullbookmarks(pullop)
892 892 _pullobsolete(pullop)
893 893 pullop.trmanager.close()
894 894 finally:
895 895 pullop.trmanager.release()
896 896 lock.release()
897 897
898 898 return pullop
899 899
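# Illustrative sketch (assumptions: 'src' and 'node' are hypothetical):
#
#     from mercurial import hg
#     other = hg.peer(repo, {}, src)
#     pullop = pull(repo, other, heads=[node])
#     if pullop.cgresult:
#         repo.ui.status('changesets added\n')
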
900 900 # list of steps to perform discovery before pull
901 901 pulldiscoveryorder = []
902 902
903 903 # Mapping between step name and function
904 904 #
905 905 # This exists to help extensions wrap steps if necessary
906 906 pulldiscoverymapping = {}
907 907
908 908 def pulldiscovery(stepname):
909 909 """decorator for function performing discovery before pull
910 910
911 911 The function is added to the step -> function mapping and appended to the
912 912 list of steps. Beware that decorated function will be added in order (this
913 913 may matter).
914 914
915 915 You can only use this decorator for a new step, if you want to wrap a step
916 916 from an extension, change the pulldiscovery dictionary directly."""
917 917 def dec(func):
918 918 assert stepname not in pulldiscoverymapping
919 919 pulldiscoverymapping[stepname] = func
920 920 pulldiscoveryorder.append(stepname)
921 921 return func
922 922 return dec
923 923
924 924 def _pulldiscovery(pullop):
925 925 """Run all discovery steps"""
926 926 for stepname in pulldiscoveryorder:
927 927 step = pulldiscoverymapping[stepname]
928 928 step(pullop)
929 929
930 930 @pulldiscovery('changegroup')
931 931 def _pulldiscoverychangegroup(pullop):
932 932 """discovery phase for the pull
933 933
934 934 Currently handles changeset discovery only; it will change to handle all
935 935 discovery at some point."""
936 936 tmp = discovery.findcommonincoming(pullop.repo,
937 937 pullop.remote,
938 938 heads=pullop.heads,
939 939 force=pullop.force)
940 940 common, fetch, rheads = tmp
941 941 nm = pullop.repo.unfiltered().changelog.nodemap
942 942 if fetch and rheads:
943 943 # If a remote head is filtered locally, let's drop it from the unknown
944 944 # remote heads and put it back in common.
945 945 #
946 946 # This is a hackish solution to catch most "common but locally
947 947 # hidden" situations. We do not perform discovery on the unfiltered
948 948 # repository because it ends up doing a pathological amount of round
949 949 # trips for a huge amount of changesets we do not care about.
950 950 #
951 951 # If a set of such "common but filtered" changesets exists on the server
952 952 # but does not include a remote head, we'll not be able to detect it.
953 953 scommon = set(common)
954 954 filteredrheads = []
955 955 for n in rheads:
956 956 if n in nm:
957 957 if n not in scommon:
958 958 common.append(n)
959 959 else:
960 960 filteredrheads.append(n)
961 961 if not filteredrheads:
962 962 fetch = []
963 963 rheads = filteredrheads
964 964 pullop.common = common
965 965 pullop.fetch = fetch
966 966 pullop.rheads = rheads
967 967
968 968 def _pullbundle2(pullop):
969 969 """pull data using bundle2
970 970
971 971 For now, the only supported data is the changegroup."""
972 972 remotecaps = bundle2.bundle2caps(pullop.remote)
973 973 kwargs = {'bundlecaps': caps20to10(pullop.repo)}
974 974 # pulling changegroup
975 975 pullop.stepsdone.add('changegroup')
976 976
977 977 kwargs['common'] = pullop.common
978 978 kwargs['heads'] = pullop.heads or pullop.rheads
979 979 kwargs['cg'] = pullop.fetch
980 980 if 'listkeys' in remotecaps:
981 981 kwargs['listkeys'] = ['phase', 'bookmarks']
982 982 if not pullop.fetch:
983 983 pullop.repo.ui.status(_("no changes found\n"))
984 984 pullop.cgresult = 0
985 985 else:
986 986 if pullop.heads is None and list(pullop.common) == [nullid]:
987 987 pullop.repo.ui.status(_("requesting all changes\n"))
988 988 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
989 989 remoteversions = bundle2.obsmarkersversion(remotecaps)
990 990 if obsolete.commonversion(remoteversions) is not None:
991 991 kwargs['obsmarkers'] = True
992 992 pullop.stepsdone.add('obsmarkers')
993 993 _pullbundle2extraprepare(pullop, kwargs)
994 994 bundle = pullop.remote.getbundle('pull', **kwargs)
995 995 try:
996 996 op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
997 997 except error.BundleValueError, exc:
998 998 raise util.Abort('missing support for %s' % exc)
999 999
1000 1000 if pullop.fetch:
1001 1001 results = [cg['return'] for cg in op.records['changegroup']]
1002 1002 pullop.cgresult = changegroup.combineresults(results)
1003 1003
1004 1004 # processing phases change
1005 1005 for namespace, value in op.records['listkeys']:
1006 1006 if namespace == 'phases':
1007 1007 _pullapplyphases(pullop, value)
1008 1008
1009 1009 # processing bookmark update
1010 1010 for namespace, value in op.records['listkeys']:
1011 1011 if namespace == 'bookmarks':
1012 1012 pullop.remotebookmarks = value
1013 1013 _pullbookmarks(pullop)
1014 1014
1015 1015 def _pullbundle2extraprepare(pullop, kwargs):
1016 1016 """hook function so that extensions can extend the getbundle call"""
1017 1017 pass
1018 1018
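# Illustrative sketch: an extension can extend the getbundle call by
# wrapping this hook; the 'example' argument below is hypothetical.
#
#     from mercurial import exchange, extensions
#     def extraprepare(orig, pullop, kwargs):
#         kwargs['example'] = True
#         return orig(pullop, kwargs)
#     extensions.wrapfunction(exchange, '_pullbundle2extraprepare',
#                             extraprepare)
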
1019 1019 def _pullchangeset(pullop):
1020 1020 """pull changeset from unbundle into the local repo"""
1021 1021 # We delay the open of the transaction as late as possible so we
1022 1022 # don't open transaction for nothing or you break future useful
1023 1023 # rollback call
1024 1024 if 'changegroup' in pullop.stepsdone:
1025 1025 return
1026 1026 pullop.stepsdone.add('changegroup')
1027 1027 if not pullop.fetch:
1028 1028 pullop.repo.ui.status(_("no changes found\n"))
1029 1029 pullop.cgresult = 0
1030 1030 return
1031 1031 pullop.gettransaction()
1032 1032 if pullop.heads is None and list(pullop.common) == [nullid]:
1033 1033 pullop.repo.ui.status(_("requesting all changes\n"))
1034 1034 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
1035 1035 # issue1320, avoid a race if remote changed after discovery
1036 1036 pullop.heads = pullop.rheads
1037 1037
1038 1038 if pullop.remote.capable('getbundle'):
1039 1039 # TODO: get bundlecaps from remote
1040 1040 cg = pullop.remote.getbundle('pull', common=pullop.common,
1041 1041 heads=pullop.heads or pullop.rheads)
1042 1042 elif pullop.heads is None:
1043 1043 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
1044 1044 elif not pullop.remote.capable('changegroupsubset'):
1045 1045 raise util.Abort(_("partial pull cannot be done because "
1046 1046 "other repository doesn't support "
1047 1047 "changegroupsubset."))
1048 1048 else:
1049 1049 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
1050 1050 pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull',
1051 1051 pullop.remote.url())
1052 1052
1053 1053 def _pullphase(pullop):
1054 1054 # Get remote phases data from remote
1055 1055 if 'phases' in pullop.stepsdone:
1056 1056 return
1057 1057 remotephases = pullop.remote.listkeys('phases')
1058 1058 _pullapplyphases(pullop, remotephases)
1059 1059
1060 1060 def _pullapplyphases(pullop, remotephases):
1061 1061 """apply phase movement from observed remote state"""
1062 1062 if 'phases' in pullop.stepsdone:
1063 1063 return
1064 1064 pullop.stepsdone.add('phases')
1065 1065 publishing = bool(remotephases.get('publishing', False))
1066 1066 if remotephases and not publishing:
1067 1067 # remote is new and non-publishing
1068 1068 pheads, _dr = phases.analyzeremotephases(pullop.repo,
1069 1069 pullop.pulledsubset,
1070 1070 remotephases)
1071 1071 dheads = pullop.pulledsubset
1072 1072 else:
1073 1073 # Remote is old or publishing; all common changesets
1074 1074 # should be seen as public
1075 1075 pheads = pullop.pulledsubset
1076 1076 dheads = []
1077 1077 unfi = pullop.repo.unfiltered()
1078 1078 phase = unfi._phasecache.phase
1079 1079 rev = unfi.changelog.nodemap.get
1080 1080 public = phases.public
1081 1081 draft = phases.draft
1082 1082
1083 1083 # exclude changesets already public locally and update the others
1084 1084 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
1085 1085 if pheads:
1086 1086 tr = pullop.gettransaction()
1087 1087 phases.advanceboundary(pullop.repo, tr, public, pheads)
1088 1088
1089 1089 # exclude changesets already draft locally and update the others
1090 1090 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
1091 1091 if dheads:
1092 1092 tr = pullop.gettransaction()
1093 1093 phases.advanceboundary(pullop.repo, tr, draft, dheads)
1094 1094
1095 1095 def _pullbookmarks(pullop):
1096 1096 """process the remote bookmark information to update the local one"""
1097 1097 if 'bookmarks' in pullop.stepsdone:
1098 1098 return
1099 1099 pullop.stepsdone.add('bookmarks')
1100 1100 repo = pullop.repo
1101 1101 remotebookmarks = pullop.remotebookmarks
1102 1102 bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
1103 1103 pullop.remote.url(),
1104 1104 pullop.gettransaction,
1105 1105 explicit=pullop.explicitbookmarks)
1106 1106
1107 1107 def _pullobsolete(pullop):
1108 1108 """utility function to pull obsolete markers from a remote
1109 1109
1110 1110 `gettransaction` is a function that returns the pull transaction, creating
1111 1111 one if necessary. We return the transaction to inform the calling code that
1112 1112 a new transaction has been created (when applicable).
1113 1113
1114 1114 Exists mostly to allow overriding for experimentation purposes"""
1115 1115 if 'obsmarkers' in pullop.stepsdone:
1116 1116 return
1117 1117 pullop.stepsdone.add('obsmarkers')
1118 1118 tr = None
1119 1119 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1120 1120 pullop.repo.ui.debug('fetching remote obsolete markers\n')
1121 1121 remoteobs = pullop.remote.listkeys('obsolete')
1122 1122 if 'dump0' in remoteobs:
1123 1123 tr = pullop.gettransaction()
1124 1124 for key in sorted(remoteobs, reverse=True):
1125 1125 if key.startswith('dump'):
1126 1126 data = base85.b85decode(remoteobs[key])
1127 1127 pullop.repo.obsstore.mergemarkers(tr, data)
1128 1128 pullop.repo.invalidatevolatilesets()
1129 1129 return tr
1130 1130
1131 1131 def caps20to10(repo):
1132 1132 """return a set with appropriate options to use bundle20 during getbundle"""
1133 1133 caps = set(['HG20'])
1134 1134 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
1135 1135 caps.add('bundle2=' + urllib.quote(capsblob))
1136 1136 return caps
1137 1137
1138 1138 # List of names of steps to perform for a bundle2 for getbundle, order matters.
1139 1139 getbundle2partsorder = []
1140 1140
1141 1141 # Mapping between step name and function
1142 1142 #
1143 1143 # This exists to help extensions wrap steps if necessary
1144 1144 getbundle2partsmapping = {}
1145 1145
1146 1146 def getbundle2partsgenerator(stepname, idx=None):
1147 1147 """decorator for function generating bundle2 part for getbundle
1148 1148
1149 1149 The function is added to the step -> function mapping and appended to the
1150 1150 list of steps. Beware that decorated functions will be added in order
1151 1151 (this may matter).
1152 1152
1153 1153 You can only use this decorator for new steps; if you want to wrap a step
1154 1154 from an extension, change the getbundle2partsmapping dictionary directly."""
1155 1155 def dec(func):
1156 1156 assert stepname not in getbundle2partsmapping
1157 1157 getbundle2partsmapping[stepname] = func
1158 1158 if idx is None:
1159 1159 getbundle2partsorder.append(stepname)
1160 1160 else:
1161 1161 getbundle2partsorder.insert(idx, stepname)
1162 1162 return func
1163 1163 return dec
1164 1164
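# Illustrative sketch: an extension contributing its own part to the
# getbundle reply; the step name and part contents are hypothetical.
#
#     @getbundle2partsgenerator('example')
#     def _getbundleexamplepart(bundler, repo, source, bundlecaps=None,
#                               b2caps=None, **kwargs):
#         bundler.newpart('output', data='example', mandatory=False)
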
1165 1165 def getbundle(repo, source, heads=None, common=None, bundlecaps=None,
1166 1166 **kwargs):
1167 1167 """return a full bundle (with potentially multiple kind of parts)
1168 1168
1169 1169 Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
1170 1170 passed. For now, the bundle can contain only a changegroup, but this will
1171 1171 change when more part types become available for bundle2.
1172 1172
1173 1173 This is different from changegroup.getchangegroup that only returns an HG10
1174 1174 changegroup bundle. They may eventually get reunited in the future when we
1175 1175 have a clearer idea of the API we want to use to query different data.
1176 1176
1177 1177 The implementation is at a very early stage and will get massive rework
1178 1178 when the API of bundle is refined.
1179 1179 """
1180 1180 # bundle10 case
1181 1181 usebundle2 = False
1182 1182 if bundlecaps is not None:
1183 1183 usebundle2 = util.any((cap.startswith('HG2') for cap in bundlecaps))
1184 1184 if not usebundle2:
1185 1185 if bundlecaps and not kwargs.get('cg', True):
1186 1186 raise ValueError(_('request for bundle10 must include changegroup'))
1187 1187
1188 1188 if kwargs:
1189 1189 raise ValueError(_('unsupported getbundle arguments: %s')
1190 1190 % ', '.join(sorted(kwargs.keys())))
1191 1191 return changegroup.getchangegroup(repo, source, heads=heads,
1192 1192 common=common, bundlecaps=bundlecaps)
1193 1193
1194 1194 # bundle20 case
1195 1195 b2caps = {}
1196 1196 for bcaps in bundlecaps:
1197 1197 if bcaps.startswith('bundle2='):
1198 1198 blob = urllib.unquote(bcaps[len('bundle2='):])
1199 1199 b2caps.update(bundle2.decodecaps(blob))
1200 1200 bundler = bundle2.bundle20(repo.ui, b2caps)
1201 1201
1202 1202 kwargs['heads'] = heads
1203 1203 kwargs['common'] = common
1204 1204
1205 1205 for name in getbundle2partsorder:
1206 1206 func = getbundle2partsmapping[name]
1207 1207 func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
1208 1208 **kwargs)
1209 1209
1210 1210 return util.chunkbuffer(bundler.getchunks())
1211 1211
1212 1212 @getbundle2partsgenerator('changegroup')
1213 1213 def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
1214 1214 b2caps=None, heads=None, common=None, **kwargs):
1215 1215 """add a changegroup part to the requested bundle"""
1216 1216 cg = None
1217 1217 if kwargs.get('cg', True):
1218 1218 # build changegroup bundle here.
1219 1219 version = None
1220 1220 cgversions = b2caps.get('changegroup')
1221 1221 if not cgversions: # 3.1 and 3.2 ship with an empty value
1222 1222 cg = changegroup.getchangegroupraw(repo, source, heads=heads,
1223 1223 common=common,
1224 1224 bundlecaps=bundlecaps)
1225 1225 else:
1226 1226 cgversions = [v for v in cgversions if v in changegroup.packermap]
1227 1227 if not cgversions:
1228 1228 raise ValueError(_('no common changegroup version'))
1229 1229 version = max(cgversions)
1230 1230 cg = changegroup.getchangegroupraw(repo, source, heads=heads,
1231 1231 common=common,
1232 1232 bundlecaps=bundlecaps,
1233 1233 version=version)
1234 1234
1235 1235 if cg:
1236 1236 part = bundler.newpart('changegroup', data=cg)
1237 1237 if version is not None:
1238 1238 part.addparam('version', version)
1239 1239
1240 1240 @getbundle2partsgenerator('listkeys')
1241 1241 def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
1242 1242 b2caps=None, **kwargs):
1243 1243 """add parts containing listkeys namespaces to the requested bundle"""
1244 1244 listkeys = kwargs.get('listkeys', ())
1245 1245 for namespace in listkeys:
1246 1246 part = bundler.newpart('listkeys')
1247 1247 part.addparam('namespace', namespace)
1248 1248 keys = repo.listkeys(namespace).items()
1249 1249 part.data = pushkey.encodekeys(keys)
1250 1250
1251 1251 @getbundle2partsgenerator('obsmarkers')
1252 1252 def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
1253 1253 b2caps=None, heads=None, **kwargs):
1254 1254 """add an obsolescence markers part to the requested bundle"""
1255 1255 if kwargs.get('obsmarkers', False):
1256 1256 if heads is None:
1257 1257 heads = repo.heads()
1258 1258 subset = [c.node() for c in repo.set('::%ln', heads)]
1259 1259 markers = repo.obsstore.relevantmarkers(subset)
1260 1260 buildobsmarkerspart(bundler, markers)
1261 1261
1262 1262 def check_heads(repo, their_heads, context):
1263 1263 """check if the heads of a repo have been modified
1264 1264
1265 1265 Used by peer for unbundling.
1266 1266 """
1267 1267 heads = repo.heads()
1268 1268 heads_hash = util.sha1(''.join(sorted(heads))).digest()
1269 1269 if not (their_heads == ['force'] or their_heads == heads or
1270 1270 their_heads == ['hashed', heads_hash]):
1271 1271 # someone else committed/pushed/unbundled while we
1272 1272 # were transferring data
1273 1273 raise error.PushRaced('repository changed while %s - '
1274 1274 'please try again' % context)
1275 1275
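# Illustrative sketch: computing the 'hashed' form accepted above on
# the sending side.
#
#     heads = repo.heads()
#     their_heads = ['hashed',
#                    util.sha1(''.join(sorted(heads))).digest()]
#     check_heads(repo, their_heads, 'uploading changes')
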
1276 1276 def unbundle(repo, cg, heads, source, url):
1277 1277 """Apply a bundle to a repo.
1278 1278
1279 1279 This function makes sure the repo is locked during the application and has
1280 1280 a mechanism to check that no push race occurred between the creation of the
1281 1281 bundle and its application.
1282 1282
1283 1283 If the push was raced, a PushRaced exception is raised."""
1284 1284 r = 0
1285 1285 # need a transaction when processing a bundle2 stream
1286 1286 wlock = lock = tr = None
1287 1287 recordout = None
1288 1288 try:
1289 1289 check_heads(repo, heads, 'uploading changes')
1290 1290 # push can proceed
1291 1291 if util.safehasattr(cg, 'params'):
1292 1292 r = None
1293 1293 try:
1294 1294 wlock = repo.wlock()
1295 1295 lock = repo.lock()
1296 1296 tr = repo.transaction(source)
1297 1297 tr.hookargs['source'] = source
1298 1298 tr.hookargs['url'] = url
1299 1299 tr.hookargs['bundle2'] = '1'
1300 r = bundle2.processbundle(repo, cg, lambda: tr).reply
1300 op = bundle2.bundleoperation(repo, lambda: tr)
1301 try:
1302 r = bundle2.processbundle(repo, cg, op=op)
1303 finally:
1304 r = op.reply
1301 1305 if r is not None:
1302 1306 repo.ui.pushbuffer(error=True, subproc=True)
1303 1307 def recordout(output):
1304 1308 r.newpart('output', data=output, mandatory=False)
1305 1309 tr.close()
1306 1310 except Exception, exc:
1307 1311 exc.duringunbundle2 = True
1308 1312 if r is not None:
1309 1313 parts = exc._bundle2salvagedoutput = r.salvageoutput()
1310 1314 def recordout(output):
1311 1315 part = bundle2.bundlepart('output', data=output,
1312 1316 mandatory=False)
1313 1317 parts.append(part)
1314 1318 raise
1315 1319 else:
1316 1320 lock = repo.lock()
1317 1321 r = changegroup.addchangegroup(repo, cg, source, url)
1318 1322 finally:
1319 1323 lockmod.release(tr, lock, wlock)
1320 1324 if recordout is not None:
1321 1325 recordout(repo.ui.popbuffer())
1322 1326 return r
@@ -1,621 +1,669 b''
1 1 Test exchange of common information using bundle2
2 2
3 3
4 4 $ getmainid() {
5 5 > hg -R main log --template '{node}\n' --rev "$1"
6 6 > }
7 7
8 8 enable obsolescence
9 9
10 10 $ cat > $TESTTMP/bundle2-pushkey-hook.sh << EOF
11 11 > echo pushkey: lock state after \"\$HG_NAMESPACE\"
12 12 > hg debuglock
13 13 > EOF
14 14
15 15 $ cat >> $HGRCPATH << EOF
16 16 > [experimental]
17 17 > evolution=createmarkers,exchange
18 18 > bundle2-exp=True
19 19 > [ui]
20 20 > ssh=python "$TESTDIR/dummyssh"
21 21 > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline}
22 22 > [web]
23 23 > push_ssl = false
24 24 > allow_push = *
25 25 > [phases]
26 26 > publish=False
27 27 > [hooks]
28 28 > pretxnclose.tip = hg log -r tip -T "pre-close-tip:{node|short} {phase} {bookmarks}\n"
29 29 > txnclose.tip = hg log -r tip -T "postclose-tip:{node|short} {phase} {bookmarks}\n"
30 30 > txnclose.env = sh -c "HG_LOCAL= python \"$TESTDIR/printenv.py\" txnclose"
31 31 > pushkey= sh "$TESTTMP/bundle2-pushkey-hook.sh"
32 32 > EOF
33 33
34 34 The extension requires a repo (currently unused)
35 35
36 36 $ hg init main
37 37 $ cd main
38 38 $ touch a
39 39 $ hg add a
40 40 $ hg commit -m 'a'
41 41 pre-close-tip:3903775176ed draft
42 42 postclose-tip:3903775176ed draft
43 43 txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=commit (glob)
44 44
45 45 $ hg unbundle $TESTDIR/bundles/rebase.hg
46 46 adding changesets
47 47 adding manifests
48 48 adding file changes
49 49 added 8 changesets with 7 changes to 7 files (+3 heads)
50 50 pre-close-tip:02de42196ebe draft
51 51 postclose-tip:02de42196ebe draft
52 52 txnclose hook: HG_NODE=cd010b8cd998f3981a5a8115f94f8da4ab506089 HG_PHASES_MOVED=1 HG_SOURCE=unbundle HG_TXNID=TXN:* HG_TXNNAME=unbundle (glob)
53 53 bundle:*/tests/bundles/rebase.hg HG_URL=bundle:*/tests/bundles/rebase.hg (glob)
54 54 (run 'hg heads' to see heads, 'hg merge' to merge)
55 55
56 56 $ cd ..
57 57
58 58 Real world exchange
59 59 =====================
60 60
61 61 Add more obsolescence information
62 62
63 63 $ hg -R main debugobsolete -d '0 0' 1111111111111111111111111111111111111111 `getmainid 9520eea781bc`
64 64 pre-close-tip:02de42196ebe draft
65 65 postclose-tip:02de42196ebe draft
66 66 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
67 67 $ hg -R main debugobsolete -d '0 0' 2222222222222222222222222222222222222222 `getmainid 24b6387c8c8c`
68 68 pre-close-tip:02de42196ebe draft
69 69 postclose-tip:02de42196ebe draft
70 70 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
71 71
72 72 clone --pull
73 73
74 74 $ hg -R main phase --public cd010b8cd998
75 75 pre-close-tip:02de42196ebe draft
76 76 postclose-tip:02de42196ebe draft
77 77 txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
78 78 $ hg clone main other --pull --rev 9520eea781bc
79 79 adding changesets
80 80 adding manifests
81 81 adding file changes
82 82 added 2 changesets with 2 changes to 2 files
83 83 1 new obsolescence markers
84 84 pre-close-tip:9520eea781bc draft
85 85 postclose-tip:9520eea781bc draft
86 86 txnclose hook: HG_NEW_OBSMARKERS=1 HG_NODE=cd010b8cd998f3981a5a8115f94f8da4ab506089 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
87 87 file:/*/$TESTTMP/main HG_URL=file:$TESTTMP/main (glob)
88 88 updating to branch default
89 89 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
90 90 $ hg -R other log -G
91 91 @ 1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
92 92 |
93 93 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
94 94
95 95 $ hg -R other debugobsolete
96 96 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
97 97
98 98 pull
99 99
100 100 $ hg -R main phase --public 9520eea781bc
101 101 pre-close-tip:02de42196ebe draft
102 102 postclose-tip:02de42196ebe draft
103 103 txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
104 104 $ hg -R other pull -r 24b6387c8c8c
105 105 pulling from $TESTTMP/main (glob)
106 106 searching for changes
107 107 adding changesets
108 108 adding manifests
109 109 adding file changes
110 110 added 1 changesets with 1 changes to 1 files (+1 heads)
111 111 1 new obsolescence markers
112 112 pre-close-tip:24b6387c8c8c draft
113 113 postclose-tip:24b6387c8c8c draft
114 114 txnclose hook: HG_NEW_OBSMARKERS=1 HG_NODE=24b6387c8c8cae37178880f3fa95ded3cb1cf785 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
115 115 file:/*/$TESTTMP/main HG_URL=file:$TESTTMP/main (glob)
116 116 (run 'hg heads' to see heads, 'hg merge' to merge)
117 117 $ hg -R other log -G
118 118 o 2:24b6387c8c8c draft Nicolas Dumazet <nicdumz.commits@gmail.com> F
119 119 |
120 120 | @ 1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
121 121 |/
122 122 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
123 123
124 124 $ hg -R other debugobsolete
125 125 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
126 126 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
127 127
128 128 pull empty (with phase movement)
129 129
130 130 $ hg -R main phase --public 24b6387c8c8c
131 131 pre-close-tip:02de42196ebe draft
132 132 postclose-tip:02de42196ebe draft
133 133 txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
134 134 $ hg -R other pull -r 24b6387c8c8c
135 135 pulling from $TESTTMP/main (glob)
136 136 no changes found
137 137 pre-close-tip:24b6387c8c8c public
138 138 postclose-tip:24b6387c8c8c public
139 139 txnclose hook: HG_NEW_OBSMARKERS=0 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
140 140 file:/*/$TESTTMP/main HG_URL=file:$TESTTMP/main (glob)
141 141 $ hg -R other log -G
142 142 o 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
143 143 |
144 144 | @ 1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
145 145 |/
146 146 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
147 147
148 148 $ hg -R other debugobsolete
149 149 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
150 150 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
151 151
152 152 pull empty
153 153
154 154 $ hg -R other pull -r 24b6387c8c8c
155 155 pulling from $TESTTMP/main (glob)
156 156 no changes found
157 157 pre-close-tip:24b6387c8c8c public
158 158 postclose-tip:24b6387c8c8c public
159 159 txnclose hook: HG_NEW_OBSMARKERS=0 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
160 160 file:/*/$TESTTMP/main HG_URL=file:$TESTTMP/main (glob)
161 161 $ hg -R other log -G
162 162 o 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
163 163 |
164 164 | @ 1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
165 165 |/
166 166 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
167 167
168 168 $ hg -R other debugobsolete
169 169 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
170 170 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
171 171
172 172 add extra data to test their exchange during push
173 173
174 174 $ hg -R main bookmark --rev eea13746799a book_eea1
175 175 $ hg -R main debugobsolete -d '0 0' 3333333333333333333333333333333333333333 `getmainid eea13746799a`
176 176 pre-close-tip:02de42196ebe draft
177 177 postclose-tip:02de42196ebe draft
178 178 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
179 179 $ hg -R main bookmark --rev 02de42196ebe book_02de
180 180 $ hg -R main debugobsolete -d '0 0' 4444444444444444444444444444444444444444 `getmainid 02de42196ebe`
181 181 pre-close-tip:02de42196ebe draft book_02de
182 182 postclose-tip:02de42196ebe draft book_02de
183 183 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
184 184 $ hg -R main bookmark --rev 42ccdea3bb16 book_42cc
185 185 $ hg -R main debugobsolete -d '0 0' 5555555555555555555555555555555555555555 `getmainid 42ccdea3bb16`
186 186 pre-close-tip:02de42196ebe draft book_02de
187 187 postclose-tip:02de42196ebe draft book_02de
188 188 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
189 189 $ hg -R main bookmark --rev 5fddd98957c8 book_5fdd
190 190 $ hg -R main debugobsolete -d '0 0' 6666666666666666666666666666666666666666 `getmainid 5fddd98957c8`
191 191 pre-close-tip:02de42196ebe draft book_02de
192 192 postclose-tip:02de42196ebe draft book_02de
193 193 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
194 194 $ hg -R main bookmark --rev 32af7686d403 book_32af
195 195 $ hg -R main debugobsolete -d '0 0' 7777777777777777777777777777777777777777 `getmainid 32af7686d403`
196 196 pre-close-tip:02de42196ebe draft book_02de
197 197 postclose-tip:02de42196ebe draft book_02de
198 198 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
199 199
200 200 $ hg -R other bookmark --rev cd010b8cd998 book_eea1
201 201 $ hg -R other bookmark --rev cd010b8cd998 book_02de
202 202 $ hg -R other bookmark --rev cd010b8cd998 book_42cc
203 203 $ hg -R other bookmark --rev cd010b8cd998 book_5fdd
204 204 $ hg -R other bookmark --rev cd010b8cd998 book_32af
205 205
206 206 $ hg -R main phase --public eea13746799a
207 207 pre-close-tip:02de42196ebe draft book_02de
208 208 postclose-tip:02de42196ebe draft book_02de
209 209 txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
210 210
211 211 push
212 212 $ hg -R main push other --rev eea13746799a --bookmark book_eea1
213 213 pushing to other
214 214 searching for changes
215 215 remote: adding changesets
216 216 remote: adding manifests
217 217 remote: adding file changes
218 218 remote: added 1 changesets with 0 changes to 0 files (-1 heads)
219 219 remote: 1 new obsolescence markers
220 220 remote: pre-close-tip:eea13746799a public book_eea1
221 221 remote: pushkey: lock state after "phases"
222 222 remote: lock: free
223 223 remote: wlock: free
224 224 remote: pushkey: lock state after "bookmarks"
225 225 remote: lock: free
226 226 remote: wlock: free
227 227 remote: postclose-tip:eea13746799a public book_eea1
228 228 remote: txnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_NEW_OBSMARKERS=1 HG_NODE=eea13746799a9e0bfd88f29d3c2e9dc9389f524f HG_PHASES_MOVED=1 HG_SOURCE=push HG_TXNID=TXN:* HG_TXNNAME=push HG_URL=push (glob)
229 229 updating bookmark book_eea1
230 230 pre-close-tip:02de42196ebe draft book_02de
231 231 postclose-tip:02de42196ebe draft book_02de
232 232 txnclose hook: HG_SOURCE=push-response HG_TXNID=TXN:* HG_TXNNAME=push-response (glob)
233 233 file:/*/$TESTTMP/other HG_URL=file:$TESTTMP/other (glob)
234 234 $ hg -R other log -G
235 235 o 3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
236 236 |\
237 237 | o 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
238 238 | |
239 239 @ | 1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
240 240 |/
241 241 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de book_32af book_42cc book_5fdd A
242 242
243 243 $ hg -R other debugobsolete
244 244 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
245 245 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
246 246 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
247 247
248 248 pull over ssh
249 249
250 250 $ hg -R other pull ssh://user@dummy/main -r 02de42196ebe --bookmark book_02de
251 251 pulling from ssh://user@dummy/main
252 252 searching for changes
253 253 adding changesets
254 254 adding manifests
255 255 adding file changes
256 256 added 1 changesets with 1 changes to 1 files (+1 heads)
257 257 1 new obsolescence markers
258 258 updating bookmark book_02de
259 259 pre-close-tip:02de42196ebe draft book_02de
260 260 postclose-tip:02de42196ebe draft book_02de
261 261 txnclose hook: HG_BOOKMARK_MOVED=1 HG_NEW_OBSMARKERS=1 HG_NODE=02de42196ebee42ef284b6780a87cdc96e8eaab6 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
262 262 ssh://user@dummy/main HG_URL=ssh://user@dummy/main
263 263 (run 'hg heads' to see heads, 'hg merge' to merge)
264 264 $ hg -R other debugobsolete
265 265 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
266 266 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
267 267 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
268 268 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
269 269
270 270 pull over http
271 271
272 272 $ hg -R main serve -p $HGPORT -d --pid-file=main.pid -E main-error.log
273 273 $ cat main.pid >> $DAEMON_PIDS
274 274
275 275 $ hg -R other pull http://localhost:$HGPORT/ -r 42ccdea3bb16 --bookmark book_42cc
276 276 pulling from http://localhost:$HGPORT/
277 277 searching for changes
278 278 adding changesets
279 279 adding manifests
280 280 adding file changes
281 281 added 1 changesets with 1 changes to 1 files (+1 heads)
282 282 1 new obsolescence markers
283 283 updating bookmark book_42cc
284 284 pre-close-tip:42ccdea3bb16 draft book_42cc
285 285 postclose-tip:42ccdea3bb16 draft book_42cc
286 286 txnclose hook: HG_BOOKMARK_MOVED=1 HG_NEW_OBSMARKERS=1 HG_NODE=42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
287 287 http://localhost:$HGPORT/ HG_URL=http://localhost:$HGPORT/
288 288 (run 'hg heads .' to see heads, 'hg merge' to merge)
289 289 $ cat main-error.log
290 290 $ hg -R other debugobsolete
291 291 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
292 292 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
293 293 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
294 294 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
295 295 5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
296 296
297 297 push over ssh
298 298
299 299 $ hg -R main push ssh://user@dummy/other -r 5fddd98957c8 --bookmark book_5fdd
300 300 pushing to ssh://user@dummy/other
301 301 searching for changes
302 302 remote: adding changesets
303 303 remote: adding manifests
304 304 remote: adding file changes
305 305 remote: added 1 changesets with 1 changes to 1 files
306 306 remote: 1 new obsolescence markers
307 307 remote: pre-close-tip:5fddd98957c8 draft book_5fdd
308 308 remote: pushkey: lock state after "bookmarks"
309 309 remote: lock: free
310 310 remote: wlock: free
311 311 remote: postclose-tip:5fddd98957c8 draft book_5fdd
312 312 remote: txnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_NEW_OBSMARKERS=1 HG_NODE=5fddd98957c8a54a4d436dfe1da9d87f21a1b97b HG_SOURCE=serve HG_TXNID=TXN:* HG_TXNNAME=serve HG_URL=remote:ssh:127.0.0.1 (glob)
313 313 updating bookmark book_5fdd
314 314 pre-close-tip:02de42196ebe draft book_02de
315 315 postclose-tip:02de42196ebe draft book_02de
316 316 txnclose hook: HG_SOURCE=push-response HG_TXNID=TXN:* HG_TXNNAME=push-response (glob)
317 317 ssh://user@dummy/other HG_URL=ssh://user@dummy/other
318 318 $ hg -R other log -G
319 319 o 6:5fddd98957c8 draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_5fdd C
320 320 |
321 321 o 5:42ccdea3bb16 draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_42cc B
322 322 |
323 323 | o 4:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de H
324 324 | |
325 325 | | o 3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
326 326 | |/|
327 327 | o | 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
328 328 |/ /
329 329 | @ 1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
330 330 |/
331 331 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_32af A
332 332
333 333 $ hg -R other debugobsolete
334 334 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
335 335 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
336 336 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
337 337 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
338 338 5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
339 339 6666666666666666666666666666666666666666 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
340 340
341 341 push over http
342 342
343 343 $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
344 344 $ cat other.pid >> $DAEMON_PIDS
345 345
346 346 $ hg -R main phase --public 32af7686d403
347 347 pre-close-tip:02de42196ebe draft book_02de
348 348 postclose-tip:02de42196ebe draft book_02de
349 349 txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
350 350 $ hg -R main push http://localhost:$HGPORT2/ -r 32af7686d403 --bookmark book_32af
351 351 pushing to http://localhost:$HGPORT2/
352 352 searching for changes
353 353 remote: adding changesets
354 354 remote: adding manifests
355 355 remote: adding file changes
356 356 remote: added 1 changesets with 1 changes to 1 files
357 357 remote: 1 new obsolescence markers
358 358 remote: pre-close-tip:32af7686d403 public book_32af
359 359 remote: pushkey: lock state after "phases"
360 360 remote: lock: free
361 361 remote: wlock: free
362 362 remote: pushkey: lock state after "bookmarks"
363 363 remote: lock: free
364 364 remote: wlock: free
365 365 remote: postclose-tip:32af7686d403 public book_32af
366 366 remote: txnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_NEW_OBSMARKERS=1 HG_NODE=32af7686d403cf45b5d95f2d70cebea587ac806a HG_PHASES_MOVED=1 HG_SOURCE=serve HG_TXNID=TXN:* HG_TXNNAME=serve HG_URL=remote:http:127.0.0.1: (glob)
367 367 updating bookmark book_32af
368 368 pre-close-tip:02de42196ebe draft book_02de
369 369 postclose-tip:02de42196ebe draft book_02de
370 370 txnclose hook: HG_SOURCE=push-response HG_TXNID=TXN:* HG_TXNNAME=push-response (glob)
371 371 http://localhost:$HGPORT2/ HG_URL=http://localhost:$HGPORT2/
372 372 $ cat other-error.log
373 373
374 374 Check final content.
375 375
376 376 $ hg -R other log -G
377 377 o 7:32af7686d403 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_32af D
378 378 |
379 379 o 6:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_5fdd C
380 380 |
381 381 o 5:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_42cc B
382 382 |
383 383 | o 4:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de H
384 384 | |
385 385 | | o 3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
386 386 | |/|
387 387 | o | 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
388 388 |/ /
389 389 | @ 1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
390 390 |/
391 391 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
392 392
393 393 $ hg -R other debugobsolete
394 394 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
395 395 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
396 396 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
397 397 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
398 398 5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
399 399 6666666666666666666666666666666666666666 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
400 400 7777777777777777777777777777777777777777 32af7686d403cf45b5d95f2d70cebea587ac806a 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
401 401
402 402 (check that no 'pending' files remain)
403 403
404 404 $ ls -1 other/.hg/bookmarks*
405 405 other/.hg/bookmarks
406 406 $ ls -1 other/.hg/store/phaseroots*
407 407 other/.hg/store/phaseroots
408 408 $ ls -1 other/.hg/store/00changelog.i*
409 409 other/.hg/store/00changelog.i
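(the globs above would also match 'bookmarks.pending', 'phaseroots.pending' and
'00changelog.i.a', the transient files written so that pretxn* hooks can inspect the
uncommitted state; a clean transaction close or rollback must remove them, which is what
the single-entry listings assert. As a sketch of the failure mode being guarded against,
with a hypothetical leftover that this test never actually produces:

  $ touch other/.hg/bookmarks.pending
  $ ls -1 other/.hg/bookmarks*
  other/.hg/bookmarks
  other/.hg/bookmarks.pending
  $ rm other/.hg/bookmarks.pending
)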
410 410
411 411 Error Handling
412 412 ==============
413 413
414 414 Check that errors are properly returned to the client during push.
415 415
416 416 Setting up
417 417
418 418 $ cat > failpush.py << EOF
419 419 > """A small extension that makes push fails when using bundle2
420 420 >
421 421 > used to test error handling in bundle2
422 422 > """
423 423 >
424 424 > from mercurial import util
425 425 > from mercurial import bundle2
426 426 > from mercurial import exchange
427 427 > from mercurial import extensions
428 428 >
429 429 > def _pushbundle2failpart(pushop, bundler):
430 430 > reason = pushop.ui.config('failpush', 'reason', None)
431 431 > part = None
432 432 > if reason == 'abort':
433 433 > bundler.newpart('test:abort')
434 434 > if reason == 'unknown':
435 435 > bundler.newpart('test:unknown')
436 436 > if reason == 'race':
437 437 > # 20 Bytes of crap
438 438 > bundler.newpart('check:heads', data='01234567890123456789')
439 439 >
440 440 > @bundle2.parthandler("test:abort")
441 441 > def handleabort(op, part):
442 442 > raise util.Abort('Abandon ship!', hint="don't panic")
443 443 >
444 444 > def uisetup(ui):
445 445 > exchange.b2partsgenmapping['failpart'] = _pushbundle2failpart
446 446 > exchange.b2partsgenorder.insert(0, 'failpart')
447 447 >
448 448 > EOF
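Two registration points are exercised by this extension: _pushbundle2failpart is a
client-side parts generator (entered in exchange.b2partsgenmapping and prepended to
exchange.b2partsgenorder, so its part precedes the changegroup), while the
@bundle2.parthandler function is dispatched server-side, by part type, when the received
bundle is processed. A minimal sketch of the same pattern with a harmless part (the
'test:noop' name and 'nooppart' key are made up for illustration):

  from mercurial import bundle2, exchange

  def _addnooppart(pushop, bundler):
      # client side: contribute one (empty) part to the outgoing bundle2 stream
      bundler.newpart('test:noop', data='')

  @bundle2.parthandler('test:noop')
  def handlenoop(op, part):
      # server side: called while the received bundle is being unbundled
      op.ui.note('test:noop part seen\n')

  def uisetup(ui):
      exchange.b2partsgenmapping['nooppart'] = _addnooppart
      exchange.b2partsgenorder.append('nooppart')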
449 449
450 450 $ cd main
451 451 $ hg up tip
452 452 3 files updated, 0 files merged, 1 files removed, 0 files unresolved
453 453 $ echo 'I' > I
454 454 $ hg add I
455 455 $ hg ci -m 'I'
456 456 pre-close-tip:e7ec4e813ba6 draft
457 457 postclose-tip:e7ec4e813ba6 draft
458 458 txnclose hook: HG_TXNID=TXN:* HG_TXNNAME=commit (glob)
459 459 $ hg id
460 460 e7ec4e813ba6 tip
461 461 $ cd ..
462 462
463 463 $ cat << EOF >> $HGRCPATH
464 464 > [extensions]
465 465 > failpush=$TESTTMP/failpush.py
466 466 > EOF
467 467
468 468 $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS
469 469 $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
470 470 $ cat other.pid >> $DAEMON_PIDS
471 471
472 472 Doing the actual push: Abort error
473 473
474 474 $ cat << EOF >> $HGRCPATH
475 475 > [failpush]
476 476 > reason = abort
477 477 > EOF
478 478
479 479 $ hg -R main push other -r e7ec4e813ba6
480 480 pushing to other
481 481 searching for changes
482 482 abort: Abandon ship!
483 483 (don't panic)
484 484 [255]
485 485
486 486 $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
487 487 pushing to ssh://user@dummy/other
488 488 searching for changes
489 489 abort: Abandon ship!
490 490 (don't panic)
491 491 [255]
492 492
493 493 $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
494 494 pushing to http://localhost:$HGPORT2/
495 495 searching for changes
496 496 abort: Abandon ship!
497 497 (don't panic)
498 498 [255]
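All three transports fail identically because the server does not simply drop the
connection: the util.Abort raised by the test:abort handler travels back as an
'error:abort' part in the bundle2 reply, which the client processes and re-raises
locally. The client-side handler is essentially the following (paraphrased from
mercurial/bundle2.py of this era; treat it as a sketch, not verbatim):

  from mercurial import bundle2
  from mercurial import util

  @bundle2.parthandler('error:abort', ('message', 'hint'))
  def handleerrorabort(op, inpart):
      """re-raise an abort error transmitted over the wire"""
      raise util.Abort(inpart.params['message'],
                       hint=inpart.params.get('hint'))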
499 499
500 500
501 501 Doing the actual push: unknown mandatory parts
502 502
503 503 $ cat << EOF >> $HGRCPATH
504 504 > [failpush]
505 505 > reason = unknown
506 506 > EOF
507 507
508 508 $ hg -R main push other -r e7ec4e813ba6
509 509 pushing to other
510 510 searching for changes
511 511 abort: missing support for test:unknown
512 512 [255]
513 513
514 514 $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
515 515 pushing to ssh://user@dummy/other
516 516 searching for changes
517 517 abort: missing support for test:unknown
518 518 [255]
519 519
520 520 $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
521 521 pushing to http://localhost:$HGPORT2/
522 522 searching for changes
523 523 abort: missing support for test:unknown
524 524 [255]
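"missing support for test:unknown" is the generic outcome for a mandatory part without a
registered handler: parts created by newpart() are mandatory by default, and a receiver
must abort rather than silently skip a part it cannot interpret. An advisory part would
simply be ignored; assuming newpart() forwards keyword arguments to the part constructor
(it does in this era), the difference is a single flag:

  # mandatory (the default): a receiver without a handler must abort
  bundler.newpart('test:unknown')
  # advisory: a receiver without a handler may skip the part silently
  bundler.newpart('test:unknown', mandatory=False)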
525 525
526 526 Doing the actual push: race
527 527
528 528 $ cat << EOF >> $HGRCPATH
529 529 > [failpush]
530 530 > reason = race
531 531 > EOF
532 532
533 533 $ hg -R main push other -r e7ec4e813ba6
534 534 pushing to other
535 535 searching for changes
536 536 abort: push failed:
537 537 'repository changed while pushing - please try again'
538 538 [255]
539 539
540 540 $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
541 541 pushing to ssh://user@dummy/other
542 542 searching for changes
543 543 abort: push failed:
544 544 'repository changed while pushing - please try again'
545 545 [255]
546 546
547 547 $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
548 548 pushing to http://localhost:$HGPORT2/
549 549 searching for changes
550 550 abort: push failed:
551 551 'repository changed while pushing - please try again'
552 552 [255]
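"repository changed while pushing - please try again" comes from the server-side
'check:heads' handler: the part payload is read as a flat sequence of 20-byte binary
nodes that must match the repository's current heads, so the 20 bytes of junk pushed
above guarantee a mismatch. The handler is roughly (paraphrased, details approximate):

  from mercurial import bundle2
  from mercurial import error

  @bundle2.parthandler('check:heads')
  def handlecheckheads(op, inpart):
      """abort if the repository heads differ from the pushed snapshot"""
      heads = []
      h = inpart.read(20)          # payload is a flat list of binary nodes
      while len(h) == 20:
          heads.append(h)
          h = inpart.read(20)
      if sorted(heads) != sorted(op.repo.heads()):
          raise error.PushRaced('repository changed while pushing - '
                                'please try again')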
553 553
554 554 Doing the actual push: hook abort
555 555
556 556 $ cat << EOF >> $HGRCPATH
557 557 > [failpush]
558 558 > reason =
559 559 > [hooks]
560 560 > pretxnclose.failpush = echo "You shall not pass!"; false
561 561 > txnabort.failpush = echo 'Cleaning up the mess...'
562 562 > EOF
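In the transcripts that follow, the output ordering traces the transaction life cycle:
the pretxnclose hook runs first ("You shall not pass!"), its non-zero exit triggers
"transaction abort!", the txnabort hook then runs during cleanup ("Cleaning up the
mess..."), and "rollback completed" closes the sequence before the client-side abort
message. A minimal standalone reproduction (hypothetical demo repo, assuming a clean
configuration; this test's global hooks would add extra lines):

  $ hg init hookdemo
  $ cat >> hookdemo/.hg/hgrc << EOF
  > [hooks]
  > pretxnclose.deny = echo "denied"; false
  > txnabort.note = echo "cleaning up"
  > EOF
  $ cd hookdemo
  $ echo a > a
  $ hg commit -Am a
  adding a
  denied
  transaction abort!
  cleaning up
  rollback completed
  abort: pretxnclose.deny hook exited with status 1
  [255]
  $ cd ..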
563 563
564 564 $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS
565 565 $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
566 566 $ cat other.pid >> $DAEMON_PIDS
567 567
568 568 $ hg -R main push other -r e7ec4e813ba6
569 569 pushing to other
570 570 searching for changes
571 571 remote: adding changesets
572 572 remote: adding manifests
573 573 remote: adding file changes
574 574 remote: added 1 changesets with 1 changes to 1 files
575 575 remote: pre-close-tip:e7ec4e813ba6 draft
576 576 remote: You shall not pass!
577 577 remote: transaction abort!
578 578 remote: Cleaning up the mess...
579 579 remote: rollback completed
580 580 abort: pretxnclose.failpush hook exited with status 1
581 581 [255]
582 582
583 583 $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
584 584 pushing to ssh://user@dummy/other
585 585 searching for changes
586 586 remote: adding changesets
587 587 remote: adding manifests
588 588 remote: adding file changes
589 589 remote: added 1 changesets with 1 changes to 1 files
590 590 remote: pre-close-tip:e7ec4e813ba6 draft
591 591 remote: You shall not pass!
592 592 remote: transaction abort!
593 593 remote: Cleaning up the mess...
594 594 remote: rollback completed
595 595 abort: pretxnclose.failpush hook exited with status 1
596 596 [255]
597 597
598 598 $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
599 599 pushing to http://localhost:$HGPORT2/
600 600 searching for changes
601 601 remote: adding changesets
602 602 remote: adding manifests
603 603 remote: adding file changes
604 604 remote: added 1 changesets with 1 changes to 1 files
605 605 remote: pre-close-tip:e7ec4e813ba6 draft
606 606 remote: You shall not pass!
607 607 remote: transaction abort!
608 608 remote: Cleaning up the mess...
609 609 remote: rollback completed
610 610 abort: pretxnclose.failpush hook exited with status 1
611 611 [255]
612 612
613 613 (check that no 'pending' files remain)
614 614
615 615 $ ls -1 other/.hg/bookmarks*
616 616 other/.hg/bookmarks
617 617 $ ls -1 other/.hg/store/phaseroots*
618 618 other/.hg/store/phaseroots
619 619 $ ls -1 other/.hg/store/00changelog.i*
620 620 other/.hg/store/00changelog.i
621 621
622 Check error from a hook run during the unbundling process itself (the pretxnchangegroup line below is appended to the tail of $HGRCPATH and so lands in the existing [hooks] section; the hook fires while bundle2 parts are still being processed, so the output it produces must still be relayed to the client)
623
624 $ cat << EOF >> $HGRCPATH
625 > pretxnchangegroup = echo "Fail early!"; false
626 > EOF
627 $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS # reload http config
628 $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
629 $ cat other.pid >> $DAEMON_PIDS
630
631 $ hg -R main push other -r e7ec4e813ba6
632 pushing to other
633 searching for changes
634 remote: adding changesets
635 remote: adding manifests
636 remote: adding file changes
637 remote: added 1 changesets with 1 changes to 1 files
638 remote: Fail early!
639 remote: transaction abort!
640 remote: Cleaning up the mess...
641 remote: rollback completed
642 abort: pretxnchangegroup hook exited with status 1
643 [255]
644 $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
645 pushing to ssh://user@dummy/other
646 searching for changes
647 remote: adding changesets
648 remote: adding manifests
649 remote: adding file changes
650 remote: added 1 changesets with 1 changes to 1 files
651 remote: Fail early!
652 remote: transaction abort!
653 remote: Cleaning up the mess...
654 remote: rollback completed
655 abort: pretxnchangegroup hook exited with status 1
656 [255]
657 $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
658 pushing to http://localhost:$HGPORT2/
659 searching for changes
660 remote: adding changesets
661 remote: adding manifests
662 remote: adding file changes
663 remote: added 1 changesets with 1 changes to 1 files
664 remote: Fail early!
665 remote: transaction abort!
666 remote: Cleaning up the mess...
667 remote: rollback completed
668 abort: pretxnchangegroup hook exited with status 1
669 [255]
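These last transcripts show the behaviour this changeset is about: although
pretxnchangegroup aborts in the middle of bundle2 part processing, everything the remote
printed before dying ("Fail early!", "transaction abort!", "rollback completed") still
reaches the client over all three transports. The underlying pattern is
capture-and-salvage; a hedged sketch with illustrative names (process_parts and
send_output_part are not real Mercurial APIs):

  # capture output, including from hook subprocesses
  op.ui.pushbuffer(subproc=True)
  try:
      process_parts(op)                      # may raise midway through the bundle
  finally:
      salvaged = op.ui.popbuffer()
      send_output_part(op.reply, salvaged)   # relay whatever was captured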