bundle2: abort when a mandatory pushkey part fails...
Pierre-Yves David
r25481:6de96cb3 default
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is architectured as follows

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

Binary format is as follows

:params size: int32

  The total number of Bytes used by the parameters

:params value: arbitrary number of Bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  Names MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage any
    crazy usage.
  - Textual data allow easy human inspection of a bundle2 header in case of
    troubles.

  Any application level options MUST go into a bundle2 part instead.

Payload part
------------------------

Binary format is as follows

:header size: int32

  The total number of Bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route the part to an application level handler
    that can interpret the payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object to
    interpret the part payload.

    The binary format of the header is as follows

    :typesize: (one byte)

    :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

    :partid: A 32bits integer (unique in the bundle) that can be used to refer
             to this part.

    :parameters:

        Part's parameters may have arbitrary content, the binary structure is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count:  1 byte, number of advisory parameters

        :param-sizes:

            N pairs of bytes, where N is the total number of parameters. Each
            pair contains (<size-of-key>, <size-of-value>) for one parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size pairs stored in the previous
            field.

            Mandatory parameters come first, then the advisory ones.

            Each parameter's key MUST be unique within the part.

:payload:

    payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as much as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part type
contains any uppercase char it is considered mandatory. When no handler is
known for a mandatory part, the process is aborted and an exception is raised.
If the part is advisory and no handler is known, the part is ignored. When the
process is aborted, the full bundle is still read from the stream to keep the
channel usable. But none of the parts read after an abort are processed. In
the future, dropping the stream may become an option for channels we do not
care to preserve.
"""
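The stream-level layout described above (an int32 size followed by a blob of space-separated, urlquoted parameters) can be sketched in Python 3; `encode_stream_params` and `decode_stream_params` are illustrative names, not part of this module:

```python
import struct
from urllib.parse import quote, unquote

def encode_stream_params(params):
    """Serialize [(name, value-or-None), ...] into <int32 size><blob>."""
    blocks = []
    for name, value in params:
        block = quote(name)
        if value is not None:
            block = '%s=%s' % (block, quote(value))
        blocks.append(block)
    blob = ' '.join(blocks).encode('ascii')
    # '>i' is a big-endian int32, the same format as _fstreamparamsize
    return struct.pack('>i', len(blob)) + blob

def decode_stream_params(data):
    """Parse the stream-level parameter block back into a dict."""
    size = struct.unpack('>i', data[:4])[0]
    blob = data[4:4 + size].decode('ascii')
    params = {}
    if blob:
        for item in blob.split(' '):
            pieces = item.split('=', 1)
            name = unquote(pieces[0])
            value = unquote(pieces[1]) if len(pieces) > 1 else None
            params[name] = value
    return params
```

Per the rules above, `Compression` (capitalized) would be mandatory while `simple` would be advisory; the codec itself does not enforce that distinction, the unbundler does.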

import errno
import sys
import util
import struct
import urllib
import string
import obsolete
import pushkey
import url
import re

import changegroup, error, tags
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

preferedchunksize = 4096

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-output: %s\n' % message)

def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-input: %s\n' % message)

def validateparttype(parttype):
    """raise ValueError if a parttype contains an invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>'+('BB'*nbparams)

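The format string built by `_makefpartparamsizes` simply repeats one `BB` pair per parameter, so all the (key size, value size) bytes of the part header can be unpacked in a single call. A small sketch of the resulting format:

```python
import struct

# _makefpartparamsizes(2) would return '>BBBB': one (key-size, value-size)
# byte pair per parameter, big-endian.
fmt = '>' + ('BB' * 2)
assert fmt == '>BBBB'

# Unpacking yields a flat tuple of sizes, in parameter order.
sizes = struct.unpack(fmt, bytes([3, 5, 4, 0]))
assert sizes == (3, 5, 4, 0)
```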
parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`, where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None
        self.captureoutput = captureoutput

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def processbundle(repo, unbundler, transactiongetter=None, op=None):
    """This function processes a bundle, applying effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    This is a very early version of this function that will be strongly
    reworked before final usage.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = ['bundle2-input-bundle:']
        if unbundler.params:
            msg.append(' %i params' % len(unbundler.params))
        if op.gettransaction is None:
            msg.append(' no-transaction')
        else:
            msg.append(' with-transaction')
        msg.append('\n')
        repo.ui.debug(''.join(msg))
    iterparts = enumerate(unbundler.iterparts())
    part = None
    nbpart = 0
    try:
        for nbpart, part in iterparts:
            _processpart(op, part)
    except BaseException, exc:
        for nbpart, part in iterparts:
            # consume the bundle content
            part.seek(0, 2)
        # Small hack to let caller code distinguish exceptions from bundle2
        # processing from processing the old format. This is mostly needed
        # to handle different return codes to unbundle according to the type
        # of bundle. We should probably clean up or drop this return code
        # craziness in a future version.
        exc.duringunbundle2 = True
        salvaged = []
        if op.reply is not None:
            salvaged = op.reply.salvageoutput()
        exc._bundle2salvagedoutput = salvaged
        raise
    finally:
        repo.ui.debug('bundle2-input-bundle: %i parts total\n' % nbpart)

    return op

def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    status = 'unknown' # used by debug output
    try:
        try:
            handler = parthandlermapping.get(part.type)
            if handler is None:
                status = 'unsupported-type'
                raise error.UnsupportedPartError(parttype=part.type)
            indebug(op.ui, 'found a handler for part %r' % part.type)
            unknownparams = part.mandatorykeys - handler.params
            if unknownparams:
                unknownparams = list(unknownparams)
                unknownparams.sort()
                status = 'unsupported-params (%s)' % unknownparams
                raise error.UnsupportedPartError(parttype=part.type,
                                                 params=unknownparams)
            status = 'supported'
        except error.UnsupportedPartError, exc:
            if part.mandatory: # mandatory parts
                raise
            indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
            return # skip to part processing
        finally:
            if op.ui.debugflag:
                msg = ['bundle2-input-part: "%s"' % part.type]
                if not part.mandatory:
                    msg.append(' (advisory)')
                nbmp = len(part.mandatorykeys)
                nbap = len(part.params) - nbmp
                if nbmp or nbap:
                    msg.append(' (params:')
                    if nbmp:
                        msg.append(' %i mandatory' % nbmp)
                    if nbap:
                        msg.append(' %i advisory' % nbap)
                    msg.append(')')
                msg.append(' %s\n' % status)
                op.ui.debug(''.join(msg))

        # handler is called outside the above try block so that we don't
        # risk catching KeyErrors from anything other than the
        # parthandlermapping lookup (any KeyError raised by handler()
        # itself represents a defect of a different variety).
        output = None
        if op.captureoutput and op.reply is not None:
            op.ui.pushbuffer(error=True, subproc=True)
            output = ''
        try:
            handler(op, part)
        finally:
            if output is not None:
                output = op.ui.popbuffer()
            if output:
                outpart = op.reply.newpart('output', data=output,
                                           mandatory=False)
                outpart.addparam('in-reply-to', str(part.id), mandatory=False)
    finally:
        # consume the part content to not corrupt the stream.
        part.seek(0, 2)


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line).
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urllib.unquote(key)
        vals = [urllib.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urllib.quote(ca)
        vals = [urllib.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
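Since this module targets Python 2, a Python 3 transliteration of the two helpers above may help show the capabilities wire format at a glance (the `3` suffix on the names is only to avoid clashing with the originals):

```python
from urllib.parse import quote, unquote

def encodecaps3(caps):
    """Python 3 sketch of encodecaps: one 'name=v1,v2' line per capability."""
    chunks = []
    for ca in sorted(caps):
        vals = [quote(v) for v in caps[ca]]
        ca = quote(ca)
        if vals:
            ca = '%s=%s' % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)

def decodecaps3(blob):
    """Python 3 sketch of decodecaps: parse the blob back into a dict."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, []
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        caps[unquote(key)] = [unquote(v) for v in vals]
    return caps
```

A capability with no values, such as `HG20`, encodes as a bare line; roundtripping a dict through both helpers returns the original mapping.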

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding a part if you
        need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(' (%i params)' % len(self._params))
            msg.append(' %i parts total\n' % len(self._parts))
            self.ui.debug(''.join(msg))
        outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, 'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param

        outdebug(self.ui, 'start of parts')
        for part in self._parts:
            outdebug(self.ui, 'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, 'end of bundle')
        yield _pack(_fpartheadersize, 0)

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urllib.quote(par)
            if value is not None:
                value = urllib.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith('output'):
                salvaged.append(part.copy())
        return salvaged


class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))

    def _unpack(self, format):
        """unpack this struct format from the stream"""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream"""
        return changegroup.readexactly(self._fp, size)

    def seek(self, offset, whence=0):
        """move the underlying file pointer"""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def tell(self):
        """return the file offset, or None if file is not seekable"""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError, e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

def getunbundler(ui, fp, header=None):
    """return a valid unbundler object for a given header"""
    if header is None:
        header = changegroup.readexactly(fp, 4)
    magic, version = header[0:2], header[2:4]
    if magic != 'HG':
        raise util.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise util.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, 'start processing of %s stream' % header)
    return unbundler

class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    def __init__(self, ui, fp):
        self.ui = ui
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, 'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            for p in self._readexact(paramssize).split(' '):
                p = p.split('=', 1)
                p = [urllib.unquote(i) for i in p]
                if len(p) < 2:
                    p.append(None)
                self._processparam(*p)
                params[p[0]] = p[1]
        return params

    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory and this function will raise an UnsupportedPartError when
        they are unknown.

        Note: no options are currently supported. Any input will be either
        ignored or will fail.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        # Some logic will be later added here to try to process the option for
        # a dict of known parameters.
        if name[0].islower():
            indebug(self.ui, "ignoring unknown parameter %r" % name)
        else:
            raise error.UnsupportedPartError(params=(name,))


    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        indebug(self.ui, 'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            part.seek(0, 2)
            headerblock = self._readpartheader()
        indebug(self.ui, 'end of bundle2 stream')

    def _readpartheader(self):
        """reads a part header size and returns the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        return False

formatmap = {'20': unbundle20}

class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. The remote side
    should be able to safely ignore the advisory ones.

    Neither data nor parameters can be modified after generation has begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data='', mandatory=True):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise RuntimeError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(self.type, self._mandatoryparams,
                              self._advisoryparams, self._data, self.mandatory)

    # methods used to define the part content
    def __setdata(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data
    def __getdata(self):
        return self._data
    data = property(__getdata, __setdata)

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

    # methods used to generate the bundle2 stream
    def getchunks(self, ui):
        if self._generated is not None:
            raise RuntimeError('part can only be consumed once')
        self._generated = False

        if ui.debugflag:
            msg = ['bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            if not self.data:
                msg.append(' empty payload')
            elif util.safehasattr(self.data, 'next'):
                msg.append(' streamed payload')
            else:
                msg.append(' %i bytes payload' % len(self.data))
            msg.append('\n')
            ui.debug(''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, 'part %s: "%s"' % (self.id, parttype))
        ## parttype
        header = [_pack(_fparttypesize, len(parttype)),
                  parttype, _pack(_fpartid, self.id),
                 ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        outdebug(ui, 'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, 'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except BaseException, exc:
            # backup exception data for later
            ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
                     % exc)
            exc_info = sys.exc_info()
            msg = 'unexpected error: %s' % exc
            interpart = bundlepart('error:abort', [('message', msg)],
                                   mandatory=False)
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, 'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            raise exc_info[0], exc_info[1], exc_info[2]
        # end of payload
        outdebug(ui, 'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods of providing data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data

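The `<chunksize><chunkdata>` payload framing produced by `getchunks` and `_payloadchunks` can be exercised with a minimal standalone sketch; `frame_payload` and `read_payload` are hypothetical helpers, not part of this module, and the interrupt path signalled by a negative size is deliberately left out:

```python
import struct

def frame_payload(chunks):
    """Emit <int32 size><data> for each chunk, ending with a zero-size chunk."""
    out = b''
    for chunk in chunks:
        out += struct.pack('>i', len(chunk)) + chunk
    return out + struct.pack('>i', 0)

def read_payload(data):
    """Read framed chunks back until the zero-size terminator."""
    chunks = []
    pos = 0
    while True:
        (size,) = struct.unpack('>i', data[pos:pos + 4])
        pos += 4
        if size == 0:
            break
        if size < 0:
            # negative sizes flag special processing (flaginterrupt is -1);
            # this sketch does not handle them
            raise ValueError('special chunk size not handled: %i' % size)
        chunks.append(data[pos:pos + size])
        pos += size
    return b''.join(chunks)
```

Note that `_fpayloadsize` is a *signed* int32 (`'>i'`) precisely so that negative values such as `flaginterrupt` can be smuggled into the size field.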
877 877
878 878 flaginterrupt = -1
879 879
880 880 class interrupthandler(unpackermixin):
881 881 """read one part and process it with restricted capability
882 882
883 883 This allows transmitting exceptions raised on the producer side during part
884 884 iteration while the consumer is reading a part.
885 885
886 886 Parts processed in this manner only have access to a ui object."""
887 887
888 888 def __init__(self, ui, fp):
889 889 super(interrupthandler, self).__init__(fp)
890 890 self.ui = ui
891 891
892 892 def _readpartheader(self):
893 893 """reads a part header size and returns the bytes blob
894 894
895 895 returns None if empty"""
896 896 headersize = self._unpack(_fpartheadersize)[0]
897 897 if headersize < 0:
898 898 raise error.BundleValueError('negative part header size: %i'
899 899 % headersize)
900 900 indebug(self.ui, 'part header size: %i\n' % headersize)
901 901 if headersize:
902 902 return self._readexact(headersize)
903 903 return None
904 904
905 905 def __call__(self):
906 906
907 907 self.ui.debug('bundle2-input-stream-interrupt:'
908 908 ' opening out of band context\n')
909 909 indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
910 910 headerblock = self._readpartheader()
911 911 if headerblock is None:
912 912 indebug(self.ui, 'no part found during interruption.')
913 913 return
914 914 part = unbundlepart(self.ui, headerblock, self._fp)
915 915 op = interruptoperation(self.ui)
916 916 _processpart(op, part)
917 917 self.ui.debug('bundle2-input-stream-interrupt:'
918 918 ' closing out of band context\n')
919 919
920 920 class interruptoperation(object):
921 921 """A limited operation to be used by part handlers during interruption
922 922
923 923 It only has access to a ui object.
924 924 """
925 925
926 926 def __init__(self, ui):
927 927 self.ui = ui
928 928 self.reply = None
929 929 self.captureoutput = False
930 930
931 931 @property
932 932 def repo(self):
933 933 raise RuntimeError('no repo access from stream interruption')
934 934
935 935 def gettransaction(self):
936 936 raise TransactionUnavailable('no repo access from stream interruption')
937 937
938 938 class unbundlepart(unpackermixin):
939 939 """a bundle part read from a bundle"""
940 940
941 941 def __init__(self, ui, header, fp):
942 942 super(unbundlepart, self).__init__(fp)
943 943 self.ui = ui
944 944 # unbundle state attr
945 945 self._headerdata = header
946 946 self._headeroffset = 0
947 947 self._initialized = False
948 948 self.consumed = False
949 949 # part data
950 950 self.id = None
951 951 self.type = None
952 952 self.mandatoryparams = None
953 953 self.advisoryparams = None
954 954 self.params = None
955 955 self.mandatorykeys = ()
956 956 self._payloadstream = None
957 957 self._readheader()
958 958 self._mandatory = None
959 959 self._chunkindex = [] #(payload, file) position tuples for chunk starts
960 960 self._pos = 0
961 961
962 962 def _fromheader(self, size):
963 963 """return the next <size> bytes from the header"""
964 964 offset = self._headeroffset
965 965 data = self._headerdata[offset:(offset + size)]
966 966 self._headeroffset = offset + size
967 967 return data
968 968
969 969 def _unpackheader(self, format):
970 970 """read given format from header
971 971
972 972 This automatically computes the size of the format to read."""
973 973 data = self._fromheader(struct.calcsize(format))
974 974 return _unpack(format, data)
975 975
976 976 def _initparams(self, mandatoryparams, advisoryparams):
977 977 """internal function to set up all logic-related parameters"""
978 978 # make it read only to prevent people touching it by mistake.
979 979 self.mandatoryparams = tuple(mandatoryparams)
980 980 self.advisoryparams = tuple(advisoryparams)
981 981 # user friendly UI
982 982 self.params = dict(self.mandatoryparams)
983 983 self.params.update(dict(self.advisoryparams))
984 984 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
985 985
986 986 def _payloadchunks(self, chunknum=0):
987 987 '''seek to specified chunk and start yielding data'''
988 988 if len(self._chunkindex) == 0:
989 989 assert chunknum == 0, 'Must start with chunk 0'
990 990 self._chunkindex.append((0, super(unbundlepart, self).tell()))
991 991 else:
992 992 assert chunknum < len(self._chunkindex), \
993 993 'Unknown chunk %d' % chunknum
994 994 super(unbundlepart, self).seek(self._chunkindex[chunknum][1])
995 995
996 996 pos = self._chunkindex[chunknum][0]
997 997 payloadsize = self._unpack(_fpayloadsize)[0]
998 998 indebug(self.ui, 'payload chunk size: %i' % payloadsize)
999 999 while payloadsize:
1000 1000 if payloadsize == flaginterrupt:
1001 1001 # interruption detection, the handler will now read a
1002 1002 # single part and process it.
1003 1003 interrupthandler(self.ui, self._fp)()
1004 1004 elif payloadsize < 0:
1005 1005 msg = 'negative payload chunk size: %i' % payloadsize
1006 1006 raise error.BundleValueError(msg)
1007 1007 else:
1008 1008 result = self._readexact(payloadsize)
1009 1009 chunknum += 1
1010 1010 pos += payloadsize
1011 1011 if chunknum == len(self._chunkindex):
1012 1012 self._chunkindex.append((pos,
1013 1013 super(unbundlepart, self).tell()))
1014 1014 yield result
1015 1015 payloadsize = self._unpack(_fpayloadsize)[0]
1016 1016 indebug(self.ui, 'payload chunk size: %i' % payloadsize)
1017 1017
1018 1018 def _findchunk(self, pos):
1019 1019 '''for a given payload position, return a chunk number and offset'''
1020 1020 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1021 1021 if ppos == pos:
1022 1022 return chunk, 0
1023 1023 elif ppos > pos:
1024 1024 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1025 1025 raise ValueError('Unknown chunk')
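`_findchunk` is what makes `seek` on an unbundled part possible: the part records `(payload position, file position)` tuples for each chunk start, and a payload offset is translated into a chunk number plus an offset inside that chunk. A self-contained sketch of that lookup; `find_chunk` and the sample index are illustrative, not part of Mercurial's API:

```python
def find_chunk(chunkindex, pos):
    """Return (chunk number, offset within chunk) for a payload position.

    chunkindex is a list of (payload position, file position) tuples for
    chunk starts, mirroring unbundlepart._chunkindex.
    """
    for chunk, (ppos, fpos) in enumerate(chunkindex):
        if ppos == pos:
            # position falls exactly on a chunk boundary
            return chunk, 0
        elif ppos > pos:
            # position is inside the previous chunk
            return chunk - 1, pos - chunkindex[chunk - 1][0]
    raise ValueError('Unknown chunk')

# example: two 10-byte chunks starting at file offsets 4 and 18
index = [(0, 4), (10, 18), (20, 32)]
```

Seeking to payload offset 13 here lands in chunk 1 at internal offset 3, which is exactly how `seek` decides where to restart `_payloadchunks`.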
1026 1026
1027 1027 def _readheader(self):
1028 1028 """read the header and setup the object"""
1029 1029 typesize = self._unpackheader(_fparttypesize)[0]
1030 1030 self.type = self._fromheader(typesize)
1031 1031 indebug(self.ui, 'part type: "%s"' % self.type)
1032 1032 self.id = self._unpackheader(_fpartid)[0]
1033 1033 indebug(self.ui, 'part id: "%s"' % self.id)
1034 1034 # extract mandatory bit from type
1035 1035 self.mandatory = (self.type != self.type.lower())
1036 1036 self.type = self.type.lower()
1037 1037 ## reading parameters
1038 1038 # param count
1039 1039 mancount, advcount = self._unpackheader(_fpartparamcount)
1040 1040 indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
1041 1041 # param size
1042 1042 fparamsizes = _makefpartparamsizes(mancount + advcount)
1043 1043 paramsizes = self._unpackheader(fparamsizes)
1044 1044 # make it a list of pairs again
1045 1045 paramsizes = zip(paramsizes[::2], paramsizes[1::2])
1046 1046 # split mandatory from advisory
1047 1047 mansizes = paramsizes[:mancount]
1048 1048 advsizes = paramsizes[mancount:]
1049 1049 # retrieve param value
1050 1050 manparams = []
1051 1051 for key, value in mansizes:
1052 1052 manparams.append((self._fromheader(key), self._fromheader(value)))
1053 1053 advparams = []
1054 1054 for key, value in advsizes:
1055 1055 advparams.append((self._fromheader(key), self._fromheader(value)))
1056 1056 self._initparams(manparams, advparams)
1057 1057 ## part payload
1058 1058 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1059 1059 # we read the data, tell it
1060 1060 self._initialized = True
1061 1061
1062 1062 def read(self, size=None):
1063 1063 """read payload data"""
1064 1064 if not self._initialized:
1065 1065 self._readheader()
1066 1066 if size is None:
1067 1067 data = self._payloadstream.read()
1068 1068 else:
1069 1069 data = self._payloadstream.read(size)
1070 1070 self._pos += len(data)
1071 1071 if size is None or len(data) < size:
1072 1072 if not self.consumed and self._pos:
1073 1073 self.ui.debug('bundle2-input-part: total payload size %i\n'
1074 1074 % self._pos)
1075 1075 self.consumed = True
1076 1076 return data
1077 1077
1078 1078 def tell(self):
1079 1079 return self._pos
1080 1080
1081 1081 def seek(self, offset, whence=0):
1082 1082 if whence == 0:
1083 1083 newpos = offset
1084 1084 elif whence == 1:
1085 1085 newpos = self._pos + offset
1086 1086 elif whence == 2:
1087 1087 if not self.consumed:
1088 1088 self.read()
1089 1089 newpos = self._chunkindex[-1][0] - offset
1090 1090 else:
1091 1091 raise ValueError('Unknown whence value: %r' % (whence,))
1092 1092
1093 1093 if newpos > self._chunkindex[-1][0] and not self.consumed:
1094 1094 self.read()
1095 1095 if not 0 <= newpos <= self._chunkindex[-1][0]:
1096 1096 raise ValueError('Offset out of range')
1097 1097
1098 1098 if self._pos != newpos:
1099 1099 chunk, internaloffset = self._findchunk(newpos)
1100 1100 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1101 1101 adjust = self.read(internaloffset)
1102 1102 if len(adjust) != internaloffset:
1103 1103 raise util.Abort(_('Seek failed\n'))
1104 1104 self._pos = newpos
1105 1105
1106 1106 # These are only the static capabilities.
1107 1107 # Check the 'getrepocaps' function for the rest.
1108 1108 capabilities = {'HG20': (),
1109 1109 'listkeys': (),
1110 1110 'pushkey': (),
1111 1111 'digests': tuple(sorted(util.DIGESTS.keys())),
1112 1112 'remote-changegroup': ('http', 'https'),
1113 1113 'hgtagsfnodes': (),
1114 1114 }
1115 1115
1116 1116 def getrepocaps(repo, allowpushback=False):
1117 1117 """return the bundle2 capabilities for a given repo
1118 1118
1119 1119 Exists to allow extensions (like evolution) to mutate the capabilities.
1120 1120 """
1121 1121 caps = capabilities.copy()
1122 1122 caps['changegroup'] = tuple(sorted(changegroup.packermap.keys()))
1123 1123 if obsolete.isenabled(repo, obsolete.exchangeopt):
1124 1124 supportedformat = tuple('V%i' % v for v in obsolete.formats)
1125 1125 caps['obsmarkers'] = supportedformat
1126 1126 if allowpushback:
1127 1127 caps['pushback'] = ()
1128 1128 return caps
1129 1129
1130 1130 def bundle2caps(remote):
1131 1131 """return the bundle capabilities of a peer as dict"""
1132 1132 raw = remote.capable('bundle2')
1133 1133 if not raw and raw != '':
1134 1134 return {}
1135 1135 capsblob = urllib.unquote(remote.capable('bundle2'))
1136 1136 return decodecaps(capsblob)
1137 1137
1138 1138 def obsmarkersversion(caps):
1139 1139 """extract the list of supported obsmarkers versions from a bundle2caps dict
1140 1140 """
1141 1141 obscaps = caps.get('obsmarkers', ())
1142 1142 return [int(c[1:]) for c in obscaps if c.startswith('V')]
1143 1143
1144 1144 @parthandler('changegroup', ('version',))
1145 1145 def handlechangegroup(op, inpart):
1146 1146 """apply a changegroup part on the repo
1147 1147
1148 1148 This is a very early implementation that will be massively reworked before
1149 1149 being inflicted on any end user.
1150 1150 """
1151 1151 # Make sure we trigger a transaction creation
1152 1152 #
1153 1153 # The addchangegroup function will get a transaction object by itself, but
1154 1154 # we need to make sure we trigger the creation of a transaction object used
1155 1155 # for the whole processing scope.
1156 1156 op.gettransaction()
1157 1157 unpackerversion = inpart.params.get('version', '01')
1158 1158 # We should raise an appropriate exception here
1159 1159 unpacker = changegroup.packermap[unpackerversion][1]
1160 1160 cg = unpacker(inpart, 'UN')
1161 1161 # the source and url passed here are overwritten by the one contained in
1162 1162 # the transaction.hookargs argument. So 'bundle2' is a placeholder
1163 1163 ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
1164 1164 op.records.add('changegroup', {'return': ret})
1165 1165 if op.reply is not None:
1166 1166 # This is definitely not the final form of this
1167 1167 # return. But one needs to start somewhere.
1168 1168 part = op.reply.newpart('reply:changegroup', mandatory=False)
1169 1169 part.addparam('in-reply-to', str(inpart.id), mandatory=False)
1170 1170 part.addparam('return', '%i' % ret, mandatory=False)
1171 1171 assert not inpart.read()
1172 1172
1173 1173 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1174 1174 ['digest:%s' % k for k in util.DIGESTS.keys()])
1175 1175 @parthandler('remote-changegroup', _remotechangegroupparams)
1176 1176 def handleremotechangegroup(op, inpart):
1177 1177 """apply a bundle10 on the repo, given an url and validation information
1178 1178
1179 1179 All the information about the remote bundle to import is given as
1180 1180 parameters. The parameters include:
1181 1181 - url: the url to the bundle10.
1182 1182 - size: the bundle10 file size. It is used to validate what was
1183 1183 retrieved by the client matches the server knowledge about the bundle.
1184 1184 - digests: a space separated list of the digest types provided as
1185 1185 parameters.
1186 1186 - digest:<digest-type>: the hexadecimal representation of the digest with
1187 1187 that name. Like the size, it is used to validate what was retrieved by
1188 1188 the client matches what the server knows about the bundle.
1189 1189
1190 1190 When multiple digest types are given, all of them are checked.
1191 1191 """
1192 1192 try:
1193 1193 raw_url = inpart.params['url']
1194 1194 except KeyError:
1195 1195 raise util.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1196 1196 parsed_url = util.url(raw_url)
1197 1197 if parsed_url.scheme not in capabilities['remote-changegroup']:
1198 1198 raise util.Abort(_('remote-changegroup does not support %s urls') %
1199 1199 parsed_url.scheme)
1200 1200
1201 1201 try:
1202 1202 size = int(inpart.params['size'])
1203 1203 except ValueError:
1204 1204 raise util.Abort(_('remote-changegroup: invalid value for param "%s"')
1205 1205 % 'size')
1206 1206 except KeyError:
1207 1207 raise util.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1208 1208
1209 1209 digests = {}
1210 1210 for typ in inpart.params.get('digests', '').split():
1211 1211 param = 'digest:%s' % typ
1212 1212 try:
1213 1213 value = inpart.params[param]
1214 1214 except KeyError:
1215 1215 raise util.Abort(_('remote-changegroup: missing "%s" param') %
1216 1216 param)
1217 1217 digests[typ] = value
1218 1218
1219 1219 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1220 1220
1221 1221 # Make sure we trigger a transaction creation
1222 1222 #
1223 1223 # The addchangegroup function will get a transaction object by itself, but
1224 1224 # we need to make sure we trigger the creation of a transaction object used
1225 1225 # for the whole processing scope.
1226 1226 op.gettransaction()
1227 1227 import exchange
1228 1228 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1229 1229 if not isinstance(cg, changegroup.cg1unpacker):
1230 1230 raise util.Abort(_('%s: not a bundle version 1.0') %
1231 1231 util.hidepassword(raw_url))
1232 1232 ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
1233 1233 op.records.add('changegroup', {'return': ret})
1234 1234 if op.reply is not None:
1235 1235 # This is definitely not the final form of this
1236 1236 # return. But one needs to start somewhere.
1237 1237 part = op.reply.newpart('reply:changegroup')
1238 1238 part.addparam('in-reply-to', str(inpart.id), mandatory=False)
1239 1239 part.addparam('return', '%i' % ret, mandatory=False)
1240 1240 try:
1241 1241 real_part.validate()
1242 1242 except util.Abort, e:
1243 1243 raise util.Abort(_('bundle at %s is corrupted:\n%s') %
1244 1244 (util.hidepassword(raw_url), str(e)))
1245 1245 assert not inpart.read()
1246 1246
1247 1247 @parthandler('reply:changegroup', ('return', 'in-reply-to'))
1248 1248 def handlereplychangegroup(op, inpart):
1249 1249 ret = int(inpart.params['return'])
1250 1250 replyto = int(inpart.params['in-reply-to'])
1251 1251 op.records.add('changegroup', {'return': ret}, replyto)
1252 1252
1253 1253 @parthandler('check:heads')
1254 1254 def handlecheckheads(op, inpart):
1255 1255 """check that the heads of the repo did not change
1256 1256
1257 1257 This is used to detect a push race when using unbundle.
1258 1258 This replaces the "heads" argument of unbundle."""
1259 1259 h = inpart.read(20)
1260 1260 heads = []
1261 1261 while len(h) == 20:
1262 1262 heads.append(h)
1263 1263 h = inpart.read(20)
1264 1264 assert not h
1265 1265 if heads != op.repo.heads():
1266 1266 raise error.PushRaced('repository changed while pushing - '
1267 1267 'please try again')
1268 1268
1269 1269 @parthandler('output')
1270 1270 def handleoutput(op, inpart):
1271 1271 """forward output captured on the server to the client"""
1272 1272 for line in inpart.read().splitlines():
1273 1273 op.ui.status(('remote: %s\n' % line))
1274 1274
1275 1275 @parthandler('replycaps')
1276 1276 def handlereplycaps(op, inpart):
1277 1277 """Notify that a reply bundle should be created
1278 1278
1279 1279 The payload contains the capabilities information for the reply"""
1280 1280 caps = decodecaps(inpart.read())
1281 1281 if op.reply is None:
1282 1282 op.reply = bundle20(op.ui, caps)
1283 1283
1284 1284 @parthandler('error:abort', ('message', 'hint'))
1285 1285 def handleerrorabort(op, inpart):
1286 1286 """Used to transmit abort error over the wire"""
1287 1287 raise util.Abort(inpart.params['message'], hint=inpart.params.get('hint'))
1288 1288
1289 1289 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
1290 1290 def handleerrorunsupportedcontent(op, inpart):
1291 1291 """Used to transmit unknown content error over the wire"""
1292 1292 kwargs = {}
1293 1293 parttype = inpart.params.get('parttype')
1294 1294 if parttype is not None:
1295 1295 kwargs['parttype'] = parttype
1296 1296 params = inpart.params.get('params')
1297 1297 if params is not None:
1298 1298 kwargs['params'] = params.split('\0')
1299 1299
1300 1300 raise error.UnsupportedPartError(**kwargs)
1301 1301
1302 1302 @parthandler('error:pushraced', ('message',))
1303 1303 def handleerrorpushraced(op, inpart):
1304 1304 """Used to transmit push race error over the wire"""
1305 1305 raise error.ResponseError(_('push failed:'), inpart.params['message'])
1306 1306
1307 1307 @parthandler('listkeys', ('namespace',))
1308 1308 def handlelistkeys(op, inpart):
1309 1309 """retrieve pushkey namespace content stored in a bundle2"""
1310 1310 namespace = inpart.params['namespace']
1311 1311 r = pushkey.decodekeys(inpart.read())
1312 1312 op.records.add('listkeys', (namespace, r))
1313 1313
1314 1314 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
1315 1315 def handlepushkey(op, inpart):
1316 1316 """process a pushkey request"""
1317 1317 dec = pushkey.decode
1318 1318 namespace = dec(inpart.params['namespace'])
1319 1319 key = dec(inpart.params['key'])
1320 1320 old = dec(inpart.params['old'])
1321 1321 new = dec(inpart.params['new'])
1322 1322 ret = op.repo.pushkey(namespace, key, old, new)
1323 1323 record = {'namespace': namespace,
1324 1324 'key': key,
1325 1325 'old': old,
1326 1326 'new': new}
1327 1327 op.records.add('pushkey', record)
1328 1328 if op.reply is not None:
1329 1329 rpart = op.reply.newpart('reply:pushkey')
1330 1330 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1331 1331 rpart.addparam('return', '%i' % ret, mandatory=False)
1332 if inpart.mandatory and not ret:
1333 raise util.Abort(_('failed to update value for "%s/%s"')
1334 % (namespace, key))
1332 1335
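The added lines above are the point of this commit: when a pushkey part is *mandatory* and the repository's `pushkey` call fails, processing now aborts instead of only recording the failure in the reply; an advisory part still fails silently. A standalone sketch of that rule, with illustrative names (`PushkeyFailed`, `handle_pushkey_result`) that are not Mercurial API:

```python
class PushkeyFailed(Exception):
    """Stand-in for util.Abort in this sketch."""

def handle_pushkey_result(ret, mandatory, namespace, key):
    """Record a pushkey result; abort when a mandatory part failed (sketch)."""
    record = {'namespace': namespace, 'key': key, 'return': ret}
    if mandatory and not ret:
        # a failed mandatory part must abort the whole bundle2 processing
        raise PushkeyFailed('failed to update value for "%s/%s"'
                            % (namespace, key))
    return record
```

So a failed phase or bookmark update pushed as a mandatory part surfaces as an abort on the client, rather than a push that silently did less than requested.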
1333 1336 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
1334 1337 def handlepushkeyreply(op, inpart):
1335 1338 """retrieve the result of a pushkey request"""
1336 1339 ret = int(inpart.params['return'])
1337 1340 partid = int(inpart.params['in-reply-to'])
1338 1341 op.records.add('pushkey', {'return': ret}, partid)
1339 1342
1340 1343 @parthandler('obsmarkers')
1341 1344 def handleobsmarker(op, inpart):
1342 1345 """add a stream of obsmarkers to the repo"""
1343 1346 tr = op.gettransaction()
1344 1347 markerdata = inpart.read()
1345 1348 if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
1346 1349 op.ui.write(('obsmarker-exchange: %i bytes received\n')
1347 1350 % len(markerdata))
1348 1351 new = op.repo.obsstore.mergemarkers(tr, markerdata)
1349 1352 if new:
1350 1353 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
1351 1354 op.records.add('obsmarkers', {'new': new})
1352 1355 if op.reply is not None:
1353 1356 rpart = op.reply.newpart('reply:obsmarkers')
1354 1357 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1355 1358 rpart.addparam('new', '%i' % new, mandatory=False)
1356 1359
1357 1360
1358 1361 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
1359 1362 def handleobsmarkerreply(op, inpart):
1360 1363 """retrieve the result of an obsmarkers push"""
1361 1364 ret = int(inpart.params['new'])
1362 1365 partid = int(inpart.params['in-reply-to'])
1363 1366 op.records.add('obsmarkers', {'new': ret}, partid)
1364 1367
1365 1368 @parthandler('hgtagsfnodes')
1366 1369 def handlehgtagsfnodes(op, inpart):
1367 1370 """Applies .hgtags fnodes cache entries to the local repo.
1368 1371
1369 1372 Payload is pairs of 20 byte changeset nodes and filenodes.
1370 1373 """
1371 1374 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
1372 1375
1373 1376 count = 0
1374 1377 while True:
1375 1378 node = inpart.read(20)
1376 1379 fnode = inpart.read(20)
1377 1380 if len(node) < 20 or len(fnode) < 20:
1378 1381 op.ui.debug('received incomplete .hgtags fnodes data, ignoring\n')
1379 1382 break
1380 1383 cache.setfnode(node, fnode)
1381 1384 count += 1
1382 1385
1383 1386 cache.write()
1384 1387 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
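The `hgtagsfnodes` payload handled above is simply a concatenation of 20-byte changeset node / 20-byte filenode pairs, read until the stream runs short. A minimal sketch of that framing; `read_node_pairs` is an illustrative name, not Mercurial API:

```python
import io

def read_node_pairs(fp):
    """Parse an hgtagsfnodes-style payload: pairs of 20-byte nodes (sketch).

    Stops at the first incomplete pair, as the bundle2 handler does.
    """
    pairs = []
    while True:
        node = fp.read(20)
        fnode = fp.read(20)
        if len(node) < 20 or len(fnode) < 20:
            # truncated or exhausted stream: ignore the partial pair
            break
        pairs.append((node, fnode))
    return pairs
```

An 80-byte payload parses as two pairs; a 30-byte payload yields nothing, since the second node of the first pair is incomplete.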
@@ -1,719 +1,861 b''
1 1 Test exchange of common information using bundle2
2 2
3 3
4 4 $ getmainid() {
5 5 > hg -R main log --template '{node}\n' --rev "$1"
6 6 > }
7 7
8 8 enable obsolescence
9 9
10 10 $ cat > $TESTTMP/bundle2-pushkey-hook.sh << EOF
11 11 > echo pushkey: lock state after \"\$HG_NAMESPACE\"
12 12 > hg debuglock
13 13 > EOF
14 14
15 15 $ cat >> $HGRCPATH << EOF
16 16 > [experimental]
17 17 > evolution=createmarkers,exchange
18 18 > bundle2-exp=True
19 19 > bundle2-output-capture=True
20 20 > [ui]
21 21 > ssh=dummyssh
22 22 > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline}
23 23 > [web]
24 24 > push_ssl = false
25 25 > allow_push = *
26 26 > [phases]
27 27 > publish=False
28 28 > [hooks]
29 29 > pretxnclose.tip = hg log -r tip -T "pre-close-tip:{node|short} {phase} {bookmarks}\n"
30 30 > txnclose.tip = hg log -r tip -T "postclose-tip:{node|short} {phase} {bookmarks}\n"
31 31 > txnclose.env = sh -c "HG_LOCAL= printenv.py txnclose"
32 32 > pushkey= sh "$TESTTMP/bundle2-pushkey-hook.sh"
33 33 > EOF
34 34
35 35 The extension requires a repo (currently unused)
36 36
37 37 $ hg init main
38 38 $ cd main
39 39 $ touch a
40 40 $ hg add a
41 41 $ hg commit -m 'a'
42 42 pre-close-tip:3903775176ed draft
43 43 postclose-tip:3903775176ed draft
44 44 txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=commit (glob)
45 45
46 46 $ hg unbundle $TESTDIR/bundles/rebase.hg
47 47 adding changesets
48 48 adding manifests
49 49 adding file changes
50 50 added 8 changesets with 7 changes to 7 files (+3 heads)
51 51 pre-close-tip:02de42196ebe draft
52 52 postclose-tip:02de42196ebe draft
53 53 txnclose hook: HG_NODE=cd010b8cd998f3981a5a8115f94f8da4ab506089 HG_PHASES_MOVED=1 HG_SOURCE=unbundle HG_TXNID=TXN:* HG_TXNNAME=unbundle (glob)
54 54 bundle:*/tests/bundles/rebase.hg HG_URL=bundle:*/tests/bundles/rebase.hg (glob)
55 55 (run 'hg heads' to see heads, 'hg merge' to merge)
56 56
57 57 $ cd ..
58 58
59 59 Real world exchange
60 60 =====================
61 61
62 62 Add more obsolescence information
63 63
64 64 $ hg -R main debugobsolete -d '0 0' 1111111111111111111111111111111111111111 `getmainid 9520eea781bc`
65 65 pre-close-tip:02de42196ebe draft
66 66 postclose-tip:02de42196ebe draft
67 67 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
68 68 $ hg -R main debugobsolete -d '0 0' 2222222222222222222222222222222222222222 `getmainid 24b6387c8c8c`
69 69 pre-close-tip:02de42196ebe draft
70 70 postclose-tip:02de42196ebe draft
71 71 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
72 72
73 73 clone --pull
74 74
75 75 $ hg -R main phase --public cd010b8cd998
76 76 pre-close-tip:02de42196ebe draft
77 77 postclose-tip:02de42196ebe draft
78 78 txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
79 79 $ hg clone main other --pull --rev 9520eea781bc
80 80 adding changesets
81 81 adding manifests
82 82 adding file changes
83 83 added 2 changesets with 2 changes to 2 files
84 84 1 new obsolescence markers
85 85 pre-close-tip:9520eea781bc draft
86 86 postclose-tip:9520eea781bc draft
87 87 txnclose hook: HG_NEW_OBSMARKERS=1 HG_NODE=cd010b8cd998f3981a5a8115f94f8da4ab506089 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
88 88 file:/*/$TESTTMP/main HG_URL=file:$TESTTMP/main (glob)
89 89 updating to branch default
90 90 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
91 91 $ hg -R other log -G
92 92 @ 1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
93 93 |
94 94 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
95 95
96 96 $ hg -R other debugobsolete
97 97 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
98 98
99 99 pull
100 100
101 101 $ hg -R main phase --public 9520eea781bc
102 102 pre-close-tip:02de42196ebe draft
103 103 postclose-tip:02de42196ebe draft
104 104 txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
105 105 $ hg -R other pull -r 24b6387c8c8c
106 106 pulling from $TESTTMP/main (glob)
107 107 searching for changes
108 108 adding changesets
109 109 adding manifests
110 110 adding file changes
111 111 added 1 changesets with 1 changes to 1 files (+1 heads)
112 112 1 new obsolescence markers
113 113 pre-close-tip:24b6387c8c8c draft
114 114 postclose-tip:24b6387c8c8c draft
115 115 txnclose hook: HG_NEW_OBSMARKERS=1 HG_NODE=24b6387c8c8cae37178880f3fa95ded3cb1cf785 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
116 116 file:/*/$TESTTMP/main HG_URL=file:$TESTTMP/main (glob)
117 117 (run 'hg heads' to see heads, 'hg merge' to merge)
118 118 $ hg -R other log -G
119 119 o 2:24b6387c8c8c draft Nicolas Dumazet <nicdumz.commits@gmail.com> F
120 120 |
121 121 | @ 1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
122 122 |/
123 123 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
124 124
125 125 $ hg -R other debugobsolete
126 126 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
127 127 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
128 128
129 129 pull empty (with phase movement)
130 130
131 131 $ hg -R main phase --public 24b6387c8c8c
132 132 pre-close-tip:02de42196ebe draft
133 133 postclose-tip:02de42196ebe draft
134 134 txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
135 135 $ hg -R other pull -r 24b6387c8c8c
136 136 pulling from $TESTTMP/main (glob)
137 137 no changes found
138 138 pre-close-tip:24b6387c8c8c public
139 139 postclose-tip:24b6387c8c8c public
140 140 txnclose hook: HG_NEW_OBSMARKERS=0 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
141 141 file:/*/$TESTTMP/main HG_URL=file:$TESTTMP/main (glob)
142 142 $ hg -R other log -G
143 143 o 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
144 144 |
145 145 | @ 1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
146 146 |/
147 147 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
148 148
149 149 $ hg -R other debugobsolete
150 150 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
151 151 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
152 152
153 153 pull empty
154 154
155 155 $ hg -R other pull -r 24b6387c8c8c
156 156 pulling from $TESTTMP/main (glob)
157 157 no changes found
158 158 pre-close-tip:24b6387c8c8c public
159 159 postclose-tip:24b6387c8c8c public
160 160 txnclose hook: HG_NEW_OBSMARKERS=0 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
161 161 file:/*/$TESTTMP/main HG_URL=file:$TESTTMP/main (glob)
162 162 $ hg -R other log -G
163 163 o 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
164 164 |
165 165 | @ 1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
166 166 |/
167 167 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
168 168
169 169 $ hg -R other debugobsolete
170 170 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
171 171 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
172 172
173 173 add extra data to test their exchange during push
174 174
175 175 $ hg -R main bookmark --rev eea13746799a book_eea1
176 176 $ hg -R main debugobsolete -d '0 0' 3333333333333333333333333333333333333333 `getmainid eea13746799a`
177 177 pre-close-tip:02de42196ebe draft
178 178 postclose-tip:02de42196ebe draft
179 179 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
180 180 $ hg -R main bookmark --rev 02de42196ebe book_02de
181 181 $ hg -R main debugobsolete -d '0 0' 4444444444444444444444444444444444444444 `getmainid 02de42196ebe`
182 182 pre-close-tip:02de42196ebe draft book_02de
183 183 postclose-tip:02de42196ebe draft book_02de
184 184 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
185 185 $ hg -R main bookmark --rev 42ccdea3bb16 book_42cc
186 186 $ hg -R main debugobsolete -d '0 0' 5555555555555555555555555555555555555555 `getmainid 42ccdea3bb16`
187 187 pre-close-tip:02de42196ebe draft book_02de
188 188 postclose-tip:02de42196ebe draft book_02de
189 189 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
190 190 $ hg -R main bookmark --rev 5fddd98957c8 book_5fdd
191 191 $ hg -R main debugobsolete -d '0 0' 6666666666666666666666666666666666666666 `getmainid 5fddd98957c8`
192 192 pre-close-tip:02de42196ebe draft book_02de
193 193 postclose-tip:02de42196ebe draft book_02de
194 194 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
195 195 $ hg -R main bookmark --rev 32af7686d403 book_32af
196 196 $ hg -R main debugobsolete -d '0 0' 7777777777777777777777777777777777777777 `getmainid 32af7686d403`
197 197 pre-close-tip:02de42196ebe draft book_02de
198 198 postclose-tip:02de42196ebe draft book_02de
199 199 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
200 200
201 201 $ hg -R other bookmark --rev cd010b8cd998 book_eea1
202 202 $ hg -R other bookmark --rev cd010b8cd998 book_02de
203 203 $ hg -R other bookmark --rev cd010b8cd998 book_42cc
204 204 $ hg -R other bookmark --rev cd010b8cd998 book_5fdd
205 205 $ hg -R other bookmark --rev cd010b8cd998 book_32af
206 206
207 207 $ hg -R main phase --public eea13746799a
208 208 pre-close-tip:02de42196ebe draft book_02de
209 209 postclose-tip:02de42196ebe draft book_02de
210 210 txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
211 211
212 212 push
213 213 $ hg -R main push other --rev eea13746799a --bookmark book_eea1
214 214 pushing to other
215 215 searching for changes
216 216 remote: adding changesets
217 217 remote: adding manifests
218 218 remote: adding file changes
219 219 remote: added 1 changesets with 0 changes to 0 files (-1 heads)
220 220 remote: 1 new obsolescence markers
221 221 remote: pre-close-tip:eea13746799a public book_eea1
222 222 remote: pushkey: lock state after "phases"
223 223 remote: lock: free
224 224 remote: wlock: free
225 225 remote: pushkey: lock state after "bookmarks"
226 226 remote: lock: free
227 227 remote: wlock: free
228 228 remote: postclose-tip:eea13746799a public book_eea1
229 229 remote: txnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_NEW_OBSMARKERS=1 HG_NODE=eea13746799a9e0bfd88f29d3c2e9dc9389f524f HG_PHASES_MOVED=1 HG_SOURCE=push HG_TXNID=TXN:* HG_TXNNAME=push HG_URL=push (glob)
230 230 updating bookmark book_eea1
231 231 pre-close-tip:02de42196ebe draft book_02de
232 232 postclose-tip:02de42196ebe draft book_02de
233 233 txnclose hook: HG_SOURCE=push-response HG_TXNID=TXN:* HG_TXNNAME=push-response (glob)
234 234 file:/*/$TESTTMP/other HG_URL=file:$TESTTMP/other (glob)
235 235 $ hg -R other log -G
236 236 o 3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
237 237 |\
238 238 | o 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
239 239 | |
240 240 @ | 1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
241 241 |/
242 242 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de book_32af book_42cc book_5fdd A
243 243
244 244 $ hg -R other debugobsolete
245 245 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
246 246 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
247 247 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
248 248
249 249 pull over ssh
250 250
251 251 $ hg -R other pull ssh://user@dummy/main -r 02de42196ebe --bookmark book_02de
252 252 pulling from ssh://user@dummy/main
253 253 searching for changes
254 254 adding changesets
255 255 adding manifests
256 256 adding file changes
257 257 added 1 changesets with 1 changes to 1 files (+1 heads)
258 258 1 new obsolescence markers
259 259 updating bookmark book_02de
260 260 pre-close-tip:02de42196ebe draft book_02de
261 261 postclose-tip:02de42196ebe draft book_02de
262 262 txnclose hook: HG_BOOKMARK_MOVED=1 HG_NEW_OBSMARKERS=1 HG_NODE=02de42196ebee42ef284b6780a87cdc96e8eaab6 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
263 263 ssh://user@dummy/main HG_URL=ssh://user@dummy/main
264 264 (run 'hg heads' to see heads, 'hg merge' to merge)
265 265 $ hg -R other debugobsolete
266 266 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
267 267 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
268 268 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
269 269 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
270 270
271 271 pull over http
272 272
273 273 $ hg -R main serve -p $HGPORT -d --pid-file=main.pid -E main-error.log
274 274 $ cat main.pid >> $DAEMON_PIDS
275 275
276 276 $ hg -R other pull http://localhost:$HGPORT/ -r 42ccdea3bb16 --bookmark book_42cc
277 277 pulling from http://localhost:$HGPORT/
278 278 searching for changes
279 279 adding changesets
280 280 adding manifests
281 281 adding file changes
282 282 added 1 changesets with 1 changes to 1 files (+1 heads)
283 283 1 new obsolescence markers
284 284 updating bookmark book_42cc
285 285 pre-close-tip:42ccdea3bb16 draft book_42cc
286 286 postclose-tip:42ccdea3bb16 draft book_42cc
287 287 txnclose hook: HG_BOOKMARK_MOVED=1 HG_NEW_OBSMARKERS=1 HG_NODE=42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
288 288 http://localhost:$HGPORT/ HG_URL=http://localhost:$HGPORT/
289 289 (run 'hg heads .' to see heads, 'hg merge' to merge)
290 290 $ cat main-error.log
291 291 $ hg -R other debugobsolete
292 292 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
293 293 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
294 294 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
295 295 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
296 296 5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
297 297
298 298 push over ssh
299 299
300 300 $ hg -R main push ssh://user@dummy/other -r 5fddd98957c8 --bookmark book_5fdd
301 301 pushing to ssh://user@dummy/other
302 302 searching for changes
303 303 remote: adding changesets
304 304 remote: adding manifests
305 305 remote: adding file changes
306 306 remote: added 1 changesets with 1 changes to 1 files
307 307 remote: 1 new obsolescence markers
308 308 remote: pre-close-tip:5fddd98957c8 draft book_5fdd
309 309 remote: pushkey: lock state after "bookmarks"
310 310 remote: lock: free
311 311 remote: wlock: free
312 312 remote: postclose-tip:5fddd98957c8 draft book_5fdd
313 313 remote: txnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_NEW_OBSMARKERS=1 HG_NODE=5fddd98957c8a54a4d436dfe1da9d87f21a1b97b HG_SOURCE=serve HG_TXNID=TXN:* HG_TXNNAME=serve HG_URL=remote:ssh:127.0.0.1 (glob)
314 314 updating bookmark book_5fdd
315 315 pre-close-tip:02de42196ebe draft book_02de
316 316 postclose-tip:02de42196ebe draft book_02de
317 317 txnclose hook: HG_SOURCE=push-response HG_TXNID=TXN:* HG_TXNNAME=push-response (glob)
318 318 ssh://user@dummy/other HG_URL=ssh://user@dummy/other
319 319 $ hg -R other log -G
320 320 o 6:5fddd98957c8 draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_5fdd C
321 321 |
322 322 o 5:42ccdea3bb16 draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_42cc B
323 323 |
324 324 | o 4:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de H
325 325 | |
326 326 | | o 3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
327 327 | |/|
328 328 | o | 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
329 329 |/ /
330 330 | @ 1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
331 331 |/
332 332 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_32af A
333 333
334 334 $ hg -R other debugobsolete
335 335 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
336 336 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
337 337 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
338 338 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
339 339 5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
340 340 6666666666666666666666666666666666666666 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
341 341
342 342 push over http
343 343
344 344 $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
345 345 $ cat other.pid >> $DAEMON_PIDS
346 346
347 347 $ hg -R main phase --public 32af7686d403
348 348 pre-close-tip:02de42196ebe draft book_02de
349 349 postclose-tip:02de42196ebe draft book_02de
350 350 txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
351 351 $ hg -R main push http://localhost:$HGPORT2/ -r 32af7686d403 --bookmark book_32af
352 352 pushing to http://localhost:$HGPORT2/
353 353 searching for changes
354 354 remote: adding changesets
355 355 remote: adding manifests
356 356 remote: adding file changes
357 357 remote: added 1 changesets with 1 changes to 1 files
358 358 remote: 1 new obsolescence markers
359 359 remote: pre-close-tip:32af7686d403 public book_32af
360 360 remote: pushkey: lock state after "phases"
361 361 remote: lock: free
362 362 remote: wlock: free
363 363 remote: pushkey: lock state after "bookmarks"
364 364 remote: lock: free
365 365 remote: wlock: free
366 366 remote: postclose-tip:32af7686d403 public book_32af
367 367 remote: txnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_NEW_OBSMARKERS=1 HG_NODE=32af7686d403cf45b5d95f2d70cebea587ac806a HG_PHASES_MOVED=1 HG_SOURCE=serve HG_TXNID=TXN:* HG_TXNNAME=serve HG_URL=remote:http:127.0.0.1: (glob)
368 368 updating bookmark book_32af
369 369 pre-close-tip:02de42196ebe draft book_02de
370 370 postclose-tip:02de42196ebe draft book_02de
371 371 txnclose hook: HG_SOURCE=push-response HG_TXNID=TXN:* HG_TXNNAME=push-response (glob)
372 372 http://localhost:$HGPORT2/ HG_URL=http://localhost:$HGPORT2/
373 373 $ cat other-error.log
374 374
375 375 Check final content.
376 376
377 377 $ hg -R other log -G
378 378 o 7:32af7686d403 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_32af D
379 379 |
380 380 o 6:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_5fdd C
381 381 |
382 382 o 5:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_42cc B
383 383 |
384 384 | o 4:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de H
385 385 | |
386 386 | | o 3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
387 387 | |/|
388 388 | o | 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
389 389 |/ /
390 390 | @ 1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
391 391 |/
392 392 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
393 393
394 394 $ hg -R other debugobsolete
395 395 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
396 396 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
397 397 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
398 398 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
399 399 5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
400 400 6666666666666666666666666666666666666666 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
401 401 7777777777777777777777777777777777777777 32af7686d403cf45b5d95f2d70cebea587ac806a 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
402 402
403 403 (check that no 'pending' files remain)
404 404
405 405 $ ls -1 other/.hg/bookmarks*
406 406 other/.hg/bookmarks
407 407 $ ls -1 other/.hg/store/phaseroots*
408 408 other/.hg/store/phaseroots
409 409 $ ls -1 other/.hg/store/00changelog.i*
410 410 other/.hg/store/00changelog.i
411 411
412 412 Error Handling
413 413 ==============
414 414
415 415 Check that errors are properly returned to the client during push.
416 416
417 417 Setting up
418 418
419 419 $ cat > failpush.py << EOF
420 420 > """A small extension that makes push fail when using bundle2
421 421 >
422 422 > used to test error handling in bundle2
423 423 > """
424 424 >
425 425 > from mercurial import util
426 426 > from mercurial import bundle2
427 427 > from mercurial import exchange
428 428 > from mercurial import extensions
429 429 >
430 430 > def _pushbundle2failpart(pushop, bundler):
431 431 > reason = pushop.ui.config('failpush', 'reason', None)
432 432 > part = None
433 433 > if reason == 'abort':
434 434 > bundler.newpart('test:abort')
435 435 > if reason == 'unknown':
436 436 > bundler.newpart('test:unknown')
437 437 > if reason == 'race':
438 438 > # 20 Bytes of crap
439 439 > bundler.newpart('check:heads', data='01234567890123456789')
440 440 >
441 441 > @bundle2.parthandler("test:abort")
442 442 > def handleabort(op, part):
443 443 > raise util.Abort('Abandon ship!', hint="don't panic")
444 444 >
445 445 > def uisetup(ui):
446 446 > exchange.b2partsgenmapping['failpart'] = _pushbundle2failpart
447 447 > exchange.b2partsgenorder.insert(0, 'failpart')
448 448 >
449 449 > EOF
450 450
451 451 $ cd main
452 452 $ hg up tip
453 453 3 files updated, 0 files merged, 1 files removed, 0 files unresolved
454 454 $ echo 'I' > I
455 455 $ hg add I
456 456 $ hg ci -m 'I'
457 457 pre-close-tip:e7ec4e813ba6 draft
458 458 postclose-tip:e7ec4e813ba6 draft
459 459 txnclose hook: HG_TXNID=TXN:* HG_TXNNAME=commit (glob)
460 460 $ hg id
461 461 e7ec4e813ba6 tip
462 462 $ cd ..
463 463
464 464 $ cat << EOF >> $HGRCPATH
465 465 > [extensions]
466 466 > failpush=$TESTTMP/failpush.py
467 467 > EOF
468 468
469 469 $ killdaemons.py
470 470 $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
471 471 $ cat other.pid >> $DAEMON_PIDS
472 472
473 473 Doing the actual push: Abort error
474 474
475 475 $ cat << EOF >> $HGRCPATH
476 476 > [failpush]
477 477 > reason = abort
478 478 > EOF
479 479
480 480 $ hg -R main push other -r e7ec4e813ba6
481 481 pushing to other
482 482 searching for changes
483 483 abort: Abandon ship!
484 484 (don't panic)
485 485 [255]
486 486
487 487 $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
488 488 pushing to ssh://user@dummy/other
489 489 searching for changes
490 490 abort: Abandon ship!
491 491 (don't panic)
492 492 [255]
493 493
494 494 $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
495 495 pushing to http://localhost:$HGPORT2/
496 496 searching for changes
497 497 abort: Abandon ship!
498 498 (don't panic)
499 499 [255]
500 500
501 501
502 502 Doing the actual push: unknown mandatory parts
503 503
504 504 $ cat << EOF >> $HGRCPATH
505 505 > [failpush]
506 506 > reason = unknown
507 507 > EOF
508 508
509 509 $ hg -R main push other -r e7ec4e813ba6
510 510 pushing to other
511 511 searching for changes
512 512 abort: missing support for test:unknown
513 513 [255]
514 514
515 515 $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
516 516 pushing to ssh://user@dummy/other
517 517 searching for changes
518 518 abort: missing support for test:unknown
519 519 [255]
520 520
521 521 $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
522 522 pushing to http://localhost:$HGPORT2/
523 523 searching for changes
524 524 abort: missing support for test:unknown
525 525 [255]
526 526
527 527 Doing the actual push: race
528 528
529 529 $ cat << EOF >> $HGRCPATH
530 530 > [failpush]
531 531 > reason = race
532 532 > EOF
533 533
534 534 $ hg -R main push other -r e7ec4e813ba6
535 535 pushing to other
536 536 searching for changes
537 537 abort: push failed:
538 538 'repository changed while pushing - please try again'
539 539 [255]
540 540
541 541 $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
542 542 pushing to ssh://user@dummy/other
543 543 searching for changes
544 544 abort: push failed:
545 545 'repository changed while pushing - please try again'
546 546 [255]
547 547
548 548 $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
549 549 pushing to http://localhost:$HGPORT2/
550 550 searching for changes
551 551 abort: push failed:
552 552 'repository changed while pushing - please try again'
553 553 [255]
554 554
555 555 Doing the actual push: hook abort
556 556
557 557 $ cat << EOF >> $HGRCPATH
558 558 > [failpush]
559 559 > reason =
560 560 > [hooks]
561 561 > pretxnclose.failpush = sh -c "echo 'You shall not pass!'; false"
562 562 > txnabort.failpush = sh -c "echo 'Cleaning up the mess...'"
563 563 > EOF
564 564
565 565 $ killdaemons.py
566 566 $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
567 567 $ cat other.pid >> $DAEMON_PIDS
568 568
569 569 $ hg -R main push other -r e7ec4e813ba6
570 570 pushing to other
571 571 searching for changes
572 572 remote: adding changesets
573 573 remote: adding manifests
574 574 remote: adding file changes
575 575 remote: added 1 changesets with 1 changes to 1 files
576 576 remote: pre-close-tip:e7ec4e813ba6 draft
577 577 remote: You shall not pass!
578 578 remote: transaction abort!
579 579 remote: Cleaning up the mess...
580 580 remote: rollback completed
581 581 abort: pretxnclose.failpush hook exited with status 1
582 582 [255]
583 583
584 584 $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
585 585 pushing to ssh://user@dummy/other
586 586 searching for changes
587 587 remote: adding changesets
588 588 remote: adding manifests
589 589 remote: adding file changes
590 590 remote: added 1 changesets with 1 changes to 1 files
591 591 remote: pre-close-tip:e7ec4e813ba6 draft
592 592 remote: You shall not pass!
593 593 remote: transaction abort!
594 594 remote: Cleaning up the mess...
595 595 remote: rollback completed
596 596 abort: pretxnclose.failpush hook exited with status 1
597 597 [255]
598 598
599 599 $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
600 600 pushing to http://localhost:$HGPORT2/
601 601 searching for changes
602 602 remote: adding changesets
603 603 remote: adding manifests
604 604 remote: adding file changes
605 605 remote: added 1 changesets with 1 changes to 1 files
606 606 remote: pre-close-tip:e7ec4e813ba6 draft
607 607 remote: You shall not pass!
608 608 remote: transaction abort!
609 609 remote: Cleaning up the mess...
610 610 remote: rollback completed
611 611 abort: pretxnclose.failpush hook exited with status 1
612 612 [255]
613 613
614 614 (check that no 'pending' files remain)
615 615
616 616 $ ls -1 other/.hg/bookmarks*
617 617 other/.hg/bookmarks
618 618 $ ls -1 other/.hg/store/phaseroots*
619 619 other/.hg/store/phaseroots
620 620 $ ls -1 other/.hg/store/00changelog.i*
621 621 other/.hg/store/00changelog.i
622 622
623 623 Check error from hook during the unbundling process itself
624 624
625 625 $ cat << EOF >> $HGRCPATH
626 626 > pretxnchangegroup = sh -c "echo 'Fail early!'; false"
627 627 > EOF
628 628 $ killdaemons.py # reload http config
629 629 $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
630 630 $ cat other.pid >> $DAEMON_PIDS
631 631
632 632 $ hg -R main push other -r e7ec4e813ba6
633 633 pushing to other
634 634 searching for changes
635 635 remote: adding changesets
636 636 remote: adding manifests
637 637 remote: adding file changes
638 638 remote: added 1 changesets with 1 changes to 1 files
639 639 remote: Fail early!
640 640 remote: transaction abort!
641 641 remote: Cleaning up the mess...
642 642 remote: rollback completed
643 643 abort: pretxnchangegroup hook exited with status 1
644 644 [255]
645 645 $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
646 646 pushing to ssh://user@dummy/other
647 647 searching for changes
648 648 remote: adding changesets
649 649 remote: adding manifests
650 650 remote: adding file changes
651 651 remote: added 1 changesets with 1 changes to 1 files
652 652 remote: Fail early!
653 653 remote: transaction abort!
654 654 remote: Cleaning up the mess...
655 655 remote: rollback completed
656 656 abort: pretxnchangegroup hook exited with status 1
657 657 [255]
658 658 $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
659 659 pushing to http://localhost:$HGPORT2/
660 660 searching for changes
661 661 remote: adding changesets
662 662 remote: adding manifests
663 663 remote: adding file changes
664 664 remote: added 1 changesets with 1 changes to 1 files
665 665 remote: Fail early!
666 666 remote: transaction abort!
667 667 remote: Cleaning up the mess...
668 668 remote: rollback completed
669 669 abort: pretxnchangegroup hook exited with status 1
670 670 [255]
671 671
672 672 Check output capture control.
673 673
674 674 (should be still forced for http, disabled for local and ssh)
675 675
676 676 $ cat >> $HGRCPATH << EOF
677 677 > [experimental]
678 678 > bundle2-output-capture=False
679 679 > EOF
680 680
681 681 $ hg -R main push other -r e7ec4e813ba6
682 682 pushing to other
683 683 searching for changes
684 684 adding changesets
685 685 adding manifests
686 686 adding file changes
687 687 added 1 changesets with 1 changes to 1 files
688 688 Fail early!
689 689 transaction abort!
690 690 Cleaning up the mess...
691 691 rollback completed
692 692 abort: pretxnchangegroup hook exited with status 1
693 693 [255]
694 694 $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
695 695 pushing to ssh://user@dummy/other
696 696 searching for changes
697 697 remote: adding changesets
698 698 remote: adding manifests
699 699 remote: adding file changes
700 700 remote: added 1 changesets with 1 changes to 1 files
701 701 remote: Fail early!
702 702 remote: transaction abort!
703 703 remote: Cleaning up the mess...
704 704 remote: rollback completed
705 705 abort: pretxnchangegroup hook exited with status 1
706 706 [255]
707 707 $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
708 708 pushing to http://localhost:$HGPORT2/
709 709 searching for changes
710 710 remote: adding changesets
711 711 remote: adding manifests
712 712 remote: adding file changes
713 713 remote: added 1 changesets with 1 changes to 1 files
714 714 remote: Fail early!
715 715 remote: transaction abort!
716 716 remote: Cleaning up the mess...
717 717 remote: rollback completed
718 718 abort: pretxnchangegroup hook exited with status 1
719 719 [255]
720
721 Check abort from mandatory pushkey
722
723 $ cat > mandatorypart.py << EOF
724 > from mercurial import exchange
725 > from mercurial import pushkey
726 > from mercurial import node
727 > @exchange.b2partsgenerator('failingpushkey')
728 > def addfailingpushkey(pushop, bundler):
729 > enc = pushkey.encode
730 > part = bundler.newpart('pushkey')
731 > part.addparam('namespace', enc('phases'))
732 > part.addparam('key', enc(pushop.repo['cd010b8cd998'].hex()))
733 > part.addparam('old', enc(str(0))) # successful update
734 > part.addparam('new', enc(str(0)))
735 > EOF
736 $ cat >> $HGRCPATH << EOF
737 > [hooks]
738 > pretxnchangegroup=
739 > pretxnclose.failpush=
740 > prepushkey.failpush = sh -c "echo 'do not push the key !'; false"
741 > [extensions]
742 > mandatorypart=$TESTTMP/mandatorypart.py
743 > EOF
744 $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS # reload http config
745 $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
746 $ cat other.pid >> $DAEMON_PIDS
747
748 (Failure from a hook)
749
750 $ hg -R main push other -r e7ec4e813ba6
751 pushing to other
752 searching for changes
753 adding changesets
754 adding manifests
755 adding file changes
756 added 1 changesets with 1 changes to 1 files
757 do not push the key !
758 pushkey-abort: prepushkey.failpush hook exited with status 1
759 transaction abort!
760 Cleaning up the mess...
761 rollback completed
762 abort: failed to update value for "phases/cd010b8cd998f3981a5a8115f94f8da4ab506089"
763 [255]
764 $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
765 pushing to ssh://user@dummy/other
766 searching for changes
767 remote: adding changesets
768 remote: adding manifests
769 remote: adding file changes
770 remote: added 1 changesets with 1 changes to 1 files
771 remote: do not push the key !
772 remote: pushkey-abort: prepushkey.failpush hook exited with status 1
773 remote: transaction abort!
774 remote: Cleaning up the mess...
775 remote: rollback completed
776 abort: failed to update value for "phases/cd010b8cd998f3981a5a8115f94f8da4ab506089"
777 [255]
778 $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
779 pushing to http://localhost:$HGPORT2/
780 searching for changes
781 remote: adding changesets
782 remote: adding manifests
783 remote: adding file changes
784 remote: added 1 changesets with 1 changes to 1 files
785 remote: do not push the key !
786 remote: pushkey-abort: prepushkey.failpush hook exited with status 1
787 remote: transaction abort!
788 remote: Cleaning up the mess...
789 remote: rollback completed
790 abort: failed to update value for "phases/cd010b8cd998f3981a5a8115f94f8da4ab506089"
791 [255]
792
793 (Failure from the pushkey)
794
795 $ cat > mandatorypart.py << EOF
796 > from mercurial import exchange
797 > from mercurial import pushkey
798 > from mercurial import node
799 > @exchange.b2partsgenerator('failingpushkey')
800 > def addfailingpushkey(pushop, bundler):
801 > enc = pushkey.encode
802 > part = bundler.newpart('pushkey')
803 > part.addparam('namespace', enc('phases'))
804 > part.addparam('key', enc(pushop.repo['cd010b8cd998'].hex()))
805 > part.addparam('old', enc(str(4))) # will fail
806 > part.addparam('new', enc(str(3)))
807 > EOF
808 $ cat >> $HGRCPATH << EOF
809 > [hooks]
810 > prepushkey.failpush =
811 > EOF
812 $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS # reload http config
813 $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
814 $ cat other.pid >> $DAEMON_PIDS
815
816 $ hg -R main push other -r e7ec4e813ba6
817 pushing to other
818 searching for changes
819 adding changesets
820 adding manifests
821 adding file changes
822 added 1 changesets with 1 changes to 1 files
823 transaction abort!
824 Cleaning up the mess...
825 rollback completed
826 pushkey: lock state after "phases"
827 lock: free
828 wlock: free
829 abort: failed to update value for "phases/cd010b8cd998f3981a5a8115f94f8da4ab506089"
830 [255]
831 $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
832 pushing to ssh://user@dummy/other
833 searching for changes
834 remote: adding changesets
835 remote: adding manifests
836 remote: adding file changes
837 remote: added 1 changesets with 1 changes to 1 files
838 remote: transaction abort!
839 remote: Cleaning up the mess...
840 remote: rollback completed
841 remote: pushkey: lock state after "phases"
842 remote: lock: free
843 remote: wlock: free
844 abort: failed to update value for "phases/cd010b8cd998f3981a5a8115f94f8da4ab506089"
845 [255]
846 $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
847 pushing to http://localhost:$HGPORT2/
848 searching for changes
849 remote: adding changesets
850 remote: adding manifests
851 remote: adding file changes
852 remote: added 1 changesets with 1 changes to 1 files
853 remote: transaction abort!
854 remote: Cleaning up the mess...
855 remote: rollback completed
856 remote: pushkey: lock state after "phases"
857 remote: lock: free
858 remote: wlock: free
859 abort: failed to update value for "phases/cd010b8cd998f3981a5a8115f94f8da4ab506089"
860 [255]
861