bundle2: rename part to bundlepart...
Pierre-Yves David -
r21005:3d38ebb5 default
@@ -1,610 +1,610
1 1 # bundle2.py - generic container format to transmit arbitrary data.
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """Handling of the new bundle2 format
8 8
9 9 The goal of bundle2 is to act as an atomic container to transmit a set of
10 10 payloads in an application agnostic way. It consists of a sequence of "parts"
11 11 that will be handed to and processed by the application layer.
12 12
13 13
14 14 General format architecture
15 15 ===========================
16 16
17 17 The format is structured as follows:
18 18
19 19 - magic string
20 20 - stream level parameters
21 21 - payload parts (any number)
22 22 - end of stream marker.
23 23
24 24 The binary format
25 25 ============================
26 26
27 27 All numbers are unsigned and big endian.
28 28
29 29 stream level parameters
30 30 ------------------------
31 31
32 32 The binary format is as follows:
33 33
34 34 :params size: (16 bits integer)
35 35
36 36 The total number of bytes used by the parameters
37 37
38 38 :params value: arbitrary number of bytes
39 39
40 40 A blob of `params size` containing the serialized version of all stream level
41 41 parameters.
42 42
43 43 The blob contains a space separated list of parameters. Parameters with a
44 44 value are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45 45
46 46 Empty names are obviously forbidden.
47 47
48 48 Names MUST start with a letter. If this first letter is lower case, the
49 49 parameter is advisory and can be safely ignored. However, when the first
50 50 letter is capital, the parameter is mandatory and the bundling process MUST
51 51 stop if it is not able to process it.
52 52
53 53 Stream parameters use a simple textual format for two main reasons:
54 54
55 55 - Stream level parameters should remain simple and we want to discourage any
56 56 crazy usage.
57 57 - Textual data allows easy human inspection of the bundle2 header in case of
58 58 trouble.
59 59
60 60 Any application level options MUST go into a bundle2 part instead.
61 61
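As a worked example of the stream-level parameter encoding described above, here is a standalone Python 3 sketch (the module itself is Python 2 and uses `urllib.quote`); `encode_stream_params` is a hypothetical helper name used for illustration, not part of bundle2.py:

```python
import struct
from urllib.parse import quote

def encode_stream_params(params):
    """Encode stream-level parameters (hypothetical helper).

    `params` is a list of (name, value) pairs; value may be None.
    Names and values are url-quoted, joined by spaces, and the blob
    is prefixed with its size as a 16-bit big-endian integer.
    """
    blocks = []
    for name, value in params:
        if not name or not name[0].isalpha():
            raise ValueError('invalid parameter name: %r' % name)
        block = quote(name)
        if value is not None:
            block += '=' + quote(value)
        blocks.append(block)
    blob = ' '.join(blocks).encode('ascii')
    return struct.pack('>H', len(blob)) + blob
```

Note that an empty parameter list still produces the two-byte size prefix, which the reader interprets as "no parameters".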
62 62 Payload part
63 63 ------------------------
64 64
65 65 The binary format is as follows:
66 66
67 67 :header size: (16 bits integer)
68 68
69 69 The total number of bytes used by the part headers. When the header is empty
70 70 (size = 0) this is interpreted as the end of stream marker.
71 71
72 72 :header:
73 73
74 74 The header defines how to interpret the part. It contains two pieces of
75 75 data: the part type, and the part parameters.
76 76
77 77 The part type is used to route the part to an application level handler
78 78 that can interpret the payload.
79 79
80 80 Part parameters are passed to the application level handler. They are
81 81 meant to convey information that will help the application level object to
82 82 interpret the part payload.
83 83
84 84 The binary format of the header is as follows
85 85
86 86 :typesize: (one byte)
87 87
88 88 :typename: alphanumerical part name
89 89
90 90 :partid: A 32bits integer (unique in the bundle) that can be used to refer
91 91 to this part.
92 92
93 93 :parameters:
94 94
95 95 Part parameters may have arbitrary content; the binary structure is::
96 96
97 97 <mandatory-count><advisory-count><param-sizes><param-data>
98 98
99 99 :mandatory-count: 1 byte, number of mandatory parameters
100 100
101 101 :advisory-count: 1 byte, number of advisory parameters
102 102
103 103 :param-sizes:
104 104
105 105 N couples of bytes, where N is the total number of parameters. Each
106 106 couple contains (<size-of-key>, <size-of-value>) for one parameter.
107 107
108 108 :param-data:
109 109
110 110 A blob of bytes from which each parameter key and value can be
111 111 retrieved using the list of size couples stored in the previous
112 112 field.
113 113
114 114 Mandatory parameters come first, then the advisory ones.
115 115
116 116 :payload:
117 117
118 118 payload is a series of `<chunksize><chunkdata>`.
119 119
120 120 `chunksize` is a 32 bits integer, `chunkdata` is as many plain bytes as
121 121 `chunksize` says. The payload part is concluded by a zero size chunk.
122 122
123 123 The current implementation always produces either zero or one chunk.
124 124 This is an implementation limitation that will ultimately be lifted.
125 125
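Putting the header and payload layout above together, part framing can be sketched in standalone Python 3 (`encode_part` is a hypothetical helper for illustration; the real implementation lives in the part class's `getchunks()` below):

```python
import struct

def encode_part(parttype, partid, manparams=(), advparams=(), data=b''):
    """Frame one bundle2 part (hypothetical helper):
    <typesize><typename><partid><param-counts><param-sizes><param-data>
    preceded by a 16-bit header size, followed by the chunked payload.
    Parameter keys and values are given as bytes pairs.
    """
    header = [struct.pack('>B', len(parttype)),
              parttype.encode('ascii'),
              struct.pack('>I', partid)]
    allparams = list(manparams) + list(advparams)
    header.append(struct.pack('>BB', len(manparams), len(advparams)))
    sizes = []
    for key, value in allparams:
        sizes.extend([len(key), len(value)])
    header.append(struct.pack('>' + 'BB' * len(allparams), *sizes))
    for key, value in allparams:  # mandatory params come first
        header.append(key)
        header.append(value)
    headerblock = b''.join(header)
    chunks = [struct.pack('>H', len(headerblock)), headerblock]
    if data:
        chunks.append(struct.pack('>I', len(data)))
        chunks.append(data)
    chunks.append(struct.pack('>I', 0))  # zero-size chunk ends the payload
    return b''.join(chunks)
```

For a part of type "test" with id 0, one mandatory parameter and a 2-byte payload, the header block is 15 bytes and the frame ends with the four zero bytes of the closing chunk.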
126 126 Bundle processing
127 127 ============================
128 128
129 129 Each part is processed in order using a "part handler". Handlers are registered
130 130 for a certain part type.
131 131
132 132 The matching of a part to its handler is case insensitive. The case of the
133 133 part type is used to know if a part is mandatory or advisory. If the part type
134 134 contains any uppercase char it is considered mandatory. When no handler is
135 135 known for a mandatory part, the process is aborted and an exception is raised.
136 136 If the part is advisory and no handler is known, the part is ignored. When the
137 137 process is aborted, the full bundle is still read from the stream to keep the
138 138 channel usable. But none of the parts read after an abort are processed. In the
139 139 future, dropping the stream may become an option for channels we do not care to
140 140 preserve.
141 141 """
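The bundle-processing rules above (case-insensitive matching, uppercase means mandatory) can be sketched as follows, assuming a simple `parthandlers` registry standing in for the module's actual `parthandlermapping`:

```python
parthandlers = {}  # lowercased part type -> handler function

def dispatch(parttype, part):
    """Route one part: matching is case insensitive, and any
    uppercase letter in the type makes the part mandatory."""
    handler = parthandlers.get(parttype.lower())
    if handler is None:
        if parttype.lower() != parttype:
            # mandatory part with no known handler: abort
            raise KeyError(parttype)
        # advisory part with no known handler: silently ignored
        return None
    return handler(part)
```

A handler registered under 'echo' would thus serve both 'echo' and 'Echo' parts, while an unknown 'NoSuch' part raises and an unknown 'nosuch' part is skipped.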
142 142
143 143 import util
144 144 import struct
145 145 import urllib
146 146 import string
147 147 import StringIO
148 148
149 149 import changegroup
150 150 from i18n import _
151 151
152 152 _pack = struct.pack
153 153 _unpack = struct.unpack
154 154
155 155 _magicstring = 'HG20'
156 156
157 157 _fstreamparamsize = '>H'
158 158 _fpartheadersize = '>H'
159 159 _fparttypesize = '>B'
160 160 _fpartid = '>I'
161 161 _fpayloadsize = '>I'
162 162 _fpartparamcount = '>BB'
163 163
164 164 preferedchunksize = 4096
165 165
166 166 def _makefpartparamsizes(nbparams):
167 167 """return a struct format to read part parameter sizes
168 168
169 169 The number of parameters is variable so we need to build that format
170 170 dynamically.
171 171 """
172 172 return '>'+('BB'*nbparams)
173 173
174 174 parthandlermapping = {}
175 175
176 176 def parthandler(parttype):
177 177 """decorator that registers a function as a bundle2 part handler
178 178
179 179 eg::
180 180
181 181 @parthandler('myparttype')
182 182 def myparttypehandler(...):
183 183 '''process a part of type "my part".'''
184 184 ...
185 185 """
186 186 def _decorator(func):
187 187 lparttype = parttype.lower() # enforce lower case matching.
188 188 assert lparttype not in parthandlermapping
189 189 parthandlermapping[lparttype] = func
190 190 return func
191 191 return _decorator
192 192
193 193 class unbundlerecords(object):
194 194 """keep record of what happens during an unbundle
195 195
196 196 New records are added using `records.add('cat', obj)`, where 'cat' is a
197 197 category of record and obj is an arbitrary object.
198 198
199 199 `records['cat']` will return all entries of this category 'cat'.
200 200
201 201 Iterating on the object itself will yield `('category', obj)` tuples
202 202 for all entries.
203 203
204 204 All iterations happen in chronological order.
205 205 """
206 206
207 207 def __init__(self):
208 208 self._categories = {}
209 209 self._sequences = []
210 210 self._replies = {}
211 211
212 212 def add(self, category, entry, inreplyto=None):
213 213 """add a new record of a given category.
214 214
215 215 The entry can then be retrieved in the list returned by
216 216 self['category']."""
217 217 self._categories.setdefault(category, []).append(entry)
218 218 self._sequences.append((category, entry))
219 219 if inreplyto is not None:
220 220 self.getreplies(inreplyto).add(category, entry)
221 221
222 222 def getreplies(self, partid):
223 223 """get the subrecords that reply to a specific part"""
224 224 return self._replies.setdefault(partid, unbundlerecords())
225 225
226 226 def __getitem__(self, cat):
227 227 return tuple(self._categories.get(cat, ()))
228 228
229 229 def __iter__(self):
230 230 return iter(self._sequences)
231 231
232 232 def __len__(self):
233 233 return len(self._sequences)
234 234
235 235 def __nonzero__(self):
236 236 return bool(self._sequences)
237 237
238 238 class bundleoperation(object):
239 239 """an object that represents a single bundling process
240 240
241 241 Its purpose is to carry unbundle-related objects and states.
242 242
243 243 A new object should be created at the beginning of each bundle processing.
244 244 The object is to be returned by the processing function.
245 245
246 246 The object has very little content now; it will ultimately contain:
247 247 * an access to the repo the bundle is applied to,
248 248 * a ui object,
249 249 * a way to retrieve a transaction to add changes to the repo,
250 250 * a way to record the result of processing each part,
251 251 * a way to construct a bundle response when applicable.
252 252 """
253 253
254 254 def __init__(self, repo, transactiongetter):
255 255 self.repo = repo
256 256 self.ui = repo.ui
257 257 self.records = unbundlerecords()
258 258 self.gettransaction = transactiongetter
259 259 self.reply = None
260 260
261 261 class TransactionUnavailable(RuntimeError):
262 262 pass
263 263
264 264 def _notransaction():
265 265 """default method to get a transaction while processing a bundle
266 266
267 267 Raise an exception to highlight the fact that no transaction was expected
268 268 to be created"""
269 269 raise TransactionUnavailable()
270 270
271 271 def processbundle(repo, unbundler, transactiongetter=_notransaction):
272 272 """This function processes a bundle, applying its effects to/from a repo
273 273
274 274 It iterates over each part then searches for and uses the proper handling
275 275 code to process the part. Parts are processed in order.
276 276
277 277 This is a very early version of this function that will be strongly reworked
278 278 before final usage.
279 279
280 280 An unknown mandatory part will abort the process.
281 281 """
282 282 op = bundleoperation(repo, transactiongetter)
283 283 # todo:
284 284 # - only create reply bundle if requested.
285 285 op.reply = bundle20(op.ui)
286 286 # todo:
287 287 # - replace this with an init function soon.
288 288 # - exception catching
289 289 unbundler.params
290 290 iterparts = iter(unbundler)
291 291 try:
292 292 for part in iterparts:
293 293 parttype = part.type
294 294 # part keys are matched lower case
295 295 key = parttype.lower()
296 296 try:
297 297 handler = parthandlermapping[key]
298 298 op.ui.debug('found a handler for part %r\n' % parttype)
299 299 except KeyError:
300 300 if key != parttype: # mandatory parts
301 301 # todo:
302 302 # - use a more precise exception
303 303 raise
304 304 op.ui.debug('ignoring unknown advisory part %r\n' % key)
305 305 # todo:
306 306 # - consume the part once we use streaming
307 307 continue
308 308
309 309 # handler is called outside the above try block so that we don't
310 310 # risk catching KeyErrors from anything other than the
311 311 # parthandlermapping lookup (any KeyError raised by handler()
312 312 # itself represents a defect of a different variety).
313 313 handler(op, part)
314 314 except Exception:
315 315 for part in iterparts:
316 316 pass # consume the bundle content
317 317 raise
318 318 return op
319 319
320 320 class bundle20(object):
321 321 """represent an outgoing bundle2 container
322 322
323 323 Use the `addparam` method to add stream level parameters and `addpart` to
324 324 populate it. Then call `getchunks` to retrieve all the binary chunks of
325 325 data that compose the bundle2 container."""
326 326
327 327 def __init__(self, ui):
328 328 self.ui = ui
329 329 self._params = []
330 330 self._parts = []
331 331
332 332 def addparam(self, name, value=None):
333 333 """add a stream level parameter"""
334 334 if not name:
335 335 raise ValueError('empty parameter name')
336 336 if name[0] not in string.letters:
337 337 raise ValueError('non letter first character: %r' % name)
338 338 self._params.append((name, value))
339 339
340 340 def addpart(self, part):
341 341 """add a new part to the bundle2 container
342 342
343 343 Parts contain the actual application payload."""
344 344 assert part.id is None
345 345 part.id = len(self._parts) # very cheap counter
346 346 self._parts.append(part)
347 347
348 348 def getchunks(self):
349 349 self.ui.debug('start emission of %s stream\n' % _magicstring)
350 350 yield _magicstring
351 351 param = self._paramchunk()
352 352 self.ui.debug('bundle parameter: %s\n' % param)
353 353 yield _pack(_fstreamparamsize, len(param))
354 354 if param:
355 355 yield param
356 356
357 357 self.ui.debug('start of parts\n')
358 358 for part in self._parts:
359 359 self.ui.debug('bundle part: "%s"\n' % part.type)
360 360 for chunk in part.getchunks():
361 361 yield chunk
362 362 self.ui.debug('end of bundle\n')
363 363 yield '\0\0'
364 364
365 365 def _paramchunk(self):
366 366 """return an encoded version of all stream parameters"""
367 367 blocks = []
368 368 for par, value in self._params:
369 369 par = urllib.quote(par)
370 370 if value is not None:
371 371 value = urllib.quote(value)
372 372 par = '%s=%s' % (par, value)
373 373 blocks.append(par)
374 374 return ' '.join(blocks)
375 375
376 376 class unbundle20(object):
377 377 """interpret a bundle2 stream
378 378
379 379 (this will eventually yield parts)"""
380 380
381 381 def __init__(self, ui, fp):
382 382 self.ui = ui
383 383 self._fp = fp
384 384 header = self._readexact(4)
385 385 magic, version = header[0:2], header[2:4]
386 386 if magic != 'HG':
387 387 raise util.Abort(_('not a Mercurial bundle'))
388 388 if version != '20':
389 389 raise util.Abort(_('unknown bundle version %s') % version)
390 390 self.ui.debug('start processing of %s stream\n' % header)
391 391
392 392 def _unpack(self, format):
393 393 """unpack this struct format from the stream"""
394 394 data = self._readexact(struct.calcsize(format))
395 395 return _unpack(format, data)
396 396
397 397 def _readexact(self, size):
398 398 """read exactly <size> bytes from the stream"""
399 399 return changegroup.readexactly(self._fp, size)
400 400
401 401 @util.propertycache
402 402 def params(self):
403 403 """dictionary of stream level parameters"""
404 404 self.ui.debug('reading bundle2 stream parameters\n')
405 405 params = {}
406 406 paramssize = self._unpack(_fstreamparamsize)[0]
407 407 if paramssize:
408 408 for p in self._readexact(paramssize).split(' '):
409 409 p = p.split('=', 1)
410 410 p = [urllib.unquote(i) for i in p]
411 411 if len(p) < 2:
412 412 p.append(None)
413 413 self._processparam(*p)
414 414 params[p[0]] = p[1]
415 415 return params
416 416
417 417 def _processparam(self, name, value):
418 418 """process a parameter, applying its effect if needed
419 419
420 420 Parameters starting with a lower case letter are advisory and will be
421 421 ignored when unknown. Those starting with an upper case letter are
422 422 mandatory; this function will raise a KeyError when they are unknown.
423 423
424 424 Note: no options are currently supported. Any input will either be
425 425 ignored or will fail.
426 426 """
427 427 if not name:
428 428 raise ValueError('empty parameter name')
429 429 if name[0] not in string.letters:
430 430 raise ValueError('non letter first character: %r' % name)
431 431 # Some logic will later be added here to try to process the option against
432 432 # a dict of known parameters.
433 433 if name[0].islower():
434 434 self.ui.debug("ignoring unknown parameter %r\n" % name)
435 435 else:
436 436 raise KeyError(name)
437 437
438 438
439 439 def __iter__(self):
440 440 """yield all parts contained in the stream"""
442 442 # make sure params have been loaded
442 442 self.params
443 443 self.ui.debug('start extraction of bundle2 parts\n')
444 444 part = self._readpart()
445 445 while part is not None:
446 446 yield part
447 447 part = self._readpart()
448 448 self.ui.debug('end of bundle2 stream\n')
449 449
450 450 def _readpart(self):
451 451 """return None when the end of stream marker is reached"""
452 452
453 453 headersize = self._unpack(_fpartheadersize)[0]
454 454 self.ui.debug('part header size: %i\n' % headersize)
455 455 if not headersize:
456 456 return None
457 457 headerblock = self._readexact(headersize)
458 458 # some utility to help reading from the header block
459 459 self._offset = 0 # layer violation to have something easy to understand
460 460 def fromheader(size):
461 461 """return the next <size> bytes from the header"""
462 462 offset = self._offset
463 463 data = headerblock[offset:(offset + size)]
464 464 self._offset = offset + size
465 465 return data
466 466 def unpackheader(format):
467 467 """read given format from header
468 468
469 469 This automatically computes the size of the format to read."""
470 470 data = fromheader(struct.calcsize(format))
471 471 return _unpack(format, data)
472 472
473 473 typesize = unpackheader(_fparttypesize)[0]
474 474 parttype = fromheader(typesize)
475 475 self.ui.debug('part type: "%s"\n' % parttype)
476 476 partid = unpackheader(_fpartid)[0]
477 477 self.ui.debug('part id: "%s"\n' % partid)
478 478 ## reading parameters
479 479 # param count
480 480 mancount, advcount = unpackheader(_fpartparamcount)
481 481 self.ui.debug('part parameters: %i\n' % (mancount + advcount))
482 482 # param size
483 483 paramsizes = unpackheader(_makefpartparamsizes(mancount + advcount))
484 484 # make it a list of couples again
485 485 paramsizes = zip(paramsizes[::2], paramsizes[1::2])
486 486 # split mandatory from advisory
487 487 mansizes = paramsizes[:mancount]
488 488 advsizes = paramsizes[mancount:]
489 489 # retrieve param values
490 490 manparams = []
491 491 for key, value in mansizes:
492 492 manparams.append((fromheader(key), fromheader(value)))
493 493 advparams = []
494 494 for key, value in advsizes:
495 495 advparams.append((fromheader(key), fromheader(value)))
496 496 del self._offset # clean up layer, nobody saw anything.
497 497 ## part payload
498 498 payload = []
499 499 payloadsize = self._unpack(_fpayloadsize)[0]
500 500 self.ui.debug('payload chunk size: %i\n' % payloadsize)
501 501 while payloadsize:
502 502 payload.append(self._readexact(payloadsize))
503 503 payloadsize = self._unpack(_fpayloadsize)[0]
504 504 self.ui.debug('payload chunk size: %i\n' % payloadsize)
505 505 payload = ''.join(payload)
506 current = part(parttype, manparams, advparams, data=payload)
506 current = bundlepart(parttype, manparams, advparams, data=payload)
507 507 current.id = partid
508 508 return current
509 509
510 510
511 class part(object):
511 class bundlepart(object):
512 512 """A bundle2 part contains application level payload
513 513
514 514 The part `type` is used to route the part to the application level
515 515 handler.
516 516 """
517 517
518 518 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
519 519 data=''):
520 520 self.id = None
521 521 self.type = parttype
522 522 self.data = data
523 523 self.mandatoryparams = mandatoryparams
524 524 self.advisoryparams = advisoryparams
525 525
526 526 def getchunks(self):
527 527 #### header
528 528 ## parttype
529 529 header = [_pack(_fparttypesize, len(self.type)),
530 530 self.type, _pack(_fpartid, self.id),
531 531 ]
532 532 ## parameters
533 533 # count
534 534 manpar = self.mandatoryparams
535 535 advpar = self.advisoryparams
536 536 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
537 537 # size
538 538 parsizes = []
539 539 for key, value in manpar:
540 540 parsizes.append(len(key))
541 541 parsizes.append(len(value))
542 542 for key, value in advpar:
543 543 parsizes.append(len(key))
544 544 parsizes.append(len(value))
545 545 paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
546 546 header.append(paramsizes)
547 547 # key, value
548 548 for key, value in manpar:
549 549 header.append(key)
550 550 header.append(value)
551 551 for key, value in advpar:
552 552 header.append(key)
553 553 header.append(value)
554 554 ## finalize header
555 555 headerchunk = ''.join(header)
556 556 yield _pack(_fpartheadersize, len(headerchunk))
557 557 yield headerchunk
558 558 ## payload
559 559 for chunk in self._payloadchunks():
560 560 yield _pack(_fpayloadsize, len(chunk))
561 561 yield chunk
562 562 # end of payload
563 563 yield _pack(_fpayloadsize, 0)
564 564
565 565 def _payloadchunks(self):
566 566 """yield chunks of the part payload
567 567
568 568 Exists to handle the different methods to provide data to a part."""
569 569 # we only support fixed size data now.
570 570 # This will be improved in the future.
571 571 if util.safehasattr(self.data, 'next'):
572 572 buff = util.chunkbuffer(self.data)
573 573 chunk = buff.read(preferedchunksize)
574 574 while chunk:
575 575 yield chunk
576 576 chunk = buff.read(preferedchunksize)
577 577 elif len(self.data):
578 578 yield self.data
579 579
580 580 @parthandler('changegroup')
581 581 def handlechangegroup(op, inpart):
582 582 """apply a changegroup part on the repo
583 583
584 584 This is a very early implementation that will be massively reworked before
585 585 being inflicted on any end-user.
586 586 """
587 587 # Make sure we trigger a transaction creation
588 588 #
589 589 # The addchangegroup function will get a transaction object by itself, but
590 590 # we need to make sure we trigger the creation of a transaction object used
591 591 # for the whole processing scope.
592 592 op.gettransaction()
593 593 data = StringIO.StringIO(inpart.data)
594 594 data.seek(0)
595 595 cg = changegroup.readbundle(data, 'bundle2part')
596 596 ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
597 597 op.records.add('changegroup', {'return': ret})
598 598 if op.reply is not None:
599 599 # This is definitely not the final form of this
600 600 # return. But one needs to start somewhere.
601 op.reply.addpart(part('reply:changegroup', (),
601 op.reply.addpart(bundlepart('reply:changegroup', (),
602 602 [('in-reply-to', str(inpart.id)),
603 603 ('return', '%i' % ret)]))
604 604
605 605 @parthandler('reply:changegroup')
606 606 def handlechangegroup(op, inpart):
607 607 p = dict(inpart.advisoryparams)
608 608 ret = int(p['return'])
609 609 op.records.add('changegroup', {'return': ret}, int(p['in-reply-to']))
610 610
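For reference, the whole wire format handled by `unbundle20` above can be parsed with a short standalone Python 3 sketch (`read_bundle` is a hypothetical helper for illustration only; it skips part parameters for brevity):

```python
import io
import struct
from urllib.parse import unquote

def read_bundle(fp):
    """Parse a bundle2 stream into (params, parts) per the format docstring.

    Returns the stream-level parameter dict and a list of
    (parttype, partid, payload) tuples.
    """
    magic = fp.read(4)
    assert magic == b'HG20', 'not a bundle2 stream'
    params = {}
    (psize,) = struct.unpack('>H', fp.read(2))
    if psize:
        for p in fp.read(psize).decode('ascii').split(' '):
            name, _, value = p.partition('=')
            params[unquote(name)] = unquote(value) if value else None
    parts = []
    while True:
        (hsize,) = struct.unpack('>H', fp.read(2))
        if not hsize:          # empty header == end of stream marker
            break
        header = io.BytesIO(fp.read(hsize))
        (typesize,) = struct.unpack('>B', header.read(1))
        parttype = header.read(typesize).decode('ascii')
        (partid,) = struct.unpack('>I', header.read(4))
        mancount, advcount = struct.unpack('>BB', header.read(2))
        nparams = mancount + advcount
        sizes = struct.unpack('>' + 'BB' * nparams, header.read(2 * nparams))
        for ksize, vsize in zip(sizes[::2], sizes[1::2]):
            header.read(ksize + vsize)   # skip parameter data
        payload = []
        (csize,) = struct.unpack('>I', fp.read(4))
        while csize:           # zero-size chunk ends the payload
            payload.append(fp.read(csize))
            (csize,) = struct.unpack('>I', fp.read(4))
        parts.append((parttype, partid, b''.join(payload)))
    return params, parts
```

Feeding it a hand-built stream containing one "test" part with a 2-byte payload recovers the part type, id, and payload.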
@@ -1,644 +1,644
1 1 # exchange.py - utility to exchange data between repos.
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from i18n import _
9 9 from node import hex, nullid
10 10 import errno
11 11 import util, scmutil, changegroup, base85
12 12 import discovery, phases, obsolete, bookmarks, bundle2
13 13
14 14
15 15 class pushoperation(object):
16 16 """An object that represents a single push operation
17 17
18 18 Its purpose is to carry push related state and very common operations.
19 19
20 20 A new one should be created at the beginning of each push and discarded
21 21 afterward.
22 22 """
23 23
24 24 def __init__(self, repo, remote, force=False, revs=None, newbranch=False):
25 25 # repo we push from
26 26 self.repo = repo
27 27 self.ui = repo.ui
28 28 # repo we push to
29 29 self.remote = remote
30 30 # force option provided
31 31 self.force = force
32 32 # revs to be pushed (None is "all")
33 33 self.revs = revs
34 34 # allow push of new branch
35 35 self.newbranch = newbranch
36 36 # did a local lock get acquired?
37 37 self.locallocked = None
38 38 # Integer version of the push result
39 39 # - None means nothing to push
40 40 # - 0 means HTTP error
41 41 # - 1 means we pushed and remote head count is unchanged *or*
42 42 # we have outgoing changesets but refused to push
43 43 # - other values as described by addchangegroup()
44 44 self.ret = None
45 45 # discovery.outgoing object (contains common and outgoing data)
46 46 self.outgoing = None
47 47 # all remote heads before the push
48 48 self.remoteheads = None
49 49 # testable as a boolean indicating if any nodes are missing locally.
50 50 self.incoming = None
51 51 # set of all heads common after changeset bundle push
52 52 self.commonheads = None
53 53
54 54 def push(repo, remote, force=False, revs=None, newbranch=False):
55 55 '''Push outgoing changesets (limited by revs) from a local
56 56 repository to remote. Return an integer:
57 57 - None means nothing to push
58 58 - 0 means HTTP error
59 59 - 1 means we pushed and remote head count is unchanged *or*
60 60 we have outgoing changesets but refused to push
61 61 - other values as described by addchangegroup()
62 62 '''
63 63 pushop = pushoperation(repo, remote, force, revs, newbranch)
64 64 if pushop.remote.local():
65 65 missing = (set(pushop.repo.requirements)
66 66 - pushop.remote.local().supported)
67 67 if missing:
68 68 msg = _("required features are not"
69 69 " supported in the destination:"
70 70 " %s") % (', '.join(sorted(missing)))
71 71 raise util.Abort(msg)
72 72
73 73 # there are two ways to push to remote repo:
74 74 #
75 75 # addchangegroup assumes local user can lock remote
76 76 # repo (local filesystem, old ssh servers).
77 77 #
78 78 # unbundle assumes local user cannot lock remote repo (new ssh
79 79 # servers, http servers).
80 80
81 81 if not pushop.remote.canpush():
82 82 raise util.Abort(_("destination does not support push"))
83 83 # get local lock as we might write phase data
84 84 locallock = None
85 85 try:
86 86 locallock = pushop.repo.lock()
87 87 pushop.locallocked = True
88 88 except IOError, err:
89 89 pushop.locallocked = False
90 90 if err.errno != errno.EACCES:
91 91 raise
92 92 # source repo cannot be locked.
93 93 # We do not abort the push, but just disable the local phase
94 94 # synchronisation.
95 95 msg = 'cannot lock source repository: %s\n' % err
96 96 pushop.ui.debug(msg)
97 97 try:
98 98 pushop.repo.checkpush(pushop)
99 99 lock = None
100 100 unbundle = pushop.remote.capable('unbundle')
101 101 if not unbundle:
102 102 lock = pushop.remote.lock()
103 103 try:
104 104 _pushdiscovery(pushop)
105 105 if _pushcheckoutgoing(pushop):
106 106 _pushchangeset(pushop)
107 107 _pushcomputecommonheads(pushop)
108 108 _pushsyncphase(pushop)
109 109 _pushobsolete(pushop)
110 110 finally:
111 111 if lock is not None:
112 112 lock.release()
113 113 finally:
114 114 if locallock is not None:
115 115 locallock.release()
116 116
117 117 _pushbookmark(pushop)
118 118 return pushop.ret
119 119
120 120 def _pushdiscovery(pushop):
121 121 # discovery
122 122 unfi = pushop.repo.unfiltered()
123 123 fci = discovery.findcommonincoming
124 124 commoninc = fci(unfi, pushop.remote, force=pushop.force)
125 125 common, inc, remoteheads = commoninc
126 126 fco = discovery.findcommonoutgoing
127 127 outgoing = fco(unfi, pushop.remote, onlyheads=pushop.revs,
128 128 commoninc=commoninc, force=pushop.force)
129 129 pushop.outgoing = outgoing
130 130 pushop.remoteheads = remoteheads
131 131 pushop.incoming = inc
132 132
133 133 def _pushcheckoutgoing(pushop):
134 134 outgoing = pushop.outgoing
135 135 unfi = pushop.repo.unfiltered()
136 136 if not outgoing.missing:
137 137 # nothing to push
138 138 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
139 139 return False
140 140 # something to push
141 141 if not pushop.force:
142 142 # if repo.obsstore == False --> no obsolete
143 143 # then, save the iteration
144 144 if unfi.obsstore:
145 145 # these messages are here for 80 char limit reasons
146 146 mso = _("push includes obsolete changeset: %s!")
147 147 mst = "push includes %s changeset: %s!"
148 148 # plain versions for i18n tool to detect them
149 149 _("push includes unstable changeset: %s!")
150 150 _("push includes bumped changeset: %s!")
151 151 _("push includes divergent changeset: %s!")
152 152 # If we are to push and there is at least one
153 153 # obsolete or unstable changeset in missing, at
154 154 # least one of the missing heads will be obsolete or
155 155 # unstable. So checking heads only is ok.
156 156 for node in outgoing.missingheads:
157 157 ctx = unfi[node]
158 158 if ctx.obsolete():
159 159 raise util.Abort(mso % ctx)
160 160 elif ctx.troubled():
161 161 raise util.Abort(_(mst)
162 162 % (ctx.troubles()[0],
163 163 ctx))
164 164 newbm = pushop.ui.configlist('bookmarks', 'pushing')
165 165 discovery.checkheads(unfi, pushop.remote, outgoing,
166 166 pushop.remoteheads,
167 167 pushop.newbranch,
168 168 bool(pushop.incoming),
169 169 newbm)
170 170 return True
171 171
172 172 def _pushchangeset(pushop):
173 173 """Make the actual push of changeset bundle to remote repo"""
174 174 outgoing = pushop.outgoing
175 175 unbundle = pushop.remote.capable('unbundle')
176 176 # TODO: get bundlecaps from remote
177 177 bundlecaps = None
178 178 # create a changegroup from local
179 179 if pushop.revs is None and not (outgoing.excluded
180 180 or pushop.repo.changelog.filteredrevs):
181 181 # push everything,
182 182 # use the fast path, no race possible on push
183 183 bundler = changegroup.bundle10(pushop.repo, bundlecaps)
184 184 cg = changegroup.getsubset(pushop.repo,
185 185 outgoing,
186 186 bundler,
187 187 'push',
188 188 fastpath=True)
189 189 else:
190 190 cg = changegroup.getlocalbundle(pushop.repo, 'push', outgoing,
191 191 bundlecaps)
192 192
193 193 # apply changegroup to remote
194 194 if unbundle:
195 195 # local repo finds heads on server, finds out what
196 196 # revs it must push. once revs transferred, if server
197 197 # finds it has different heads (someone else won
198 198 # commit/push race), server aborts.
199 199 if pushop.force:
200 200 remoteheads = ['force']
201 201 else:
202 202 remoteheads = pushop.remoteheads
203 203 # ssh: return remote's addchangegroup()
204 204 # http: return remote's addchangegroup() or 0 for error
205 205 pushop.ret = pushop.remote.unbundle(cg, remoteheads,
206 206 'push')
207 207 else:
208 208 # we return an integer indicating remote head count
209 209 # change
210 210 pushop.ret = pushop.remote.addchangegroup(cg, 'push', pushop.repo.url())
211 211
212 212 def _pushcomputecommonheads(pushop):
213 213 unfi = pushop.repo.unfiltered()
214 214 if pushop.ret:
215 215 # push succeeded, synchronize the target of the push
216 216 cheads = pushop.outgoing.missingheads
217 217 elif pushop.revs is None:
218 218 # The entire push failed; synchronize all common
219 219 cheads = pushop.outgoing.commonheads
220 220 else:
221 221 # I want cheads = heads(::missingheads and ::commonheads)
222 222 # (missingheads is revs with secret changeset filtered out)
223 223 #
224 224 # This can be expressed as:
225 225 # cheads = ( (missingheads and ::commonheads)
226 226 # + (commonheads and ::missingheads))"
227 227 # )
228 228 #
229 229 # while trying to push we already computed the following:
230 230 # common = (::commonheads)
231 231 # missing = ((commonheads::missingheads) - commonheads)
232 232 #
233 233 # We can pick:
234 234 # * missingheads part of common (::commonheads)
235 235 common = set(pushop.outgoing.common)
236 236 nm = pushop.repo.changelog.nodemap
237 237 cheads = [node for node in pushop.revs if nm[node] in common]
238 238 # and
239 239 # * commonheads parents on missing
240 240 revset = unfi.set('%ln and parents(roots(%ln))',
241 241 pushop.outgoing.commonheads,
242 242 pushop.outgoing.missing)
243 243 cheads.extend(c.node() for c in revset)
244 244 pushop.commonheads = cheads
245 245
246 246 def _pushsyncphase(pushop):
247 247 """synchronise phase information locally and remotely"""
248 248 unfi = pushop.repo.unfiltered()
249 249 cheads = pushop.commonheads
250 250 if pushop.ret:
251 251 # push succeeded, synchronize the target of the push
252 252 cheads = pushop.outgoing.missingheads
253 253 elif pushop.revs is None:
254 254 # The entire push failed; synchronize all common
255 255 cheads = pushop.outgoing.commonheads
256 256 else:
257 257 # I want cheads = heads(::missingheads and ::commonheads)
258 258 # (missingheads is revs with secret changeset filtered out)
259 259 #
260 260 # This can be expressed as:
261 261 # cheads = ( (missingheads and ::commonheads)
262 262 # + (commonheads and ::missingheads))
263 263 # )
264 264 #
265 265 # while trying to push we already computed the following:
266 266 # common = (::commonheads)
267 267 # missing = ((commonheads::missingheads) - commonheads)
268 268 #
269 269 # We can pick:
270 270 # * missingheads part of common (::commonheads)
271 271 common = set(pushop.outgoing.common)
272 272 nm = pushop.repo.changelog.nodemap
273 273 cheads = [node for node in pushop.revs if nm[node] in common]
274 274 # and
275 275 # * commonheads parents on missing
276 276 revset = unfi.set('%ln and parents(roots(%ln))',
277 277 pushop.outgoing.commonheads,
278 278 pushop.outgoing.missing)
279 279 cheads.extend(c.node() for c in revset)
280 280 pushop.commonheads = cheads
281 281 # even when we don't push, exchanging phase data is useful
282 282 remotephases = pushop.remote.listkeys('phases')
283 283 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
284 284 and remotephases # server supports phases
285 285 and pushop.ret is None # nothing was pushed
286 286 and remotephases.get('publishing', False)):
287 287 # When:
288 288 # - this is a subrepo push
289 289 # - and remote support phase
290 290 # - and no changeset was pushed
291 291 # - and remote is publishing
292 292 # We may be in the issue 3871 case!
293 293 # We drop the phase synchronisation normally done as a
294 294 # courtesy; it would publish changesets that are on the
295 295 # remote but possibly still draft locally.
296 296 remotephases = {'publishing': 'True'}
297 297 if not remotephases: # old server or public only repo
298 298 _localphasemove(pushop, cheads)
299 299 # don't push any phase data as there is nothing to push
300 300 else:
301 301 ana = phases.analyzeremotephases(pushop.repo, cheads,
302 302 remotephases)
303 303 pheads, droots = ana
304 304 ### Apply remote phase on local
305 305 if remotephases.get('publishing', False):
306 306 _localphasemove(pushop, cheads)
307 307 else: # publish = False
308 308 _localphasemove(pushop, pheads)
309 309 _localphasemove(pushop, cheads, phases.draft)
310 310 ### Apply local phase on remote
311 311
312 312 # Get the list of all revs draft on remote but public here.
313 313 # XXX Beware that the revset breaks if droots is not strictly
314 314 # XXX roots; we may want to ensure it is, but that is costly
315 315 outdated = unfi.set('heads((%ln::%ln) and public())',
316 316 droots, cheads)
317 317 for newremotehead in outdated:
318 318 r = pushop.remote.pushkey('phases',
319 319 newremotehead.hex(),
320 320 str(phases.draft),
321 321 str(phases.public))
322 322 if not r:
323 323 pushop.ui.warn(_('updating %s to public failed!\n')
324 324 % newremotehead)
325 325
326 326 def _localphasemove(pushop, nodes, phase=phases.public):
327 327 """move <nodes> to <phase> in the local source repo"""
328 328 if pushop.locallocked:
329 329 phases.advanceboundary(pushop.repo, phase, nodes)
330 330 else:
331 331 # repo is not locked, do not change any phases!
332 332 # Informs the user that phases should have been moved when
333 333 # applicable.
334 334 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
335 335 phasestr = phases.phasenames[phase]
336 336 if actualmoves:
337 337 pushop.ui.status(_('cannot lock source repo, skipping '
338 338 'local %s phase update\n') % phasestr)
339 339
340 340 def _pushobsolete(pushop):
341 341 """utility function to push obsolete markers to a remote"""
342 342 pushop.ui.debug('try to push obsolete markers to remote\n')
343 343 repo = pushop.repo
344 344 remote = pushop.remote
345 345 if (obsolete._enabled and repo.obsstore and
346 346 'obsolete' in remote.listkeys('namespaces')):
347 347 rslts = []
348 348 remotedata = repo.listkeys('obsolete')
349 349 for key in sorted(remotedata, reverse=True):
350 350 # reverse sort to ensure we end with dump0
351 351 data = remotedata[key]
352 352 rslts.append(remote.pushkey('obsolete', key, '', data))
353 353 if [r for r in rslts if not r]:
354 354 msg = _('failed to push some obsolete markers!\n')
355 355 repo.ui.warn(msg)
356 356
357 357 def _pushbookmark(pushop):
358 358 """Update bookmark position on remote"""
359 359 ui = pushop.ui
360 360 repo = pushop.repo.unfiltered()
361 361 remote = pushop.remote
362 362 ui.debug("checking for updated bookmarks\n")
363 363 revnums = map(repo.changelog.rev, pushop.revs or [])
364 364 ancestors = [a for a in repo.changelog.ancestors(revnums, inclusive=True)]
365 365 (addsrc, adddst, advsrc, advdst, diverge, differ, invalid
366 366 ) = bookmarks.compare(repo, repo._bookmarks, remote.listkeys('bookmarks'),
367 367 srchex=hex)
368 368
369 369 for b, scid, dcid in advsrc:
370 370 if ancestors and repo[scid].rev() not in ancestors:
371 371 continue
372 372 if remote.pushkey('bookmarks', b, dcid, scid):
373 373 ui.status(_("updating bookmark %s\n") % b)
374 374 else:
375 375 ui.warn(_('updating bookmark %s failed!\n') % b)
376 376
377 377 class pulloperation(object):
378 378 """An object that represents a single pull operation
379 379
380 380 Its purpose is to carry pull-related state and very common operations.
381 381
382 382 A new one should be created at the beginning of each pull and discarded
383 383 afterward.
384 384 """
385 385
386 386 def __init__(self, repo, remote, heads=None, force=False):
387 387 # repo we pull into
388 388 self.repo = repo
389 389 # repo we pull from
390 390 self.remote = remote
391 391 # revisions we try to pull (None means "all")
392 392 self.heads = heads
393 393 # do we force pull?
394 394 self.force = force
395 395 # the name of the pull transaction
396 396 self._trname = 'pull\n' + util.hidepassword(remote.url())
397 397 # hold the transaction once created
398 398 self._tr = None
399 399 # set of common changesets between local and remote before pull
400 400 self.common = None
401 401 # set of pulled heads
402 402 self.rheads = None
403 403 # list of missing changesets to fetch remotely
404 404 self.fetch = None
405 405 # result of changegroup pulling (used as return code by pull)
406 406 self.cgresult = None
407 407 # list of steps remaining to do (related to future bundle2 usage)
408 408 self.todosteps = set(['changegroup', 'phases', 'obsmarkers'])
409 409
410 410 @util.propertycache
411 411 def pulledsubset(self):
412 412 """heads of the set of changesets targeted by the pull"""
413 413 # compute target subset
414 414 if self.heads is None:
415 415 # We pulled everything possible
416 416 # sync on everything common
417 417 c = set(self.common)
418 418 ret = list(self.common)
419 419 for n in self.rheads:
420 420 if n not in c:
421 421 ret.append(n)
422 422 return ret
423 423 else:
424 424 # We pulled a specific subset
425 425 # sync on this subset
426 426 return self.heads
427 427
428 428 def gettransaction(self):
429 429 """get appropriate pull transaction, creating it if needed"""
430 430 if self._tr is None:
431 431 self._tr = self.repo.transaction(self._trname)
432 432 return self._tr
433 433
434 434 def closetransaction(self):
435 435 """close transaction if created"""
436 436 if self._tr is not None:
437 437 self._tr.close()
438 438
439 439 def releasetransaction(self):
440 440 """release transaction if created"""
441 441 if self._tr is not None:
442 442 self._tr.release()
443 443
444 444 def pull(repo, remote, heads=None, force=False):
445 445 pullop = pulloperation(repo, remote, heads, force)
446 446 if pullop.remote.local():
447 447 missing = set(pullop.remote.requirements) - pullop.repo.supported
448 448 if missing:
449 449 msg = _("required features are not"
450 450 " supported in the destination:"
451 451 " %s") % (', '.join(sorted(missing)))
452 452 raise util.Abort(msg)
453 453
454 454 lock = pullop.repo.lock()
455 455 try:
456 456 _pulldiscovery(pullop)
457 457 if pullop.remote.capable('bundle2'):
458 458 _pullbundle2(pullop)
459 459 if 'changegroup' in pullop.todosteps:
460 460 _pullchangeset(pullop)
461 461 if 'phases' in pullop.todosteps:
462 462 _pullphase(pullop)
463 463 if 'obsmarkers' in pullop.todosteps:
464 464 _pullobsolete(pullop)
465 465 pullop.closetransaction()
466 466 finally:
467 467 pullop.releasetransaction()
468 468 lock.release()
469 469
470 470 return pullop.cgresult
471 471
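The step dispatch in pull() can be sketched on its own: a bundle2-capable remote may satisfy some steps in a single round trip, and whatever remains in `todosteps` falls back to the legacy one-request-per-step pulls. A minimal, hypothetical sketch (only the step names are taken from the code; the helper functions here are stand-ins, not Mercurial API):

```python
# Miniature of pull()'s dispatch: the bundle2 path removes the steps it
# handled from `todosteps`; the legacy path then picks up whatever is left.
todosteps = set(['changegroup', 'phases', 'obsmarkers'])
done = []

def pullbundle2(steps):
    # a bundle2 remote serves the changegroup in one response
    steps.remove('changegroup')
    done.append('changegroup:bundle2')

pullbundle2(todosteps)
for step in ('changegroup', 'phases', 'obsmarkers'):
    if step in todosteps:
        todosteps.remove(step)
        done.append(step + ':legacy')

print(done)  # -> ['changegroup:bundle2', 'phases:legacy', 'obsmarkers:legacy']
```

This is why each legacy `_pull*` helper removes its own name from `pushop.todosteps` first: a step already satisfied over bundle2 is never repeated.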
472 472 def _pulldiscovery(pullop):
473 473 """discovery phase for the pull
474 474
475 475 Currently handles changeset discovery only; will change to handle all
476 476 discovery at some point."""
477 477 tmp = discovery.findcommonincoming(pullop.repo.unfiltered(),
478 478 pullop.remote,
479 479 heads=pullop.heads,
480 480 force=pullop.force)
481 481 pullop.common, pullop.fetch, pullop.rheads = tmp
482 482
483 483 def _pullbundle2(pullop):
484 484 """pull data using bundle2
485 485
486 486 For now, the only supported data is the changegroup."""
487 487 kwargs = {'bundlecaps': set(['HG20'])}
488 488 # pulling changegroup
489 489 pullop.todosteps.remove('changegroup')
490 490 if not pullop.fetch:
491 491 pullop.repo.ui.status(_("no changes found\n"))
492 492 pullop.cgresult = 0
493 493 else:
494 494 kwargs['common'] = pullop.common
495 495 kwargs['heads'] = pullop.heads or pullop.rheads
496 496 if pullop.heads is None and list(pullop.common) == [nullid]:
497 497 pullop.repo.ui.status(_("requesting all changes\n"))
498 498 if kwargs.keys() == ['format']:
499 499 return # nothing to pull
500 500 bundle = pullop.remote.getbundle('pull', **kwargs)
501 501 try:
502 502 op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
503 503 except KeyError, exc:
504 504 raise util.Abort('missing support for %s' % exc)
505 505 assert len(op.records['changegroup']) == 1
506 506 pullop.cgresult = op.records['changegroup'][0]['return']
507 507
508 508 def _pullchangeset(pullop):
509 509 """pull changesets from unbundle into the local repo"""
510 510 # We delay opening the transaction as late as possible so we
511 511 # don't open a transaction for nothing and don't break future
512 512 # useful rollback calls
513 513 pullop.todosteps.remove('changegroup')
514 514 if not pullop.fetch:
515 515 pullop.repo.ui.status(_("no changes found\n"))
516 516 pullop.cgresult = 0
517 517 return
518 518 pullop.gettransaction()
519 519 if pullop.heads is None and list(pullop.common) == [nullid]:
520 520 pullop.repo.ui.status(_("requesting all changes\n"))
521 521 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
522 522 # issue1320, avoid a race if remote changed after discovery
523 523 pullop.heads = pullop.rheads
524 524
525 525 if pullop.remote.capable('getbundle'):
526 526 # TODO: get bundlecaps from remote
527 527 cg = pullop.remote.getbundle('pull', common=pullop.common,
528 528 heads=pullop.heads or pullop.rheads)
529 529 elif pullop.heads is None:
530 530 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
531 531 elif not pullop.remote.capable('changegroupsubset'):
532 532 raise util.Abort(_("partial pull cannot be done because "
533 533 "other repository doesn't support "
534 534 "changegroupsubset."))
535 535 else:
536 536 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
537 537 pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull',
538 538 pullop.remote.url())
539 539
540 540 def _pullphase(pullop):
541 541 # Get remote phases data from remote
542 542 pullop.todosteps.remove('phases')
543 543 remotephases = pullop.remote.listkeys('phases')
544 544 publishing = bool(remotephases.get('publishing', False))
545 545 if remotephases and not publishing:
546 546 # remote is new and non-publishing
547 547 pheads, _dr = phases.analyzeremotephases(pullop.repo,
548 548 pullop.pulledsubset,
549 549 remotephases)
550 550 phases.advanceboundary(pullop.repo, phases.public, pheads)
551 551 phases.advanceboundary(pullop.repo, phases.draft,
552 552 pullop.pulledsubset)
553 553 else:
554 554 # Remote is old or publishing; all common changesets
555 555 # should be seen as public
556 556 phases.advanceboundary(pullop.repo, phases.public,
557 557 pullop.pulledsubset)
558 558
559 559 def _pullobsolete(pullop):
560 560 """utility function to pull obsolete markers from a remote
561 561
562 562 `gettransaction` is a function that returns the pull transaction, creating
563 563 one if necessary. We return the transaction to inform the calling code that
564 564 a new transaction has been created (when applicable).
565 565
566 566 Exists mostly to allow overriding for experimentation purposes"""
567 567 pullop.todosteps.remove('obsmarkers')
568 568 tr = None
569 569 if obsolete._enabled:
570 570 pullop.repo.ui.debug('fetching remote obsolete markers\n')
571 571 remoteobs = pullop.remote.listkeys('obsolete')
572 572 if 'dump0' in remoteobs:
573 573 tr = pullop.gettransaction()
574 574 for key in sorted(remoteobs, reverse=True):
575 575 if key.startswith('dump'):
576 576 data = base85.b85decode(remoteobs[key])
577 577 pullop.repo.obsstore.mergemarkers(tr, data)
578 578 pullop.repo.invalidatevolatilesets()
579 579 return tr
580 580
581 581 def getbundle(repo, source, heads=None, common=None, bundlecaps=None):
582 582 """return a full bundle (with potentially multiple kinds of parts)
583 583
584 584 Could be a bundle HG10 or a bundle HG20 depending on the bundlecaps
585 585 passed. For now, the bundle can contain only changegroups, but this will
586 586 change when more part types become available for bundle2.
587 587
588 588 This is different from changegroup.getbundle, which only returns an HG10
589 589 changegroup bundle. They may eventually get reunited in the future when we
590 590 have a clearer idea of the API we want to use to query different data.
591 591
592 592 The implementation is at a very early stage and will get massive rework
593 593 when the API of bundle is refined.
594 594 """
595 595 # build bundle here.
596 596 cg = changegroup.getbundle(repo, source, heads=heads,
597 597 common=common, bundlecaps=bundlecaps)
598 598 if bundlecaps is None or 'HG20' not in bundlecaps:
599 599 return cg
600 600 # very crude first implementation,
601 601 # the bundle API will change and the generation will be done lazily.
602 602 bundler = bundle2.bundle20(repo.ui)
603 603 def cgchunks(cg=cg):
604 604 yield 'HG10UN'
605 605 for c in cg.getchunks():
606 606 yield c
607 part = bundle2.part('changegroup', data=cgchunks())
607 part = bundle2.bundlepart('changegroup', data=cgchunks())
608 608 bundler.addpart(part)
609 609 return bundle2.unbundle20(repo.ui, util.chunkbuffer(bundler.getchunks()))
610 610
611 611 class PushRaced(RuntimeError):
612 612 """An exception raised during unbundling that indicates a push race"""
613 613
614 614 def check_heads(repo, their_heads, context):
615 615 """check if the heads of a repo have been modified
616 616
617 617 Used by peer for unbundling.
618 618 """
619 619 heads = repo.heads()
620 620 heads_hash = util.sha1(''.join(sorted(heads))).digest()
621 621 if not (their_heads == ['force'] or their_heads == heads or
622 622 their_heads == ['hashed', heads_hash]):
623 623 # someone else committed/pushed/unbundled while we
624 624 # were transferring data
625 625 raise PushRaced('repository changed while %s - '
626 626 'please try again' % context)
627 627
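The race check in check_heads can be illustrated standalone. This is a hypothetical sketch, not Mercurial's code: it uses hashlib and bytes nodes where the original uses util.sha1 over Python 2 strings, but the digest scheme (sha1 over the sorted binary head nodes) matches the function above.

```python
import hashlib

def hash_heads(heads):
    # same scheme as check_heads: sha1 over the sorted binary head nodes
    return hashlib.sha1(b''.join(sorted(heads))).digest()

def heads_unchanged(their_heads, current_heads):
    # their_heads is [b'force'], the literal head list, or [b'hashed', digest]
    if their_heads == [b'force'] or their_heads == current_heads:
        return True
    return their_heads == [b'hashed', hash_heads(current_heads)]

before = [b'\x11' * 20, b'\x22' * 20]  # heads seen at discovery time
after = before + [b'\x33' * 20]        # someone else pushed meanwhile

assert heads_unchanged([b'hashed', hash_heads(before)], before)
assert not heads_unchanged([b'hashed', hash_heads(before)], after)
```

Sending only the digest keeps the check cheap for repos with many heads, while still detecting any commit, push, or unbundle that lands between discovery and upload.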
628 628 def unbundle(repo, cg, heads, source, url):
629 629 """Apply a bundle to a repo.
630 630
631 631 This function makes sure the repo is locked during the application and has
632 632 a mechanism to check that no push race occurred between the creation of
633 633 the bundle and its application.
634 634
635 635 If the push was raced, a PushRaced exception is raised."""
636 636 r = 0
637 637 lock = repo.lock()
638 638 try:
639 639 check_heads(repo, heads, 'uploading changes')
640 640 # push can proceed
641 641 r = changegroup.addchangegroup(repo, cg, source, url)
642 642 finally:
643 643 lock.release()
644 644 return r
@@ -1,676 +1,677
1 1
2 2 Create an extension to test bundle2 API
3 3
4 4 $ cat > bundle2.py << EOF
5 5 > """A small extension to test bundle2 implementation
6 6 >
7 7 > Current bundle2 implementation is far too limited to be used in any core
8 8 > code. We still need to be able to test it while it grows up.
9 9 > """
10 10 >
11 11 > import sys
12 12 > from mercurial import cmdutil
13 13 > from mercurial import util
14 14 > from mercurial import bundle2
15 15 > from mercurial import scmutil
16 16 > from mercurial import discovery
17 17 > from mercurial import changegroup
18 18 > cmdtable = {}
19 19 > command = cmdutil.command(cmdtable)
20 20 >
21 21 > ELEPHANTSSONG = """Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
22 22 > Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
23 23 > Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko."""
24 24 > assert len(ELEPHANTSSONG) == 178 # future tests say 178 bytes, trust it.
25 25 >
26 26 > @bundle2.parthandler('test:song')
27 27 > def songhandler(op, part):
28 28 > """handle a "test:song" bundle2 part, printing the lyrics on stdout"""
29 29 > op.ui.write('The choir starts singing:\n')
30 30 > verses = 0
31 31 > for line in part.data.split('\n'):
32 32 > op.ui.write(' %s\n' % line)
33 33 > verses += 1
34 34 > op.records.add('song', {'verses': verses})
35 35 >
36 36 > @bundle2.parthandler('test:ping')
37 37 > def pinghandler(op, part):
38 38 > op.ui.write('received ping request (id %i)\n' % part.id)
39 39 > if op.reply is not None:
40 > op.reply.addpart(bundle2.part('test:pong',
41 > [('in-reply-to', str(part.id))]))
40 > rpart = bundle2.bundlepart('test:pong',
41 > [('in-reply-to', str(part.id))])
42 > op.reply.addpart(rpart)
42 43 >
43 44 > @command('bundle2',
44 45 > [('', 'param', [], 'stream level parameter'),
45 46 > ('', 'unknown', False, 'include an unknown mandatory part in the bundle'),
46 47 > ('', 'parts', False, 'include some arbitrary parts in the bundle'),
47 48 > ('r', 'rev', [], 'include those changesets in the bundle'),],
48 49 > '[OUTPUTFILE]')
49 50 > def cmdbundle2(ui, repo, path=None, **opts):
50 51 > """write a bundle2 container on standard output"""
51 52 > bundler = bundle2.bundle20(ui)
52 53 > for p in opts['param']:
53 54 > p = p.split('=', 1)
54 55 > try:
55 56 > bundler.addparam(*p)
56 57 > except ValueError, exc:
57 58 > raise util.Abort('%s' % exc)
58 59 >
59 60 > revs = opts['rev']
60 61 > if 'rev' in opts:
61 62 > revs = scmutil.revrange(repo, opts['rev'])
62 63 > if revs:
63 64 > # very crude version of a changegroup part creation
64 65 > bundled = repo.revs('%ld::%ld', revs, revs)
65 66 > headmissing = [c.node() for c in repo.set('heads(%ld)', revs)]
66 67 > headcommon = [c.node() for c in repo.set('parents(%ld) - %ld', revs, revs)]
67 68 > outgoing = discovery.outgoing(repo.changelog, headcommon, headmissing)
68 69 > cg = changegroup.getlocalbundle(repo, 'test:bundle2', outgoing, None)
69 70 > def cgchunks(cg=cg):
70 71 > yield 'HG10UN'
71 72 > for c in cg.getchunks():
72 73 > yield c
73 > part = bundle2.part('changegroup', data=cgchunks())
74 > part = bundle2.bundlepart('changegroup', data=cgchunks())
74 75 > bundler.addpart(part)
75 76 >
76 77 > if opts['parts']:
77 > part = bundle2.part('test:empty')
78 > part = bundle2.bundlepart('test:empty')
78 79 > bundler.addpart(part)
79 80 > # add a second one to make sure we handle multiple parts
80 > part = bundle2.part('test:empty')
81 > part = bundle2.bundlepart('test:empty')
81 82 > bundler.addpart(part)
82 > part = bundle2.part('test:song', data=ELEPHANTSSONG)
83 > part = bundle2.bundlepart('test:song', data=ELEPHANTSSONG)
83 84 > bundler.addpart(part)
84 > part = bundle2.part('test:math',
85 > part = bundle2.bundlepart('test:math',
85 86 > [('pi', '3.14'), ('e', '2.72')],
86 87 > [('cooking', 'raw')],
87 88 > '42')
88 89 > bundler.addpart(part)
89 90 > if opts['unknown']:
90 > part = bundle2.part('test:UNKNOWN',
91 > part = bundle2.bundlepart('test:UNKNOWN',
91 92 > data='some random content')
92 93 > bundler.addpart(part)
93 94 > if opts['parts']:
94 > part = bundle2.part('test:ping')
95 > part = bundle2.bundlepart('test:ping')
95 96 > bundler.addpart(part)
96 97 >
97 98 > if path is None:
98 99 > file = sys.stdout
99 100 > else:
100 101 > file = open(path, 'w')
101 102 >
102 103 > for chunk in bundler.getchunks():
103 104 > file.write(chunk)
104 105 >
105 106 > @command('unbundle2', [], '')
106 107 > def cmdunbundle2(ui, repo, replypath=None):
107 108 > """process a bundle2 stream from stdin on the current repo"""
108 109 > try:
109 110 > tr = None
110 111 > lock = repo.lock()
111 112 > tr = repo.transaction('processbundle')
112 113 > try:
113 114 > unbundler = bundle2.unbundle20(ui, sys.stdin)
114 115 > op = bundle2.processbundle(repo, unbundler, lambda: tr)
115 116 > tr.close()
116 117 > except KeyError, exc:
117 118 > raise util.Abort('missing support for %s' % exc)
118 119 > finally:
119 120 > if tr is not None:
120 121 > tr.release()
121 122 > lock.release()
122 123 > remains = sys.stdin.read()
123 124 > ui.write('%i unread bytes\n' % len(remains))
124 125 > if op.records['song']:
125 126 > totalverses = sum(r['verses'] for r in op.records['song'])
126 127 > ui.write('%i total verses sung\n' % totalverses)
127 128 > for rec in op.records['changegroup']:
128 129 > ui.write('addchangegroup return: %i\n' % rec['return'])
129 130 > if op.reply is not None and replypath is not None:
130 131 > file = open(replypath, 'w')
131 132 > for chunk in op.reply.getchunks():
132 133 > file.write(chunk)
133 134 >
134 135 > @command('statbundle2', [], '')
135 136 > def cmdstatbundle2(ui, repo):
136 137 > """print statistic on the bundle2 container read from stdin"""
137 138 > unbundler = bundle2.unbundle20(ui, sys.stdin)
138 139 > try:
139 140 > params = unbundler.params
140 141 > except KeyError, exc:
141 142 > raise util.Abort('unknown parameters: %s' % exc)
142 143 > ui.write('options count: %i\n' % len(params))
143 144 > for key in sorted(params):
144 145 > ui.write('- %s\n' % key)
145 146 > value = params[key]
146 147 > if value is not None:
147 148 > ui.write(' %s\n' % value)
148 149 > parts = list(unbundler)
149 150 > ui.write('parts count: %i\n' % len(parts))
150 151 > for p in parts:
151 152 > ui.write(' :%s:\n' % p.type)
152 153 > ui.write(' mandatory: %i\n' % len(p.mandatoryparams))
153 154 > ui.write(' advisory: %i\n' % len(p.advisoryparams))
154 155 > ui.write(' payload: %i bytes\n' % len(p.data))
155 156 > EOF
156 157 $ cat >> $HGRCPATH << EOF
157 158 > [extensions]
158 159 > bundle2=$TESTTMP/bundle2.py
159 160 > [server]
160 161 > bundle2=True
161 162 > EOF
162 163
163 164 The extension requires a repo (currently unused)
164 165
165 166 $ hg init main
166 167 $ cd main
167 168 $ touch a
168 169 $ hg add a
169 170 $ hg commit -m 'a'
170 171
171 172
172 173 Empty bundle
173 174 =================
174 175
175 176 - no option
176 177 - no parts
177 178
178 179 Test bundling
179 180
180 181 $ hg bundle2
181 182 HG20\x00\x00\x00\x00 (no-eol) (esc)
182 183
183 184 Test unbundling
184 185
185 186 $ hg bundle2 | hg statbundle2
186 187 options count: 0
187 188 parts count: 0
188 189
189 190 Test old style bundles are detected and refused
190 191
191 192 $ hg bundle --all ../bundle.hg
192 193 1 changesets found
193 194 $ hg statbundle2 < ../bundle.hg
194 195 abort: unknown bundle version 10
195 196 [255]
196 197
197 198 Test parameters
198 199 =================
199 200
200 201 - some options
201 202 - no parts
202 203
203 204 advisory parameters, no value
204 205 -------------------------------
205 206
206 207 Simplest possible parameters form
207 208
208 209 Test generation of a simple option
209 210
210 211 $ hg bundle2 --param 'caution'
211 212 HG20\x00\x07caution\x00\x00 (no-eol) (esc)
212 213
213 214 Test unbundling
214 215
215 216 $ hg bundle2 --param 'caution' | hg statbundle2
216 217 options count: 1
217 218 - caution
218 219 parts count: 0
219 220
220 221 Test generation of multiple options
221 222
222 223 $ hg bundle2 --param 'caution' --param 'meal'
223 224 HG20\x00\x0ccaution meal\x00\x00 (no-eol) (esc)
224 225
225 226 Test unbundling
226 227
227 228 $ hg bundle2 --param 'caution' --param 'meal' | hg statbundle2
228 229 options count: 2
229 230 - caution
230 231 - meal
231 232 parts count: 0
232 233
233 234 advisory parameters, with value
234 235 -------------------------------
235 236
236 237 Test generation
237 238
238 239 $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants'
239 240 HG20\x00\x1ccaution meal=vegan elephants\x00\x00 (no-eol) (esc)
240 241
241 242 Test unbundling
242 243
243 244 $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants' | hg statbundle2
244 245 options count: 3
245 246 - caution
246 247 - elephants
247 248 - meal
248 249 vegan
249 250 parts count: 0
250 251
251 252 parameter with special char in value
252 253 ---------------------------------------------------
253 254
254 255 Test generation
255 256
256 257 $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple
257 258 HG20\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc)
258 259
259 260 Test unbundling
260 261
261 262 $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple | hg statbundle2
262 263 options count: 2
263 264 - e|! 7/
264 265 babar%#==tutu
265 266 - simple
266 267 parts count: 0
267 268
268 269 Test unknown mandatory option
269 270 ---------------------------------------------------
270 271
271 272 $ hg bundle2 --param 'Gravity' | hg statbundle2
272 273 abort: unknown parameters: 'Gravity'
273 274 [255]
274 275
275 276 Test debug output
276 277 ---------------------------------------------------
277 278
278 279 bundling debug
279 280
280 281 $ hg bundle2 --debug --param 'e|! 7/=babar%#==tutu' --param simple ../out.hg2
281 282 start emission of HG20 stream
282 283 bundle parameter: e%7C%21%207/=babar%25%23%3D%3Dtutu simple
283 284 start of parts
284 285 end of bundle
285 286
286 287 file content is ok
287 288
288 289 $ cat ../out.hg2
289 290 HG20\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc)
290 291
291 292 unbundling debug
292 293
293 294 $ hg statbundle2 --debug < ../out.hg2
294 295 start processing of HG20 stream
295 296 reading bundle2 stream parameters
296 297 ignoring unknown parameter 'e|! 7/'
297 298 ignoring unknown parameter 'simple'
298 299 options count: 2
299 300 - e|! 7/
300 301 babar%#==tutu
301 302 - simple
302 303 start extraction of bundle2 parts
303 304 part header size: 0
304 305 end of bundle2 stream
305 306 parts count: 0
306 307
307 308
308 309 Test buggy input
309 310 ---------------------------------------------------
310 311
311 312 empty parameter name
312 313
313 314 $ hg bundle2 --param '' --quiet
314 315 abort: empty parameter name
315 316 [255]
316 317
317 318 bad parameter name
318 319
319 320 $ hg bundle2 --param 42babar
320 321 abort: non letter first character: '42babar'
321 322 [255]
322 323
323 324
324 325 Test part
325 326 =================
326 327
327 328 $ hg bundle2 --parts ../parts.hg2 --debug
328 329 start emission of HG20 stream
329 330 bundle parameter:
330 331 start of parts
331 332 bundle part: "test:empty"
332 333 bundle part: "test:empty"
333 334 bundle part: "test:song"
334 335 bundle part: "test:math"
335 336 bundle part: "test:ping"
336 337 end of bundle
337 338
338 339 $ cat ../parts.hg2
339 340 HG20\x00\x00\x00\x11 (esc)
340 341 test:empty\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11 (esc)
341 342 test:empty\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x10 test:song\x00\x00\x00\x02\x00\x00\x00\x00\x00\xb2Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko (esc)
342 343 Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
343 344 Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.\x00\x00\x00\x00\x00+ test:math\x00\x00\x00\x03\x02\x01\x02\x04\x01\x04\x07\x03pi3.14e2.72cookingraw\x00\x00\x00\x0242\x00\x00\x00\x00\x00\x10 test:ping\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
344 345
345 346
346 347 $ hg statbundle2 < ../parts.hg2
347 348 options count: 0
348 349 parts count: 5
349 350 :test:empty:
350 351 mandatory: 0
351 352 advisory: 0
352 353 payload: 0 bytes
353 354 :test:empty:
354 355 mandatory: 0
355 356 advisory: 0
356 357 payload: 0 bytes
357 358 :test:song:
358 359 mandatory: 0
359 360 advisory: 0
360 361 payload: 178 bytes
361 362 :test:math:
362 363 mandatory: 2
363 364 advisory: 1
364 365 payload: 2 bytes
365 366 :test:ping:
366 367 mandatory: 0
367 368 advisory: 0
368 369 payload: 0 bytes
369 370
370 371 $ hg statbundle2 --debug < ../parts.hg2
371 372 start processing of HG20 stream
372 373 reading bundle2 stream parameters
373 374 options count: 0
374 375 start extraction of bundle2 parts
375 376 part header size: 17
376 377 part type: "test:empty"
377 378 part id: "0"
378 379 part parameters: 0
379 380 payload chunk size: 0
380 381 part header size: 17
381 382 part type: "test:empty"
382 383 part id: "1"
383 384 part parameters: 0
384 385 payload chunk size: 0
385 386 part header size: 16
386 387 part type: "test:song"
387 388 part id: "2"
388 389 part parameters: 0
389 390 payload chunk size: 178
390 391 payload chunk size: 0
391 392 part header size: 43
392 393 part type: "test:math"
393 394 part id: "3"
394 395 part parameters: 3
395 396 payload chunk size: 2
396 397 payload chunk size: 0
397 398 part header size: 16
398 399 part type: "test:ping"
399 400 part id: "4"
400 401 part parameters: 0
401 402 payload chunk size: 0
402 403 part header size: 0
403 404 end of bundle2 stream
404 405 parts count: 5
405 406 :test:empty:
406 407 mandatory: 0
407 408 advisory: 0
408 409 payload: 0 bytes
409 410 :test:empty:
410 411 mandatory: 0
411 412 advisory: 0
412 413 payload: 0 bytes
413 414 :test:song:
414 415 mandatory: 0
415 416 advisory: 0
416 417 payload: 178 bytes
417 418 :test:math:
418 419 mandatory: 2
419 420 advisory: 1
420 421 payload: 2 bytes
421 422 :test:ping:
422 423 mandatory: 0
423 424 advisory: 0
424 425 payload: 0 bytes
425 426
426 427 Test actual unbundling of test part
427 428 =======================================
428 429
429 430 Process the bundle
430 431
431 432 $ hg unbundle2 --debug < ../parts.hg2
432 433 start processing of HG20 stream
433 434 reading bundle2 stream parameters
434 435 start extraction of bundle2 parts
435 436 part header size: 17
436 437 part type: "test:empty"
437 438 part id: "0"
438 439 part parameters: 0
439 440 payload chunk size: 0
440 441 ignoring unknown advisory part 'test:empty'
441 442 part header size: 17
442 443 part type: "test:empty"
443 444 part id: "1"
444 445 part parameters: 0
445 446 payload chunk size: 0
446 447 ignoring unknown advisory part 'test:empty'
447 448 part header size: 16
448 449 part type: "test:song"
449 450 part id: "2"
450 451 part parameters: 0
451 452 payload chunk size: 178
452 453 payload chunk size: 0
453 454 found a handler for part 'test:song'
454 455 The choir starts singing:
455 456 Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
456 457 Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
457 458 Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
458 459 part header size: 43
459 460 part type: "test:math"
460 461 part id: "3"
461 462 part parameters: 3
462 463 payload chunk size: 2
463 464 payload chunk size: 0
464 465 ignoring unknown advisory part 'test:math'
465 466 part header size: 16
466 467 part type: "test:ping"
467 468 part id: "4"
468 469 part parameters: 0
469 470 payload chunk size: 0
470 471 found a handler for part 'test:ping'
471 472 received ping request (id 4)
472 473 part header size: 0
473 474 end of bundle2 stream
474 475 0 unread bytes
475 476 3 total verses sung

Unbundle with an unknown mandatory part
(should abort)

  $ hg bundle2 --parts --unknown ../unknown.hg2

  $ hg unbundle2 < ../unknown.hg2
  The choir starts singing:
  Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  0 unread bytes
  abort: missing support for 'test:unknown'
  [255]

unbundle with a reply

  $ hg unbundle2 ../reply.hg2 < ../parts.hg2
  The choir starts singing:
  Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  received ping request (id 4)
  0 unread bytes
  3 total verses sung

The reply is a bundle

  $ cat ../reply.hg2
  HG20\x00\x00\x00\x1e test:pong\x00\x00\x00\x00\x01\x00\x0b\x01in-reply-to4\x00\x00\x00\x00\x00\x00 (no-eol) (esc)

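Not part of the test itself: the raw dump above begins with the stream-level header described in bundle2.py's docstring — a 4-byte magic string, a 16-bit big-endian parameter-blob size, then a blob of space-separated, urlquoted `name` or `name=value` parameters. A minimal Python 3 sketch of that framing (the function name is my own illustration, not a bundle2 API):

```python
import struct
import urllib.parse

def read_stream_params(data):
    """Split a bundle2 byte string into its stream-level parameters
    and the rest of the stream (the payload parts)."""
    magic, data = data[:4], data[4:]
    if magic != b'HG20':
        raise ValueError('not a bundle2 stream: %r' % magic)
    # 16-bit big-endian size of the parameter blob
    (psize,) = struct.unpack('>H', data[:2])
    blob, rest = data[2:2 + psize], data[2 + psize:]
    params = {}
    for item in blob.split(b' ') if blob else []:
        # each parameter is urlquoted `name` or `name=value`
        name, sep, value = item.partition(b'=')
        params[urllib.parse.unquote_to_bytes(name)] = (
            urllib.parse.unquote_to_bytes(value) if sep else None)
    return params, rest
```

On the reply bundle shown above the blob size bytes are `\x00\x00`, so this would yield an empty parameter dict, consistent with the `options count: 0` reported by `hg statbundle2` below.
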
The reply is valid

  $ hg statbundle2 < ../reply.hg2
  options count: 0
  parts count: 1
  :test:pong:
  mandatory: 1
  advisory: 0
  payload: 0 bytes

Support for changegroup
===================================

  $ hg unbundle $TESTDIR/bundles/rebase.hg
  adding changesets
  adding manifests
  adding file changes
  added 8 changesets with 7 changes to 7 files (+3 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)

  $ hg log -G
  o  changeset:   8:02de42196ebe
  |  tag:         tip
  |  parent:      6:24b6387c8c8c
  |  user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  |  date:        Sat Apr 30 15:24:48 2011 +0200
  |  summary:     H
  |
  | o  changeset:   7:eea13746799a
  |/|  parent:      6:24b6387c8c8c
  | |  parent:      5:9520eea781bc
  | |  user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  | |  date:        Sat Apr 30 15:24:48 2011 +0200
  | |  summary:     G
  | |
  o |  changeset:   6:24b6387c8c8c
  | |  parent:      1:cd010b8cd998
  | |  user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  | |  date:        Sat Apr 30 15:24:48 2011 +0200
  | |  summary:     F
  | |
  | o  changeset:   5:9520eea781bc
  |/   parent:      1:cd010b8cd998
  |    user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  |    date:        Sat Apr 30 15:24:48 2011 +0200
  |    summary:     E
  |
  | o  changeset:   4:32af7686d403
  | |  user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  | |  date:        Sat Apr 30 15:24:48 2011 +0200
  | |  summary:     D
  | |
  | o  changeset:   3:5fddd98957c8
  | |  user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  | |  date:        Sat Apr 30 15:24:48 2011 +0200
  | |  summary:     C
  | |
  | o  changeset:   2:42ccdea3bb16
  |/   user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  |    date:        Sat Apr 30 15:24:48 2011 +0200
  |    summary:     B
  |
  o  changeset:   1:cd010b8cd998
     parent:      -1:000000000000
     user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
     date:        Sat Apr 30 15:24:48 2011 +0200
     summary:     A
  
  @  changeset:   0:3903775176ed
     user:        test
     date:        Thu Jan 01 00:00:00 1970 +0000
     summary:     a
  

  $ hg bundle2 --debug --rev '8+7+5+4' ../rev.hg2
  4 changesets found
  list of changesets:
  32af7686d403cf45b5d95f2d70cebea587ac806a
  9520eea781bcca16c1e15acc0ba14335a0e8e5ba
  eea13746799a9e0bfd88f29d3c2e9dc9389f524f
  02de42196ebee42ef284b6780a87cdc96e8eaab6
  start emission of HG20 stream
  bundle parameter:
  start of parts
  bundle part: "changegroup"
  bundling: 1/4 changesets (25.00%)
  bundling: 2/4 changesets (50.00%)
  bundling: 3/4 changesets (75.00%)
  bundling: 4/4 changesets (100.00%)
  bundling: 1/4 manifests (25.00%)
  bundling: 2/4 manifests (50.00%)
  bundling: 3/4 manifests (75.00%)
  bundling: 4/4 manifests (100.00%)
  bundling: D 1/3 files (33.33%)
  bundling: E 2/3 files (66.67%)
  bundling: H 3/3 files (100.00%)
  end of bundle

  $ cat ../rev.hg2
  HG20\x00\x00\x00\x12\x0bchangegroup\x00\x00\x00\x00\x00\x00\x00\x00\x06\x19HG10UN\x00\x00\x00\xa42\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j_\xdd\xd9\x89W\xc8\xa5JMCm\xfe\x1d\xa9\xd8\x7f!\xa1\xb9{\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)6e1f4c47ecb533ffd0c8e52cdc88afb6cd39e20c (esc)
  \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02D (esc)
  \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01D\x00\x00\x00\xa4\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xcd\x01\x0b\x8c\xd9\x98\xf3\x98\x1aZ\x81\x15\xf9O\x8d\xa4\xabP`\x89\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)4dece9c826f69490507b98c6383a3009b295837d (esc)
  \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02E (esc)
  \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01E\x00\x00\x00\xa2\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)365b93d57fdf4814e2b5911d6bacff2b12014441 (esc)
  \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x00\x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01G\x00\x00\x00\xa4\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
  \x87\xcd\xc9n\x8e\xaa\xb6$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
  \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)8bee48edc7318541fc0013ee41b089276a8c24bf (esc)
  \x00\x00\x00f\x00\x00\x00f\x00\x00\x00\x02H (esc)
  \x00\x00\x00g\x00\x00\x00h\x00\x00\x00\x01H\x00\x00\x00\x00\x00\x00\x00\x8bn\x1fLG\xec\xb53\xff\xd0\xc8\xe5,\xdc\x88\xaf\xb6\xcd9\xe2\x0cf\xa5\xa0\x18\x17\xfd\xf5#\x9c'8\x02\xb5\xb7a\x8d\x05\x1c\x89\xe4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+D\x00c3f1ca2924c16a19b0656a84900e504e5b0aec2d (esc)
  \x00\x00\x00\x8bM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\x00}\x8c\x9d\x88\x84\x13%\xf5\xc6\xb0cq\xb3[N\x8a+\x1a\x83\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00+\x00\x00\x00\xac\x00\x00\x00+E\x009c6fd0350a6c0d0c49d4a9c5017cf07043f54e58 (esc)
  \x00\x00\x00\x8b6[\x93\xd5\x7f\xdfH\x14\xe2\xb5\x91\x1dk\xac\xff+\x12\x01DA(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xceM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00V\x00\x00\x00V\x00\x00\x00+F\x0022bfcfd62a21a3287edbd4d656218d0f525ed76a (esc)
  \x00\x00\x00\x97\x8b\xeeH\xed\xc71\x85A\xfc\x00\x13\xeeA\xb0\x89'j\x8c$\xbf(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xce\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
  \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00+\x00\x00\x00V\x00\x00\x00\x00\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+H\x008500189e74a9e0475e822093bc7db0d631aeb0b4 (esc)
  \x00\x00\x00\x00\x00\x00\x00\x05D\x00\x00\x00b\xc3\xf1\xca)$\xc1j\x19\xb0ej\x84\x90\x0ePN[ (esc)
  \xec-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02D (esc)
  \x00\x00\x00\x00\x00\x00\x00\x05E\x00\x00\x00b\x9co\xd05 (esc)
  l\r (no-eol) (esc)
  \x0cI\xd4\xa9\xc5\x01|\xf0pC\xf5NX\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02E (esc)
  \x00\x00\x00\x00\x00\x00\x00\x05H\x00\x00\x00b\x85\x00\x18\x9et\xa9\xe0G^\x82 \x93\xbc}\xb0\xd61\xae\xb0\xb4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
  \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02H (esc)
  \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)

  $ hg unbundle2 ../rev-replay.hg2 < ../rev.hg2
  adding changesets
  adding manifests
  adding file changes
  added 0 changesets with 0 changes to 3 files
  0 unread bytes
  addchangegroup return: 1

  $ cat ../rev-replay.hg2
  HG20\x00\x00\x00/\x11reply:changegroup\x00\x00\x00\x00\x00\x02\x0b\x01\x06\x01in-reply-to0return1\x00\x00\x00\x00\x00\x00 (no-eol) (esc)

Real world exchange
=====================


clone --pull

  $ cd ..
  $ hg clone main other --pull --rev 9520eea781bc
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  updating to branch default
  2 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg -R other log -G
  @  changeset:   1:9520eea781bc
  |  tag:         tip
  |  user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  |  date:        Sat Apr 30 15:24:48 2011 +0200
  |  summary:     E
  |
  o  changeset:   0:cd010b8cd998
     user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
     date:        Sat Apr 30 15:24:48 2011 +0200
     summary:     A
  

pull

  $ hg -R other pull
  pulling from $TESTTMP/main (glob)
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 7 changesets with 6 changes to 6 files (+3 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)