bundle2: change header size and make them signed (new format)...
Pierre-Yves David -
r23009:90f86ad3 default
@@ -1,953 +1,956 b''
1 1 # bundle2.py - generic container format to transmit arbitrary data.
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """Handling of the new bundle2 format
8 8
9 9 The goal of bundle2 is to act as an atomic container used to transmit a set of
10 10 payloads in an application-agnostic way. It consists of a sequence of "parts"
11 11 that are handed to and processed by the application layer.
12 12
13 13
14 14 General format architecture
15 15 ===========================
16 16
17 17 The format is architected as follows:
18 18
19 19 - magic string
20 20 - stream level parameters
21 21 - payload parts (any number)
22 22 - end of stream marker.
23 23
24 24 The binary format
25 25 ============================
26 26
27 27 All numbers are unsigned and big-endian.
28 28
29 29 stream level parameters
30 30 ------------------------
31 31
32 32 The binary format is as follows:
33 33
34 :params size: (16 bits integer)
34 :params size: int32
35 35
36 36 The total number of Bytes used by the parameters
37 37
38 38 :params value: arbitrary number of Bytes
39 39
40 40 A blob of `params size` containing the serialized version of all stream level
41 41 parameters.
42 42
43 43 The blob contains a space separated list of parameters. Parameters with value
44 44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45 45
46 46 Empty names are obviously forbidden.
47 47
48 48 Names MUST start with a letter. If this first letter is lower case, the
49 49 parameter is advisory and can be safely ignored. However, when the first
50 50 letter is capital, the parameter is mandatory and the bundling process MUST
51 51 stop if it is not able to process it.
52 52
53 53 Stream parameters use a simple textual format for two main reasons:
54 54
55 55 - Stream level parameters should remain simple and we want to discourage any
56 56 overly complex usage.
57 57 - Textual data allows easy human inspection of a bundle2 header in case of
58 58 trouble.
59 59
60 60 Any application-level options MUST go into a bundle2 part instead.
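As a sketch of the stream-level parameter encoding described above (standalone Python 3 using `urllib.parse` in place of the module's Python 2 `urllib`; the parameter names are made up for illustration):

```python
import struct
import urllib.parse

def encode_stream_params(params):
    # Space-separated list; names and values are urlquoted.
    # A value of None means the parameter is name-only.
    chunks = []
    for name, value in params:
        chunk = urllib.parse.quote(name)
        if value is not None:
            chunk = '%s=%s' % (chunk, urllib.parse.quote(value))
        chunks.append(chunk)
    blob = ' '.join(chunks).encode('ascii')
    # int32 size prefix followed by the parameter blob itself.
    return struct.pack('>i', len(blob)) + blob

# 'evolution' is advisory (lower case), 'Compression' mandatory (capital);
# both names are hypothetical examples.
data = encode_stream_params([('evolution', None), ('Compression', 'gzip,zlib')])
```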
61 61
62 62 Payload part
63 63 ------------------------
64 64
65 65 The binary format is as follows:
66 66
67 :header size: (16 bits inter)
67 :header size: int32
68 68
69 69 The total number of Bytes used by the part headers. When the header is empty
70 70 (size = 0) this is interpreted as the end of stream marker.
71 71
72 72 :header:
73 73
74 74 The header defines how to interpret the part. It contains two pieces of
75 75 data: the part type and the part parameters.
76 76
77 77 The part type is used to route the part to an application level handler
78 78 that can interpret the payload.
79 79
80 80 Part parameters are passed to the application level handler. They are
81 81 meant to convey information that will help the application level object to
82 82 interpret the part payload.
83 83
84 84 The binary format of the header is as follows:
85 85
86 86 :typesize: (one byte)
87 87
88 88 :parttype: alphanumerical part name
89 89
90 90 :partid: A 32-bit integer (unique in the bundle) that can be used to refer
91 91 to this part.
92 92
93 93 :parameters:
94 94
95 95 A part's parameters may have arbitrary content; the binary structure is::
96 96
97 97 <mandatory-count><advisory-count><param-sizes><param-data>
98 98
99 99 :mandatory-count: 1 byte, number of mandatory parameters
100 100
101 101 :advisory-count: 1 byte, number of advisory parameters
102 102
103 103 :param-sizes:
104 104
105 105 N pairs of bytes, where N is the total number of parameters. Each
106 106 pair contains (<size-of-key>, <size-of-value>) for one parameter.
107 107
108 108 :param-data:
109 109
110 110 A blob of bytes from which each parameter key and value can be
111 111 retrieved using the list of size pairs stored in the previous
112 112 field.
113 113
114 114 Mandatory parameters come first, then the advisory ones.
115 115
116 116 Each parameter's key MUST be unique within the part.
117 117
118 118 :payload:
119 119
120 120 payload is a series of `<chunksize><chunkdata>`.
121 121
122 `chunksize` is a 32 bits integer, `chunkdata` are plain bytes (as much as
122 `chunksize` is an int32, `chunkdata` are plain bytes (as much as
123 123 `chunksize` says). The payload part is concluded by a zero size chunk.
124 124
125 125 The current implementation always produces either zero or one chunk.
126 126 This is an implementation limitation that will ultimately be lifted.
127 127
128 `chunksize` can be negative to trigger special case processing. No such
129 processing is in place yet.
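A minimal sketch of reading the chunked payload framing (standalone Python 3; the special-case handling for negative sizes is intentionally left unimplemented, matching the text):

```python
import io
import struct

def iter_payload_chunks(fp):
    # Each chunk is an int32 size followed by that many bytes;
    # a zero size concludes the payload.
    size = struct.unpack('>i', fp.read(4))[0]
    while size:
        if size < 0:
            # reserved for special case processing, none defined yet
            raise ValueError('negative chunk size not supported')
        yield fp.read(size)
        size = struct.unpack('>i', fp.read(4))[0]

stream = io.BytesIO(struct.pack('>i', 5) + b'hello' + struct.pack('>i', 0))
chunks = list(iter_payload_chunks(stream))
```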
130
128 131 Bundle processing
129 132 ============================
130 133
131 134 Each part is processed in order using a "part handler". Handlers are
132 135 registered for a certain part type.
133 136
134 137 The matching of a part to its handler is case insensitive. The case of the
135 138 part type is used to know if a part is mandatory or advisory. If the part type
136 139 contains any uppercase char it is considered mandatory. When no handler is
137 140 known for a Mandatory part, the process is aborted and an exception is raised.
138 141 If the part is advisory and no handler is known, the part is ignored. When the
139 142 process is aborted, the full bundle is still read from the stream to keep the
140 143 channel usable. But none of the parts read after an abort are processed. In the
141 144 future, dropping the stream may become an option for channels we do not care to
142 145 preserve.
143 146 """
144 147
145 148 import util
146 149 import struct
147 150 import urllib
148 151 import string
149 152 import obsolete
150 153 import pushkey
151 154
152 155 import changegroup, error
153 156 from i18n import _
154 157
155 158 _pack = struct.pack
156 159 _unpack = struct.unpack
157 160
158 _magicstring = 'HG2X'
161 _magicstring = 'HG2Y'
159 162
160 _fstreamparamsize = '>H'
161 _fpartheadersize = '>H'
163 _fstreamparamsize = '>i'
164 _fpartheadersize = '>i'
162 165 _fparttypesize = '>B'
163 166 _fpartid = '>I'
164 _fpayloadsize = '>I'
167 _fpayloadsize = '>i'
165 168 _fpartparamcount = '>BB'
166 169
167 170 preferedchunksize = 4096
168 171
169 172 def _makefpartparamsizes(nbparams):
170 173 """return a struct format to read part parameter sizes
171 174
172 175 The number parameters is variable so we need to build that format
173 176 dynamically.
174 177 """
175 178 return '>'+('BB'*nbparams)
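For illustration, a standalone sketch of what this helper produces (re-stated here so the example is self-contained):

```python
import struct

def _makefpartparamsizes(nbparams):
    # mirrors the helper above: one unsigned byte for each key size
    # and each value size
    return '>' + ('BB' * nbparams)

fmt = _makefpartparamsizes(2)
sizes = struct.unpack(fmt, bytes([3, 5, 2, 4]))
```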
176 179
177 180 parthandlermapping = {}
178 181
179 182 def parthandler(parttype, params=()):
180 183 """decorator that registers a function as a bundle2 part handler
181 184
182 185 eg::
183 186
184 187 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
185 188 def myparttypehandler(...):
186 189 '''process a part of type "my part".'''
187 190 ...
188 191 """
189 192 def _decorator(func):
190 193 lparttype = parttype.lower() # enforce lower case matching.
191 194 assert lparttype not in parthandlermapping
192 195 parthandlermapping[lparttype] = func
193 196 func.params = frozenset(params)
194 197 return func
195 198 return _decorator
196 199
197 200 class unbundlerecords(object):
198 201 """keep record of what happens during an unbundle
199 202
200 203 New records are added using `records.add('cat', obj)`. Where 'cat' is a
201 204 category of record and obj is an arbitrary object.
202 205
203 206 `records['cat']` will return all entries of this category 'cat'.
204 207
205 208 Iterating on the object itself will yield `('category', obj)` tuples
206 209 for all entries.
207 210
208 211 All iterations happen in chronological order.
209 212 """
210 213
211 214 def __init__(self):
212 215 self._categories = {}
213 216 self._sequences = []
214 217 self._replies = {}
215 218
216 219 def add(self, category, entry, inreplyto=None):
217 220 """add a new record of a given category.
218 221
219 222 The entry can then be retrieved in the list returned by
220 223 self['category']."""
221 224 self._categories.setdefault(category, []).append(entry)
222 225 self._sequences.append((category, entry))
223 226 if inreplyto is not None:
224 227 self.getreplies(inreplyto).add(category, entry)
225 228
226 229 def getreplies(self, partid):
227 230 """get the subrecords that reply to a specific part"""
228 231 return self._replies.setdefault(partid, unbundlerecords())
229 232
230 233 def __getitem__(self, cat):
231 234 return tuple(self._categories.get(cat, ()))
232 235
233 236 def __iter__(self):
234 237 return iter(self._sequences)
235 238
236 239 def __len__(self):
237 240 return len(self._sequences)
238 241
239 242 def __nonzero__(self):
240 243 return bool(self._sequences)
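A trimmed, standalone restatement of the class above, showing how records are added and queried (Python 3 sketch, not the module's actual code path):

```python
class unbundlerecords(object):
    # trimmed restatement for illustration: per-category storage plus
    # a chronological sequence of all entries
    def __init__(self):
        self._categories = {}
        self._sequences = []

    def add(self, category, entry):
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))

    def __getitem__(self, cat):
        # immutable view of one category
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        # chronological (category, entry) pairs
        return iter(self._sequences)

records = unbundlerecords()
records.add('changegroup', {'return': 1})
records.add('pushkey', {'namespace': 'phases'})
```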
241 244
242 245 class bundleoperation(object):
243 246 """an object that represents a single bundling process
244 247
245 248 Its purpose is to carry unbundle-related objects and states.
246 249
247 250 A new object should be created at the beginning of each bundle processing.
248 251 The object is to be returned by the processing function.
249 252
250 253 The object has very little content now; it will ultimately contain:
251 254 * an access to the repo the bundle is applied to,
252 255 * a ui object,
253 256 * a way to retrieve a transaction to add changes to the repo,
254 257 * a way to record the result of processing each part,
255 258 * a way to construct a bundle response when applicable.
256 259 """
257 260
258 261 def __init__(self, repo, transactiongetter):
259 262 self.repo = repo
260 263 self.ui = repo.ui
261 264 self.records = unbundlerecords()
262 265 self.gettransaction = transactiongetter
263 266 self.reply = None
264 267
265 268 class TransactionUnavailable(RuntimeError):
266 269 pass
267 270
268 271 def _notransaction():
269 272 """default method to get a transaction while processing a bundle
270 273
271 274 Raise an exception to highlight the fact that no transaction was expected
272 275 to be created"""
273 276 raise TransactionUnavailable()
274 277
275 278 def processbundle(repo, unbundler, transactiongetter=_notransaction):
276 279 """This function processes a bundle, applying effects to/from a repo
277 280
278 281 It iterates over each part then searches for and uses the proper handling
279 282 code to process the part. Parts are processed in order.
280 283
281 284 This is a very early version of this function that will be strongly reworked
282 285 before final usage.
283 286
284 287 An unknown Mandatory part will abort the process.
285 288 """
286 289 op = bundleoperation(repo, transactiongetter)
287 290 # todo:
288 291 # - replace this with an init function soon.
289 292 # - exception catching
290 293 unbundler.params
291 294 iterparts = unbundler.iterparts()
292 295 part = None
293 296 try:
294 297 for part in iterparts:
295 298 _processpart(op, part)
296 299 except Exception, exc:
297 300 for part in iterparts:
298 301 # consume the bundle content
299 302 part.read()
300 303 # Small hack to let caller code distinguish exceptions from bundle2
301 304 # processing from the ones from bundle1 processing. This is mostly
302 305 # needed to handle different return codes to unbundle according to the
303 306 # type of bundle. We should probably clean up or drop this return code
304 307 # craziness in a future version.
305 308 exc.duringunbundle2 = True
306 309 raise
307 310 return op
308 311
309 312 def _processpart(op, part):
310 313 """process a single part from a bundle
311 314
312 315 The part is guaranteed to have been fully consumed when the function exits
313 316 (even if an exception is raised)."""
314 317 try:
315 318 parttype = part.type
316 319 # part key are matched lower case
317 320 key = parttype.lower()
318 321 try:
319 322 handler = parthandlermapping.get(key)
320 323 if handler is None:
321 324 raise error.BundleValueError(parttype=key)
322 325 op.ui.debug('found a handler for part %r\n' % parttype)
323 326 unknownparams = part.mandatorykeys - handler.params
324 327 if unknownparams:
325 328 unknownparams = list(unknownparams)
326 329 unknownparams.sort()
327 330 raise error.BundleValueError(parttype=key,
328 331 params=unknownparams)
329 332 except error.BundleValueError, exc:
330 333 if key != parttype: # mandatory parts
331 334 raise
332 335 op.ui.debug('ignoring unsupported advisory part %s\n' % exc)
333 336 return # skip to part processing
334 337
335 338 # handler is called outside the above try block so that we don't
336 339 # risk catching KeyErrors from anything other than the
337 340 # parthandlermapping lookup (any KeyError raised by handler()
338 341 # itself represents a defect of a different variety).
339 342 output = None
340 343 if op.reply is not None:
341 344 op.ui.pushbuffer(error=True)
342 345 output = ''
343 346 try:
344 347 handler(op, part)
345 348 finally:
346 349 if output is not None:
347 350 output = op.ui.popbuffer()
348 351 if output:
349 352 outpart = op.reply.newpart('b2x:output', data=output)
350 353 outpart.addparam('in-reply-to', str(part.id), mandatory=False)
351 354 finally:
352 355 # consume the part content to not corrupt the stream.
353 356 part.read()
354 357
355 358
356 359 def decodecaps(blob):
357 360 """decode a bundle2 caps bytes blob into a dictionary
358 361
359 362 The blob is a list of capabilities (one per line)
360 363 Capabilities may have values using a line of the form::
361 364
362 365 capability=value1,value2,value3
363 366
364 367 The values are always a list."""
365 368 caps = {}
366 369 for line in blob.splitlines():
367 370 if not line:
368 371 continue
369 372 if '=' not in line:
370 373 key, vals = line, ()
371 374 else:
372 375 key, vals = line.split('=', 1)
373 376 vals = vals.split(',')
374 377 key = urllib.unquote(key)
375 378 vals = [urllib.unquote(v) for v in vals]
376 379 caps[key] = vals
377 380 return caps
378 381
379 382 def encodecaps(caps):
380 383 """encode a bundle2 caps dictionary into a bytes blob"""
381 384 chunks = []
382 385 for ca in sorted(caps):
383 386 vals = caps[ca]
384 387 ca = urllib.quote(ca)
385 388 vals = [urllib.quote(v) for v in vals]
386 389 if vals:
387 390 ca = "%s=%s" % (ca, ','.join(vals))
388 391 chunks.append(ca)
389 392 return '\n'.join(chunks)
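A standalone round-trip sketch of the two helpers above (Python 3's `urllib.parse` substituted for the Python 2 `urllib` calls; the capability names are illustrative):

```python
import urllib.parse

def encodecaps(caps):
    # one capability per line, values comma-separated, everything urlquoted
    chunks = []
    for ca in sorted(caps):
        vals = [urllib.parse.quote(v) for v in caps[ca]]
        ca = urllib.parse.quote(ca)
        if vals:
            ca = '%s=%s' % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)

def decodecaps(blob):
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, []
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        caps[urllib.parse.unquote(key)] = [urllib.parse.unquote(v)
                                           for v in vals]
    return caps

caps = {'b2x:listkeys': [], 'b2x:obsmarkers': ['V1']}
blob = encodecaps(caps)
```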
390 393
391 394 class bundle20(object):
392 395 """represent an outgoing bundle2 container
393 396
394 397 Use the `addparam` method to add stream level parameters and `newpart` to
395 398 populate it. Then call `getchunks` to retrieve all the binary chunks of
396 399 data that compose the bundle2 container."""
397 400
398 401 def __init__(self, ui, capabilities=()):
399 402 self.ui = ui
400 403 self._params = []
401 404 self._parts = []
402 405 self.capabilities = dict(capabilities)
403 406
404 407 @property
405 408 def nbparts(self):
406 409 """total number of parts added to the bundler"""
407 410 return len(self._parts)
408 411
409 412 # methods used to define the bundle2 content
410 413 def addparam(self, name, value=None):
411 414 """add a stream level parameter"""
412 415 if not name:
413 416 raise ValueError('empty parameter name')
414 417 if name[0] not in string.letters:
415 418 raise ValueError('non letter first character: %r' % name)
416 419 self._params.append((name, value))
417 420
418 421 def addpart(self, part):
419 422 """add a new part to the bundle2 container
420 423
421 424 Parts contains the actual applicative payload."""
422 425 assert part.id is None
423 426 part.id = len(self._parts) # very cheap counter
424 427 self._parts.append(part)
425 428
426 429 def newpart(self, typeid, *args, **kwargs):
427 430 """create a new part and add it to the container
428 431
429 432 The part is directly added to the container. For now, this means
430 433 that any failure to properly initialize the part after calling
431 434 ``newpart`` should result in a failure of the whole bundling process.
432 435
433 436 You can still fall back to manually creating and adding a part if you
434 437 need better control."""
435 438 part = bundlepart(typeid, *args, **kwargs)
436 439 self.addpart(part)
437 440 return part
438 441
439 442 # methods used to generate the bundle2 stream
440 443 def getchunks(self):
441 444 self.ui.debug('start emission of %s stream\n' % _magicstring)
442 445 yield _magicstring
443 446 param = self._paramchunk()
444 447 self.ui.debug('bundle parameter: %s\n' % param)
445 448 yield _pack(_fstreamparamsize, len(param))
446 449 if param:
447 450 yield param
448 451
449 452 self.ui.debug('start of parts\n')
450 453 for part in self._parts:
451 454 self.ui.debug('bundle part: "%s"\n' % part.type)
452 455 for chunk in part.getchunks():
453 456 yield chunk
454 457 self.ui.debug('end of bundle\n')
455 458 yield _pack(_fpartheadersize, 0)
456 459
457 460 def _paramchunk(self):
458 461 """return an encoded version of all stream parameters"""
459 462 blocks = []
460 463 for par, value in self._params:
461 464 par = urllib.quote(par)
462 465 if value is not None:
463 466 value = urllib.quote(value)
464 467 par = '%s=%s' % (par, value)
465 468 blocks.append(par)
466 469 return ' '.join(blocks)
467 470
468 471 class unpackermixin(object):
469 472 """A mixin to extract bytes and struct data from a stream"""
470 473
471 474 def __init__(self, fp):
472 475 self._fp = fp
473 476
474 477 def _unpack(self, format):
475 478 """unpack this struct format from the stream"""
476 479 data = self._readexact(struct.calcsize(format))
477 480 return _unpack(format, data)
478 481
479 482 def _readexact(self, size):
480 483 """read exactly <size> bytes from the stream"""
481 484 return changegroup.readexactly(self._fp, size)
482 485
483 486
484 487 class unbundle20(unpackermixin):
485 488 """interpret a bundle2 stream
486 489
487 490 This class is fed with a binary stream and yields parts through its
488 491 `iterparts` method."""
489 492
490 493 def __init__(self, ui, fp, header=None):
491 494 """If header is specified, we do not read it out of the stream."""
492 495 self.ui = ui
493 496 super(unbundle20, self).__init__(fp)
494 497 if header is None:
495 498 header = self._readexact(4)
496 499 magic, version = header[0:2], header[2:4]
497 500 if magic != 'HG':
498 501 raise util.Abort(_('not a Mercurial bundle'))
499 if version != '2X':
502 if version != '2Y':
500 503 raise util.Abort(_('unknown bundle version %s') % version)
501 504 self.ui.debug('start processing of %s stream\n' % header)
502 505
503 506 @util.propertycache
504 507 def params(self):
505 508 """dictionary of stream level parameters"""
506 509 self.ui.debug('reading bundle2 stream parameters\n')
507 510 params = {}
508 511 paramssize = self._unpack(_fstreamparamsize)[0]
509 512 if paramssize:
510 513 for p in self._readexact(paramssize).split(' '):
511 514 p = p.split('=', 1)
512 515 p = [urllib.unquote(i) for i in p]
513 516 if len(p) < 2:
514 517 p.append(None)
515 518 self._processparam(*p)
516 519 params[p[0]] = p[1]
517 520 return params
518 521
519 522 def _processparam(self, name, value):
520 523 """process a parameter, applying its effect if needed
521 524
522 525 Parameters starting with a lower case letter are advisory and will be
523 526 ignored when unknown. For those starting with an upper case letter,
524 527 this function will raise a KeyError when they are unknown.
525 528
526 529 Note: no options are currently supported. Any input will either be
527 530 ignored or fail.
528 531 """
529 532 if not name:
530 533 raise ValueError('empty parameter name')
531 534 if name[0] not in string.letters:
532 535 raise ValueError('non letter first character: %r' % name)
533 536 # Some logic will be later added here to try to process the option for
534 537 # a dict of known parameter.
535 538 if name[0].islower():
536 539 self.ui.debug("ignoring unknown parameter %r\n" % name)
537 540 else:
538 541 raise error.BundleValueError(params=(name,))
539 542
540 543
541 544 def iterparts(self):
542 545 """yield all parts contained in the stream"""
543 546 # make sure params have been loaded
544 547 self.params
545 548 self.ui.debug('start extraction of bundle2 parts\n')
546 549 headerblock = self._readpartheader()
547 550 while headerblock is not None:
548 551 part = unbundlepart(self.ui, headerblock, self._fp)
549 552 yield part
550 553 headerblock = self._readpartheader()
551 554 self.ui.debug('end of bundle2 stream\n')
552 555
553 556 def _readpartheader(self):
554 557 """read a part header size and return the bytes blob
555 558
556 559 returns None if empty"""
557 560 headersize = self._unpack(_fpartheadersize)[0]
558 561 self.ui.debug('part header size: %i\n' % headersize)
559 562 if headersize:
560 563 return self._readexact(headersize)
561 564 return None
562 565
563 566
564 567 class bundlepart(object):
565 568 """A bundle2 part contains application level payload
566 569
567 570 The part `type` is used to route the part to the application level
568 571 handler.
569 572
570 573 The part payload is contained in ``part.data``. It could be raw bytes or a
571 574 generator of byte chunks.
572 575
573 576 You can add parameters to the part using the ``addparam`` method.
574 577 Parameters can be either mandatory (default) or advisory. Remote side
575 578 should be able to safely ignore the advisory ones.
576 579
577 580 Both data and parameters cannot be modified after the generation has begun.
578 581 """
579 582
580 583 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
581 584 data=''):
582 585 self.id = None
583 586 self.type = parttype
584 587 self._data = data
585 588 self._mandatoryparams = list(mandatoryparams)
586 589 self._advisoryparams = list(advisoryparams)
587 590 # checking for duplicated entries
588 591 self._seenparams = set()
589 592 for pname, __ in self._mandatoryparams + self._advisoryparams:
590 593 if pname in self._seenparams:
591 594 raise RuntimeError('duplicated params: %s' % pname)
592 595 self._seenparams.add(pname)
593 596 # status of the part's generation:
594 597 # - None: not started,
595 598 # - False: currently generated,
596 599 # - True: generation done.
597 600 self._generated = None
598 601
599 602 # methods used to define the part content
600 603 def __setdata(self, data):
601 604 if self._generated is not None:
602 605 raise error.ReadOnlyPartError('part is being generated')
603 606 self._data = data
604 607 def __getdata(self):
605 608 return self._data
606 609 data = property(__getdata, __setdata)
607 610
608 611 @property
609 612 def mandatoryparams(self):
610 613 # make it an immutable tuple to force people through ``addparam``
611 614 return tuple(self._mandatoryparams)
612 615
613 616 @property
614 617 def advisoryparams(self):
615 618 # make it an immutable tuple to force people through ``addparam``
616 619 return tuple(self._advisoryparams)
617 620
618 621 def addparam(self, name, value='', mandatory=True):
619 622 if self._generated is not None:
620 623 raise error.ReadOnlyPartError('part is being generated')
621 624 if name in self._seenparams:
622 625 raise ValueError('duplicated params: %s' % name)
623 626 self._seenparams.add(name)
624 627 params = self._advisoryparams
625 628 if mandatory:
626 629 params = self._mandatoryparams
627 630 params.append((name, value))
628 631
629 632 # methods used to generate the bundle2 stream
630 633 def getchunks(self):
631 634 if self._generated is not None:
632 635 raise RuntimeError('part can only be consumed once')
633 636 self._generated = False
634 637 #### header
635 638 ## parttype
636 639 header = [_pack(_fparttypesize, len(self.type)),
637 640 self.type, _pack(_fpartid, self.id),
638 641 ]
639 642 ## parameters
640 643 # count
641 644 manpar = self.mandatoryparams
642 645 advpar = self.advisoryparams
643 646 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
644 647 # size
645 648 parsizes = []
646 649 for key, value in manpar:
647 650 parsizes.append(len(key))
648 651 parsizes.append(len(value))
649 652 for key, value in advpar:
650 653 parsizes.append(len(key))
651 654 parsizes.append(len(value))
652 655 paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
653 656 header.append(paramsizes)
654 657 # key, value
655 658 for key, value in manpar:
656 659 header.append(key)
657 660 header.append(value)
658 661 for key, value in advpar:
659 662 header.append(key)
660 663 header.append(value)
661 664 ## finalize header
662 665 headerchunk = ''.join(header)
663 666 yield _pack(_fpartheadersize, len(headerchunk))
664 667 yield headerchunk
665 668 ## payload
666 669 for chunk in self._payloadchunks():
667 670 yield _pack(_fpayloadsize, len(chunk))
668 671 yield chunk
669 672 # end of payload
670 673 yield _pack(_fpayloadsize, 0)
671 674 self._generated = True
672 675
673 676 def _payloadchunks(self):
674 677 """yield chunks of the part payload
675 678
676 679 Exists to handle the different methods to provide data to a part."""
677 680 # we only support fixed size data now.
678 681 # This will be improved in the future.
679 682 if util.safehasattr(self.data, 'next'):
680 683 buff = util.chunkbuffer(self.data)
681 684 chunk = buff.read(preferedchunksize)
682 685 while chunk:
683 686 yield chunk
684 687 chunk = buff.read(preferedchunksize)
685 688 elif len(self.data):
686 689 yield self.data
687 690
688 691 class unbundlepart(unpackermixin):
689 692 """a bundle part read from a bundle"""
690 693
691 694 def __init__(self, ui, header, fp):
692 695 super(unbundlepart, self).__init__(fp)
693 696 self.ui = ui
694 697 # unbundle state attr
695 698 self._headerdata = header
696 699 self._headeroffset = 0
697 700 self._initialized = False
698 701 self.consumed = False
699 702 # part data
700 703 self.id = None
701 704 self.type = None
702 705 self.mandatoryparams = None
703 706 self.advisoryparams = None
704 707 self.params = None
705 708 self.mandatorykeys = ()
706 709 self._payloadstream = None
707 710 self._readheader()
708 711
709 712 def _fromheader(self, size):
710 713 """return the next <size> bytes from the header"""
711 714 offset = self._headeroffset
712 715 data = self._headerdata[offset:(offset + size)]
713 716 self._headeroffset = offset + size
714 717 return data
715 718
716 719 def _unpackheader(self, format):
717 720 """read given format from header
718 721
719 722 This automatically computes the size of the format to read."""
720 723 data = self._fromheader(struct.calcsize(format))
721 724 return _unpack(format, data)
722 725
723 726 def _initparams(self, mandatoryparams, advisoryparams):
724 727 """internal function to set up all logic-related parameters"""
725 728 # make it read only to prevent people touching it by mistake.
726 729 self.mandatoryparams = tuple(mandatoryparams)
727 730 self.advisoryparams = tuple(advisoryparams)
728 731 # user friendly UI
729 732 self.params = dict(self.mandatoryparams)
730 733 self.params.update(dict(self.advisoryparams))
731 734 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
732 735
733 736 def _readheader(self):
734 737 """read the header and setup the object"""
735 738 typesize = self._unpackheader(_fparttypesize)[0]
736 739 self.type = self._fromheader(typesize)
737 740 self.ui.debug('part type: "%s"\n' % self.type)
738 741 self.id = self._unpackheader(_fpartid)[0]
739 742 self.ui.debug('part id: "%s"\n' % self.id)
740 743 ## reading parameters
741 744 # param count
742 745 mancount, advcount = self._unpackheader(_fpartparamcount)
743 746 self.ui.debug('part parameters: %i\n' % (mancount + advcount))
744 747 # param size
745 748 fparamsizes = _makefpartparamsizes(mancount + advcount)
746 749 paramsizes = self._unpackheader(fparamsizes)
747 750 # make it a list of pairs again
748 751 paramsizes = zip(paramsizes[::2], paramsizes[1::2])
749 752 # split mandatory from advisory
750 753 mansizes = paramsizes[:mancount]
751 754 advsizes = paramsizes[mancount:]
752 755 # retrieve param values
753 756 manparams = []
754 757 for key, value in mansizes:
755 758 manparams.append((self._fromheader(key), self._fromheader(value)))
756 759 advparams = []
757 760 for key, value in advsizes:
758 761 advparams.append((self._fromheader(key), self._fromheader(value)))
759 762 self._initparams(manparams, advparams)
760 763 ## part payload
761 764 def payloadchunks():
762 765 payloadsize = self._unpack(_fpayloadsize)[0]
763 766 self.ui.debug('payload chunk size: %i\n' % payloadsize)
764 767 while payloadsize:
765 768 yield self._readexact(payloadsize)
766 769 payloadsize = self._unpack(_fpayloadsize)[0]
767 770 self.ui.debug('payload chunk size: %i\n' % payloadsize)
768 771 self._payloadstream = util.chunkbuffer(payloadchunks())
769 772 # we read the data, tell it
770 773 self._initialized = True
771 774
772 775 def read(self, size=None):
773 776 """read payload data"""
774 777 if not self._initialized:
775 778 self._readheader()
776 779 if size is None:
777 780 data = self._payloadstream.read()
778 781 else:
779 782 data = self._payloadstream.read(size)
780 783 if size is None or len(data) < size:
781 784 self.consumed = True
782 785 return data
783 786
784 capabilities = {'HG2X': (),
787 capabilities = {'HG2Y': (),
785 788 'b2x:listkeys': (),
786 789 'b2x:pushkey': (),
787 790 'b2x:changegroup': (),
788 791 }
789 792
790 793 def getrepocaps(repo):
791 794 """return the bundle2 capabilities for a given repo
792 795
793 796 Exists to allow extensions (like evolution) to mutate the capabilities.
794 797 """
795 798 caps = capabilities.copy()
796 799 if obsolete.isenabled(repo, obsolete.exchangeopt):
797 800 supportedformat = tuple('V%i' % v for v in obsolete.formats)
798 801 caps['b2x:obsmarkers'] = supportedformat
799 802 return caps
800 803
801 804 def bundle2caps(remote):
802 805 """return the bundlecapabilities of a peer as dict"""
803 806 raw = remote.capable('bundle2-exp')
804 807 if not raw and raw != '':
805 808 return {}
806 809 capsblob = urllib.unquote(remote.capable('bundle2-exp'))
807 810 return decodecaps(capsblob)
808 811
809 812 def obsmarkersversion(caps):
810 813 """extract the list of supported obsmarkers versions from a bundle2caps dict
811 814 """
812 815 obscaps = caps.get('b2x:obsmarkers', ())
813 816 return [int(c[1:]) for c in obscaps if c.startswith('V')]
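For illustration, a standalone restatement showing the version extraction (the caps values here are hypothetical):

```python
def obsmarkersversion(caps):
    # mirrors the helper above: keep 'V<n>' entries and strip the prefix
    obscaps = caps.get('b2x:obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

versions = obsmarkersversion({'b2x:obsmarkers': ['V0', 'V1', 'other']})
```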
814 817
815 818 @parthandler('b2x:changegroup')
816 819 def handlechangegroup(op, inpart):
817 820 """apply a changegroup part on the repo
818 821
819 822 This is a very early implementation that will be massively reworked before
820 823 being inflicted on any end-user.
821 824 """
822 825 # Make sure we trigger a transaction creation
823 826 #
824 827 # The addchangegroup function will get a transaction object by itself, but
825 828 # we need to make sure we trigger the creation of a transaction object used
826 829 # for the whole processing scope.
827 830 op.gettransaction()
828 831 cg = changegroup.cg1unpacker(inpart, 'UN')
829 832 # the source and url passed here are overwritten by the one contained in
830 833 # the transaction.hookargs argument. So 'bundle2' is a placeholder
831 834 ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
832 835 op.records.add('changegroup', {'return': ret})
833 836 if op.reply is not None:
834 837 # This is definitely not the final form of this
835 838 # return. But one need to start somewhere.
836 839 part = op.reply.newpart('b2x:reply:changegroup')
837 840 part.addparam('in-reply-to', str(inpart.id), mandatory=False)
838 841 part.addparam('return', '%i' % ret, mandatory=False)
839 842 assert not inpart.read()
840 843
841 844 @parthandler('b2x:reply:changegroup', ('return', 'in-reply-to'))
842 845 def handlereplychangegroup(op, inpart):
843 846 ret = int(inpart.params['return'])
844 847 replyto = int(inpart.params['in-reply-to'])
845 848 op.records.add('changegroup', {'return': ret}, replyto)
846 849
847 850 @parthandler('b2x:check:heads')
848 851 def handlecheckheads(op, inpart):
849 852 """check that the heads of the repo did not change
850 853
851 854 This is used to detect a push race when using unbundle.
852 855 This replaces the "heads" argument of unbundle."""
853 856 h = inpart.read(20)
854 857 heads = []
855 858 while len(h) == 20:
856 859 heads.append(h)
857 860 h = inpart.read(20)
858 861 assert not h
859 862 if heads != op.repo.heads():
860 863 raise error.PushRaced('repository changed while pushing - '
861 864 'please try again')
862 865
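The handler above reads the part payload as a flat sequence of raw 20-byte binary node ids. A minimal standalone sketch of that framing (illustrative only, not Mercurial's API; `encodeheads`/`decodeheads` are hypothetical names):

```python
import hashlib

def encodeheads(heads):
    """Concatenate raw 20-byte node ids into a b2x:check:heads-style payload."""
    assert all(len(h) == 20 for h in heads)
    return b''.join(heads)

def decodeheads(payload):
    """Mirror of the handler's loop: consume the payload 20 bytes at a time."""
    heads = []
    for i in range(0, len(payload), 20):
        chunk = payload[i:i + 20]
        assert len(chunk) == 20  # stream length must be a multiple of 20
        heads.append(chunk)
    return heads

nodes = [hashlib.sha1(s).digest() for s in (b'a', b'b')]
assert decodeheads(encodeheads(nodes)) == nodes
```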
863 866 @parthandler('b2x:output')
864 867 def handleoutput(op, inpart):
865 868 """forward output captured on the server to the client"""
866 869 for line in inpart.read().splitlines():
867 870 op.ui.write(('remote: %s\n' % line))
868 871
869 872 @parthandler('b2x:replycaps')
870 873 def handlereplycaps(op, inpart):
871 874 """Notify that a reply bundle should be created
872 875
873 876 The payload contains the capabilities information for the reply"""
874 877 caps = decodecaps(inpart.read())
875 878 if op.reply is None:
876 879 op.reply = bundle20(op.ui, caps)
877 880
878 881 @parthandler('b2x:error:abort', ('message', 'hint'))
879 882 def handlereplycaps(op, inpart):
880 883 """Used to transmit abort error over the wire"""
881 884 raise util.Abort(inpart.params['message'], hint=inpart.params.get('hint'))
882 885
883 886 @parthandler('b2x:error:unsupportedcontent', ('parttype', 'params'))
884 887 def handlereplycaps(op, inpart):
885 888 """Used to transmit unknown content error over the wire"""
886 889 kwargs = {}
887 890 parttype = inpart.params.get('parttype')
888 891 if parttype is not None:
889 892 kwargs['parttype'] = parttype
890 893 params = inpart.params.get('params')
891 894 if params is not None:
892 895 kwargs['params'] = params.split('\0')
893 896
894 897 raise error.BundleValueError(**kwargs)
895 898
896 899 @parthandler('b2x:error:pushraced', ('message',))
897 900 def handlereplycaps(op, inpart):
898 901 """Used to transmit push race error over the wire"""
899 902 raise error.ResponseError(_('push failed:'), inpart.params['message'])
900 903
901 904 @parthandler('b2x:listkeys', ('namespace',))
902 905 def handlelistkeys(op, inpart):
903 906 """retrieve pushkey namespace content stored in a bundle2"""
904 907 namespace = inpart.params['namespace']
905 908 r = pushkey.decodekeys(inpart.read())
906 909 op.records.add('listkeys', (namespace, r))
907 910
908 911 @parthandler('b2x:pushkey', ('namespace', 'key', 'old', 'new'))
909 912 def handlepushkey(op, inpart):
910 913 """process a pushkey request"""
911 914 dec = pushkey.decode
912 915 namespace = dec(inpart.params['namespace'])
913 916 key = dec(inpart.params['key'])
914 917 old = dec(inpart.params['old'])
915 918 new = dec(inpart.params['new'])
916 919 ret = op.repo.pushkey(namespace, key, old, new)
917 920 record = {'namespace': namespace,
918 921 'key': key,
919 922 'old': old,
920 923 'new': new}
921 924 op.records.add('pushkey', record)
922 925 if op.reply is not None:
923 926 rpart = op.reply.newpart('b2x:reply:pushkey')
924 927 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
925 928 rpart.addparam('return', '%i' % ret, mandatory=False)
926 929
927 930 @parthandler('b2x:reply:pushkey', ('return', 'in-reply-to'))
928 931 def handlepushkeyreply(op, inpart):
929 932 """retrieve the result of a pushkey request"""
930 933 ret = int(inpart.params['return'])
931 934 partid = int(inpart.params['in-reply-to'])
932 935 op.records.add('pushkey', {'return': ret}, partid)
933 936
934 937 @parthandler('b2x:obsmarkers')
935 938 def handleobsmarker(op, inpart):
936 939 """add a stream of obsmarkers to the repo"""
937 940 tr = op.gettransaction()
938 941 new = op.repo.obsstore.mergemarkers(tr, inpart.read())
939 942 if new:
940 943 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
941 944 op.records.add('obsmarkers', {'new': new})
942 945 if op.reply is not None:
943 946 rpart = op.reply.newpart('b2x:reply:obsmarkers')
944 947 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
945 948 rpart.addparam('new', '%i' % new, mandatory=False)
946 949
947 950
948 951 @parthandler('b2x:reply:obsmarkers', ('new', 'in-reply-to'))
949 952 def handleobsmarkerreply(op, inpart):
950 953 """retrieve the result of an obsmarkers push"""
951 954 ret = int(inpart.params['new'])
952 955 partid = int(inpart.params['in-reply-to'])
953 956 op.records.add('obsmarkers', {'new': ret}, partid)
@@ -1,1266 +1,1266 b''
1 1 # exchange.py - utility to exchange data between repos.
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from i18n import _
9 9 from node import hex, nullid
10 10 import errno, urllib
11 11 import util, scmutil, changegroup, base85, error
12 12 import discovery, phases, obsolete, bookmarks as bookmod, bundle2, pushkey
13 13
14 14 def readbundle(ui, fh, fname, vfs=None):
15 15 header = changegroup.readexactly(fh, 4)
16 16
17 17 alg = None
18 18 if not fname:
19 19 fname = "stream"
20 20 if not header.startswith('HG') and header.startswith('\0'):
21 21 fh = changegroup.headerlessfixup(fh, header)
22 22 header = "HG10"
23 23 alg = 'UN'
24 24 elif vfs:
25 25 fname = vfs.join(fname)
26 26
27 27 magic, version = header[0:2], header[2:4]
28 28
29 29 if magic != 'HG':
30 30 raise util.Abort(_('%s: not a Mercurial bundle') % fname)
31 31 if version == '10':
32 32 if alg is None:
33 33 alg = changegroup.readexactly(fh, 2)
34 34 return changegroup.cg1unpacker(fh, alg)
35 elif version == '2X':
35 elif version == '2Y':
36 36 return bundle2.unbundle20(ui, fh, header=magic + version)
37 37 else:
38 38 raise util.Abort(_('%s: unknown bundle version %s') % (fname, version))
39 39
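`readbundle` above dispatches on a 4-byte header: the `HG` magic plus a two-byte version, with the compression marker following only for `HG10`. A simplified standalone sketch of that dispatch (error handling reduced; `parseheader` is a hypothetical helper, not the real function):

```python
import io

def parseheader(fh):
    header = fh.read(4)
    magic, version = header[0:2], header[2:4]
    if magic != b'HG':
        raise ValueError('not a Mercurial bundle')
    if version == b'10':
        # HG10 is followed by a 2-byte compression marker (e.g. 'UN', 'GZ')
        return 'cg1', fh.read(2)
    elif version == b'2Y':
        return 'bundle2', None
    raise ValueError('unknown bundle version %r' % version)

assert parseheader(io.BytesIO(b'HG10UN')) == ('cg1', b'UN')
assert parseheader(io.BytesIO(b'HG2Y')) == ('bundle2', None)
```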
40 40 def buildobsmarkerspart(bundler, markers):
41 41 """add an obsmarker part to the bundler with <markers>
42 42
43 43 No part is created if markers is empty.
44 44 Raises ValueError if the bundler doesn't support any known obsmarker format.
45 45 """
46 46 if markers:
47 47 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
48 48 version = obsolete.commonversion(remoteversions)
49 49 if version is None:
50 50 raise ValueError('bundler does not support common obsmarker format')
51 51 stream = obsolete.encodemarkers(markers, True, version=version)
52 52 return bundler.newpart('B2X:OBSMARKERS', data=stream)
53 53 return None
54 54
55 55 class pushoperation(object):
56 56 """A object that represent a single push operation
57 57
58 58 Its purpose is to carry push related state and very common operations.
59 59
60 60 A new one should be created at the beginning of each push and discarded
61 61 afterward.
62 62 """
63 63
64 64 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
65 65 bookmarks=()):
66 66 # repo we push from
67 67 self.repo = repo
68 68 self.ui = repo.ui
69 69 # repo we push to
70 70 self.remote = remote
71 71 # force option provided
72 72 self.force = force
73 73 # revs to be pushed (None is "all")
74 74 self.revs = revs
75 75 # bookmark explicitly pushed
76 76 self.bookmarks = bookmarks
77 77 # allow push of new branch
78 78 self.newbranch = newbranch
79 79 # did a local lock get acquired?
80 80 self.locallocked = None
81 81 # step already performed
82 82 # (used to check what steps have been already performed through bundle2)
83 83 self.stepsdone = set()
84 84 # Integer version of the changegroup push result
85 85 # - None means nothing to push
86 86 # - 0 means HTTP error
87 87 # - 1 means we pushed and remote head count is unchanged *or*
88 88 # we have outgoing changesets but refused to push
89 89 # - other values as described by addchangegroup()
90 90 self.cgresult = None
91 91 # Boolean value for the bookmark push
92 92 self.bkresult = None
93 93 # discover.outgoing object (contains common and outgoing data)
94 94 self.outgoing = None
95 95 # all remote heads before the push
96 96 self.remoteheads = None
97 97 # testable as a boolean indicating if any nodes are missing locally.
98 98 self.incoming = None
99 99 # phases changes that must be pushed along side the changesets
100 100 self.outdatedphases = None
101 101 # phases changes that must be pushed if changeset push fails
102 102 self.fallbackoutdatedphases = None
103 103 # outgoing obsmarkers
104 104 self.outobsmarkers = set()
105 105 # outgoing bookmarks
106 106 self.outbookmarks = []
107 107
108 108 @util.propertycache
109 109 def futureheads(self):
110 110 """future remote heads if the changeset push succeeds"""
111 111 return self.outgoing.missingheads
112 112
113 113 @util.propertycache
114 114 def fallbackheads(self):
115 115 """future remote heads if the changeset push fails"""
116 116 if self.revs is None:
117 117 # no target to push, all common heads are relevant
118 118 return self.outgoing.commonheads
119 119 unfi = self.repo.unfiltered()
120 120 # I want cheads = heads(::missingheads and ::commonheads)
121 121 # (missingheads is revs with secret changeset filtered out)
122 122 #
123 123 # This can be expressed as:
124 124 # cheads = ( (missingheads and ::commonheads)
125 125 # + (commonheads and ::missingheads))"
126 126 # )
127 127 #
128 128 # while trying to push we already computed the following:
129 129 # common = (::commonheads)
130 130 # missing = ((commonheads::missingheads) - commonheads)
131 131 #
132 132 # We can pick:
133 133 # * missingheads part of common (::commonheads)
134 134 common = set(self.outgoing.common)
135 135 nm = self.repo.changelog.nodemap
136 136 cheads = [node for node in self.revs if nm[node] in common]
137 137 # and
138 138 # * commonheads parents on missing
139 139 revset = unfi.set('%ln and parents(roots(%ln))',
140 140 self.outgoing.commonheads,
141 141 self.outgoing.missing)
142 142 cheads.extend(c.node() for c in revset)
143 143 return cheads
144 144
145 145 @property
146 146 def commonheads(self):
147 147 """set of all common heads after changeset bundle push"""
148 148 if self.cgresult:
149 149 return self.futureheads
150 150 else:
151 151 return self.fallbackheads
152 152
153 153 # mapping of message used when pushing bookmark
154 154 bookmsgmap = {'update': (_("updating bookmark %s\n"),
155 155 _('updating bookmark %s failed!\n')),
156 156 'export': (_("exporting bookmark %s\n"),
157 157 _('exporting bookmark %s failed!\n')),
158 158 'delete': (_("deleting remote bookmark %s\n"),
159 159 _('deleting remote bookmark %s failed!\n')),
160 160 }
161 161
162 162
163 163 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=()):
164 164 '''Push outgoing changesets (limited by revs) from a local
165 165 repository to remote. Return an integer:
166 166 - None means nothing to push
167 167 - 0 means HTTP error
168 168 - 1 means we pushed and remote head count is unchanged *or*
169 169 we have outgoing changesets but refused to push
170 170 - other values as described by addchangegroup()
171 171 '''
172 172 pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks)
173 173 if pushop.remote.local():
174 174 missing = (set(pushop.repo.requirements)
175 175 - pushop.remote.local().supported)
176 176 if missing:
177 177 msg = _("required features are not"
178 178 " supported in the destination:"
179 179 " %s") % (', '.join(sorted(missing)))
180 180 raise util.Abort(msg)
181 181
182 182 # there are two ways to push to remote repo:
183 183 #
184 184 # addchangegroup assumes local user can lock remote
185 185 # repo (local filesystem, old ssh servers).
186 186 #
187 187 # unbundle assumes local user cannot lock remote repo (new ssh
188 188 # servers, http servers).
189 189
190 190 if not pushop.remote.canpush():
191 191 raise util.Abort(_("destination does not support push"))
192 192 # get local lock as we might write phase data
193 193 locallock = None
194 194 try:
195 195 locallock = pushop.repo.lock()
196 196 pushop.locallocked = True
197 197 except IOError, err:
198 198 pushop.locallocked = False
199 199 if err.errno != errno.EACCES:
200 200 raise
201 201 # source repo cannot be locked.
202 202 # We do not abort the push, but just disable the local phase
203 203 # synchronisation.
204 204 msg = 'cannot lock source repository: %s\n' % err
205 205 pushop.ui.debug(msg)
206 206 try:
207 207 pushop.repo.checkpush(pushop)
208 208 lock = None
209 209 unbundle = pushop.remote.capable('unbundle')
210 210 if not unbundle:
211 211 lock = pushop.remote.lock()
212 212 try:
213 213 _pushdiscovery(pushop)
214 214 if (pushop.repo.ui.configbool('experimental', 'bundle2-exp',
215 215 False)
216 216 and pushop.remote.capable('bundle2-exp')):
217 217 _pushbundle2(pushop)
218 218 _pushchangeset(pushop)
219 219 _pushsyncphase(pushop)
220 220 _pushobsolete(pushop)
221 221 _pushbookmark(pushop)
222 222 finally:
223 223 if lock is not None:
224 224 lock.release()
225 225 finally:
226 226 if locallock is not None:
227 227 locallock.release()
228 228
229 229 return pushop
230 230
231 231 # list of steps to perform discovery before push
232 232 pushdiscoveryorder = []
233 233
234 234 # Mapping between step name and function
235 235 #
236 236 # This exists to help extensions wrap steps if necessary
237 237 pushdiscoverymapping = {}
238 238
239 239 def pushdiscovery(stepname):
240 240 """decorator for function performing discovery before push
241 241
242 242 The function is added to the step -> function mapping and appended to the
243 243 list of steps. Beware that decorated functions will be added in order (this
244 244 may matter).
245 245
246 246 You can only use this decorator for a new step; if you want to wrap a step
247 247 from an extension, change the pushdiscovery dictionary directly."""
248 248 def dec(func):
249 249 assert stepname not in pushdiscoverymapping
250 250 pushdiscoverymapping[stepname] = func
251 251 pushdiscoveryorder.append(stepname)
252 252 return func
253 253 return dec
254 254
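The decorator above implements a simple ordered step registry, and its docstring notes that extensions wrap a step by mutating the mapping rather than re-registering. A standalone sketch of the same pattern (names here are illustrative, not Mercurial's):

```python
steporder = []      # ordered list of step names
stepmapping = {}    # step name -> function

def registerstep(stepname):
    def dec(func):
        assert stepname not in stepmapping  # new steps only
        stepmapping[stepname] = func
        steporder.append(stepname)
        return func
    return dec

@registerstep('changeset')
def discoverchangeset(state):
    state.append('changeset')

# An extension wraps an existing step by editing the mapping directly:
origstep = stepmapping['changeset']
def wrappedstep(state):
    state.append('pre')
    origstep(state)
stepmapping['changeset'] = wrappedstep

state = []
for name in steporder:
    stepmapping[name](state)
assert state == ['pre', 'changeset']
```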
255 255 def _pushdiscovery(pushop):
256 256 """Run all discovery steps"""
257 257 for stepname in pushdiscoveryorder:
258 258 step = pushdiscoverymapping[stepname]
259 259 step(pushop)
260 260
261 261 @pushdiscovery('changeset')
262 262 def _pushdiscoverychangeset(pushop):
263 263 """discover the changeset that need to be pushed"""
264 264 unfi = pushop.repo.unfiltered()
265 265 fci = discovery.findcommonincoming
266 266 commoninc = fci(unfi, pushop.remote, force=pushop.force)
267 267 common, inc, remoteheads = commoninc
268 268 fco = discovery.findcommonoutgoing
269 269 outgoing = fco(unfi, pushop.remote, onlyheads=pushop.revs,
270 270 commoninc=commoninc, force=pushop.force)
271 271 pushop.outgoing = outgoing
272 272 pushop.remoteheads = remoteheads
273 273 pushop.incoming = inc
274 274
275 275 @pushdiscovery('phase')
276 276 def _pushdiscoveryphase(pushop):
277 277 """discover the phase that needs to be pushed
278 278
279 279 (computed for both success and failure case for changesets push)"""
280 280 outgoing = pushop.outgoing
281 281 unfi = pushop.repo.unfiltered()
282 282 remotephases = pushop.remote.listkeys('phases')
283 283 publishing = remotephases.get('publishing', False)
284 284 ana = phases.analyzeremotephases(pushop.repo,
285 285 pushop.fallbackheads,
286 286 remotephases)
287 287 pheads, droots = ana
288 288 extracond = ''
289 289 if not publishing:
290 290 extracond = ' and public()'
291 291 revset = 'heads((%%ln::%%ln) %s)' % extracond
292 292 # Get the list of all revs that are draft on remote but public here.
293 293 # XXX Beware that this revset breaks if droots is not strictly
294 294 # XXX made of roots; we may want to ensure it is, but that is costly
295 295 fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
296 296 if not outgoing.missing:
297 297 future = fallback
298 298 else:
299 299 # adds changeset we are going to push as draft
300 300 #
301 301 # should not be necessary for publishing servers, but because of an
302 302 # issue fixed in xxxxx we have to do it anyway.
303 303 fdroots = list(unfi.set('roots(%ln + %ln::)',
304 304 outgoing.missing, droots))
305 305 fdroots = [f.node() for f in fdroots]
306 306 future = list(unfi.set(revset, fdroots, pushop.futureheads))
307 307 pushop.outdatedphases = future
308 308 pushop.fallbackoutdatedphases = fallback
309 309
310 310 @pushdiscovery('obsmarker')
311 311 def _pushdiscoveryobsmarkers(pushop):
312 312 if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
313 313 and pushop.repo.obsstore
314 314 and 'obsolete' in pushop.remote.listkeys('namespaces')):
315 315 repo = pushop.repo
316 316 # very naive computation, that can be quite expensive on big repo.
317 317 # However: evolution is currently slow on them anyway.
318 318 nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
319 319 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
320 320
321 321 @pushdiscovery('bookmarks')
322 322 def _pushdiscoverybookmarks(pushop):
323 323 ui = pushop.ui
324 324 repo = pushop.repo.unfiltered()
325 325 remote = pushop.remote
326 326 ui.debug("checking for updated bookmarks\n")
327 327 ancestors = ()
328 328 if pushop.revs:
329 329 revnums = map(repo.changelog.rev, pushop.revs)
330 330 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
331 331 remotebookmark = remote.listkeys('bookmarks')
332 332
333 333 explicit = set(pushop.bookmarks)
334 334
335 335 comp = bookmod.compare(repo, repo._bookmarks, remotebookmark, srchex=hex)
336 336 addsrc, adddst, advsrc, advdst, diverge, differ, invalid = comp
337 337 for b, scid, dcid in advsrc:
338 338 if b in explicit:
339 339 explicit.remove(b)
340 340 if not ancestors or repo[scid].rev() in ancestors:
341 341 pushop.outbookmarks.append((b, dcid, scid))
342 342 # search added bookmark
343 343 for b, scid, dcid in addsrc:
344 344 if b in explicit:
345 345 explicit.remove(b)
346 346 pushop.outbookmarks.append((b, '', scid))
347 347 # search for overwritten bookmark
348 348 for b, scid, dcid in advdst + diverge + differ:
349 349 if b in explicit:
350 350 explicit.remove(b)
351 351 pushop.outbookmarks.append((b, dcid, scid))
352 352 # search for bookmark to delete
353 353 for b, scid, dcid in adddst:
354 354 if b in explicit:
355 355 explicit.remove(b)
356 356 # treat as "deleted locally"
357 357 pushop.outbookmarks.append((b, dcid, ''))
358 358
359 359 if explicit:
360 360 explicit = sorted(explicit)
361 361 # we should probably list all of them
362 362 ui.warn(_('bookmark %s does not exist on the local '
363 363 'or remote repository!\n') % explicit[0])
364 364 pushop.bkresult = 2
365 365
366 366 pushop.outbookmarks.sort()
367 367
368 368 def _pushcheckoutgoing(pushop):
369 369 outgoing = pushop.outgoing
370 370 unfi = pushop.repo.unfiltered()
371 371 if not outgoing.missing:
372 372 # nothing to push
373 373 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
374 374 return False
375 375 # something to push
376 376 if not pushop.force:
377 377 # if repo.obsstore == False --> no obsolete
378 378 # then, save the iteration
379 379 if unfi.obsstore:
380 380 # these messages are here because of the 80 char limit
381 381 mso = _("push includes obsolete changeset: %s!")
382 382 mst = {"unstable": _("push includes unstable changeset: %s!"),
383 383 "bumped": _("push includes bumped changeset: %s!"),
384 384 "divergent": _("push includes divergent changeset: %s!")}
385 385 # If we are to push, and there is at least one
386 386 # obsolete or unstable changeset in missing, then at
387 387 # least one of the missing heads will be obsolete or
388 388 # unstable. So checking heads only is ok
389 389 for node in outgoing.missingheads:
390 390 ctx = unfi[node]
391 391 if ctx.obsolete():
392 392 raise util.Abort(mso % ctx)
393 393 elif ctx.troubled():
394 394 raise util.Abort(mst[ctx.troubles()[0]] % ctx)
395 395 newbm = pushop.ui.configlist('bookmarks', 'pushing')
396 396 discovery.checkheads(unfi, pushop.remote, outgoing,
397 397 pushop.remoteheads,
398 398 pushop.newbranch,
399 399 bool(pushop.incoming),
400 400 newbm)
401 401 return True
402 402
403 403 # List of names of steps to perform for an outgoing bundle2, order matters.
404 404 b2partsgenorder = []
405 405
406 406 # Mapping between step name and function
407 407 #
408 408 # This exists to help extensions wrap steps if necessary
409 409 b2partsgenmapping = {}
410 410
411 411 def b2partsgenerator(stepname):
412 412 """decorator for function generating bundle2 part
413 413
414 414 The function is added to the step -> function mapping and appended to the
415 415 list of steps. Beware that decorated functions will be added in order
416 416 (this may matter).
417 417
418 418 You can only use this decorator for new steps; if you want to wrap a step
419 419 from an extension, change the b2partsgenmapping dictionary directly."""
420 420 def dec(func):
421 421 assert stepname not in b2partsgenmapping
422 422 b2partsgenmapping[stepname] = func
423 423 b2partsgenorder.append(stepname)
424 424 return func
425 425 return dec
426 426
427 427 @b2partsgenerator('changeset')
428 428 def _pushb2ctx(pushop, bundler):
429 429 """handle changegroup push through bundle2
430 430
431 431 addchangegroup result is stored in the ``pushop.cgresult`` attribute.
432 432 """
433 433 if 'changesets' in pushop.stepsdone:
434 434 return
435 435 pushop.stepsdone.add('changesets')
436 436 # Send known heads to the server for race detection.
437 437 if not _pushcheckoutgoing(pushop):
438 438 return
439 439 pushop.repo.prepushoutgoinghooks(pushop.repo,
440 440 pushop.remote,
441 441 pushop.outgoing)
442 442 if not pushop.force:
443 443 bundler.newpart('B2X:CHECK:HEADS', data=iter(pushop.remoteheads))
444 444 cg = changegroup.getlocalchangegroup(pushop.repo, 'push', pushop.outgoing)
445 445 cgpart = bundler.newpart('B2X:CHANGEGROUP', data=cg.getchunks())
446 446 def handlereply(op):
447 447 """extract addchangroup returns from server reply"""
448 448 cgreplies = op.records.getreplies(cgpart.id)
449 449 assert len(cgreplies['changegroup']) == 1
450 450 pushop.cgresult = cgreplies['changegroup'][0]['return']
451 451 return handlereply
452 452
453 453 @b2partsgenerator('phase')
454 454 def _pushb2phases(pushop, bundler):
455 455 """handle phase push through bundle2"""
456 456 if 'phases' in pushop.stepsdone:
457 457 return
458 458 b2caps = bundle2.bundle2caps(pushop.remote)
459 459 if not 'b2x:pushkey' in b2caps:
460 460 return
461 461 pushop.stepsdone.add('phases')
462 462 part2node = []
463 463 enc = pushkey.encode
464 464 for newremotehead in pushop.outdatedphases:
465 465 part = bundler.newpart('b2x:pushkey')
466 466 part.addparam('namespace', enc('phases'))
467 467 part.addparam('key', enc(newremotehead.hex()))
468 468 part.addparam('old', enc(str(phases.draft)))
469 469 part.addparam('new', enc(str(phases.public)))
470 470 part2node.append((part.id, newremotehead))
471 471 def handlereply(op):
472 472 for partid, node in part2node:
473 473 partrep = op.records.getreplies(partid)
474 474 results = partrep['pushkey']
475 475 assert len(results) <= 1
476 476 msg = None
477 477 if not results:
478 478 msg = _('server ignored update of %s to public!\n') % node
479 479 elif not int(results[0]['return']):
480 480 msg = _('updating %s to public failed!\n') % node
481 481 if msg is not None:
482 482 pushop.ui.warn(msg)
483 483 return handlereply
484 484
485 485 @b2partsgenerator('obsmarkers')
486 486 def _pushb2obsmarkers(pushop, bundler):
487 487 if 'obsmarkers' in pushop.stepsdone:
488 488 return
489 489 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
490 490 if obsolete.commonversion(remoteversions) is None:
491 491 return
492 492 pushop.stepsdone.add('obsmarkers')
493 493 if pushop.outobsmarkers:
494 494 buildobsmarkerspart(bundler, pushop.outobsmarkers)
495 495
496 496 @b2partsgenerator('bookmarks')
497 497 def _pushb2bookmarks(pushop, bundler):
498 498 """handle phase push through bundle2"""
499 499 if 'bookmarks' in pushop.stepsdone:
500 500 return
501 501 b2caps = bundle2.bundle2caps(pushop.remote)
502 502 if 'b2x:pushkey' not in b2caps:
503 503 return
504 504 pushop.stepsdone.add('bookmarks')
505 505 part2book = []
506 506 enc = pushkey.encode
507 507 for book, old, new in pushop.outbookmarks:
508 508 part = bundler.newpart('b2x:pushkey')
509 509 part.addparam('namespace', enc('bookmarks'))
510 510 part.addparam('key', enc(book))
511 511 part.addparam('old', enc(old))
512 512 part.addparam('new', enc(new))
513 513 action = 'update'
514 514 if not old:
515 515 action = 'export'
516 516 elif not new:
517 517 action = 'delete'
518 518 part2book.append((part.id, book, action))
519 519
520 520
521 521 def handlereply(op):
522 522 ui = pushop.ui
523 523 for partid, book, action in part2book:
524 524 partrep = op.records.getreplies(partid)
525 525 results = partrep['pushkey']
526 526 assert len(results) <= 1
527 527 if not results:
528 528 pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
529 529 else:
530 530 ret = int(results[0]['return'])
531 531 if ret:
532 532 ui.status(bookmsgmap[action][0] % book)
533 533 else:
534 534 ui.warn(bookmsgmap[action][1] % book)
535 535 if pushop.bkresult is not None:
536 536 pushop.bkresult = 1
537 537 return handlereply
538 538
539 539
540 540 def _pushbundle2(pushop):
541 541 """push data to the remote using bundle2
542 542
543 543 The only currently supported type of data is changegroup but this will
544 544 evolve in the future."""
545 545 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
546 546 # create reply capability
547 547 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo))
548 548 bundler.newpart('b2x:replycaps', data=capsblob)
549 549 replyhandlers = []
550 550 for partgenname in b2partsgenorder:
551 551 partgen = b2partsgenmapping[partgenname]
552 552 ret = partgen(pushop, bundler)
553 553 if callable(ret):
554 554 replyhandlers.append(ret)
555 555 # do not push if nothing to push
556 556 if bundler.nbparts <= 1:
557 557 return
558 558 stream = util.chunkbuffer(bundler.getchunks())
559 559 try:
560 560 reply = pushop.remote.unbundle(stream, ['force'], 'push')
561 561 except error.BundleValueError, exc:
562 562 raise util.Abort('missing support for %s' % exc)
563 563 try:
564 564 op = bundle2.processbundle(pushop.repo, reply)
565 565 except error.BundleValueError, exc:
566 566 raise util.Abort('missing support for %s' % exc)
567 567 for rephand in replyhandlers:
568 568 rephand(op)
569 569
570 570 def _pushchangeset(pushop):
571 571 """Make the actual push of changeset bundle to remote repo"""
572 572 if 'changesets' in pushop.stepsdone:
573 573 return
574 574 pushop.stepsdone.add('changesets')
575 575 if not _pushcheckoutgoing(pushop):
576 576 return
577 577 pushop.repo.prepushoutgoinghooks(pushop.repo,
578 578 pushop.remote,
579 579 pushop.outgoing)
580 580 outgoing = pushop.outgoing
581 581 unbundle = pushop.remote.capable('unbundle')
582 582 # TODO: get bundlecaps from remote
583 583 bundlecaps = None
584 584 # create a changegroup from local
585 585 if pushop.revs is None and not (outgoing.excluded
586 586 or pushop.repo.changelog.filteredrevs):
587 587 # push everything,
588 588 # use the fast path, no race possible on push
589 589 bundler = changegroup.cg1packer(pushop.repo, bundlecaps)
590 590 cg = changegroup.getsubset(pushop.repo,
591 591 outgoing,
592 592 bundler,
593 593 'push',
594 594 fastpath=True)
595 595 else:
596 596 cg = changegroup.getlocalchangegroup(pushop.repo, 'push', outgoing,
597 597 bundlecaps)
598 598
599 599 # apply changegroup to remote
600 600 if unbundle:
601 601 # local repo finds heads on server, finds out what
602 602 # revs it must push. once revs transferred, if server
603 603 # finds it has different heads (someone else won
604 604 # commit/push race), server aborts.
605 605 if pushop.force:
606 606 remoteheads = ['force']
607 607 else:
608 608 remoteheads = pushop.remoteheads
609 609 # ssh: return remote's addchangegroup()
610 610 # http: return remote's addchangegroup() or 0 for error
611 611 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
612 612 pushop.repo.url())
613 613 else:
614 614 # we return an integer indicating remote head count
615 615 # change
616 616 pushop.cgresult = pushop.remote.addchangegroup(cg, 'push',
617 617 pushop.repo.url())
618 618
619 619 def _pushsyncphase(pushop):
620 620 """synchronise phase information locally and remotely"""
621 621 cheads = pushop.commonheads
622 622 # even when we don't push, exchanging phase data is useful
623 623 remotephases = pushop.remote.listkeys('phases')
624 624 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
625 625 and remotephases # server supports phases
626 626 and pushop.cgresult is None # nothing was pushed
627 627 and remotephases.get('publishing', False)):
628 628 # When:
629 629 # - this is a subrepo push
630 630 # - and remote supports phases
631 631 # - and no changeset was pushed
632 632 # - and remote is publishing
633 633 # We may be in issue 3871 case!
634 634 # We drop the phase synchronisation usually done as a
635 635 # courtesy, so that changesets which are draft locally
636 636 # get published on the remote.
637 637 remotephases = {'publishing': 'True'}
638 638 if not remotephases: # old server or public only reply from non-publishing
639 639 _localphasemove(pushop, cheads)
640 640 # don't push any phase data as there is nothing to push
641 641 else:
642 642 ana = phases.analyzeremotephases(pushop.repo, cheads,
643 643 remotephases)
644 644 pheads, droots = ana
645 645 ### Apply remote phase on local
646 646 if remotephases.get('publishing', False):
647 647 _localphasemove(pushop, cheads)
648 648 else: # publish = False
649 649 _localphasemove(pushop, pheads)
650 650 _localphasemove(pushop, cheads, phases.draft)
651 651 ### Apply local phase on remote
652 652
653 653 if pushop.cgresult:
654 654 if 'phases' in pushop.stepsdone:
655 655 # phases already pushed through bundle2
656 656 return
657 657 outdated = pushop.outdatedphases
658 658 else:
659 659 outdated = pushop.fallbackoutdatedphases
660 660
661 661 pushop.stepsdone.add('phases')
662 662
663 663 # filter heads already turned public by the push
664 664 outdated = [c for c in outdated if c.node() not in pheads]
665 665 b2caps = bundle2.bundle2caps(pushop.remote)
666 666 if 'b2x:pushkey' in b2caps:
667 667 # server supports bundle2, let's do a batched push through it
668 668 #
669 669 # This will eventually be unified with the changesets bundle2 push
670 670 bundler = bundle2.bundle20(pushop.ui, b2caps)
671 671 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo))
672 672 bundler.newpart('b2x:replycaps', data=capsblob)
673 673 part2node = []
674 674 enc = pushkey.encode
675 675 for newremotehead in outdated:
676 676 part = bundler.newpart('b2x:pushkey')
677 677 part.addparam('namespace', enc('phases'))
678 678 part.addparam('key', enc(newremotehead.hex()))
679 679 part.addparam('old', enc(str(phases.draft)))
680 680 part.addparam('new', enc(str(phases.public)))
681 681 part2node.append((part.id, newremotehead))
682 682 stream = util.chunkbuffer(bundler.getchunks())
683 683 try:
684 684 reply = pushop.remote.unbundle(stream, ['force'], 'push')
685 685 op = bundle2.processbundle(pushop.repo, reply)
686 686 except error.BundleValueError, exc:
687 687 raise util.Abort('missing support for %s' % exc)
688 688 for partid, node in part2node:
689 689 partrep = op.records.getreplies(partid)
690 690 results = partrep['pushkey']
691 691 assert len(results) <= 1
692 692 msg = None
693 693 if not results:
694 694 msg = _('server ignored update of %s to public!\n') % node
695 695 elif not int(results[0]['return']):
696 696 msg = _('updating %s to public failed!\n') % node
697 697 if msg is not None:
698 698 pushop.ui.warn(msg)
699 699
700 700 else:
701 701 # fall back to independent pushkey command
702 702 for newremotehead in outdated:
703 703 r = pushop.remote.pushkey('phases',
704 704 newremotehead.hex(),
705 705 str(phases.draft),
706 706 str(phases.public))
707 707 if not r:
708 708 pushop.ui.warn(_('updating %s to public failed!\n')
709 709 % newremotehead)
710 710
711 711 def _localphasemove(pushop, nodes, phase=phases.public):
712 712 """move <nodes> to <phase> in the local source repo"""
713 713 if pushop.locallocked:
714 714 tr = pushop.repo.transaction('push-phase-sync')
715 715 try:
716 716 phases.advanceboundary(pushop.repo, tr, phase, nodes)
717 717 tr.close()
718 718 finally:
719 719 tr.release()
720 720 else:
721 721 # repo is not locked, do not change any phases!
722 722 # Informs the user that phases should have been moved when
723 723 # applicable.
724 724 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
725 725 phasestr = phases.phasenames[phase]
726 726 if actualmoves:
727 727 pushop.ui.status(_('cannot lock source repo, skipping '
728 728 'local %s phase update\n') % phasestr)
729 729
730 730 def _pushobsolete(pushop):
731 731 """utility function to push obsolete markers to a remote"""
732 732 if 'obsmarkers' in pushop.stepsdone:
733 733 return
734 734 pushop.ui.debug('try to push obsolete markers to remote\n')
735 735 repo = pushop.repo
736 736 remote = pushop.remote
737 737 pushop.stepsdone.add('obsmarkers')
738 738 if pushop.outobsmarkers:
739 739 rslts = []
740 740 remotedata = obsolete._pushkeyescape(pushop.outobsmarkers)
741 741 for key in sorted(remotedata, reverse=True):
742 742 # reverse sort to ensure we end with dump0
743 743 data = remotedata[key]
744 744 rslts.append(remote.pushkey('obsolete', key, '', data))
745 745 if [r for r in rslts if not r]:
746 746 msg = _('failed to push some obsolete markers!\n')
747 747 repo.ui.warn(msg)
748 748
749 749 def _pushbookmark(pushop):
750 750 """Update bookmark position on remote"""
751 751 if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
752 752 return
753 753 pushop.stepsdone.add('bookmarks')
754 754 ui = pushop.ui
755 755 remote = pushop.remote
756 756
757 757 for b, old, new in pushop.outbookmarks:
758 758 action = 'update'
759 759 if not old:
760 760 action = 'export'
761 761 elif not new:
762 762 action = 'delete'
763 763 if remote.pushkey('bookmarks', b, old, new):
764 764 ui.status(bookmsgmap[action][0] % b)
765 765 else:
766 766 ui.warn(bookmsgmap[action][1] % b)
767 767 # discovery can have set the value from an invalid entry
768 768 if pushop.bkresult is not None:
769 769 pushop.bkresult = 1
770 770
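The old/new-based action selection in `_pushbookmark` above reduces to a small pure function. A minimal sketch of that selection (the `bookmarkaction` helper name is hypothetical, not Mercurial API):

```python
def bookmarkaction(old, new):
    # Mirrors the action choice in _pushbookmark(): an empty old value
    # means the bookmark does not exist on the remote yet, an empty new
    # value means it was deleted locally; otherwise it is a plain move.
    if not old:
        return 'export'
    if not new:
        return 'delete'
    return 'update'
```

The action is then only used to pick the right status/warning message pair; the actual state change always goes through the same `pushkey('bookmarks', ...)` call.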
771 771 class pulloperation(object):
772 772 """A object that represent a single pull operation
773 773
774 774 It purpose is to carry push related state and very common operation.
775 775
776 776 A new should be created at the beginning of each pull and discarded
777 777 afterward.
778 778 """
779 779
780 780 def __init__(self, repo, remote, heads=None, force=False, bookmarks=()):
781 781 # repo we pull into
782 782 self.repo = repo
783 783 # repo we pull from
784 784 self.remote = remote
785 785 # revision we try to pull (None is "all")
786 786 self.heads = heads
787 787 # bookmark pulled explicitly
788 788 self.explicitbookmarks = bookmarks
789 789 # do we force pull?
790 790 self.force = force
791 791 # the name the pull transaction
792 792 self._trname = 'pull\n' + util.hidepassword(remote.url())
793 793 # hold the transaction once created
794 794 self._tr = None
795 795 # set of common changeset between local and remote before pull
796 796 self.common = None
797 797 # set of pulled head
798 798 self.rheads = None
799 799 # list of missing changeset to fetch remotely
800 800 self.fetch = None
801 801 # remote bookmarks data
802 802 self.remotebookmarks = None
803 803 # result of changegroup pulling (used as return code by pull)
804 804 self.cgresult = None
805 805 # list of step already done
806 806 self.stepsdone = set()
807 807
808 808 @util.propertycache
809 809 def pulledsubset(self):
810 810 """heads of the set of changeset target by the pull"""
811 811 # compute target subset
812 812 if self.heads is None:
813 813 # We pulled everything possible
814 814 # sync on everything common
815 815 c = set(self.common)
816 816 ret = list(self.common)
817 817 for n in self.rheads:
818 818 if n not in c:
819 819 ret.append(n)
820 820 return ret
821 821 else:
822 822 # We pulled a specific subset
823 823 # sync on this subset
824 824 return self.heads
825 825
826 826 def gettransaction(self):
827 827 """get appropriate pull transaction, creating it if needed"""
828 828 if self._tr is None:
829 829 self._tr = self.repo.transaction(self._trname)
830 830 self._tr.hookargs['source'] = 'pull'
831 831 self._tr.hookargs['url'] = self.remote.url()
832 832 return self._tr
833 833
834 834 def closetransaction(self):
835 835 """close transaction if created"""
836 836 if self._tr is not None:
837 837 repo = self.repo
838 838 cl = repo.unfiltered().changelog
839 839 p = cl.writepending() and repo.root or ""
841 841 repo.hook('b2x-pretransactionclose', throw=True, pending=p,
842 842 **self._tr.hookargs)
843 843 self._tr.close()
844 844 repo.hook('b2x-transactionclose', **self._tr.hookargs)
845 845
846 846 def releasetransaction(self):
847 847 """release transaction if created"""
848 848 if self._tr is not None:
849 849 self._tr.release()
850 850
851 851 def pull(repo, remote, heads=None, force=False, bookmarks=()):
852 852 pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks)
853 853 if pullop.remote.local():
854 854 missing = set(pullop.remote.requirements) - pullop.repo.supported
855 855 if missing:
856 856 msg = _("required features are not"
857 857 " supported in the destination:"
858 858 " %s") % (', '.join(sorted(missing)))
859 859 raise util.Abort(msg)
860 860
861 861 pullop.remotebookmarks = remote.listkeys('bookmarks')
862 862 lock = pullop.repo.lock()
863 863 try:
864 864 _pulldiscovery(pullop)
865 865 if (pullop.repo.ui.configbool('experimental', 'bundle2-exp', False)
866 866 and pullop.remote.capable('bundle2-exp')):
867 867 _pullbundle2(pullop)
868 868 _pullchangeset(pullop)
869 869 _pullphase(pullop)
870 870 _pullbookmarks(pullop)
871 871 _pullobsolete(pullop)
872 872 pullop.closetransaction()
873 873 finally:
874 874 pullop.releasetransaction()
875 875 lock.release()
876 876
877 877 return pullop
878 878
879 879 # list of steps to perform discovery before pull
880 880 pulldiscoveryorder = []
881 881
882 882 # Mapping between step name and function
883 883 #
884 884 # This exists to help extensions wrap steps if necessary
885 885 pulldiscoverymapping = {}
886 886
887 887 def pulldiscovery(stepname):
888 888 """decorator for function performing discovery before pull
889 889
890 890 The function is added to the step -> function mapping and appended to the
891 891 list of steps. Beware that decorated functions will be added in order (this
892 892 may matter).
893 893
894 894 You can only use this decorator for a new step; if you want to wrap a step
895 895 from an extension, change the pulldiscovery dictionary directly."""
896 896 def dec(func):
897 897 assert stepname not in pulldiscoverymapping
898 898 pulldiscoverymapping[stepname] = func
899 899 pulldiscoveryorder.append(stepname)
900 900 return func
901 901 return dec
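The decorator above implements a simple ordered-registry pattern: a name-to-function mapping plus a list preserving registration order, which `_pulldiscovery` then walks. A standalone sketch of the same pattern (names like `registerstep` and `runsteps` are illustrative, not Mercurial API):

```python
steporder = []    # step names, in registration order
stepmapping = {}  # step name -> function

def registerstep(stepname):
    # Same shape as pulldiscovery(): refuse duplicate names, remember order.
    def dec(func):
        assert stepname not in stepmapping
        stepmapping[stepname] = func
        steporder.append(stepname)
        return func
    return dec

@registerstep('changegroup')
def discoverchangegroup(state):
    state.append('changegroup')

@registerstep('bookmarks')
def discoverbookmarks(state):
    state.append('bookmarks')

def runsteps(state):
    # Mirrors _pulldiscovery(): execute every registered step in order.
    for stepname in steporder:
        stepmapping[stepname](state)
    return state
```

Because extensions can replace an entry in the mapping without touching the order list, wrapping a step leaves the overall sequence intact.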
902 902
903 903 def _pulldiscovery(pullop):
904 904 """Run all discovery steps"""
905 905 for stepname in pulldiscoveryorder:
906 906 step = pulldiscoverymapping[stepname]
907 907 step(pullop)
908 908
909 909 @pulldiscovery('changegroup')
910 910 def _pulldiscoverychangegroup(pullop):
911 911 """discovery phase for the pull
912 912
913 913 Currently handles changeset discovery only; will handle all discovery
914 914 at some point."""
915 915 tmp = discovery.findcommonincoming(pullop.repo.unfiltered(),
916 916 pullop.remote,
917 917 heads=pullop.heads,
918 918 force=pullop.force)
919 919 pullop.common, pullop.fetch, pullop.rheads = tmp
920 920
921 921 def _pullbundle2(pullop):
922 922 """pull data using bundle2
923 923
924 924 For now, the only supported data are changegroup."""
925 925 remotecaps = bundle2.bundle2caps(pullop.remote)
926 926 kwargs = {'bundlecaps': caps20to10(pullop.repo)}
927 927 # pulling changegroup
928 928 pullop.stepsdone.add('changegroup')
929 929
930 930 kwargs['common'] = pullop.common
931 931 kwargs['heads'] = pullop.heads or pullop.rheads
932 932 kwargs['cg'] = pullop.fetch
933 933 if 'b2x:listkeys' in remotecaps:
934 934 kwargs['listkeys'] = ['phase', 'bookmarks']
935 935 if not pullop.fetch:
936 936 pullop.repo.ui.status(_("no changes found\n"))
937 937 pullop.cgresult = 0
938 938 else:
939 939 if pullop.heads is None and list(pullop.common) == [nullid]:
940 940 pullop.repo.ui.status(_("requesting all changes\n"))
941 941 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
942 942 remoteversions = bundle2.obsmarkersversion(remotecaps)
943 943 if obsolete.commonversion(remoteversions) is not None:
944 944 kwargs['obsmarkers'] = True
945 945 pullop.stepsdone.add('obsmarkers')
946 946 _pullbundle2extraprepare(pullop, kwargs)
947 947 if kwargs.keys() == ['format']:
948 948 return # nothing to pull
949 949 bundle = pullop.remote.getbundle('pull', **kwargs)
950 950 try:
951 951 op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
952 952 except error.BundleValueError, exc:
953 953 raise util.Abort('missing support for %s' % exc)
954 954
955 955 if pullop.fetch:
956 956 changedheads = 0
957 957 pullop.cgresult = 1
958 958 for cg in op.records['changegroup']:
959 959 ret = cg['return']
960 960 # If any changegroup result is 0, return 0
961 961 if ret == 0:
962 962 pullop.cgresult = 0
963 963 break
964 964 if ret < -1:
965 965 changedheads += ret + 1
966 966 elif ret > 1:
967 967 changedheads += ret - 1
968 968 if changedheads > 0:
969 969 pullop.cgresult = 1 + changedheads
970 970 elif changedheads < 0:
971 971 pullop.cgresult = -1 + changedheads
972 972
973 973 # processing phases change
974 974 for namespace, value in op.records['listkeys']:
975 975 if namespace == 'phases':
976 976 _pullapplyphases(pullop, value)
977 977
978 978 # processing bookmark update
979 979 for namespace, value in op.records['listkeys']:
980 980 if namespace == 'bookmarks':
981 981 pullop.remotebookmarks = value
982 982 _pullbookmarks(pullop)
983 983
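The head-count bookkeeping in `_pullbundle2` above combines several addchangegroup()-style return codes (0: error; 1: success with no head change; n > 1: n - 1 heads added; n < -1: -n - 1 heads removed) into a single result. A standalone sketch of that combination (the `combinecgresults` name is hypothetical):

```python
def combinecgresults(results):
    # A 0 anywhere means failure and wins outright, as in _pullbundle2.
    changedheads = 0
    for ret in results:
        if ret == 0:
            return 0
        if ret < -1:
            changedheads += ret + 1   # heads removed
        elif ret > 1:
            changedheads += ret - 1   # heads added
    if changedheads > 0:
        return 1 + changedheads
    if changedheads < 0:
        return -1 + changedheads
    return 1
```

Note that additions and removals cancel: pulling one changegroup that adds two heads and another that removes two reports plain success (1).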
984 984 def _pullbundle2extraprepare(pullop, kwargs):
985 985 """hook function so that extensions can extend the getbundle call"""
986 986 pass
987 987
988 988 def _pullchangeset(pullop):
989 989 """pull changeset from unbundle into the local repo"""
990 990 # We delay the open of the transaction as late as possible so we
991 991 # don't open transaction for nothing or you break future useful
992 992 # rollback call
993 993 if 'changegroup' in pullop.stepsdone:
994 994 return
995 995 pullop.stepsdone.add('changegroup')
996 996 if not pullop.fetch:
997 997 pullop.repo.ui.status(_("no changes found\n"))
998 998 pullop.cgresult = 0
999 999 return
1000 1000 pullop.gettransaction()
1001 1001 if pullop.heads is None and list(pullop.common) == [nullid]:
1002 1002 pullop.repo.ui.status(_("requesting all changes\n"))
1003 1003 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
1004 1004 # issue1320, avoid a race if remote changed after discovery
1005 1005 pullop.heads = pullop.rheads
1006 1006
1007 1007 if pullop.remote.capable('getbundle'):
1008 1008 # TODO: get bundlecaps from remote
1009 1009 cg = pullop.remote.getbundle('pull', common=pullop.common,
1010 1010 heads=pullop.heads or pullop.rheads)
1011 1011 elif pullop.heads is None:
1012 1012 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
1013 1013 elif not pullop.remote.capable('changegroupsubset'):
1014 1014 raise util.Abort(_("partial pull cannot be done because "
1015 1015 "other repository doesn't support "
1016 1016 "changegroupsubset."))
1017 1017 else:
1018 1018 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
1019 1019 pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull',
1020 1020 pullop.remote.url())
1021 1021
1022 1022 def _pullphase(pullop):
1023 1023 # Get remote phases data from remote
1024 1024 if 'phases' in pullop.stepsdone:
1025 1025 return
1026 1026 remotephases = pullop.remote.listkeys('phases')
1027 1027 _pullapplyphases(pullop, remotephases)
1028 1028
1029 1029 def _pullapplyphases(pullop, remotephases):
1030 1030 """apply phase movement from observed remote state"""
1031 1031 if 'phases' in pullop.stepsdone:
1032 1032 return
1033 1033 pullop.stepsdone.add('phases')
1034 1034 publishing = bool(remotephases.get('publishing', False))
1035 1035 if remotephases and not publishing:
1036 1036 # remote is new and unpublishing
1037 1037 pheads, _dr = phases.analyzeremotephases(pullop.repo,
1038 1038 pullop.pulledsubset,
1039 1039 remotephases)
1040 1040 dheads = pullop.pulledsubset
1041 1041 else:
1042 1042 # Remote is old or publishing all common changesets
1043 1043 # should be seen as public
1044 1044 pheads = pullop.pulledsubset
1045 1045 dheads = []
1046 1046 unfi = pullop.repo.unfiltered()
1047 1047 phase = unfi._phasecache.phase
1048 1048 rev = unfi.changelog.nodemap.get
1049 1049 public = phases.public
1050 1050 draft = phases.draft
1051 1051
1052 1052 # exclude changesets already public locally and update the others
1053 1053 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
1054 1054 if pheads:
1055 1055 tr = pullop.gettransaction()
1056 1056 phases.advanceboundary(pullop.repo, tr, public, pheads)
1057 1057
1058 1058 # exclude changesets already draft locally and update the others
1059 1059 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
1060 1060 if dheads:
1061 1061 tr = pullop.gettransaction()
1062 1062 phases.advanceboundary(pullop.repo, tr, draft, dheads)
1063 1063
1064 1064 def _pullbookmarks(pullop):
1065 1065 """process the remote bookmark information to update the local one"""
1066 1066 if 'bookmarks' in pullop.stepsdone:
1067 1067 return
1068 1068 pullop.stepsdone.add('bookmarks')
1069 1069 repo = pullop.repo
1070 1070 remotebookmarks = pullop.remotebookmarks
1071 1071 bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
1072 1072 pullop.remote.url(),
1073 1073 pullop.gettransaction,
1074 1074 explicit=pullop.explicitbookmarks)
1075 1075
1076 1076 def _pullobsolete(pullop):
1077 1077 """utility function to pull obsolete markers from a remote
1078 1078
1079 1079 The `gettransaction` argument is a function that returns the pull transaction,
1080 1080 creating one if necessary. We return the transaction to inform the calling
1081 1081 code that a new transaction has been created (when applicable).
1082 1082
1083 1083 Exists mostly to allow overriding for experimentation purposes"""
1084 1084 if 'obsmarkers' in pullop.stepsdone:
1085 1085 return
1086 1086 pullop.stepsdone.add('obsmarkers')
1087 1087 tr = None
1088 1088 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1089 1089 pullop.repo.ui.debug('fetching remote obsolete markers\n')
1090 1090 remoteobs = pullop.remote.listkeys('obsolete')
1091 1091 if 'dump0' in remoteobs:
1092 1092 tr = pullop.gettransaction()
1093 1093 for key in sorted(remoteobs, reverse=True):
1094 1094 if key.startswith('dump'):
1095 1095 data = base85.b85decode(remoteobs[key])
1096 1096 pullop.repo.obsstore.mergemarkers(tr, data)
1097 1097 pullop.repo.invalidatevolatilesets()
1098 1098 return tr
1099 1099
1100 1100 def caps20to10(repo):
1101 1101 """return a set with appropriate options to use bundle20 during getbundle"""
1102 caps = set(['HG2X'])
1102 caps = set(['HG2Y'])
1103 1103 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
1104 1104 caps.add('bundle2=' + urllib.quote(capsblob))
1105 1105 return caps
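`caps20to10` above urlquotes the encoded bundle2 capabilities blob into a `bundle2=` entry, and `getbundle` later unquotes it to rebuild the caps dictionary. A round-trip sketch using Python 3's urllib (helper names are hypothetical; the real blob comes from `bundle2.encodecaps`):

```python
import urllib.parse

def encodebundlecaps(capsblob):
    # Mirror caps20to10(): advertise the bundle2 magic plus the quoted blob.
    return {'HG2Y', 'bundle2=' + urllib.parse.quote(capsblob)}

def decodebundlecaps(bundlecaps):
    # Mirror the bundle20 branch of getbundle(): recover the raw blob.
    for bcaps in bundlecaps:
        if bcaps.startswith('bundle2='):
            return urllib.parse.unquote(bcaps[len('bundle2='):])
    return None
```

Quoting matters because the blob is newline-separated and travels inside a space-separated capability list on the wire.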
1106 1106
1107 1107 # List of names of steps to perform for a bundle2 for getbundle, order matters.
1108 1108 getbundle2partsorder = []
1109 1109
1110 1110 # Mapping between step name and function
1111 1111 #
1112 1112 # This exists to help extensions wrap steps if necessary
1113 1113 getbundle2partsmapping = {}
1114 1114
1115 1115 def getbundle2partsgenerator(stepname):
1116 1116 """decorator for function generating bundle2 part for getbundle
1117 1117
1118 1118 The function is added to the step -> function mapping and appended to the
1119 1119 list of steps. Beware that decorated functions will be added in order
1120 1120 (this may matter).
1121 1121
1122 1122 You can only use this decorator for new steps; if you want to wrap a step
1123 1123 from an extension, change the getbundle2partsmapping dictionary directly."""
1124 1124 def dec(func):
1125 1125 assert stepname not in getbundle2partsmapping
1126 1126 getbundle2partsmapping[stepname] = func
1127 1127 getbundle2partsorder.append(stepname)
1128 1128 return func
1129 1129 return dec
1130 1130
1131 1131 def getbundle(repo, source, heads=None, common=None, bundlecaps=None,
1132 1132 **kwargs):
1133 1133 """return a full bundle (with potentially multiple kind of parts)
1134 1134
1135 Could be a bundle HG10 or a bundle HG2X depending on bundlecaps
1135 Could be a bundle HG10 or a bundle HG2Y depending on bundlecaps
1136 1136 passed. For now, the bundle can contain only changegroup, but this will
1137 1137 change when more part types become available for bundle2.
1138 1138
1139 1139 This is different from changegroup.getchangegroup that only returns an HG10
1140 1140 changegroup bundle. They may eventually get reunited in the future when we
1141 1141 have a clearer idea of the API we want to use to query different data.
1142 1142
1143 1143 The implementation is at a very early stage and will get massive rework
1144 1144 when the API of bundle is refined.
1145 1145 """
1146 1146 # bundle10 case
1147 if bundlecaps is None or 'HG2X' not in bundlecaps:
1147 if bundlecaps is None or 'HG2Y' not in bundlecaps:
1148 1148 if bundlecaps and not kwargs.get('cg', True):
1149 1149 raise ValueError(_('request for bundle10 must include changegroup'))
1150 1150
1151 1151 if kwargs:
1152 1152 raise ValueError(_('unsupported getbundle arguments: %s')
1153 1153 % ', '.join(sorted(kwargs.keys())))
1154 1154 return changegroup.getchangegroup(repo, source, heads=heads,
1155 1155 common=common, bundlecaps=bundlecaps)
1156 1156
1157 1157 # bundle20 case
1158 1158 b2caps = {}
1159 1159 for bcaps in bundlecaps:
1160 1160 if bcaps.startswith('bundle2='):
1161 1161 blob = urllib.unquote(bcaps[len('bundle2='):])
1162 1162 b2caps.update(bundle2.decodecaps(blob))
1163 1163 bundler = bundle2.bundle20(repo.ui, b2caps)
1164 1164
1165 1165 for name in getbundle2partsorder:
1166 1166 func = getbundle2partsmapping[name]
1167 1167 kwargs['heads'] = heads
1168 1168 kwargs['common'] = common
1169 1169 func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
1170 1170 **kwargs)
1171 1171
1172 1172 return util.chunkbuffer(bundler.getchunks())
1173 1173
1174 1174 @getbundle2partsgenerator('changegroup')
1175 1175 def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
1176 1176 b2caps=None, heads=None, common=None, **kwargs):
1177 1177 """add a changegroup part to the requested bundle"""
1178 1178 cg = None
1179 1179 if kwargs.get('cg', True):
1180 1180 # build changegroup bundle here.
1181 1181 cg = changegroup.getchangegroup(repo, source, heads=heads,
1182 1182 common=common, bundlecaps=bundlecaps)
1183 1183
1184 1184 if cg:
1185 1185 bundler.newpart('b2x:changegroup', data=cg.getchunks())
1186 1186
1187 1187 @getbundle2partsgenerator('listkeys')
1188 1188 def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
1189 1189 b2caps=None, **kwargs):
1190 1190 """add parts containing listkeys namespaces to the requested bundle"""
1191 1191 listkeys = kwargs.get('listkeys', ())
1192 1192 for namespace in listkeys:
1193 1193 part = bundler.newpart('b2x:listkeys')
1194 1194 part.addparam('namespace', namespace)
1195 1195 keys = repo.listkeys(namespace).items()
1196 1196 part.data = pushkey.encodekeys(keys)
1197 1197
1198 1198 @getbundle2partsgenerator('obsmarkers')
1199 1199 def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
1200 1200 b2caps=None, heads=None, **kwargs):
1201 1201 """add an obsolescence markers part to the requested bundle"""
1202 1202 if kwargs.get('obsmarkers', False):
1203 1203 if heads is None:
1204 1204 heads = repo.heads()
1205 1205 subset = [c.node() for c in repo.set('::%ln', heads)]
1206 1206 markers = repo.obsstore.relevantmarkers(subset)
1207 1207 buildobsmarkerspart(bundler, markers)
1208 1208
1209 1209 @getbundle2partsgenerator('extra')
1210 1210 def _getbundleextrapart(bundler, repo, source, bundlecaps=None,
1211 1211 b2caps=None, **kwargs):
1212 1212 """hook function to let extensions add parts to the requested bundle"""
1213 1213 pass
1214 1214
1215 1215 def check_heads(repo, their_heads, context):
1216 1216 """check if the heads of a repo have been modified
1217 1217
1218 1218 Used by peer for unbundling.
1219 1219 """
1220 1220 heads = repo.heads()
1221 1221 heads_hash = util.sha1(''.join(sorted(heads))).digest()
1222 1222 if not (their_heads == ['force'] or their_heads == heads or
1223 1223 their_heads == ['hashed', heads_hash]):
1224 1224 # someone else committed/pushed/unbundled while we
1225 1225 # were transferring data
1226 1226 raise error.PushRaced('repository changed while %s - '
1227 1227 'please try again' % context)
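`check_heads` above accepts three client claims: the literal 'force' token, the exact current head list, or the 'hashed' form carrying a SHA-1 over the sorted, concatenated binary head ids. A standalone sketch of the same comparison, using hashlib in place of `util.sha1` (the `pushraced` helper name is hypothetical):

```python
import hashlib

def headshash(heads):
    # Digest over the sorted, concatenated 20-byte node ids, matching
    # the heads_hash computed in check_heads().
    return hashlib.sha1(b''.join(sorted(heads))).digest()

def pushraced(their_heads, current_heads):
    # True when the repo changed between discovery and unbundle.
    ok = (their_heads == [b'force'] or
          their_heads == current_heads or
          their_heads == [b'hashed', headshash(current_heads)])
    return not ok
```

The hashed form keeps the request small for repositories with many heads while still detecting any concurrent push.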
1228 1228
1229 1229 def unbundle(repo, cg, heads, source, url):
1230 1230 """Apply a bundle to a repo.
1231 1231
1232 1232 This function makes sure the repo is locked during the application and has a
1233 1233 mechanism to check that no push race occurred between the creation of the
1234 1234 bundle and its application.
1235 1235
1236 1236 If the push was raced, a PushRaced exception is raised."""
1237 1237 r = 0
1238 1238 # need a transaction when processing a bundle2 stream
1239 1239 tr = None
1240 1240 lock = repo.lock()
1241 1241 try:
1242 1242 check_heads(repo, heads, 'uploading changes')
1243 1243 # push can proceed
1244 1244 if util.safehasattr(cg, 'params'):
1245 1245 try:
1246 1246 tr = repo.transaction('unbundle')
1247 1247 tr.hookargs['source'] = source
1248 1248 tr.hookargs['url'] = url
1249 1249 tr.hookargs['bundle2-exp'] = '1'
1250 1250 r = bundle2.processbundle(repo, cg, lambda: tr).reply
1251 1251 cl = repo.unfiltered().changelog
1252 1252 p = cl.writepending() and repo.root or ""
1253 1253 repo.hook('b2x-pretransactionclose', throw=True, pending=p,
1254 1254 **tr.hookargs)
1255 1255 tr.close()
1256 1256 repo.hook('b2x-transactionclose', **tr.hookargs)
1257 1257 except Exception, exc:
1258 1258 exc.duringunbundle2 = True
1259 1259 raise
1260 1260 else:
1261 1261 r = changegroup.addchangegroup(repo, cg, source, url)
1262 1262 finally:
1263 1263 if tr is not None:
1264 1264 tr.release()
1265 1265 lock.release()
1266 1266 return r
@@ -1,1792 +1,1792 b''
1 1 # localrepo.py - read/write repository class for mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 from node import hex, nullid, short
8 8 from i18n import _
9 9 import urllib
10 10 import peer, changegroup, subrepo, pushkey, obsolete, repoview
11 11 import changelog, dirstate, filelog, manifest, context, bookmarks, phases
12 12 import lock as lockmod
13 13 import transaction, store, encoding, exchange, bundle2
14 14 import scmutil, util, extensions, hook, error, revset
15 15 import match as matchmod
16 16 import merge as mergemod
17 17 import tags as tagsmod
18 18 from lock import release
19 19 import weakref, errno, os, time, inspect
20 20 import branchmap, pathutil
21 21 propertycache = util.propertycache
22 22 filecache = scmutil.filecache
23 23
24 24 class repofilecache(filecache):
25 25 """All filecache usage on repo are done for logic that should be unfiltered
26 26 """
27 27
28 28 def __get__(self, repo, type=None):
29 29 return super(repofilecache, self).__get__(repo.unfiltered(), type)
30 30 def __set__(self, repo, value):
31 31 return super(repofilecache, self).__set__(repo.unfiltered(), value)
32 32 def __delete__(self, repo):
33 33 return super(repofilecache, self).__delete__(repo.unfiltered())
34 34
35 35 class storecache(repofilecache):
36 36 """filecache for files in the store"""
37 37 def join(self, obj, fname):
38 38 return obj.sjoin(fname)
39 39
40 40 class unfilteredpropertycache(propertycache):
41 41 """propertycache that apply to unfiltered repo only"""
42 42
43 43 def __get__(self, repo, type=None):
44 44 unfi = repo.unfiltered()
45 45 if unfi is repo:
46 46 return super(unfilteredpropertycache, self).__get__(unfi)
47 47 return getattr(unfi, self.name)
48 48
49 49 class filteredpropertycache(propertycache):
50 50 """propertycache that must take filtering in account"""
51 51
52 52 def cachevalue(self, obj, value):
53 53 object.__setattr__(obj, self.name, value)
54 54
55 55
56 56 def hasunfilteredcache(repo, name):
57 57 """check if a repo has an unfilteredpropertycache value for <name>"""
58 58 return name in vars(repo.unfiltered())
59 59
60 60 def unfilteredmethod(orig):
61 61 """decorate method that always need to be run on unfiltered version"""
62 62 def wrapper(repo, *args, **kwargs):
63 63 return orig(repo.unfiltered(), *args, **kwargs)
64 64 return wrapper
65 65
66 66 moderncaps = set(('lookup', 'branchmap', 'pushkey', 'known', 'getbundle',
67 67 'unbundle'))
68 68 legacycaps = moderncaps.union(set(['changegroupsubset']))
69 69
70 70 class localpeer(peer.peerrepository):
71 71 '''peer for a local repo; reflects only the most recent API'''
72 72
73 73 def __init__(self, repo, caps=moderncaps):
74 74 peer.peerrepository.__init__(self)
75 75 self._repo = repo.filtered('served')
76 76 self.ui = repo.ui
77 77 self._caps = repo._restrictcapabilities(caps)
78 78 self.requirements = repo.requirements
79 79 self.supportedformats = repo.supportedformats
80 80
81 81 def close(self):
82 82 self._repo.close()
83 83
84 84 def _capabilities(self):
85 85 return self._caps
86 86
87 87 def local(self):
88 88 return self._repo
89 89
90 90 def canpush(self):
91 91 return True
92 92
93 93 def url(self):
94 94 return self._repo.url()
95 95
96 96 def lookup(self, key):
97 97 return self._repo.lookup(key)
98 98
99 99 def branchmap(self):
100 100 return self._repo.branchmap()
101 101
102 102 def heads(self):
103 103 return self._repo.heads()
104 104
105 105 def known(self, nodes):
106 106 return self._repo.known(nodes)
107 107
108 108 def getbundle(self, source, heads=None, common=None, bundlecaps=None,
109 109 format='HG10', **kwargs):
110 110 cg = exchange.getbundle(self._repo, source, heads=heads,
111 111 common=common, bundlecaps=bundlecaps, **kwargs)
112 if bundlecaps is not None and 'HG2X' in bundlecaps:
112 if bundlecaps is not None and 'HG2Y' in bundlecaps:
113 113 # When requesting a bundle2, getbundle returns a stream to make the
114 114 # wire level function happier. We need to build a proper object
115 115 # from it in local peer.
116 116 cg = bundle2.unbundle20(self.ui, cg)
117 117 return cg
118 118
119 119 # TODO We might want to move the next two calls into legacypeer and add
120 120 # unbundle instead.
121 121
122 122 def unbundle(self, cg, heads, url):
123 123 """apply a bundle on a repo
124 124
125 125 This function handles the repo locking itself."""
126 126 try:
127 127 cg = exchange.readbundle(self.ui, cg, None)
128 128 ret = exchange.unbundle(self._repo, cg, heads, 'push', url)
129 129 if util.safehasattr(ret, 'getchunks'):
130 130 # This is a bundle20 object, turn it into an unbundler.
131 131 # This little dance should be dropped eventually when the API
132 132 # is finally improved.
133 133 stream = util.chunkbuffer(ret.getchunks())
134 134 ret = bundle2.unbundle20(self.ui, stream)
135 135 return ret
136 136 except error.PushRaced, exc:
137 137 raise error.ResponseError(_('push failed:'), str(exc))
138 138
139 139 def lock(self):
140 140 return self._repo.lock()
141 141
142 142 def addchangegroup(self, cg, source, url):
143 143 return changegroup.addchangegroup(self._repo, cg, source, url)
144 144
145 145 def pushkey(self, namespace, key, old, new):
146 146 return self._repo.pushkey(namespace, key, old, new)
147 147
148 148 def listkeys(self, namespace):
149 149 return self._repo.listkeys(namespace)
150 150
151 151 def debugwireargs(self, one, two, three=None, four=None, five=None):
152 152 '''used to test argument passing over the wire'''
153 153 return "%s %s %s %s %s" % (one, two, three, four, five)
154 154
155 155 class locallegacypeer(localpeer):
156 156 '''peer extension which implements legacy methods too; used for tests with
157 157 restricted capabilities'''
158 158
159 159 def __init__(self, repo):
160 160 localpeer.__init__(self, repo, caps=legacycaps)
161 161
162 162 def branches(self, nodes):
163 163 return self._repo.branches(nodes)
164 164
165 165 def between(self, pairs):
166 166 return self._repo.between(pairs)
167 167
168 168 def changegroup(self, basenodes, source):
169 169 return changegroup.changegroup(self._repo, basenodes, source)
170 170
171 171 def changegroupsubset(self, bases, heads, source):
172 172 return changegroup.changegroupsubset(self._repo, bases, heads, source)
173 173
174 174 class localrepository(object):
175 175
176 176 supportedformats = set(('revlogv1', 'generaldelta'))
177 177 _basesupported = supportedformats | set(('store', 'fncache', 'shared',
178 178 'dotencode'))
179 179 openerreqs = set(('revlogv1', 'generaldelta'))
180 180 requirements = ['revlogv1']
181 181 filtername = None
182 182
183 183 # a list of (ui, featureset) functions.
184 184 # only functions defined in module of enabled extensions are invoked
185 185 featuresetupfuncs = set()
186 186
187 187 def _baserequirements(self, create):
188 188 return self.requirements[:]
189 189
190 190 def __init__(self, baseui, path=None, create=False):
191 191 self.wvfs = scmutil.vfs(path, expandpath=True, realpath=True)
192 192 self.wopener = self.wvfs
193 193 self.root = self.wvfs.base
194 194 self.path = self.wvfs.join(".hg")
195 195 self.origroot = path
196 196 self.auditor = pathutil.pathauditor(self.root, self._checknested)
197 197 self.vfs = scmutil.vfs(self.path)
198 198 self.opener = self.vfs
199 199 self.baseui = baseui
200 200 self.ui = baseui.copy()
201 201 self.ui.copy = baseui.copy # prevent copying repo configuration
202 202 # A list of callbacks to shape the phase if no data were found.
203 203 # Callbacks are in the form: func(repo, roots) --> processed root.
204 204 # This list is to be filled by extensions during repo setup
205 205 self._phasedefaults = []
206 206 try:
207 207 self.ui.readconfig(self.join("hgrc"), self.root)
208 208 extensions.loadall(self.ui)
209 209 except IOError:
210 210 pass
211 211
212 212 if self.featuresetupfuncs:
213 213 self.supported = set(self._basesupported) # use private copy
214 214 extmods = set(m.__name__ for n, m
215 215 in extensions.extensions(self.ui))
216 216 for setupfunc in self.featuresetupfuncs:
217 217 if setupfunc.__module__ in extmods:
218 218 setupfunc(self.ui, self.supported)
219 219 else:
220 220 self.supported = self._basesupported
221 221
222 222 if not self.vfs.isdir():
223 223 if create:
224 224 if not self.wvfs.exists():
225 225 self.wvfs.makedirs()
226 226 self.vfs.makedir(notindexed=True)
227 227 requirements = self._baserequirements(create)
228 228 if self.ui.configbool('format', 'usestore', True):
229 229 self.vfs.mkdir("store")
230 230 requirements.append("store")
231 231 if self.ui.configbool('format', 'usefncache', True):
232 232 requirements.append("fncache")
233 233 if self.ui.configbool('format', 'dotencode', True):
234 234 requirements.append('dotencode')
235 235 # create an invalid changelog
236 236 self.vfs.append(
237 237 "00changelog.i",
238 238 '\0\0\0\2' # represents revlogv2
239 239 ' dummy changelog to prevent using the old repo layout'
240 240 )
241 241 if self.ui.configbool('format', 'generaldelta', False):
242 242 requirements.append("generaldelta")
243 243 requirements = set(requirements)
244 244 else:
245 245 raise error.RepoError(_("repository %s not found") % path)
246 246 elif create:
247 247 raise error.RepoError(_("repository %s already exists") % path)
248 248 else:
249 249 try:
250 250 requirements = scmutil.readrequires(self.vfs, self.supported)
251 251 except IOError, inst:
252 252 if inst.errno != errno.ENOENT:
253 253 raise
254 254 requirements = set()
255 255
256 256 self.sharedpath = self.path
257 257 try:
258 258 vfs = scmutil.vfs(self.vfs.read("sharedpath").rstrip('\n'),
259 259 realpath=True)
260 260 s = vfs.base
261 261 if not vfs.exists():
262 262 raise error.RepoError(
263 263 _('.hg/sharedpath points to nonexistent directory %s') % s)
264 264 self.sharedpath = s
265 265 except IOError, inst:
266 266 if inst.errno != errno.ENOENT:
267 267 raise
268 268
269 269 self.store = store.store(requirements, self.sharedpath, scmutil.vfs)
270 270 self.spath = self.store.path
271 271 self.svfs = self.store.vfs
272 272 self.sopener = self.svfs
273 273 self.sjoin = self.store.join
274 274 self.vfs.createmode = self.store.createmode
275 275 self._applyrequirements(requirements)
276 276 if create:
277 277 self._writerequirements()
278 278
279 279
280 280 self._branchcaches = {}
281 281 self.filterpats = {}
282 282 self._datafilters = {}
283 283 self._transref = self._lockref = self._wlockref = None
284 284
285 285 # A cache for various files under .hg/ that tracks file changes
286 286 # (used by the filecache decorator)
287 287 #
288 288 # Maps a property name to its util.filecacheentry
289 289 self._filecache = {}
290 290
291 291 # holds sets of revisions to be filtered
292 292 # should be cleared when something might have changed the filter value:
293 293 # - new changesets,
294 294 # - phase change,
295 295 # - new obsolescence marker,
296 296 # - working directory parent change,
297 297 # - bookmark changes
298 298 self.filteredrevcache = {}
299 299
300 300 def close(self):
301 301 pass
302 302
303 303 def _restrictcapabilities(self, caps):
304 304 # bundle2 is not ready for prime time, drop it unless explicitly
305 305 # required by the tests (or some brave tester)
306 306 if self.ui.configbool('experimental', 'bundle2-exp', False):
307 307 caps = set(caps)
308 308 capsblob = bundle2.encodecaps(bundle2.getrepocaps(self))
309 309 caps.add('bundle2-exp=' + urllib.quote(capsblob))
310 310 return caps
311 311
312 312 def _applyrequirements(self, requirements):
313 313 self.requirements = requirements
314 314 self.sopener.options = dict((r, 1) for r in requirements
315 315 if r in self.openerreqs)
316 316 chunkcachesize = self.ui.configint('format', 'chunkcachesize')
317 317 if chunkcachesize is not None:
318 318 self.sopener.options['chunkcachesize'] = chunkcachesize
319 319
320 320 def _writerequirements(self):
321 321 reqfile = self.opener("requires", "w")
322 322 for r in sorted(self.requirements):
323 323 reqfile.write("%s\n" % r)
324 324 reqfile.close()
325 325
326 326 def _checknested(self, path):
327 327 """Determine if path is a legal nested repository."""
328 328 if not path.startswith(self.root):
329 329 return False
330 330 subpath = path[len(self.root) + 1:]
331 331 normsubpath = util.pconvert(subpath)
332 332
333 333 # XXX: Checking against the current working copy is wrong in
334 334 # the sense that it can reject things like
335 335 #
336 336 # $ hg cat -r 10 sub/x.txt
337 337 #
338 338 # if sub/ is no longer a subrepository in the working copy
339 339 # parent revision.
340 340 #
341 341 # However, it can of course also allow things that would have
342 342 # been rejected before, such as the above cat command if sub/
343 343 # is a subrepository now, but was a normal directory before.
344 344 # The old path auditor would have rejected by mistake since it
345 345 # panics when it sees sub/.hg/.
346 346 #
347 347 # All in all, checking against the working copy seems sensible
348 348 # since we want to prevent access to nested repositories on
349 349 # the filesystem *now*.
350 350 ctx = self[None]
351 351 parts = util.splitpath(subpath)
352 352 while parts:
353 353 prefix = '/'.join(parts)
354 354 if prefix in ctx.substate:
355 355 if prefix == normsubpath:
356 356 return True
357 357 else:
358 358 sub = ctx.sub(prefix)
359 359 return sub.checknested(subpath[len(prefix) + 1:])
360 360 else:
361 361 parts.pop()
362 362 return False
363 363
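The `_checknested` method above walks the candidate subpath from longest prefix to shortest, checking each joined prefix against the working context's substate. A minimal, self-contained sketch of that prefix walk (illustrative names, not Mercurial's actual helpers):

```python
# Hypothetical sketch of the prefix walk in _checknested: join the path
# components, test against the known subrepo state, and drop the last
# component until a match is found or the parts run out.
def longest_subrepo_prefix(subpath, substate):
    parts = subpath.split('/')
    while parts:
        prefix = '/'.join(parts)
        if prefix in substate:
            return prefix
        parts.pop()
    return None
```

With `substate = {'sub': ...}`, a path like `sub/dir/file` resolves to the `sub` subrepository.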
364 364 def peer(self):
365 365 return localpeer(self) # not cached to avoid reference cycle
366 366
367 367 def unfiltered(self):
368 368 """Return unfiltered version of the repository
369 369
370 370 Intended to be overridden by the filtered repo."""
371 371 return self
372 372
373 373 def filtered(self, name):
374 374 """Return a filtered version of a repository"""
375 375 # build a new class with the mixin and the current class
376 376 # (possibly subclass of the repo)
377 377 class proxycls(repoview.repoview, self.unfiltered().__class__):
378 378 pass
379 379 return proxycls(self, name)
380 380
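The `filtered` method builds a class dynamically so the view mixin layers over whatever concrete repo class is in use, including extension subclasses. A small illustration of the same dynamic-class trick (the mixin and repo here are stand-ins, not the real `repoview.repoview`):

```python
# Illustrative stand-ins: a view mixin combined at runtime with the
# concrete repo class, mirroring proxycls(repoview.repoview, cls).
class ViewMixin(object):
    def visible(self):
        # a toy "filter": keep only even revision numbers
        return [r for r in self.revisions if r % 2 == 0]

class BaseRepo(object):
    def __init__(self, revisions):
        self.revisions = revisions

def make_filtered(repo):
    # build the combined class on the fly, as filtered() does
    cls = type('proxycls', (ViewMixin, repo.__class__), {})
    return cls(repo.revisions)
```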
381 381 @repofilecache('bookmarks')
382 382 def _bookmarks(self):
383 383 return bookmarks.bmstore(self)
384 384
385 385 @repofilecache('bookmarks.current')
386 386 def _bookmarkcurrent(self):
387 387 return bookmarks.readcurrent(self)
388 388
389 389 def bookmarkheads(self, bookmark):
390 390 name = bookmark.split('@', 1)[0]
391 391 heads = []
392 392 for mark, n in self._bookmarks.iteritems():
393 393 if mark.split('@', 1)[0] == name:
394 394 heads.append(n)
395 395 return heads
396 396
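In `bookmarkheads` above, the portion of a bookmark name before the first `@` identifies the bookmark family, so `foo@remote` shares heads with `foo`. A hedged sketch of that matching (function name is illustrative):

```python
# Sketch of the name matching in bookmarkheads: compare the part of each
# mark before the first '@' against the requested bookmark's base name.
def matching_heads(bookmark, marks):
    name = bookmark.split('@', 1)[0]
    return [node for mark, node in sorted(marks.items())
            if mark.split('@', 1)[0] == name]
```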
397 397 @storecache('phaseroots')
398 398 def _phasecache(self):
399 399 return phases.phasecache(self, self._phasedefaults)
400 400
401 401 @storecache('obsstore')
402 402 def obsstore(self):
403 403 # read default format for new obsstore.
404 404 defaultformat = self.ui.configint('format', 'obsstore-version', None)
405 405 # rely on obsstore class default when possible.
406 406 kwargs = {}
407 407 if defaultformat is not None:
408 408 kwargs['defaultformat'] = defaultformat
409 409 readonly = not obsolete.isenabled(self, obsolete.createmarkersopt)
410 410 store = obsolete.obsstore(self.sopener, readonly=readonly,
411 411 **kwargs)
412 412 if store and readonly:
413 413 # message is rare enough to not be translated
414 414 msg = 'obsolete feature not enabled but %i markers found!\n'
415 415 self.ui.warn(msg % len(list(store)))
416 416 return store
417 417
418 418 @storecache('00changelog.i')
419 419 def changelog(self):
420 420 c = changelog.changelog(self.sopener)
421 421 if 'HG_PENDING' in os.environ:
422 422 p = os.environ['HG_PENDING']
423 423 if p.startswith(self.root):
424 424 c.readpending('00changelog.i.a')
425 425 return c
426 426
427 427 @storecache('00manifest.i')
428 428 def manifest(self):
429 429 return manifest.manifest(self.sopener)
430 430
431 431 @repofilecache('dirstate')
432 432 def dirstate(self):
433 433 warned = [0]
434 434 def validate(node):
435 435 try:
436 436 self.changelog.rev(node)
437 437 return node
438 438 except error.LookupError:
439 439 if not warned[0]:
440 440 warned[0] = True
441 441 self.ui.warn(_("warning: ignoring unknown"
442 442 " working parent %s!\n") % short(node))
443 443 return nullid
444 444
445 445 return dirstate.dirstate(self.opener, self.ui, self.root, validate)
446 446
447 447 def __getitem__(self, changeid):
448 448 if changeid is None:
449 449 return context.workingctx(self)
450 450 return context.changectx(self, changeid)
451 451
452 452 def __contains__(self, changeid):
453 453 try:
454 454 return bool(self.lookup(changeid))
455 455 except error.RepoLookupError:
456 456 return False
457 457
458 458 def __nonzero__(self):
459 459 return True
460 460
461 461 def __len__(self):
462 462 return len(self.changelog)
463 463
464 464 def __iter__(self):
465 465 return iter(self.changelog)
466 466
467 467 def revs(self, expr, *args):
468 468 '''Return a list of revisions matching the given revset'''
469 469 expr = revset.formatspec(expr, *args)
470 470 m = revset.match(None, expr)
471 471 return m(self, revset.spanset(self))
472 472
473 473 def set(self, expr, *args):
474 474 '''
475 475 Yield a context for each matching revision, after doing arg
476 476 replacement via revset.formatspec
477 477 '''
478 478 for r in self.revs(expr, *args):
479 479 yield self[r]
480 480
481 481 def url(self):
482 482 return 'file:' + self.root
483 483
484 484 def hook(self, name, throw=False, **args):
485 485 """Call a hook, passing this repo instance.
486 486
487 487 This a convenience method to aid invoking hooks. Extensions likely
488 488 won't call this unless they have registered a custom hook or are
489 489 replacing code that is expected to call a hook.
490 490 """
491 491 return hook.hook(self.ui, self, name, throw, **args)
492 492
493 493 @unfilteredmethod
494 494 def _tag(self, names, node, message, local, user, date, extra={},
495 495 editor=False):
496 496 if isinstance(names, str):
497 497 names = (names,)
498 498
499 499 branches = self.branchmap()
500 500 for name in names:
501 501 self.hook('pretag', throw=True, node=hex(node), tag=name,
502 502 local=local)
503 503 if name in branches:
504 504 self.ui.warn(_("warning: tag %s conflicts with existing"
505 505 " branch name\n") % name)
506 506
507 507 def writetags(fp, names, munge, prevtags):
508 508 fp.seek(0, 2)
509 509 if prevtags and prevtags[-1] != '\n':
510 510 fp.write('\n')
511 511 for name in names:
512 512 m = munge and munge(name) or name
513 513 if (self._tagscache.tagtypes and
514 514 name in self._tagscache.tagtypes):
515 515 old = self.tags().get(name, nullid)
516 516 fp.write('%s %s\n' % (hex(old), m))
517 517 fp.write('%s %s\n' % (hex(node), m))
518 518 fp.close()
519 519
520 520 prevtags = ''
521 521 if local:
522 522 try:
523 523 fp = self.opener('localtags', 'r+')
524 524 except IOError:
525 525 fp = self.opener('localtags', 'a')
526 526 else:
527 527 prevtags = fp.read()
528 528
529 529 # local tags are stored in the current charset
530 530 writetags(fp, names, None, prevtags)
531 531 for name in names:
532 532 self.hook('tag', node=hex(node), tag=name, local=local)
533 533 return
534 534
535 535 try:
536 536 fp = self.wfile('.hgtags', 'rb+')
537 537 except IOError, e:
538 538 if e.errno != errno.ENOENT:
539 539 raise
540 540 fp = self.wfile('.hgtags', 'ab')
541 541 else:
542 542 prevtags = fp.read()
543 543
544 544 # committed tags are stored in UTF-8
545 545 writetags(fp, names, encoding.fromlocal, prevtags)
546 546
547 547 fp.close()
548 548
549 549 self.invalidatecaches()
550 550
551 551 if '.hgtags' not in self.dirstate:
552 552 self[None].add(['.hgtags'])
553 553
554 554 m = matchmod.exact(self.root, '', ['.hgtags'])
555 555 tagnode = self.commit(message, user, date, extra=extra, match=m,
556 556 editor=editor)
557 557
558 558 for name in names:
559 559 self.hook('tag', node=hex(node), tag=name, local=local)
560 560
561 561 return tagnode
562 562
563 563 def tag(self, names, node, message, local, user, date, editor=False):
564 564 '''tag a revision with one or more symbolic names.
565 565
566 566 names is a list of strings or, when adding a single tag, names may be a
567 567 string.
568 568
569 569 if local is True, the tags are stored in a per-repository file.
570 570 otherwise, they are stored in the .hgtags file, and a new
571 571 changeset is committed with the change.
572 572
573 573 keyword arguments:
574 574
575 575 local: whether to store tags in non-version-controlled file
576 576 (default False)
577 577
578 578 message: commit message to use if committing
579 579
580 580 user: name of user to use if committing
581 581
582 582 date: date tuple to use if committing'''
583 583
584 584 if not local:
585 585 m = matchmod.exact(self.root, '', ['.hgtags'])
586 586 if util.any(self.status(match=m, unknown=True, ignored=True)):
587 587 raise util.Abort(_('working copy of .hgtags is changed'),
588 588 hint=_('please commit .hgtags manually'))
589 589
590 590 self.tags() # instantiate the cache
591 591 self._tag(names, node, message, local, user, date, editor=editor)
592 592
593 593 @filteredpropertycache
594 594 def _tagscache(self):
595 595 '''Returns a tagscache object that contains various tags related
596 596 caches.'''
597 597
598 598 # This simplifies its cache management by having one decorated
599 599 # function (this one) and the rest simply fetch things from it.
600 600 class tagscache(object):
601 601 def __init__(self):
602 602 # These two define the set of tags for this repository. tags
603 603 # maps tag name to node; tagtypes maps tag name to 'global' or
604 604 # 'local'. (Global tags are defined by .hgtags across all
605 605 # heads, and local tags are defined in .hg/localtags.)
606 606 # They constitute the in-memory cache of tags.
607 607 self.tags = self.tagtypes = None
608 608
609 609 self.nodetagscache = self.tagslist = None
610 610
611 611 cache = tagscache()
612 612 cache.tags, cache.tagtypes = self._findtags()
613 613
614 614 return cache
615 615
616 616 def tags(self):
617 617 '''return a mapping of tag to node'''
618 618 t = {}
619 619 if self.changelog.filteredrevs:
620 620 tags, tt = self._findtags()
621 621 else:
622 622 tags = self._tagscache.tags
623 623 for k, v in tags.iteritems():
624 624 try:
625 625 # ignore tags to unknown nodes
626 626 self.changelog.rev(v)
627 627 t[k] = v
628 628 except (error.LookupError, ValueError):
629 629 pass
630 630 return t
631 631
632 632 def _findtags(self):
633 633 '''Do the hard work of finding tags. Return a pair of dicts
634 634 (tags, tagtypes) where tags maps tag name to node, and tagtypes
635 635 maps tag name to a string like \'global\' or \'local\'.
636 636 Subclasses or extensions are free to add their own tags, but
637 637 should be aware that the returned dicts will be retained for the
638 638 duration of the localrepo object.'''
639 639
640 640 # XXX what tagtype should subclasses/extensions use? Currently
641 641 # mq and bookmarks add tags, but do not set the tagtype at all.
642 642 # Should each extension invent its own tag type? Should there
643 643 # be one tagtype for all such "virtual" tags? Or is the status
644 644 # quo fine?
645 645
646 646 alltags = {} # map tag name to (node, hist)
647 647 tagtypes = {}
648 648
649 649 tagsmod.findglobaltags(self.ui, self, alltags, tagtypes)
650 650 tagsmod.readlocaltags(self.ui, self, alltags, tagtypes)
651 651
652 652 # Build the return dicts. Have to re-encode tag names because
653 653 # the tags module always uses UTF-8 (in order not to lose info
654 654 # writing to the cache), but the rest of Mercurial wants them in
655 655 # local encoding.
656 656 tags = {}
657 657 for (name, (node, hist)) in alltags.iteritems():
658 658 if node != nullid:
659 659 tags[encoding.tolocal(name)] = node
660 660 tags['tip'] = self.changelog.tip()
661 661 tagtypes = dict([(encoding.tolocal(name), value)
662 662 for (name, value) in tagtypes.iteritems()])
663 663 return (tags, tagtypes)
664 664
665 665 def tagtype(self, tagname):
666 666 '''
667 667 return the type of the given tag. result can be:
668 668
669 669 'local' : a local tag
670 670 'global' : a global tag
671 671 None : tag does not exist
672 672 '''
673 673
674 674 return self._tagscache.tagtypes.get(tagname)
675 675
676 676 def tagslist(self):
677 677 '''return a list of tags ordered by revision'''
678 678 if not self._tagscache.tagslist:
679 679 l = []
680 680 for t, n in self.tags().iteritems():
681 681 l.append((self.changelog.rev(n), t, n))
682 682 self._tagscache.tagslist = [(t, n) for r, t, n in sorted(l)]
683 683
684 684 return self._tagscache.tagslist
685 685
686 686 def nodetags(self, node):
687 687 '''return the tags associated with a node'''
688 688 if not self._tagscache.nodetagscache:
689 689 nodetagscache = {}
690 690 for t, n in self._tagscache.tags.iteritems():
691 691 nodetagscache.setdefault(n, []).append(t)
692 692 for tags in nodetagscache.itervalues():
693 693 tags.sort()
694 694 self._tagscache.nodetagscache = nodetagscache
695 695 return self._tagscache.nodetagscache.get(node, [])
696 696
697 697 def nodebookmarks(self, node):
698 698 marks = []
699 699 for bookmark, n in self._bookmarks.iteritems():
700 700 if n == node:
701 701 marks.append(bookmark)
702 702 return sorted(marks)
703 703
704 704 def branchmap(self):
705 705 '''returns a dictionary {branch: [branchheads]} with branchheads
706 706 ordered by increasing revision number'''
707 707 branchmap.updatecache(self)
708 708 return self._branchcaches[self.filtername]
709 709
710 710 def branchtip(self, branch):
711 711 '''return the tip node for a given branch'''
712 712 try:
713 713 return self.branchmap().branchtip(branch)
714 714 except KeyError:
715 715 raise error.RepoLookupError(_("unknown branch '%s'") % branch)
716 716
717 717 def lookup(self, key):
718 718 return self[key].node()
719 719
720 720 def lookupbranch(self, key, remote=None):
721 721 repo = remote or self
722 722 if key in repo.branchmap():
723 723 return key
724 724
725 725 repo = (remote and remote.local()) and remote or self
726 726 return repo[key].branch()
727 727
728 728 def known(self, nodes):
729 729 nm = self.changelog.nodemap
730 730 pc = self._phasecache
731 731 result = []
732 732 for n in nodes:
733 733 r = nm.get(n)
734 734 resp = not (r is None or pc.phase(self, r) >= phases.secret)
735 735 result.append(resp)
736 736 return result
737 737
738 738 def local(self):
739 739 return self
740 740
741 741 def cancopy(self):
742 742 # so statichttprepo's override of local() works
743 743 if not self.local():
744 744 return False
745 745 if not self.ui.configbool('phases', 'publish', True):
746 746 return True
747 747 # if publishing we can't copy if there is filtered content
748 748 return not self.filtered('visible').changelog.filteredrevs
749 749
750 750 def join(self, f, *insidef):
751 751 return os.path.join(self.path, f, *insidef)
752 752
753 753 def wjoin(self, f, *insidef):
754 754 return os.path.join(self.root, f, *insidef)
755 755
756 756 def file(self, f):
757 757 if f[0] == '/':
758 758 f = f[1:]
759 759 return filelog.filelog(self.sopener, f)
760 760
761 761 def changectx(self, changeid):
762 762 return self[changeid]
763 763
764 764 def parents(self, changeid=None):
765 765 '''get list of changectxs for parents of changeid'''
766 766 return self[changeid].parents()
767 767
768 768 def setparents(self, p1, p2=nullid):
769 769 self.dirstate.beginparentchange()
770 770 copies = self.dirstate.setparents(p1, p2)
771 771 pctx = self[p1]
772 772 if copies:
773 773 # Adjust copy records, the dirstate cannot do it, it
774 774 # requires access to parents manifests. Preserve them
775 775 # only for entries added to first parent.
776 776 for f in copies:
777 777 if f not in pctx and copies[f] in pctx:
778 778 self.dirstate.copy(copies[f], f)
779 779 if p2 == nullid:
780 780 for f, s in sorted(self.dirstate.copies().items()):
781 781 if f not in pctx and s not in pctx:
782 782 self.dirstate.copy(None, f)
783 783 self.dirstate.endparentchange()
784 784
785 785 def filectx(self, path, changeid=None, fileid=None):
786 786 """changeid can be a changeset revision, node, or tag.
787 787 fileid can be a file revision or node."""
788 788 return context.filectx(self, path, changeid, fileid)
789 789
790 790 def getcwd(self):
791 791 return self.dirstate.getcwd()
792 792
793 793 def pathto(self, f, cwd=None):
794 794 return self.dirstate.pathto(f, cwd)
795 795
796 796 def wfile(self, f, mode='r'):
797 797 return self.wopener(f, mode)
798 798
799 799 def _link(self, f):
800 800 return self.wvfs.islink(f)
801 801
802 802 def _loadfilter(self, filter):
803 803 if filter not in self.filterpats:
804 804 l = []
805 805 for pat, cmd in self.ui.configitems(filter):
806 806 if cmd == '!':
807 807 continue
808 808 mf = matchmod.match(self.root, '', [pat])
809 809 fn = None
810 810 params = cmd
811 811 for name, filterfn in self._datafilters.iteritems():
812 812 if cmd.startswith(name):
813 813 fn = filterfn
814 814 params = cmd[len(name):].lstrip()
815 815 break
816 816 if not fn:
817 817 fn = lambda s, c, **kwargs: util.filter(s, c)
818 818 # Wrap old filters not supporting keyword arguments
819 819 if not inspect.getargspec(fn)[2]:
820 820 oldfn = fn
821 821 fn = lambda s, c, **kwargs: oldfn(s, c)
822 822 l.append((mf, fn, params))
823 823 self.filterpats[filter] = l
824 824 return self.filterpats[filter]
825 825
826 826 def _filter(self, filterpats, filename, data):
827 827 for mf, fn, cmd in filterpats:
828 828 if mf(filename):
829 829 self.ui.debug("filtering %s through %s\n" % (filename, cmd))
830 830 data = fn(data, cmd, ui=self.ui, repo=self, filename=filename)
831 831 break
832 832
833 833 return data
834 834
835 835 @unfilteredpropertycache
836 836 def _encodefilterpats(self):
837 837 return self._loadfilter('encode')
838 838
839 839 @unfilteredpropertycache
840 840 def _decodefilterpats(self):
841 841 return self._loadfilter('decode')
842 842
843 843 def adddatafilter(self, name, filter):
844 844 self._datafilters[name] = filter
845 845
846 846 def wread(self, filename):
847 847 if self._link(filename):
848 848 data = self.wvfs.readlink(filename)
849 849 else:
850 850 data = self.wopener.read(filename)
851 851 return self._filter(self._encodefilterpats, filename, data)
852 852
853 853 def wwrite(self, filename, data, flags):
854 854 data = self._filter(self._decodefilterpats, filename, data)
855 855 if 'l' in flags:
856 856 self.wopener.symlink(data, filename)
857 857 else:
858 858 self.wopener.write(filename, data)
859 859 if 'x' in flags:
860 860 self.wvfs.setflags(filename, False, True)
861 861
862 862 def wwritedata(self, filename, data):
863 863 return self._filter(self._decodefilterpats, filename, data)
864 864
865 865 def transaction(self, desc, report=None):
866 866 tr = self._transref and self._transref() or None
867 867 if tr and tr.running():
868 868 return tr.nest()
869 869
870 870 # abort here if the journal already exists
871 871 if self.svfs.exists("journal"):
872 872 raise error.RepoError(
873 873 _("abandoned transaction found"),
874 874 hint=_("run 'hg recover' to clean up transaction"))
875 875
876 876 def onclose():
877 877 self.store.write(self._transref())
878 878
879 879 self._writejournal(desc)
880 880 renames = [(vfs, x, undoname(x)) for vfs, x in self._journalfiles()]
881 881 rp = report and report or self.ui.warn
882 882 tr = transaction.transaction(rp, self.sopener,
883 883 "journal",
884 884 aftertrans(renames),
885 885 self.store.createmode,
886 886 onclose)
887 887 self._transref = weakref.ref(tr)
888 888 return tr
889 889
890 890 def _journalfiles(self):
891 891 return ((self.svfs, 'journal'),
892 892 (self.vfs, 'journal.dirstate'),
893 893 (self.vfs, 'journal.branch'),
894 894 (self.vfs, 'journal.desc'),
895 895 (self.vfs, 'journal.bookmarks'),
896 896 (self.svfs, 'journal.phaseroots'))
897 897
898 898 def undofiles(self):
899 899 return [(vfs, undoname(x)) for vfs, x in self._journalfiles()]
900 900
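`undofiles` maps each journal backup to its post-transaction `undo` counterpart via the `undoname` helper (defined elsewhere in this file). A guess at its behaviour, inferred from the `journal.*`/`undo.*` file pairs used by `_writejournal` and `_rollback` in this section:

```python
# Assumed behaviour of undoname (not the real implementation): rewrite the
# leading 'journal' of a backup file name to 'undo', preserving any
# directory prefix.
import os.path

def undoname_sketch(path):
    base, name = os.path.split(path)
    assert name.startswith('journal')
    name = name.replace('journal', 'undo', 1)
    return os.path.join(base, name)
```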
901 901 def _writejournal(self, desc):
902 902 self.opener.write("journal.dirstate",
903 903 self.opener.tryread("dirstate"))
904 904 self.opener.write("journal.branch",
905 905 encoding.fromlocal(self.dirstate.branch()))
906 906 self.opener.write("journal.desc",
907 907 "%d\n%s\n" % (len(self), desc))
908 908 self.opener.write("journal.bookmarks",
909 909 self.opener.tryread("bookmarks"))
910 910 self.sopener.write("journal.phaseroots",
911 911 self.sopener.tryread("phaseroots"))
912 912
913 913 def recover(self):
914 914 lock = self.lock()
915 915 try:
916 916 if self.svfs.exists("journal"):
917 917 self.ui.status(_("rolling back interrupted transaction\n"))
918 918 transaction.rollback(self.sopener, "journal",
919 919 self.ui.warn)
920 920 self.invalidate()
921 921 return True
922 922 else:
923 923 self.ui.warn(_("no interrupted transaction available\n"))
924 924 return False
925 925 finally:
926 926 lock.release()
927 927
928 928 def rollback(self, dryrun=False, force=False):
929 929 wlock = lock = None
930 930 try:
931 931 wlock = self.wlock()
932 932 lock = self.lock()
933 933 if self.svfs.exists("undo"):
934 934 return self._rollback(dryrun, force)
935 935 else:
936 936 self.ui.warn(_("no rollback information available\n"))
937 937 return 1
938 938 finally:
939 939 release(lock, wlock)
940 940
941 941 @unfilteredmethod # Until we get smarter cache management
942 942 def _rollback(self, dryrun, force):
943 943 ui = self.ui
944 944 try:
945 945 args = self.opener.read('undo.desc').splitlines()
946 946 (oldlen, desc, detail) = (int(args[0]), args[1], None)
947 947 if len(args) >= 3:
948 948 detail = args[2]
949 949 oldtip = oldlen - 1
950 950
951 951 if detail and ui.verbose:
952 952 msg = (_('repository tip rolled back to revision %s'
953 953 ' (undo %s: %s)\n')
954 954 % (oldtip, desc, detail))
955 955 else:
956 956 msg = (_('repository tip rolled back to revision %s'
957 957 ' (undo %s)\n')
958 958 % (oldtip, desc))
959 959 except IOError:
960 960 msg = _('rolling back unknown transaction\n')
961 961 desc = None
962 962
963 963 if not force and self['.'] != self['tip'] and desc == 'commit':
964 964 raise util.Abort(
965 965 _('rollback of last commit while not checked out '
966 966 'may lose data'), hint=_('use -f to force'))
967 967
968 968 ui.status(msg)
969 969 if dryrun:
970 970 return 0
971 971
972 972 parents = self.dirstate.parents()
973 973 self.destroying()
974 974 transaction.rollback(self.sopener, 'undo', ui.warn)
975 975 if self.vfs.exists('undo.bookmarks'):
976 976 self.vfs.rename('undo.bookmarks', 'bookmarks')
977 977 if self.svfs.exists('undo.phaseroots'):
978 978 self.svfs.rename('undo.phaseroots', 'phaseroots')
979 979 self.invalidate()
980 980
981 981 parentgone = (parents[0] not in self.changelog.nodemap or
982 982 parents[1] not in self.changelog.nodemap)
983 983 if parentgone:
984 984 self.vfs.rename('undo.dirstate', 'dirstate')
985 985 try:
986 986 branch = self.opener.read('undo.branch')
987 987 self.dirstate.setbranch(encoding.tolocal(branch))
988 988 except IOError:
989 989 ui.warn(_('named branch could not be reset: '
990 990 'current branch is still \'%s\'\n')
991 991 % self.dirstate.branch())
992 992
993 993 self.dirstate.invalidate()
994 994 parents = tuple([p.rev() for p in self.parents()])
995 995 if len(parents) > 1:
996 996 ui.status(_('working directory now based on '
997 997 'revisions %d and %d\n') % parents)
998 998 else:
999 999 ui.status(_('working directory now based on '
1000 1000 'revision %d\n') % parents)
1001 1001 # TODO: if we know which new heads may result from this rollback, pass
1002 1002 # them to destroy(), which will prevent the branchhead cache from being
1003 1003 # invalidated.
1004 1004 self.destroyed()
1005 1005 return 0
1006 1006
1007 1007 def invalidatecaches(self):
1008 1008
1009 1009 if '_tagscache' in vars(self):
1010 1010 # can't use delattr on proxy
1011 1011 del self.__dict__['_tagscache']
1012 1012
1013 1013 self.unfiltered()._branchcaches.clear()
1014 1014 self.invalidatevolatilesets()
1015 1015
1016 1016 def invalidatevolatilesets(self):
1017 1017 self.filteredrevcache.clear()
1018 1018 obsolete.clearobscaches(self)
1019 1019
1020 1020 def invalidatedirstate(self):
1021 1021 '''Invalidates the dirstate, causing the next call to dirstate
1022 1022 to check if it was modified since the last time it was read,
1023 1023 rereading it if it has.
1024 1024
1025 1025 This is different from dirstate.invalidate() in that it doesn't
1026 1026 always reread the dirstate. Use dirstate.invalidate() if you want to
1027 1027 explicitly read the dirstate again (i.e. restoring it to a previous
1028 1028 known good state).'''
1029 1029 if hasunfilteredcache(self, 'dirstate'):
1030 1030 for k in self.dirstate._filecache:
1031 1031 try:
1032 1032 delattr(self.dirstate, k)
1033 1033 except AttributeError:
1034 1034 pass
1035 1035 delattr(self.unfiltered(), 'dirstate')
1036 1036
1037 1037 def invalidate(self):
1038 1038 unfiltered = self.unfiltered() # all file caches are stored unfiltered
1039 1039 for k in self._filecache:
1040 1040 # dirstate is invalidated separately in invalidatedirstate()
1041 1041 if k == 'dirstate':
1042 1042 continue
1043 1043
1044 1044 try:
1045 1045 delattr(unfiltered, k)
1046 1046 except AttributeError:
1047 1047 pass
1048 1048 self.invalidatecaches()
1049 1049 self.store.invalidatecaches()
1050 1050
1051 1051 def invalidateall(self):
1052 1052 '''Fully invalidates both store and non-store parts, causing the
1053 1053 subsequent operation to reread any outside changes.'''
1054 1054 # extension should hook this to invalidate its caches
1055 1055 self.invalidate()
1056 1056 self.invalidatedirstate()
1057 1057
1058 1058 def _lock(self, vfs, lockname, wait, releasefn, acquirefn, desc):
1059 1059 try:
1060 1060 l = lockmod.lock(vfs, lockname, 0, releasefn, desc=desc)
1061 1061 except error.LockHeld, inst:
1062 1062 if not wait:
1063 1063 raise
1064 1064 self.ui.warn(_("waiting for lock on %s held by %r\n") %
1065 1065 (desc, inst.locker))
1066 1066 # default to 600 seconds timeout
1067 1067 l = lockmod.lock(vfs, lockname,
1068 1068 int(self.ui.config("ui", "timeout", "600")),
1069 1069 releasefn, desc=desc)
1070 1070 self.ui.warn(_("got lock after %s seconds\n") % l.delay)
1071 1071 if acquirefn:
1072 1072 acquirefn()
1073 1073 return l
1074 1074
1075 1075 def _afterlock(self, callback):
1076 1076 """add a callback to the current repository lock.
1077 1077
1078 1078 The callback will be executed on lock release."""
1079 1079 l = self._lockref and self._lockref()
1080 1080 if l:
1081 1081 l.postrelease.append(callback)
1082 1082 else:
1083 1083 callback()
1084 1084
1085 1085 def lock(self, wait=True):
1086 1086 '''Lock the repository store (.hg/store) and return a weak reference
1087 1087 to the lock. Use this before modifying the store (e.g. committing or
1088 1088 stripping). If you are opening a transaction, get a lock as well.'''
1089 1089 l = self._lockref and self._lockref()
1090 1090 if l is not None and l.held:
1091 1091 l.lock()
1092 1092 return l
1093 1093
1094 1094 def unlock():
1095 1095 for k, ce in self._filecache.items():
1096 1096 if k == 'dirstate' or k not in self.__dict__:
1097 1097 continue
1098 1098 ce.refresh()
1099 1099
1100 1100 l = self._lock(self.svfs, "lock", wait, unlock,
1101 1101 self.invalidate, _('repository %s') % self.origroot)
1102 1102 self._lockref = weakref.ref(l)
1103 1103 return l
1104 1104
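The `lock` method above keeps only a weak reference to the lock object: the lock dies when its last strong holder releases it, yet a still-live lock can be re-entered instead of re-acquired. A minimal illustration of that pattern (class names are illustrative, not Mercurial's):

```python
# Weakref-cached lock, mirroring the pattern in lock()/wlock(): re-enter a
# live lock if one exists, otherwise acquire a fresh one and cache a weak
# reference to it on the repo.
import weakref

class Lock(object):
    def __init__(self):
        self.held = 1
    def lock(self):
        self.held += 1

class LockingRepo(object):
    _lockref = None
    def lock(self):
        l = self._lockref and self._lockref()
        if l is not None and l.held:
            l.lock()          # re-enter the existing live lock
            return l
        l = Lock()            # acquire a fresh lock
        self._lockref = weakref.ref(l)
        return l
```

Because only a weak reference is cached, dropping every strong reference to the returned lock lets it be collected, and the next `lock()` call acquires a new one.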
1105 1105 def wlock(self, wait=True):
1106 1106 '''Lock the non-store parts of the repository (everything under
1107 1107 .hg except .hg/store) and return a weak reference to the lock.
1108 1108 Use this before modifying files in .hg.'''
1109 1109 l = self._wlockref and self._wlockref()
1110 1110 if l is not None and l.held:
1111 1111 l.lock()
1112 1112 return l
1113 1113
1114 1114 def unlock():
1115 1115 if self.dirstate.pendingparentchange():
1116 1116 self.dirstate.invalidate()
1117 1117 else:
1118 1118 self.dirstate.write()
1119 1119
1120 1120 self._filecache['dirstate'].refresh()
1121 1121
1122 1122 l = self._lock(self.vfs, "wlock", wait, unlock,
1123 1123 self.invalidatedirstate, _('working directory of %s') %
1124 1124 self.origroot)
1125 1125 self._wlockref = weakref.ref(l)
1126 1126 return l
1127 1127
1128 1128 def _filecommit(self, fctx, manifest1, manifest2, linkrev, tr, changelist):
1129 1129 """
1130 1130 commit an individual file as part of a larger transaction
1131 1131 """
1132 1132
1133 1133 fname = fctx.path()
1134 1134 text = fctx.data()
1135 1135 flog = self.file(fname)
1136 1136 fparent1 = manifest1.get(fname, nullid)
1137 1137 fparent2 = manifest2.get(fname, nullid)
1138 1138
1139 1139 meta = {}
1140 1140 copy = fctx.renamed()
1141 1141 if copy and copy[0] != fname:
1142 1142 # Mark the new revision of this file as a copy of another
1143 1143 # file. This copy data will effectively act as a parent
1144 1144 # of this new revision. If this is a merge, the first
1145 1145 # parent will be the nullid (meaning "look up the copy data")
1146 1146 # and the second one will be the other parent. For example:
1147 1147 #
1148 1148 # 0 --- 1 --- 3 rev1 changes file foo
1149 1149 # \ / rev2 renames foo to bar and changes it
1150 1150 # \- 2 -/ rev3 should have bar with all changes and
1151 1151 # should record that bar descends from
1152 1152 # bar in rev2 and foo in rev1
1153 1153 #
1154 1154 # this allows this merge to succeed:
1155 1155 #
1156 1156 # 0 --- 1 --- 3 rev4 reverts the content change from rev2
1157 1157 # \ / merging rev3 and rev4 should use bar@rev2
1158 1158 # \- 2 --- 4 as the merge base
1159 1159 #
1160 1160
1161 1161 cfname = copy[0]
1162 1162 crev = manifest1.get(cfname)
1163 1163 newfparent = fparent2
1164 1164
1165 1165 if manifest2: # branch merge
1166 1166 if fparent2 == nullid or crev is None: # copied on remote side
1167 1167 if cfname in manifest2:
1168 1168 crev = manifest2[cfname]
1169 1169 newfparent = fparent1
1170 1170
1171 1171 # find source in nearest ancestor if we've lost track
1172 1172 if not crev:
1173 1173 self.ui.debug(" %s: searching for copy revision for %s\n" %
1174 1174 (fname, cfname))
1175 1175 for ancestor in self[None].ancestors():
1176 1176 if cfname in ancestor:
1177 1177 crev = ancestor[cfname].filenode()
1178 1178 break
1179 1179
1180 1180 if crev:
1181 1181 self.ui.debug(" %s: copy %s:%s\n" % (fname, cfname, hex(crev)))
1182 1182 meta["copy"] = cfname
1183 1183 meta["copyrev"] = hex(crev)
1184 1184 fparent1, fparent2 = nullid, newfparent
1185 1185 else:
1186 1186 self.ui.warn(_("warning: can't find ancestor for '%s' "
1187 1187 "copied from '%s'!\n") % (fname, cfname))
1188 1188
1189 1189 elif fparent1 == nullid:
1190 1190 fparent1, fparent2 = fparent2, nullid
1191 1191 elif fparent2 != nullid:
1192 1192 # is one parent an ancestor of the other?
1193 1193 fparentancestors = flog.commonancestorsheads(fparent1, fparent2)
1194 1194 if fparent1 in fparentancestors:
1195 1195 fparent1, fparent2 = fparent2, nullid
1196 1196 elif fparent2 in fparentancestors:
1197 1197 fparent2 = nullid
1198 1198
1199 1199 # is the file changed?
1200 1200 if fparent2 != nullid or flog.cmp(fparent1, text) or meta:
1201 1201 changelist.append(fname)
1202 1202 return flog.add(text, meta, tr, linkrev, fparent1, fparent2)
1203 1203 # are just the flags changed during merge?
1204 1204 elif fname in manifest1 and manifest1.flags(fname) != fctx.flags():
1205 1205 changelist.append(fname)
1206 1206
1207 1207 return fparent1
1208 1208
1209 1209 @unfilteredmethod
1210 1210 def commit(self, text="", user=None, date=None, match=None, force=False,
1211 1211 editor=False, extra={}):
1212 1212 """Add a new revision to current repository.
1213 1213
1214 1214 Revision information is gathered from the working directory,
1215 1215 match can be used to filter the committed files. If editor is
1216 1216 supplied, it is called to get a commit message.
1217 1217 """
1218 1218
1219 1219 def fail(f, msg):
1220 1220 raise util.Abort('%s: %s' % (f, msg))
1221 1221
1222 1222 if not match:
1223 1223 match = matchmod.always(self.root, '')
1224 1224
1225 1225 if not force:
1226 1226 vdirs = []
1227 1227 match.explicitdir = vdirs.append
1228 1228 match.bad = fail
1229 1229
1230 1230 wlock = self.wlock()
1231 1231 try:
1232 1232 wctx = self[None]
1233 1233 merge = len(wctx.parents()) > 1
1234 1234
1235 1235 if (not force and merge and match and
1236 1236 (match.files() or match.anypats())):
1237 1237 raise util.Abort(_('cannot partially commit a merge '
1238 1238 '(do not specify files or patterns)'))
1239 1239
1240 1240 status = self.status(match=match, clean=force)
1241 1241 if force:
1242 1242 status.modified.extend(status.clean) # mq may commit clean files
1243 1243
1244 1244 # check subrepos
1245 1245 subs = []
1246 1246 commitsubs = set()
1247 1247 newstate = wctx.substate.copy()
1248 1248 # only manage subrepos and .hgsubstate if .hgsub is present
1249 1249 if '.hgsub' in wctx:
1250 1250 # we'll decide whether to track this ourselves, thanks
1251 1251 for c in status.modified, status.added, status.removed:
1252 1252 if '.hgsubstate' in c:
1253 1253 c.remove('.hgsubstate')
1254 1254
1255 1255 # compare current state to last committed state
1256 1256 # build new substate based on last committed state
1257 1257 oldstate = wctx.p1().substate
1258 1258 for s in sorted(newstate.keys()):
1259 1259 if not match(s):
1260 1260 # ignore working copy, use old state if present
1261 1261 if s in oldstate:
1262 1262 newstate[s] = oldstate[s]
1263 1263 continue
1264 1264 if not force:
1265 1265 raise util.Abort(
1266 1266 _("commit with new subrepo %s excluded") % s)
1267 1267 if wctx.sub(s).dirty(True):
1268 1268 if not self.ui.configbool('ui', 'commitsubrepos'):
1269 1269 raise util.Abort(
1270 1270 _("uncommitted changes in subrepo %s") % s,
1271 1271 hint=_("use --subrepos for recursive commit"))
1272 1272 subs.append(s)
1273 1273 commitsubs.add(s)
1274 1274 else:
1275 1275 bs = wctx.sub(s).basestate()
1276 1276 newstate[s] = (newstate[s][0], bs, newstate[s][2])
1277 1277 if oldstate.get(s, (None, None, None))[1] != bs:
1278 1278 subs.append(s)
1279 1279
1280 1280 # check for removed subrepos
1281 1281 for p in wctx.parents():
1282 1282 r = [s for s in p.substate if s not in newstate]
1283 1283 subs += [s for s in r if match(s)]
1284 1284 if subs:
1285 1285 if (not match('.hgsub') and
1286 1286 '.hgsub' in (wctx.modified() + wctx.added())):
1287 1287 raise util.Abort(
1288 1288 _("can't commit subrepos without .hgsub"))
1289 1289 status.modified.insert(0, '.hgsubstate')
1290 1290
1291 1291 elif '.hgsub' in status.removed:
1292 1292 # clean up .hgsubstate when .hgsub is removed
1293 1293 if ('.hgsubstate' in wctx and
1294 1294 '.hgsubstate' not in (status.modified + status.added +
1295 1295 status.removed)):
1296 1296 status.removed.insert(0, '.hgsubstate')
1297 1297
1298 1298 # make sure all explicit patterns are matched
1299 1299 if not force and match.files():
1300 1300 matched = set(status.modified + status.added + status.removed)
1301 1301
1302 1302 for f in match.files():
1303 1303 f = self.dirstate.normalize(f)
1304 1304 if f == '.' or f in matched or f in wctx.substate:
1305 1305 continue
1306 1306 if f in status.deleted:
1307 1307 fail(f, _('file not found!'))
1308 1308 if f in vdirs: # visited directory
1309 1309 d = f + '/'
1310 1310 for mf in matched:
1311 1311 if mf.startswith(d):
1312 1312 break
1313 1313 else:
1314 1314 fail(f, _("no match under directory!"))
1315 1315 elif f not in self.dirstate:
1316 1316 fail(f, _("file not tracked!"))
1317 1317
1318 1318 cctx = context.workingctx(self, text, user, date, extra, status)
1319 1319
1320 1320 if (not force and not extra.get("close") and not merge
1321 1321 and not cctx.files()
1322 1322 and wctx.branch() == wctx.p1().branch()):
1323 1323 return None
1324 1324
1325 1325 if merge and cctx.deleted():
1326 1326 raise util.Abort(_("cannot commit merge with missing files"))
1327 1327
1328 1328 ms = mergemod.mergestate(self)
1329 1329 for f in status.modified:
1330 1330 if f in ms and ms[f] == 'u':
1331 1331 raise util.Abort(_("unresolved merge conflicts "
1332 1332 "(see hg help resolve)"))
1333 1333
1334 1334 if editor:
1335 1335 cctx._text = editor(self, cctx, subs)
1336 1336 edited = (text != cctx._text)
1337 1337
1338 1338 # Save commit message in case this transaction gets rolled back
1339 1339 # (e.g. by a pretxncommit hook). Leave the content alone on
1340 1340 # the assumption that the user will use the same editor again.
1341 1341 msgfn = self.savecommitmessage(cctx._text)
1342 1342
1343 1343 # commit subs and write new state
1344 1344 if subs:
1345 1345 for s in sorted(commitsubs):
1346 1346 sub = wctx.sub(s)
1347 1347 self.ui.status(_('committing subrepository %s\n') %
1348 1348 subrepo.subrelpath(sub))
1349 1349 sr = sub.commit(cctx._text, user, date)
1350 1350 newstate[s] = (newstate[s][0], sr)
1351 1351 subrepo.writestate(self, newstate)
1352 1352
1353 1353 p1, p2 = self.dirstate.parents()
1354 1354 hookp1, hookp2 = hex(p1), (p2 != nullid and hex(p2) or '')
1355 1355 try:
1356 1356 self.hook("precommit", throw=True, parent1=hookp1,
1357 1357 parent2=hookp2)
1358 1358 ret = self.commitctx(cctx, True)
1359 1359 except: # re-raises
1360 1360 if edited:
1361 1361 self.ui.write(
1362 1362 _('note: commit message saved in %s\n') % msgfn)
1363 1363 raise
1364 1364
1365 1365 # update bookmarks, dirstate and mergestate
1366 1366 bookmarks.update(self, [p1, p2], ret)
1367 1367 cctx.markcommitted(ret)
1368 1368 ms.reset()
1369 1369 finally:
1370 1370 wlock.release()
1371 1371
1372 1372 def commithook(node=hex(ret), parent1=hookp1, parent2=hookp2):
1373 1373 self.hook("commit", node=node, parent1=parent1, parent2=parent2)
1374 1374 self._afterlock(commithook)
1375 1375 return ret
1376 1376
1377 1377 @unfilteredmethod
1378 1378 def commitctx(self, ctx, error=False):
1379 1379 """Add a new revision to current repository.
1380 1380 Revision information is passed via the context argument.
1381 1381 """
1382 1382
1383 1383 tr = None
1384 1384 p1, p2 = ctx.p1(), ctx.p2()
1385 1385 user = ctx.user()
1386 1386
1387 1387 lock = self.lock()
1388 1388 try:
1389 1389 tr = self.transaction("commit")
1390 1390 trp = weakref.proxy(tr)
1391 1391
1392 1392 if ctx.files():
1393 1393 m1 = p1.manifest()
1394 1394 m2 = p2.manifest()
1395 1395 m = m1.copy()
1396 1396
1397 1397 # check in files
1398 1398 added = []
1399 1399 changed = []
1400 1400 removed = list(ctx.removed())
1401 1401 linkrev = len(self)
1402 1402 for f in sorted(ctx.modified() + ctx.added()):
1403 1403 self.ui.note(f + "\n")
1404 1404 try:
1405 1405 fctx = ctx[f]
1406 1406 if fctx is None:
1407 1407 removed.append(f)
1408 1408 else:
1409 1409 added.append(f)
1410 1410 m[f] = self._filecommit(fctx, m1, m2, linkrev,
1411 1411 trp, changed)
1412 1412 m.setflag(f, fctx.flags())
1413 1413 except OSError, inst:
1414 1414 self.ui.warn(_("trouble committing %s!\n") % f)
1415 1415 raise
1416 1416 except IOError, inst:
1417 1417 errcode = getattr(inst, 'errno', errno.ENOENT)
1418 1418 if error or errcode and errcode != errno.ENOENT:
1419 1419 self.ui.warn(_("trouble committing %s!\n") % f)
1420 1420 raise
1421 1421
1422 1422 # update manifest
1423 1423 removed = [f for f in sorted(removed) if f in m1 or f in m2]
1424 1424 drop = [f for f in removed if f in m]
1425 1425 for f in drop:
1426 1426 del m[f]
1427 1427 mn = self.manifest.add(m, trp, linkrev,
1428 1428 p1.manifestnode(), p2.manifestnode(),
1429 1429 added, drop)
1430 1430 files = changed + removed
1431 1431 else:
1432 1432 mn = p1.manifestnode()
1433 1433 files = []
1434 1434
1435 1435 # update changelog
1436 1436 self.changelog.delayupdate()
1437 1437 n = self.changelog.add(mn, files, ctx.description(),
1438 1438 trp, p1.node(), p2.node(),
1439 1439 user, ctx.date(), ctx.extra().copy())
1440 1440 p = lambda: self.changelog.writepending() and self.root or ""
1441 1441 xp1, xp2 = p1.hex(), p2 and p2.hex() or ''
1442 1442 self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
1443 1443 parent2=xp2, pending=p)
1444 1444 self.changelog.finalize(trp)
1445 1445 # set the new commit to its proper phase
1446 1446 targetphase = subrepo.newcommitphase(self.ui, ctx)
1447 1447 if targetphase:
1448 1448 # retracting the boundary does not alter parent changesets.
1449 1449 # if a parent has a higher phase, the resulting phase will
1450 1450 # be compliant anyway
1451 1451 #
1452 1452 # if minimal phase was 0 we don't need to retract anything
1453 1453 phases.retractboundary(self, tr, targetphase, [n])
1454 1454 tr.close()
1455 1455 branchmap.updatecache(self.filtered('served'))
1456 1456 return n
1457 1457 finally:
1458 1458 if tr:
1459 1459 tr.release()
1460 1460 lock.release()
1461 1461
1462 1462 @unfilteredmethod
1463 1463 def destroying(self):
1464 1464 '''Inform the repository that nodes are about to be destroyed.
1465 1465 Intended for use by strip and rollback, so there's a common
1466 1466 place for anything that has to be done before destroying history.
1467 1467
1468 1468 This is mostly useful for saving state that is in memory and waiting
1469 1469 to be flushed when the current lock is released. Because a call to
1470 1470 destroyed is imminent, the repo will be invalidated causing those
1471 1471 changes to stay in memory (waiting for the next unlock), or vanish
1472 1472 completely.
1473 1473 '''
1474 1474 # When using the same lock to commit and strip, the phasecache is left
1475 1475 # dirty after committing. Then when we strip, the repo is invalidated,
1476 1476 # causing those changes to disappear.
1477 1477 if '_phasecache' in vars(self):
1478 1478 self._phasecache.write()
1479 1479
1480 1480 @unfilteredmethod
1481 1481 def destroyed(self):
1482 1482 '''Inform the repository that nodes have been destroyed.
1483 1483 Intended for use by strip and rollback, so there's a common
1484 1484 place for anything that has to be done after destroying history.
1485 1485 '''
1486 1486 # When one tries to:
1487 1487 # 1) destroy nodes thus calling this method (e.g. strip)
1488 1488 # 2) use phasecache somewhere (e.g. commit)
1489 1489 #
1490 1490 # then 2) will fail because the phasecache contains nodes that were
1491 1491 # removed. We can either remove phasecache from the filecache,
1492 1492 # causing it to reload next time it is accessed, or simply filter
1493 1493 # the removed nodes now and write the updated cache.
1494 1494 self._phasecache.filterunknown(self)
1495 1495 self._phasecache.write()
1496 1496
1497 1497 # update the 'served' branch cache to help read-only server processes
1498 1498 # Thanks to branchcache collaboration this is done from the nearest
1499 1499 # filtered subset and it is expected to be fast.
1500 1500 branchmap.updatecache(self.filtered('served'))
1501 1501
1502 1502 # Ensure the persistent tag cache is updated. Doing it now
1503 1503 # means that the tag cache only has to worry about destroyed
1504 1504 # heads immediately after a strip/rollback. That in turn
1505 1505 # guarantees that "cachetip == currenttip" (comparing both rev
1506 1506 # and node) always means no nodes have been added or destroyed.
1507 1507
1508 1508 # XXX this is suboptimal when qrefresh'ing: we strip the current
1509 1509 # head, refresh the tag cache, then immediately add a new head.
1510 1510 # But I think doing it this way is necessary for the "instant
1511 1511 # tag cache retrieval" case to work.
1512 1512 self.invalidate()
1513 1513
1514 1514 def walk(self, match, node=None):
1515 1515 '''
1516 1516 walk recursively through the directory tree or a given
1517 1517 changeset, finding all files matched by the match
1518 1518 function
1519 1519 '''
1520 1520 return self[node].walk(match)
1521 1521
1522 1522 def status(self, node1='.', node2=None, match=None,
1523 1523 ignored=False, clean=False, unknown=False,
1524 1524 listsubrepos=False):
1525 1525 '''a convenience method that calls node1.status(node2)'''
1526 1526 return self[node1].status(node2, match, ignored, clean, unknown,
1527 1527 listsubrepos)
1528 1528
1529 1529 def heads(self, start=None):
1530 1530 heads = self.changelog.heads(start)
1531 1531 # sort the output in rev descending order
1532 1532 return sorted(heads, key=self.changelog.rev, reverse=True)
1533 1533
1534 1534 def branchheads(self, branch=None, start=None, closed=False):
1535 1535 '''return a (possibly filtered) list of heads for the given branch
1536 1536
1537 1537 Heads are returned in topological order, from newest to oldest.
1538 1538 If branch is None, use the dirstate branch.
1539 1539 If start is not None, return only heads reachable from start.
1540 1540 If closed is True, return heads that are marked as closed as well.
1541 1541 '''
1542 1542 if branch is None:
1543 1543 branch = self[None].branch()
1544 1544 branches = self.branchmap()
1545 1545 if branch not in branches:
1546 1546 return []
1547 1547 # the cache returns heads ordered lowest to highest
1548 1548 bheads = list(reversed(branches.branchheads(branch, closed=closed)))
1549 1549 if start is not None:
1550 1550 # filter out the heads that cannot be reached from startrev
1551 1551 fbheads = set(self.changelog.nodesbetween([start], bheads)[2])
1552 1552 bheads = [h for h in bheads if h in fbheads]
1553 1553 return bheads
1554 1554
1555 1555 def branches(self, nodes):
1556 1556 if not nodes:
1557 1557 nodes = [self.changelog.tip()]
1558 1558 b = []
1559 1559 for n in nodes:
1560 1560 t = n
1561 1561 while True:
1562 1562 p = self.changelog.parents(n)
1563 1563 if p[1] != nullid or p[0] == nullid:
1564 1564 b.append((t, n, p[0], p[1]))
1565 1565 break
1566 1566 n = p[0]
1567 1567 return b
1568 1568
1569 1569 def between(self, pairs):
1570 1570 r = []
1571 1571
1572 1572 for top, bottom in pairs:
1573 1573 n, l, i = top, [], 0
1574 1574 f = 1
1575 1575
1576 1576 while n != bottom and n != nullid:
1577 1577 p = self.changelog.parents(n)[0]
1578 1578 if i == f:
1579 1579 l.append(n)
1580 1580 f = f * 2
1581 1581 n = p
1582 1582 i += 1
1583 1583
1584 1584 r.append(l)
1585 1585
1586 1586 return r
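The loop in `between` samples ancestors at exponentially growing distances (1, 2, 4, 8, …) from `top` toward `bottom`, which keeps the returned lists logarithmic in the distance walked. A standalone sketch of the same loop over an assumed linear chain of integer "nodes" (parent of `n` is `n - 1`, with `0` playing the role of the null node; `between_linear` is a hypothetical name, not a Mercurial API):

```python
def between_linear(top, bottom):
    # Collect nodes at distances 1, 2, 4, ... while walking parent
    # links from top down toward bottom, mirroring the loop above.
    n, l, i, f = top, [], 0, 1
    while n != bottom and n != 0:
        p = n - 1          # single parent in a linear chain
        if i == f:         # reached the next sampling distance
            l.append(n)
            f = f * 2      # double the distance to the next sample
        n = p
        i += 1
    return l

# walking from node 20 down to node 5 samples the nodes at
# distances 1, 2, 4 and 8 from the top: 19, 18, 16, 12
samples = between_linear(20, 5)
```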
1587 1587
1588 1588 def checkpush(self, pushop):
1589 1589 """Extensions can override this function if additional checks have
1590 1590 to be performed before pushing, or call it if they override the push
1591 1591 command.
1592 1592 """
1593 1593 pass
1594 1594
1595 1595 @unfilteredpropertycache
1596 1596 def prepushoutgoinghooks(self):
1597 1597 """Return a util.hooks object consisting of "(repo, remote, outgoing)"
1598 1598 functions, which are called before pushing changesets.
1599 1599 """
1600 1600 return util.hooks()
1601 1601
1602 1602 def stream_in(self, remote, requirements):
1603 1603 lock = self.lock()
1604 1604 try:
1605 1605 # Save remote branchmap. We will use it later
1606 1606 # to speed up branchcache creation
1607 1607 rbranchmap = None
1608 1608 if remote.capable("branchmap"):
1609 1609 rbranchmap = remote.branchmap()
1610 1610
1611 1611 fp = remote.stream_out()
1612 1612 l = fp.readline()
1613 1613 try:
1614 1614 resp = int(l)
1615 1615 except ValueError:
1616 1616 raise error.ResponseError(
1617 1617 _('unexpected response from remote server:'), l)
1618 1618 if resp == 1:
1619 1619 raise util.Abort(_('operation forbidden by server'))
1620 1620 elif resp == 2:
1621 1621 raise util.Abort(_('locking the remote repository failed'))
1622 1622 elif resp != 0:
1623 1623 raise util.Abort(_('the server sent an unknown error code'))
1624 1624 self.ui.status(_('streaming all changes\n'))
1625 1625 l = fp.readline()
1626 1626 try:
1627 1627 total_files, total_bytes = map(int, l.split(' ', 1))
1628 1628 except (ValueError, TypeError):
1629 1629 raise error.ResponseError(
1630 1630 _('unexpected response from remote server:'), l)
1631 1631 self.ui.status(_('%d files to transfer, %s of data\n') %
1632 1632 (total_files, util.bytecount(total_bytes)))
1633 1633 handled_bytes = 0
1634 1634 self.ui.progress(_('clone'), 0, total=total_bytes)
1635 1635 start = time.time()
1636 1636
1637 1637 tr = self.transaction(_('clone'))
1638 1638 try:
1639 1639 for i in xrange(total_files):
1640 1640 # XXX doesn't support '\n' or '\r' in filenames
1641 1641 l = fp.readline()
1642 1642 try:
1643 1643 name, size = l.split('\0', 1)
1644 1644 size = int(size)
1645 1645 except (ValueError, TypeError):
1646 1646 raise error.ResponseError(
1647 1647 _('unexpected response from remote server:'), l)
1648 1648 if self.ui.debugflag:
1649 1649 self.ui.debug('adding %s (%s)\n' %
1650 1650 (name, util.bytecount(size)))
1651 1651 # for backwards compat, name was partially encoded
1652 1652 ofp = self.sopener(store.decodedir(name), 'w')
1653 1653 for chunk in util.filechunkiter(fp, limit=size):
1654 1654 handled_bytes += len(chunk)
1655 1655 self.ui.progress(_('clone'), handled_bytes,
1656 1656 total=total_bytes)
1657 1657 ofp.write(chunk)
1658 1658 ofp.close()
1659 1659 tr.close()
1660 1660 finally:
1661 1661 tr.release()
1662 1662
1663 1663 # Writing straight to files circumvented the in-memory caches
1664 1664 self.invalidate()
1665 1665
1666 1666 elapsed = time.time() - start
1667 1667 if elapsed <= 0:
1668 1668 elapsed = 0.001
1669 1669 self.ui.progress(_('clone'), None)
1670 1670 self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
1671 1671 (util.bytecount(total_bytes), elapsed,
1672 1672 util.bytecount(total_bytes / elapsed)))
1673 1673
1674 1674 # new requirements = old non-format requirements +
1675 1675 # new format-related
1676 1676 # requirements from the streamed-in repository
1677 1677 requirements.update(set(self.requirements) - self.supportedformats)
1678 1678 self._applyrequirements(requirements)
1679 1679 self._writerequirements()
1680 1680
1681 1681 if rbranchmap:
1682 1682 rbheads = []
1683 1683 for bheads in rbranchmap.itervalues():
1684 1684 rbheads.extend(bheads)
1685 1685
1686 1686 if rbheads:
1687 1687 rtiprev = max((int(self.changelog.rev(node))
1688 1688 for node in rbheads))
1689 1689 cache = branchmap.branchcache(rbranchmap,
1690 1690 self[rtiprev].node(),
1691 1691 rtiprev)
1692 1692 # Try to stick it as low as possible
1693 1693 # filters above 'served' are unlikely to be fetched from a clone
1694 1694 for candidate in ('base', 'immutable', 'served'):
1695 1695 rview = self.filtered(candidate)
1696 1696 if cache.validfor(rview):
1697 1697 self._branchcaches[candidate] = cache
1698 1698 cache.write(rview)
1699 1699 break
1700 1700 self.invalidate()
1701 1701 return len(self.heads()) + 1
1702 1702 finally:
1703 1703 lock.release()
1704 1704
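As `stream_in` above shows, the streaming-clone payload starts with a response-code line, then a `total_files total_bytes` line, then one `name\0size\n` header per file followed by exactly `size` bytes of content. A minimal standalone parser for that framing, sketched from the reads performed above (Python 3, toy helper names, not Mercurial code):

```python
import io

def parse_stream_header(fp):
    # response code line: 0 means OK, anything else is a refusal
    resp = int(fp.readline())
    if resp != 0:
        raise ValueError('server refused stream clone: %d' % resp)
    # 'total_files total_bytes' line
    total_files, total_bytes = map(int, fp.readline().split(b' ', 1))
    return total_files, total_bytes

def parse_entries(fp, total_files):
    # each entry: 'name\0size\n' header, then exactly size bytes of data
    for _ in range(total_files):
        name, size = fp.readline().split(b'\0', 1)
        size = int(size)
        yield name.decode('ascii'), fp.read(size)

raw = io.BytesIO(b'0\n2 11\ndata/a.i\x005\nAAAAAdata/b.i\x006\nBBBBBB')
nfiles, nbytes = parse_stream_header(raw)
files = dict(parse_entries(raw, nfiles))
```

Like the real code, this tolerates the trailing newline in the size field because `int()` strips surrounding whitespace; as the XXX comment above notes, the format cannot represent filenames containing `\n` or `\r`.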
1705 1705 def clone(self, remote, heads=[], stream=False):
1706 1706 '''clone remote repository.
1707 1707
1708 1708 keyword arguments:
1709 1709 heads: list of revs to clone (forces use of pull)
1710 1710 stream: use streaming clone if possible'''
1711 1711
1712 1712 # now, all clients that can request uncompressed clones can
1713 1713 # read repo formats supported by all servers that can serve
1714 1714 # them.
1715 1715
1716 1716 # if revlog format changes, client will have to check version
1717 1717 # and format flags on "stream" capability, and use
1718 1718 # uncompressed only if compatible.
1719 1719
1720 1720 if not stream:
1721 1721 # if the server explicitly prefers to stream (for fast LANs)
1722 1722 stream = remote.capable('stream-preferred')
1723 1723
1724 1724 if stream and not heads:
1725 1725 # 'stream' means remote revlog format is revlogv1 only
1726 1726 if remote.capable('stream'):
1727 1727 return self.stream_in(remote, set(('revlogv1',)))
1728 1728 # otherwise, 'streamreqs' contains the remote revlog format
1729 1729 streamreqs = remote.capable('streamreqs')
1730 1730 if streamreqs:
1731 1731 streamreqs = set(streamreqs.split(','))
1732 1732 # if we support it, stream in and adjust our requirements
1733 1733 if not streamreqs - self.supportedformats:
1734 1734 return self.stream_in(remote, streamreqs)
1735 1735
1736 1736 quiet = self.ui.backupconfig('ui', 'quietbookmarkmove')
1737 1737 try:
1738 1738 self.ui.setconfig('ui', 'quietbookmarkmove', True, 'clone')
1739 1739 ret = exchange.pull(self, remote, heads).cgresult
1740 1740 finally:
1741 1741 self.ui.restoreconfig(quiet)
1742 1742 return ret
1743 1743
1744 1744 def pushkey(self, namespace, key, old, new):
1745 1745 self.hook('prepushkey', throw=True, namespace=namespace, key=key,
1746 1746 old=old, new=new)
1747 1747 self.ui.debug('pushing key for "%s:%s"\n' % (namespace, key))
1748 1748 ret = pushkey.push(self, namespace, key, old, new)
1749 1749 self.hook('pushkey', namespace=namespace, key=key, old=old, new=new,
1750 1750 ret=ret)
1751 1751 return ret
1752 1752
1753 1753 def listkeys(self, namespace):
1754 1754 self.hook('prelistkeys', throw=True, namespace=namespace)
1755 1755 self.ui.debug('listing keys for "%s"\n' % namespace)
1756 1756 values = pushkey.list(self, namespace)
1757 1757 self.hook('listkeys', namespace=namespace, values=values)
1758 1758 return values
1759 1759
1760 1760 def debugwireargs(self, one, two, three=None, four=None, five=None):
1761 1761 '''used to test argument passing over the wire'''
1762 1762 return "%s %s %s %s %s" % (one, two, three, four, five)
1763 1763
1764 1764 def savecommitmessage(self, text):
1765 1765 fp = self.opener('last-message.txt', 'wb')
1766 1766 try:
1767 1767 fp.write(text)
1768 1768 finally:
1769 1769 fp.close()
1770 1770 return self.pathto(fp.name[len(self.root) + 1:])
1771 1771
1772 1772 # used to avoid circular references so destructors work
1773 1773 def aftertrans(files):
1774 1774 renamefiles = [tuple(t) for t in files]
1775 1775 def a():
1776 1776 for vfs, src, dest in renamefiles:
1777 1777 try:
1778 1778 vfs.rename(src, dest)
1779 1779 except OSError: # journal file does not yet exist
1780 1780 pass
1781 1781 return a
1782 1782
1783 1783 def undoname(fn):
1784 1784 base, name = os.path.split(fn)
1785 1785 assert name.startswith('journal')
1786 1786 return os.path.join(base, name.replace('journal', 'undo', 1))
1787 1787
1788 1788 def instance(ui, path, create):
1789 1789 return localrepository(ui, util.urllocalpath(path), create)
1790 1790
1791 1791 def islocal(path):
1792 1792 return True
@@ -1,869 +1,869 b''
1 1 # wireproto.py - generic wire protocol support functions
2 2 #
3 3 # Copyright 2005-2010 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 import urllib, tempfile, os, sys
9 9 from i18n import _
10 10 from node import bin, hex
11 11 import changegroup as changegroupmod, bundle2, pushkey as pushkeymod
12 12 import peer, error, encoding, util, store, exchange
13 13
14 14
15 15 class abstractserverproto(object):
16 16 """abstract class that summarizes the protocol API
17 17
18 18 Used as reference and documentation.
19 19 """
20 20
21 21 def getargs(self, args):
22 22 """return the value for arguments in <args>
23 23
24 24 returns a list of values (same order as <args>)"""
25 25 raise NotImplementedError()
26 26
27 27 def getfile(self, fp):
28 28 """write the whole content of a file into a file like object
29 29
30 30 The file is in the form::
31 31
32 32 (<chunk-size>\n<chunk>)+0\n
33 33
34 34 The chunk size is the ASCII decimal representation of the chunk length.
35 35 """
36 36 raise NotImplementedError()
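The `(<chunk-size>\n<chunk>)+0\n` framing described in the docstring can be decoded with a short generator; this is a sketch against an in-memory stream, not the actual server implementation:

```python
import io

def readchunked(fp):
    # Decode a (<chunk-size>\n<chunk>)+0\n framed stream: each chunk is
    # preceded by its length as an ASCII decimal line; a size of 0 ends
    # the stream.
    while True:
        size = int(fp.readline())
        if size == 0:
            return
        yield fp.read(size)

stream = io.BytesIO(b'5\nhello6\n world0\n')
chunks = b''.join(readchunked(stream))
```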
37 37
38 38 def redirect(self):
39 39 """may setup interception for stdout and stderr
40 40
41 41 See also the `restore` method."""
42 42 raise NotImplementedError()
43 43
44 44 # If the `redirect` function does install interception, the `restore`
45 45 # function MUST be defined. If interception is not used, this function
46 46 # MUST NOT be defined.
47 47 #
48 48 # left commented here on purpose
49 49 #
50 50 #def restore(self):
51 51 # """reinstall previous stdout and stderr and return intercepted stdout
52 52 # """
53 53 # raise NotImplementedError()
54 54
55 55 def groupchunks(self, cg):
56 56 """return 4096 chunks from a changegroup object
57 57
58 58 Some protocols may have compressed the contents."""
59 59 raise NotImplementedError()
60 60
61 61 # abstract batching support
62 62
63 63 class future(object):
64 64 '''placeholder for a value to be set later'''
65 65 def set(self, value):
66 66 if util.safehasattr(self, 'value'):
67 67 raise error.RepoError("future is already set")
68 68 self.value = value
69 69
70 70 class batcher(object):
71 71 '''base class for batches of commands submittable in a single request
72 72
73 73 All methods invoked on instances of this class are simply queued and
74 74 return a future for the result. Once you call submit(), all the queued
75 75 calls are performed and the results set in their respective futures.
76 76 '''
77 77 def __init__(self):
78 78 self.calls = []
79 79 def __getattr__(self, name):
80 80 def call(*args, **opts):
81 81 resref = future()
82 82 self.calls.append((name, args, opts, resref,))
83 83 return resref
84 84 return call
85 85 def submit(self):
86 86 pass
87 87
88 88 class localbatch(batcher):
89 89 '''performs the queued calls directly'''
90 90 def __init__(self, local):
91 91 batcher.__init__(self)
92 92 self.local = local
93 93 def submit(self):
94 94 for name, args, opts, resref in self.calls:
95 95 resref.set(getattr(self.local, name)(*args, **opts))
96 96
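The intended flow for `localbatch` is: queue calls through attribute access (each returning a future), then execute them all with `submit()`. A self-contained sketch of that flow, using a hypothetical `echo` object as the local peer (Python 3):

```python
class future(object):
    # placeholder for a value to be set later (mirrors the class above)
    def set(self, value):
        self.value = value

class batcher(object):
    # queue attribute calls and hand back futures (simplified copy)
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        def call(*args, **opts):
            resref = future()
            self.calls.append((name, args, opts, resref))
            return resref
        return call

class localbatch(batcher):
    # perform the queued calls directly on a local object
    def __init__(self, local):
        batcher.__init__(self)
        self.local = local
    def submit(self):
        for name, args, opts, resref in self.calls:
            resref.set(getattr(self.local, name)(*args, **opts))

class echo(object):          # stand-in "local peer" for the demo
    def greet(self, who):
        return 'hello ' + who

b = localbatch(echo())
f = b.greet('world')         # queued, not yet executed
b.submit()                   # executes the queue and fills the future
```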
97 97 class remotebatch(batcher):
98 98 '''batches the queued calls; uses as few roundtrips as possible'''
99 99 def __init__(self, remote):
100 100 '''remote must support _submitbatch(encbatch) and
101 101 _submitone(op, encargs)'''
102 102 batcher.__init__(self)
103 103 self.remote = remote
104 104 def submit(self):
105 105 req, rsp = [], []
106 106 for name, args, opts, resref in self.calls:
107 107 mtd = getattr(self.remote, name)
108 108 batchablefn = getattr(mtd, 'batchable', None)
109 109 if batchablefn is not None:
110 110 batchable = batchablefn(mtd.im_self, *args, **opts)
111 111 encargsorres, encresref = batchable.next()
112 112 if encresref:
113 113 req.append((name, encargsorres,))
114 114 rsp.append((batchable, encresref, resref,))
115 115 else:
116 116 resref.set(encargsorres)
117 117 else:
118 118 if req:
119 119 self._submitreq(req, rsp)
120 120 req, rsp = [], []
121 121 resref.set(mtd(*args, **opts))
122 122 if req:
123 123 self._submitreq(req, rsp)
124 124 def _submitreq(self, req, rsp):
125 125 encresults = self.remote._submitbatch(req)
126 126 for encres, r in zip(encresults, rsp):
127 127 batchable, encresref, resref = r
128 128 encresref.set(encres)
129 129 resref.set(batchable.next())
130 130
131 131 def batchable(f):
132 132 '''annotation for batchable methods
133 133
134 134 Such methods must implement a coroutine as follows:
135 135
136 136 @batchable
137 137 def sample(self, one, two=None):
138 138 # Handle locally computable results first:
139 139 if not one:
140 140 yield "a local result", None
141 141 # Build list of encoded arguments suitable for your wire protocol:
142 142 encargs = [('one', encode(one),), ('two', encode(two),)]
143 143 # Create future for injection of encoded result:
144 144 encresref = future()
145 145 # Return encoded arguments and future:
146 146 yield encargs, encresref
147 147 # Assuming the future to be filled with the result from the batched
148 148 # request now. Decode it:
149 149 yield decode(encresref.value)
150 150
151 151 The decorator returns a function which wraps this coroutine as a plain
152 152 method, but adds the original method as an attribute called "batchable",
153 153 which is used by remotebatch to split the call into separate encoding and
154 154 decoding phases.
155 155 '''
156 156 def plain(*args, **opts):
157 157 batchable = f(*args, **opts)
158 158 encargsorres, encresref = batchable.next()
159 159 if not encresref:
160 160 return encargsorres # a local result in this case
161 161 self = args[0]
162 162 encresref.set(self._submitone(f.func_name, encargsorres))
163 163 return batchable.next()
164 164 setattr(plain, 'batchable', f)
165 165 return plain
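The two-phase contract the docstring describes can be exercised with a toy coroutine: the first resumption yields the encoded arguments plus a future, the caller fills the future with the wire response, and the second resumption yields the decoded result. A sketch with hypothetical names (`sample`, the `'pong'` response), written in Python 3 `next()` style rather than the Python 2 `.next()` used above:

```python
class future(object):
    # placeholder for a value to be set later (mirrors the class above)
    def set(self, value):
        if hasattr(self, 'value'):
            raise RuntimeError('future is already set')
        self.value = value

def sample(one):
    if not one:
        # locally computable result: yield it with no future
        yield 'a local result', None
        return
    encargs = [('one', str(one))]
    encresref = future()
    yield encargs, encresref          # phase 1: encoded request + future
    yield encresref.value.upper()     # phase 2: decode the response

gen = sample('x')
encargs, ref = next(gen)   # what remotebatch would put on the wire
ref.set('pong')            # simulate the batched wire response
result = next(gen)         # decoded result
```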
166 166
167 167 # list of nodes encoding / decoding
168 168
169 169 def decodelist(l, sep=' '):
170 170 if l:
171 171 return map(bin, l.split(sep))
172 172 return []
173 173
174 174 def encodelist(l, sep=' '):
175 175 return sep.join(map(hex, l))
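The node-list codec above hex-encodes each binary node and joins them with a separator. A standalone Python 3 equivalent using `binascii` in place of Mercurial's `node.hex`/`node.bin` helpers:

```python
from binascii import hexlify, unhexlify

def encodelist(l, sep=' '):
    # hex-encode each binary node and join with the separator
    return sep.join(hexlify(n).decode('ascii') for n in l)

def decodelist(l, sep=' '):
    # inverse: split and hex-decode; an empty string means an empty list
    if l:
        return [unhexlify(s) for s in l.split(sep)]
    return []
```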
176 176
177 177 # batched call argument encoding
178 178
179 179 def escapearg(plain):
180 180 return (plain
181 181 .replace(':', '::')
182 182 .replace(',', ':,')
183 183 .replace(';', ':;')
184 184 .replace('=', ':='))
185 185
186 186 def unescapearg(escaped):
187 187 return (escaped
188 188 .replace(':=', '=')
189 189 .replace(':;', ';')
190 190 .replace(':,', ',')
191 191 .replace('::', ':'))
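The escaping above has to round-trip: `:` is the escape character, so it is doubled first, and the batch separators `,`, `;` and `=` are then prefixed with `:`; unescaping applies the replacements in the reverse order. The same pair, runnable standalone (Python 3):

```python
def escapearg(plain):
    # ':' first (it is the escape char), then the three separators
    return (plain
            .replace(':', '::')
            .replace(',', ':,')
            .replace(';', ':;')
            .replace('=', ':='))

def unescapearg(escaped):
    # inverse replacements, in the reverse order
    return (escaped
            .replace(':=', '=')
            .replace(':;', ';')
            .replace(':,', ',')
            .replace('::', ':'))
```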
192 192
193 193 # mapping of options accepted by getbundle and their types
194 194 #
195 195 # Meant to be extended by extensions. It is an extension's responsibility to ensure
196 196 # such options are properly processed in exchange.getbundle.
197 197 #
198 198 # supported types are:
199 199 #
200 200 # :nodes: list of binary nodes
201 201 # :csv: list of comma-separated values
202 202 # :plain: string with no transformation needed.
203 203 gboptsmap = {'heads': 'nodes',
204 204 'common': 'nodes',
205 205 'obsmarkers': 'boolean',
206 206 'bundlecaps': 'csv',
207 207 'listkeys': 'csv',
208 208 'cg': 'boolean'}
209 209
210 210 # client side
211 211
212 212 class wirepeer(peer.peerrepository):
213 213
214 214 def batch(self):
215 215 return remotebatch(self)
216 216 def _submitbatch(self, req):
217 217 cmds = []
218 218 for op, argsdict in req:
219 219 args = ','.join('%s=%s' % p for p in argsdict.iteritems())
220 220 cmds.append('%s %s' % (op, args))
221 221 rsp = self._call("batch", cmds=';'.join(cmds))
222 222 return rsp.split(';')
223 223 def _submitone(self, op, args):
224 224 return self._call(op, **args)
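`_submitbatch` above flattens the queued calls into a single wire string of the form `op1 k=v,k=v;op2 ...`. A minimal sketch of that encoding (with a `sorted()` added for deterministic output, which the original does not need since it preserves dict order as given):

```python
# Sketch of the batch request encoding used by _submitbatch above:
# each command becomes "op key=value,key=value", commands joined by ';'.
def encodebatch(req):
    cmds = []
    for op, argsdict in req:
        args = ','.join('%s=%s' % (k, v) for k, v in sorted(argsdict.items()))
        cmds.append('%s %s' % (op, args))
    return ';'.join(cmds)

req = [('heads', {}), ('known', {'nodes': 'abc def'})]
assert encodebatch(req) == 'heads ;known nodes=abc def'
```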
225 225
226 226 @batchable
227 227 def lookup(self, key):
228 228 self.requirecap('lookup', _('look up remote revision'))
229 229 f = future()
230 230 yield {'key': encoding.fromlocal(key)}, f
231 231 d = f.value
232 232 success, data = d[:-1].split(" ", 1)
233 233 if int(success):
234 234 yield bin(data)
235 235 self._abort(error.RepoError(data))
236 236
237 237 @batchable
238 238 def heads(self):
239 239 f = future()
240 240 yield {}, f
241 241 d = f.value
242 242 try:
243 243 yield decodelist(d[:-1])
244 244 except ValueError:
245 245 self._abort(error.ResponseError(_("unexpected response:"), d))
246 246
247 247 @batchable
248 248 def known(self, nodes):
249 249 f = future()
250 250 yield {'nodes': encodelist(nodes)}, f
251 251 d = f.value
252 252 try:
253 253 yield [bool(int(b)) for b in d]
254 254 except ValueError:
255 255 self._abort(error.ResponseError(_("unexpected response:"), d))
256 256
257 257 @batchable
258 258 def branchmap(self):
259 259 f = future()
260 260 yield {}, f
261 261 d = f.value
262 262 try:
263 263 branchmap = {}
264 264 for branchpart in d.splitlines():
265 265 branchname, branchheads = branchpart.split(' ', 1)
266 266 branchname = encoding.tolocal(urllib.unquote(branchname))
267 267 branchheads = decodelist(branchheads)
268 268 branchmap[branchname] = branchheads
269 269 yield branchmap
270 270 except TypeError:
271 271 self._abort(error.ResponseError(_("unexpected response:"), d))
272 272
273 273 def branches(self, nodes):
274 274 n = encodelist(nodes)
275 275 d = self._call("branches", nodes=n)
276 276 try:
277 277 br = [tuple(decodelist(b)) for b in d.splitlines()]
278 278 return br
279 279 except ValueError:
280 280 self._abort(error.ResponseError(_("unexpected response:"), d))
281 281
282 282 def between(self, pairs):
283 283 batch = 8 # avoid giant requests
284 284 r = []
285 285 for i in xrange(0, len(pairs), batch):
286 286 n = " ".join([encodelist(p, '-') for p in pairs[i:i + batch]])
287 287 d = self._call("between", pairs=n)
288 288 try:
289 289 r.extend(l and decodelist(l) or [] for l in d.splitlines())
290 290 except ValueError:
291 291 self._abort(error.ResponseError(_("unexpected response:"), d))
292 292 return r
293 293
294 294 @batchable
295 295 def pushkey(self, namespace, key, old, new):
296 296 if not self.capable('pushkey'):
297 297 yield False, None
298 298 f = future()
299 299 self.ui.debug('preparing pushkey for "%s:%s"\n' % (namespace, key))
300 300 yield {'namespace': encoding.fromlocal(namespace),
301 301 'key': encoding.fromlocal(key),
302 302 'old': encoding.fromlocal(old),
303 303 'new': encoding.fromlocal(new)}, f
304 304 d = f.value
305 305 d, output = d.split('\n', 1)
306 306 try:
307 307 d = bool(int(d))
308 308 except ValueError:
309 309 raise error.ResponseError(
310 310 _('push failed (unexpected response):'), d)
311 311 for l in output.splitlines(True):
312 312 self.ui.status(_('remote: '), l)
313 313 yield d
314 314
315 315 @batchable
316 316 def listkeys(self, namespace):
317 317 if not self.capable('pushkey'):
318 318 yield {}, None
319 319 f = future()
320 320 self.ui.debug('preparing listkeys for "%s"\n' % namespace)
321 321 yield {'namespace': encoding.fromlocal(namespace)}, f
322 322 d = f.value
323 323 yield pushkeymod.decodekeys(d)
324 324
325 325 def stream_out(self):
326 326 return self._callstream('stream_out')
327 327
328 328 def changegroup(self, nodes, kind):
329 329 n = encodelist(nodes)
330 330 f = self._callcompressable("changegroup", roots=n)
331 331 return changegroupmod.cg1unpacker(f, 'UN')
332 332
333 333 def changegroupsubset(self, bases, heads, kind):
334 334 self.requirecap('changegroupsubset', _('look up remote changes'))
335 335 bases = encodelist(bases)
336 336 heads = encodelist(heads)
337 337 f = self._callcompressable("changegroupsubset",
338 338 bases=bases, heads=heads)
339 339 return changegroupmod.cg1unpacker(f, 'UN')
340 340
341 341 def getbundle(self, source, **kwargs):
342 342 self.requirecap('getbundle', _('look up remote changes'))
343 343 opts = {}
344 344 for key, value in kwargs.iteritems():
345 345 if value is None:
346 346 continue
347 347 keytype = gboptsmap.get(key)
348 348 if keytype is None:
349 349 assert False, 'unexpected'
350 350 elif keytype == 'nodes':
351 351 value = encodelist(value)
352 352 elif keytype == 'csv':
353 353 value = ','.join(value)
354 354 elif keytype == 'boolean':
355 355 value = '%i' % bool(value)
356 356 elif keytype != 'plain':
357 357 raise KeyError('unknown getbundle option type %s'
358 358 % keytype)
359 359 opts[key] = value
360 360 f = self._callcompressable("getbundle", **opts)
361 361 bundlecaps = kwargs.get('bundlecaps')
362 if bundlecaps is not None and 'HG2X' in bundlecaps:
362 if bundlecaps is not None and 'HG2Y' in bundlecaps:
363 363 return bundle2.unbundle20(self.ui, f)
364 364 else:
365 365 return changegroupmod.cg1unpacker(f, 'UN')
366 366
367 367 def unbundle(self, cg, heads, source):
368 368 '''Send cg (a readable file-like object representing the
369 369 changegroup to push, typically a chunkbuffer object) to the
370 370 remote server as a bundle.
371 371
372 372 When pushing a bundle10 stream, return an integer indicating the
373 373 result of the push (see localrepository.addchangegroup()).
374 374
375 375 When pushing a bundle20 stream, return a bundle20 stream.'''
376 376
377 377 if heads != ['force'] and self.capable('unbundlehash'):
378 378 heads = encodelist(['hashed',
379 379 util.sha1(''.join(sorted(heads))).digest()])
380 380 else:
381 381 heads = encodelist(heads)
382 382
383 383 if util.safehasattr(cg, 'deltaheader'):
384 384 # this is a bundle10, do the old style call sequence
385 385 ret, output = self._callpush("unbundle", cg, heads=heads)
386 386 if ret == "":
387 387 raise error.ResponseError(
388 388 _('push failed:'), output)
389 389 try:
390 390 ret = int(ret)
391 391 except ValueError:
392 392 raise error.ResponseError(
393 393 _('push failed (unexpected response):'), ret)
394 394
395 395 for l in output.splitlines(True):
396 396 self.ui.status(_('remote: '), l)
397 397 else:
398 398 # bundle2 push. Send a stream, fetch a stream.
399 399 stream = self._calltwowaystream('unbundle', cg, heads=heads)
400 400 ret = bundle2.unbundle20(self.ui, stream)
401 401 return ret
402 402
403 403 def debugwireargs(self, one, two, three=None, four=None, five=None):
404 404 # don't pass optional arguments left at their default value
405 405 opts = {}
406 406 if three is not None:
407 407 opts['three'] = three
408 408 if four is not None:
409 409 opts['four'] = four
410 410 return self._call('debugwireargs', one=one, two=two, **opts)
411 411
412 412 def _call(self, cmd, **args):
413 413 """execute <cmd> on the server
414 414
415 415 The command is expected to return a simple string.
416 416
417 417 returns the server reply as a string."""
418 418 raise NotImplementedError()
419 419
420 420 def _callstream(self, cmd, **args):
421 421 """execute <cmd> on the server
422 422
423 423 The command is expected to return a stream.
424 424
425 425 returns the server reply as a file like object."""
426 426 raise NotImplementedError()
427 427
428 428 def _callcompressable(self, cmd, **args):
429 429 """execute <cmd> on the server
430 430
431 431 The command is expected to return a stream.
432 432
433 433 The stream may have been compressed in some implementations. This
434 434 function takes care of the decompression. This is the only difference
435 435 with _callstream.
436 436
437 437 returns the server reply as a file like object.
438 438 """
439 439 raise NotImplementedError()
440 440
441 441 def _callpush(self, cmd, fp, **args):
442 442 """execute a <cmd> on server
443 443
444 444 The command is expected to be related to a push. Push has a special
445 445 return method.
446 446
447 447 returns the server reply as a (ret, output) tuple. ret is either
448 448 empty (error) or a stringified int.
449 449 """
450 450 raise NotImplementedError()
451 451
452 452 def _calltwowaystream(self, cmd, fp, **args):
453 453 """execute <cmd> on server
454 454
455 455 The command will send a stream to the server and get a stream in reply.
456 456 """
457 457 raise NotImplementedError()
458 458
459 459 def _abort(self, exception):
460 460 """clearly abort the wire protocol connection and raise the exception
461 461 """
462 462 raise NotImplementedError()
463 463
464 464 # server side
465 465
466 466 # wire protocol command can either return a string or one of these classes.
467 467 class streamres(object):
468 468 """wireproto reply: binary stream
469 469
470 470 The call was successful and the result is a stream.
471 471 Iterate on the `self.gen` attribute to retrieve chunks.
472 472 """
473 473 def __init__(self, gen):
474 474 self.gen = gen
475 475
476 476 class pushres(object):
477 477 """wireproto reply: success with simple integer return
478 478
479 479 The call was successful and returned an integer contained in `self.res`.
480 480 """
481 481 def __init__(self, res):
482 482 self.res = res
483 483
484 484 class pusherr(object):
485 485 """wireproto reply: failure
486 486
487 487 The call failed. The `self.res` attribute contains the error message.
488 488 """
489 489 def __init__(self, res):
490 490 self.res = res
491 491
492 492 class ooberror(object):
493 493 """wireproto reply: failure of a batch of operation
494 494
495 495 Something failed during a batch call. The error message is stored in
496 496 `self.message`.
497 497 """
498 498 def __init__(self, message):
499 499 self.message = message
500 500
501 501 def dispatch(repo, proto, command):
502 502 repo = repo.filtered("served")
503 503 func, spec = commands[command]
504 504 args = proto.getargs(spec)
505 505 return func(repo, proto, *args)
506 506
507 507 def options(cmd, keys, others):
508 508 opts = {}
509 509 for k in keys:
510 510 if k in others:
511 511 opts[k] = others[k]
512 512 del others[k]
513 513 if others:
514 514 sys.stderr.write("warning: %s ignored unexpected arguments %s\n"
515 515 % (cmd, ",".join(others)))
516 516 return opts
517 517
518 518 # list of commands
519 519 commands = {}
520 520
521 521 def wireprotocommand(name, args=''):
522 522 """decorator for wire protocol command"""
523 523 def register(func):
524 524 commands[name] = (func, args)
525 525 return func
526 526 return register
527 527
528 528 @wireprotocommand('batch', 'cmds *')
529 529 def batch(repo, proto, cmds, others):
530 530 repo = repo.filtered("served")
531 531 res = []
532 532 for pair in cmds.split(';'):
533 533 op, args = pair.split(' ', 1)
534 534 vals = {}
535 535 for a in args.split(','):
536 536 if a:
537 537 n, v = a.split('=')
538 538 vals[n] = unescapearg(v)
539 539 func, spec = commands[op]
540 540 if spec:
541 541 keys = spec.split()
542 542 data = {}
543 543 for k in keys:
544 544 if k == '*':
545 545 star = {}
546 546 for key in vals.keys():
547 547 if key not in keys:
548 548 star[key] = vals[key]
549 549 data['*'] = star
550 550 else:
551 551 data[k] = vals[k]
552 552 result = func(repo, proto, *[data[k] for k in keys])
553 553 else:
554 554 result = func(repo, proto)
555 555 if isinstance(result, ooberror):
556 556 return result
557 557 res.append(escapearg(result))
558 558 return ';'.join(res)
559 559
560 560 @wireprotocommand('between', 'pairs')
561 561 def between(repo, proto, pairs):
562 562 pairs = [decodelist(p, '-') for p in pairs.split(" ")]
563 563 r = []
564 564 for b in repo.between(pairs):
565 565 r.append(encodelist(b) + "\n")
566 566 return "".join(r)
567 567
568 568 @wireprotocommand('branchmap')
569 569 def branchmap(repo, proto):
570 570 branchmap = repo.branchmap()
571 571 heads = []
572 572 for branch, nodes in branchmap.iteritems():
573 573 branchname = urllib.quote(encoding.fromlocal(branch))
574 574 branchnodes = encodelist(nodes)
575 575 heads.append('%s %s' % (branchname, branchnodes))
576 576 return '\n'.join(heads)
577 577
578 578 @wireprotocommand('branches', 'nodes')
579 579 def branches(repo, proto, nodes):
580 580 nodes = decodelist(nodes)
581 581 r = []
582 582 for b in repo.branches(nodes):
583 583 r.append(encodelist(b) + "\n")
584 584 return "".join(r)
585 585
586 586
587 587 wireprotocaps = ['lookup', 'changegroupsubset', 'branchmap', 'pushkey',
588 588 'known', 'getbundle', 'unbundlehash', 'batch']
589 589
590 590 def _capabilities(repo, proto):
591 591 """return a list of capabilities for a repo
592 592
593 593 This function exists to allow extensions to easily wrap capabilities
594 594 computation
595 595
596 596 - returns a list: easy to alter
597 597 - changes done here will be propagated to both the `capabilities` and `hello`
598 598 commands without any other action needed.
599 599 """
600 600 # copy to prevent modification of the global list
601 601 caps = list(wireprotocaps)
602 602 if _allowstream(repo.ui):
603 603 if repo.ui.configbool('server', 'preferuncompressed', False):
604 604 caps.append('stream-preferred')
605 605 requiredformats = repo.requirements & repo.supportedformats
606 606 # if our local revlogs are just revlogv1, add 'stream' cap
607 607 if not requiredformats - set(('revlogv1',)):
608 608 caps.append('stream')
609 609 # otherwise, add 'streamreqs' detailing our local revlog format
610 610 else:
611 611 caps.append('streamreqs=%s' % ','.join(requiredformats))
612 612 if repo.ui.configbool('experimental', 'bundle2-exp', False):
613 613 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
614 614 caps.append('bundle2-exp=' + urllib.quote(capsblob))
615 615 caps.append('unbundle=%s' % ','.join(changegroupmod.bundlepriority))
616 616 caps.append('httpheader=1024')
617 617 return caps
618 618
619 619 # If you are writing an extension and considering wrapping this function,
620 620 # wrap `_capabilities` instead.
621 621 @wireprotocommand('capabilities')
622 622 def capabilities(repo, proto):
623 623 return ' '.join(_capabilities(repo, proto))
624 624
625 625 @wireprotocommand('changegroup', 'roots')
626 626 def changegroup(repo, proto, roots):
627 627 nodes = decodelist(roots)
628 628 cg = changegroupmod.changegroup(repo, nodes, 'serve')
629 629 return streamres(proto.groupchunks(cg))
630 630
631 631 @wireprotocommand('changegroupsubset', 'bases heads')
632 632 def changegroupsubset(repo, proto, bases, heads):
633 633 bases = decodelist(bases)
634 634 heads = decodelist(heads)
635 635 cg = changegroupmod.changegroupsubset(repo, bases, heads, 'serve')
636 636 return streamres(proto.groupchunks(cg))
637 637
638 638 @wireprotocommand('debugwireargs', 'one two *')
639 639 def debugwireargs(repo, proto, one, two, others):
640 640 # only accept optional args from the known set
641 641 opts = options('debugwireargs', ['three', 'four'], others)
642 642 return repo.debugwireargs(one, two, **opts)
643 643
644 644 # List of options accepted by getbundle.
645 645 #
646 646 # Meant to be extended by extensions. It is the extension's responsibility to
647 647 # ensure such options are properly processed in exchange.getbundle.
648 648 gboptslist = ['heads', 'common', 'bundlecaps']
649 649
650 650 @wireprotocommand('getbundle', '*')
651 651 def getbundle(repo, proto, others):
652 652 opts = options('getbundle', gboptsmap.keys(), others)
653 653 for k, v in opts.iteritems():
654 654 keytype = gboptsmap[k]
655 655 if keytype == 'nodes':
656 656 opts[k] = decodelist(v)
657 657 elif keytype == 'csv':
658 658 opts[k] = set(v.split(','))
659 659 elif keytype == 'boolean':
660 660 opts[k] = bool(v)
661 661 elif keytype != 'plain':
662 662 raise KeyError('unknown getbundle option type %s'
663 663 % keytype)
664 664 cg = exchange.getbundle(repo, 'serve', **opts)
665 665 return streamres(proto.groupchunks(cg))
666 666
667 667 @wireprotocommand('heads')
668 668 def heads(repo, proto):
669 669 h = repo.heads()
670 670 return encodelist(h) + "\n"
671 671
672 672 @wireprotocommand('hello')
673 673 def hello(repo, proto):
674 674 '''the hello command returns a set of lines describing various
675 675 interesting things about the server, in an RFC822-like format.
676 676 Currently the only one defined is "capabilities", which
677 677 consists of a line in the form:
678 678
679 679 capabilities: space separated list of tokens
680 680 '''
681 681 return "capabilities: %s\n" % (capabilities(repo, proto))
682 682
683 683 @wireprotocommand('listkeys', 'namespace')
684 684 def listkeys(repo, proto, namespace):
685 685 d = repo.listkeys(encoding.tolocal(namespace)).items()
686 686 return pushkeymod.encodekeys(d)
687 687
688 688 @wireprotocommand('lookup', 'key')
689 689 def lookup(repo, proto, key):
690 690 try:
691 691 k = encoding.tolocal(key)
692 692 c = repo[k]
693 693 r = c.hex()
694 694 success = 1
695 695 except Exception, inst:
696 696 r = str(inst)
697 697 success = 0
698 698 return "%s %s\n" % (success, r)
699 699
700 700 @wireprotocommand('known', 'nodes *')
701 701 def known(repo, proto, nodes, others):
702 702 return ''.join(b and "1" or "0" for b in repo.known(decodelist(nodes)))
703 703
704 704 @wireprotocommand('pushkey', 'namespace key old new')
705 705 def pushkey(repo, proto, namespace, key, old, new):
706 706 # compatibility with pre-1.8 clients which were accidentally
707 707 # sending raw binary nodes rather than utf-8-encoded hex
708 708 if len(new) == 20 and new.encode('string-escape') != new:
709 709 # looks like it could be a binary node
710 710 try:
711 711 new.decode('utf-8')
712 712 new = encoding.tolocal(new) # but cleanly decodes as UTF-8
713 713 except UnicodeDecodeError:
714 714 pass # binary, leave unmodified
715 715 else:
716 716 new = encoding.tolocal(new) # normal path
717 717
718 718 if util.safehasattr(proto, 'restore'):
719 719
720 720 proto.redirect()
721 721
722 722 try:
723 723 r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
724 724 encoding.tolocal(old), new) or False
725 725 except util.Abort:
726 726 r = False
727 727
728 728 output = proto.restore()
729 729
730 730 return '%s\n%s' % (int(r), output)
731 731
732 732 r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
733 733 encoding.tolocal(old), new)
734 734 return '%s\n' % int(r)
735 735
736 736 def _allowstream(ui):
737 737 return ui.configbool('server', 'uncompressed', True, untrusted=True)
738 738
739 739 def _walkstreamfiles(repo):
740 740 # this is its own function so extensions can override it
741 741 return repo.store.walk()
742 742
743 743 @wireprotocommand('stream_out')
744 744 def stream(repo, proto):
745 745 '''If the server supports streaming clone, it advertises the "stream"
746 746 capability with a value representing the version and flags of the repo
747 747 it is serving. The client checks to see if it understands the format.
748 748
749 749 The format is simple: the server writes out a line with the number
750 750 of files, then the total number of bytes to be transferred (separated
751 751 by a space). Then, for each file, the server first writes the filename
752 752 and file size (separated by the null character), then the file contents.
753 753 '''
754 754
755 755 if not _allowstream(repo.ui):
756 756 return '1\n'
757 757
758 758 entries = []
759 759 total_bytes = 0
760 760 try:
761 761 # get consistent snapshot of repo, lock during scan
762 762 lock = repo.lock()
763 763 try:
764 764 repo.ui.debug('scanning\n')
765 765 for name, ename, size in _walkstreamfiles(repo):
766 766 if size:
767 767 entries.append((name, size))
768 768 total_bytes += size
769 769 finally:
770 770 lock.release()
771 771 except error.LockError:
772 772 return '2\n' # error: 2
773 773
774 774 def streamer(repo, entries, total):
775 775 '''stream out all metadata files in repository.'''
776 776 yield '0\n' # success
777 777 repo.ui.debug('%d files, %d bytes to transfer\n' %
778 778 (len(entries), total_bytes))
779 779 yield '%d %d\n' % (len(entries), total_bytes)
780 780
781 781 sopener = repo.sopener
782 782 oldaudit = sopener.mustaudit
783 783 debugflag = repo.ui.debugflag
784 784 sopener.mustaudit = False
785 785
786 786 try:
787 787 for name, size in entries:
788 788 if debugflag:
789 789 repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
790 790 # partially encode name over the wire for backwards compat
791 791 yield '%s\0%d\n' % (store.encodedir(name), size)
792 792 if size <= 65536:
793 793 fp = sopener(name)
794 794 try:
795 795 data = fp.read(size)
796 796 finally:
797 797 fp.close()
798 798 yield data
799 799 else:
800 800 for chunk in util.filechunkiter(sopener(name), limit=size):
801 801 yield chunk
802 802 # replace with "finally:" when support for python 2.4 has been dropped
803 803 except Exception:
804 804 sopener.mustaudit = oldaudit
805 805 raise
806 806 sopener.mustaudit = oldaudit
807 807
808 808 return streamres(streamer(repo, entries, total_bytes))
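The `stream_out` docstring above fully specifies the wire format: a status line, a `<filecount> <bytecount>` header, then per file a `<name>\0<size>` line followed by the raw contents. A client-side parser sketch for that format (illustration only; the filenames here are invented, and real clients also decode the store-encoded names):

```python
import io

# Illustrative parser for the stream_out wire format described in the
# docstring above: a status line, a "<filecount> <bytecount>" header,
# then for each file "<name>\0<size>\n" followed by <size> bytes.
def parsestream(fp):
    status = fp.readline().strip()
    assert status == '0', 'server reported an error'
    filecount, bytecount = map(int, fp.readline().split())
    files = {}
    for _ in range(filecount):
        header = fp.readline()
        name, size = header.rstrip('\n').split('\0')
        files[name] = fp.read(int(size))
    return files

raw = '0\n2 9\ndata/a.i\x005\nhello00/b.i\x004\ndata'
files = parsestream(io.StringIO(raw))
assert files == {'data/a.i': 'hello', '00/b.i': 'data'}
```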
809 809
810 810 @wireprotocommand('unbundle', 'heads')
811 811 def unbundle(repo, proto, heads):
812 812 their_heads = decodelist(heads)
813 813
814 814 try:
815 815 proto.redirect()
816 816
817 817 exchange.check_heads(repo, their_heads, 'preparing changes')
818 818
819 819 # write bundle data to temporary file because it can be big
820 820 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
821 821 fp = os.fdopen(fd, 'wb+')
822 822 r = 0
823 823 try:
824 824 proto.getfile(fp)
825 825 fp.seek(0)
826 826 gen = exchange.readbundle(repo.ui, fp, None)
827 827 r = exchange.unbundle(repo, gen, their_heads, 'serve',
828 828 proto._client())
829 829 if util.safehasattr(r, 'addpart'):
830 830 # The return looks streamable; we are in the bundle2 case and
831 831 # should return a stream.
832 832 return streamres(r.getchunks())
833 833 return pushres(r)
834 834
835 835 finally:
836 836 fp.close()
837 837 os.unlink(tempname)
838 838 except error.BundleValueError, exc:
839 839 bundler = bundle2.bundle20(repo.ui)
840 840 errpart = bundler.newpart('B2X:ERROR:UNSUPPORTEDCONTENT')
841 841 if exc.parttype is not None:
842 842 errpart.addparam('parttype', exc.parttype)
843 843 if exc.params:
844 844 errpart.addparam('params', '\0'.join(exc.params))
845 845 return streamres(bundler.getchunks())
846 846 except util.Abort, inst:
847 847 # The old code we moved used sys.stderr directly.
848 848 # We did not change it to minimise code change.
849 849 # This needs to be moved to something proper.
850 850 # Feel free to do it.
851 851 if getattr(inst, 'duringunbundle2', False):
852 852 bundler = bundle2.bundle20(repo.ui)
853 853 manargs = [('message', str(inst))]
854 854 advargs = []
855 855 if inst.hint is not None:
856 856 advargs.append(('hint', inst.hint))
857 857 bundler.addpart(bundle2.bundlepart('B2X:ERROR:ABORT',
858 858 manargs, advargs))
859 859 return streamres(bundler.getchunks())
860 860 else:
861 861 sys.stderr.write("abort: %s\n" % inst)
862 862 return pushres(0)
863 863 except error.PushRaced, exc:
864 864 if getattr(exc, 'duringunbundle2', False):
865 865 bundler = bundle2.bundle20(repo.ui)
866 866 bundler.newpart('B2X:ERROR:PUSHRACED', [('message', str(exc))])
867 867 return streamres(bundler.getchunks())
868 868 else:
869 869 return pusherr(str(exc))
@@ -1,802 +1,802 b''
1 1 This test is dedicated to testing the bundle2 container format
2 2
3 3 It tests multiple existing parts to exercise different features of the container. You
4 4 probably do not need to touch this test unless you change the binary encoding
5 5 of the bundle2 format itself.
6 6
7 7 Create an extension to test bundle2 API
8 8
9 9 $ cat > bundle2.py << EOF
10 10 > """A small extension to test bundle2 implementation
11 11 >
12 12 > Current bundle2 implementation is far too limited to be used in any core
13 13 > code. We still need to be able to test it while it grow up.
14 14 > """
15 15 >
16 16 > import sys, os
17 17 > from mercurial import cmdutil
18 18 > from mercurial import util
19 19 > from mercurial import bundle2
20 20 > from mercurial import scmutil
21 21 > from mercurial import discovery
22 22 > from mercurial import changegroup
23 23 > from mercurial import error
24 24 > from mercurial import obsolete
25 25 >
26 26 >
27 27 > try:
28 28 > import msvcrt
29 29 > msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)
30 30 > msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
31 31 > msvcrt.setmode(sys.stderr.fileno(), os.O_BINARY)
32 32 > except ImportError:
33 33 > pass
34 34 >
35 35 > cmdtable = {}
36 36 > command = cmdutil.command(cmdtable)
37 37 >
38 38 > ELEPHANTSSONG = """Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
39 39 > Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
40 40 > Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko."""
41 41 > assert len(ELEPHANTSSONG) == 178 # future tests say 178 bytes, trust it.
42 42 >
43 43 > @bundle2.parthandler('test:song')
44 44 > def songhandler(op, part):
45 45 > """handle a "test:song" bundle2 part, printing the lyrics on stdin"""
46 46 > op.ui.write('The choir starts singing:\n')
47 47 > verses = 0
48 48 > for line in part.read().split('\n'):
49 49 > op.ui.write(' %s\n' % line)
50 50 > verses += 1
51 51 > op.records.add('song', {'verses': verses})
52 52 >
53 53 > @bundle2.parthandler('test:ping')
54 54 > def pinghandler(op, part):
55 55 > op.ui.write('received ping request (id %i)\n' % part.id)
56 56 > if op.reply is not None and 'ping-pong' in op.reply.capabilities:
57 57 > op.ui.write_err('replying to ping request (id %i)\n' % part.id)
58 58 > op.reply.newpart('test:pong', [('in-reply-to', str(part.id))])
59 59 >
60 60 > @bundle2.parthandler('test:debugreply')
61 61 > def debugreply(op, part):
62 62 > """print data about the capacity of the bundle reply"""
63 63 > if op.reply is None:
64 64 > op.ui.write('debugreply: no reply\n')
65 65 > else:
66 66 > op.ui.write('debugreply: capabilities:\n')
67 67 > for cap in sorted(op.reply.capabilities):
68 68 > op.ui.write('debugreply: %r\n' % cap)
69 69 > for val in op.reply.capabilities[cap]:
70 70 > op.ui.write('debugreply: %r\n' % val)
71 71 >
72 72 > @command('bundle2',
73 73 > [('', 'param', [], 'stream level parameter'),
74 74 > ('', 'unknown', False, 'include an unknown mandatory part in the bundle'),
75 75 > ('', 'unknownparams', False, 'include an unknown part parameters in the bundle'),
76 76 > ('', 'parts', False, 'include some arbitrary parts to the bundle'),
77 77 > ('', 'reply', False, 'produce a reply bundle'),
78 78 > ('', 'pushrace', False, 'includes a check:head part with unknown nodes'),
79 79 > ('', 'genraise', False, 'includes a part that raise an exception during generation'),
80 80 > ('r', 'rev', [], 'includes those changeset in the bundle'),],
81 81 > '[OUTPUTFILE]')
82 82 > def cmdbundle2(ui, repo, path=None, **opts):
83 83 > """write a bundle2 container on standard ouput"""
84 84 > bundler = bundle2.bundle20(ui)
85 85 > for p in opts['param']:
86 86 > p = p.split('=', 1)
87 87 > try:
88 88 > bundler.addparam(*p)
89 89 > except ValueError, exc:
90 90 > raise util.Abort('%s' % exc)
91 91 >
92 92 > if opts['reply']:
93 93 > capsstring = 'ping-pong\nelephants=babar,celeste\ncity%3D%21=celeste%2Cville'
94 94 > bundler.newpart('b2x:replycaps', data=capsstring)
95 95 >
96 96 > if opts['pushrace']:
97 97 > # also serves to test the assignment of data outside of init
98 98 > part = bundler.newpart('b2x:check:heads')
99 99 > part.data = '01234567890123456789'
100 100 >
101 101 > revs = opts['rev']
102 102 > if 'rev' in opts:
103 103 > revs = scmutil.revrange(repo, opts['rev'])
104 104 > if revs:
105 105 > # very crude version of a changegroup part creation
106 106 > bundled = repo.revs('%ld::%ld', revs, revs)
107 107 > headmissing = [c.node() for c in repo.set('heads(%ld)', revs)]
108 108 > headcommon = [c.node() for c in repo.set('parents(%ld) - %ld', revs, revs)]
109 109 > outgoing = discovery.outgoing(repo.changelog, headcommon, headmissing)
110 110 > cg = changegroup.getlocalchangegroup(repo, 'test:bundle2', outgoing, None)
111 111 > bundler.newpart('b2x:changegroup', data=cg.getchunks())
112 112 >
113 113 > if opts['parts']:
114 114 > bundler.newpart('test:empty')
115 115 > # add a second one to make sure we handle multiple parts
116 116 > bundler.newpart('test:empty')
117 117 > bundler.newpart('test:song', data=ELEPHANTSSONG)
118 118 > bundler.newpart('test:debugreply')
119 119 > mathpart = bundler.newpart('test:math')
120 120 > mathpart.addparam('pi', '3.14')
121 121 > mathpart.addparam('e', '2.72')
122 122 > mathpart.addparam('cooking', 'raw', mandatory=False)
123 123 > mathpart.data = '42'
124 124 > # advisory known part with unknown mandatory param
125 125 > bundler.newpart('test:song', [('randomparam','')])
126 126 > if opts['unknown']:
127 127 > bundler.newpart('test:UNKNOWN', data='some random content')
128 128 > if opts['unknownparams']:
129 129 > bundler.newpart('test:SONG', [('randomparams', '')])
130 130 > if opts['parts']:
131 131 > bundler.newpart('test:ping')
132 132 > if opts['genraise']:
133 133 > def genraise():
134 134 > yield 'first line\n'
135 135 > raise RuntimeError('Someone set up us the bomb!')
136 136 > bundler.newpart('b2x:output', data=genraise())
137 137 >
138 138 > if path is None:
139 139 > file = sys.stdout
140 140 > else:
141 141 > file = open(path, 'wb')
142 142 >
143 143 > try:
144 144 > for chunk in bundler.getchunks():
145 145 > file.write(chunk)
146 146 > except RuntimeError, exc:
147 147 > raise util.Abort(exc)
148 148 >
149 149 > @command('unbundle2', [], '')
150 150 > def cmdunbundle2(ui, repo, replypath=None):
151 151 > """process a bundle2 stream from stdin on the current repo"""
152 152 > try:
153 153 > tr = None
154 154 > lock = repo.lock()
155 155 > tr = repo.transaction('processbundle')
156 156 > try:
157 157 > unbundler = bundle2.unbundle20(ui, sys.stdin)
158 158 > op = bundle2.processbundle(repo, unbundler, lambda: tr)
159 159 > tr.close()
160 160 > except error.BundleValueError, exc:
161 161 > raise util.Abort('missing support for %s' % exc)
162 162 > except error.PushRaced, exc:
163 163 > raise util.Abort('push race: %s' % exc)
164 164 > finally:
165 165 > if tr is not None:
166 166 > tr.release()
167 167 > lock.release()
168 168 > remains = sys.stdin.read()
169 169 > ui.write('%i unread bytes\n' % len(remains))
170 170 > if op.records['song']:
171 171 > totalverses = sum(r['verses'] for r in op.records['song'])
172 172 > ui.write('%i total verses sung\n' % totalverses)
173 173 > for rec in op.records['changegroup']:
174 174 > ui.write('addchangegroup return: %i\n' % rec['return'])
175 175 > if op.reply is not None and replypath is not None:
176 176 > file = open(replypath, 'wb')
177 177 > for chunk in op.reply.getchunks():
178 178 > file.write(chunk)
179 179 >
180 180 > @command('statbundle2', [], '')
181 181 > def cmdstatbundle2(ui, repo):
182 182 > """print statistic on the bundle2 container read from stdin"""
183 183 > unbundler = bundle2.unbundle20(ui, sys.stdin)
184 184 > try:
185 185 > params = unbundler.params
186 186 > except error.BundleValueError, exc:
187 187 > raise util.Abort('unknown parameters: %s' % exc)
188 188 > ui.write('options count: %i\n' % len(params))
189 189 > for key in sorted(params):
190 190 > ui.write('- %s\n' % key)
191 191 > value = params[key]
192 192 > if value is not None:
193 193 > ui.write(' %s\n' % value)
194 194 > count = 0
195 195 > for p in unbundler.iterparts():
196 196 > count += 1
197 197 > ui.write(' :%s:\n' % p.type)
198 198 > ui.write(' mandatory: %i\n' % len(p.mandatoryparams))
199 199 > ui.write(' advisory: %i\n' % len(p.advisoryparams))
200 200 > ui.write(' payload: %i bytes\n' % len(p.read()))
201 201 > ui.write('parts count: %i\n' % count)
202 202 > EOF
203 203 $ cat >> $HGRCPATH << EOF
204 204 > [extensions]
205 205 > bundle2=$TESTTMP/bundle2.py
206 206 > [experimental]
207 207 > bundle2-exp=True
208 208 > evolution=createmarkers
209 209 > [ui]
210 210 > ssh=python "$TESTDIR/dummyssh"
211 211 > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline}
212 212 > [web]
213 213 > push_ssl = false
214 214 > allow_push = *
215 215 > [phases]
216 216 > publish=False
217 217 > EOF
218 218
219 219 The extension requires a repo (currently unused)
220 220
221 221 $ hg init main
222 222 $ cd main
223 223 $ touch a
224 224 $ hg add a
225 225 $ hg commit -m 'a'
226 226
227 227
228 228 Empty bundle
229 229 =================
230 230
231 231 - no option
232 232 - no parts
233 233
234 234 Test bundling
235 235
236 236 $ hg bundle2
237 HG2X\x00\x00\x00\x00 (no-eol) (esc)
237 HG2Y\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
238 238
239 239 Test unbundling
240 240
241 241 $ hg bundle2 | hg statbundle2
242 242 options count: 0
243 243 parts count: 0
244 244
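The empty HG2Y stream shown above is nothing but the 4-byte magic followed by two signed big-endian 32-bit sizes. A minimal sketch (illustration only, not part of the test, and not Mercurial's actual code) of building and recognizing that stream:

```python
import struct

# The empty HG2Y stream as seen in the escaped output above: 4-byte magic,
# an int32 stream-parameter block size (0 here), and an int32 part-header
# size of 0 acting as the end-of-stream marker.
def empty_bundle():
    return b"HG2Y" + struct.pack(">i", 0) + struct.pack(">i", 0)

def is_empty_bundle(data):
    if data[:4] != b"HG2Y":
        raise ValueError("unknown bundle magic: %r" % data[:4])
    (paramssize,) = struct.unpack(">i", data[4:8])
    # the first part-header size follows the parameter blob
    (headersize,) = struct.unpack(">i", data[8 + paramssize:12 + paramssize])
    return paramssize == 0 and headersize == 0
```

With both sizes now int32 instead of the old format's int16, the empty stream is 12 bytes (`HG2Y` plus eight NUL bytes), matching the escaped output above.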
245 245 Test that old-style bundles are detected and refused
246 246
247 247 $ hg bundle --all ../bundle.hg
248 248 1 changesets found
249 249 $ hg statbundle2 < ../bundle.hg
250 250 abort: unknown bundle version 10
251 251 [255]
252 252
253 253 Test parameters
254 254 =================
255 255
256 256 - some options
257 257 - no parts
258 258
259 259 advisory parameters, no value
260 260 -------------------------------
261 261
262 262 Simplest possible parameters form
263 263
264 264 Test generation of a simple option
265 265
266 266 $ hg bundle2 --param 'caution'
267 HG2X\x00\x07caution\x00\x00 (no-eol) (esc)
267 HG2Y\x00\x00\x00\x07caution\x00\x00\x00\x00 (no-eol) (esc)
268 268
269 269 Test unbundling
270 270
271 271 $ hg bundle2 --param 'caution' | hg statbundle2
272 272 options count: 1
273 273 - caution
274 274 parts count: 0
275 275
276 276 Test generation of multiple options
277 277
278 278 $ hg bundle2 --param 'caution' --param 'meal'
279 HG2X\x00\x0ccaution meal\x00\x00 (no-eol) (esc)
279 HG2Y\x00\x00\x00\x0ccaution meal\x00\x00\x00\x00 (no-eol) (esc)
280 280
281 281 Test unbundling
282 282
283 283 $ hg bundle2 --param 'caution' --param 'meal' | hg statbundle2
284 284 options count: 2
285 285 - caution
286 286 - meal
287 287 parts count: 0
288 288
289 289 advisory parameters, with value
290 290 -------------------------------
291 291
292 292 Test generation
293 293
294 294 $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants'
295 HG2X\x00\x1ccaution meal=vegan elephants\x00\x00 (no-eol) (esc)
295 HG2Y\x00\x00\x00\x1ccaution meal=vegan elephants\x00\x00\x00\x00 (no-eol) (esc)
296 296
297 297 Test unbundling
298 298
299 299 $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants' | hg statbundle2
300 300 options count: 3
301 301 - caution
302 302 - elephants
303 303 - meal
304 304 vegan
305 305 parts count: 0
306 306
307 307 parameter with special char in value
308 308 ---------------------------------------------------
309 309
310 310 Test generation
311 311
312 312 $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple
313 HG2X\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc)
313 HG2Y\x00\x00\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00\x00\x00 (no-eol) (esc)
314 314
315 315 Test unbundling
316 316
317 317 $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple | hg statbundle2
318 318 options count: 2
319 319 - e|! 7/
320 320 babar%#==tutu
321 321 - simple
322 322 parts count: 0
323 323
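The escaped output above shows how stream parameters are serialized: a space-separated list, each entry urlquoted (with `/` left unescaped), values attached as `name=value`. A sketch reproducing that encoding (an illustration, not Mercurial's own implementation):

```python
from urllib.parse import quote

def encode_params(params):
    # params: iterable of (name, value) pairs; value may be None.
    chunks = []
    for name, value in params:
        p = quote(name)  # default safe='/' matches the output above
        if value is not None:
            p += '=' + quote(value)
        chunks.append(p)
    return ' '.join(chunks)
```

For example, `encode_params([('e|! 7/', 'babar%#==tutu'), ('simple', None)])` yields `e%7C%21%207/=babar%25%23%3D%3Dtutu simple`, the exact blob seen in the test output.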
324 324 Test unknown mandatory option
325 325 ---------------------------------------------------
326 326
327 327 $ hg bundle2 --param 'Gravity' | hg statbundle2
328 328 abort: unknown parameters: Stream Parameter - Gravity
329 329 [255]
330 330
331 331 Test debug output
332 332 ---------------------------------------------------
333 333
334 334 bundling debug
335 335
336 336 $ hg bundle2 --debug --param 'e|! 7/=babar%#==tutu' --param simple ../out.hg2
337 start emission of HG2X stream
337 start emission of HG2Y stream
338 338 bundle parameter: e%7C%21%207/=babar%25%23%3D%3Dtutu simple
339 339 start of parts
340 340 end of bundle
341 341
342 342 file content is ok
343 343
344 344 $ cat ../out.hg2
345 HG2X\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc)
345 HG2Y\x00\x00\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00\x00\x00 (no-eol) (esc)
346 346
347 347 unbundling debug
348 348
349 349 $ hg statbundle2 --debug < ../out.hg2
350 start processing of HG2X stream
350 start processing of HG2Y stream
351 351 reading bundle2 stream parameters
352 352 ignoring unknown parameter 'e|! 7/'
353 353 ignoring unknown parameter 'simple'
354 354 options count: 2
355 355 - e|! 7/
356 356 babar%#==tutu
357 357 - simple
358 358 start extraction of bundle2 parts
359 359 part header size: 0
360 360 end of bundle2 stream
361 361 parts count: 0
362 362
363 363
364 364 Test buggy input
365 365 ---------------------------------------------------
366 366
367 367 empty parameter name
368 368
369 369 $ hg bundle2 --param '' --quiet
370 370 abort: empty parameter name
371 371 [255]
372 372
373 373 bad parameter name
374 374
375 375 $ hg bundle2 --param 42babar
376 376 abort: non letter first character: '42babar'
377 377 [255]
378 378
379 379
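The two aborts above, together with the earlier `Gravity` abort, follow from simple rules on parameter names: a name must be non-empty, must start with a letter, and its case decides whether the parameter is advisory or mandatory. A sketch of that check (a hypothetical helper, not Mercurial's actual code):

```python
def check_param_name(name):
    # Validation behind the aborts shown above.
    if not name:
        raise ValueError('empty parameter name')
    if not name[:1].isalpha():
        raise ValueError("non letter first character: %r" % name)
    # Lower-case first letter: advisory, safe to ignore when unknown.
    # Upper-case first letter: mandatory, unknown ones abort unbundling.
    return 'advisory' if name[:1].islower() else 'mandatory'
```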
380 380 Test part
381 381 =================
382 382
383 383 $ hg bundle2 --parts ../parts.hg2 --debug
384 start emission of HG2X stream
384 start emission of HG2Y stream
385 385 bundle parameter:
386 386 start of parts
387 387 bundle part: "test:empty"
388 388 bundle part: "test:empty"
389 389 bundle part: "test:song"
390 390 bundle part: "test:debugreply"
391 391 bundle part: "test:math"
392 392 bundle part: "test:song"
393 393 bundle part: "test:ping"
394 394 end of bundle
395 395
396 396 $ cat ../parts.hg2
397 HG2X\x00\x00\x00\x11 (esc)
398 test:empty\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11 (esc)
399 test:empty\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x10 test:song\x00\x00\x00\x02\x00\x00\x00\x00\x00\xb2Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko (esc)
397 HG2Y\x00\x00\x00\x00\x00\x00\x00\x11 (esc)
398 test:empty\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11 (esc)
399 test:empty\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10 test:song\x00\x00\x00\x02\x00\x00\x00\x00\x00\xb2Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko (esc)
400 400 Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
401 Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.\x00\x00\x00\x00\x00\x16\x0ftest:debugreply\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00+ test:math\x00\x00\x00\x04\x02\x01\x02\x04\x01\x04\x07\x03pi3.14e2.72cookingraw\x00\x00\x00\x0242\x00\x00\x00\x00\x00\x1d test:song\x00\x00\x00\x05\x01\x00\x0b\x00randomparam\x00\x00\x00\x00\x00\x10 test:ping\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
401 Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.\x00\x00\x00\x00\x00\x00\x00\x16\x0ftest:debugreply\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00+ test:math\x00\x00\x00\x04\x02\x01\x02\x04\x01\x04\x07\x03pi3.14e2.72cookingraw\x00\x00\x00\x0242\x00\x00\x00\x00\x00\x00\x00\x1d test:song\x00\x00\x00\x05\x01\x00\x0b\x00randomparam\x00\x00\x00\x00\x00\x00\x00\x10 test:ping\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
402 402
403 403
404 404 $ hg statbundle2 < ../parts.hg2
405 405 options count: 0
406 406 :test:empty:
407 407 mandatory: 0
408 408 advisory: 0
409 409 payload: 0 bytes
410 410 :test:empty:
411 411 mandatory: 0
412 412 advisory: 0
413 413 payload: 0 bytes
414 414 :test:song:
415 415 mandatory: 0
416 416 advisory: 0
417 417 payload: 178 bytes
418 418 :test:debugreply:
419 419 mandatory: 0
420 420 advisory: 0
421 421 payload: 0 bytes
422 422 :test:math:
423 423 mandatory: 2
424 424 advisory: 1
425 425 payload: 2 bytes
426 426 :test:song:
427 427 mandatory: 1
428 428 advisory: 0
429 429 payload: 0 bytes
430 430 :test:ping:
431 431 mandatory: 0
432 432 advisory: 0
433 433 payload: 0 bytes
434 434 parts count: 7
435 435
436 436 $ hg statbundle2 --debug < ../parts.hg2
437 start processing of HG2X stream
437 start processing of HG2Y stream
438 438 reading bundle2 stream parameters
439 439 options count: 0
440 440 start extraction of bundle2 parts
441 441 part header size: 17
442 442 part type: "test:empty"
443 443 part id: "0"
444 444 part parameters: 0
445 445 :test:empty:
446 446 mandatory: 0
447 447 advisory: 0
448 448 payload chunk size: 0
449 449 payload: 0 bytes
450 450 part header size: 17
451 451 part type: "test:empty"
452 452 part id: "1"
453 453 part parameters: 0
454 454 :test:empty:
455 455 mandatory: 0
456 456 advisory: 0
457 457 payload chunk size: 0
458 458 payload: 0 bytes
459 459 part header size: 16
460 460 part type: "test:song"
461 461 part id: "2"
462 462 part parameters: 0
463 463 :test:song:
464 464 mandatory: 0
465 465 advisory: 0
466 466 payload chunk size: 178
467 467 payload chunk size: 0
468 468 payload: 178 bytes
469 469 part header size: 22
470 470 part type: "test:debugreply"
471 471 part id: "3"
472 472 part parameters: 0
473 473 :test:debugreply:
474 474 mandatory: 0
475 475 advisory: 0
476 476 payload chunk size: 0
477 477 payload: 0 bytes
478 478 part header size: 43
479 479 part type: "test:math"
480 480 part id: "4"
481 481 part parameters: 3
482 482 :test:math:
483 483 mandatory: 2
484 484 advisory: 1
485 485 payload chunk size: 2
486 486 payload chunk size: 0
487 487 payload: 2 bytes
488 488 part header size: 29
489 489 part type: "test:song"
490 490 part id: "5"
491 491 part parameters: 1
492 492 :test:song:
493 493 mandatory: 1
494 494 advisory: 0
495 495 payload chunk size: 0
496 496 payload: 0 bytes
497 497 part header size: 16
498 498 part type: "test:ping"
499 499 part id: "6"
500 500 part parameters: 0
501 501 :test:ping:
502 502 mandatory: 0
503 503 advisory: 0
504 504 payload chunk size: 0
505 505 payload: 0 bytes
506 506 part header size: 0
507 507 end of bundle2 stream
508 508 parts count: 7
509 509
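The "part header size" values in the debug output above can be reproduced from the layout visible in the hex dump: a uint8 type length, the part type, a signed int32 part id, two uint8 parameter counts, a `(namesize, valuesize)` uint8 pair per parameter, then the concatenated parameter strings, with the whole header blob prefixed by its size as a signed int32. A sketch inferred from that dump (not Mercurial's own code):

```python
import struct

def make_part_header(parttype, partid, mandatory=(), advisory=()):
    params = list(mandatory) + list(advisory)
    h = struct.pack(">B", len(parttype)) + parttype.encode('ascii')
    h += struct.pack(">i", partid)
    h += struct.pack(">BB", len(mandatory), len(advisory))
    for key, value in params:          # per-parameter size pairs
        h += struct.pack(">BB", len(key), len(value))
    for key, value in params:          # then the parameter data itself
        h += key.encode('ascii') + value.encode('ascii')
    # header blob framed by its size as a signed big-endian int32
    return struct.pack(">i", len(h)) + h
```

For a parameterless `test:empty` part this gives 1 + 10 + 4 + 2 = 17 bytes, and for the `test:math` part with its three parameters 43 bytes, matching the `part header size:` lines above. Payload chunks follow the header with the same int32 framing, terminated by a chunk of size 0.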
510 510 Test actual unbundling of test part
511 511 =======================================
512 512
513 513 Process the bundle
514 514
515 515 $ hg unbundle2 --debug < ../parts.hg2
516 start processing of HG2X stream
516 start processing of HG2Y stream
517 517 reading bundle2 stream parameters
518 518 start extraction of bundle2 parts
519 519 part header size: 17
520 520 part type: "test:empty"
521 521 part id: "0"
522 522 part parameters: 0
523 523 ignoring unsupported advisory part test:empty
524 524 payload chunk size: 0
525 525 part header size: 17
526 526 part type: "test:empty"
527 527 part id: "1"
528 528 part parameters: 0
529 529 ignoring unsupported advisory part test:empty
530 530 payload chunk size: 0
531 531 part header size: 16
532 532 part type: "test:song"
533 533 part id: "2"
534 534 part parameters: 0
535 535 found a handler for part 'test:song'
536 536 The choir starts singing:
537 537 payload chunk size: 178
538 538 payload chunk size: 0
539 539 Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
540 540 Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
541 541 Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
542 542 part header size: 22
543 543 part type: "test:debugreply"
544 544 part id: "3"
545 545 part parameters: 0
546 546 found a handler for part 'test:debugreply'
547 547 debugreply: no reply
548 548 payload chunk size: 0
549 549 part header size: 43
550 550 part type: "test:math"
551 551 part id: "4"
552 552 part parameters: 3
553 553 ignoring unsupported advisory part test:math
554 554 payload chunk size: 2
555 555 payload chunk size: 0
556 556 part header size: 29
557 557 part type: "test:song"
558 558 part id: "5"
559 559 part parameters: 1
560 560 found a handler for part 'test:song'
561 561 ignoring unsupported advisory part test:song - randomparam
562 562 payload chunk size: 0
563 563 part header size: 16
564 564 part type: "test:ping"
565 565 part id: "6"
566 566 part parameters: 0
567 567 found a handler for part 'test:ping'
568 568 received ping request (id 6)
569 569 payload chunk size: 0
570 570 part header size: 0
571 571 end of bundle2 stream
572 572 0 unread bytes
573 573 3 total verses sung
574 574
575 575 Unbundle with an unknown mandatory part
576 576 (should abort)
577 577
578 578 $ hg bundle2 --parts --unknown ../unknown.hg2
579 579
580 580 $ hg unbundle2 < ../unknown.hg2
581 581 The choir starts singing:
582 582 Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
583 583 Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
584 584 Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
585 585 debugreply: no reply
586 586 0 unread bytes
587 587 abort: missing support for test:unknown
588 588 [255]
589 589
590 590 Unbundle with unknown mandatory part parameters
591 591 (should abort)
592 592
593 593 $ hg bundle2 --unknownparams ../unknown.hg2
594 594
595 595 $ hg unbundle2 < ../unknown.hg2
596 596 0 unread bytes
597 597 abort: missing support for test:song - randomparams
598 598 [255]
599 599
600 600 unbundle with a reply
601 601
602 602 $ hg bundle2 --parts --reply ../parts-reply.hg2
603 603 $ hg unbundle2 ../reply.hg2 < ../parts-reply.hg2
604 604 0 unread bytes
605 605 3 total verses sung
606 606
607 607 The reply is a bundle
608 608
609 609 $ cat ../reply.hg2
610 HG2X\x00\x00\x00\x1f (esc)
610 HG2Y\x00\x00\x00\x00\x00\x00\x00\x1f (esc)
611 611 b2x:output\x00\x00\x00\x00\x00\x01\x0b\x01in-reply-to3\x00\x00\x00\xd9The choir starts singing: (esc)
612 612 Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
613 613 Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
614 614 Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
615 \x00\x00\x00\x00\x00\x1f (esc)
615 \x00\x00\x00\x00\x00\x00\x00\x1f (esc)
616 616 b2x:output\x00\x00\x00\x01\x00\x01\x0b\x01in-reply-to4\x00\x00\x00\xc9debugreply: capabilities: (esc)
617 617 debugreply: 'city=!'
618 618 debugreply: 'celeste,ville'
619 619 debugreply: 'elephants'
620 620 debugreply: 'babar'
621 621 debugreply: 'celeste'
622 622 debugreply: 'ping-pong'
623 \x00\x00\x00\x00\x00\x1e test:pong\x00\x00\x00\x02\x01\x00\x0b\x01in-reply-to7\x00\x00\x00\x00\x00\x1f (esc)
623 \x00\x00\x00\x00\x00\x00\x00\x1e test:pong\x00\x00\x00\x02\x01\x00\x0b\x01in-reply-to7\x00\x00\x00\x00\x00\x00\x00\x1f (esc)
624 624 b2x:output\x00\x00\x00\x03\x00\x01\x0b\x01in-reply-to7\x00\x00\x00=received ping request (id 7) (esc)
625 625 replying to ping request (id 7)
626 \x00\x00\x00\x00\x00\x00 (no-eol) (esc)
626 \x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
627 627
628 628 The reply is valid
629 629
630 630 $ hg statbundle2 < ../reply.hg2
631 631 options count: 0
632 632 :b2x:output:
633 633 mandatory: 0
634 634 advisory: 1
635 635 payload: 217 bytes
636 636 :b2x:output:
637 637 mandatory: 0
638 638 advisory: 1
639 639 payload: 201 bytes
640 640 :test:pong:
641 641 mandatory: 1
642 642 advisory: 0
643 643 payload: 0 bytes
644 644 :b2x:output:
645 645 mandatory: 0
646 646 advisory: 1
647 647 payload: 61 bytes
648 648 parts count: 4
649 649
650 650 Unbundle the reply to get the output:
651 651
652 652 $ hg unbundle2 < ../reply.hg2
653 653 remote: The choir starts singing:
654 654 remote: Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
655 655 remote: Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
656 656 remote: Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
657 657 remote: debugreply: capabilities:
658 658 remote: debugreply: 'city=!'
659 659 remote: debugreply: 'celeste,ville'
660 660 remote: debugreply: 'elephants'
661 661 remote: debugreply: 'babar'
662 662 remote: debugreply: 'celeste'
663 663 remote: debugreply: 'ping-pong'
664 664 remote: received ping request (id 7)
665 665 remote: replying to ping request (id 7)
666 666 0 unread bytes
667 667
668 668 Test push race detection
669 669
670 670 $ hg bundle2 --pushrace ../part-race.hg2
671 671
672 672 $ hg unbundle2 < ../part-race.hg2
673 673 0 unread bytes
674 674 abort: push race: repository changed while pushing - please try again
675 675 [255]
676 676
677 677 Support for changegroup
678 678 ===================================
679 679
680 680 $ hg unbundle $TESTDIR/bundles/rebase.hg
681 681 adding changesets
682 682 adding manifests
683 683 adding file changes
684 684 added 8 changesets with 7 changes to 7 files (+3 heads)
685 685 (run 'hg heads' to see heads, 'hg merge' to merge)
686 686
687 687 $ hg log -G
688 688 o 8:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> H
689 689 |
690 690 | o 7:eea13746799a draft Nicolas Dumazet <nicdumz.commits@gmail.com> G
691 691 |/|
692 692 o | 6:24b6387c8c8c draft Nicolas Dumazet <nicdumz.commits@gmail.com> F
693 693 | |
694 694 | o 5:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
695 695 |/
696 696 | o 4:32af7686d403 draft Nicolas Dumazet <nicdumz.commits@gmail.com> D
697 697 | |
698 698 | o 3:5fddd98957c8 draft Nicolas Dumazet <nicdumz.commits@gmail.com> C
699 699 | |
700 700 | o 2:42ccdea3bb16 draft Nicolas Dumazet <nicdumz.commits@gmail.com> B
701 701 |/
702 702 o 1:cd010b8cd998 draft Nicolas Dumazet <nicdumz.commits@gmail.com> A
703 703
704 704 @ 0:3903775176ed draft test a
705 705
706 706
707 707 $ hg bundle2 --debug --rev '8+7+5+4' ../rev.hg2
708 708 4 changesets found
709 709 list of changesets:
710 710 32af7686d403cf45b5d95f2d70cebea587ac806a
711 711 9520eea781bcca16c1e15acc0ba14335a0e8e5ba
712 712 eea13746799a9e0bfd88f29d3c2e9dc9389f524f
713 713 02de42196ebee42ef284b6780a87cdc96e8eaab6
714 start emission of HG2X stream
714 start emission of HG2Y stream
715 715 bundle parameter:
716 716 start of parts
717 717 bundle part: "b2x:changegroup"
718 718 bundling: 1/4 changesets (25.00%)
719 719 bundling: 2/4 changesets (50.00%)
720 720 bundling: 3/4 changesets (75.00%)
721 721 bundling: 4/4 changesets (100.00%)
722 722 bundling: 1/4 manifests (25.00%)
723 723 bundling: 2/4 manifests (50.00%)
724 724 bundling: 3/4 manifests (75.00%)
725 725 bundling: 4/4 manifests (100.00%)
726 726 bundling: D 1/3 files (33.33%)
727 727 bundling: E 2/3 files (66.67%)
728 728 bundling: H 3/3 files (100.00%)
729 729 end of bundle
730 730
731 731 $ cat ../rev.hg2
732 HG2X\x00\x00\x00\x16\x0fb2x:changegroup\x00\x00\x00\x00\x00\x00\x00\x00\x06\x13\x00\x00\x00\xa42\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j_\xdd\xd9\x89W\xc8\xa5JMCm\xfe\x1d\xa9\xd8\x7f!\xa1\xb9{\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)6e1f4c47ecb533ffd0c8e52cdc88afb6cd39e20c (esc)
732 HG2Y\x00\x00\x00\x00\x00\x00\x00\x16\x0fb2x:changegroup\x00\x00\x00\x00\x00\x00\x00\x00\x06\x13\x00\x00\x00\xa42\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j_\xdd\xd9\x89W\xc8\xa5JMCm\xfe\x1d\xa9\xd8\x7f!\xa1\xb9{\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)6e1f4c47ecb533ffd0c8e52cdc88afb6cd39e20c (esc)
733 733 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02D (esc)
734 734 \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01D\x00\x00\x00\xa4\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xcd\x01\x0b\x8c\xd9\x98\xf3\x98\x1aZ\x81\x15\xf9O\x8d\xa4\xabP`\x89\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)4dece9c826f69490507b98c6383a3009b295837d (esc)
735 735 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02E (esc)
736 736 \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01E\x00\x00\x00\xa2\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)365b93d57fdf4814e2b5911d6bacff2b12014441 (esc)
737 737 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x00\x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01G\x00\x00\x00\xa4\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
738 738 \x87\xcd\xc9n\x8e\xaa\xb6$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
739 739 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)8bee48edc7318541fc0013ee41b089276a8c24bf (esc)
740 740 \x00\x00\x00f\x00\x00\x00f\x00\x00\x00\x02H (esc)
741 741 \x00\x00\x00g\x00\x00\x00h\x00\x00\x00\x01H\x00\x00\x00\x00\x00\x00\x00\x8bn\x1fLG\xec\xb53\xff\xd0\xc8\xe5,\xdc\x88\xaf\xb6\xcd9\xe2\x0cf\xa5\xa0\x18\x17\xfd\xf5#\x9c'8\x02\xb5\xb7a\x8d\x05\x1c\x89\xe4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+D\x00c3f1ca2924c16a19b0656a84900e504e5b0aec2d (esc)
742 742 \x00\x00\x00\x8bM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\x00}\x8c\x9d\x88\x84\x13%\xf5\xc6\xb0cq\xb3[N\x8a+\x1a\x83\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00+\x00\x00\x00\xac\x00\x00\x00+E\x009c6fd0350a6c0d0c49d4a9c5017cf07043f54e58 (esc)
743 743 \x00\x00\x00\x8b6[\x93\xd5\x7f\xdfH\x14\xe2\xb5\x91\x1dk\xac\xff+\x12\x01DA(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xceM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00V\x00\x00\x00V\x00\x00\x00+F\x0022bfcfd62a21a3287edbd4d656218d0f525ed76a (esc)
744 744 \x00\x00\x00\x97\x8b\xeeH\xed\xc71\x85A\xfc\x00\x13\xeeA\xb0\x89'j\x8c$\xbf(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xce\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
745 745 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00+\x00\x00\x00V\x00\x00\x00\x00\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+H\x008500189e74a9e0475e822093bc7db0d631aeb0b4 (esc)
746 746 \x00\x00\x00\x00\x00\x00\x00\x05D\x00\x00\x00b\xc3\xf1\xca)$\xc1j\x19\xb0ej\x84\x90\x0ePN[ (esc)
747 747 \xec-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02D (esc)
748 748 \x00\x00\x00\x00\x00\x00\x00\x05E\x00\x00\x00b\x9co\xd05 (esc)
749 749 l\r (no-eol) (esc)
750 750 \x0cI\xd4\xa9\xc5\x01|\xf0pC\xf5NX\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02E (esc)
751 751 \x00\x00\x00\x00\x00\x00\x00\x05H\x00\x00\x00b\x85\x00\x18\x9et\xa9\xe0G^\x82 \x93\xbc}\xb0\xd61\xae\xb0\xb4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
752 752 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02H (esc)
753 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
753 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
754 754
755 755 $ hg unbundle2 < ../rev.hg2
756 756 adding changesets
757 757 adding manifests
758 758 adding file changes
759 759 added 0 changesets with 0 changes to 3 files
760 760 0 unread bytes
761 761 addchangegroup return: 1
762 762
763 763 with reply
764 764
765 765 $ hg bundle2 --rev '8+7+5+4' --reply ../rev-rr.hg2
766 766 $ hg unbundle2 ../rev-reply.hg2 < ../rev-rr.hg2
767 767 0 unread bytes
768 768 addchangegroup return: 1
769 769
770 770 $ cat ../rev-reply.hg2
771 HG2X\x00\x00\x003\x15b2x:reply:changegroup\x00\x00\x00\x00\x00\x02\x0b\x01\x06\x01in-reply-to1return1\x00\x00\x00\x00\x00\x1f (esc)
771 HG2Y\x00\x00\x00\x00\x00\x00\x003\x15b2x:reply:changegroup\x00\x00\x00\x00\x00\x02\x0b\x01\x06\x01in-reply-to1return1\x00\x00\x00\x00\x00\x00\x00\x1f (esc)
772 772 b2x:output\x00\x00\x00\x01\x00\x01\x0b\x01in-reply-to1\x00\x00\x00dadding changesets (esc)
773 773 adding manifests
774 774 adding file changes
775 775 added 0 changesets with 0 changes to 3 files
776 \x00\x00\x00\x00\x00\x00 (no-eol) (esc)
776 \x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
777 777
778 778 Check handling of exception during generation.
779 779 ----------------------------------------------
780 780 (this is currently not right)
781 781
782 782 $ hg bundle2 --genraise > ../genfailed.hg2
783 783 abort: Someone set up us the bomb!
784 784 [255]
785 785
786 786 Should still be a valid bundle
787 787 (this is currently not right)
788 788
789 789 $ cat ../genfailed.hg2
790 HG2X\x00\x00\x00\x11 (esc)
790 HG2Y\x00\x00\x00\x00\x00\x00\x00\x11 (esc)
791 791 b2x:output\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
792 792
793 793 And its handling on the other side raises a clean exception
794 794 (this is currently not right)
795 795
796 796 $ cat ../genfailed.hg2 | hg unbundle2
797 797 0 unread bytes
798 abort: stream ended unexpectedly (got 0 bytes, expected 2)
798 abort: stream ended unexpectedly (got 0 bytes, expected 4)
799 799 [255]
800 800
801 801
802 802 $ cd ..