changegroup: rename bundle-related functions and classes...
Sune Foldager
r22390:e2806b86 default
@@ -1,946 +1,946 @@
1 1 # bundle2.py - generic container format to transmit arbitrary data.
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """Handling of the new bundle2 format
8 8
9 9 The goal of bundle2 is to act as an atomic packet to transmit a set of
10 10 payloads in an application-agnostic way. It consists of a sequence of "parts"
11 11 that will be handed to and processed by the application layer.
12 12
13 13
14 14 General format architecture
15 15 ===========================
16 16
17 17 The format is structured as follows:
18 18
19 19 - magic string
20 20 - stream level parameters
21 21 - payload parts (any number)
22 22 - end of stream marker.
23 23
24 24 The binary format
25 25 ============================
26 26
27 27 All numbers are unsigned and big-endian.
28 28
29 29 stream level parameters
30 30 ------------------------
31 31
32 32 The binary format is as follows:
33 33
34 34 :params size: (16 bit integer)
35 35
36 36 The total number of bytes used by the parameters.
37 37
38 38 :params value: arbitrary number of bytes
39 39
40 40 A blob of `params size` containing the serialized version of all stream level
41 41 parameters.
42 42
43 43 The blob contains a space separated list of parameters. Parameters with value
44 44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45 45
46 46 Empty names are forbidden.
47 47
48 48 Names MUST start with a letter. If the first letter is lower case, the
49 49 parameter is advisory and can be safely ignored. However, when the first
50 50 letter is capital, the parameter is mandatory and the bundling process MUST
51 51 stop if it is not able to process it.
52 52
53 53 Stream parameters use a simple textual format for two main reasons:
54 54
55 55 - Stream level parameters should remain simple, and we want to discourage
56 56 any overly complex usage.
57 57 - Textual data allow easy human inspection of a bundle2 header in case of
58 58 troubles.
59 59
60 60 Any application-level options MUST go into a bundle2 part instead.
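
The stream-level parameter rules above can be sketched as a standalone snippet (Python 3 here, so `urllib.parse` stands in for the Python 2 `urllib` used by this module; `encodestreamparams` is a hypothetical helper, not part of this file):

```python
import struct
import urllib.parse

def encodestreamparams(params):
    """Serialize [(name, value_or_None)] pairs per the rules above:
    space separated, urlquoted, ``name=value`` when a value is present."""
    blocks = []
    for name, value in params:
        if not name or not name[0].isalpha():
            raise ValueError('invalid parameter name: %r' % name)
        block = urllib.parse.quote(name)
        if value is not None:
            block = '%s=%s' % (block, urllib.parse.quote(value))
        blocks.append(block)
    blob = ' '.join(blocks).encode('ascii')
    # 16 bit big-endian size prefix, then the parameter blob itself
    return struct.pack('>H', len(blob)) + blob

# lower case name: advisory; capitalized name: mandatory
assert encodestreamparams([('evolution', None), ('Compression', 'gzip')]) \
    == b'\x00\x1aevolution Compression=gzip'
```
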
61 61
62 62 Payload part
63 63 ------------------------
64 64
65 65 The binary format is as follows:
66 66
67 67 :header size: (16 bit integer)
68 68
69 69 The total number of bytes used by the part header. When the header is empty
70 70 (size = 0) this is interpreted as the end of stream marker.
71 71
72 72 :header:
73 73
74 74 The header defines how to interpret the part. It contains two pieces of
75 75 data: the part type and the part parameters.
76 76
77 77 The part type is used to route the part to an application level handler
78 78 that can interpret the payload.
79 79
80 80 Part parameters are passed to the application level handler. They are
81 81 meant to convey information that will help the application level object to
82 82 interpret the part payload.
83 83
84 84 The binary format of the header is as follows:
85 85
86 86 :typesize: (one byte)
87 87
88 88 :parttype: alphanumerical part name
89 89
90 90 :partid: A 32 bit integer (unique in the bundle) that can be used to refer
91 91 to this part.
92 92
93 93 :parameters:
94 94
95 95 A part's parameters may have arbitrary content; the binary structure is::
96 96
97 97 <mandatory-count><advisory-count><param-sizes><param-data>
98 98
99 99 :mandatory-count: 1 byte, number of mandatory parameters
100 100
101 101 :advisory-count: 1 byte, number of advisory parameters
102 102
103 103 :param-sizes:
104 104
105 105 N pairs of bytes, where N is the total number of parameters. Each
106 106 pair contains (<size-of-key>, <size-of-value>) for one parameter.
107 107
108 108 :param-data:
109 109
110 110 A blob of bytes from which each parameter key and value can be
111 111 retrieved using the list of size pairs stored in the previous
112 112 field.
113 113
114 114 Mandatory parameters come first, then the advisory ones.
115 115
116 116 Each parameter's key MUST be unique within the part.
117 117
118 118 :payload:
119 119
120 120 payload is a series of `<chunksize><chunkdata>`.
121 121
122 122 `chunksize` is a 32 bit integer, `chunkdata` is plain bytes (as many as
123 123 `chunksize` says). The payload is concluded by a zero size chunk.
124 124
125 125 The current implementation always produces either zero or one chunk.
126 126 This is an implementation limitation that will ultimately be lifted.
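
The `<chunksize><chunkdata>` payload framing described above can be sketched with two standalone helpers (hypothetical names, Python 3; the real implementation lives in `bundlepart.getchunks` and `unbundlepart` below):

```python
import io
import struct

def framepayload(chunks):
    """Frame byte chunks as <chunksize><chunkdata>..., ending with the
    zero-size chunk that concludes the payload."""
    out = []
    for chunk in chunks:
        out.append(struct.pack('>I', len(chunk)))  # 32 bit big-endian size
        out.append(chunk)
    out.append(struct.pack('>I', 0))  # end-of-payload marker
    return b''.join(out)

def iterpayload(stream):
    """Inverse operation: yield chunk data until the zero-size chunk."""
    while True:
        size = struct.unpack('>I', stream.read(4))[0]
        if not size:
            return
        yield stream.read(size)

framed = framepayload([b'abc', b'de'])
assert list(iterpayload(io.BytesIO(framed))) == [b'abc', b'de']
```
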
127 127
128 128 Bundle processing
129 129 ============================
130 130
131 131 Each part is processed in order using a "part handler". Handlers are registered
132 132 for a certain part type.
133 133
134 134 The matching of a part to its handler is case insensitive. The case of the
135 135 part type is used to know if a part is mandatory or advisory. If the part
136 136 type contains any uppercase char it is considered mandatory. When no handler
137 137 is known for a mandatory part, the process is aborted and an exception is
138 138 raised. If the part is advisory and no handler is known, the part is ignored.
139 139 When the process is aborted, the full bundle is still read from the stream to
140 140 keep the channel usable. But none of the parts read after an abort are
141 141 processed. In the future, dropping the stream may become an option for
142 142 channels we do not care to preserve.
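
The case-convention dispatch described above can be sketched standalone (hypothetical handler names; this sketch raises a plain KeyError where the real code below uses `error.BundleValueError`):

```python
# toy registry: keys are stored lower case, as in parthandlermapping
parthandlers = {'b2x:output': lambda op, part: None}

def findhandler(parttype):
    """Return the handler for ``parttype``, honoring the case convention.

    Matching is case insensitive; any uppercase character in the part
    type marks the part as mandatory."""
    key = parttype.lower()
    handler = parthandlers.get(key)
    if handler is None:
        if key != parttype:  # uppercase present: mandatory part
            raise KeyError('unknown mandatory part: %s' % parttype)
        return None  # advisory part with no handler is ignored
    return handler

assert findhandler('b2x:output') is not None   # known handler
assert findhandler('b2x:shinypony') is None    # unknown advisory: ignored
```
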
143 143 """
144 144
145 145 import util
146 146 import struct
147 147 import urllib
148 148 import string
149 149 import obsolete
150 150 import pushkey
151 151
152 152 import changegroup, error
153 153 from i18n import _
154 154
155 155 _pack = struct.pack
156 156 _unpack = struct.unpack
157 157
158 158 _magicstring = 'HG2X'
159 159
160 160 _fstreamparamsize = '>H'
161 161 _fpartheadersize = '>H'
162 162 _fparttypesize = '>B'
163 163 _fpartid = '>I'
164 164 _fpayloadsize = '>I'
165 165 _fpartparamcount = '>BB'
166 166
167 167 preferedchunksize = 4096
168 168
169 169 def _makefpartparamsizes(nbparams):
170 170 """return a struct format to read part parameter sizes
171 171
172 172 The number of parameters is variable, so we need to build the format
173 173 dynamically.
174 174 """
175 175 return '>'+('BB'*nbparams)
176 176
177 177 parthandlermapping = {}
178 178
179 179 def parthandler(parttype, params=()):
180 180 """decorator that registers a function as a bundle2 part handler
181 181
182 182 eg::
183 183
184 184 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
185 185 def myparttypehandler(...):
186 186 '''process a part of type "my part".'''
187 187 ...
188 188 """
189 189 def _decorator(func):
190 190 lparttype = parttype.lower() # enforce lower case matching.
191 191 assert lparttype not in parthandlermapping
192 192 parthandlermapping[lparttype] = func
193 193 func.params = frozenset(params)
194 194 return func
195 195 return _decorator
196 196
197 197 class unbundlerecords(object):
198 198 """keep record of what happens during an unbundle
199 199
200 200 New records are added using `records.add('cat', obj)`, where 'cat' is a
201 201 category of record and obj is an arbitrary object.
202 202
203 203 `records['cat']` will return all entries of this category 'cat'.
204 204
205 205 Iterating on the object itself will yield `('category', obj)` tuples
206 206 for all entries.
207 207
208 208 All iterations happen in chronological order.
209 209 """
210 210
211 211 def __init__(self):
212 212 self._categories = {}
213 213 self._sequences = []
214 214 self._replies = {}
215 215
216 216 def add(self, category, entry, inreplyto=None):
217 217 """add a new record of a given category.
218 218
219 219 The entry can then be retrieved in the list returned by
220 220 self['category']."""
221 221 self._categories.setdefault(category, []).append(entry)
222 222 self._sequences.append((category, entry))
223 223 if inreplyto is not None:
224 224 self.getreplies(inreplyto).add(category, entry)
225 225
226 226 def getreplies(self, partid):
227 227 """get the subrecords that reply to a specific part"""
228 228 return self._replies.setdefault(partid, unbundlerecords())
229 229
230 230 def __getitem__(self, cat):
231 231 return tuple(self._categories.get(cat, ()))
232 232
233 233 def __iter__(self):
234 234 return iter(self._sequences)
235 235
236 236 def __len__(self):
237 237 return len(self._sequences)
238 238
239 239 def __nonzero__(self):
240 240 return bool(self._sequences)
241 241
242 242 class bundleoperation(object):
243 243 """an object that represents a single bundling process
244 244
245 245 Its purpose is to carry unbundle-related objects and states.
246 246
247 247 A new object should be created at the beginning of each bundle processing.
248 248 The object is to be returned by the processing function.
249 249
250 250 The object has very little content now; it will ultimately contain:
251 251 * an access to the repo the bundle is applied to,
252 252 * a ui object,
253 253 * a way to retrieve a transaction to add changes to the repo,
254 254 * a way to record the result of processing each part,
255 255 * a way to construct a bundle response when applicable.
256 256 """
257 257
258 258 def __init__(self, repo, transactiongetter):
259 259 self.repo = repo
260 260 self.ui = repo.ui
261 261 self.records = unbundlerecords()
262 262 self.gettransaction = transactiongetter
263 263 self.reply = None
264 264
265 265 class TransactionUnavailable(RuntimeError):
266 266 pass
267 267
268 268 def _notransaction():
269 269 """default method to get a transaction while processing a bundle
270 270
271 271 Raise an exception to highlight the fact that no transaction was expected
272 272 to be created"""
273 273 raise TransactionUnavailable()
274 274
275 275 def processbundle(repo, unbundler, transactiongetter=_notransaction):
276 276 """This function processes a bundle, applying its effects to/from a repo
277 277
278 278 It iterates over each part then searches for and uses the proper handling
279 279 code to process the part. Parts are processed in order.
280 280
281 281 This is a very early version of this function that will be strongly reworked
282 282 before final usage.
283 283
284 284 An unknown mandatory part will abort the process.
285 285 """
286 286 op = bundleoperation(repo, transactiongetter)
287 287 # todo:
288 288 # - replace this with an init function soon.
289 289 # - exception catching
290 290 unbundler.params
291 291 iterparts = unbundler.iterparts()
292 292 part = None
293 293 try:
294 294 for part in iterparts:
295 295 parttype = part.type
296 296 # part keys are matched lower case
297 297 key = parttype.lower()
298 298 try:
299 299 handler = parthandlermapping.get(key)
300 300 if handler is None:
301 301 raise error.BundleValueError(parttype=key)
302 302 op.ui.debug('found a handler for part %r\n' % parttype)
303 303 unknownparams = part.mandatorykeys - handler.params
304 304 if unknownparams:
305 305 unknownparams = list(unknownparams)
306 306 unknownparams.sort()
307 307 raise error.BundleValueError(parttype=key,
308 308 params=unknownparams)
309 309 except error.BundleValueError, exc:
310 310 if key != parttype: # mandatory parts
311 311 raise
312 312 op.ui.debug('ignoring unsupported advisory part %s\n' % exc)
313 313 # consuming the part
314 314 part.read()
315 315 continue
316 316
317 317
318 318 # handler is called outside the above try block so that we don't
319 319 # risk catching KeyErrors from anything other than the
320 320 # parthandlermapping lookup (any KeyError raised by handler()
321 321 # itself represents a defect of a different variety).
322 322 output = None
323 323 if op.reply is not None:
324 324 op.ui.pushbuffer(error=True)
325 325 output = ''
326 326 try:
327 327 handler(op, part)
328 328 finally:
329 329 if output is not None:
330 330 output = op.ui.popbuffer()
331 331 if output:
332 332 outpart = op.reply.newpart('b2x:output', data=output)
333 333 outpart.addparam('in-reply-to', str(part.id), mandatory=False)
334 334 part.read()
335 335 except Exception, exc:
336 336 if part is not None:
337 337 # consume the bundle content
338 338 part.read()
339 339 for part in iterparts:
340 340 # consume the bundle content
341 341 part.read()
342 342 # Small hack to let caller code distinguish exceptions from bundle2
343 343 # processing from the ones from bundle1 processing. This is mostly
344 344 # needed to handle different return codes to unbundle according to the
345 345 # type of bundle. We should probably clean up or drop this return code
346 346 # craziness in a future version.
347 347 exc.duringunbundle2 = True
348 348 raise
349 349 return op
350 350
351 351 def decodecaps(blob):
352 352 """decode a bundle2 caps bytes blob into a dictionary
353 353
354 354 The blob is a list of capabilities (one per line)
355 355 Capabilities may have values using a line of the form::
356 356
357 357 capability=value1,value2,value3
358 358
359 359 The values are always a list."""
360 360 caps = {}
361 361 for line in blob.splitlines():
362 362 if not line:
363 363 continue
364 364 if '=' not in line:
365 365 key, vals = line, ()
366 366 else:
367 367 key, vals = line.split('=', 1)
368 368 vals = vals.split(',')
369 369 key = urllib.unquote(key)
370 370 vals = [urllib.unquote(v) for v in vals]
371 371 caps[key] = vals
372 372 return caps
373 373
374 374 def encodecaps(caps):
375 375 """encode a bundle2 caps dictionary into a bytes blob"""
376 376 chunks = []
377 377 for ca in sorted(caps):
378 378 vals = caps[ca]
379 379 ca = urllib.quote(ca)
380 380 vals = [urllib.quote(v) for v in vals]
381 381 if vals:
382 382 ca = "%s=%s" % (ca, ','.join(vals))
383 383 chunks.append(ca)
384 384 return '\n'.join(chunks)
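
The caps codec above round-trips; here is a standalone Python 3 re-implementation of the same scheme (using `urllib.parse` in place of the Python 2 `urllib` this module targets):

```python
import urllib.parse

def encodecaps(caps):
    """encode a caps dict into a newline-separated, urlquoted blob"""
    chunks = []
    for ca in sorted(caps):
        vals = [urllib.parse.quote(v) for v in caps[ca]]
        ca = urllib.parse.quote(ca)
        if vals:
            ca = '%s=%s' % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)

def decodecaps(blob):
    """inverse of encodecaps; values are always a list"""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, []
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urllib.parse.unquote(key)
        caps[key] = [urllib.parse.unquote(v) for v in vals]
    return caps

caps = {'HG2X': [], 'b2x:obsmarkers': ['V1']}
assert decodecaps(encodecaps(caps)) == caps
```
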
385 385
386 386 class bundle20(object):
387 387 """represent an outgoing bundle2 container
388 388
389 389 Use the `addparam` method to add stream level parameters and `newpart` to
390 390 populate it. Then call `getchunks` to retrieve all the binary chunks of
391 391 data that compose the bundle2 container."""
392 392
393 393 def __init__(self, ui, capabilities=()):
394 394 self.ui = ui
395 395 self._params = []
396 396 self._parts = []
397 397 self.capabilities = dict(capabilities)
398 398
399 399 @property
400 400 def nbparts(self):
401 401 """total number of parts added to the bundler"""
402 402 return len(self._parts)
403 403
404 404 # methods used to define the bundle2 content
405 405 def addparam(self, name, value=None):
406 406 """add a stream level parameter"""
407 407 if not name:
408 408 raise ValueError('empty parameter name')
409 409 if name[0] not in string.letters:
410 410 raise ValueError('non letter first character: %r' % name)
411 411 self._params.append((name, value))
412 412
413 413 def addpart(self, part):
414 414 """add a new part to the bundle2 container
415 415
416 416 Parts contain the actual application payload."""
417 417 assert part.id is None
418 418 part.id = len(self._parts) # very cheap counter
419 419 self._parts.append(part)
420 420
421 421 def newpart(self, typeid, *args, **kwargs):
422 422 """create a new part and add it to the container
423 423
424 424 The part is directly added to the container. For now, this means
425 425 that any failure to properly initialize the part after calling
426 426 ``newpart`` should result in a failure of the whole bundling process.
427 427
428 428 You can still fall back to manually creating and adding a part if you
429 429 need better control."""
430 430 part = bundlepart(typeid, *args, **kwargs)
431 431 self.addpart(part)
432 432 return part
433 433
434 434 # methods used to generate the bundle2 stream
435 435 def getchunks(self):
436 436 self.ui.debug('start emission of %s stream\n' % _magicstring)
437 437 yield _magicstring
438 438 param = self._paramchunk()
439 439 self.ui.debug('bundle parameter: %s\n' % param)
440 440 yield _pack(_fstreamparamsize, len(param))
441 441 if param:
442 442 yield param
443 443
444 444 self.ui.debug('start of parts\n')
445 445 for part in self._parts:
446 446 self.ui.debug('bundle part: "%s"\n' % part.type)
447 447 for chunk in part.getchunks():
448 448 yield chunk
449 449 self.ui.debug('end of bundle\n')
450 450 yield '\0\0'
451 451
452 452 def _paramchunk(self):
453 453 """return an encoded version of all stream parameters"""
454 454 blocks = []
455 455 for par, value in self._params:
456 456 par = urllib.quote(par)
457 457 if value is not None:
458 458 value = urllib.quote(value)
459 459 par = '%s=%s' % (par, value)
460 460 blocks.append(par)
461 461 return ' '.join(blocks)
462 462
463 463 class unpackermixin(object):
464 464 """A mixin to extract bytes and struct data from a stream"""
465 465
466 466 def __init__(self, fp):
467 467 self._fp = fp
468 468
469 469 def _unpack(self, format):
470 470 """unpack this struct format from the stream"""
471 471 data = self._readexact(struct.calcsize(format))
472 472 return _unpack(format, data)
473 473
474 474 def _readexact(self, size):
475 475 """read exactly <size> bytes from the stream"""
476 476 return changegroup.readexactly(self._fp, size)
477 477
478 478
479 479 class unbundle20(unpackermixin):
480 480 """interpret a bundle2 stream
481 481
482 482 This class is fed with a binary stream and yields parts through its
483 483 `iterparts` method."""
484 484
485 485 def __init__(self, ui, fp, header=None):
486 486 """If header is specified, we do not read it out of the stream."""
487 487 self.ui = ui
488 488 super(unbundle20, self).__init__(fp)
489 489 if header is None:
490 490 header = self._readexact(4)
491 491 magic, version = header[0:2], header[2:4]
492 492 if magic != 'HG':
493 493 raise util.Abort(_('not a Mercurial bundle'))
494 494 if version != '2X':
495 495 raise util.Abort(_('unknown bundle version %s') % version)
496 496 self.ui.debug('start processing of %s stream\n' % header)
497 497
498 498 @util.propertycache
499 499 def params(self):
500 500 """dictionary of stream level parameters"""
501 501 self.ui.debug('reading bundle2 stream parameters\n')
502 502 params = {}
503 503 paramssize = self._unpack(_fstreamparamsize)[0]
504 504 if paramssize:
505 505 for p in self._readexact(paramssize).split(' '):
506 506 p = p.split('=', 1)
507 507 p = [urllib.unquote(i) for i in p]
508 508 if len(p) < 2:
509 509 p.append(None)
510 510 self._processparam(*p)
511 511 params[p[0]] = p[1]
512 512 return params
513 513
514 514 def _processparam(self, name, value):
515 515 """process a parameter, applying its effect if needed
516 516
517 517 Parameters starting with a lower case letter are advisory and will be
518 518 ignored when unknown. Those starting with an upper case letter are
519 519 mandatory, and this function will raise when they are unknown.
520 520
521 521 Note: no options are currently supported. Any input will be either
522 522 ignored or will fail.
523 523 """
524 524 if not name:
525 525 raise ValueError('empty parameter name')
526 526 if name[0] not in string.letters:
527 527 raise ValueError('non letter first character: %r' % name)
528 528 # Some logic will be later added here to try to process the option for
529 529 # a dict of known parameter.
530 530 if name[0].islower():
531 531 self.ui.debug("ignoring unknown parameter %r\n" % name)
532 532 else:
533 533 raise error.BundleValueError(params=(name,))
534 534
535 535
536 536 def iterparts(self):
537 537 """yield all parts contained in the stream"""
538 538 # make sure param have been loaded
539 539 self.params
540 540 self.ui.debug('start extraction of bundle2 parts\n')
541 541 headerblock = self._readpartheader()
542 542 while headerblock is not None:
543 543 part = unbundlepart(self.ui, headerblock, self._fp)
544 544 yield part
545 545 headerblock = self._readpartheader()
546 546 self.ui.debug('end of bundle2 stream\n')
547 547
548 548 def _readpartheader(self):
549 549 """read a part header size and return the bytes blob
550 550
551 551 returns None if empty"""
552 552 headersize = self._unpack(_fpartheadersize)[0]
553 553 self.ui.debug('part header size: %i\n' % headersize)
554 554 if headersize:
555 555 return self._readexact(headersize)
556 556 return None
557 557
558 558
559 559 class bundlepart(object):
560 560 """A bundle2 part contains application level payload
561 561
562 562 The part `type` is used to route the part to the application level
563 563 handler.
564 564
565 565 The part payload is contained in ``part.data``. It could be raw bytes or a
566 566 generator of byte chunks.
567 567
568 568 You can add parameters to the part using the ``addparam`` method.
569 569 Parameters can be either mandatory (default) or advisory. Remote side
570 570 should be able to safely ignore the advisory ones.
571 571
572 572 Both data and parameters cannot be modified after the generation has begun.
573 573 """
574 574
575 575 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
576 576 data=''):
577 577 self.id = None
578 578 self.type = parttype
579 579 self._data = data
580 580 self._mandatoryparams = list(mandatoryparams)
581 581 self._advisoryparams = list(advisoryparams)
582 582 # checking for duplicated entries
583 583 self._seenparams = set()
584 584 for pname, __ in self._mandatoryparams + self._advisoryparams:
585 585 if pname in self._seenparams:
586 586 raise RuntimeError('duplicated params: %s' % pname)
587 587 self._seenparams.add(pname)
588 588 # status of the part's generation:
589 589 # - None: not started,
590 590 # - False: currently generated,
591 591 # - True: generation done.
592 592 self._generated = None
593 593
594 594 # methods used to define the part content
595 595 def __setdata(self, data):
596 596 if self._generated is not None:
597 597 raise error.ReadOnlyPartError('part is being generated')
598 598 self._data = data
599 599 def __getdata(self):
600 600 return self._data
601 601 data = property(__getdata, __setdata)
602 602
603 603 @property
604 604 def mandatoryparams(self):
605 605 # make it an immutable tuple to force people through ``addparam``
606 606 return tuple(self._mandatoryparams)
607 607
608 608 @property
609 609 def advisoryparams(self):
610 610 # make it an immutable tuple to force people through ``addparam``
611 611 return tuple(self._advisoryparams)
612 612
613 613 def addparam(self, name, value='', mandatory=True):
614 614 if self._generated is not None:
615 615 raise error.ReadOnlyPartError('part is being generated')
616 616 if name in self._seenparams:
617 617 raise ValueError('duplicated params: %s' % name)
618 618 self._seenparams.add(name)
619 619 params = self._advisoryparams
620 620 if mandatory:
621 621 params = self._mandatoryparams
622 622 params.append((name, value))
623 623
624 624 # methods used to generate the bundle2 stream
625 625 def getchunks(self):
626 626 if self._generated is not None:
627 627 raise RuntimeError('part can only be consumed once')
628 628 self._generated = False
629 629 #### header
630 630 ## parttype
631 631 header = [_pack(_fparttypesize, len(self.type)),
632 632 self.type, _pack(_fpartid, self.id),
633 633 ]
634 634 ## parameters
635 635 # count
636 636 manpar = self.mandatoryparams
637 637 advpar = self.advisoryparams
638 638 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
639 639 # size
640 640 parsizes = []
641 641 for key, value in manpar:
642 642 parsizes.append(len(key))
643 643 parsizes.append(len(value))
644 644 for key, value in advpar:
645 645 parsizes.append(len(key))
646 646 parsizes.append(len(value))
647 647 paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
648 648 header.append(paramsizes)
649 649 # key, value
650 650 for key, value in manpar:
651 651 header.append(key)
652 652 header.append(value)
653 653 for key, value in advpar:
654 654 header.append(key)
655 655 header.append(value)
656 656 ## finalize header
657 657 headerchunk = ''.join(header)
658 658 yield _pack(_fpartheadersize, len(headerchunk))
659 659 yield headerchunk
660 660 ## payload
661 661 for chunk in self._payloadchunks():
662 662 yield _pack(_fpayloadsize, len(chunk))
663 663 yield chunk
664 664 # end of payload
665 665 yield _pack(_fpayloadsize, 0)
666 666 self._generated = True
667 667
668 668 def _payloadchunks(self):
669 669 """yield chunks of the part payload
670 670
671 671 Exists to handle the different methods to provide data to a part."""
672 672 # we only support fixed size data now.
673 673 # This will be improved in the future.
674 674 if util.safehasattr(self.data, 'next'):
675 675 buff = util.chunkbuffer(self.data)
676 676 chunk = buff.read(preferedchunksize)
677 677 while chunk:
678 678 yield chunk
679 679 chunk = buff.read(preferedchunksize)
680 680 elif len(self.data):
681 681 yield self.data
682 682
683 683 class unbundlepart(unpackermixin):
684 684 """a bundle part read from a bundle"""
685 685
686 686 def __init__(self, ui, header, fp):
687 687 super(unbundlepart, self).__init__(fp)
688 688 self.ui = ui
689 689 # unbundle state attr
690 690 self._headerdata = header
691 691 self._headeroffset = 0
692 692 self._initialized = False
693 693 self.consumed = False
694 694 # part data
695 695 self.id = None
696 696 self.type = None
697 697 self.mandatoryparams = None
698 698 self.advisoryparams = None
699 699 self.params = None
700 700 self.mandatorykeys = ()
701 701 self._payloadstream = None
702 702 self._readheader()
703 703
704 704 def _fromheader(self, size):
705 705 """return the next <size> bytes from the header"""
706 706 offset = self._headeroffset
707 707 data = self._headerdata[offset:(offset + size)]
708 708 self._headeroffset = offset + size
709 709 return data
710 710
711 711 def _unpackheader(self, format):
712 712 """read given format from header
713 713
714 714 This automatically computes the size of the format to read."""
715 715 data = self._fromheader(struct.calcsize(format))
716 716 return _unpack(format, data)
717 717
718 718 def _initparams(self, mandatoryparams, advisoryparams):
719 719 """internal function to set up all parameter-related logic"""
720 720 # make it read only to prevent people touching it by mistake.
721 721 self.mandatoryparams = tuple(mandatoryparams)
722 722 self.advisoryparams = tuple(advisoryparams)
723 723 # user friendly UI
724 724 self.params = dict(self.mandatoryparams)
725 725 self.params.update(dict(self.advisoryparams))
726 726 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
727 727
728 728 def _readheader(self):
729 729 """read the header and setup the object"""
730 730 typesize = self._unpackheader(_fparttypesize)[0]
731 731 self.type = self._fromheader(typesize)
732 732 self.ui.debug('part type: "%s"\n' % self.type)
733 733 self.id = self._unpackheader(_fpartid)[0]
734 734 self.ui.debug('part id: "%s"\n' % self.id)
735 735 ## reading parameters
736 736 # param count
737 737 mancount, advcount = self._unpackheader(_fpartparamcount)
738 738 self.ui.debug('part parameters: %i\n' % (mancount + advcount))
739 739 # param size
740 740 fparamsizes = _makefpartparamsizes(mancount + advcount)
741 741 paramsizes = self._unpackheader(fparamsizes)
742 742 # make it a list of pairs again
743 743 paramsizes = zip(paramsizes[::2], paramsizes[1::2])
744 744 # split mandatory from advisory
745 745 mansizes = paramsizes[:mancount]
746 746 advsizes = paramsizes[mancount:]
747 747 # retrieve param values
748 748 manparams = []
749 749 for key, value in mansizes:
750 750 manparams.append((self._fromheader(key), self._fromheader(value)))
751 751 advparams = []
752 752 for key, value in advsizes:
753 753 advparams.append((self._fromheader(key), self._fromheader(value)))
754 754 self._initparams(manparams, advparams)
755 755 ## part payload
756 756 def payloadchunks():
757 757 payloadsize = self._unpack(_fpayloadsize)[0]
758 758 self.ui.debug('payload chunk size: %i\n' % payloadsize)
759 759 while payloadsize:
760 760 yield self._readexact(payloadsize)
761 761 payloadsize = self._unpack(_fpayloadsize)[0]
762 762 self.ui.debug('payload chunk size: %i\n' % payloadsize)
763 763 self._payloadstream = util.chunkbuffer(payloadchunks())
764 764 # header fully read; mark the part as initialized
765 765 self._initialized = True
766 766
767 767 def read(self, size=None):
768 768 """read payload data"""
769 769 if not self._initialized:
770 770 self._readheader()
771 771 if size is None:
772 772 data = self._payloadstream.read()
773 773 else:
774 774 data = self._payloadstream.read(size)
775 775 if size is None or len(data) < size:
776 776 self.consumed = True
777 777 return data
778 778
779 779 capabilities = {'HG2X': (),
780 780 'b2x:listkeys': (),
781 781 'b2x:pushkey': (),
782 782 'b2x:changegroup': (),
783 783 }
784 784
785 785 def getrepocaps(repo):
786 786 """return the bundle2 capabilities for a given repo
787 787
788 788 Exists to allow extensions (like evolution) to mutate the capabilities.
789 789 """
790 790 caps = capabilities.copy()
791 791 if obsolete._enabled:
792 792 supportedformat = tuple('V%i' % v for v in obsolete.formats)
793 793 caps['b2x:obsmarkers'] = supportedformat
794 794 return caps
795 795
796 796 def bundle2caps(remote):
797 797 """return the bundlecapabilities of a peer as dict"""
798 798 raw = remote.capable('bundle2-exp')
799 799 if not raw and raw != '':
800 800 return {}
801 801 capsblob = urllib.unquote(remote.capable('bundle2-exp'))
802 802 return decodecaps(capsblob)
803 803
804 804 def obsmarkersversion(caps):
805 805 """extract the list of supported obsmarkers versions from a bundle2caps dict
806 806 """
807 807 obscaps = caps.get('b2x:obsmarkers', ())
808 808 return [int(c[1:]) for c in obscaps if c.startswith('V')]
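
For example, with the caps dict shape used above, `obsmarkersversion` extracts the numeric versions (standalone copy for illustration):

```python
def obsmarkersversion(caps):
    """extract supported obsmarkers versions from a bundle2 caps dict"""
    obscaps = caps.get('b2x:obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

assert obsmarkersversion({'b2x:obsmarkers': ('V1', 'V2')}) == [1, 2]
assert obsmarkersversion({}) == []
```
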
809 809
810 810 @parthandler('b2x:changegroup')
811 811 def handlechangegroup(op, inpart):
812 812 """apply a changegroup part on the repo
813 813
814 814 This is a very early implementation that will be massively reworked before
815 815 being inflicted on any end user.
816 816 """
817 817 # Make sure we trigger a transaction creation
818 818 #
819 819 # The addchangegroup function will get a transaction object by itself, but
820 820 # we need to make sure we trigger the creation of a transaction object used
821 821 # for the whole processing scope.
822 822 op.gettransaction()
823 cg = changegroup.unbundle10(inpart, 'UN')
823 cg = changegroup.cg1unpacker(inpart, 'UN')
824 824 ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
825 825 op.records.add('changegroup', {'return': ret})
826 826 if op.reply is not None:
827 827 # This is definitely not the final form of this
828 828 # return, but one needs to start somewhere.
829 829 part = op.reply.newpart('b2x:reply:changegroup')
830 830 part.addparam('in-reply-to', str(inpart.id), mandatory=False)
831 831 part.addparam('return', '%i' % ret, mandatory=False)
832 832 assert not inpart.read()
833 833
834 834 @parthandler('b2x:reply:changegroup', ('return', 'in-reply-to'))
835 835 def handlechangegroup(op, inpart):
836 836 ret = int(inpart.params['return'])
837 837 replyto = int(inpart.params['in-reply-to'])
838 838 op.records.add('changegroup', {'return': ret}, replyto)
839 839
840 840 @parthandler('b2x:check:heads')
841 841 def handlechangegroup(op, inpart):
842 842 """check that the heads of the repo did not change
843 843
844 844 This is used to detect a push race when using unbundle.
845 845 This replaces the "heads" argument of unbundle."""
846 846 h = inpart.read(20)
847 847 heads = []
848 848 while len(h) == 20:
849 849 heads.append(h)
850 850 h = inpart.read(20)
851 851 assert not h
852 852 if heads != op.repo.heads():
853 853 raise error.PushRaced('repository changed while pushing - '
854 854 'please try again')
855 855
856 856 @parthandler('b2x:output')
857 857 def handleoutput(op, inpart):
858 858 """forward output captured on the server to the client"""
859 859 for line in inpart.read().splitlines():
860 860 op.ui.write(('remote: %s\n' % line))
861 861
862 862 @parthandler('b2x:replycaps')
863 863 def handlereplycaps(op, inpart):
864 864 """Notify that a reply bundle should be created
865 865
866 866 The payload contains the capabilities information for the reply"""
867 867 caps = decodecaps(inpart.read())
868 868 if op.reply is None:
869 869 op.reply = bundle20(op.ui, caps)
870 870
871 871 @parthandler('b2x:error:abort', ('message', 'hint'))
872 872 def handlereplycaps(op, inpart):
873 873 """Used to transmit an abort error over the wire"""
874 874 raise util.Abort(inpart.params['message'], hint=inpart.params.get('hint'))
875 875
876 876 @parthandler('b2x:error:unsupportedcontent', ('parttype', 'params'))
877 877 def handlereplycaps(op, inpart):
878 878 """Used to transmit an unknown-content error over the wire"""
879 879 kwargs = {}
880 880 parttype = inpart.params.get('parttype')
881 881 if parttype is not None:
882 882 kwargs['parttype'] = parttype
883 883 params = inpart.params.get('params')
884 884 if params is not None:
885 885 kwargs['params'] = params.split('\0')
886 886
887 887 raise error.BundleValueError(**kwargs)
888 888
889 889 @parthandler('b2x:error:pushraced', ('message',))
890 890 def handlereplycaps(op, inpart):
891 891 """Used to transmit a push race error over the wire"""
892 892 raise error.ResponseError(_('push failed:'), inpart.params['message'])
893 893
894 894 @parthandler('b2x:listkeys', ('namespace',))
895 895 def handlelistkeys(op, inpart):
896 896 """retrieve pushkey namespace content stored in a bundle2"""
897 897 namespace = inpart.params['namespace']
898 898 r = pushkey.decodekeys(inpart.read())
899 899 op.records.add('listkeys', (namespace, r))
900 900
901 901 @parthandler('b2x:pushkey', ('namespace', 'key', 'old', 'new'))
902 902 def handlepushkey(op, inpart):
903 903 """process a pushkey request"""
904 904 dec = pushkey.decode
905 905 namespace = dec(inpart.params['namespace'])
906 906 key = dec(inpart.params['key'])
907 907 old = dec(inpart.params['old'])
908 908 new = dec(inpart.params['new'])
909 909 ret = op.repo.pushkey(namespace, key, old, new)
910 910 record = {'namespace': namespace,
911 911 'key': key,
912 912 'old': old,
913 913 'new': new}
914 914 op.records.add('pushkey', record)
915 915 if op.reply is not None:
916 916 rpart = op.reply.newpart('b2x:reply:pushkey')
917 917 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
918 918 rpart.addparam('return', '%i' % ret, mandatory=False)
919 919
920 920 @parthandler('b2x:reply:pushkey', ('return', 'in-reply-to'))
921 921 def handlepushkeyreply(op, inpart):
922 922 """retrieve the result of a pushkey request"""
923 923 ret = int(inpart.params['return'])
924 924 partid = int(inpart.params['in-reply-to'])
925 925 op.records.add('pushkey', {'return': ret}, partid)
926 926
927 927 @parthandler('b2x:obsmarkers')
928 928 def handleobsmarker(op, inpart):
929 929 """add a stream of obsmarkers to the repo"""
930 930 tr = op.gettransaction()
931 931 new = op.repo.obsstore.mergemarkers(tr, inpart.read())
932 932 if new:
933 933 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
934 934 op.records.add('obsmarkers', {'new': new})
935 935 if op.reply is not None:
936 936 rpart = op.reply.newpart('b2x:reply:obsmarkers')
937 937 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
938 938 rpart.addparam('new', '%i' % new, mandatory=False)
939 939
940 940
941 941 @parthandler('b2x:reply:obsmarkers', ('new', 'in-reply-to'))
942 942 def handlepushkeyreply(op, inpart):
943 943 """retrieve the result of a pushkey request"""
944 944 ret = int(inpart.params['new'])
945 945 partid = int(inpart.params['in-reply-to'])
946 946 op.records.add('obsmarkers', {'new': ret}, partid)
@@ -1,756 +1,756
1 1 # changegroup.py - Mercurial changegroup manipulation functions
2 2 #
3 3 # Copyright 2006 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 import weakref
9 9 from i18n import _
10 10 from node import nullrev, nullid, hex, short
11 11 import mdiff, util, dagutil
12 12 import struct, os, bz2, zlib, tempfile
13 13 import discovery, error, phases, branchmap
14 14
15 _BUNDLE10_DELTA_HEADER = "20s20s20s20s"
15 _CHANGEGROUPV1_DELTA_HEADER = "20s20s20s20s"
16 16
17 17 def readexactly(stream, n):
18 18 '''read n bytes from stream.read and abort if less was available'''
19 19 s = stream.read(n)
20 20 if len(s) < n:
21 21 raise util.Abort(_("stream ended unexpectedly"
22 22 " (got %d bytes, expected %d)")
23 23 % (len(s), n))
24 24 return s
25 25
26 26 def getchunk(stream):
27 27 """return the next chunk from stream as a string"""
28 28 d = readexactly(stream, 4)
29 29 l = struct.unpack(">l", d)[0]
30 30 if l <= 4:
31 31 if l:
32 32 raise util.Abort(_("invalid chunk length %d") % l)
33 33 return ""
34 34 return readexactly(stream, l - 4)
35 35
36 36 def chunkheader(length):
37 37 """return a changegroup chunk header (string)"""
38 38 return struct.pack(">l", length + 4)
39 39
40 40 def closechunk():
41 41 """return a changegroup chunk header (string) for a zero-length chunk"""
42 42 return struct.pack(">l", 0)
43 43
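The four helpers above define the length-prefixed chunk framing used throughout the changegroup format: the 4-byte big-endian length field counts itself plus the payload, and a zero length terminates a group. A self-contained round trip of that framing (simplified: the abort path for invalid lengths between 1 and 4 is omitted):

```python
import struct
from io import BytesIO

def chunkheader(length):
    # the length field counts its own 4 bytes plus the payload
    return struct.pack(">l", length + 4)

def closechunk():
    # a zero-length chunk marks the end of a group
    return struct.pack(">l", 0)

def getchunk(stream):
    d = stream.read(4)
    l = struct.unpack(">l", d)[0]
    if l <= 4:
        return b""  # end marker (error handling elided)
    return stream.read(l - 4)

payload = b"hello"
wire = chunkheader(len(payload)) + payload + closechunk()
fh = BytesIO(wire)
assert getchunk(fh) == b"hello"
assert getchunk(fh) == b""
```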
44 44 class nocompress(object):
45 45 def compress(self, x):
46 46 return x
47 47 def flush(self):
48 48 return ""
49 49
50 50 bundletypes = {
51 51 "": ("", nocompress), # only when using unbundle on ssh and old http servers
52 52 # since the unification ssh accepts a header but there
53 53 # is no capability signaling it.
54 54 "HG10UN": ("HG10UN", nocompress),
55 55 "HG10BZ": ("HG10", lambda: bz2.BZ2Compressor()),
56 56 "HG10GZ": ("HG10GZ", lambda: zlib.compressobj()),
57 57 }
58 58
59 59 # hgweb uses this list to communicate its preferred type
60 60 bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']
61 61
62 62 def writebundle(cg, filename, bundletype, vfs=None):
63 63 """Write a bundle file and return its filename.
64 64
65 65 Existing files will not be overwritten.
66 66 If no filename is specified, a temporary file is created.
67 67 bz2 compression can be turned off.
68 68 The bundle file will be deleted in case of errors.
69 69 """
70 70
71 71 fh = None
72 72 cleanup = None
73 73 try:
74 74 if filename:
75 75 if vfs:
76 76 fh = vfs.open(filename, "wb")
77 77 else:
78 78 fh = open(filename, "wb")
79 79 else:
80 80 fd, filename = tempfile.mkstemp(prefix="hg-bundle-", suffix=".hg")
81 81 fh = os.fdopen(fd, "wb")
82 82 cleanup = filename
83 83
84 84 header, compressor = bundletypes[bundletype]
85 85 fh.write(header)
86 86 z = compressor()
87 87
88 88 # parse the changegroup data, otherwise we will block
89 89 # in case of sshrepo because we don't know the end of the stream
90 90
91 91 # an empty chunkgroup is the end of the changegroup
92 92 # a changegroup has at least 2 chunkgroups (changelog and manifest).
93 93 # after that, an empty chunkgroup is the end of the changegroup
94 94 for chunk in cg.getchunks():
95 95 fh.write(z.compress(chunk))
96 96 fh.write(z.flush())
97 97 cleanup = None
98 98 return filename
99 99 finally:
100 100 if fh is not None:
101 101 fh.close()
102 102 if cleanup is not None:
103 103 if filename and vfs:
104 104 vfs.unlink(cleanup)
105 105 else:
106 106 os.unlink(cleanup)
107 107
108 108 def decompressor(fh, alg):
109 109 if alg == 'UN':
110 110 return fh
111 111 elif alg == 'GZ':
112 112 def generator(f):
113 113 zd = zlib.decompressobj()
114 114 for chunk in util.filechunkiter(f):
115 115 yield zd.decompress(chunk)
116 116 elif alg == 'BZ':
117 117 def generator(f):
118 118 zd = bz2.BZ2Decompressor()
119 119 zd.decompress("BZ")
120 120 for chunk in util.filechunkiter(f, 4096):
121 121 yield zd.decompress(chunk)
122 122 else:
123 123 raise util.Abort("unknown bundle compression '%s'" % alg)
124 124 return util.chunkbuffer(generator(fh))
125 125
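The 'BZ' branch of `decompressor` feeds the literal bytes "BZ" to the decompressor before the real stream, because the bundle header has already consumed the magic bytes that bz2 expects at the start. A standalone illustration of the same trick with the stdlib bz2 module:

```python
import bz2

data = bz2.compress(b"some payload")
stripped = data[2:]            # drop the leading "BZ" magic, as a bundle does

zd = bz2.BZ2Decompressor()
zd.decompress(b"BZ")           # re-inject the magic by hand
assert zd.decompress(stripped) == b"some payload"
```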
126 class unbundle10(object):
127 deltaheader = _BUNDLE10_DELTA_HEADER
126 class cg1unpacker(object):
127 deltaheader = _CHANGEGROUPV1_DELTA_HEADER
128 128 deltaheadersize = struct.calcsize(deltaheader)
129 129 def __init__(self, fh, alg):
130 130 self._stream = decompressor(fh, alg)
131 131 self._type = alg
132 132 self.callback = None
133 133 def compressed(self):
134 134 return self._type != 'UN'
135 135 def read(self, l):
136 136 return self._stream.read(l)
137 137 def seek(self, pos):
138 138 return self._stream.seek(pos)
139 139 def tell(self):
140 140 return self._stream.tell()
141 141 def close(self):
142 142 return self._stream.close()
143 143
144 144 def chunklength(self):
145 145 d = readexactly(self._stream, 4)
146 146 l = struct.unpack(">l", d)[0]
147 147 if l <= 4:
148 148 if l:
149 149 raise util.Abort(_("invalid chunk length %d") % l)
150 150 return 0
151 151 if self.callback:
152 152 self.callback()
153 153 return l - 4
154 154
155 155 def changelogheader(self):
156 156 """v10 does not have a changelog header chunk"""
157 157 return {}
158 158
159 159 def manifestheader(self):
160 160 """v10 does not have a manifest header chunk"""
161 161 return {}
162 162
163 163 def filelogheader(self):
164 164 """return the header of the filelogs chunk, v10 only has the filename"""
165 165 l = self.chunklength()
166 166 if not l:
167 167 return {}
168 168 fname = readexactly(self._stream, l)
169 169 return {'filename': fname}
170 170
171 171 def _deltaheader(self, headertuple, prevnode):
172 172 node, p1, p2, cs = headertuple
173 173 if prevnode is None:
174 174 deltabase = p1
175 175 else:
176 176 deltabase = prevnode
177 177 return node, p1, p2, deltabase, cs
178 178
179 179 def deltachunk(self, prevnode):
180 180 l = self.chunklength()
181 181 if not l:
182 182 return {}
183 183 headerdata = readexactly(self._stream, self.deltaheadersize)
184 184 header = struct.unpack(self.deltaheader, headerdata)
185 185 delta = readexactly(self._stream, l - self.deltaheadersize)
186 186 node, p1, p2, deltabase, cs = self._deltaheader(header, prevnode)
187 187 return {'node': node, 'p1': p1, 'p2': p2, 'cs': cs,
188 188 'deltabase': deltabase, 'delta': delta}
189 189
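The v1 delta header read by `deltachunk` is a fixed 80-byte record of four 20-byte nodes (node, p1, p2, linknode); the delta base is not stored, defaulting to the previous node in the stream as `_deltaheader` shows. A quick check of that layout with struct:

```python
import struct

DELTA_HEADER = "20s20s20s20s"  # node, p1, p2, linknode (cs)
assert struct.calcsize(DELTA_HEADER) == 80

raw = b"N" * 20 + b"P" * 20 + b"Q" * 20 + b"L" * 20
node, p1, p2, cs = struct.unpack(DELTA_HEADER, raw)
assert node == b"N" * 20
assert cs == b"L" * 20
```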
190 190 def getchunks(self):
191 191 """returns all the chunks contained in the bundle
192 192
193 193 Used when you need to forward the binary stream to a file or another
194 194 network API. To do so, it parses the changegroup data, otherwise it will
195 195 block in the sshrepo case because it doesn't know the end of the stream.
196 196 """
197 197 # an empty chunkgroup is the end of the changegroup
198 198 # a changegroup has at least 2 chunkgroups (changelog and manifest).
199 199 # after that, an empty chunkgroup is the end of the changegroup
200 200 empty = False
201 201 count = 0
202 202 while not empty or count <= 2:
203 203 empty = True
204 204 count += 1
205 205 while True:
206 206 chunk = getchunk(self)
207 207 if not chunk:
208 208 break
209 209 empty = False
210 210 yield chunkheader(len(chunk))
211 211 pos = 0
212 212 while pos < len(chunk):
213 213 next = pos + 2**20
214 214 yield chunk[pos:next]
215 215 pos = next
216 216 yield closechunk()
217 217
218 218 class headerlessfixup(object):
219 219 def __init__(self, fh, h):
220 220 self._h = h
221 221 self._fh = fh
222 222 def read(self, n):
223 223 if self._h:
224 224 d, self._h = self._h[:n], self._h[n:]
225 225 if len(d) < n:
226 226 d += readexactly(self._fh, n - len(d))
227 227 return d
228 228 return readexactly(self._fh, n)
229 229
230 class bundle10(object):
231 deltaheader = _BUNDLE10_DELTA_HEADER
230 class cg1packer(object):
231 deltaheader = _CHANGEGROUPV1_DELTA_HEADER
232 232 def __init__(self, repo, bundlecaps=None):
233 233 """Given a source repo, construct a bundler.
234 234
235 235 bundlecaps is optional and can be used to specify the set of
236 236 capabilities which can be used to build the bundle.
237 237 """
238 238 # Set of capabilities we can use to build the bundle.
239 239 if bundlecaps is None:
240 240 bundlecaps = set()
241 241 self._bundlecaps = bundlecaps
242 242 self._changelog = repo.changelog
243 243 self._manifest = repo.manifest
244 244 reorder = repo.ui.config('bundle', 'reorder', 'auto')
245 245 if reorder == 'auto':
246 246 reorder = None
247 247 else:
248 248 reorder = util.parsebool(reorder)
249 249 self._repo = repo
250 250 self._reorder = reorder
251 251 self._progress = repo.ui.progress
252 252 def close(self):
253 253 return closechunk()
254 254
255 255 def fileheader(self, fname):
256 256 return chunkheader(len(fname)) + fname
257 257
258 258 def group(self, nodelist, revlog, lookup, units=None, reorder=None):
259 259 """Calculate a delta group, yielding a sequence of changegroup chunks
260 260 (strings).
261 261
262 262 Given a list of changeset revs, return a set of deltas and
263 263 metadata corresponding to nodes. The first delta is
264 264 first parent(nodelist[0]) -> nodelist[0], the receiver is
265 265 guaranteed to have this parent as it has all history before
266 266 these changesets. When firstparent is nullrev, the
267 267 changegroup starts with a full revision.
268 268
269 269 If units is not None, progress detail will be generated, units specifies
270 270 the type of revlog that is touched (changelog, manifest, etc.).
271 271 """
272 272 # if we don't have any revisions touched by these changesets, bail
273 273 if len(nodelist) == 0:
274 274 yield self.close()
275 275 return
276 276
277 277 # for generaldelta revlogs, we linearize the revs; this will both be
278 278 # much quicker and generate a much smaller bundle
279 279 if (revlog._generaldelta and reorder is not False) or reorder:
280 280 dag = dagutil.revlogdag(revlog)
281 281 revs = set(revlog.rev(n) for n in nodelist)
282 282 revs = dag.linearize(revs)
283 283 else:
284 284 revs = sorted([revlog.rev(n) for n in nodelist])
285 285
286 286 # add the parent of the first rev
287 287 p = revlog.parentrevs(revs[0])[0]
288 288 revs.insert(0, p)
289 289
290 290 # build deltas
291 291 total = len(revs) - 1
292 292 msgbundling = _('bundling')
293 293 for r in xrange(len(revs) - 1):
294 294 if units is not None:
295 295 self._progress(msgbundling, r + 1, unit=units, total=total)
296 296 prev, curr = revs[r], revs[r + 1]
297 297 linknode = lookup(revlog.node(curr))
298 298 for c in self.revchunk(revlog, curr, prev, linknode):
299 299 yield c
300 300
301 301 yield self.close()
302 302
303 303 # filter any nodes that claim to be part of the known set
304 304 def prune(self, revlog, missing, commonrevs, source):
305 305 rr, rl = revlog.rev, revlog.linkrev
306 306 return [n for n in missing if rl(rr(n)) not in commonrevs]
307 307
308 308 def generate(self, commonrevs, clnodes, fastpathlinkrev, source):
309 309 '''yield a sequence of changegroup chunks (strings)'''
310 310 repo = self._repo
311 311 cl = self._changelog
312 312 mf = self._manifest
313 313 reorder = self._reorder
314 314 progress = self._progress
315 315
316 316 # for progress output
317 317 msgbundling = _('bundling')
318 318
319 319 mfs = {} # needed manifests
320 320 fnodes = {} # needed file nodes
321 321 changedfiles = set()
322 322
323 323 # Callback for the changelog, used to collect changed files and manifest
324 324 # nodes.
325 325 # Returns the linkrev node (identity in the changelog case).
326 326 def lookupcl(x):
327 327 c = cl.read(x)
328 328 changedfiles.update(c[3])
329 329 # record the first changeset introducing this manifest version
330 330 mfs.setdefault(c[0], x)
331 331 return x
332 332
333 333 # Callback for the manifest, used to collect linkrevs for filelog
334 334 # revisions.
335 335 # Returns the linkrev node (collected in lookupcl).
336 336 def lookupmf(x):
337 337 clnode = mfs[x]
338 338 if not fastpathlinkrev:
339 339 mdata = mf.readfast(x)
340 340 for f, n in mdata.iteritems():
341 341 if f in changedfiles:
342 342 # record the first changeset introducing this filelog
343 343 # version
344 344 fnodes[f].setdefault(n, clnode)
345 345 return clnode
346 346
347 347 for chunk in self.group(clnodes, cl, lookupcl, units=_('changesets'),
348 348 reorder=reorder):
349 349 yield chunk
350 350 progress(msgbundling, None)
351 351
352 352 for f in changedfiles:
353 353 fnodes[f] = {}
354 354 mfnodes = self.prune(mf, mfs, commonrevs, source)
355 355 for chunk in self.group(mfnodes, mf, lookupmf, units=_('manifests'),
356 356 reorder=reorder):
357 357 yield chunk
358 358 progress(msgbundling, None)
359 359
360 360 mfs.clear()
361 361 needed = set(cl.rev(x) for x in clnodes)
362 362
363 363 def linknodes(filerevlog, fname):
364 364 if fastpathlinkrev:
365 365 llr = filerevlog.linkrev
366 366 def genfilenodes():
367 367 for r in filerevlog:
368 368 linkrev = llr(r)
369 369 if linkrev in needed:
370 370 yield filerevlog.node(r), cl.node(linkrev)
371 371 fnodes[fname] = dict(genfilenodes())
372 372 return fnodes.get(fname, {})
373 373
374 374 for chunk in self.generatefiles(changedfiles, linknodes, commonrevs,
375 375 source):
376 376 yield chunk
377 377
378 378 yield self.close()
379 379 progress(msgbundling, None)
380 380
381 381 if clnodes:
382 382 repo.hook('outgoing', node=hex(clnodes[0]), source=source)
383 383
384 384 def generatefiles(self, changedfiles, linknodes, commonrevs, source):
385 385 repo = self._repo
386 386 progress = self._progress
387 387 reorder = self._reorder
388 388 msgbundling = _('bundling')
389 389
390 390 total = len(changedfiles)
391 391 # for progress output
392 392 msgfiles = _('files')
393 393 for i, fname in enumerate(sorted(changedfiles)):
394 394 filerevlog = repo.file(fname)
395 395 if not filerevlog:
396 396 raise util.Abort(_("empty or missing revlog for %s") % fname)
397 397
398 398 linkrevnodes = linknodes(filerevlog, fname)
399 399 # Lookup table for filenodes; we collected the linkrev nodes above in the
400 400 # fastpath case and with lookupmf in the slowpath case.
401 401 def lookupfilelog(x):
402 402 return linkrevnodes[x]
403 403
404 404 filenodes = self.prune(filerevlog, linkrevnodes, commonrevs, source)
405 405 if filenodes:
406 406 progress(msgbundling, i + 1, item=fname, unit=msgfiles,
407 407 total=total)
408 408 yield self.fileheader(fname)
409 409 for chunk in self.group(filenodes, filerevlog, lookupfilelog,
410 410 reorder=reorder):
411 411 yield chunk
412 412
413 413 def revchunk(self, revlog, rev, prev, linknode):
414 414 node = revlog.node(rev)
415 415 p1, p2 = revlog.parentrevs(rev)
416 416 base = prev
417 417
418 418 prefix = ''
419 419 if base == nullrev:
420 420 delta = revlog.revision(node)
421 421 prefix = mdiff.trivialdiffheader(len(delta))
422 422 else:
423 423 delta = revlog.revdiff(base, rev)
424 424 p1n, p2n = revlog.parents(node)
425 425 basenode = revlog.node(base)
426 426 meta = self.builddeltaheader(node, p1n, p2n, basenode, linknode)
427 427 meta += prefix
428 428 l = len(meta) + len(delta)
429 429 yield chunkheader(l)
430 430 yield meta
431 431 yield delta
432 432 def builddeltaheader(self, node, p1n, p2n, basenode, linknode):
433 433 # do nothing with basenode, it is implicitly the previous one in HG10
434 434 return struct.pack(self.deltaheader, node, p1n, p2n, linknode)
435 435
436 436 def _changegroupinfo(repo, nodes, source):
437 437 if repo.ui.verbose or source == 'bundle':
438 438 repo.ui.status(_("%d changesets found\n") % len(nodes))
439 439 if repo.ui.debugflag:
440 440 repo.ui.debug("list of changesets:\n")
441 441 for node in nodes:
442 442 repo.ui.debug("%s\n" % hex(node))
443 443
444 444 def getsubset(repo, outgoing, bundler, source, fastpath=False):
445 445 repo = repo.unfiltered()
446 446 commonrevs = outgoing.common
447 447 csets = outgoing.missing
448 448 heads = outgoing.missingheads
449 449 # We go through the fast path if we get told to, or if all (unfiltered)
450 450 # heads have been requested (since we then know that all linkrevs will
451 451 # be pulled by the client).
452 452 heads.sort()
453 453 fastpathlinkrev = fastpath or (
454 454 repo.filtername is None and heads == sorted(repo.heads()))
455 455
456 456 repo.hook('preoutgoing', throw=True, source=source)
457 457 _changegroupinfo(repo, csets, source)
458 458 gengroup = bundler.generate(commonrevs, csets, fastpathlinkrev, source)
459 return unbundle10(util.chunkbuffer(gengroup), 'UN')
459 return cg1unpacker(util.chunkbuffer(gengroup), 'UN')
460 460
461 461 def changegroupsubset(repo, roots, heads, source):
462 462 """Compute a changegroup consisting of all the nodes that are
463 463 descendants of any of the roots and ancestors of any of the heads.
464 464 Return a chunkbuffer object whose read() method will return
465 465 successive changegroup chunks.
466 466
467 467 It is fairly complex as determining which filenodes and which
468 468 manifest nodes need to be included for the changeset to be complete
469 469 is non-trivial.
470 470
471 471 Another wrinkle is doing the reverse, figuring out which changeset in
472 472 the changegroup a particular filenode or manifestnode belongs to.
473 473 """
474 474 cl = repo.changelog
475 475 if not roots:
476 476 roots = [nullid]
477 477 # TODO: remove call to nodesbetween.
478 478 csets, roots, heads = cl.nodesbetween(roots, heads)
479 479 discbases = []
480 480 for n in roots:
481 481 discbases.extend([p for p in cl.parents(n) if p != nullid])
482 482 outgoing = discovery.outgoing(cl, discbases, heads)
483 bundler = bundle10(repo)
483 bundler = cg1packer(repo)
484 484 return getsubset(repo, outgoing, bundler, source)
485 485
486 def getlocalbundle(repo, source, outgoing, bundlecaps=None):
486 def getlocalchangegroup(repo, source, outgoing, bundlecaps=None):
487 487 """Like getbundle, but taking a discovery.outgoing as an argument.
488 488
489 489 This is only implemented for local repos and reuses potentially
490 490 precomputed sets in outgoing."""
491 491 if not outgoing.missing:
492 492 return None
493 bundler = bundle10(repo, bundlecaps)
493 bundler = cg1packer(repo, bundlecaps)
494 494 return getsubset(repo, outgoing, bundler, source)
495 495
496 496 def _computeoutgoing(repo, heads, common):
497 497 """Computes which revs are outgoing given a set of common
498 498 and a set of heads.
499 499
500 500 This is a separate function so extensions can have access to
501 501 the logic.
502 502
503 503 Returns a discovery.outgoing object.
504 504 """
505 505 cl = repo.changelog
506 506 if common:
507 507 hasnode = cl.hasnode
508 508 common = [n for n in common if hasnode(n)]
509 509 else:
510 510 common = [nullid]
511 511 if not heads:
512 512 heads = cl.heads()
513 513 return discovery.outgoing(cl, common, heads)
514 514
515 def getbundle(repo, source, heads=None, common=None, bundlecaps=None):
515 def getchangegroup(repo, source, heads=None, common=None, bundlecaps=None):
516 516 """Like changegroupsubset, but returns the set difference between the
517 517 ancestors of heads and the ancestors common.
518 518
519 519 If heads is None, use the local heads. If common is None, use [nullid].
520 520
521 521 The nodes in common might not all be known locally due to the way the
522 522 current discovery protocol works.
523 523 """
524 524 outgoing = _computeoutgoing(repo, heads, common)
525 return getlocalbundle(repo, source, outgoing, bundlecaps=bundlecaps)
525 return getlocalchangegroup(repo, source, outgoing, bundlecaps=bundlecaps)
526 526
527 527 def changegroup(repo, basenodes, source):
528 528 # to avoid a race we use changegroupsubset() (issue1320)
529 529 return changegroupsubset(repo, basenodes, repo.heads(), source)
530 530
531 531 def addchangegroupfiles(repo, source, revmap, trp, pr, needfiles):
532 532 revisions = 0
533 533 files = 0
534 534 while True:
535 535 chunkdata = source.filelogheader()
536 536 if not chunkdata:
537 537 break
538 538 f = chunkdata["filename"]
539 539 repo.ui.debug("adding %s revisions\n" % f)
540 540 pr()
541 541 fl = repo.file(f)
542 542 o = len(fl)
543 543 if not fl.addgroup(source, revmap, trp):
544 544 raise util.Abort(_("received file revlog group is empty"))
545 545 revisions += len(fl) - o
546 546 files += 1
547 547 if f in needfiles:
548 548 needs = needfiles[f]
549 549 for new in xrange(o, len(fl)):
550 550 n = fl.node(new)
551 551 if n in needs:
552 552 needs.remove(n)
553 553 else:
554 554 raise util.Abort(
555 555 _("received spurious file revlog entry"))
556 556 if not needs:
557 557 del needfiles[f]
558 558 repo.ui.progress(_('files'), None)
559 559
560 560 for f, needs in needfiles.iteritems():
561 561 fl = repo.file(f)
562 562 for n in needs:
563 563 try:
564 564 fl.rev(n)
565 565 except error.LookupError:
566 566 raise util.Abort(
567 567 _('missing file data for %s:%s - run hg verify') %
568 568 (f, hex(n)))
569 569
570 570 return revisions, files
571 571
572 572 def addchangegroup(repo, source, srctype, url, emptyok=False,
573 573 targetphase=phases.draft):
574 574 """Add the changegroup returned by source.read() to this repo.
575 575 srctype is a string like 'push', 'pull', or 'unbundle'. url is
576 576 the URL of the repo where this changegroup is coming from.
577 577
578 578 Return an integer summarizing the change to this repo:
579 579 - nothing changed or no source: 0
580 580 - more heads than before: 1+added heads (2..n)
581 581 - fewer heads than before: -1-removed heads (-2..-n)
582 582 - number of heads stays the same: 1
583 583 """
584 584 repo = repo.unfiltered()
585 585 def csmap(x):
586 586 repo.ui.debug("add changeset %s\n" % short(x))
587 587 return len(cl)
588 588
589 589 def revmap(x):
590 590 return cl.rev(x)
591 591
592 592 if not source:
593 593 return 0
594 594
595 595 repo.hook('prechangegroup', throw=True, source=srctype, url=url)
596 596
597 597 changesets = files = revisions = 0
598 598 efiles = set()
599 599
600 600 # write changelog data to temp files so concurrent readers will not see
601 601 # an inconsistent view
602 602 cl = repo.changelog
603 603 cl.delayupdate()
604 604 oldheads = cl.heads()
605 605
606 606 tr = repo.transaction("\n".join([srctype, util.hidepassword(url)]))
607 607 try:
608 608 trp = weakref.proxy(tr)
609 609 # pull off the changeset group
610 610 repo.ui.status(_("adding changesets\n"))
611 611 clstart = len(cl)
612 612 class prog(object):
613 613 step = _('changesets')
614 614 count = 1
615 615 ui = repo.ui
616 616 total = None
617 617 def __call__(repo):
618 618 repo.ui.progress(repo.step, repo.count, unit=_('chunks'),
619 619 total=repo.total)
620 620 repo.count += 1
621 621 pr = prog()
622 622 source.callback = pr
623 623
624 624 source.changelogheader()
625 625 srccontent = cl.addgroup(source, csmap, trp)
626 626 if not (srccontent or emptyok):
627 627 raise util.Abort(_("received changelog group is empty"))
628 628 clend = len(cl)
629 629 changesets = clend - clstart
630 630 for c in xrange(clstart, clend):
631 631 efiles.update(repo[c].files())
632 632 efiles = len(efiles)
633 633 repo.ui.progress(_('changesets'), None)
634 634
635 635 # pull off the manifest group
636 636 repo.ui.status(_("adding manifests\n"))
637 637 pr.step = _('manifests')
638 638 pr.count = 1
639 639 pr.total = changesets # manifests <= changesets
640 640 # no need to check for empty manifest group here:
641 641 # if the result of the merge of 1 and 2 is the same in 3 and 4,
642 642 # no new manifest will be created and the manifest group will
643 643 # be empty during the pull
644 644 source.manifestheader()
645 645 repo.manifest.addgroup(source, revmap, trp)
646 646 repo.ui.progress(_('manifests'), None)
647 647
648 648 needfiles = {}
649 649 if repo.ui.configbool('server', 'validate', default=False):
650 650 # validate incoming csets have their manifests
651 651 for cset in xrange(clstart, clend):
652 652 mfest = repo.changelog.read(repo.changelog.node(cset))[0]
653 653 mfest = repo.manifest.readdelta(mfest)
654 654 # store file nodes we must see
655 655 for f, n in mfest.iteritems():
656 656 needfiles.setdefault(f, set()).add(n)
657 657
658 658 # process the files
659 659 repo.ui.status(_("adding file changes\n"))
660 660 pr.step = _('files')
661 661 pr.count = 1
662 662 pr.total = efiles
663 663 source.callback = None
664 664
665 665 newrevs, newfiles = addchangegroupfiles(repo, source, revmap, trp, pr,
666 666 needfiles)
667 667 revisions += newrevs
668 668 files += newfiles
669 669
670 670 dh = 0
671 671 if oldheads:
672 672 heads = cl.heads()
673 673 dh = len(heads) - len(oldheads)
674 674 for h in heads:
675 675 if h not in oldheads and repo[h].closesbranch():
676 676 dh -= 1
677 677 htext = ""
678 678 if dh:
679 679 htext = _(" (%+d heads)") % dh
680 680
681 681 repo.ui.status(_("added %d changesets"
682 682 " with %d changes to %d files%s\n")
683 683 % (changesets, revisions, files, htext))
684 684 repo.invalidatevolatilesets()
685 685
686 686 if changesets > 0:
687 687 p = lambda: cl.writepending() and repo.root or ""
688 688 if 'node' not in tr.hookargs:
689 689 tr.hookargs['node'] = hex(cl.node(clstart))
690 690 repo.hook('pretxnchangegroup', throw=True, source=srctype,
691 691 url=url, pending=p, **tr.hookargs)
692 692
693 693 added = [cl.node(r) for r in xrange(clstart, clend)]
694 694 publishing = repo.ui.configbool('phases', 'publish', True)
695 695 if srctype in ('push', 'serve'):
696 696 # Old servers can not push the boundary themselves.
697 697 # New servers won't push the boundary if changeset already
698 698 # exists locally as secret
699 699 #
700 700 # We should not use added here but the list of all changes in
701 701 # the bundle
702 702 if publishing:
703 703 phases.advanceboundary(repo, tr, phases.public, srccontent)
704 704 else:
705 705 # Those changesets have been pushed from the outside, their
706 706 # phases are going to be pushed alongside. Therefore
707 707 # `targetphase` is ignored.
708 708 phases.advanceboundary(repo, tr, phases.draft, srccontent)
709 709 phases.retractboundary(repo, tr, phases.draft, added)
710 710 elif srctype != 'strip':
711 711 # publishing only alter behavior during push
712 712 #
713 713 # strip should not touch boundary at all
714 714 phases.retractboundary(repo, tr, targetphase, added)
715 715
716 716 # make changelog see real files again
717 717 cl.finalize(trp)
718 718
719 719 tr.close()
720 720
721 721 if changesets > 0:
722 722 if srctype != 'strip':
723 723 # During strip, branchcache is invalid but coming call to
724 724 # `destroyed` will repair it.
725 725 # In other case we can safely update cache on disk.
726 726 branchmap.updatecache(repo.filtered('served'))
727 727 def runhooks():
728 728 # These hooks run when the lock releases, not when the
729 729 # transaction closes. So it's possible for the changelog
730 730 # to have changed since we last saw it.
731 731 if clstart >= len(repo):
732 732 return
733 733
734 734 # forcefully update the on-disk branch cache
735 735 repo.ui.debug("updating the branch cache\n")
736 736 repo.hook("changegroup", source=srctype, url=url,
737 737 **tr.hookargs)
738 738
739 739 for n in added:
740 740 repo.hook("incoming", node=hex(n), source=srctype,
741 741 url=url)
742 742
743 743 newheads = [h for h in repo.heads() if h not in oldheads]
744 744 repo.ui.log("incoming",
745 745 "%s incoming changes - new heads: %s\n",
746 746 len(added),
747 747 ', '.join([hex(c[:6]) for c in newheads]))
748 748 repo._afterlock(runhooks)
749 749
750 750 finally:
751 751 tr.release()
752 752 # never return 0 here:
753 753 if dh < 0:
754 754 return dh - 1
755 755 else:
756 756 return dh + 1
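The final lines of `addchangegroup` map the head delta `dh` to a return code shifted away from zero, so callers can distinguish "nothing changed" (0) from "same number of heads" (1), matching the docstring at the top of the function. A sketch of just that encoding (the `summarize` name is illustrative):

```python
def summarize(dh):
    # never return 0: shift the head-count delta away from zero
    return dh - 1 if dh < 0 else dh + 1

assert summarize(0) == 1    # head count unchanged -> 1
assert summarize(2) == 3    # two heads added -> 3 (1 + added heads)
assert summarize(-2) == -3  # two heads removed -> -3 (-1 - removed heads)
```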
@@ -1,6113 +1,6114
1 1 # commands.py - command processing for mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from node import hex, bin, nullid, nullrev, short
9 9 from lock import release
10 10 from i18n import _
11 11 import os, re, difflib, time, tempfile, errno, shlex
12 12 import sys
13 13 import hg, scmutil, util, revlog, copies, error, bookmarks
14 14 import patch, help, encoding, templatekw, discovery
15 15 import archival, changegroup, cmdutil, hbisect
16 16 import sshserver, hgweb, commandserver
17 17 import extensions
18 18 from hgweb import server as hgweb_server
19 19 import merge as mergemod
20 20 import minirst, revset, fileset
21 21 import dagparser, context, simplemerge, graphmod
22 22 import random
23 23 import setdiscovery, treediscovery, dagutil, pvec, localrepo
24 24 import phases, obsolete, exchange
25 25
26 26 table = {}
27 27
28 28 command = cmdutil.command(table)
29 29
30 30 # Space delimited list of commands that don't require local repositories.
31 31 # This should be populated by passing norepo=True into the @command decorator.
32 32 norepo = ''
33 33 # Space delimited list of commands that optionally require local repositories.
34 34 # This should be populated by passing optionalrepo=True into the @command
35 35 # decorator.
36 36 optionalrepo = ''
37 37 # Space delimited list of commands that will examine arguments looking for
38 38 # a repository. This should be populated by passing inferrepo=True into the
39 39 # @command decorator.
40 40 inferrepo = ''
41 41
42 42 # common command options
43 43
44 44 globalopts = [
45 45 ('R', 'repository', '',
46 46 _('repository root directory or name of overlay bundle file'),
47 47 _('REPO')),
48 48 ('', 'cwd', '',
49 49 _('change working directory'), _('DIR')),
50 50 ('y', 'noninteractive', None,
51 51 _('do not prompt, automatically pick the first choice for all prompts')),
52 52 ('q', 'quiet', None, _('suppress output')),
53 53 ('v', 'verbose', None, _('enable additional output')),
54 54 ('', 'config', [],
55 55 _('set/override config option (use \'section.name=value\')'),
56 56 _('CONFIG')),
57 57 ('', 'debug', None, _('enable debugging output')),
58 58 ('', 'debugger', None, _('start debugger')),
59 59 ('', 'encoding', encoding.encoding, _('set the charset encoding'),
60 60 _('ENCODE')),
61 61 ('', 'encodingmode', encoding.encodingmode,
62 62 _('set the charset encoding mode'), _('MODE')),
63 63 ('', 'traceback', None, _('always print a traceback on exception')),
64 64 ('', 'time', None, _('time how long the command takes')),
65 65 ('', 'profile', None, _('print command execution profile')),
66 66 ('', 'version', None, _('output version information and exit')),
67 67 ('h', 'help', None, _('display help and exit')),
68 68 ('', 'hidden', False, _('consider hidden changesets')),
69 69 ]
70 70
71 71 dryrunopts = [('n', 'dry-run', None,
72 72 _('do not perform actions, just print output'))]
73 73
74 74 remoteopts = [
75 75 ('e', 'ssh', '',
76 76 _('specify ssh command to use'), _('CMD')),
77 77 ('', 'remotecmd', '',
78 78 _('specify hg command to run on the remote side'), _('CMD')),
79 79 ('', 'insecure', None,
80 80 _('do not verify server certificate (ignoring web.cacerts config)')),
81 81 ]
82 82
83 83 walkopts = [
84 84 ('I', 'include', [],
85 85 _('include names matching the given patterns'), _('PATTERN')),
86 86 ('X', 'exclude', [],
87 87 _('exclude names matching the given patterns'), _('PATTERN')),
88 88 ]
89 89
90 90 commitopts = [
91 91 ('m', 'message', '',
92 92 _('use text as commit message'), _('TEXT')),
93 93 ('l', 'logfile', '',
94 94 _('read commit message from file'), _('FILE')),
95 95 ]
96 96
97 97 commitopts2 = [
98 98 ('d', 'date', '',
99 99 _('record the specified date as commit date'), _('DATE')),
100 100 ('u', 'user', '',
101 101 _('record the specified user as committer'), _('USER')),
102 102 ]
103 103
104 104 templateopts = [
105 105 ('', 'style', '',
106 106 _('display using template map file (DEPRECATED)'), _('STYLE')),
107 107 ('T', 'template', '',
108 108 _('display with template'), _('TEMPLATE')),
109 109 ]
110 110
111 111 logopts = [
112 112 ('p', 'patch', None, _('show patch')),
113 113 ('g', 'git', None, _('use git extended diff format')),
114 114 ('l', 'limit', '',
115 115 _('limit number of changes displayed'), _('NUM')),
116 116 ('M', 'no-merges', None, _('do not show merges')),
117 117 ('', 'stat', None, _('output diffstat-style summary of changes')),
118 118 ('G', 'graph', None, _("show the revision DAG")),
119 119 ] + templateopts
120 120
121 121 diffopts = [
122 122 ('a', 'text', None, _('treat all files as text')),
123 123 ('g', 'git', None, _('use git extended diff format')),
124 124 ('', 'nodates', None, _('omit dates from diff headers'))
125 125 ]
126 126
127 127 diffwsopts = [
128 128 ('w', 'ignore-all-space', None,
129 129 _('ignore white space when comparing lines')),
130 130 ('b', 'ignore-space-change', None,
131 131 _('ignore changes in the amount of white space')),
132 132 ('B', 'ignore-blank-lines', None,
133 133 _('ignore changes whose lines are all blank')),
134 134 ]
135 135
136 136 diffopts2 = [
137 137 ('p', 'show-function', None, _('show which function each change is in')),
138 138 ('', 'reverse', None, _('produce a diff that undoes the changes')),
139 139 ] + diffwsopts + [
140 140 ('U', 'unified', '',
141 141 _('number of lines of context to show'), _('NUM')),
142 142 ('', 'stat', None, _('output diffstat-style summary of changes')),
143 143 ]
144 144
145 145 mergetoolopts = [
146 146 ('t', 'tool', '', _('specify merge tool')),
147 147 ]
148 148
149 149 similarityopts = [
150 150 ('s', 'similarity', '',
151 151 _('guess renamed files by similarity (0<=s<=100)'), _('SIMILARITY'))
152 152 ]
153 153
154 154 subrepoopts = [
155 155 ('S', 'subrepos', None,
156 156 _('recurse into subrepositories'))
157 157 ]
158 158
159 159 # Commands start here, listed alphabetically
160 160
161 161 @command('^add',
162 162 walkopts + subrepoopts + dryrunopts,
163 163 _('[OPTION]... [FILE]...'),
164 164 inferrepo=True)
165 165 def add(ui, repo, *pats, **opts):
166 166 """add the specified files on the next commit
167 167
168 168 Schedule files to be version controlled and added to the
169 169 repository.
170 170
171 171 The files will be added to the repository at the next commit. To
172 172 undo an add before that, see :hg:`forget`.
173 173
174 174 If no names are given, add all files to the repository.
175 175
176 176 .. container:: verbose
177 177
178 178 An example showing how new (unknown) files are added
179 179 automatically by :hg:`add`::
180 180
181 181 $ ls
182 182 foo.c
183 183 $ hg status
184 184 ? foo.c
185 185 $ hg add
186 186 adding foo.c
187 187 $ hg status
188 188 A foo.c
189 189
190 190 Returns 0 if all files are successfully added.
191 191 """
192 192
193 193 m = scmutil.match(repo[None], pats, opts)
194 194 rejected = cmdutil.add(ui, repo, m, opts.get('dry_run'),
195 195 opts.get('subrepos'), prefix="", explicitonly=False)
196 196 return rejected and 1 or 0
197 197
198 198 @command('addremove',
199 199 similarityopts + walkopts + dryrunopts,
200 200 _('[OPTION]... [FILE]...'),
201 201 inferrepo=True)
202 202 def addremove(ui, repo, *pats, **opts):
203 203 """add all new files, delete all missing files
204 204
205 205 Add all new files and remove all missing files from the
206 206 repository.
207 207
208 208 New files are ignored if they match any of the patterns in
209 209 ``.hgignore``. As with add, these changes take effect at the next
210 210 commit.
211 211
212 212 Use the -s/--similarity option to detect renamed files. This
213 213 option takes a percentage between 0 (disabled) and 100 (files must
214 214 be identical) as its parameter. With a parameter greater than 0,
215 215 this compares every removed file with every added file and records
216 216 those similar enough as renames. Detecting renamed files this way
217 217 can be expensive. After using this option, :hg:`status -C` can be
218 218 used to check which files were identified as moved or renamed. If
219 219 not specified, -s/--similarity defaults to 100 and only renames of
220 220 identical files are detected.
221 221
222 222 Returns 0 if all files are successfully added.
223 223 """
224 224 try:
225 225 sim = float(opts.get('similarity') or 100)
226 226 except ValueError:
227 227 raise util.Abort(_('similarity must be a number'))
228 228 if sim < 0 or sim > 100:
229 229 raise util.Abort(_('similarity must be between 0 and 100'))
230 230 return scmutil.addremove(repo, pats, opts, similarity=sim / 100.0)
231 231
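The `--similarity` handling in `addremove` above follows a common pattern: accept a percentage string, fall back to a default when empty, validate the 0-100 range, and scale to a 0.0-1.0 ratio for the matcher. A standalone sketch of that pattern (not Mercurial code; the helper name is hypothetical and `ValueError` stands in for `util.Abort`):

```python
# Sketch: parse a percentage option into a 0.0-1.0 ratio, mirroring
# the validation done by addremove above.
def parse_similarity(value, default=100):
    try:
        # an empty string or None falls back to the default percentage
        sim = float(value or default)
    except ValueError:
        raise ValueError('similarity must be a number')
    if sim < 0 or sim > 100:
        raise ValueError('similarity must be between 0 and 100')
    return sim / 100.0
```

For example, `parse_similarity('75')` yields `0.75`, and an empty value yields `1.0` (the 100% default, matching "only renames of identical files are detected").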
232 232 @command('^annotate|blame',
233 233 [('r', 'rev', '', _('annotate the specified revision'), _('REV')),
234 234 ('', 'follow', None,
235 235 _('follow copies/renames and list the filename (DEPRECATED)')),
236 236 ('', 'no-follow', None, _("don't follow copies and renames")),
237 237 ('a', 'text', None, _('treat all files as text')),
238 238 ('u', 'user', None, _('list the author (long with -v)')),
239 239 ('f', 'file', None, _('list the filename')),
240 240 ('d', 'date', None, _('list the date (short with -q)')),
241 241 ('n', 'number', None, _('list the revision number (default)')),
242 242 ('c', 'changeset', None, _('list the changeset')),
243 243 ('l', 'line-number', None, _('show line number at the first appearance'))
244 244 ] + diffwsopts + walkopts,
245 245 _('[-r REV] [-f] [-a] [-u] [-d] [-n] [-c] [-l] FILE...'),
246 246 inferrepo=True)
247 247 def annotate(ui, repo, *pats, **opts):
248 248 """show changeset information by line for each file
249 249
250 250 List changes in files, showing the revision id responsible for
251 251 each line
252 252
253 253 This command is useful for discovering when a change was made and
254 254 by whom.
255 255
256 256 Without the -a/--text option, annotate will avoid processing files
257 257 it detects as binary. With -a, annotate will annotate the file
258 258 anyway, although the results will probably be neither useful
259 259 nor desirable.
260 260
261 261 Returns 0 on success.
262 262 """
263 263 if not pats:
264 264 raise util.Abort(_('at least one filename or pattern is required'))
265 265
266 266 if opts.get('follow'):
267 267 # --follow is deprecated and now just an alias for -f/--file
268 268 # to mimic the behavior of Mercurial before version 1.5
269 269 opts['file'] = True
270 270
271 271 datefunc = ui.quiet and util.shortdate or util.datestr
272 272 getdate = util.cachefunc(lambda x: datefunc(x[0].date()))
273 273 hexfn = ui.debugflag and hex or short
274 274
275 275 opmap = [('user', ' ', lambda x: ui.shortuser(x[0].user())),
276 276 ('number', ' ', lambda x: str(x[0].rev())),
277 277 ('changeset', ' ', lambda x: hexfn(x[0].node())),
278 278 ('date', ' ', getdate),
279 279 ('file', ' ', lambda x: x[0].path()),
280 280 ('line_number', ':', lambda x: str(x[1])),
281 281 ]
282 282
283 283 if (not opts.get('user') and not opts.get('changeset')
284 284 and not opts.get('date') and not opts.get('file')):
285 285 opts['number'] = True
286 286
287 287 linenumber = opts.get('line_number') is not None
288 288 if linenumber and (not opts.get('changeset')) and (not opts.get('number')):
289 289 raise util.Abort(_('at least one of -n/-c is required for -l'))
290 290
291 291 funcmap = [(func, sep) for op, sep, func in opmap if opts.get(op)]
292 292 funcmap[0] = (funcmap[0][0], '') # no separator in front of first column
293 293
294 294 def bad(x, y):
295 295 raise util.Abort("%s: %s" % (x, y))
296 296
297 297 ctx = scmutil.revsingle(repo, opts.get('rev'))
298 298 m = scmutil.match(ctx, pats, opts)
299 299 m.bad = bad
300 300 follow = not opts.get('no_follow')
301 301 diffopts = patch.diffopts(ui, opts, section='annotate')
302 302 for abs in ctx.walk(m):
303 303 fctx = ctx[abs]
304 304 if not opts.get('text') and util.binary(fctx.data()):
305 305 ui.write(_("%s: binary file\n") % ((pats and m.rel(abs)) or abs))
306 306 continue
307 307
308 308 lines = fctx.annotate(follow=follow, linenumber=linenumber,
309 309 diffopts=diffopts)
310 310 pieces = []
311 311
312 312 for f, sep in funcmap:
313 313 l = [f(n) for n, dummy in lines]
314 314 if l:
315 315 sized = [(x, encoding.colwidth(x)) for x in l]
316 316 ml = max([w for x, w in sized])
317 317 pieces.append(["%s%s%s" % (sep, ' ' * (ml - w), x)
318 318 for x, w in sized])
319 319
320 320 if pieces:
321 321 for p, l in zip(zip(*pieces), lines):
322 322 ui.write("%s: %s" % ("".join(p), l[1]))
323 323
324 324 if lines and not lines[-1][1].endswith('\n'):
325 325 ui.write('\n')
326 326
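The `pieces` logic in `annotate` above right-aligns each output column to its widest cell, then transposes the padded columns back into rows with `zip(*pieces)`. A minimal sketch of that alignment scheme (not Mercurial code; plain `len()` stands in for `encoding.colwidth`, which is display-width aware):

```python
# Sketch: pad each column to its widest cell, then rebuild rows.
# columns: list of columns (each a list of cell strings)
# seps: per-column separator prefixes, as in annotate's opmap
def align_columns(columns, seps):
    padded = []
    for col, sep in zip(columns, seps):
        width = max(len(cell) for cell in col)
        padded.append([sep + cell.rjust(width) for cell in col])
    # zip(*padded) transposes the padded columns back into rows
    return [''.join(row) for row in zip(*padded)]
```

For instance, aligning a revision column `['12', '3']` against a user column `['alice', 'bob']` with separators `['', ' ']` produces `['12 alice', ' 3   bob']`: each column is flush-right, just like annotate's output.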
327 327 @command('archive',
328 328 [('', 'no-decode', None, _('do not pass files through decoders')),
329 329 ('p', 'prefix', '', _('directory prefix for files in archive'),
330 330 _('PREFIX')),
331 331 ('r', 'rev', '', _('revision to distribute'), _('REV')),
332 332 ('t', 'type', '', _('type of distribution to create'), _('TYPE')),
333 333 ] + subrepoopts + walkopts,
334 334 _('[OPTION]... DEST'))
335 335 def archive(ui, repo, dest, **opts):
336 336 '''create an unversioned archive of a repository revision
337 337
338 338 By default, the revision used is the parent of the working
339 339 directory; use -r/--rev to specify a different revision.
340 340
341 341 The archive type is automatically detected based on file
342 342 extension (or override using -t/--type).
343 343
344 344 .. container:: verbose
345 345
346 346 Examples:
347 347
348 348 - create a zip file containing the 1.0 release::
349 349
350 350 hg archive -r 1.0 project-1.0.zip
351 351
352 352 - create a tarball excluding .hg files::
353 353
354 354 hg archive project.tar.gz -X ".hg*"
355 355
356 356 Valid types are:
357 357
358 358 :``files``: a directory full of files (default)
359 359 :``tar``: tar archive, uncompressed
360 360 :``tbz2``: tar archive, compressed using bzip2
361 361 :``tgz``: tar archive, compressed using gzip
362 362 :``uzip``: zip archive, uncompressed
363 363 :``zip``: zip archive, compressed using deflate
364 364
365 365 The exact name of the destination archive or directory is given
366 366 using a format string; see :hg:`help export` for details.
367 367
368 368 Each member added to an archive file has a directory prefix
369 369 prepended. Use -p/--prefix to specify a format string for the
370 370 prefix. The default is the basename of the archive, with suffixes
371 371 removed.
372 372
373 373 Returns 0 on success.
374 374 '''
375 375
376 376 ctx = scmutil.revsingle(repo, opts.get('rev'))
377 377 if not ctx:
378 378 raise util.Abort(_('no working directory: please specify a revision'))
379 379 node = ctx.node()
380 380 dest = cmdutil.makefilename(repo, dest, node)
381 381 if os.path.realpath(dest) == repo.root:
382 382 raise util.Abort(_('repository root cannot be destination'))
383 383
384 384 kind = opts.get('type') or archival.guesskind(dest) or 'files'
385 385 prefix = opts.get('prefix')
386 386
387 387 if dest == '-':
388 388 if kind == 'files':
389 389 raise util.Abort(_('cannot archive plain files to stdout'))
390 390 dest = cmdutil.makefileobj(repo, dest)
391 391 if not prefix:
392 392 prefix = os.path.basename(repo.root) + '-%h'
393 393
394 394 prefix = cmdutil.makefilename(repo, prefix, node)
395 395 matchfn = scmutil.match(ctx, [], opts)
396 396 archival.archive(repo, dest, node, kind, not opts.get('no_decode'),
397 397 matchfn, prefix, subrepos=opts.get('subrepos'))
398 398
399 399 @command('backout',
400 400 [('', 'merge', None, _('merge with old dirstate parent after backout')),
401 401 ('', 'parent', '',
402 402 _('parent to choose when backing out merge (DEPRECATED)'), _('REV')),
403 403 ('r', 'rev', '', _('revision to backout'), _('REV')),
404 404 ('e', 'edit', False, _('invoke editor on commit messages')),
405 405 ] + mergetoolopts + walkopts + commitopts + commitopts2,
406 406 _('[OPTION]... [-r] REV'))
407 407 def backout(ui, repo, node=None, rev=None, **opts):
408 408 '''reverse effect of earlier changeset
409 409
410 410 Prepare a new changeset with the effect of REV undone in the
411 411 current working directory.
412 412
413 413 If REV is the parent of the working directory, then this new changeset
414 414 is committed automatically. Otherwise, hg needs to merge the
415 415 changes and the merged result is left uncommitted.
416 416
417 417 .. note::
418 418
419 419 backout cannot be used to fix either an unwanted or
420 420 incorrect merge.
421 421
422 422 .. container:: verbose
423 423
424 424 By default, the pending changeset will have one parent,
425 425 maintaining a linear history. With --merge, the pending
426 426 changeset will instead have two parents: the old parent of the
427 427 working directory and a new child of REV that simply undoes REV.
428 428
429 429 Before version 1.7, the behavior without --merge was equivalent
430 430 to specifying --merge followed by :hg:`update --clean .` to
431 431 cancel the merge and leave the child of REV as a head to be
432 432 merged separately.
433 433
434 434 See :hg:`help dates` for a list of formats valid for -d/--date.
435 435
436 436 Returns 0 on success, 1 if nothing to backout or there are unresolved
437 437 files.
438 438 '''
439 439 if rev and node:
440 440 raise util.Abort(_("please specify just one revision"))
441 441
442 442 if not rev:
443 443 rev = node
444 444
445 445 if not rev:
446 446 raise util.Abort(_("please specify a revision to backout"))
447 447
448 448 date = opts.get('date')
449 449 if date:
450 450 opts['date'] = util.parsedate(date)
451 451
452 452 cmdutil.checkunfinished(repo)
453 453 cmdutil.bailifchanged(repo)
454 454 node = scmutil.revsingle(repo, rev).node()
455 455
456 456 op1, op2 = repo.dirstate.parents()
457 457 if not repo.changelog.isancestor(node, op1):
458 458 raise util.Abort(_('cannot backout change that is not an ancestor'))
459 459
460 460 p1, p2 = repo.changelog.parents(node)
461 461 if p1 == nullid:
462 462 raise util.Abort(_('cannot backout a change with no parents'))
463 463 if p2 != nullid:
464 464 if not opts.get('parent'):
465 465 raise util.Abort(_('cannot backout a merge changeset'))
466 466 p = repo.lookup(opts['parent'])
467 467 if p not in (p1, p2):
468 468 raise util.Abort(_('%s is not a parent of %s') %
469 469 (short(p), short(node)))
470 470 parent = p
471 471 else:
472 472 if opts.get('parent'):
473 473 raise util.Abort(_('cannot use --parent on non-merge changeset'))
474 474 parent = p1
475 475
476 476 # the backout should appear on the same branch
477 477 wlock = repo.wlock()
478 478 try:
479 479 branch = repo.dirstate.branch()
480 480 bheads = repo.branchheads(branch)
481 481 rctx = scmutil.revsingle(repo, hex(parent))
482 482 if not opts.get('merge') and op1 != node:
483 483 try:
484 484 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
485 485 'backout')
486 486 stats = mergemod.update(repo, parent, True, True, False,
487 487 node, False)
488 488 repo.setparents(op1, op2)
489 489 hg._showstats(repo, stats)
490 490 if stats[3]:
491 491 repo.ui.status(_("use 'hg resolve' to retry unresolved "
492 492 "file merges\n"))
493 493 else:
494 494 msg = _("changeset %s backed out, "
495 495 "don't forget to commit.\n")
496 496 ui.status(msg % short(node))
497 497 return stats[3] > 0
498 498 finally:
499 499 ui.setconfig('ui', 'forcemerge', '', '')
500 500 else:
501 501 hg.clean(repo, node, show_stats=False)
502 502 repo.dirstate.setbranch(branch)
503 503 cmdutil.revert(ui, repo, rctx, repo.dirstate.parents())
504 504
505 505
506 506 def commitfunc(ui, repo, message, match, opts):
507 507 editform = 'backout'
508 508 e = cmdutil.getcommiteditor(editform=editform, **opts)
509 509 if not message:
510 510 # we don't translate commit messages
511 511 message = "Backed out changeset %s" % short(node)
512 512 e = cmdutil.getcommiteditor(edit=True, editform=editform)
513 513 return repo.commit(message, opts.get('user'), opts.get('date'),
514 514 match, editor=e)
515 515 newnode = cmdutil.commit(ui, repo, commitfunc, [], opts)
516 516 if not newnode:
517 517 ui.status(_("nothing changed\n"))
518 518 return 1
519 519 cmdutil.commitstatus(repo, newnode, branch, bheads)
520 520
521 521 def nice(node):
522 522 return '%d:%s' % (repo.changelog.rev(node), short(node))
523 523 ui.status(_('changeset %s backs out changeset %s\n') %
524 524 (nice(repo.changelog.tip()), nice(node)))
525 525 if opts.get('merge') and op1 != node:
526 526 hg.clean(repo, op1, show_stats=False)
527 527 ui.status(_('merging with changeset %s\n')
528 528 % nice(repo.changelog.tip()))
529 529 try:
530 530 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
531 531 'backout')
532 532 return hg.merge(repo, hex(repo.changelog.tip()))
533 533 finally:
534 534 ui.setconfig('ui', 'forcemerge', '', '')
535 535 finally:
536 536 wlock.release()
537 537 return 0
538 538
539 539 @command('bisect',
540 540 [('r', 'reset', False, _('reset bisect state')),
541 541 ('g', 'good', False, _('mark changeset good')),
542 542 ('b', 'bad', False, _('mark changeset bad')),
543 543 ('s', 'skip', False, _('skip testing changeset')),
544 544 ('e', 'extend', False, _('extend the bisect range')),
545 545 ('c', 'command', '', _('use command to check changeset state'), _('CMD')),
546 546 ('U', 'noupdate', False, _('do not update to target'))],
547 547 _("[-gbsr] [-U] [-c CMD] [REV]"))
548 548 def bisect(ui, repo, rev=None, extra=None, command=None,
549 549 reset=None, good=None, bad=None, skip=None, extend=None,
550 550 noupdate=None):
551 551 """subdivision search of changesets
552 552
553 553 This command helps to find changesets which introduce problems. To
554 554 use, mark the earliest changeset you know exhibits the problem as
555 555 bad, then mark the latest changeset which is free from the problem
556 556 as good. Bisect will update your working directory to a revision
557 557 for testing (unless the -U/--noupdate option is specified). Once
558 558 you have performed tests, mark the working directory as good or
559 559 bad, and bisect will either update to another candidate changeset
560 560 or announce that it has found the bad revision.
561 561
562 562 As a shortcut, you can also use the revision argument to mark a
563 563 revision as good or bad without checking it out first.
564 564
565 565 If you supply a command, it will be used for automatic bisection.
566 566 The environment variable HG_NODE will contain the ID of the
567 567 changeset being tested. The exit status of the command will be
568 568 used to mark revisions as good or bad: status 0 means good, 125
569 569 means to skip the revision, 127 (command not found) will abort the
570 570 bisection, and any other non-zero exit status means the revision
571 571 is bad.
572 572
573 573 .. container:: verbose
574 574
575 575 Some examples:
576 576
577 577 - start a bisection with known bad revision 34, and good revision 12::
578 578
579 579 hg bisect --bad 34
580 580 hg bisect --good 12
581 581
582 582 - advance the current bisection by marking current revision as good or
583 583 bad::
584 584
585 585 hg bisect --good
586 586 hg bisect --bad
587 587
588 588 - mark the current revision, or a known revision, to be skipped (e.g. if
589 589 that revision is not usable because of another issue)::
590 590
591 591 hg bisect --skip
592 592 hg bisect --skip 23
593 593
594 594 - skip all revisions that do not touch directories ``foo`` or ``bar``::
595 595
596 596 hg bisect --skip "!( file('path:foo') & file('path:bar') )"
597 597
598 598 - forget the current bisection::
599 599
600 600 hg bisect --reset
601 601
602 602 - use 'make && make tests' to automatically find the first broken
603 603 revision::
604 604
605 605 hg bisect --reset
606 606 hg bisect --bad 34
607 607 hg bisect --good 12
608 608 hg bisect --command "make && make tests"
609 609
610 610 - see all changesets whose states are already known in the current
611 611 bisection::
612 612
613 613 hg log -r "bisect(pruned)"
614 614
615 615 - see the changeset currently being bisected (especially useful
616 616 if running with -U/--noupdate)::
617 617
618 618 hg log -r "bisect(current)"
619 619
620 620 - see all changesets that took part in the current bisection::
621 621
622 622 hg log -r "bisect(range)"
623 623
624 624 - you can even get a nice graph::
625 625
626 626 hg log --graph -r "bisect(range)"
627 627
628 628 See :hg:`help revsets` for more about the `bisect()` keyword.
629 629
630 630 Returns 0 on success.
631 631 """
632 632 def extendbisectrange(nodes, good):
633 633 # bisect is incomplete when it ends on a merge node and
634 634 # one of the parent was not checked.
635 635 parents = repo[nodes[0]].parents()
636 636 if len(parents) > 1:
637 637 side = good and state['bad'] or state['good']
638 638 num = len(set(i.node() for i in parents) & set(side))
639 639 if num == 1:
640 640 return parents[0].ancestor(parents[1])
641 641 return None
642 642
643 643 def print_result(nodes, good):
644 644 displayer = cmdutil.show_changeset(ui, repo, {})
645 645 if len(nodes) == 1:
646 646 # narrowed it down to a single revision
647 647 if good:
648 648 ui.write(_("The first good revision is:\n"))
649 649 else:
650 650 ui.write(_("The first bad revision is:\n"))
651 651 displayer.show(repo[nodes[0]])
652 652 extendnode = extendbisectrange(nodes, good)
653 653 if extendnode is not None:
654 654 ui.write(_('Not all ancestors of this changeset have been'
655 655 ' checked.\nUse bisect --extend to continue the '
656 656 'bisection from\nthe common ancestor, %s.\n')
657 657 % extendnode)
658 658 else:
659 659 # multiple possible revisions
660 660 if good:
661 661 ui.write(_("Due to skipped revisions, the first "
662 662 "good revision could be any of:\n"))
663 663 else:
664 664 ui.write(_("Due to skipped revisions, the first "
665 665 "bad revision could be any of:\n"))
666 666 for n in nodes:
667 667 displayer.show(repo[n])
668 668 displayer.close()
669 669
670 670 def check_state(state, interactive=True):
671 671 if not state['good'] or not state['bad']:
672 672 if (good or bad or skip or reset) and interactive:
673 673 return
674 674 if not state['good']:
675 675 raise util.Abort(_('cannot bisect (no known good revisions)'))
676 676 else:
677 677 raise util.Abort(_('cannot bisect (no known bad revisions)'))
678 678 return True
679 679
680 680 # backward compatibility
681 681 if rev in "good bad reset init".split():
682 682 ui.warn(_("(use of 'hg bisect <cmd>' is deprecated)\n"))
683 683 cmd, rev, extra = rev, extra, None
684 684 if cmd == "good":
685 685 good = True
686 686 elif cmd == "bad":
687 687 bad = True
688 688 else:
689 689 reset = True
690 690 elif extra or good + bad + skip + reset + extend + bool(command) > 1:
691 691 raise util.Abort(_('incompatible arguments'))
692 692
693 693 cmdutil.checkunfinished(repo)
694 694
695 695 if reset:
696 696 p = repo.join("bisect.state")
697 697 if os.path.exists(p):
698 698 os.unlink(p)
699 699 return
700 700
701 701 state = hbisect.load_state(repo)
702 702
703 703 if command:
704 704 changesets = 1
705 705 if noupdate:
706 706 try:
707 707 node = state['current'][0]
708 708 except LookupError:
709 709 raise util.Abort(_('current bisect revision is unknown - '
710 710 'start a new bisect to fix'))
711 711 else:
712 712 node, p2 = repo.dirstate.parents()
713 713 if p2 != nullid:
714 714 raise util.Abort(_('current bisect revision is a merge'))
715 715 try:
716 716 while changesets:
717 717 # update state
718 718 state['current'] = [node]
719 719 hbisect.save_state(repo, state)
720 720 status = util.system(command,
721 721 environ={'HG_NODE': hex(node)},
722 722 out=ui.fout)
723 723 if status == 125:
724 724 transition = "skip"
725 725 elif status == 0:
726 726 transition = "good"
727 727 # status < 0 means process was killed
728 728 elif status == 127:
729 729 raise util.Abort(_("failed to execute %s") % command)
730 730 elif status < 0:
731 731 raise util.Abort(_("%s killed") % command)
732 732 else:
733 733 transition = "bad"
734 734 ctx = scmutil.revsingle(repo, rev, node)
735 735 rev = None # clear for future iterations
736 736 state[transition].append(ctx.node())
737 737 ui.status(_('changeset %d:%s: %s\n') % (ctx, ctx, transition))
738 738 check_state(state, interactive=False)
739 739 # bisect
740 740 nodes, changesets, bgood = hbisect.bisect(repo.changelog, state)
741 741 # update to next check
742 742 node = nodes[0]
743 743 if not noupdate:
744 744 cmdutil.bailifchanged(repo)
745 745 hg.clean(repo, node, show_stats=False)
746 746 finally:
747 747 state['current'] = [node]
748 748 hbisect.save_state(repo, state)
749 749 print_result(nodes, bgood)
750 750 return
751 751
752 752 # update state
753 753
754 754 if rev:
755 755 nodes = [repo.lookup(i) for i in scmutil.revrange(repo, [rev])]
756 756 else:
757 757 nodes = [repo.lookup('.')]
758 758
759 759 if good or bad or skip:
760 760 if good:
761 761 state['good'] += nodes
762 762 elif bad:
763 763 state['bad'] += nodes
764 764 elif skip:
765 765 state['skip'] += nodes
766 766 hbisect.save_state(repo, state)
767 767
768 768 if not check_state(state):
769 769 return
770 770
771 771 # actually bisect
772 772 nodes, changesets, good = hbisect.bisect(repo.changelog, state)
773 773 if extend:
774 774 if not changesets:
775 775 extendnode = extendbisectrange(nodes, good)
776 776 if extendnode is not None:
777 777 ui.write(_("Extending search to changeset %d:%s\n")
778 778 % (extendnode.rev(), extendnode))
779 779 state['current'] = [extendnode.node()]
780 780 hbisect.save_state(repo, state)
781 781 if noupdate:
782 782 return
783 783 cmdutil.bailifchanged(repo)
784 784 return hg.clean(repo, extendnode.node())
785 785 raise util.Abort(_("nothing to extend"))
786 786
787 787 if changesets == 0:
788 788 print_result(nodes, good)
789 789 else:
790 790 assert len(nodes) == 1 # only a single node can be tested next
791 791 node = nodes[0]
792 792 # compute the approximate number of remaining tests
793 793 tests, size = 0, 2
794 794 while size <= changesets:
795 795 tests, size = tests + 1, size * 2
796 796 rev = repo.changelog.rev(node)
797 797 ui.write(_("Testing changeset %d:%s "
798 798 "(%d changesets remaining, ~%d tests)\n")
799 799 % (rev, short(node), changesets, tests))
800 800 state['current'] = [node]
801 801 hbisect.save_state(repo, state)
802 802 if not noupdate:
803 803 cmdutil.bailifchanged(repo)
804 804 return hg.clean(repo, node)
805 805
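The doubling loop near the end of `bisect` above estimates the remaining number of tests: since each test halves the candidate set, the answer is roughly the base-2 logarithm of the changeset count. Extracted as a standalone sketch (not Mercurial code; the function name is hypothetical):

```python
# Sketch: approximate remaining bisection tests, as computed above.
# Doubles `size` until it exceeds the changeset count, counting steps;
# the result is floor(log2(changesets)) for changesets >= 2, else 0.
def remaining_tests(changesets):
    tests, size = 0, 2
    while size <= changesets:
        tests, size = tests + 1, size * 2
    return tests
```

For example, 100 remaining changesets give an estimate of 6 tests (2^6 = 64 <= 100 < 128), which matches the "~%d tests" figure printed by the command.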
806 806 @command('bookmarks|bookmark',
807 807 [('f', 'force', False, _('force')),
808 808 ('r', 'rev', '', _('revision'), _('REV')),
809 809 ('d', 'delete', False, _('delete a given bookmark')),
810 810 ('m', 'rename', '', _('rename a given bookmark'), _('NAME')),
811 811 ('i', 'inactive', False, _('mark a bookmark inactive'))],
812 812 _('hg bookmarks [OPTIONS]... [NAME]...'))
813 813 def bookmark(ui, repo, *names, **opts):
814 814 '''create a new bookmark or list existing bookmarks
815 815
816 816 Bookmarks are labels on changesets to help track lines of development.
817 817 Bookmarks are unversioned and can be moved, renamed and deleted.
818 818 Deleting or moving a bookmark has no effect on the associated changesets.
819 819
820 820 Creating or updating to a bookmark causes it to be marked as 'active'.
821 821 The active bookmark is indicated with a '*'.
822 822 When a commit is made, the active bookmark will advance to the new commit.
823 823 A plain :hg:`update` will also advance an active bookmark, if possible.
824 824 Updating away from a bookmark will cause it to be deactivated.
825 825
826 826 Bookmarks can be pushed and pulled between repositories (see
827 827 :hg:`help push` and :hg:`help pull`). If a shared bookmark has
828 828 diverged, a new 'divergent bookmark' of the form 'name@path' will
829 829 be created. Using :hg:`merge` will resolve the divergence.
830 830
831 831 A bookmark named '@' has the special property that :hg:`clone` will
832 832 check it out by default if it exists.
833 833
834 834 .. container:: verbose
835 835
836 836 Examples:
837 837
838 838 - create an active bookmark for a new line of development::
839 839
840 840 hg book new-feature
841 841
842 842 - create an inactive bookmark as a place marker::
843 843
844 844 hg book -i reviewed
845 845
846 846 - create an inactive bookmark on another changeset::
847 847
848 848 hg book -r .^ tested
849 849
850 850 - move the '@' bookmark from another branch::
851 851
852 852 hg book -f @
853 853 '''
854 854 force = opts.get('force')
855 855 rev = opts.get('rev')
856 856 delete = opts.get('delete')
857 857 rename = opts.get('rename')
858 858 inactive = opts.get('inactive')
859 859
860 860 def checkformat(mark):
861 861 mark = mark.strip()
862 862 if not mark:
863 863 raise util.Abort(_("bookmark names cannot consist entirely of "
864 864 "whitespace"))
865 865 scmutil.checknewlabel(repo, mark, 'bookmark')
866 866 return mark
867 867
868 868 def checkconflict(repo, mark, cur, force=False, target=None):
869 869 if mark in marks and not force:
870 870 if target:
871 871 if marks[mark] == target and target == cur:
872 872 # re-activating a bookmark
873 873 return
874 874 anc = repo.changelog.ancestors([repo[target].rev()])
875 875 bmctx = repo[marks[mark]]
876 876 divs = [repo[b].node() for b in marks
877 877 if b.split('@', 1)[0] == mark.split('@', 1)[0]]
878 878
879 879 # allow resolving a single divergent bookmark even if moving
880 880 # the bookmark across branches when a revision is specified
881 881 # that contains a divergent bookmark
882 882 if bmctx.rev() not in anc and target in divs:
883 883 bookmarks.deletedivergent(repo, [target], mark)
884 884 return
885 885
886 886 deletefrom = [b for b in divs
887 887 if repo[b].rev() in anc or b == target]
888 888 bookmarks.deletedivergent(repo, deletefrom, mark)
889 889 if bookmarks.validdest(repo, bmctx, repo[target]):
890 890 ui.status(_("moving bookmark '%s' forward from %s\n") %
891 891 (mark, short(bmctx.node())))
892 892 return
893 893 raise util.Abort(_("bookmark '%s' already exists "
894 894 "(use -f to force)") % mark)
895 895 if ((mark in repo.branchmap() or mark == repo.dirstate.branch())
896 896 and not force):
897 897 raise util.Abort(
898 898 _("a bookmark cannot have the name of an existing branch"))
899 899
900 900 if delete and rename:
901 901 raise util.Abort(_("--delete and --rename are incompatible"))
902 902 if delete and rev:
903 903 raise util.Abort(_("--rev is incompatible with --delete"))
904 904 if rename and rev:
905 905 raise util.Abort(_("--rev is incompatible with --rename"))
906 906 if not names and (delete or rev):
907 907 raise util.Abort(_("bookmark name required"))
908 908
909 909 if delete or rename or names or inactive:
910 910 wlock = repo.wlock()
911 911 try:
912 912 cur = repo.changectx('.').node()
913 913 marks = repo._bookmarks
914 914 if delete:
915 915 for mark in names:
916 916 if mark not in marks:
917 917 raise util.Abort(_("bookmark '%s' does not exist") %
918 918 mark)
919 919 if mark == repo._bookmarkcurrent:
920 920 bookmarks.unsetcurrent(repo)
921 921 del marks[mark]
922 922 marks.write()
923 923
924 924 elif rename:
925 925 if not names:
926 926 raise util.Abort(_("new bookmark name required"))
927 927 elif len(names) > 1:
928 928 raise util.Abort(_("only one new bookmark name allowed"))
929 929 mark = checkformat(names[0])
930 930 if rename not in marks:
931 931 raise util.Abort(_("bookmark '%s' does not exist") % rename)
932 932 checkconflict(repo, mark, cur, force)
933 933 marks[mark] = marks[rename]
934 934 if repo._bookmarkcurrent == rename and not inactive:
935 935 bookmarks.setcurrent(repo, mark)
936 936 del marks[rename]
937 937 marks.write()
938 938
939 939 elif names:
940 940 newact = None
941 941 for mark in names:
942 942 mark = checkformat(mark)
943 943 if newact is None:
944 944 newact = mark
945 945 if inactive and mark == repo._bookmarkcurrent:
946 946 bookmarks.unsetcurrent(repo)
947 947 return
948 948 tgt = cur
949 949 if rev:
950 950 tgt = scmutil.revsingle(repo, rev).node()
951 951 checkconflict(repo, mark, cur, force, tgt)
952 952 marks[mark] = tgt
953 953 if not inactive and cur == marks[newact] and not rev:
954 954 bookmarks.setcurrent(repo, newact)
955 955 elif cur != tgt and newact == repo._bookmarkcurrent:
956 956 bookmarks.unsetcurrent(repo)
957 957 marks.write()
958 958
959 959 elif inactive:
960 960 if len(marks) == 0:
961 961 ui.status(_("no bookmarks set\n"))
962 962 elif not repo._bookmarkcurrent:
963 963 ui.status(_("no active bookmark\n"))
964 964 else:
965 965 bookmarks.unsetcurrent(repo)
966 966 finally:
967 967 wlock.release()
968 968 else: # show bookmarks
969 969 hexfn = ui.debugflag and hex or short
970 970 marks = repo._bookmarks
971 971 if len(marks) == 0:
972 972 ui.status(_("no bookmarks set\n"))
973 973 else:
974 974 for bmark, n in sorted(marks.iteritems()):
975 975 current = repo._bookmarkcurrent
976 976 if bmark == current:
977 977 prefix, label = '*', 'bookmarks.current'
978 978 else:
979 979 prefix, label = ' ', ''
980 980
981 981 if ui.quiet:
982 982 ui.write("%s\n" % bmark, label=label)
983 983 else:
984 984 pad = " " * (25 - encoding.colwidth(bmark))
985 985 ui.write(" %s %s%s %d:%s\n" % (
986 986 prefix, bmark, pad, repo.changelog.rev(n), hexfn(n)),
987 987 label=label)
988 988
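The bookmark listing above pads names by display width (`encoding.colwidth`), not `len()`, so the revision column stays aligned even with wide characters. A minimal sketch of that idea, using stdlib `unicodedata.east_asian_width` as a stand-in for Mercurial's internal `encoding.colwidth` (the helper names here are hypothetical):

```python
import unicodedata

def colwidth(s):
    # stand-in for Mercurial's encoding.colwidth: East Asian wide ('W')
    # and fullwidth ('F') characters occupy two terminal columns
    return sum(2 if unicodedata.east_asian_width(c) in 'WF' else 1
               for c in s)

def format_bookmark(prefix, bmark, rev, node, width=25):
    # pad by display width so "rev:node" lines up across bookmarks
    pad = " " * max(0, width - colwidth(bmark))
    return " %s %s%s %d:%s" % (prefix, bmark, pad, rev, node)
```

Padding by `len()` instead would misalign any bookmark containing double-width characters.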
989 989 @command('branch',
990 990 [('f', 'force', None,
991 991 _('set branch name even if it shadows an existing branch')),
992 992 ('C', 'clean', None, _('reset branch name to parent branch name'))],
993 993 _('[-fC] [NAME]'))
994 994 def branch(ui, repo, label=None, **opts):
995 995 """set or show the current branch name
996 996
997 997 .. note::
998 998
999 999 Branch names are permanent and global. Use :hg:`bookmark` to create a
1000 1000 light-weight bookmark instead. See :hg:`help glossary` for more
1001 1001 information about named branches and bookmarks.
1002 1002
1003 1003 With no argument, show the current branch name. With one argument,
1004 1004 set the working directory branch name (the branch will not exist
1005 1005 in the repository until the next commit). Standard practice
1006 1006 recommends that primary development take place on the 'default'
1007 1007 branch.
1008 1008
1009 1009 Unless -f/--force is specified, branch will not let you set a
1010 1010 branch name that already exists, even if it's inactive.
1011 1011
1012 1012 Use -C/--clean to reset the working directory branch to that of
1013 1013 the parent of the working directory, negating a previous branch
1014 1014 change.
1015 1015
1016 1016 Use the command :hg:`update` to switch to an existing branch. Use
1017 1017 :hg:`commit --close-branch` to mark this branch as closed.
1018 1018
1019 1019 Returns 0 on success.
1020 1020 """
1021 1021 if label:
1022 1022 label = label.strip()
1023 1023
1024 1024 if not opts.get('clean') and not label:
1025 1025 ui.write("%s\n" % repo.dirstate.branch())
1026 1026 return
1027 1027
1028 1028 wlock = repo.wlock()
1029 1029 try:
1030 1030 if opts.get('clean'):
1031 1031 label = repo[None].p1().branch()
1032 1032 repo.dirstate.setbranch(label)
1033 1033 ui.status(_('reset working directory to branch %s\n') % label)
1034 1034 elif label:
1035 1035 if not opts.get('force') and label in repo.branchmap():
1036 1036 if label not in [p.branch() for p in repo.parents()]:
1037 1037 raise util.Abort(_('a branch of the same name already'
1038 1038 ' exists'),
1039 1039 # i18n: "it" refers to an existing branch
1040 1040 hint=_("use 'hg update' to switch to it"))
1041 1041 scmutil.checknewlabel(repo, label, 'branch')
1042 1042 repo.dirstate.setbranch(label)
1043 1043 ui.status(_('marked working directory as branch %s\n') % label)
1044 1044 ui.status(_('(branches are permanent and global, '
1045 1045 'did you want a bookmark?)\n'))
1046 1046 finally:
1047 1047 wlock.release()
1048 1048
1049 1049 @command('branches',
1050 1050 [('a', 'active', False, _('show only branches that have unmerged heads')),
1051 1051 ('c', 'closed', False, _('show normal and closed branches'))],
1052 1052 _('[-ac]'))
1053 1053 def branches(ui, repo, active=False, closed=False):
1054 1054 """list repository named branches
1055 1055
1056 1056 List the repository's named branches, indicating which ones are
1057 1057 inactive. If -c/--closed is specified, also list branches which have
1058 1058 been marked closed (see :hg:`commit --close-branch`).
1059 1059
1060 1060 If -a/--active is specified, only show active branches. A branch
1061 1061 is considered active if it contains repository heads.
1062 1062
1063 1063 Use the command :hg:`update` to switch to an existing branch.
1064 1064
1065 1065 Returns 0.
1066 1066 """
1067 1067
1068 1068 hexfunc = ui.debugflag and hex or short
1069 1069
1070 1070 allheads = set(repo.heads())
1071 1071 branches = []
1072 1072 for tag, heads, tip, isclosed in repo.branchmap().iterbranches():
1073 1073 isactive = not isclosed and bool(set(heads) & allheads)
1074 1074 branches.append((tag, repo[tip], isactive, not isclosed))
1075 1075 branches.sort(key=lambda i: (i[2], i[1].rev(), i[0], i[3]),
1076 1076 reverse=True)
1077 1077
1078 1078 for tag, ctx, isactive, isopen in branches:
1079 1079 if (not active) or isactive:
1080 1080 if isactive:
1081 1081 label = 'branches.active'
1082 1082 notice = ''
1083 1083 elif not isopen:
1084 1084 if not closed:
1085 1085 continue
1086 1086 label = 'branches.closed'
1087 1087 notice = _(' (closed)')
1088 1088 else:
1089 1089 label = 'branches.inactive'
1090 1090 notice = _(' (inactive)')
1091 1091 if tag == repo.dirstate.branch():
1092 1092 label = 'branches.current'
1093 1093 rev = str(ctx.rev()).rjust(31 - encoding.colwidth(tag))
1094 1094 rev = ui.label('%s:%s' % (rev, hexfunc(ctx.node())),
1095 1095 'log.changeset changeset.%s' % ctx.phasestr())
1096 1096 labeledtag = ui.label(tag, label)
1097 1097 if ui.quiet:
1098 1098 ui.write("%s\n" % labeledtag)
1099 1099 else:
1100 1100 ui.write("%s %s%s\n" % (labeledtag, rev, notice))
1101 1101
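The sort in `branches` above orders by `(isactive, tip rev, tag, isopen)` with `reverse=True`: active branches first, then by descending tip revision. A small sketch with plain tuples standing in for the `(tag, ctx, isactive, isopen)` entries (the sample data is invented):

```python
# each entry: (tag, tip_rev, isactive, isopen)
branches = [
    ('default', 10, True, True),
    ('stale',    4, False, True),
    ('feature', 12, True, True),
    ('old',      7, False, False),
]
# active branches first, then by descending tip revision,
# with the tag name as a tie-breaker, mirroring the key above
branches.sort(key=lambda i: (i[2], i[1], i[0], i[3]), reverse=True)
```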
1102 1102 @command('bundle',
1103 1103 [('f', 'force', None, _('run even when the destination is unrelated')),
1104 1104 ('r', 'rev', [], _('a changeset intended to be added to the destination'),
1105 1105 _('REV')),
1106 1106 ('b', 'branch', [], _('a specific branch you would like to bundle'),
1107 1107 _('BRANCH')),
1108 1108 ('', 'base', [],
1109 1109 _('a base changeset assumed to be available at the destination'),
1110 1110 _('REV')),
1111 1111 ('a', 'all', None, _('bundle all changesets in the repository')),
1112 1112 ('t', 'type', 'bzip2', _('bundle compression type to use'), _('TYPE')),
1113 1113 ] + remoteopts,
1114 1114 _('[-f] [-t TYPE] [-a] [-r REV]... [--base REV]... FILE [DEST]'))
1115 1115 def bundle(ui, repo, fname, dest=None, **opts):
1116 1116 """create a changegroup file
1117 1117
1118 1118 Generate a compressed changegroup file collecting changesets not
1119 1119 known to be in another repository.
1120 1120
1121 1121 If you omit the destination repository, then hg assumes the
1122 1122 destination will have all the nodes you specify with --base
1123 1123 parameters. To create a bundle containing all changesets, use
1124 1124 -a/--all (or --base null).
1125 1125
1126 1126 You can change compression method with the -t/--type option.
1127 1127 The available compression methods are: none, bzip2, and
1128 1128 gzip (by default, bundles are compressed using bzip2).
1129 1129
1130 1130 The bundle file can then be transferred using conventional means
1131 1131 and applied to another repository with the unbundle or pull
1132 1132 command. This is useful when direct push and pull are not
1133 1133 available or when exporting an entire repository is undesirable.
1134 1134
1135 1135 Applying bundles preserves all changeset contents including
1136 1136 permissions, copy/rename information, and revision history.
1137 1137
1138 1138 Returns 0 on success, 1 if no changes found.
1139 1139 """
1140 1140 revs = None
1141 1141 if 'rev' in opts:
1142 1142 revs = scmutil.revrange(repo, opts['rev'])
1143 1143
1144 1144 bundletype = opts.get('type', 'bzip2').lower()
1145 1145 btypes = {'none': 'HG10UN', 'bzip2': 'HG10BZ', 'gzip': 'HG10GZ'}
1146 1146 bundletype = btypes.get(bundletype)
1147 1147 if bundletype not in changegroup.bundletypes:
1148 1148 raise util.Abort(_('unknown bundle type specified with --type'))
1149 1149
1150 1150 if opts.get('all'):
1151 1151 base = ['null']
1152 1152 else:
1153 1153 base = scmutil.revrange(repo, opts.get('base'))
1154 1154 # TODO: get desired bundlecaps from command line.
1155 1155 bundlecaps = None
1156 1156 if base:
1157 1157 if dest:
1158 1158 raise util.Abort(_("--base is incompatible with specifying "
1159 1159 "a destination"))
1160 1160 common = [repo.lookup(rev) for rev in base]
1161 1161 heads = revs and map(repo.lookup, revs) or revs
1162 cg = changegroup.getbundle(repo, 'bundle', heads=heads, common=common,
1163 bundlecaps=bundlecaps)
1162 cg = changegroup.getchangegroup(repo, 'bundle', heads=heads,
1163 common=common, bundlecaps=bundlecaps)
1164 1164 outgoing = None
1165 1165 else:
1166 1166 dest = ui.expandpath(dest or 'default-push', dest or 'default')
1167 1167 dest, branches = hg.parseurl(dest, opts.get('branch'))
1168 1168 other = hg.peer(repo, opts, dest)
1169 1169 revs, checkout = hg.addbranchrevs(repo, repo, branches, revs)
1170 1170 heads = revs and map(repo.lookup, revs) or revs
1171 1171 outgoing = discovery.findcommonoutgoing(repo, other,
1172 1172 onlyheads=heads,
1173 1173 force=opts.get('force'),
1174 1174 portable=True)
1175 cg = changegroup.getlocalbundle(repo, 'bundle', outgoing, bundlecaps)
1175 cg = changegroup.getlocalchangegroup(repo, 'bundle', outgoing,
1176 bundlecaps)
1176 1177 if not cg:
1177 1178 scmutil.nochangesfound(ui, repo, outgoing and outgoing.excluded)
1178 1179 return 1
1179 1180
1180 1181 changegroup.writebundle(cg, fname, bundletype)
1181 1182
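The `--type` handling above lowercases the user-facing name, maps it through `btypes`, and rejects anything not in `changegroup.bundletypes`. A self-contained sketch of that validation (the function name and the stand-in set of known types are assumptions, not Mercurial API):

```python
KNOWN_BUNDLETYPES = {'HG10UN', 'HG10BZ', 'HG10GZ'}  # stand-in set

def resolvebundletype(opt):
    # map the user-facing --type value to the internal header name
    btypes = {'none': 'HG10UN', 'bzip2': 'HG10BZ', 'gzip': 'HG10GZ'}
    bundletype = btypes.get(opt.lower())
    if bundletype not in KNOWN_BUNDLETYPES:
        raise ValueError('unknown bundle type specified with --type')
    return bundletype
```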
1182 1183 @command('cat',
1183 1184 [('o', 'output', '',
1184 1185 _('print output to file with formatted name'), _('FORMAT')),
1185 1186 ('r', 'rev', '', _('print the given revision'), _('REV')),
1186 1187 ('', 'decode', None, _('apply any matching decode filter')),
1187 1188 ] + walkopts,
1188 1189 _('[OPTION]... FILE...'),
1189 1190 inferrepo=True)
1190 1191 def cat(ui, repo, file1, *pats, **opts):
1191 1192 """output the current or given revision of files
1192 1193
1193 1194 Print the specified files as they were at the given revision. If
1194 1195 no revision is given, the parent of the working directory is used.
1195 1196
1196 1197 Output may be to a file, in which case the name of the file is
1197 1198 given using a format string. The formatting rules are as follows:
1198 1199
1199 1200 :``%%``: literal "%" character
1200 1201 :``%s``: basename of file being printed
1201 1202 :``%d``: dirname of file being printed, or '.' if in repository root
1202 1203 :``%p``: root-relative path name of file being printed
1203 1204 :``%H``: changeset hash (40 hexadecimal digits)
1204 1205 :``%R``: changeset revision number
1205 1206 :``%h``: short-form changeset hash (12 hexadecimal digits)
1206 1207 :``%r``: zero-padded changeset revision number
1207 1208 :``%b``: basename of the exporting repository
1208 1209
1209 1210 Returns 0 on success.
1210 1211 """
1211 1212 ctx = scmutil.revsingle(repo, opts.get('rev'))
1212 1213 m = scmutil.match(ctx, (file1,) + pats, opts)
1213 1214
1214 1215 return cmdutil.cat(ui, repo, ctx, m, '', **opts)
1215 1216
1216 1217 @command('^clone',
1217 1218 [('U', 'noupdate', None,
1218 1219 _('the clone will include an empty working copy (only a repository)')),
1219 1220 ('u', 'updaterev', '', _('revision, tag or branch to check out'), _('REV')),
1220 1221 ('r', 'rev', [], _('include the specified changeset'), _('REV')),
1221 1222 ('b', 'branch', [], _('clone only the specified branch'), _('BRANCH')),
1222 1223 ('', 'pull', None, _('use pull protocol to copy metadata')),
1223 1224 ('', 'uncompressed', None, _('use uncompressed transfer (fast over LAN)')),
1224 1225 ] + remoteopts,
1225 1226 _('[OPTION]... SOURCE [DEST]'),
1226 1227 norepo=True)
1227 1228 def clone(ui, source, dest=None, **opts):
1228 1229 """make a copy of an existing repository
1229 1230
1230 1231 Create a copy of an existing repository in a new directory.
1231 1232
1232 1233 If no destination directory name is specified, it defaults to the
1233 1234 basename of the source.
1234 1235
1235 1236 The location of the source is added to the new repository's
1236 1237 ``.hg/hgrc`` file, as the default to be used for future pulls.
1237 1238
1238 1239 Only local paths and ``ssh://`` URLs are supported as
1239 1240 destinations. For ``ssh://`` destinations, no working directory or
1240 1241 ``.hg/hgrc`` will be created on the remote side.
1241 1242
1242 1243 To pull only a subset of changesets, specify one or more revisions
1243 1244 identifiers with -r/--rev or branches with -b/--branch. The
1244 1245 resulting clone will contain only the specified changesets and
1245 1246 their ancestors. These options (or 'clone src#rev dest') imply
1246 1247 --pull, even for local source repositories. Note that specifying a
1247 1248 tag will include the tagged changeset but not the changeset
1248 1249 containing the tag.
1249 1250
1250 1251 If the source repository has a bookmark called '@' set, that
1251 1252 revision will be checked out in the new repository by default.
1252 1253
1253 1254 To check out a particular version, use -u/--update, or
1254 1255 -U/--noupdate to create a clone with no working directory.
1255 1256
1256 1257 .. container:: verbose
1257 1258
1258 1259 For efficiency, hardlinks are used for cloning whenever the
1259 1260 source and destination are on the same filesystem (note this
1260 1261 applies only to the repository data, not to the working
1261 1262 directory). Some filesystems, such as AFS, implement hardlinking
1262 1263 incorrectly, but do not report errors. In these cases, use the
1263 1264 --pull option to avoid hardlinking.
1264 1265
1265 1266 In some cases, you can clone repositories and the working
1266 1267 directory using full hardlinks with ::
1267 1268
1268 1269 $ cp -al REPO REPOCLONE
1269 1270
1270 1271 This is the fastest way to clone, but it is not always safe. The
1271 1272 operation is not atomic (making sure REPO is not modified during
1272 1273 the operation is up to you) and you have to make sure your
1273 1274 editor breaks hardlinks (Emacs and most Linux Kernel tools do
1274 1275 so). Also, this is not compatible with certain extensions that
1275 1276 place their metadata under the .hg directory, such as mq.
1276 1277
1277 1278 Mercurial will update the working directory to the first applicable
1278 1279 revision from this list:
1279 1280
1280 1281 a) null if -U or the source repository has no changesets
1281 1282 b) if -u . and the source repository is local, the first parent of
1282 1283 the source repository's working directory
1283 1284 c) the changeset specified with -u (if a branch name, this means the
1284 1285 latest head of that branch)
1285 1286 d) the changeset specified with -r
1286 1287 e) the tipmost head specified with -b
1287 1288 f) the tipmost head specified with the url#branch source syntax
1288 1289 g) the revision marked with the '@' bookmark, if present
1289 1290 h) the tipmost head of the default branch
1290 1291 i) tip
1291 1292
1292 1293 Examples:
1293 1294
1294 1295 - clone a remote repository to a new directory named hg/::
1295 1296
1296 1297 hg clone http://selenic.com/hg
1297 1298
1298 1299 - create a lightweight local clone::
1299 1300
1300 1301 hg clone project/ project-feature/
1301 1302
1302 1303 - clone from an absolute path on an ssh server (note double-slash)::
1303 1304
1304 1305 hg clone ssh://user@server//home/projects/alpha/
1305 1306
1306 1307 - do a high-speed clone over a LAN while checking out a
1307 1308 specified version::
1308 1309
1309 1310 hg clone --uncompressed http://server/repo -u 1.5
1310 1311
1311 1312 - create a repository without changesets after a particular revision::
1312 1313
1313 1314 hg clone -r 04e544 experimental/ good/
1314 1315
1315 1316 - clone (and track) a particular named branch::
1316 1317
1317 1318 hg clone http://selenic.com/hg#stable
1318 1319
1319 1320 See :hg:`help urls` for details on specifying URLs.
1320 1321
1321 1322 Returns 0 on success.
1322 1323 """
1323 1324 if opts.get('noupdate') and opts.get('updaterev'):
1324 1325 raise util.Abort(_("cannot specify both --noupdate and --updaterev"))
1325 1326
1326 1327 r = hg.clone(ui, opts, source, dest,
1327 1328 pull=opts.get('pull'),
1328 1329 stream=opts.get('uncompressed'),
1329 1330 rev=opts.get('rev'),
1330 1331 update=opts.get('updaterev') or not opts.get('noupdate'),
1331 1332 branch=opts.get('branch'))
1332 1333
1333 1334 return r is None
1334 1335
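The clone docstring's a)-i) list is a strict priority order for the checkout revision. A sketch of that selection, where each argument is the candidate produced by the corresponding rule or `None` when the rule does not apply (the function and its signature are illustrative, not Mercurial's):

```python
def pickupdaterev(noupdate, updaterev, rev, branchhead, atbookmark,
                  defaulthead, tip):
    # -U (or an empty source) wins outright with a null checkout;
    # otherwise take the first applicable candidate, falling back to tip
    if noupdate:
        return 'null'
    for cand in (updaterev, rev, branchhead, atbookmark, defaulthead):
        if cand is not None:
            return cand
    return tip
```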
1335 1336 @command('^commit|ci',
1336 1337 [('A', 'addremove', None,
1337 1338 _('mark new/missing files as added/removed before committing')),
1338 1339 ('', 'close-branch', None,
1339 1340 _('mark a branch as closed, hiding it from the branch list')),
1340 1341 ('', 'amend', None, _('amend the parent of the working dir')),
1341 1342 ('s', 'secret', None, _('use the secret phase for committing')),
1342 1343 ('e', 'edit', None, _('invoke editor on commit messages')),
1343 1344 ] + walkopts + commitopts + commitopts2 + subrepoopts,
1344 1345 _('[OPTION]... [FILE]...'),
1345 1346 inferrepo=True)
1346 1347 def commit(ui, repo, *pats, **opts):
1347 1348 """commit the specified files or all outstanding changes
1348 1349
1349 1350 Commit changes to the given files into the repository. Unlike a
1350 1351 centralized SCM, this operation is a local operation. See
1351 1352 :hg:`push` for a way to actively distribute your changes.
1352 1353
1353 1354 If a list of files is omitted, all changes reported by :hg:`status`
1354 1355 will be committed.
1355 1356
1356 1357 If you are committing the result of a merge, do not provide any
1357 1358 filenames or -I/-X filters.
1358 1359
1359 1360 If no commit message is specified, Mercurial starts your
1360 1361 configured editor where you can enter a message. In case your
1361 1362 commit fails, you will find a backup of your message in
1362 1363 ``.hg/last-message.txt``.
1363 1364
1364 1365 The --amend flag can be used to amend the parent of the
1365 1366 working directory with a new commit that contains the changes
1366 1367 in the parent in addition to those currently reported by :hg:`status`,
1367 1368 if there are any. The old commit is stored in a backup bundle in
1368 1369 ``.hg/strip-backup`` (see :hg:`help bundle` and :hg:`help unbundle`
1369 1370 on how to restore it).
1370 1371
1371 1372 Message, user and date are taken from the amended commit unless
1372 1373 specified. When a message isn't specified on the command line,
1373 1374 the editor will open with the message of the amended commit.
1374 1375
1375 1376 It is not possible to amend public changesets (see :hg:`help phases`)
1376 1377 or changesets that have children.
1377 1378
1378 1379 See :hg:`help dates` for a list of formats valid for -d/--date.
1379 1380
1380 1381 Returns 0 on success, 1 if nothing changed.
1381 1382 """
1382 1383 if opts.get('subrepos'):
1383 1384 if opts.get('amend'):
1384 1385 raise util.Abort(_('cannot amend with --subrepos'))
1385 1386 # Let --subrepos on the command line override config setting.
1386 1387 ui.setconfig('ui', 'commitsubrepos', True, 'commit')
1387 1388
1388 1389 cmdutil.checkunfinished(repo, commit=True)
1389 1390
1390 1391 branch = repo[None].branch()
1391 1392 bheads = repo.branchheads(branch)
1392 1393
1393 1394 extra = {}
1394 1395 if opts.get('close_branch'):
1395 1396 extra['close'] = 1
1396 1397
1397 1398 if not bheads:
1398 1399 raise util.Abort(_('can only close branch heads'))
1399 1400 elif opts.get('amend'):
1400 1401 if repo.parents()[0].p1().branch() != branch and \
1401 1402 repo.parents()[0].p2().branch() != branch:
1402 1403 raise util.Abort(_('can only close branch heads'))
1403 1404
1404 1405 if opts.get('amend'):
1405 1406 if ui.configbool('ui', 'commitsubrepos'):
1406 1407 raise util.Abort(_('cannot amend with ui.commitsubrepos enabled'))
1407 1408
1408 1409 old = repo['.']
1409 1410 if old.phase() == phases.public:
1410 1411 raise util.Abort(_('cannot amend public changesets'))
1411 1412 if len(repo[None].parents()) > 1:
1412 1413 raise util.Abort(_('cannot amend while merging'))
1413 1414 if (not obsolete._enabled) and old.children():
1414 1415 raise util.Abort(_('cannot amend changeset with children'))
1415 1416
1416 1417 # commitfunc is used only for temporary amend commit by cmdutil.amend
1417 1418 def commitfunc(ui, repo, message, match, opts):
1418 1419 return repo.commit(message,
1419 1420 opts.get('user') or old.user(),
1420 1421 opts.get('date') or old.date(),
1421 1422 match,
1422 1423 extra=extra)
1423 1424
1424 1425 current = repo._bookmarkcurrent
1425 1426 marks = old.bookmarks()
1426 1427 node = cmdutil.amend(ui, repo, commitfunc, old, extra, pats, opts)
1427 1428 if node == old.node():
1428 1429 ui.status(_("nothing changed\n"))
1429 1430 return 1
1430 1431 elif marks:
1431 1432 ui.debug('moving bookmarks %r from %s to %s\n' %
1432 1433 (marks, old.hex(), hex(node)))
1433 1434 newmarks = repo._bookmarks
1434 1435 for bm in marks:
1435 1436 newmarks[bm] = node
1436 1437 if bm == current:
1437 1438 bookmarks.setcurrent(repo, bm)
1438 1439 newmarks.write()
1439 1440 else:
1440 1441 def commitfunc(ui, repo, message, match, opts):
1441 1442 backup = ui.backupconfig('phases', 'new-commit')
1442 1443 baseui = repo.baseui
1443 1444 basebackup = baseui.backupconfig('phases', 'new-commit')
1444 1445 try:
1445 1446 if opts.get('secret'):
1446 1447 ui.setconfig('phases', 'new-commit', 'secret', 'commit')
1447 1448 # Propagate to subrepos
1448 1449 baseui.setconfig('phases', 'new-commit', 'secret', 'commit')
1449 1450
1450 1451 editform = cmdutil.mergeeditform(repo[None], 'commit.normal')
1451 1452 editor = cmdutil.getcommiteditor(editform=editform, **opts)
1452 1453 return repo.commit(message, opts.get('user'), opts.get('date'),
1453 1454 match,
1454 1455 editor=editor,
1455 1456 extra=extra)
1456 1457 finally:
1457 1458 ui.restoreconfig(backup)
1458 1459 repo.baseui.restoreconfig(basebackup)
1459 1460
1460 1461
1461 1462 node = cmdutil.commit(ui, repo, commitfunc, pats, opts)
1462 1463
1463 1464 if not node:
1464 1465 stat = repo.status(match=scmutil.match(repo[None], pats, opts))
1465 1466 if stat[3]:
1466 1467 ui.status(_("nothing changed (%d missing files, see "
1467 1468 "'hg status')\n") % len(stat[3]))
1468 1469 else:
1469 1470 ui.status(_("nothing changed\n"))
1470 1471 return 1
1471 1472
1472 1473 cmdutil.commitstatus(repo, node, branch, bheads, opts)
1473 1474
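In the `--amend` branch above, bookmarks that pointed at the replaced commit are repointed at the amended one. A minimal sketch of that bookkeeping over a plain dict (the function name is hypothetical; the real code mutates `repo._bookmarks` and calls `write()`):

```python
def move_amend_bookmarks(marks, oldnode, newnode):
    # any bookmark on the replaced commit follows the amended commit;
    # bookmarks elsewhere are left untouched
    return {bm: (newnode if node == oldnode else node)
            for bm, node in marks.items()}
```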
1474 1475 @command('config|showconfig|debugconfig',
1475 1476 [('u', 'untrusted', None, _('show untrusted configuration options')),
1476 1477 ('e', 'edit', None, _('edit user config')),
1477 1478 ('l', 'local', None, _('edit repository config')),
1478 1479 ('g', 'global', None, _('edit global config'))],
1479 1480 _('[-u] [NAME]...'),
1480 1481 optionalrepo=True)
1481 1482 def config(ui, repo, *values, **opts):
1482 1483 """show combined config settings from all hgrc files
1483 1484
1484 1485 With no arguments, print names and values of all config items.
1485 1486
1486 1487 With one argument of the form section.name, print just the value
1487 1488 of that config item.
1488 1489
1489 1490 With multiple arguments, print names and values of all config
1490 1491 items with matching section names.
1491 1492
1492 1493 With --edit, start an editor on the user-level config file. With
1493 1494 --global, edit the system-wide config file. With --local, edit the
1494 1495 repository-level config file.
1495 1496
1496 1497 With --debug, the source (filename and line number) is printed
1497 1498 for each config item.
1498 1499
1499 1500 See :hg:`help config` for more information about config files.
1500 1501
1501 1502 Returns 0 on success, 1 if NAME does not exist.
1502 1503
1503 1504 """
1504 1505
1505 1506 if opts.get('edit') or opts.get('local') or opts.get('global'):
1506 1507 if opts.get('local') and opts.get('global'):
1507 1508 raise util.Abort(_("can't use --local and --global together"))
1508 1509
1509 1510 if opts.get('local'):
1510 1511 if not repo:
1511 1512 raise util.Abort(_("can't use --local outside a repository"))
1512 1513 paths = [repo.join('hgrc')]
1513 1514 elif opts.get('global'):
1514 1515 paths = scmutil.systemrcpath()
1515 1516 else:
1516 1517 paths = scmutil.userrcpath()
1517 1518
1518 1519 for f in paths:
1519 1520 if os.path.exists(f):
1520 1521 break
1521 1522 else:
1522 1523 from config import samplehgrcs
1523 1524
1524 1525 if opts.get('global'):
1525 1526 samplehgrc = samplehgrcs['global']
1526 1527 elif opts.get('local'):
1527 1528 samplehgrc = samplehgrcs['local']
1528 1529 else:
1529 1530 samplehgrc = samplehgrcs['user']
1530 1531
1531 1532 f = paths[0]
1532 1533 fp = open(f, "w")
1533 1534 fp.write(samplehgrc)
1534 1535 fp.close()
1535 1536
1536 1537 editor = ui.geteditor()
1537 1538 util.system("%s \"%s\"" % (editor, f),
1538 1539 onerr=util.Abort, errprefix=_("edit failed"),
1539 1540 out=ui.fout)
1540 1541 return
1541 1542
1542 1543 for f in scmutil.rcpath():
1543 1544 ui.debug('read config from: %s\n' % f)
1544 1545 untrusted = bool(opts.get('untrusted'))
1545 1546 if values:
1546 1547 sections = [v for v in values if '.' not in v]
1547 1548 items = [v for v in values if '.' in v]
1548 1549 if len(items) > 1 or items and sections:
1549 1550 raise util.Abort(_('only one config item permitted'))
1550 1551 matched = False
1551 1552 for section, name, value in ui.walkconfig(untrusted=untrusted):
1552 1553 value = str(value).replace('\n', '\\n')
1553 1554 sectname = section + '.' + name
1554 1555 if values:
1555 1556 for v in values:
1556 1557 if v == section:
1557 1558 ui.debug('%s: ' %
1558 1559 ui.configsource(section, name, untrusted))
1559 1560 ui.write('%s=%s\n' % (sectname, value))
1560 1561 matched = True
1561 1562 elif v == sectname:
1562 1563 ui.debug('%s: ' %
1563 1564 ui.configsource(section, name, untrusted))
1564 1565 ui.write(value, '\n')
1565 1566 matched = True
1566 1567 else:
1567 1568 ui.debug('%s: ' %
1568 1569 ui.configsource(section, name, untrusted))
1569 1570 ui.write('%s=%s\n' % (sectname, value))
1570 1571 matched = True
1571 1572 if matched:
1572 1573 return 0
1573 1574 return 1
1574 1575
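The `config` command above splits its NAME arguments on the presence of a dot: `section.name` selects one item, a bare name selects a whole section, and mixing an item with anything else is rejected. A self-contained sketch of that classification (the function name is illustrative):

```python
def classify(values):
    # a NAME containing '.' is a single config item (section.name);
    # one without selects a whole section, mirroring the checks above
    sections = [v for v in values if '.' not in v]
    items = [v for v in values if '.' in v]
    if len(items) > 1 or (items and sections):
        raise ValueError('only one config item permitted')
    return sections, items
```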
1575 1576 @command('copy|cp',
1576 1577 [('A', 'after', None, _('record a copy that has already occurred')),
1577 1578 ('f', 'force', None, _('forcibly copy over an existing managed file')),
1578 1579 ] + walkopts + dryrunopts,
1579 1580 _('[OPTION]... [SOURCE]... DEST'))
1580 1581 def copy(ui, repo, *pats, **opts):
1581 1582 """mark files as copied for the next commit
1582 1583
1583 1584 Mark dest as having copies of source files. If dest is a
1584 1585 directory, copies are put in that directory. If dest is a file,
1585 1586 the source must be a single file.
1586 1587
1587 1588 By default, this command copies the contents of files as they
1588 1589 exist in the working directory. If invoked with -A/--after, the
1589 1590 operation is recorded, but no copying is performed.
1590 1591
1591 1592 This command takes effect with the next commit. To undo a copy
1592 1593 before that, see :hg:`revert`.
1593 1594
1594 1595 Returns 0 on success, 1 if errors are encountered.
1595 1596 """
1596 1597 wlock = repo.wlock(False)
1597 1598 try:
1598 1599 return cmdutil.copy(ui, repo, pats, opts)
1599 1600 finally:
1600 1601 wlock.release()
1601 1602
1602 1603 @command('debugancestor', [], _('[INDEX] REV1 REV2'), optionalrepo=True)
1603 1604 def debugancestor(ui, repo, *args):
1604 1605 """find the ancestor revision of two revisions in a given index"""
1605 1606 if len(args) == 3:
1606 1607 index, rev1, rev2 = args
1607 1608 r = revlog.revlog(scmutil.opener(os.getcwd(), audit=False), index)
1608 1609 lookup = r.lookup
1609 1610 elif len(args) == 2:
1610 1611 if not repo:
1611 1612 raise util.Abort(_("there is no Mercurial repository here "
1612 1613 "(.hg not found)"))
1613 1614 rev1, rev2 = args
1614 1615 r = repo.changelog
1615 1616 lookup = repo.lookup
1616 1617 else:
1617 1618 raise util.Abort(_('either two or three arguments required'))
1618 1619 a = r.ancestor(lookup(rev1), lookup(rev2))
1619 1620 ui.write("%d:%s\n" % (r.rev(a), hex(a)))
1620 1621
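`debugancestor` above delegates to `revlog.ancestor`. A simplified stand-in for the idea over a parent-map DAG, picking the highest-numbered common ancestor (the real revlog algorithm resolves ties among multiple greatest common ancestors differently):

```python
def ancestors(parents, rev):
    # all ancestors of rev, inclusive, in a {rev: [parent revs]} DAG
    seen, stack = set(), [rev]
    while stack:
        r = stack.pop()
        if r not in seen and r != -1:  # -1 is the null revision
            seen.add(r)
            stack.extend(parents.get(r, ()))
    return seen

def commonancestor(parents, a, b):
    # highest-numbered common ancestor, or -1 (null) if unrelated
    common = ancestors(parents, a) & ancestors(parents, b)
    return max(common) if common else -1
```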
@command('debugbuilddag',
    [('m', 'mergeable-file', None, _('add single file mergeable changes')),
    ('o', 'overwritten-file', None, _('add single file all revs overwrite')),
    ('n', 'new-file', None, _('add new file at each rev'))],
    _('[OPTION]... [TEXT]'))
def debugbuilddag(ui, repo, text=None,
                  mergeable_file=False,
                  overwritten_file=False,
                  new_file=False):
    """builds a repo with a given DAG from scratch in the current empty repo

    The description of the DAG is read from stdin if not given on the
    command line.

    Elements:

     - "+n" is a linear run of n nodes based on the current default parent
     - "." is a single node based on the current default parent
     - "$" resets the default parent to null (implied at the start);
           otherwise the default parent is always the last node created
     - "<p" sets the default parent to the backref p
     - "*p" is a fork at parent p, which is a backref
     - "*p1/p2" is a merge of parents p1 and p2, which are backrefs
     - "/p2" is a merge of the preceding node and p2
     - ":tag" defines a local tag for the preceding node
     - "@branch" sets the named branch for subsequent nodes
     - "#...\\n" is a comment up to the end of the line

    Whitespace between the above elements is ignored.

    A backref is either

     - a number n, which references the node curr-n, where curr is the current
       node, or
     - the name of a local tag you placed earlier using ":tag", or
     - empty to denote the default parent.

    All string valued-elements are either strictly alphanumeric, or must
    be enclosed in double quotes ("..."), with "\\" as escape character.
    """

    if text is None:
        ui.status(_("reading DAG from stdin\n"))
        text = ui.fin.read()

    cl = repo.changelog
    if len(cl) > 0:
        raise util.Abort(_('repository is not empty'))

    # determine number of revs in DAG
    total = 0
    for type, data in dagparser.parsedag(text):
        if type == 'n':
            total += 1

    if mergeable_file:
        linesperrev = 2
        # make a file with k lines per rev
        initialmergedlines = [str(i) for i in xrange(0, total * linesperrev)]
        initialmergedlines.append("")

    tags = []

    lock = tr = None
    try:
        lock = repo.lock()
        tr = repo.transaction("builddag")

        at = -1
        atbranch = 'default'
        nodeids = []
        id = 0
        ui.progress(_('building'), id, unit=_('revisions'), total=total)
        for type, data in dagparser.parsedag(text):
            if type == 'n':
                ui.note(('node %s\n' % str(data)))
                id, ps = data

                files = []
                fctxs = {}

                p2 = None
                if mergeable_file:
                    fn = "mf"
                    p1 = repo[ps[0]]
                    if len(ps) > 1:
                        p2 = repo[ps[1]]
                        pa = p1.ancestor(p2)
                        base, local, other = [x[fn].data() for x in (pa, p1,
                                                                     p2)]
                        m3 = simplemerge.Merge3Text(base, local, other)
                        ml = [l.strip() for l in m3.merge_lines()]
                        ml.append("")
                    elif at > 0:
                        ml = p1[fn].data().split("\n")
                    else:
                        ml = initialmergedlines
                    ml[id * linesperrev] += " r%i" % id
                    mergedtext = "\n".join(ml)
                    files.append(fn)
                    fctxs[fn] = context.memfilectx(repo, fn, mergedtext)

                if overwritten_file:
                    fn = "of"
                    files.append(fn)
                    fctxs[fn] = context.memfilectx(repo, fn, "r%i\n" % id)

                if new_file:
                    fn = "nf%i" % id
                    files.append(fn)
                    fctxs[fn] = context.memfilectx(repo, fn, "r%i\n" % id)
                    if len(ps) > 1:
                        if not p2:
                            p2 = repo[ps[1]]
                        for fn in p2:
                            if fn.startswith("nf"):
                                files.append(fn)
                                fctxs[fn] = p2[fn]

                def fctxfn(repo, cx, path):
                    return fctxs.get(path)

                if len(ps) == 0 or ps[0] < 0:
                    pars = [None, None]
                elif len(ps) == 1:
                    pars = [nodeids[ps[0]], None]
                else:
                    pars = [nodeids[p] for p in ps]
                cx = context.memctx(repo, pars, "r%i" % id, files, fctxfn,
                                    date=(id, 0),
                                    user="debugbuilddag",
                                    extra={'branch': atbranch})
                nodeid = repo.commitctx(cx)
                nodeids.append(nodeid)
                at = id
            elif type == 'l':
                id, name = data
                ui.note(('tag %s\n' % name))
                tags.append("%s %s\n" % (hex(repo.changelog.node(id)), name))
            elif type == 'a':
                ui.note(('branch %s\n' % data))
                atbranch = data
            ui.progress(_('building'), id, unit=_('revisions'), total=total)
        tr.close()

        if tags:
            repo.opener.write("localtags", "".join(tags))
    finally:
        ui.progress(_('building'), None)
        release(tr, lock)

@command('debugbundle',
        [('a', 'all', None, _('show all details'))],
        _('FILE'),
        norepo=True)
def debugbundle(ui, bundlepath, all=None, **opts):
    """lists the contents of a bundle"""
    f = hg.openpath(ui, bundlepath)
    try:
        gen = exchange.readbundle(ui, f, bundlepath)
        if all:
            ui.write(("format: id, p1, p2, cset, delta base, len(delta)\n"))

            def showchunks(named):
                ui.write("\n%s\n" % named)
                chain = None
                while True:
                    chunkdata = gen.deltachunk(chain)
                    if not chunkdata:
                        break
                    node = chunkdata['node']
                    p1 = chunkdata['p1']
                    p2 = chunkdata['p2']
                    cs = chunkdata['cs']
                    deltabase = chunkdata['deltabase']
                    delta = chunkdata['delta']
                    ui.write("%s %s %s %s %s %s\n" %
                             (hex(node), hex(p1), hex(p2),
                              hex(cs), hex(deltabase), len(delta)))
                    chain = node

            chunkdata = gen.changelogheader()
            showchunks("changelog")
            chunkdata = gen.manifestheader()
            showchunks("manifest")
            while True:
                chunkdata = gen.filelogheader()
                if not chunkdata:
                    break
                fname = chunkdata['filename']
                showchunks(fname)
        else:
            chunkdata = gen.changelogheader()
            chain = None
            while True:
                chunkdata = gen.deltachunk(chain)
                if not chunkdata:
                    break
                node = chunkdata['node']
                ui.write("%s\n" % hex(node))
                chain = node
    finally:
        f.close()

@command('debugcheckstate', [], '')
def debugcheckstate(ui, repo):
    """validate the correctness of the current dirstate"""
    parent1, parent2 = repo.dirstate.parents()
    m1 = repo[parent1].manifest()
    m2 = repo[parent2].manifest()
    errors = 0
    for f in repo.dirstate:
        state = repo.dirstate[f]
        if state in "nr" and f not in m1:
            ui.warn(_("%s in state %s, but not in manifest1\n") % (f, state))
            errors += 1
        if state in "a" and f in m1:
            ui.warn(_("%s in state %s, but also in manifest1\n") % (f, state))
            errors += 1
        if state in "m" and f not in m1 and f not in m2:
            ui.warn(_("%s in state %s, but not in either manifest\n") %
                    (f, state))
            errors += 1
    for f in m1:
        state = repo.dirstate[f]
        if state not in "nrm":
            ui.warn(_("%s in manifest1, but listed as state %s") % (f, state))
            errors += 1
    if errors:
        error = _(".hg/dirstate inconsistent with current parent's manifest")
        raise util.Abort(error)

@command('debugcommands', [], _('[COMMAND]'), norepo=True)
def debugcommands(ui, cmd='', *args):
    """list all available commands and options"""
    for cmd, vals in sorted(table.iteritems()):
        cmd = cmd.split('|')[0].strip('^')
        opts = ', '.join([i[1] for i in vals[1]])
        ui.write('%s: %s\n' % (cmd, opts))

@command('debugcomplete',
    [('o', 'options', None, _('show the command options'))],
    _('[-o] CMD'),
    norepo=True)
def debugcomplete(ui, cmd='', **opts):
    """returns the completion list associated with the given command"""

    if opts.get('options'):
        options = []
        otables = [globalopts]
        if cmd:
            aliases, entry = cmdutil.findcmd(cmd, table, False)
            otables.append(entry[1])
        for t in otables:
            for o in t:
                if "(DEPRECATED)" in o[3]:
                    continue
                if o[0]:
                    options.append('-%s' % o[0])
                options.append('--%s' % o[1])
        ui.write("%s\n" % "\n".join(options))
        return

    cmdlist = cmdutil.findpossible(cmd, table)
    if ui.verbose:
        cmdlist = [' '.join(c[0]) for c in cmdlist.values()]
    ui.write("%s\n" % "\n".join(sorted(cmdlist)))

@command('debugdag',
    [('t', 'tags', None, _('use tags as labels')),
    ('b', 'branches', None, _('annotate with branch names')),
    ('', 'dots', None, _('use dots for runs')),
    ('s', 'spaces', None, _('separate elements by spaces'))],
    _('[OPTION]... [FILE [REV]...]'),
    optionalrepo=True)
def debugdag(ui, repo, file_=None, *revs, **opts):
    """format the changelog or an index DAG as a concise textual description

    If you pass a revlog index, the revlog's DAG is emitted. If you list
    revision numbers, they get labeled in the output as rN.

    Otherwise, the changelog DAG of the current repo is emitted.
    """
    spaces = opts.get('spaces')
    dots = opts.get('dots')
    if file_:
        rlog = revlog.revlog(scmutil.opener(os.getcwd(), audit=False), file_)
        revs = set((int(r) for r in revs))
        def events():
            for r in rlog:
                yield 'n', (r, list(p for p in rlog.parentrevs(r)
                                    if p != -1))
                if r in revs:
                    yield 'l', (r, "r%i" % r)
    elif repo:
        cl = repo.changelog
        tags = opts.get('tags')
        branches = opts.get('branches')
        if tags:
            labels = {}
            for l, n in repo.tags().items():
                labels.setdefault(cl.rev(n), []).append(l)
        def events():
            b = "default"
            for r in cl:
                if branches:
                    newb = cl.read(cl.node(r))[5]['branch']
                    if newb != b:
                        yield 'a', newb
                        b = newb
                yield 'n', (r, list(p for p in cl.parentrevs(r)
                                    if p != -1))
                if tags:
                    ls = labels.get(r)
                    if ls:
                        for l in ls:
                            yield 'l', (r, l)
    else:
        raise util.Abort(_('need repo for changelog dag'))

    for line in dagparser.dagtextlines(events(),
                                       addspaces=spaces,
                                       wraplabels=True,
                                       wrapannotations=True,
                                       wrapnonlinear=dots,
                                       usedots=dots,
                                       maxlinewidth=70):
        ui.write(line)
        ui.write("\n")

@command('debugdata',
    [('c', 'changelog', False, _('open changelog')),
    ('m', 'manifest', False, _('open manifest'))],
    _('-c|-m|FILE REV'))
def debugdata(ui, repo, file_, rev=None, **opts):
    """dump the contents of a data file revision"""
    if opts.get('changelog') or opts.get('manifest'):
        file_, rev = None, file_
    elif rev is None:
        raise error.CommandError('debugdata', _('invalid arguments'))
    r = cmdutil.openrevlog(repo, 'debugdata', file_, opts)
    try:
        ui.write(r.revision(r.lookup(rev)))
    except KeyError:
        raise util.Abort(_('invalid revision identifier %s') % rev)

@command('debugdate',
    [('e', 'extended', None, _('try extended date formats'))],
    _('[-e] DATE [RANGE]'),
    norepo=True, optionalrepo=True)
def debugdate(ui, date, range=None, **opts):
    """parse and display a date"""
    if opts["extended"]:
        d = util.parsedate(date, util.extendeddateformats)
    else:
        d = util.parsedate(date)
    ui.write(("internal: %s %s\n") % d)
    ui.write(("standard: %s\n") % util.datestr(d))
    if range:
        m = util.matchdate(range)
        ui.write(("match: %s\n") % m(d[0]))

@command('debugdiscovery',
    [('', 'old', None, _('use old-style discovery')),
    ('', 'nonheads', None,
     _('use old-style discovery with non-heads included')),
    ] + remoteopts,
    _('[-l REV] [-r REV] [-b BRANCH]... [OTHER]'))
def debugdiscovery(ui, repo, remoteurl="default", **opts):
    """runs the changeset discovery protocol in isolation"""
    remoteurl, branches = hg.parseurl(ui.expandpath(remoteurl),
                                      opts.get('branch'))
    remote = hg.peer(repo, opts, remoteurl)
    ui.status(_('comparing with %s\n') % util.hidepassword(remoteurl))

    # make sure tests are repeatable
    random.seed(12323)

    def doit(localheads, remoteheads, remote=remote):
        if opts.get('old'):
            if localheads:
                raise util.Abort('cannot use localheads with old style '
                                 'discovery')
            if not util.safehasattr(remote, 'branches'):
                # enable in-client legacy support
                remote = localrepo.locallegacypeer(remote.local())
            common, _in, hds = treediscovery.findcommonincoming(repo, remote,
                                                                force=True)
            common = set(common)
            if not opts.get('nonheads'):
                ui.write(("unpruned common: %s\n") %
                         " ".join(sorted(short(n) for n in common)))
                dag = dagutil.revlogdag(repo.changelog)
                all = dag.ancestorset(dag.internalizeall(common))
                common = dag.externalizeall(dag.headsetofconnecteds(all))
        else:
            common, any, hds = setdiscovery.findcommonheads(ui, repo, remote)
        common = set(common)
        rheads = set(hds)
        lheads = set(repo.heads())
        ui.write(("common heads: %s\n") %
                 " ".join(sorted(short(n) for n in common)))
        if lheads <= common:
            ui.write(("local is subset\n"))
        elif rheads <= common:
            ui.write(("remote is subset\n"))

    serverlogs = opts.get('serverlog')
    if serverlogs:
        for filename in serverlogs:
            logfile = open(filename, 'r')
            try:
                line = logfile.readline()
                while line:
                    parts = line.strip().split(';')
                    op = parts[1]
                    if op == 'cg':
                        pass
                    elif op == 'cgss':
                        doit(parts[2].split(' '), parts[3].split(' '))
                    elif op == 'unb':
                        doit(parts[3].split(' '), parts[2].split(' '))
                    line = logfile.readline()
            finally:
                logfile.close()

    else:
        remoterevs, _checkout = hg.addbranchrevs(repo, remote, branches,
                                                 opts.get('remote_head'))
        localrevs = opts.get('local_head')
        doit(localrevs, remoterevs)

@command('debugfileset',
    [('r', 'rev', '', _('apply the filespec on this revision'), _('REV'))],
    _('[-r REV] FILESPEC'))
def debugfileset(ui, repo, expr, **opts):
    '''parse and apply a fileset specification'''
    ctx = scmutil.revsingle(repo, opts.get('rev'), None)
    if ui.verbose:
        tree = fileset.parse(expr)[0]
        ui.note(tree, "\n")

    for f in ctx.getfileset(expr):
        ui.write("%s\n" % f)

@command('debugfsinfo', [], _('[PATH]'), norepo=True)
def debugfsinfo(ui, path="."):
    """show information detected about current filesystem"""
    util.writefile('.debugfsinfo', '')
    ui.write(('exec: %s\n') % (util.checkexec(path) and 'yes' or 'no'))
    ui.write(('symlink: %s\n') % (util.checklink(path) and 'yes' or 'no'))
    ui.write(('hardlink: %s\n') % (util.checknlink(path) and 'yes' or 'no'))
    ui.write(('case-sensitive: %s\n') % (util.checkcase('.debugfsinfo')
                                         and 'yes' or 'no'))
    os.unlink('.debugfsinfo')

@command('debuggetbundle',
    [('H', 'head', [], _('id of head node'), _('ID')),
    ('C', 'common', [], _('id of common node'), _('ID')),
    ('t', 'type', 'bzip2', _('bundle compression type to use'), _('TYPE'))],
    _('REPO FILE [-H|-C ID]...'),
    norepo=True)
def debuggetbundle(ui, repopath, bundlepath, head=None, common=None, **opts):
    """retrieves a bundle from a repo

    Every ID must be a full-length hex node id string. Saves the bundle to the
    given file.
    """
    repo = hg.peer(ui, opts, repopath)
    if not repo.capable('getbundle'):
        raise util.Abort("getbundle() not supported by target repository")
    args = {}
    if common:
        args['common'] = [bin(s) for s in common]
    if head:
        args['heads'] = [bin(s) for s in head]
    # TODO: get desired bundlecaps from command line.
    args['bundlecaps'] = None
    bundle = repo.getbundle('debug', **args)

    bundletype = opts.get('type', 'bzip2').lower()
    btypes = {'none': 'HG10UN', 'bzip2': 'HG10BZ', 'gzip': 'HG10GZ'}
    bundletype = btypes.get(bundletype)
    if bundletype not in changegroup.bundletypes:
        raise util.Abort(_('unknown bundle type specified with --type'))
    changegroup.writebundle(bundle, bundlepath, bundletype)

@command('debugignore', [], '')
def debugignore(ui, repo, *values, **opts):
    """display the combined ignore pattern"""
    ignore = repo.dirstate._ignore
    includepat = getattr(ignore, 'includepat', None)
    if includepat is not None:
        ui.write("%s\n" % includepat)
    else:
        raise util.Abort(_("no ignore patterns found"))

@command('debugindex',
    [('c', 'changelog', False, _('open changelog')),
    ('m', 'manifest', False, _('open manifest')),
    ('f', 'format', 0, _('revlog format'), _('FORMAT'))],
    _('[-f FORMAT] -c|-m|FILE'),
    optionalrepo=True)
def debugindex(ui, repo, file_=None, **opts):
    """dump the contents of an index file"""
    r = cmdutil.openrevlog(repo, 'debugindex', file_, opts)
    format = opts.get('format', 0)
    if format not in (0, 1):
        raise util.Abort(_("unknown format %d") % format)

    generaldelta = r.version & revlog.REVLOGGENERALDELTA
    if generaldelta:
        basehdr = ' delta'
    else:
        basehdr = '  base'

    if format == 0:
        ui.write("   rev    offset  length " + basehdr + " linkrev"
                 " nodeid       p1           p2\n")
    elif format == 1:
        ui.write("   rev flag   offset   length"
                 "     size " + basehdr + "   link     p1     p2"
                 "       nodeid\n")

    for i in r:
        node = r.node(i)
        if generaldelta:
            base = r.deltaparent(i)
        else:
            base = r.chainbase(i)
        if format == 0:
            try:
                pp = r.parents(node)
            except Exception:
                pp = [nullid, nullid]
            ui.write("% 6d % 9d % 7d % 6d % 7d %s %s %s\n" % (
                    i, r.start(i), r.length(i), base, r.linkrev(i),
                    short(node), short(pp[0]), short(pp[1])))
        elif format == 1:
            pr = r.parentrevs(i)
            ui.write("% 6d %04x % 8d % 8d % 8d % 6d % 6d % 6d % 6d %s\n" % (
                    i, r.flags(i), r.start(i), r.length(i), r.rawsize(i),
                    base, r.linkrev(i), pr[0], pr[1], short(node)))

@command('debugindexdot', [], _('FILE'), optionalrepo=True)
def debugindexdot(ui, repo, file_):
    """dump an index DAG as a graphviz dot file"""
    r = None
    if repo:
        filelog = repo.file(file_)
        if len(filelog):
            r = filelog
    if not r:
        r = revlog.revlog(scmutil.opener(os.getcwd(), audit=False), file_)
    ui.write(("digraph G {\n"))
    for i in r:
        node = r.node(i)
        pp = r.parents(node)
        ui.write("\t%d -> %d\n" % (r.rev(pp[0]), i))
        if pp[1] != nullid:
            ui.write("\t%d -> %d\n" % (r.rev(pp[1]), i))
    ui.write("}\n")

@command('debuginstall', [], '', norepo=True)
def debuginstall(ui):
    '''test Mercurial installation

    Returns 0 on success.
    '''

    def writetemp(contents):
        (fd, name) = tempfile.mkstemp(prefix="hg-debuginstall-")
        f = os.fdopen(fd, "wb")
        f.write(contents)
        f.close()
        return name

    problems = 0

    # encoding
    ui.status(_("checking encoding (%s)...\n") % encoding.encoding)
    try:
        encoding.fromlocal("test")
    except util.Abort, inst:
        ui.write(" %s\n" % inst)
        ui.write(_(" (check that your locale is properly set)\n"))
        problems += 1

    # Python
    ui.status(_("checking Python executable (%s)\n") % sys.executable)
    ui.status(_("checking Python version (%s)\n")
              % ("%s.%s.%s" % sys.version_info[:3]))
    ui.status(_("checking Python lib (%s)...\n")
              % os.path.dirname(os.__file__))

    # compiled modules
    ui.status(_("checking installed modules (%s)...\n")
              % os.path.dirname(__file__))
    try:
        import bdiff, mpatch, base85, osutil
        dir(bdiff), dir(mpatch), dir(base85), dir(osutil) # quiet pyflakes
    except Exception, inst:
        ui.write(" %s\n" % inst)
        ui.write(_(" One or more extensions could not be found"))
        ui.write(_(" (check that you compiled the extensions)\n"))
        problems += 1

    # templates
    import templater
    p = templater.templatepath()
    ui.status(_("checking templates (%s)...\n") % ' '.join(p))
    if p:
        m = templater.templatepath("map-cmdline.default")
        if m:
            # template found, check if it is working
            try:
                templater.templater(m)
            except Exception, inst:
                ui.write(" %s\n" % inst)
                p = None
        else:
            ui.write(_(" template 'default' not found\n"))
            p = None
    else:
        ui.write(_(" no template directories found\n"))
    if not p:
        ui.write(_(" (templates seem to have been installed incorrectly)\n"))
        problems += 1

    # editor
    ui.status(_("checking commit editor...\n"))
    editor = ui.geteditor()
    cmdpath = util.findexe(shlex.split(editor)[0])
    if not cmdpath:
        if editor == 'vi':
            ui.write(_(" No commit editor set and can't find vi in PATH\n"))
            ui.write(_(" (specify a commit editor in your configuration"
                       " file)\n"))
        else:
            ui.write(_(" Can't find editor '%s' in PATH\n") % editor)
            ui.write(_(" (specify a commit editor in your configuration"
                       " file)\n"))
        problems += 1

    # check username
    ui.status(_("checking username...\n"))
    try:
        ui.username()
    except util.Abort, e:
        ui.write(" %s\n" % e)
        ui.write(_(" (specify a username in your configuration file)\n"))
        problems += 1

    if not problems:
        ui.status(_("no problems detected\n"))
    else:
        ui.write(_("%s problems detected,"
                   " please check your install!\n") % problems)

    return problems

@command('debugknown', [], _('REPO ID...'), norepo=True)
def debugknown(ui, repopath, *ids, **opts):
    """test whether node ids are known to a repo

    Every ID must be a full-length hex node id string. Returns a list of 0s
    and 1s indicating unknown/known.
    """
    repo = hg.peer(ui, opts, repopath)
    if not repo.capable('known'):
        raise util.Abort("known() not supported by target repository")
    flags = repo.known([bin(s) for s in ids])
    ui.write("%s\n" % ("".join([f and "1" or "0" for f in flags])))

@command('debuglabelcomplete', [], _('LABEL...'))
def debuglabelcomplete(ui, repo, *args):
    '''complete "labels" - tags, open branch names, bookmark names'''

    labels = set()
    labels.update(t[0] for t in repo.tagslist())
    labels.update(repo._bookmarks.keys())
    labels.update(tag for (tag, heads, tip, closed)
                  in repo.branchmap().iterbranches() if not closed)
    completions = set()
    if not args:
        args = ['']
    for a in args:
        completions.update(l for l in labels if l.startswith(a))
    ui.write('\n'.join(sorted(completions)))
    ui.write('\n')

@command('debugobsolete',
        [('', 'flags', 0, _('markers flag')),
         ('', 'record-parents', False,
          _('record parent information for the precursor')),
         ('r', 'rev', [], _('display markers relevant to REV')),
        ] + commitopts2,
         _('[OBSOLETED [REPLACEMENT] [REPL... ]'))
def debugobsolete(ui, repo, precursor=None, *successors, **opts):
    """create arbitrary obsolete marker

    With no arguments, displays the list of obsolescence markers."""

    def parsenodeid(s):
        try:
            # We do not use revsingle/revrange functions here to accept
            # arbitrary node identifiers, possibly not present in the
            # local repository.
            n = bin(s)
            if len(n) != len(nullid):
                raise TypeError()
            return n
        except TypeError:
            raise util.Abort('changeset references must be full hexadecimal '
                             'node identifiers')

    if precursor is not None:
        if opts['rev']:
            raise util.Abort('cannot select revision when creating marker')
        metadata = {}
        metadata['user'] = opts['user'] or ui.username()
        succs = tuple(parsenodeid(succ) for succ in successors)
        l = repo.lock()
        try:
            tr = repo.transaction('debugobsolete')
            try:
                try:
                    date = opts.get('date')
                    if date:
                        date = util.parsedate(date)
                    else:
                        date = None
                    prec = parsenodeid(precursor)
                    parents = None
                    if opts['record_parents']:
                        if prec not in repo.unfiltered():
                            raise util.Abort('cannot use --record-parents on '
                                             'unknown changesets')
                        parents = repo.unfiltered()[prec].parents()
                        parents = tuple(p.node() for p in parents)
                    repo.obsstore.create(tr, prec, succs, opts['flags'],
                                         parents=parents, date=date,
                                         metadata=metadata)
                    tr.close()
                except ValueError, exc:
                    raise util.Abort(_('bad obsmarker input: %s') % exc)
            finally:
                tr.release()
        finally:
            l.release()
    else:
        if opts['rev']:
            revs = scmutil.revrange(repo, opts['rev'])
            nodes = [repo[r].node() for r in revs]
            markers = list(obsolete.getmarkers(repo, nodes=nodes))
            markers.sort(key=lambda x: x._data)
        else:
            markers = obsolete.getmarkers(repo)

        for m in markers:
            cmdutil.showmarker(ui, m)

@command('debugpathcomplete',
         [('f', 'full', None, _('complete an entire path')),
          ('n', 'normal', None, _('show only normal files')),
          ('a', 'added', None, _('show only added files')),
          ('r', 'removed', None, _('show only removed files'))],
         _('FILESPEC...'))
def debugpathcomplete(ui, repo, *specs, **opts):
    '''complete part or all of a tracked path

    This command supports shells that offer path name completion. It
    currently completes only files already known to the dirstate.

    Completion extends only to the next path segment unless
    --full is specified, in which case entire paths are used.'''

    def complete(path, acceptable):
        dirstate = repo.dirstate
        spec = os.path.normpath(os.path.join(os.getcwd(), path))
        rootdir = repo.root + os.sep
        if spec != repo.root and not spec.startswith(rootdir):
            return [], []
        if os.path.isdir(spec):
            spec += '/'
        spec = spec[len(rootdir):]
        fixpaths = os.sep != '/'
        if fixpaths:
            spec = spec.replace(os.sep, '/')
        speclen = len(spec)
        fullpaths = opts['full']
        files, dirs = set(), set()
        adddir, addfile = dirs.add, files.add
        for f, st in dirstate.iteritems():
            if f.startswith(spec) and st[0] in acceptable:
                if fixpaths:
                    f = f.replace('/', os.sep)
                if fullpaths:
                    addfile(f)
                    continue
                s = f.find(os.sep, speclen)
                if s >= 0:
                    adddir(f[:s])
                else:
                    addfile(f)
        return files, dirs

    acceptable = ''
    if opts['normal']:
        acceptable += 'nm'
    if opts['added']:
        acceptable += 'a'
    if opts['removed']:
        acceptable += 'r'
    cwd = repo.getcwd()
    if not specs:
        specs = ['.']

    files, dirs = set(), set()
    for spec in specs:
        f, d = complete(spec, acceptable or 'nmar')
        files.update(f)
        dirs.update(d)
    files.update(dirs)
    ui.write('\n'.join(repo.pathto(p, cwd) for p in sorted(files)))
    ui.write('\n')

@command('debugpushkey', [], _('REPO NAMESPACE [KEY OLD NEW]'), norepo=True)
def debugpushkey(ui, repopath, namespace, *keyinfo, **opts):
    '''access the pushkey key/value protocol

    With two args, list the keys in the given namespace.

    With five args, set a key to new if it currently is set to old.
    Reports success or failure.
    '''

    target = hg.peer(ui, {}, repopath)
    if keyinfo:
        key, old, new = keyinfo
        r = target.pushkey(namespace, key, old, new)
        ui.status(str(r) + '\n')
        return not r
    else:
        for k, v in sorted(target.listkeys(namespace).iteritems()):
            ui.write("%s\t%s\n" % (k.encode('string-escape'),
                                   v.encode('string-escape')))

@command('debugpvec', [], _('A B'))
def debugpvec(ui, repo, a, b=None):
    ca = scmutil.revsingle(repo, a)
    cb = scmutil.revsingle(repo, b)
    pa = pvec.ctxpvec(ca)
    pb = pvec.ctxpvec(cb)
    if pa == pb:
        rel = "="
    elif pa > pb:
        rel = ">"
    elif pa < pb:
        rel = "<"
    elif pa | pb:
        rel = "|"
    ui.write(_("a: %s\n") % pa)
    ui.write(_("b: %s\n") % pb)
    ui.write(_("depth(a): %d depth(b): %d\n") % (pa._depth, pb._depth))
    ui.write(_("delta: %d hdist: %d distance: %d relation: %s\n") %
             (abs(pa._depth - pb._depth), pvec._hamming(pa._vec, pb._vec),
              pa.distance(pb), rel))

@command('debugrebuilddirstate|debugrebuildstate',
    [('r', 'rev', '', _('revision to rebuild to'), _('REV'))],
    _('[-r REV]'))
def debugrebuilddirstate(ui, repo, rev):
    """rebuild the dirstate as it would look like for the given revision

    If no revision is specified the first current parent will be used.

    The dirstate will be set to the files of the given revision.
    The actual working directory content or existing dirstate
    information such as adds or removes is not considered.

    One use of this command is to make the next :hg:`status` invocation
    check the actual file content.
    """
    ctx = scmutil.revsingle(repo, rev)
    wlock = repo.wlock()
    try:
        repo.dirstate.rebuild(ctx.node(), ctx.manifest())
    finally:
        wlock.release()

@command('debugrename',
    [('r', 'rev', '', _('revision to debug'), _('REV'))],
    _('[-r REV] FILE'))
def debugrename(ui, repo, file1, *pats, **opts):
    """dump rename information"""

    ctx = scmutil.revsingle(repo, opts.get('rev'))
    m = scmutil.match(ctx, (file1,) + pats, opts)
    for abs in ctx.walk(m):
        fctx = ctx[abs]
        o = fctx.filelog().renamed(fctx.filenode())
        rel = m.rel(abs)
        if o:
            ui.write(_("%s renamed from %s:%s\n") % (rel, o[0], hex(o[1])))
        else:
            ui.write(_("%s not renamed\n") % rel)

@command('debugrevlog',
    [('c', 'changelog', False, _('open changelog')),
     ('m', 'manifest', False, _('open manifest')),
     ('d', 'dump', False, _('dump index data'))],
    _('-c|-m|FILE'),
    optionalrepo=True)
def debugrevlog(ui, repo, file_=None, **opts):
    """show data and statistics about a revlog"""
    r = cmdutil.openrevlog(repo, 'debugrevlog', file_, opts)

    if opts.get("dump"):
        numrevs = len(r)
        ui.write("# rev p1rev p2rev start end deltastart base p1 p2"
                 " rawsize totalsize compression heads chainlen\n")
        ts = 0
        heads = set()
        rindex = r.index

        def chainbaseandlen(rev):
            clen = 0
            base = rindex[rev][3]
            while base != rev:
                clen += 1
                rev = base
                base = rindex[rev][3]
            return base, clen

        for rev in xrange(numrevs):
            dbase = r.deltaparent(rev)
            if dbase == -1:
                dbase = rev
            cbase, clen = chainbaseandlen(rev)
            p1, p2 = r.parentrevs(rev)
            rs = r.rawsize(rev)
            ts = ts + rs
            heads -= set(r.parentrevs(rev))
            heads.add(rev)
            ui.write("%5d %5d %5d %5d %5d %10d %4d %4d %4d %7d %9d "
                     "%11d %5d %8d\n" %
                     (rev, p1, p2, r.start(rev), r.end(rev),
                      r.start(dbase), r.start(cbase),
                      r.start(p1), r.start(p2),
                      rs, ts, ts / r.end(rev), len(heads), clen))
        return 0

    v = r.version
    format = v & 0xFFFF
    flags = []
    gdelta = False
    if v & revlog.REVLOGNGINLINEDATA:
        flags.append('inline')
    if v & revlog.REVLOGGENERALDELTA:
        gdelta = True
        flags.append('generaldelta')
    if not flags:
        flags = ['(none)']

    nummerges = 0
    numfull = 0
    numprev = 0
    nump1 = 0
    nump2 = 0
    numother = 0
    nump1prev = 0
    nump2prev = 0
    chainlengths = []

    datasize = [None, 0, 0L]
    fullsize = [None, 0, 0L]
    deltasize = [None, 0, 0L]

    def addsize(size, l):
        if l[0] is None or size < l[0]:
            l[0] = size
        if size > l[1]:
            l[1] = size
        l[2] += size

    numrevs = len(r)
    for rev in xrange(numrevs):
        p1, p2 = r.parentrevs(rev)
        delta = r.deltaparent(rev)
        if format > 0:
            addsize(r.rawsize(rev), datasize)
        if p2 != nullrev:
            nummerges += 1
        size = r.length(rev)
        if delta == nullrev:
            chainlengths.append(0)
            numfull += 1
            addsize(size, fullsize)
        else:
            chainlengths.append(chainlengths[delta] + 1)
            addsize(size, deltasize)
            if delta == rev - 1:
                numprev += 1
                if delta == p1:
                    nump1prev += 1
                elif delta == p2:
                    nump2prev += 1
            elif delta == p1:
                nump1 += 1
            elif delta == p2:
                nump2 += 1
            elif delta != nullrev:
                numother += 1

    # Adjust size min value for empty cases
    for size in (datasize, fullsize, deltasize):
        if size[0] is None:
            size[0] = 0

    numdeltas = numrevs - numfull
    numoprev = numprev - nump1prev - nump2prev
    totalrawsize = datasize[2]
    datasize[2] /= numrevs
    fulltotal = fullsize[2]
    fullsize[2] /= numfull
    deltatotal = deltasize[2]
    if numrevs - numfull > 0:
        deltasize[2] /= numrevs - numfull
    totalsize = fulltotal + deltatotal
    avgchainlen = sum(chainlengths) / numrevs
    compratio = totalrawsize / totalsize

    basedfmtstr = '%%%dd\n'
    basepcfmtstr = '%%%dd %s(%%5.2f%%%%)\n'

    def dfmtstr(max):
        return basedfmtstr % len(str(max))
    def pcfmtstr(max, padding=0):
        return basepcfmtstr % (len(str(max)), ' ' * padding)

    def pcfmt(value, total):
        return (value, 100 * float(value) / total)

    ui.write(('format : %d\n') % format)
    ui.write(('flags : %s\n') % ', '.join(flags))

    ui.write('\n')
    fmt = pcfmtstr(totalsize)
    fmt2 = dfmtstr(totalsize)
    ui.write(('revisions : ') + fmt2 % numrevs)
    ui.write((' merges : ') + fmt % pcfmt(nummerges, numrevs))
    ui.write((' normal : ') + fmt % pcfmt(numrevs - nummerges, numrevs))
    ui.write(('revisions : ') + fmt2 % numrevs)
    ui.write((' full : ') + fmt % pcfmt(numfull, numrevs))
    ui.write((' deltas : ') + fmt % pcfmt(numdeltas, numrevs))
    ui.write(('revision size : ') + fmt2 % totalsize)
    ui.write((' full : ') + fmt % pcfmt(fulltotal, totalsize))
    ui.write((' deltas : ') + fmt % pcfmt(deltatotal, totalsize))

    ui.write('\n')
    fmt = dfmtstr(max(avgchainlen, compratio))
    ui.write(('avg chain length : ') + fmt % avgchainlen)
    ui.write(('compression ratio : ') + fmt % compratio)

    if format > 0:
        ui.write('\n')
        ui.write(('uncompressed data size (min/max/avg) : %d / %d / %d\n')
                 % tuple(datasize))
        ui.write(('full revision size (min/max/avg) : %d / %d / %d\n')
                 % tuple(fullsize))
        ui.write(('delta size (min/max/avg) : %d / %d / %d\n')
                 % tuple(deltasize))

    if numdeltas > 0:
        ui.write('\n')
        fmt = pcfmtstr(numdeltas)
        fmt2 = pcfmtstr(numdeltas, 4)
        ui.write(('deltas against prev : ') + fmt % pcfmt(numprev, numdeltas))
        if numprev > 0:
            ui.write((' where prev = p1 : ') + fmt2 % pcfmt(nump1prev,
                                                            numprev))
            ui.write((' where prev = p2 : ') + fmt2 % pcfmt(nump2prev,
                                                            numprev))
            ui.write((' other : ') + fmt2 % pcfmt(numoprev,
                                                  numprev))
        if gdelta:
            ui.write(('deltas against p1 : ')
                     + fmt % pcfmt(nump1, numdeltas))
            ui.write(('deltas against p2 : ')
                     + fmt % pcfmt(nump2, numdeltas))
            ui.write(('deltas against other : ') + fmt % pcfmt(numother,
                                                               numdeltas))

@command('debugrevspec',
    [('', 'optimize', None, _('print parsed tree after optimizing'))],
    ('REVSPEC'))
def debugrevspec(ui, repo, expr, **opts):
    """parse and apply a revision specification

    Use --verbose to print the parsed tree before and after aliases
    expansion.
    """
    if ui.verbose:
        tree = revset.parse(expr)[0]
        ui.note(revset.prettyformat(tree), "\n")
        newtree = revset.findaliases(ui, tree)
        if newtree != tree:
            ui.note(revset.prettyformat(newtree), "\n")
        if opts["optimize"]:
            weight, optimizedtree = revset.optimize(newtree, True)
            ui.note("* optimized:\n", revset.prettyformat(optimizedtree), "\n")
    func = revset.match(ui, expr)
    for c in func(repo, revset.spanset(repo)):
        ui.write("%s\n" % c)

@command('debugsetparents', [], _('REV1 [REV2]'))
def debugsetparents(ui, repo, rev1, rev2=None):
    """manually set the parents of the current working directory

    This is useful for writing repository conversion tools, but should
    be used with care.

    Returns 0 on success.
    """

    r1 = scmutil.revsingle(repo, rev1).node()
    r2 = scmutil.revsingle(repo, rev2, 'null').node()

    wlock = repo.wlock()
    try:
        repo.setparents(r1, r2)
    finally:
        wlock.release()

@command('debugdirstate|debugstate',
    [('', 'nodates', None, _('do not display the saved mtime')),
     ('', 'datesort', None, _('sort by saved mtime'))],
    _('[OPTION]...'))
def debugstate(ui, repo, nodates=None, datesort=None):
    """show the contents of the current dirstate"""
    timestr = ""
    showdate = not nodates
    if datesort:
        keyfunc = lambda x: (x[1][3], x[0]) # sort by mtime, then by filename
    else:
        keyfunc = None # sort by filename
    for file_, ent in sorted(repo.dirstate._map.iteritems(), key=keyfunc):
        if showdate:
            if ent[3] == -1:
                # Pad or slice to locale representation
                locale_len = len(time.strftime("%Y-%m-%d %H:%M:%S ",
                                               time.localtime(0)))
                timestr = 'unset'
                timestr = (timestr[:locale_len] +
                           ' ' * (locale_len - len(timestr)))
            else:
                timestr = time.strftime("%Y-%m-%d %H:%M:%S ",
                                        time.localtime(ent[3]))
        if ent[1] & 020000:
            mode = 'lnk'
        else:
            mode = '%3o' % (ent[1] & 0777 & ~util.umask)
        ui.write("%c %s %10d %s%s\n" % (ent[0], mode, ent[2], timestr, file_))
    for f in repo.dirstate.copies():
        ui.write(_("copy: %s -> %s\n") % (repo.dirstate.copied(f), f))

@command('debugsub',
    [('r', 'rev', '',
      _('revision to check'), _('REV'))],
    _('[-r REV] [REV]'))
def debugsub(ui, repo, rev=None):
    ctx = scmutil.revsingle(repo, rev, None)
    for k, v in sorted(ctx.substate.items()):
        ui.write(('path %s\n') % k)
        ui.write((' source %s\n') % v[0])
        ui.write((' revision %s\n') % v[1])

@command('debugsuccessorssets',
    [],
    _('[REV]'))
def debugsuccessorssets(ui, repo, *revs):
    """show set of successors for revision

    A successors set of changeset A is a consistent group of revisions that
    succeed A. It contains non-obsolete changesets only.

    In most cases a changeset A has a single successors set containing a single
    successor (changeset A replaced by A').

    A changeset that is made obsolete with no successors is called "pruned".
    Such changesets have no successors sets at all.

    A changeset that has been "split" will have a successors set containing
    more than one successor.

    A changeset that has been rewritten in multiple different ways is called
    "divergent". Such changesets have multiple successor sets (each of which
    may also be split, i.e. have multiple successors).

    Results are displayed as follows::

        <rev1>
            <successors-1A>
        <rev2>
            <successors-2A>
            <successors-2B1> <successors-2B2> <successors-2B3>

    Here rev2 has two possible (i.e. divergent) successors sets. The first
    holds one element, whereas the second holds three (i.e. the changeset has
    been split).
    """
    # passed to successorssets caching computation from one call to another
    cache = {}
    ctx2str = str
    node2str = short
    if ui.debug():
        def ctx2str(ctx):
            return ctx.hex()
        node2str = hex
    for rev in scmutil.revrange(repo, revs):
        ctx = repo[rev]
        ui.write('%s\n' % ctx2str(ctx))
        for succsset in obsolete.successorssets(repo, ctx.node(), cache):
            if succsset:
                ui.write('    ')
                ui.write(node2str(succsset[0]))
                for node in succsset[1:]:
                    ui.write(' ')
                    ui.write(node2str(node))
            ui.write('\n')

@command('debugwalk', walkopts, _('[OPTION]... [FILE]...'), inferrepo=True)
def debugwalk(ui, repo, *pats, **opts):
    """show how files match on given patterns"""
    m = scmutil.match(repo[None], pats, opts)
    items = list(repo.walk(m))
    if not items:
        return
    f = lambda fn: fn
    if ui.configbool('ui', 'slash') and os.sep != '/':
        f = lambda fn: util.normpath(fn)
    fmt = 'f %%-%ds %%-%ds %%s' % (
        max([len(abs) for abs in items]),
        max([len(m.rel(abs)) for abs in items]))
    for abs in items:
        line = fmt % (abs, f(m.rel(abs)), m.exact(abs) and 'exact' or '')
        ui.write("%s\n" % line.rstrip())

@command('debugwireargs',
    [('', 'three', '', 'three'),
     ('', 'four', '', 'four'),
     ('', 'five', '', 'five'),
    ] + remoteopts,
    _('REPO [OPTIONS]... [ONE [TWO]]'),
    norepo=True)
def debugwireargs(ui, repopath, *vals, **opts):
    repo = hg.peer(ui, opts, repopath)
    for opt in remoteopts:
        del opts[opt[1]]
    args = {}
    for k, v in opts.iteritems():
        if v:
            args[k] = v
    # run twice to check that we don't mess up the stream for the next command
    res1 = repo.debugwireargs(*vals, **args)
    res2 = repo.debugwireargs(*vals, **args)
    ui.write("%s\n" % res1)
    if res1 != res2:
        ui.warn("%s\n" % res2)

@command('^diff',
    [('r', 'rev', [], _('revision'), _('REV')),
     ('c', 'change', '', _('change made by revision'), _('REV'))
    ] + diffopts + diffopts2 + walkopts + subrepoopts,
    _('[OPTION]... ([-c REV] | [-r REV1 [-r REV2]]) [FILE]...'),
    inferrepo=True)
def diff(ui, repo, *pats, **opts):
    """diff repository (or selected files)

    Show differences between revisions for the specified files.

    Differences between files are shown using the unified diff format.

    .. note::

       diff may generate unexpected results for merges, as it will
       default to comparing against the working directory's first
       parent changeset if no revisions are specified.

    When two revision arguments are given, then changes are shown
    between those revisions. If only one revision is specified then
    that revision is compared to the working directory, and, when no
    revisions are specified, the working directory files are compared
    to its parent.

    Alternatively you can specify -c/--change with a revision to see
    the changes in that changeset relative to its first parent.

    Without the -a/--text option, diff will avoid generating diffs of
    files it detects as binary. With -a, diff will generate a diff
    anyway, probably with undesirable results.

    Use the -g/--git option to generate diffs in the git extended diff
    format. For more information, read :hg:`help diffs`.

    .. container:: verbose

      Examples:

      - compare a file in the current working directory to its parent::

          hg diff foo.c

      - compare two historical versions of a directory, with rename info::

          hg diff --git -r 1.0:1.2 lib/

      - get change stats relative to the last change on some date::

          hg diff --stat -r "date('may 2')"

      - diff all newly-added files that contain a keyword::

          hg diff "set:added() and grep(GNU)"

      - compare a revision and its parents::

          hg diff -c 9353       # compare against first parent
          hg diff -r 9353^:9353 # same using revset syntax
          hg diff -r 9353^2:9353 # compare against the second parent

    Returns 0 on success.
    """

    revs = opts.get('rev')
    change = opts.get('change')
    stat = opts.get('stat')
    reverse = opts.get('reverse')

    if revs and change:
        msg = _('cannot specify --rev and --change at the same time')
        raise util.Abort(msg)
    elif change:
        node2 = scmutil.revsingle(repo, change, None).node()
        node1 = repo[node2].p1().node()
    else:
        node1, node2 = scmutil.revpair(repo, revs)

    if reverse:
        node1, node2 = node2, node1

    diffopts = patch.diffopts(ui, opts)
    m = scmutil.match(repo[node2], pats, opts)
    cmdutil.diffordiffstat(ui, repo, diffopts, node1, node2, m, stat=stat,
                           listsubrepos=opts.get('subrepos'))

@command('^export',
    [('o', 'output', '',
      _('print output to file with formatted name'), _('FORMAT')),
     ('', 'switch-parent', None, _('diff against the second parent')),
     ('r', 'rev', [], _('revisions to export'), _('REV')),
    ] + diffopts,
    _('[OPTION]... [-o OUTFILESPEC] [-r] [REV]...'))
def export(ui, repo, *changesets, **opts):
    """dump the header and diffs for one or more changesets

    Print the changeset header and diffs for one or more revisions.
    If no revision is given, the parent of the working directory is used.

    The information shown in the changeset header is: author, date,
    branch name (if non-default), changeset hash, parent(s) and commit
    comment.

    .. note::

       export may generate unexpected diff output for merge
       changesets, as it will compare the merge changeset against its
       first parent only.

    Output may be to a file, in which case the name of the file is
    given using a format string. The formatting rules are as follows:

    :``%%``: literal "%" character
    :``%H``: changeset hash (40 hexadecimal digits)
    :``%N``: number of patches being generated
    :``%R``: changeset revision number
    :``%b``: basename of the exporting repository
    :``%h``: short-form changeset hash (12 hexadecimal digits)
    :``%m``: first line of the commit message (only alphanumeric characters)
    :``%n``: zero-padded sequence number, starting at 1
    :``%r``: zero-padded changeset revision number

    Without the -a/--text option, export will avoid generating diffs
    of files it detects as binary. With -a, export will generate a
    diff anyway, probably with undesirable results.

    Use the -g/--git option to generate diffs in the git extended diff
    format. See :hg:`help diffs` for more information.

    With the --switch-parent option, the diff will be against the
    second parent. It can be useful to review a merge.

    .. container:: verbose

      Examples:

      - use export and import to transplant a bugfix to the current
        branch::

          hg export -r 9353 | hg import -

      - export all the changesets between two revisions to a file with
        rename information::

          hg export --git -r 123:150 > changes.txt

      - split outgoing changes into a series of patches with
        descriptive names::

          hg export -r "outgoing()" -o "%n-%m.patch"

    Returns 0 on success.
    """
    changesets += tuple(opts.get('rev', []))
    if not changesets:
        changesets = ['.']
    revs = scmutil.revrange(repo, changesets)
    if not revs:
        raise util.Abort(_("export requires at least one changeset"))
    if len(revs) > 1:
        ui.note(_('exporting patches:\n'))
    else:
        ui.note(_('exporting patch:\n'))
    cmdutil.export(repo, revs, template=opts.get('output'),
                   switch_parent=opts.get('switch_parent'),
                   opts=patch.diffopts(ui, opts))

@command('^forget', walkopts, _('[OPTION]... FILE...'), inferrepo=True)
def forget(ui, repo, *pats, **opts):
    """forget the specified files on the next commit

    Mark the specified files so they will no longer be tracked
    after the next commit.

    This only removes files from the current branch, not from the
    entire project history, and it does not delete them from the
    working directory.

    To undo a forget before the next commit, see :hg:`add`.

    .. container:: verbose

      Examples:

      - forget newly-added binary files::

          hg forget "set:added() and binary()"

      - forget files that would be excluded by .hgignore::

          hg forget "set:hgignore()"

    Returns 0 on success.
    """

    if not pats:
        raise util.Abort(_('no files specified'))

    m = scmutil.match(repo[None], pats, opts)
    rejected = cmdutil.forget(ui, repo, m, prefix="", explicitonly=False)[0]
    return rejected and 1 or 0

@command(
    'graft',
    [('r', 'rev', [], _('revisions to graft'), _('REV')),
     ('c', 'continue', False, _('resume interrupted graft')),
     ('e', 'edit', False, _('invoke editor on commit messages')),
     ('', 'log', None, _('append graft info to log message')),
     ('f', 'force', False, _('force graft')),
     ('D', 'currentdate', False,
      _('record the current date as commit date')),
     ('U', 'currentuser', False,
      _('record the current user as committer'), _('DATE'))]
    + commitopts2 + mergetoolopts + dryrunopts,
    _('[OPTION]... [-r] REV...'))
def graft(ui, repo, *revs, **opts):
    '''copy changes from other branches onto the current branch

    This command uses Mercurial's merge logic to copy individual
    changes from other branches without merging branches in the
    history graph. This is sometimes known as 'backporting' or
    'cherry-picking'. By default, graft will copy user, date, and
    description from the source changesets.

    Changesets that are ancestors of the current revision, that have
    already been grafted, or that are merges will be skipped.

    If --log is specified, log messages will have a comment appended
    of the form::

      (grafted from CHANGESETHASH)

    If --force is specified, revisions will be grafted even if they
    are already ancestors of or have been grafted to the destination.
    This is useful when the revisions have since been backed out.

    If a graft merge results in conflicts, the graft process is
    interrupted so that the current merge can be manually resolved.
    Once all conflicts are addressed, the graft process can be
    continued with the -c/--continue option.

    .. note::

       The -c/--continue option does not reapply earlier options, except
       for --force.

    .. container:: verbose

      Examples:

      - copy a single change to the stable branch and edit its description::

          hg update stable
          hg graft --edit 9393

      - graft a range of changesets with one exception, updating dates::

          hg graft -D "2085::2093 and not 2091"

      - continue a graft after resolving conflicts::

          hg graft -c

      - show the source of a grafted changeset::

          hg log --debug -r .

    See :hg:`help revisions` and :hg:`help revsets` for more about
    specifying revisions.

    Returns 0 on successful completion.
    '''

    revs = list(revs)
    revs.extend(opts['rev'])

    if not opts.get('user') and opts.get('currentuser'):
        opts['user'] = ui.username()
    if not opts.get('date') and opts.get('currentdate'):
        opts['date'] = "%d %d" % util.makedate()

    editor = cmdutil.getcommiteditor(editform='graft', **opts)

    cont = False
    if opts['continue']:
        cont = True
        if revs:
            raise util.Abort(_("can't specify --continue and revisions"))
        # read in unfinished revisions
        try:
            nodes = repo.opener.read('graftstate').splitlines()
            revs = [repo[node].rev() for node in nodes]
        except IOError, inst:
            if inst.errno != errno.ENOENT:
                raise
            raise util.Abort(_("no graft state found, can't continue"))
    else:
        cmdutil.checkunfinished(repo)
        cmdutil.bailifchanged(repo)
        if not revs:
            raise util.Abort(_('no revisions specified'))
        revs = scmutil.revrange(repo, revs)

    # check for merges
    for rev in repo.revs('%ld and merge()', revs):
        ui.warn(_('skipping ungraftable merge revision %s\n') % rev)
        revs.remove(rev)
    if not revs:
        return -1

    # Don't check in the --continue case, in effect retaining --force across
    # --continues. That's because without --force, any revisions we decided to
    # skip would have been filtered out here, so they wouldn't have made their
    # way to the graftstate. With --force, any revisions we would have otherwise
    # skipped would not have been filtered out, and if they hadn't been applied
    # already, they'd have been in the graftstate.
    if not (cont or opts.get('force')):
        # check for ancestors of dest branch
        crev = repo['.'].rev()
        ancestors = repo.changelog.ancestors([crev], inclusive=True)
        # Cannot use x.remove(y) on smart set, this has to be a list.
        # XXX make this lazy in the future
        revs = list(revs)
        # don't mutate while iterating, create a copy
        for rev in list(revs):
            if rev in ancestors:
                ui.warn(_('skipping ancestor revision %s\n') % rev)
                # XXX remove on list is slow
                revs.remove(rev)
        if not revs:
            return -1

        # analyze revs for earlier grafts
        ids = {}
        for ctx in repo.set("%ld", revs):
            ids[ctx.hex()] = ctx.rev()
            n = ctx.extra().get('source')
            if n:
                ids[n] = ctx.rev()

        # check ancestors for earlier grafts
        ui.debug('scanning for duplicate grafts\n')

        for rev in repo.changelog.findmissingrevs(revs, [crev]):
            ctx = repo[rev]
            n = ctx.extra().get('source')
            if n in ids:
                try:
                    r = repo[n].rev()
                except error.RepoLookupError:
                    r = None
                if r in revs:
                    ui.warn(_('skipping revision %s (already grafted to %s)\n')
                            % (r, rev))
                    revs.remove(r)
                elif ids[n] in revs:
                    if r is None:
                        ui.warn(_('skipping already grafted revision %s '
                                  '(%s also has unknown origin %s)\n')
                                % (ids[n], rev, n))
                    else:
                        ui.warn(_('skipping already grafted revision %s '
                                  '(%s also has origin %d)\n')
                                % (ids[n], rev, r))
                    revs.remove(ids[n])
            elif ctx.hex() in ids:
                r = ids[ctx.hex()]
                ui.warn(_('skipping already grafted revision %s '
                          '(was grafted from %d)\n') % (r, rev))
                revs.remove(r)
        if not revs:
            return -1

    wlock = repo.wlock()
    try:
        current = repo['.']
        for pos, ctx in enumerate(repo.set("%ld", revs)):

            ui.status(_('grafting revision %s\n') % ctx.rev())
            if opts.get('dry_run'):
                continue

            source = ctx.extra().get('source')
            if not source:
                source = ctx.hex()
            extra = {'source': source}
            user = ctx.user()
            if opts.get('user'):
                user = opts['user']
            date = ctx.date()
            if opts.get('date'):
                date = opts['date']
            message = ctx.description()
            if opts.get('log'):
                message += '\n(grafted from %s)' % ctx.hex()

            # we don't merge the first commit when continuing
            if not cont:
                # perform the graft merge with p1(rev) as 'ancestor'
                try:
                    # ui.forcemerge is an internal variable, do not document
                    repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                                      'graft')
                    stats = mergemod.update(repo, ctx.node(), True, True, False,
                                            ctx.p1().node(),
                                            labels=['local', 'graft'])
                finally:
                    repo.ui.setconfig('ui', 'forcemerge', '', 'graft')
                # report any conflicts
                if stats and stats[3] > 0:
                    # write out state for --continue
                    nodelines = [repo[rev].hex() + "\n" for rev in revs[pos:]]
                    repo.opener.write('graftstate', ''.join(nodelines))
                    raise util.Abort(
                        _("unresolved conflicts, can't continue"),
                        hint=_('use hg resolve and hg graft --continue'))
            else:
                cont = False

            # drop the second merge parent
            repo.setparents(current.node(), nullid)
            repo.dirstate.write()
            # fix up dirstate for copies and renames
            cmdutil.duplicatecopies(repo, ctx.rev(), ctx.p1().rev())

            # commit
            node = repo.commit(text=message, user=user,
                               date=date, extra=extra, editor=editor)
            if node is None:
                ui.status(_('graft for revision %s is empty\n') % ctx.rev())
            else:
                current = repo[node]
    finally:
        wlock.release()

    # remove state when we complete successfully
    if not opts.get('dry_run'):
        util.unlinkpath(repo.join('graftstate'), ignoremissing=True)

    return 0

@command('grep',
    [('0', 'print0', None, _('end fields with NUL')),
    ('', 'all', None, _('print all revisions that match')),
    ('a', 'text', None, _('treat all files as text')),
    ('f', 'follow', None,
     _('follow changeset history,'
       ' or file history across copies and renames')),
    ('i', 'ignore-case', None, _('ignore case when matching')),
    ('l', 'files-with-matches', None,
     _('print only filenames and revisions that match')),
    ('n', 'line-number', None, _('print matching line numbers')),
    ('r', 'rev', [],
     _('only search files changed within revision range'), _('REV')),
    ('u', 'user', None, _('list the author (long with -v)')),
    ('d', 'date', None, _('list the date (short with -q)')),
    ] + walkopts,
    _('[OPTION]... PATTERN [FILE]...'),
    inferrepo=True)
def grep(ui, repo, pattern, *pats, **opts):
    """search for a pattern in specified files and revisions

    Search revisions of files for a regular expression.

    This command behaves differently than Unix grep. It only accepts
    Python/Perl regexps. It searches repository history, not the
    working directory. It always prints the revision number in which a
    match appears.

    By default, grep only prints output for the first revision of a
    file in which it finds a match. To get it to print every revision
    that contains a change in match status ("-" for a match that
    becomes a non-match, or "+" for a non-match that becomes a match),
    use the --all flag.

    Returns 0 if a match is found, 1 otherwise.
    """
    reflags = re.M
    if opts.get('ignore_case'):
        reflags |= re.I
    try:
        regexp = util.re.compile(pattern, reflags)
    except re.error, inst:
        ui.warn(_("grep: invalid match pattern: %s\n") % inst)
        return 1
    sep, eol = ':', '\n'
    if opts.get('print0'):
        sep = eol = '\0'

    getfile = util.lrucachefunc(repo.file)

    def matchlines(body):
        begin = 0
        linenum = 0
        while begin < len(body):
            match = regexp.search(body, begin)
            if not match:
                break
            mstart, mend = match.span()
            linenum += body.count('\n', begin, mstart) + 1
            lstart = body.rfind('\n', begin, mstart) + 1 or begin
            begin = body.find('\n', mend) + 1 or len(body) + 1
            lend = begin - 1
            yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]

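# matchlines turns absolute regex match offsets into per-line coordinates,
# counting newlines incrementally instead of rescanning the buffer for every
# match. A standalone sketch of the same arithmetic (the name match_lines is
# hypothetical, not Mercurial's API; Python 3 syntax):
#
# ```python
# import re
#
# def match_lines(body, regexp):
#     """Yield (linenum, colstart, colend, line) for each matching line."""
#     begin = 0
#     linenum = 0
#     while begin < len(body):
#         match = regexp.search(body, begin)
#         if not match:
#             break
#         mstart, mend = match.span()
#         # lines skipped since the last match, plus one for this line
#         linenum += body.count('\n', begin, mstart) + 1
#         # start of the line containing the match (rfind -1 -> 0 -> begin)
#         lstart = body.rfind('\n', begin, mstart) + 1 or begin
#         # resume scanning after the end of the matched line
#         begin = body.find('\n', mend) + 1 or len(body) + 1
#         lend = begin - 1
#         yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]
# ```
#
# Because begin jumps past the end of the matched line, at most one tuple is
# produced per line even when the line contains several matches.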
    class linestate(object):
        def __init__(self, line, linenum, colstart, colend):
            self.line = line
            self.linenum = linenum
            self.colstart = colstart
            self.colend = colend

        def __hash__(self):
            return hash((self.linenum, self.line))

        def __eq__(self, other):
            return self.line == other.line

        def __iter__(self):
            yield (self.line[:self.colstart], '')
            yield (self.line[self.colstart:self.colend], 'grep.match')
            rest = self.line[self.colend:]
            while rest != '':
                match = regexp.search(rest)
                if not match:
                    yield (rest, '')
                    break
                mstart, mend = match.span()
                yield (rest[:mstart], '')
                yield (rest[mstart:mend], 'grep.match')
                rest = rest[mend:]

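# linestate.__iter__ splits a line into (text, label) pairs so the writer can
# highlight every match: the first match comes from the stored columns, and
# any further matches on the same line are found by re-searching the rest of
# the line. A standalone sketch of the same segmentation (highlight_segments
# is a hypothetical name; Python 3 syntax):
#
# ```python
# import re
#
# def highlight_segments(line, colstart, colend, regexp):
#     """Split line into (text, label) pairs, labelling every match."""
#     segments = [(line[:colstart], ''), (line[colstart:colend], 'match')]
#     rest = line[colend:]
#     while rest != '':
#         match = regexp.search(rest)
#         if not match:
#             segments.append((rest, ''))
#             break
#         mstart, mend = match.span()
#         segments.append((rest[:mstart], ''))
#         segments.append((rest[mstart:mend], 'match'))
#         rest = rest[mend:]
#     return segments
# ```
#
# Concatenating the text parts reproduces the original line exactly, which is
# what lets the caller print the segments back-to-back with different labels.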
    matches = {}
    copies = {}
    def grepbody(fn, rev, body):
        matches[rev].setdefault(fn, [])
        m = matches[rev][fn]
        for lnum, cstart, cend, line in matchlines(body):
            s = linestate(line, lnum, cstart, cend)
            m.append(s)

    def difflinestates(a, b):
        sm = difflib.SequenceMatcher(None, a, b)
        for tag, alo, ahi, blo, bhi in sm.get_opcodes():
            if tag == 'insert':
                for i in xrange(blo, bhi):
                    yield ('+', b[i])
            elif tag == 'delete':
                for i in xrange(alo, ahi):
                    yield ('-', a[i])
            elif tag == 'replace':
                for i in xrange(alo, ahi):
                    yield ('-', a[i])
                for i in xrange(blo, bhi):
                    yield ('+', b[i])

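# difflinestates drives the --all output: it diffs the parent's match list
# against the child's and reports only insertions ('+') and deletions ('-'),
# which is why unchanged matches are not repeated for every revision. The
# same opcode walk over plain hashable items (diff_states is a hypothetical
# name; Python 3 syntax):
#
# ```python
# import difflib
#
# def diff_states(a, b):
#     """Yield ('+', item) / ('-', item) events turning list a into list b."""
#     sm = difflib.SequenceMatcher(None, a, b)
#     for tag, alo, ahi, blo, bhi in sm.get_opcodes():
#         if tag == 'insert':
#             for i in range(blo, bhi):
#                 yield ('+', b[i])
#         elif tag == 'delete':
#             for i in range(alo, ahi):
#                 yield ('-', a[i])
#         elif tag == 'replace':
#             for i in range(alo, ahi):
#                 yield ('-', a[i])
#             for i in range(blo, bhi):
#                 yield ('+', b[i])
#         # 'equal' runs produce no output: unchanged matches are suppressed
# ```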
    def display(fn, ctx, pstates, states):
        rev = ctx.rev()
        datefunc = ui.quiet and util.shortdate or util.datestr
        found = False
        @util.cachefunc
        def binary():
            flog = getfile(fn)
            return util.binary(flog.read(ctx.filenode(fn)))

        if opts.get('all'):
            iter = difflinestates(pstates, states)
        else:
            iter = [('', l) for l in states]
        for change, l in iter:
            cols = [(fn, 'grep.filename'), (str(rev), 'grep.rev')]

            if opts.get('line_number'):
                cols.append((str(l.linenum), 'grep.linenumber'))
            if opts.get('all'):
                cols.append((change, 'grep.change'))
            if opts.get('user'):
                cols.append((ui.shortuser(ctx.user()), 'grep.user'))
            if opts.get('date'):
                cols.append((datefunc(ctx.date()), 'grep.date'))
            for col, label in cols[:-1]:
                ui.write(col, label=label)
                ui.write(sep, label='grep.sep')
            ui.write(cols[-1][0], label=cols[-1][1])
            if not opts.get('files_with_matches'):
                ui.write(sep, label='grep.sep')
                if not opts.get('text') and binary():
                    ui.write(" Binary file matches")
                else:
                    for s, label in l:
                        ui.write(s, label=label)
            ui.write(eol)
            found = True
            if opts.get('files_with_matches'):
                break
        return found

    skip = {}
    revfiles = {}
    matchfn = scmutil.match(repo[None], pats, opts)
    found = False
    follow = opts.get('follow')

    def prep(ctx, fns):
        rev = ctx.rev()
        pctx = ctx.p1()
        parent = pctx.rev()
        matches.setdefault(rev, {})
        matches.setdefault(parent, {})
        files = revfiles.setdefault(rev, [])
        for fn in fns:
            flog = getfile(fn)
            try:
                fnode = ctx.filenode(fn)
            except error.LookupError:
                continue

            copied = flog.renamed(fnode)
            copy = follow and copied and copied[0]
            if copy:
                copies.setdefault(rev, {})[fn] = copy
            if fn in skip:
                if copy:
                    skip[copy] = True
                continue
            files.append(fn)

            if fn not in matches[rev]:
                grepbody(fn, rev, flog.read(fnode))

            pfn = copy or fn
            if pfn not in matches[parent]:
                try:
                    fnode = pctx.filenode(pfn)
                    grepbody(pfn, parent, flog.read(fnode))
                except error.LookupError:
                    pass

    for ctx in cmdutil.walkchangerevs(repo, matchfn, opts, prep):
        rev = ctx.rev()
        parent = ctx.p1().rev()
        for fn in sorted(revfiles.get(rev, [])):
            states = matches[rev][fn]
            copy = copies.get(rev, {}).get(fn)
            if fn in skip:
                if copy:
                    skip[copy] = True
                continue
            pstates = matches.get(parent, {}).get(copy or fn, [])
            if pstates or states:
                r = display(fn, ctx, pstates, states)
                found = found or r
                if r and not opts.get('all'):
                    skip[fn] = True
                    if copy:
                        skip[copy] = True
        del matches[rev]
        del revfiles[rev]

    return not found

@command('heads',
    [('r', 'rev', '',
      _('show only heads which are descendants of STARTREV'), _('STARTREV')),
    ('t', 'topo', False, _('show topological heads only')),
    ('a', 'active', False, _('show active branchheads only (DEPRECATED)')),
    ('c', 'closed', False, _('show normal and closed branch heads')),
    ] + templateopts,
    _('[-ct] [-r STARTREV] [REV]...'))
def heads(ui, repo, *branchrevs, **opts):
    """show branch heads

    With no arguments, show all open branch heads in the repository.
    Branch heads are changesets that have no descendants on the
    same branch. They are where development generally takes place and
    are the usual targets for update and merge operations.

    If one or more REVs are given, only open branch heads on the
    branches associated with the specified changesets are shown. This
    means that you can use :hg:`heads .` to see the heads on the
    currently checked-out branch.

    If -c/--closed is specified, also show branch heads marked closed
    (see :hg:`commit --close-branch`).

    If STARTREV is specified, only those heads that are descendants of
    STARTREV will be displayed.

    If -t/--topo is specified, named branch mechanics will be ignored and only
    topological heads (changesets with no children) will be shown.

    Returns 0 if matching heads are found, 1 if not.
    """

    start = None
    if 'rev' in opts:
        start = scmutil.revsingle(repo, opts['rev'], None).node()

    if opts.get('topo'):
        heads = [repo[h] for h in repo.heads(start)]
    else:
        heads = []
        for branch in repo.branchmap():
            heads += repo.branchheads(branch, start, opts.get('closed'))
        heads = [repo[h] for h in heads]

    if branchrevs:
        branches = set(repo[br].branch() for br in branchrevs)
        heads = [h for h in heads if h.branch() in branches]

    if opts.get('active') and branchrevs:
        dagheads = repo.heads(start)
        heads = [h for h in heads if h.node() in dagheads]

    if branchrevs:
        haveheads = set(h.branch() for h in heads)
        if branches - haveheads:
            headless = ', '.join(b for b in branches - haveheads)
            msg = _('no open branch heads found on branches %s')
            if opts.get('rev'):
                msg += _(' (started at %s)') % opts['rev']
            ui.warn((msg + '\n') % headless)

    if not heads:
        return 1

    heads = sorted(heads, key=lambda x: -x.rev())
    displayer = cmdutil.show_changeset(ui, repo, opts)
    for ctx in heads:
        displayer.show(ctx)
    displayer.close()

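# The docstring above distinguishes branch heads (no descendants on the same
# named branch) from topological heads (no children at all, the -t/--topo
# case). The topological notion can be sketched over a toy DAG given as a
# parent map; this is an illustrative helper (topo_heads is a hypothetical
# name, not Mercurial's implementation; Python 3 syntax):
#
# ```python
# def topo_heads(parents):
#     """Return topological heads of a DAG given as {node: [parent, ...]}.
#
#     A topological head is a node that no other node lists as a parent,
#     i.e. a changeset with no children.
#     """
#     has_child = set()
#     for node in parents:
#         has_child.update(parents[node])
#     return sorted(n for n in parents if n not in has_child)
# ```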
@command('help',
    [('e', 'extension', None, _('show only help for extensions')),
    ('c', 'command', None, _('show only help for commands')),
    ('k', 'keyword', '', _('show topics matching keyword')),
    ],
    _('[-ec] [TOPIC]'),
    norepo=True)
def help_(ui, name=None, **opts):
    """show help for a given topic or a help overview

    With no arguments, print a list of commands with short help messages.

    Given a topic, extension, or command name, print help for that
    topic.

    Returns 0 if successful.
    """

    textwidth = min(ui.termwidth(), 80) - 2

    keep = ui.verbose and ['verbose'] or []
    text = help.help_(ui, name, **opts)

    formatted, pruned = minirst.format(text, textwidth, keep=keep)
    if 'verbose' in pruned:
        keep.append('omitted')
    else:
        keep.append('notomitted')
    formatted, pruned = minirst.format(text, textwidth, keep=keep)
    ui.write(formatted)


@command('identify|id',
    [('r', 'rev', '',
      _('identify the specified revision'), _('REV')),
    ('n', 'num', None, _('show local revision number')),
    ('i', 'id', None, _('show global revision id')),
    ('b', 'branch', None, _('show branch')),
    ('t', 'tags', None, _('show tags')),
    ('B', 'bookmarks', None, _('show bookmarks')),
    ] + remoteopts,
    _('[-nibtB] [-r REV] [SOURCE]'),
    optionalrepo=True)
def identify(ui, repo, source=None, rev=None,
             num=None, id=None, branch=None, tags=None, bookmarks=None, **opts):
    """identify the working copy or specified revision

    Print a summary identifying the repository state at REV using one or
    two parent hash identifiers, followed by a "+" if the working
    directory has uncommitted changes, the branch name (if not default),
    a list of tags, and a list of bookmarks.

    When REV is not given, print a summary of the current state of the
    repository.

    Specifying a path to a repository root or Mercurial bundle will
    cause lookup to operate on that repository/bundle.

    .. container:: verbose

      Examples:

      - generate a build identifier for the working directory::

          hg id --id > build-id.dat

      - find the revision corresponding to a tag::

          hg id -n -r 1.3

      - check the most recent revision of a remote repository::

          hg id -r tip http://selenic.com/hg/

    Returns 0 if successful.
    """

    if not repo and not source:
        raise util.Abort(_("there is no Mercurial repository here "
                           "(.hg not found)"))

    hexfunc = ui.debugflag and hex or short
    default = not (num or id or branch or tags or bookmarks)
    output = []
    revs = []

    if source:
        source, branches = hg.parseurl(ui.expandpath(source))
        peer = hg.peer(repo or ui, opts, source) # only pass ui when no repo
        repo = peer.local()
        revs, checkout = hg.addbranchrevs(repo, peer, branches, None)

    if not repo:
        if num or branch or tags:
            raise util.Abort(
                _("can't query remote revision number, branch, or tags"))
        if not rev and revs:
            rev = revs[0]
        if not rev:
            rev = "tip"

        remoterev = peer.lookup(rev)
        if default or id:
            output = [hexfunc(remoterev)]

        def getbms():
            bms = []

            if 'bookmarks' in peer.listkeys('namespaces'):
                hexremoterev = hex(remoterev)
                bms = [bm for bm, bmr in peer.listkeys('bookmarks').iteritems()
                       if bmr == hexremoterev]

            return sorted(bms)

        if bookmarks:
            output.extend(getbms())
        elif default and not ui.quiet:
            # multiple bookmarks for a single parent separated by '/'
            bm = '/'.join(getbms())
            if bm:
                output.append(bm)
    else:
        if not rev:
            ctx = repo[None]
            parents = ctx.parents()
            changed = ""
            if default or id or num:
                if (util.any(repo.status())
                    or util.any(ctx.sub(s).dirty() for s in ctx.substate)):
                    changed = '+'
            if default or id:
                output = ["%s%s" %
                  ('+'.join([hexfunc(p.node()) for p in parents]), changed)]
            if num:
                output.append("%s%s" %
                  ('+'.join([str(p.rev()) for p in parents]), changed))
        else:
            ctx = scmutil.revsingle(repo, rev)
            if default or id:
                output = [hexfunc(ctx.node())]
            if num:
                output.append(str(ctx.rev()))

        if default and not ui.quiet:
            b = ctx.branch()
            if b != 'default':
                output.append("(%s)" % b)

            # multiple tags for a single parent separated by '/'
            t = '/'.join(ctx.tags())
            if t:
                output.append(t)

            # multiple bookmarks for a single parent separated by '/'
            bm = '/'.join(ctx.bookmarks())
            if bm:
                output.append(bm)
        else:
            if branch:
                output.append(ctx.branch())

            if tags:
                output.extend(ctx.tags())

            if bookmarks:
                output.extend(ctx.bookmarks())

    ui.write("%s\n" % ' '.join(output))

@command('import|patch',
    [('p', 'strip', 1,
      _('directory strip option for patch. This has the same '
        'meaning as the corresponding patch option'), _('NUM')),
    ('b', 'base', '', _('base path (DEPRECATED)'), _('PATH')),
    ('e', 'edit', False, _('invoke editor on commit messages')),
    ('f', 'force', None,
     _('skip check for outstanding uncommitted changes (DEPRECATED)')),
    ('', 'no-commit', None,
     _("don't commit, just update the working directory")),
    ('', 'bypass', None,
     _("apply patch without touching the working directory")),
    ('', 'partial', None,
     _('commit even if some hunks fail')),
    ('', 'exact', None,
     _('apply patch to the nodes from which it was generated')),
    ('', 'import-branch', None,
     _('use any branch information in patch (implied by --exact)'))] +
    commitopts + commitopts2 + similarityopts,
    _('[OPTION]... PATCH...'))
def import_(ui, repo, patch1=None, *patches, **opts):
    """import an ordered set of patches

    Import a list of patches and commit them individually (unless
    --no-commit is specified).

    Because import first applies changes to the working directory,
    import will abort if there are outstanding changes.

    You can import a patch straight from a mail message. Even patches
    as attachments work (to use the body part, it must have type
    text/plain or text/x-patch). The From and Subject headers of the
    email message are used as the default committer and commit message.
    All text/plain body parts before the first diff are added to the
    commit message.

    If the imported patch was generated by :hg:`export`, user and
    description from patch override values from message headers and
    body. Values given on command line with -m/--message and -u/--user
    override these.

    If --exact is specified, import will set the working directory to
    the parent of each patch before applying it, and will abort if the
    resulting changeset has a different ID than the one recorded in
    the patch. This may happen due to character set problems or other
    deficiencies in the text patch format.

    Use --bypass to apply and commit patches directly to the
    repository, not touching the working directory. Without --exact,
    patches will be applied on top of the working directory parent
    revision.

    With -s/--similarity, hg will attempt to discover renames and
    copies in the patch in the same way as :hg:`addremove`.

    Use --partial to ensure a changeset will be created from the patch
    even if some hunks fail to apply. Hunks that fail to apply will be
    written to a <target-file>.rej file. Conflicts can then be resolved
    by hand before :hg:`commit --amend` is run to update the created
    changeset. This flag exists to let people import patches that
    partially apply without losing the associated metadata (author,
    date, description, ...). Note that when none of the hunks apply
    cleanly, :hg:`import --partial` will create an empty changeset,
    importing only the patch metadata.

    To read a patch from standard input, use "-" as the patch name. If
    a URL is specified, the patch will be downloaded from it.
    See :hg:`help dates` for a list of formats valid for -d/--date.

    .. container:: verbose

      Examples:

      - import a traditional patch from a website and detect renames::

          hg import -s 80 http://example.com/bugfix.patch

      - import a changeset from an hgweb server::

          hg import http://www.selenic.com/hg/rev/5ca8c111e9aa

      - import all the patches in a Unix-style mbox::

          hg import incoming-patches.mbox

      - attempt to exactly restore an exported changeset (not always
        possible)::

          hg import --exact proposed-fix.patch

    Returns 0 on success, 1 on partial success (see --partial).
    """

    if not patch1:
        raise util.Abort(_('need at least one patch to import'))

    patches = (patch1,) + patches

    date = opts.get('date')
    if date:
        opts['date'] = util.parsedate(date)

    update = not opts.get('bypass')
    if not update and opts.get('no_commit'):
        raise util.Abort(_('cannot use --no-commit with --bypass'))
    try:
        sim = float(opts.get('similarity') or 0)
    except ValueError:
        raise util.Abort(_('similarity must be a number'))
    if sim < 0 or sim > 100:
        raise util.Abort(_('similarity must be between 0 and 100'))
    if sim and not update:
        raise util.Abort(_('cannot use --similarity with --bypass'))
    if opts.get('exact') and opts.get('edit'):
        raise util.Abort(_('cannot use --exact with --edit'))

    if update:
        cmdutil.checkunfinished(repo)
    if (opts.get('exact') or not opts.get('force')) and update:
        cmdutil.bailifchanged(repo)

    base = opts["base"]
    wlock = lock = tr = None
    msgs = []
    ret = 0

    try:
        try:
            wlock = repo.wlock()
            if not opts.get('no_commit'):
                lock = repo.lock()
                tr = repo.transaction('import')
            parents = repo.parents()
            for patchurl in patches:
                if patchurl == '-':
                    ui.status(_('applying patch from stdin\n'))
                    patchfile = ui.fin
                    patchurl = 'stdin'      # for error message
                else:
                    patchurl = os.path.join(base, patchurl)
                    ui.status(_('applying %s\n') % patchurl)
                    patchfile = hg.openpath(ui, patchurl)

                haspatch = False
                for hunk in patch.split(patchfile):
                    (msg, node, rej) = cmdutil.tryimportone(ui, repo, hunk,
                                                            parents, opts,
                                                            msgs, hg.clean)
                    if msg:
                        haspatch = True
                        ui.note(msg + '\n')
                    if update or opts.get('exact'):
                        parents = repo.parents()
                    else:
                        parents = [repo[node]]
                    if rej:
                        ui.write_err(_("patch applied partially\n"))
                        ui.write_err(_("(fix the .rej files and run "
                                       "`hg commit --amend`)\n"))
                        ret = 1
                        break

                if not haspatch:
                    raise util.Abort(_('%s: no diffs found') % patchurl)

            if tr:
                tr.close()
            if msgs:
                repo.savecommitmessage('\n* * *\n'.join(msgs))
            return ret
        except: # re-raises
            # wlock.release() indirectly calls dirstate.write(): since
            # we're crashing, we do not want to change the working dir
            # parent after all, so make sure it writes nothing
            repo.dirstate.invalidate()
            raise
    finally:
        if tr:
            tr.release()
        release(lock, wlock)

@command('incoming|in',
    [('f', 'force', None,
      _('run even if remote repository is unrelated')),
    ('n', 'newest-first', None, _('show newest record first')),
    ('', 'bundle', '',
     _('file to store the bundles into'), _('FILE')),
    ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
    ('B', 'bookmarks', False, _("compare bookmarks")),
    ('b', 'branch', [],
     _('a specific branch you would like to pull'), _('BRANCH')),
    ] + logopts + remoteopts + subrepoopts,
    _('[-p] [-n] [-M] [-f] [-r REV]... [--bundle FILENAME] [SOURCE]'))
def incoming(ui, repo, source="default", **opts):
    """show new changesets found in source

    Show new changesets found in the specified path/URL or the default
    pull location. These are the changesets that would have been pulled
    if a pull was requested at the time you issued this command.

    For a remote repository, using --bundle avoids downloading the
    changesets twice if the incoming command is followed by a pull.

    See pull for valid source format details.

    .. container:: verbose

      Examples:

      - show incoming changes with patches and full description::

          hg incoming -vp

      - show incoming changes excluding merges, store a bundle::

          hg in -vpM --bundle incoming.hg
          hg pull incoming.hg

      - briefly list changes inside a bundle::

          hg in changes.hg -T "{desc|firstline}\\n"

    Returns 0 if there are incoming changes, 1 otherwise.
    """
    if opts.get('graph'):
        cmdutil.checkunsupportedgraphflags([], opts)
        def display(other, chlist, displayer):
            revdag = cmdutil.graphrevs(other, chlist, opts)
            showparents = [ctx.node() for ctx in repo[None].parents()]
            cmdutil.displaygraph(ui, revdag, displayer, showparents,
                                 graphmod.asciiedges)

        hg._incoming(display, lambda: 1, ui, repo, source, opts, buffered=True)
        return 0

    if opts.get('bundle') and opts.get('subrepos'):
        raise util.Abort(_('cannot combine --bundle and --subrepos'))

    if opts.get('bookmarks'):
        source, branches = hg.parseurl(ui.expandpath(source),
                                       opts.get('branch'))
        other = hg.peer(repo, opts, source)
        if 'bookmarks' not in other.listkeys('namespaces'):
            ui.warn(_("remote doesn't support bookmarks\n"))
            return 0
        ui.status(_('comparing with %s\n') % util.hidepassword(source))
        return bookmarks.diff(ui, repo, other)

    repo._subtoppath = ui.expandpath(source)
    try:
        return hg.incoming(ui, repo, source, opts)
    finally:
        del repo._subtoppath


@command('^init', remoteopts, _('[-e CMD] [--remotecmd CMD] [DEST]'),
         norepo=True)
def init(ui, dest=".", **opts):
    """create a new repository in the given directory

    Initialize a new repository in the given directory. If the given
    directory does not exist, it will be created.

    If no directory is given, the current directory is used.

    It is possible to specify an ``ssh://`` URL as the destination.
    See :hg:`help urls` for more information.

    Returns 0 on success.
    """
    hg.peer(ui, opts, ui.expandpath(dest), create=True)

@command('locate',
    [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
    ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
    ('f', 'fullpath', None, _('print complete paths from the filesystem root')),
    ] + walkopts,
    _('[OPTION]... [PATTERN]...'))
def locate(ui, repo, *pats, **opts):
    """locate files matching specific patterns

    Print files under Mercurial control in the working directory whose
    names match the given patterns.

    By default, this command searches all directories in the working
    directory. To search just the current directory and its
    subdirectories, use "--include .".

    If no patterns are given to match, this command prints the names
    of all files under Mercurial control in the working directory.

    If you want to feed the output of this command into the "xargs"
    command, use the -0 option to both this command and "xargs". This
    will avoid the problem of "xargs" treating single filenames that
    contain whitespace as multiple filenames.

    Returns 0 if a match is found, 1 otherwise.
    """
    end = opts.get('print0') and '\0' or '\n'
    rev = scmutil.revsingle(repo, opts.get('rev'), None).node()

    ret = 1
    ctx = repo[rev]
    m = scmutil.match(ctx, pats, opts, default='relglob')
    m.bad = lambda x, y: False

    for abs in ctx.matches(m):
        if opts.get('fullpath'):
            ui.write(repo.wjoin(abs), end)
        else:
            ui.write(((pats and m.rel(abs)) or abs), end)
        ret = 0

    return ret

4110 4111 @command('^log|history',
4111 4112 [('f', 'follow', None,
4112 4113 _('follow changeset history, or file history across copies and renames')),
4113 4114 ('', 'follow-first', None,
     _('only follow the first parent of merge changesets (DEPRECATED)')),
    ('d', 'date', '', _('show revisions matching date spec'), _('DATE')),
    ('C', 'copies', None, _('show copied files')),
    ('k', 'keyword', [],
     _('do case-insensitive search for a given text'), _('TEXT')),
    ('r', 'rev', [], _('show the specified revision or range'), _('REV')),
    ('', 'removed', None, _('include revisions where files were removed')),
    ('m', 'only-merges', None, _('show only merges (DEPRECATED)')),
    ('u', 'user', [], _('revisions committed by user'), _('USER')),
    ('', 'only-branch', [],
     _('show only changesets within the given named branch (DEPRECATED)'),
     _('BRANCH')),
    ('b', 'branch', [],
     _('show changesets within the given named branch'), _('BRANCH')),
    ('P', 'prune', [],
     _('do not display revision or any of its ancestors'), _('REV')),
    ] + logopts + walkopts,
    _('[OPTION]... [FILE]'),
    inferrepo=True)
def log(ui, repo, *pats, **opts):
    """show revision history of entire repository or files

    Print the revision history of the specified files or the entire
    project.

    If no revision range is specified, the default is ``tip:0`` unless
    --follow is set, in which case the working directory parent is
    used as the starting revision.

    File history is shown without following rename or copy history of
    files. Use -f/--follow with a filename to follow history across
    renames and copies. --follow without a filename will only show
    ancestors or descendants of the starting revision.

    By default this command prints revision number and changeset id,
    tags, non-trivial parents, user, date and time, and a summary for
    each commit. When the -v/--verbose switch is used, the list of
    changed files and full commit message are shown.

    With --graph the revisions are shown as an ASCII art DAG with the most
    recent changeset at the top.
    'o' is a changeset, '@' is a working directory parent, 'x' is obsolete,
    and '+' represents a fork where the changeset from the lines below is a
    parent of the 'o' merge on the same line.

    .. note::

       log -p/--patch may generate unexpected diff output for merge
       changesets, as it will only compare the merge changeset against
       its first parent. Also, only files different from BOTH parents
       will appear in files:.

    .. note::

       for performance reasons, log FILE may omit duplicate changes
       made on branches and will not show deletions. To see all
       changes including duplicates and deletions, use the --removed
       switch.

    .. container:: verbose

      Some examples:

      - changesets with full descriptions and file lists::

          hg log -v

      - changesets ancestral to the working directory::

          hg log -f

      - last 10 commits on the current branch::

          hg log -l 10 -b .

      - changesets showing all modifications of a file, including removals::

          hg log --removed file.c

      - all changesets that touch a directory, with diffs, excluding merges::

          hg log -Mp lib/

      - all revision numbers that match a keyword::

          hg log -k bug --template "{rev}\n"

      - list available log templates::

          hg log -T list

      - check if a given changeset is included in a tagged release::

          hg log -r "a21ccf and ancestor(1.9)"

      - find all changesets by some user in a date range::

          hg log -k alice -d "may 2008 to jul 2008"

      - summary of all changesets after the last tag::

          hg log -r "last(tagged())::" --template "{desc|firstline}\n"

    See :hg:`help dates` for a list of formats valid for -d/--date.

    See :hg:`help revisions` and :hg:`help revsets` for more about
    specifying revisions.

    See :hg:`help templates` for more about pre-packaged styles and
    specifying custom templates.

    Returns 0 on success.
    """
    if opts.get('graph'):
        return cmdutil.graphlog(ui, repo, *pats, **opts)

    revs, expr, filematcher = cmdutil.getlogrevs(repo, pats, opts)
    limit = cmdutil.loglimit(opts)
    count = 0

    getrenamed = None
    if opts.get('copies'):
        endrev = None
        if opts.get('rev'):
            endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
        getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)

    displayer = cmdutil.show_changeset(ui, repo, opts, buffered=True)
    for rev in revs:
        if count == limit:
            break
        ctx = repo[rev]
        copies = None
        if getrenamed is not None and rev:
            copies = []
            for fn in ctx.files():
                rename = getrenamed(fn, rev)
                if rename:
                    copies.append((fn, rename[0]))
        revmatchfn = filematcher and filematcher(ctx.rev()) or None
        displayer.show(ctx, copies=copies, matchfn=revmatchfn)
        if displayer.flush(rev):
            count += 1

    displayer.close()

@command('manifest',
    [('r', 'rev', '', _('revision to display'), _('REV')),
     ('', 'all', False, _("list files from all revisions"))],
    _('[-r REV]'))
def manifest(ui, repo, node=None, rev=None, **opts):
    """output the current or given revision of the project manifest

    Print a list of version controlled files for the given revision.
    If no revision is given, the first parent of the working directory
    is used, or the null revision if no revision is checked out.

    With -v, print file permissions, symlink and executable bits.
    With --debug, print file revision hashes.

    If option --all is specified, the list of all files from all revisions
    is printed. This includes deleted and renamed files.

    Returns 0 on success.
    """

    fm = ui.formatter('manifest', opts)

    if opts.get('all'):
        if rev or node:
            raise util.Abort(_("can't specify a revision with --all"))

        res = []
        prefix = "data/"
        suffix = ".i"
        plen = len(prefix)
        slen = len(suffix)
        lock = repo.lock()
        try:
            for fn, b, size in repo.store.datafiles():
                if size != 0 and fn[-slen:] == suffix and fn[:plen] == prefix:
                    res.append(fn[plen:-slen])
        finally:
            lock.release()
        for f in res:
            fm.startitem()
            fm.write("path", '%s\n', f)
        fm.end()
        return

    if rev and node:
        raise util.Abort(_("please specify just one revision"))

    if not node:
        node = rev

    char = {'l': '@', 'x': '*', '': ''}
    mode = {'l': '644', 'x': '755', '': '644'}
    ctx = scmutil.revsingle(repo, node)
    mf = ctx.manifest()
    for f in ctx:
        fm.startitem()
        fl = ctx[f].flags()
        fm.condwrite(ui.debugflag, 'hash', '%s ', hex(mf[f]))
        fm.condwrite(ui.verbose, 'mode type', '%s %1s ', mode[fl], char[fl])
        fm.write('path', '%s\n', f)
    fm.end()

@command('^merge',
    [('f', 'force', None,
      _('force a merge including outstanding changes (DEPRECATED)')),
     ('r', 'rev', '', _('revision to merge'), _('REV')),
     ('P', 'preview', None,
      _('review revisions to merge (no merge is performed)'))
     ] + mergetoolopts,
    _('[-P] [-f] [[-r] REV]'))
def merge(ui, repo, node=None, **opts):
    """merge working directory with another revision

    The current working directory is updated with all changes made in
    the requested revision since the last common predecessor revision.

    Files that changed between either parent are marked as changed for
    the next commit and a commit must be performed before any further
    updates to the repository are allowed. The next commit will have
    two parents.

    ``--tool`` can be used to specify the merge tool used for file
    merges. It overrides the HGMERGE environment variable and your
    configuration files. See :hg:`help merge-tools` for options.

    If no revision is specified, the working directory's parent is a
    head revision, and the current branch contains exactly one other
    head, then the other head is merged with by default. Otherwise, an
    explicit revision with which to merge must be provided.

    :hg:`resolve` must be used to resolve unresolved files.

    To undo an uncommitted merge, use :hg:`update --clean .` which
    will check out a clean copy of the original merge parent, losing
    all changes.

    Returns 0 on success, 1 if there are unresolved files.
    """

    if opts.get('rev') and node:
        raise util.Abort(_("please specify just one revision"))
    if not node:
        node = opts.get('rev')

    if node:
        node = scmutil.revsingle(repo, node).node()

    if not node and repo._bookmarkcurrent:
        bmheads = repo.bookmarkheads(repo._bookmarkcurrent)
        curhead = repo[repo._bookmarkcurrent].node()
        if len(bmheads) == 2:
            if curhead == bmheads[0]:
                node = bmheads[1]
            else:
                node = bmheads[0]
        elif len(bmheads) > 2:
            raise util.Abort(_("multiple matching bookmarks to merge - "
                               "please merge with an explicit rev or bookmark"),
                             hint=_("run 'hg heads' to see all heads"))
        elif len(bmheads) <= 1:
            raise util.Abort(_("no matching bookmark to merge - "
                               "please merge with an explicit rev or bookmark"),
                             hint=_("run 'hg heads' to see all heads"))

    if not node and not repo._bookmarkcurrent:
        branch = repo[None].branch()
        bheads = repo.branchheads(branch)
        nbhs = [bh for bh in bheads if not repo[bh].bookmarks()]

        if len(nbhs) > 2:
            raise util.Abort(_("branch '%s' has %d heads - "
                               "please merge with an explicit rev")
                             % (branch, len(bheads)),
                             hint=_("run 'hg heads .' to see heads"))

        parent = repo.dirstate.p1()
        if len(nbhs) <= 1:
            if len(bheads) > 1:
                raise util.Abort(_("heads are bookmarked - "
                                   "please merge with an explicit rev"),
                                 hint=_("run 'hg heads' to see all heads"))
            if len(repo.heads()) > 1:
                raise util.Abort(_("branch '%s' has one head - "
                                   "please merge with an explicit rev")
                                 % branch,
                                 hint=_("run 'hg heads' to see all heads"))
            msg, hint = _('nothing to merge'), None
            if parent != repo.lookup(branch):
                hint = _("use 'hg update' instead")
            raise util.Abort(msg, hint=hint)

        if parent not in bheads:
            raise util.Abort(_('working directory not at a head revision'),
                             hint=_("use 'hg update' or merge with an "
                                    "explicit revision"))
        if parent == nbhs[0]:
            node = nbhs[-1]
        else:
            node = nbhs[0]

    if opts.get('preview'):
        # find nodes that are ancestors of p2 but not of p1
        p1 = repo.lookup('.')
        p2 = repo.lookup(node)
        nodes = repo.changelog.findmissing(common=[p1], heads=[p2])

        displayer = cmdutil.show_changeset(ui, repo, opts)
        for node in nodes:
            displayer.show(repo[node])
        displayer.close()
        return 0

    try:
        # ui.forcemerge is an internal variable, do not document
        repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''), 'merge')
        return hg.merge(repo, node, force=opts.get('force'))
    finally:
        ui.setconfig('ui', 'forcemerge', '', 'merge')

@command('outgoing|out',
    [('f', 'force', None, _('run even when the destination is unrelated')),
     ('r', 'rev', [],
      _('a changeset intended to be included in the destination'), _('REV')),
     ('n', 'newest-first', None, _('show newest record first')),
     ('B', 'bookmarks', False, _('compare bookmarks')),
     ('b', 'branch', [], _('a specific branch you would like to push'),
      _('BRANCH')),
     ] + logopts + remoteopts + subrepoopts,
    _('[-M] [-p] [-n] [-f] [-r REV]... [DEST]'))
def outgoing(ui, repo, dest=None, **opts):
    """show changesets not found in the destination

    Show changesets not found in the specified destination repository
    or the default push location. These are the changesets that would
    be pushed if a push was requested.

    See pull for details of valid destination formats.

    Returns 0 if there are outgoing changes, 1 otherwise.
    """
    if opts.get('graph'):
        cmdutil.checkunsupportedgraphflags([], opts)
        o, other = hg._outgoing(ui, repo, dest, opts)
        if not o:
            cmdutil.outgoinghooks(ui, repo, other, opts, o)
            return

        revdag = cmdutil.graphrevs(repo, o, opts)
        displayer = cmdutil.show_changeset(ui, repo, opts, buffered=True)
        showparents = [ctx.node() for ctx in repo[None].parents()]
        cmdutil.displaygraph(ui, revdag, displayer, showparents,
                             graphmod.asciiedges)
        cmdutil.outgoinghooks(ui, repo, other, opts, o)
        return 0

    if opts.get('bookmarks'):
        dest = ui.expandpath(dest or 'default-push', dest or 'default')
        dest, branches = hg.parseurl(dest, opts.get('branch'))
        other = hg.peer(repo, opts, dest)
        if 'bookmarks' not in other.listkeys('namespaces'):
            ui.warn(_("remote doesn't support bookmarks\n"))
            return 0
        ui.status(_('comparing with %s\n') % util.hidepassword(dest))
        return bookmarks.diff(ui, other, repo)

    repo._subtoppath = ui.expandpath(dest or 'default-push', dest or 'default')
    try:
        return hg.outgoing(ui, repo, dest, opts)
    finally:
        del repo._subtoppath

@command('parents',
    [('r', 'rev', '', _('show parents of the specified revision'), _('REV')),
     ] + templateopts,
    _('[-r REV] [FILE]'),
    inferrepo=True)
def parents(ui, repo, file_=None, **opts):
    """show the parents of the working directory or revision

    Print the working directory's parent revisions. If a revision is
    given via -r/--rev, the parent of that revision will be printed.
    If a file argument is given, the revision in which the file was
    last changed (before the working directory revision or the
    argument to --rev if given) is printed.

    Returns 0 on success.
    """

    ctx = scmutil.revsingle(repo, opts.get('rev'), None)

    if file_:
        m = scmutil.match(ctx, (file_,), opts)
        if m.anypats() or len(m.files()) != 1:
            raise util.Abort(_('can only specify an explicit filename'))
        file_ = m.files()[0]
        filenodes = []
        for cp in ctx.parents():
            if not cp:
                continue
            try:
                filenodes.append(cp.filenode(file_))
            except error.LookupError:
                pass
        if not filenodes:
            raise util.Abort(_("'%s' not found in manifest!") % file_)
        p = []
        for fn in filenodes:
            fctx = repo.filectx(file_, fileid=fn)
            p.append(fctx.node())
    else:
        p = [cp.node() for cp in ctx.parents()]

    displayer = cmdutil.show_changeset(ui, repo, opts)
    for n in p:
        if n != nullid:
            displayer.show(repo[n])
    displayer.close()

@command('paths', [], _('[NAME]'), optionalrepo=True)
def paths(ui, repo, search=None):
    """show aliases for remote repositories

    Show definition of symbolic path name NAME. If no name is given,
    show definition of all available names.

    Option -q/--quiet suppresses all output when searching for NAME
    and shows only the path names when listing all definitions.

    Path names are defined in the [paths] section of your
    configuration file and in ``/etc/mercurial/hgrc``. If run inside a
    repository, ``.hg/hgrc`` is used, too.

    The path names ``default`` and ``default-push`` have a special
    meaning. When performing a push or pull operation, they are used
    as fallbacks if no location is specified on the command-line.
    When ``default-push`` is set, it will be used for push and
    ``default`` will be used for pull; otherwise ``default`` is used
    as the fallback for both. When cloning a repository, the clone
    source is written as ``default`` in ``.hg/hgrc``. Note that
    ``default`` and ``default-push`` apply to all inbound (e.g.
    :hg:`incoming`) and outbound (e.g. :hg:`outgoing`, :hg:`email` and
    :hg:`bundle`) operations.

    See :hg:`help urls` for more information.

    Returns 0 on success.
    """
    if search:
        for name, path in ui.configitems("paths"):
            if name == search:
                ui.status("%s\n" % util.hidepassword(path))
                return
        if not ui.quiet:
            ui.warn(_("not found!\n"))
        return 1
    else:
        for name, path in ui.configitems("paths"):
            if ui.quiet:
                ui.write("%s\n" % name)
            else:
                ui.write("%s = %s\n" % (name, util.hidepassword(path)))

@command('phase',
    [('p', 'public', False, _('set changeset phase to public')),
     ('d', 'draft', False, _('set changeset phase to draft')),
     ('s', 'secret', False, _('set changeset phase to secret')),
     ('f', 'force', False, _('allow to move boundary backward')),
     ('r', 'rev', [], _('target revision'), _('REV')),
    ],
    _('[-p|-d|-s] [-f] [-r] REV...'))
def phase(ui, repo, *revs, **opts):
    """set or show the current phase name

    With no argument, show the phase name of specified revisions.

    With one of -p/--public, -d/--draft or -s/--secret, change the
    phase value of the specified revisions.

    Unless -f/--force is specified, :hg:`phase` won't move changesets from a
    lower phase to a higher phase. Phases are ordered as follows::

        public < draft < secret

    Returns 0 on success, 1 if no phases were changed or some could not
    be changed.
    """
    # search for a unique phase argument
    targetphase = None
    for idx, name in enumerate(phases.phasenames):
        if opts[name]:
            if targetphase is not None:
                raise util.Abort(_('only one phase can be specified'))
            targetphase = idx

    # look for specified revision
    revs = list(revs)
    revs.extend(opts['rev'])
    if not revs:
        raise util.Abort(_('no revisions specified'))

    revs = scmutil.revrange(repo, revs)

    lock = None
    ret = 0
    if targetphase is None:
        # display
        for r in revs:
            ctx = repo[r]
            ui.write('%i: %s\n' % (ctx.rev(), ctx.phasestr()))
    else:
        tr = None
        lock = repo.lock()
        try:
            tr = repo.transaction("phase")
            # set phase
            if not revs:
                raise util.Abort(_('empty revision set'))
            nodes = [repo[r].node() for r in revs]
            olddata = repo._phasecache.getphaserevs(repo)[:]
            phases.advanceboundary(repo, tr, targetphase, nodes)
            if opts['force']:
                phases.retractboundary(repo, tr, targetphase, nodes)
            tr.close()
        finally:
            if tr is not None:
                tr.release()
            lock.release()
        # moving revisions from public to draft may hide them
        # we have to check the result on an unfiltered repository
        unfi = repo.unfiltered()
        newdata = repo._phasecache.getphaserevs(unfi)
        changes = sum(o != newdata[i] for i, o in enumerate(olddata))
        cl = unfi.changelog
        rejected = [n for n in nodes
                    if newdata[cl.rev(n)] < targetphase]
        if rejected:
            ui.warn(_('cannot move %i changesets to a higher '
                      'phase, use --force\n') % len(rejected))
            ret = 1
        if changes:
            msg = _('phase changed for %i changesets\n') % changes
            if ret:
                ui.status(msg)
            else:
                ui.note(msg)
        else:
            ui.warn(_('no phases changed\n'))
            ret = 1
    return ret

def postincoming(ui, repo, modheads, optupdate, checkout):
    if modheads == 0:
        return
    if optupdate:
        checkout, movemarkfrom = bookmarks.calculateupdate(ui, repo, checkout)
        try:
            ret = hg.update(repo, checkout)
        except util.Abort, inst:
            ui.warn(_("not updating: %s\n") % str(inst))
            if inst.hint:
                ui.warn(_("(%s)\n") % inst.hint)
            return 0
        if not ret and not checkout:
            if bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
                ui.status(_("updating bookmark %s\n") % repo._bookmarkcurrent)
        return ret
    if modheads > 1:
        currentbranchheads = len(repo.branchheads())
        if currentbranchheads == modheads:
            ui.status(_("(run 'hg heads' to see heads, 'hg merge' to merge)\n"))
        elif currentbranchheads > 1:
            ui.status(_("(run 'hg heads .' to see heads, 'hg merge' to "
                        "merge)\n"))
        else:
            ui.status(_("(run 'hg heads' to see heads)\n"))
    else:
        ui.status(_("(run 'hg update' to get a working copy)\n"))

@command('^pull',
    [('u', 'update', None,
      _('update to new branch head if changesets were pulled')),
     ('f', 'force', None, _('run even when remote repository is unrelated')),
     ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
     ('B', 'bookmark', [], _("bookmark to pull"), _('BOOKMARK')),
     ('b', 'branch', [], _('a specific branch you would like to pull'),
      _('BRANCH')),
     ] + remoteopts,
    _('[-u] [-f] [-r REV]... [-e CMD] [--remotecmd CMD] [SOURCE]'))
def pull(ui, repo, source="default", **opts):
    """pull changes from the specified source

    Pull changes from a remote repository to a local one.

    This finds all changes from the repository at the specified path
    or URL and adds them to a local repository (the current one unless
    -R is specified). By default, this does not update the copy of the
    project in the working directory.

    Use :hg:`incoming` if you want to see what would have been added
    by a pull at the time you issued this command. If you then decide
    to add those changes to the repository, you should use :hg:`pull
    -r X` where ``X`` is the last changeset listed by :hg:`incoming`.

    If SOURCE is omitted, the 'default' path will be used.
    See :hg:`help urls` for more information.

    Returns 0 on success, 1 if an update had unresolved files.
    """
    source, branches = hg.parseurl(ui.expandpath(source), opts.get('branch'))
    other = hg.peer(repo, opts, source)
    try:
        ui.status(_('pulling from %s\n') % util.hidepassword(source))
        revs, checkout = hg.addbranchrevs(repo, other, branches,
                                          opts.get('rev'))

        remotebookmarks = other.listkeys('bookmarks')

        if opts.get('bookmark'):
            if not revs:
                revs = []
            for b in opts['bookmark']:
                if b not in remotebookmarks:
                    raise util.Abort(_('remote bookmark %s not found!') % b)
                revs.append(remotebookmarks[b])

        if revs:
            try:
                revs = [other.lookup(rev) for rev in revs]
            except error.CapabilityError:
                err = _("other repository doesn't support revision lookup, "
                        "so a rev cannot be specified.")
                raise util.Abort(err)

        modheads = repo.pull(other, heads=revs, force=opts.get('force'))
        bookmarks.updatefromremote(ui, repo, remotebookmarks, source)
        if checkout:
            checkout = str(repo.changelog.rev(other.lookup(checkout)))
        repo._subtoppath = source
        try:
            ret = postincoming(ui, repo, modheads, opts.get('update'), checkout)

        finally:
            del repo._subtoppath

        # update specified bookmarks
        if opts.get('bookmark'):
            marks = repo._bookmarks
            for b in opts['bookmark']:
                # explicit pull overrides local bookmark if any
                ui.status(_("importing bookmark %s\n") % b)
                marks[b] = repo[remotebookmarks[b]].node()
            marks.write()
    finally:
        other.close()
    return ret

@command('^push',
    [('f', 'force', None, _('force push')),
     ('r', 'rev', [],
      _('a changeset intended to be included in the destination'),
      _('REV')),
     ('B', 'bookmark', [], _("bookmark to push"), _('BOOKMARK')),
     ('b', 'branch', [],
      _('a specific branch you would like to push'), _('BRANCH')),
     ('', 'new-branch', False, _('allow pushing a new branch')),
     ] + remoteopts,
    _('[-f] [-r REV]... [-e CMD] [--remotecmd CMD] [DEST]'))
def push(ui, repo, dest=None, **opts):
    """push changes to the specified destination

    Push changesets from the local repository to the specified
    destination.

    This operation is symmetrical to pull: it is identical to a pull
    in the destination repository from the current one.

    By default, push will not allow creation of new heads at the
    destination, since multiple heads would make it unclear which head
    to use. In this situation, it is recommended to pull and merge
    before pushing.

    Use --new-branch if you want to allow push to create a new named
    branch that is not present at the destination. This allows you to
    only create a new branch without forcing other changes.

    .. note::

       Extra care should be taken with the -f/--force option,
       which will push all new heads on all branches, an action which will
       almost always cause confusion for collaborators.

    If -r/--rev is used, the specified revision and all its ancestors
    will be pushed to the remote repository.

    If -B/--bookmark is used, the specified bookmarked revision, its
    ancestors, and the bookmark will be pushed to the remote
    repository.

    Please see :hg:`help urls` for important details about ``ssh://``
    URLs. If DESTINATION is omitted, a default path will be used.

    Returns 0 if push was successful, 1 if nothing to push.
    """

    if opts.get('bookmark'):
        ui.setconfig('bookmarks', 'pushing', opts['bookmark'], 'push')
        for b in opts['bookmark']:
            # translate -B options to -r so changesets get pushed
            if b in repo._bookmarks:
                opts.setdefault('rev', []).append(b)
            else:
                # if we try to push a deleted bookmark, translate it to null
                # this lets simultaneous -r, -b options continue working
                opts.setdefault('rev', []).append("null")

    dest = ui.expandpath(dest or 'default-push', dest or 'default')
    dest, branches = hg.parseurl(dest, opts.get('branch'))
    ui.status(_('pushing to %s\n') % util.hidepassword(dest))
    revs, checkout = hg.addbranchrevs(repo, repo, branches, opts.get('rev'))
    try:
        other = hg.peer(repo, opts, dest)
    except error.RepoError:
        if dest == "default-push":
            raise util.Abort(_("default repository not configured!"),
                             hint=_('see the "path" section in "hg help config"'))
        else:
            raise

    if revs:
        revs = [repo.lookup(r) for r in scmutil.revrange(repo, revs)]

    repo._subtoppath = dest
    try:
        # push subrepos depth-first for coherent ordering
        c = repo['']
        subs = c.substate # only repos that are committed
        for s in sorted(subs):
            result = c.sub(s).push(opts)
            if result == 0:
                return not result
    finally:
        del repo._subtoppath
    result = repo.push(other, opts.get('force'), revs=revs,
                       newbranch=opts.get('new_branch'))

    result = not result

    if opts.get('bookmark'):
        bresult = bookmarks.pushtoremote(ui, repo, other, opts['bookmark'])
        if bresult == 2:
            return 2
        if not result and bresult:
            result = 2

    return result

@command('recover', [])
def recover(ui, repo):
    """roll back an interrupted transaction

    Recover from an interrupted commit or pull.

    This command tries to fix the repository status after an
    interrupted operation. It should only be necessary when Mercurial
    suggests it.

    Returns 0 if successful, 1 if nothing to recover or verify fails.
    """
    if repo.recover():
        return hg.verify(repo)
    return 1

@command('^remove|rm',
    [('A', 'after', None, _('record delete for missing files')),
     ('f', 'force', None,
      _('remove (and delete) file even if added or modified')),
     ] + walkopts,
    _('[OPTION]... FILE...'),
    inferrepo=True)
def remove(ui, repo, *pats, **opts):
    """remove the specified files on the next commit

    Schedule the indicated files for removal from the current branch.

    This command schedules the files to be removed at the next commit.
    To undo a remove before that, see :hg:`revert`. To undo added
    files, see :hg:`forget`.

    .. container:: verbose

      -A/--after can be used to remove only files that have already
      been deleted, -f/--force can be used to force deletion, and -Af
      can be used to remove files from the next revision without
      deleting them from the working directory.

      The following table details the behavior of remove for different
      file states (columns) and option combinations (rows). The file
      states are Added [A], Clean [C], Modified [M] and Missing [!]
      (as reported by :hg:`status`). The actions are Warn, Remove
      (from branch) and Delete (from disk):

      ========= == == == ==
      opt/state A  C  M  !
      ========= == == == ==
      none      W  RD W  R
      -f        R  RD RD R
      -A        W  W  W  R
      -Af       R  R  R  R
      ========= == == == ==

      Note that remove never deletes files in Added [A] state from the
      working directory, not even if option --force is specified.

    Returns 0 on success, 1 if any warnings encountered.
    """

    ret = 0
    after, force = opts.get('after'), opts.get('force')
    if not pats and not after:
        raise util.Abort(_('no files specified'))

    m = scmutil.match(repo[None], pats, opts)
    s = repo.status(match=m, clean=True)
    modified, added, deleted, clean = s[0], s[1], s[3], s[6]

    # warn about failure to delete explicit files/dirs
    wctx = repo[None]
    for f in m.files():
        if f in repo.dirstate or f in wctx.dirs():
            continue
        if os.path.exists(m.rel(f)):
            if os.path.isdir(m.rel(f)):
                ui.warn(_('not removing %s: no tracked files\n') % m.rel(f))
            else:
                ui.warn(_('not removing %s: file is untracked\n') % m.rel(f))
        # missing files will generate a warning elsewhere
        ret = 1

    if force:
        list = modified + deleted + clean + added
    elif after:
        list = deleted
        for f in modified + added + clean:
            ui.warn(_('not removing %s: file still exists\n') % m.rel(f))
            ret = 1
    else:
        list = deleted + clean
        for f in modified:
            ui.warn(_('not removing %s: file is modified (use -f'
                      ' to force removal)\n') % m.rel(f))
            ret = 1
        for f in added:
            ui.warn(_('not removing %s: file has been marked for add'
                      ' (use forget to undo)\n') % m.rel(f))
            ret = 1

    for f in sorted(list):
        if ui.verbose or not m.exact(f):
            ui.status(_('removing %s\n') % m.rel(f))

    wlock = repo.wlock()
    try:
        if not after:
            for f in list:
                if f in added:
                    continue # we never unlink added files on remove
                util.unlinkpath(repo.wjoin(f), ignoremissing=True)
        repo[None].forget(list)
    finally:
        wlock.release()

    return ret

@command('rename|move|mv',
    [('A', 'after', None, _('record a rename that has already occurred')),
     ('f', 'force', None, _('forcibly copy over an existing managed file')),
    ] + walkopts + dryrunopts,
    _('[OPTION]... SOURCE... DEST'))
def rename(ui, repo, *pats, **opts):
    """rename files; equivalent of copy + remove

    Mark dest as copies of sources; mark sources for deletion. If dest
    is a directory, copies are put in that directory. If dest is a
    file, there can only be one source.

    By default, this command copies the contents of files as they
    exist in the working directory. If invoked with -A/--after, the
    operation is recorded, but no copying is performed.

    This command takes effect at the next commit. To undo a rename
    before that, see :hg:`revert`.

    Returns 0 on success, 1 if errors are encountered.
    """
    wlock = repo.wlock(False)
    try:
        return cmdutil.copy(ui, repo, pats, opts, rename=True)
    finally:
        wlock.release()

@command('resolve',
    [('a', 'all', None, _('select all unresolved files')),
     ('l', 'list', None, _('list state of files needing merge')),
     ('m', 'mark', None, _('mark files as resolved')),
     ('u', 'unmark', None, _('mark files as unresolved')),
     ('n', 'no-status', None, _('hide status prefix'))]
    + mergetoolopts + walkopts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def resolve(ui, repo, *pats, **opts):
    """redo merges or set/view the merge status of files

    Merges with unresolved conflicts are often the result of
    non-interactive merging using the ``internal:merge`` configuration
    setting, or a command-line merge tool like ``diff3``. The resolve
    command is used to manage the files involved in a merge, after
    :hg:`merge` has been run, and before :hg:`commit` is run (i.e. the
    working directory must have two parents). See :hg:`help
    merge-tools` for information on configuring merge tools.

    The resolve command can be used in the following ways:

    - :hg:`resolve [--tool TOOL] FILE...`: attempt to re-merge the specified
      files, discarding any previous merge attempts. Re-merging is not
      performed for files already marked as resolved. Use ``--all/-a``
      to select all unresolved files. ``--tool`` can be used to specify
      the merge tool used for the given files. It overrides the HGMERGE
      environment variable and your configuration files. Previous file
      contents are saved with a ``.orig`` suffix.

    - :hg:`resolve -m [FILE]`: mark a file as having been resolved
      (e.g. after having manually fixed-up the files). The default is
      to mark all unresolved files.

    - :hg:`resolve -u [FILE]...`: mark a file as unresolved. The
      default is to mark all resolved files.

    - :hg:`resolve -l`: list files which had or still have conflicts.
      In the printed list, ``U`` = unresolved and ``R`` = resolved.
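
    For example, a typical conflict session might look like this
    (``foo.c`` is a hypothetical file name)::

      hg merge              # stops with conflicts in foo.c
      hg resolve --list     # shows "U foo.c"
      hg resolve foo.c      # re-run the merge tool for foo.c
      hg resolve -m foo.c   # or mark it resolved after manual fix-up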

    Note that Mercurial will not let you commit files with unresolved
    merge conflicts. You must use :hg:`resolve -m ...` before you can
    commit after a conflicting merge.

    Returns 0 on success, 1 if any files fail a resolve attempt.
    """

    all, mark, unmark, show, nostatus = \
        [opts.get(o) for o in 'all mark unmark list no_status'.split()]

    if (show and (mark or unmark)) or (mark and unmark):
        raise util.Abort(_("too many options specified"))
    if pats and all:
        raise util.Abort(_("can't specify --all and patterns"))
    if not (all or pats or show or mark or unmark):
        raise util.Abort(_('no files or directories specified'),
                         hint=('use --all to remerge all files'))

    wlock = repo.wlock()
    try:
        ms = mergemod.mergestate(repo)

        if not ms.active() and not show:
            raise util.Abort(
                _('resolve command not applicable when not merging'))

        m = scmutil.match(repo[None], pats, opts)
        ret = 0
        didwork = False

        for f in ms:
            if not m(f):
                continue

            didwork = True

            if show:
                if nostatus:
                    ui.write("%s\n" % f)
                else:
                    ui.write("%s %s\n" % (ms[f].upper(), f),
                             label='resolve.' +
                             {'u': 'unresolved', 'r': 'resolved'}[ms[f]])
            elif mark:
                ms.mark(f, "r")
            elif unmark:
                ms.mark(f, "u")
            else:
                wctx = repo[None]

                # backup pre-resolve (merge uses .orig for its own purposes)
                a = repo.wjoin(f)
                util.copyfile(a, a + ".resolve")

                try:
                    # resolve file
                    ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
                                 'resolve')
                    if ms.resolve(f, wctx):
                        ret = 1
                finally:
                    ui.setconfig('ui', 'forcemerge', '', 'resolve')
                    ms.commit()

                # replace filemerge's .orig file with our resolve file
                util.rename(a + ".resolve", a + ".orig")

        ms.commit()

        if not didwork and pats:
            ui.warn(_("arguments do not match paths that need resolving\n"))

    finally:
        wlock.release()

    # Nudge users into finishing an unfinished operation. We don't print
    # this with the list/show operation because we want list/show to remain
    # machine readable.
    if not list(ms.unresolved()) and not show:
        ui.status(_('(no more unresolved files)\n'))

    return ret

@command('revert',
    [('a', 'all', None, _('revert all changes when no arguments given')),
     ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
     ('r', 'rev', '', _('revert to the specified revision'), _('REV')),
     ('C', 'no-backup', None, _('do not save backup copies of files')),
    ] + walkopts + dryrunopts,
    _('[OPTION]... [-r REV] [NAME]...'))
def revert(ui, repo, *pats, **opts):
    """restore files to their checkout state

    .. note::

       To check out earlier revisions, you should use :hg:`update REV`.
       To cancel an uncommitted merge (and lose your changes),
       use :hg:`update --clean .`.

    With no revision specified, revert the specified files or directories
    to the contents they had in the parent of the working directory.
    This restores the contents of files to an unmodified
    state and unschedules adds, removes, copies, and renames. If the
    working directory has two parents, you must explicitly specify a
    revision.

    Using the -r/--rev or -d/--date options, revert the given files or
    directories to their states as of a specific revision. Because
    revert does not change the working directory parents, this will
    cause these files to appear modified. This can be helpful to "back
    out" some or all of an earlier change. See :hg:`backout` for a
    related method.
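
    For example, to restore a single file (hypothetical name ``foo.c``)
    to its state as of revision 42::

      hg revert -r 42 foo.c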

    Modified files are saved with a .orig suffix before reverting.
    To disable these backups, use --no-backup.

    See :hg:`help dates` for a list of formats valid for -d/--date.

    Returns 0 on success.
    """

    if opts.get("date"):
        if opts.get("rev"):
            raise util.Abort(_("you can't specify a revision and a date"))
        opts["rev"] = cmdutil.finddate(ui, repo, opts["date"])

    parent, p2 = repo.dirstate.parents()
    if not opts.get('rev') and p2 != nullid:
        # revert after merge is a trap for new users (issue2915)
        raise util.Abort(_('uncommitted merge with no revision specified'),
                         hint=_('use "hg update" or see "hg help revert"'))

    ctx = scmutil.revsingle(repo, opts.get('rev'))

    if not pats and not opts.get('all'):
        msg = _("no files or directories specified")
        if p2 != nullid:
            hint = _("uncommitted merge, use --all to discard all changes,"
                     " or 'hg update -C .' to abort the merge")
            raise util.Abort(msg, hint=hint)
        dirty = util.any(repo.status())
        node = ctx.node()
        if node != parent:
            if dirty:
                hint = _("uncommitted changes, use --all to discard all"
                         " changes, or 'hg update %s' to update") % ctx.rev()
            else:
                hint = _("use --all to revert all files,"
                         " or 'hg update %s' to update") % ctx.rev()
        elif dirty:
            hint = _("uncommitted changes, use --all to discard all changes")
        else:
            hint = _("use --all to revert all files")
        raise util.Abort(msg, hint=hint)

    return cmdutil.revert(ui, repo, ctx, (parent, p2), *pats, **opts)

@command('rollback', dryrunopts +
         [('f', 'force', False, _('ignore safety measures'))])
def rollback(ui, repo, **opts):
    """roll back the last transaction (DANGEROUS) (DEPRECATED)

    Please use :hg:`commit --amend` instead of rollback to correct
    mistakes in the last commit.

    This command should be used with care. There is only one level of
    rollback, and there is no way to undo a rollback. It will also
    restore the dirstate at the time of the last transaction, losing
    any dirstate changes since that time. This command does not alter
    the working directory.

    Transactions are used to encapsulate the effects of all commands
    that create new changesets or propagate existing changesets into a
    repository.

    .. container:: verbose

      For example, the following commands are transactional, and their
      effects can be rolled back:

      - commit
      - import
      - pull
      - push (with this repository as the destination)
      - unbundle

      To avoid permanent data loss, rollback will refuse to rollback a
      commit transaction if it isn't checked out. Use --force to
      override this protection.

    This command is not intended for use on public repositories. Once
    changes are visible for pull by other users, rolling a transaction
    back locally is ineffective (someone else may already have pulled
    the changes). Furthermore, a race is possible with readers of the
    repository; for example an in-progress pull from the repository
    may fail if a rollback is performed.

    Returns 0 on success, 1 if no rollback data is available.
    """
    return repo.rollback(dryrun=opts.get('dry_run'),
                         force=opts.get('force'))

@command('root', [])
def root(ui, repo):
    """print the root (top) of the current working directory

    Print the root directory of the current repository.

    Returns 0 on success.
    """
    ui.write(repo.root + "\n")

@command('^serve',
    [('A', 'accesslog', '', _('name of access log file to write to'),
      _('FILE')),
     ('d', 'daemon', None, _('run server in background')),
     ('', 'daemon-pipefds', '', _('used internally by daemon mode'), _('NUM')),
     ('E', 'errorlog', '', _('name of error log file to write to'), _('FILE')),
     # use string type, then we can check if something was passed
     ('p', 'port', '', _('port to listen on (default: 8000)'), _('PORT')),
     ('a', 'address', '', _('address to listen on (default: all interfaces)'),
      _('ADDR')),
     ('', 'prefix', '', _('prefix path to serve from (default: server root)'),
      _('PREFIX')),
     ('n', 'name', '',
      _('name to show in web pages (default: working directory)'), _('NAME')),
     ('', 'web-conf', '',
      _('name of the hgweb config file (see "hg help hgweb")'), _('FILE')),
     ('', 'webdir-conf', '', _('name of the hgweb config file (DEPRECATED)'),
      _('FILE')),
     ('', 'pid-file', '', _('name of file to write process ID to'), _('FILE')),
     ('', 'stdio', None, _('for remote clients')),
     ('', 'cmdserver', '', _('for remote clients'), _('MODE')),
     ('t', 'templates', '', _('web templates to use'), _('TEMPLATE')),
     ('', 'style', '', _('template style to use'), _('STYLE')),
     ('6', 'ipv6', None, _('use IPv6 in addition to IPv4')),
     ('', 'certificate', '', _('SSL certificate file'), _('FILE'))],
    _('[OPTION]...'),
    optionalrepo=True)
def serve(ui, repo, **opts):
    """start stand-alone webserver

    Start a local HTTP repository browser and pull server. You can use
    this for ad-hoc sharing and browsing of repositories. It is
    recommended to use a real web server to serve a repository for
    longer periods of time.

    Please note that the server does not implement access control.
    This means that, by default, anybody can read from the server and
    nobody can write to it. Set the ``web.allow_push`` option to ``*``
    to allow everybody to push to the server. You should use a real
    web server if you need to authenticate users.

    By default, the server logs accesses to stdout and errors to
    stderr. Use the -A/--accesslog and -E/--errorlog options to log to
    files.

    To have the server choose a free port number to listen on, specify
    a port number of 0; in this case, the server will print the port
    number it uses.
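
    For example, to serve in the background on a free port, writing the
    process ID and logs to files (hypothetical file names)::

      hg serve -p 0 -d --pid-file hg.pid -A access.log -E error.log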

    Returns 0 on success.
    """

    if opts["stdio"] and opts["cmdserver"]:
        raise util.Abort(_("cannot use --stdio with --cmdserver"))

    if opts["stdio"]:
        if repo is None:
            raise error.RepoError(_("there is no Mercurial repository here"
                                    " (.hg not found)"))
        s = sshserver.sshserver(ui, repo)
        s.serve_forever()

    if opts["cmdserver"]:
        s = commandserver.server(ui, repo, opts["cmdserver"])
        return s.serve()

    # this way we can check if something was given in the command-line
    if opts.get('port'):
        opts['port'] = util.getport(opts.get('port'))

    baseui = repo and repo.baseui or ui
    optlist = ("name templates style address port prefix ipv6"
               " accesslog errorlog certificate encoding")
    for o in optlist.split():
        val = opts.get(o, '')
        if val in (None, ''): # should check against default options instead
            continue
        baseui.setconfig("web", o, val, 'serve')
        if repo and repo.ui != baseui:
            repo.ui.setconfig("web", o, val, 'serve')

    o = opts.get('web_conf') or opts.get('webdir_conf')
    if not o:
        if not repo:
            raise error.RepoError(_("there is no Mercurial repository"
                                    " here (.hg not found)"))
        o = repo

    app = hgweb.hgweb(o, baseui=baseui)
    service = httpservice(ui, app, opts)
    cmdutil.service(opts, initfn=service.init, runfn=service.run)

class httpservice(object):
    def __init__(self, ui, app, opts):
        self.ui = ui
        self.app = app
        self.opts = opts

    def init(self):
        util.setsignalhandler()
        self.httpd = hgweb_server.create_server(self.ui, self.app)

        if self.opts['port'] and not self.ui.verbose:
            return

        if self.httpd.prefix:
            prefix = self.httpd.prefix.strip('/') + '/'
        else:
            prefix = ''

        port = ':%d' % self.httpd.port
        if port == ':80':
            port = ''

        bindaddr = self.httpd.addr
        if bindaddr == '0.0.0.0':
            bindaddr = '*'
        elif ':' in bindaddr: # IPv6
            bindaddr = '[%s]' % bindaddr

        fqaddr = self.httpd.fqaddr
        if ':' in fqaddr:
            fqaddr = '[%s]' % fqaddr
        if self.opts['port']:
            write = self.ui.status
        else:
            write = self.ui.write
        write(_('listening at http://%s%s/%s (bound to %s:%d)\n') %
              (fqaddr, port, prefix, bindaddr, self.httpd.port))
        self.ui.flush() # avoid buffering of status message

    def run(self):
        self.httpd.serve_forever()


@command('^status|st',
    [('A', 'all', None, _('show status of all files')),
     ('m', 'modified', None, _('show only modified files')),
     ('a', 'added', None, _('show only added files')),
     ('r', 'removed', None, _('show only removed files')),
     ('d', 'deleted', None, _('show only deleted (but tracked) files')),
     ('c', 'clean', None, _('show only files without changes')),
     ('u', 'unknown', None, _('show only unknown (not tracked) files')),
     ('i', 'ignored', None, _('show only ignored files')),
     ('n', 'no-status', None, _('hide status prefix')),
     ('C', 'copies', None, _('show source of copied files')),
     ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
     ('', 'rev', [], _('show difference from revision'), _('REV')),
     ('', 'change', '', _('list the changed files of a revision'), _('REV')),
    ] + walkopts + subrepoopts,
    _('[OPTION]... [FILE]...'),
    inferrepo=True)
def status(ui, repo, *pats, **opts):
    """show changed files in the working directory

    Show status of files in the repository. If names are given, only
    files that match are shown. Files that are clean or ignored or
    the source of a copy/move operation, are not listed unless
    -c/--clean, -i/--ignored, -C/--copies or -A/--all are given.
    Unless options described with "show only ..." are given, the
    options -mardu are used.

    Option -q/--quiet hides untracked (unknown and ignored) files
    unless explicitly requested with -u/--unknown or -i/--ignored.

    .. note::

       status may appear to disagree with diff if permissions have
       changed or a merge has occurred. The standard diff format does
       not report permission changes and diff only reports changes
       relative to one merge parent.

    If one revision is given, it is used as the base revision.
    If two revisions are given, the differences between them are
    shown. The --change option can also be used as a shortcut to list
    the changed files of a revision from its first parent.

    The codes used to show the status of files are::

      M = modified
      A = added
      R = removed
      C = clean
      ! = missing (deleted by non-hg command, but still tracked)
      ? = not tracked
      I = ignored
        = origin of the previous file (with --copies)

    .. container:: verbose

      Examples:

      - show changes in the working directory relative to a
        changeset::

          hg status --rev 9353

      - show all changes including copies in an existing changeset::

          hg status --copies --change 9353

      - get a NUL separated list of added files, suitable for xargs::

          hg status -an0

    Returns 0 on success.
    """

    revs = opts.get('rev')
    change = opts.get('change')

    if revs and change:
        msg = _('cannot specify --rev and --change at the same time')
        raise util.Abort(msg)
    elif change:
        node2 = scmutil.revsingle(repo, change, None).node()
        node1 = repo[node2].p1().node()
    else:
        node1, node2 = scmutil.revpair(repo, revs)

    cwd = (pats and repo.getcwd()) or ''
    end = opts.get('print0') and '\0' or '\n'
    copy = {}
    states = 'modified added removed deleted unknown ignored clean'.split()
    show = [k for k in states if opts.get(k)]
    if opts.get('all'):
        show += ui.quiet and (states[:4] + ['clean']) or states
    if not show:
        show = ui.quiet and states[:4] or states[:5]

    stat = repo.status(node1, node2, scmutil.match(repo[node2], pats, opts),
                       'ignored' in show, 'clean' in show, 'unknown' in show,
                       opts.get('subrepos'))
    changestates = zip(states, 'MAR!?IC', stat)

    if (opts.get('all') or opts.get('copies')) and not opts.get('no_status'):
        copy = copies.pathcopies(repo[node1], repo[node2])

    fm = ui.formatter('status', opts)
    fmt = '%s' + end
    showchar = not opts.get('no_status')

    for state, char, files in changestates:
        if state in show:
            label = 'status.' + state
            for f in files:
                fm.startitem()
                fm.condwrite(showchar, 'status', '%s ', char, label=label)
                fm.write('path', fmt, repo.pathto(f, cwd), label=label)
                if f in copy:
                    fm.write("copy", '  %s' + end, repo.pathto(copy[f], cwd),
                             label='status.copied')
    fm.end()

@command('^summary|sum',
         [('', 'remote', None, _('check for push and pull'))], '[--remote]')
def summary(ui, repo, **opts):
    """summarize working directory state

    This generates a brief summary of the working directory state,
    including parents, branch, commit status, and available updates.

    With the --remote option, this will check the default paths for
    incoming and outgoing changes. This can be time-consuming.

    Returns 0 on success.
    """

    ctx = repo[None]
    parents = ctx.parents()
    pnode = parents[0].node()
    marks = []

    for p in parents:
        # label with log.changeset (instead of log.parent) since this
        # shows a working directory parent *changeset*:
        # i18n: column positioning for "hg summary"
        ui.write(_('parent: %d:%s ') % (p.rev(), str(p)),
                 label='log.changeset changeset.%s' % p.phasestr())
        ui.write(' '.join(p.tags()), label='log.tag')
        if p.bookmarks():
            marks.extend(p.bookmarks())
        if p.rev() == -1:
            if not len(repo):
                ui.write(_(' (empty repository)'))
            else:
                ui.write(_(' (no revision checked out)'))
        ui.write('\n')
        if p.description():
            ui.status(' ' + p.description().splitlines()[0].strip() + '\n',
                      label='log.summary')

    branch = ctx.branch()
    bheads = repo.branchheads(branch)
    # i18n: column positioning for "hg summary"
    m = _('branch: %s\n') % branch
    if branch != 'default':
        ui.write(m, label='log.branch')
    else:
        ui.status(m, label='log.branch')

    if marks:
        current = repo._bookmarkcurrent
        # i18n: column positioning for "hg summary"
        ui.write(_('bookmarks:'), label='log.bookmark')
        if current is not None:
            if current in marks:
                ui.write(' *' + current, label='bookmarks.current')
                marks.remove(current)
            else:
                ui.write(' [%s]' % current, label='bookmarks.current')
        for m in marks:
            ui.write(' ' + m, label='log.bookmark')
        ui.write('\n', label='log.bookmark')

    st = list(repo.status(unknown=True))[:6]

    c = repo.dirstate.copies()
    copied, renamed = [], []
    for d, s in c.iteritems():
        if s in st[2]:
            st[2].remove(s)
            renamed.append(d)
        else:
            copied.append(d)
        if d in st[1]:
            st[1].remove(d)
    st.insert(3, renamed)
    st.insert(4, copied)

    ms = mergemod.mergestate(repo)
    st.append([f for f in ms if ms[f] == 'u'])

    subs = [s for s in ctx.substate if ctx.sub(s).dirty()]
    st.append(subs)

    labels = [ui.label(_('%d modified'), 'status.modified'),
              ui.label(_('%d added'), 'status.added'),
              ui.label(_('%d removed'), 'status.removed'),
              ui.label(_('%d renamed'), 'status.copied'),
              ui.label(_('%d copied'), 'status.copied'),
              ui.label(_('%d deleted'), 'status.deleted'),
              ui.label(_('%d unknown'), 'status.unknown'),
              ui.label(_('%d ignored'), 'status.ignored'),
              ui.label(_('%d unresolved'), 'resolve.unresolved'),
              ui.label(_('%d subrepos'), 'status.modified')]
    t = []
    for s, l in zip(st, labels):
        if s:
            t.append(l % len(s))

    t = ', '.join(t)
    cleanworkdir = False

    if repo.vfs.exists('updatestate'):
        t += _(' (interrupted update)')
    elif len(parents) > 1:
        t += _(' (merge)')
    elif branch != parents[0].branch():
        t += _(' (new branch)')
    elif (parents[0].closesbranch() and
          pnode in repo.branchheads(branch, closed=True)):
        t += _(' (head closed)')
    elif not (st[0] or st[1] or st[2] or st[3] or st[4] or st[9]):
        t += _(' (clean)')
        cleanworkdir = True
    elif pnode not in bheads:
        t += _(' (new branch head)')

    if cleanworkdir:
        # i18n: column positioning for "hg summary"
        ui.status(_('commit: %s\n') % t.strip())
    else:
        # i18n: column positioning for "hg summary"
        ui.write(_('commit: %s\n') % t.strip())

    # all ancestors of branch heads - all ancestors of parent = new csets
    new = len(repo.changelog.findmissing([pctx.node() for pctx in parents],
                                         bheads))

    if new == 0:
        # i18n: column positioning for "hg summary"
        ui.status(_('update: (current)\n'))
    elif pnode not in bheads:
        # i18n: column positioning for "hg summary"
        ui.write(_('update: %d new changesets (update)\n') % new)
    else:
        # i18n: column positioning for "hg summary"
        ui.write(_('update: %d new changesets, %d branch heads (merge)\n') %
                 (new, len(bheads)))

    cmdutil.summaryhooks(ui, repo)

    if opts.get('remote'):
        needsincoming, needsoutgoing = True, True
    else:
        needsincoming, needsoutgoing = False, False
        for i, o in cmdutil.summaryremotehooks(ui, repo, opts, None):
            if i:
                needsincoming = True
            if o:
                needsoutgoing = True
        if not needsincoming and not needsoutgoing:
            return

    def getincoming():
        source, branches = hg.parseurl(ui.expandpath('default'))
        sbranch = branches[0]
        try:
            other = hg.peer(repo, {}, source)
        except error.RepoError:
            if opts.get('remote'):
                raise
            return source, sbranch, None, None, None
        revs, checkout = hg.addbranchrevs(repo, other, branches, None)
        if revs:
            revs = [other.lookup(rev) for rev in revs]
        ui.debug('comparing with %s\n' % util.hidepassword(source))
        repo.ui.pushbuffer()
        commoninc = discovery.findcommonincoming(repo, other, heads=revs)
        repo.ui.popbuffer()
        return source, sbranch, other, commoninc, commoninc[1]

    if needsincoming:
        source, sbranch, sother, commoninc, incoming = getincoming()
    else:
        source = sbranch = sother = commoninc = incoming = None

    def getoutgoing():
        dest, branches = hg.parseurl(ui.expandpath('default-push', 'default'))
        dbranch = branches[0]
        revs, checkout = hg.addbranchrevs(repo, repo, branches, None)
        if source != dest:
            try:
                dother = hg.peer(repo, {}, dest)
            except error.RepoError:
                if opts.get('remote'):
                    raise
                return dest, dbranch, None, None
            ui.debug('comparing with %s\n' % util.hidepassword(dest))
        elif sother is None:
            # there is no explicit destination peer, but source one is invalid
            return dest, dbranch, None, None
        else:
            dother = sother
        if (source != dest or (sbranch is not None and sbranch != dbranch)):
            common = None
        else:
            common = commoninc
        if revs:
            revs = [repo.lookup(rev) for rev in revs]
        repo.ui.pushbuffer()
        outgoing = discovery.findcommonoutgoing(repo, dother, onlyheads=revs,
                                                commoninc=common)
        repo.ui.popbuffer()
        return dest, dbranch, dother, outgoing

    if needsoutgoing:
        dest, dbranch, dother, outgoing = getoutgoing()
    else:
        dest = dbranch = dother = outgoing = None

    if opts.get('remote'):
        t = []
        if incoming:
            t.append(_('1 or more incoming'))
        o = outgoing.missing
        if o:
            t.append(_('%d outgoing') % len(o))
        other = dother or sother
        if 'bookmarks' in other.listkeys('namespaces'):
            lmarks = repo.listkeys('bookmarks')
            rmarks = other.listkeys('bookmarks')
            diff = set(rmarks) - set(lmarks)
            if len(diff) > 0:
                t.append(_('%d incoming bookmarks') % len(diff))
            diff = set(lmarks) - set(rmarks)
            if len(diff) > 0:
                t.append(_('%d outgoing bookmarks') % len(diff))

        if t:
            # i18n: column positioning for "hg summary"
            ui.write(_('remote: %s\n') % (', '.join(t)))
        else:
            # i18n: column positioning for "hg summary"
            ui.status(_('remote: (synced)\n'))

    cmdutil.summaryremotehooks(ui, repo, opts,
                               ((source, sbranch, sother, commoninc),
                                (dest, dbranch, dother, outgoing)))

5763 5764 @command('tag',
5764 5765 [('f', 'force', None, _('force tag')),
5765 5766 ('l', 'local', None, _('make the tag local')),
5766 5767 ('r', 'rev', '', _('revision to tag'), _('REV')),
5767 5768 ('', 'remove', None, _('remove a tag')),
5768 5769 # -l/--local is already there, commitopts cannot be used
5769 5770 ('e', 'edit', None, _('invoke editor on commit messages')),
5770 5771 ('m', 'message', '', _('use text as commit message'), _('TEXT')),
5771 5772 ] + commitopts2,
5772 5773 _('[-f] [-l] [-m TEXT] [-d DATE] [-u USER] [-r REV] NAME...'))
5773 5774 def tag(ui, repo, name1, *names, **opts):
5774 5775 """add one or more tags for the current or given revision
5775 5776
5776 5777 Name a particular revision using <name>.
5777 5778
5778 5779 Tags are used to name particular revisions of the repository and are
5779 5780 very useful to compare different revisions, to go back to significant
5780 5781 earlier versions or to mark branch points as releases, etc. Changing
5781 5782 an existing tag is normally disallowed; use -f/--force to override.
5782 5783
5783 5784 If no revision is given, the parent of the working directory is
5784 5785 used.
5785 5786
5786 5787 To facilitate version control, distribution, and merging of tags,
5787 5788 they are stored as a file named ".hgtags" which is managed similarly
5788 5789 to other project files and can be hand-edited if necessary. This
5789 5790 also means that tagging creates a new commit. The file
5790 5791 ".hg/localtags" is used for local tags (not shared among
5791 5792 repositories).
5792 5793
5793 5794 Tag commits are usually made at the head of a branch. If the parent
5794 5795 of the working directory is not a branch head, :hg:`tag` aborts; use
5795 5796 -f/--force to force the tag commit to be based on a non-head
5796 5797 changeset.
5797 5798
5798 5799 See :hg:`help dates` for a list of formats valid for -d/--date.
5799 5800
5800 5801 Since tag names have priority over branch names during revision
5801 5802 lookup, using an existing branch name as a tag name is discouraged.
5802 5803
5803 5804 Returns 0 on success.
5804 5805 """
5805 5806 wlock = lock = None
5806 5807 try:
5807 5808 wlock = repo.wlock()
5808 5809 lock = repo.lock()
5809 5810 rev_ = "."
5810 5811 names = [t.strip() for t in (name1,) + names]
5811 5812 if len(names) != len(set(names)):
5812 5813 raise util.Abort(_('tag names must be unique'))
5813 5814 for n in names:
5814 5815 scmutil.checknewlabel(repo, n, 'tag')
5815 5816 if not n:
5816 5817 raise util.Abort(_('tag names cannot consist entirely of '
5817 5818 'whitespace'))
5818 5819 if opts.get('rev') and opts.get('remove'):
5819 5820 raise util.Abort(_("--rev and --remove are incompatible"))
5820 5821 if opts.get('rev'):
5821 5822 rev_ = opts['rev']
5822 5823 message = opts.get('message')
5823 5824 if opts.get('remove'):
5824 5825 expectedtype = opts.get('local') and 'local' or 'global'
5825 5826 for n in names:
5826 5827 if not repo.tagtype(n):
5827 5828 raise util.Abort(_("tag '%s' does not exist") % n)
5828 5829 if repo.tagtype(n) != expectedtype:
5829 5830 if expectedtype == 'global':
5830 5831 raise util.Abort(_("tag '%s' is not a global tag") % n)
5831 5832 else:
5832 5833 raise util.Abort(_("tag '%s' is not a local tag") % n)
5833 5834 rev_ = nullid
5834 5835 if not message:
5835 5836 # we don't translate commit messages
5836 5837 message = 'Removed tag %s' % ', '.join(names)
5837 5838 elif not opts.get('force'):
5838 5839 for n in names:
5839 5840 if n in repo.tags():
5840 5841 raise util.Abort(_("tag '%s' already exists "
5841 5842 "(use -f to force)") % n)
5842 5843 if not opts.get('local'):
5843 5844 p1, p2 = repo.dirstate.parents()
5844 5845 if p2 != nullid:
5845 5846 raise util.Abort(_('uncommitted merge'))
5846 5847 bheads = repo.branchheads()
5847 5848 if not opts.get('force') and bheads and p1 not in bheads:
5848 5849 raise util.Abort(_('not at a branch head (use -f to force)'))
5849 5850 r = scmutil.revsingle(repo, rev_).node()
5850 5851
5851 5852 if not message:
5852 5853 # we don't translate commit messages
5853 5854 message = ('Added tag %s for changeset %s' %
5854 5855 (', '.join(names), short(r)))
5855 5856
5856 5857 date = opts.get('date')
5857 5858 if date:
5858 5859 date = util.parsedate(date)
5859 5860
5860 5861 if opts.get('remove'):
5861 5862 editform = 'tag.remove'
5862 5863 else:
5863 5864 editform = 'tag.add'
5864 5865 editor = cmdutil.getcommiteditor(editform=editform, **opts)
5865 5866
5866 5867 # don't allow tagging the null rev
5867 5868 if (not opts.get('remove') and
5868 5869 scmutil.revsingle(repo, rev_).rev() == nullrev):
5869 5870 raise util.Abort(_("cannot tag null revision"))
5870 5871
5871 5872 repo.tag(names, r, message, opts.get('local'), opts.get('user'), date,
5872 5873 editor=editor)
5873 5874 finally:
5874 5875 release(lock, wlock)
5875 5876
5876 5877 @command('tags', [], '')
5877 5878 def tags(ui, repo, **opts):
5878 5879 """list repository tags
5879 5880
5880 5881 This lists both regular and local tags. When the -v/--verbose
5881 5882 switch is used, a third column "local" is printed for local tags.
5882 5883
5883 5884 Returns 0 on success.
5884 5885 """
5885 5886
5886 5887 fm = ui.formatter('tags', opts)
5887 5888 hexfunc = ui.debugflag and hex or short
5888 5889 tagtype = ""
5889 5890
5890 5891 for t, n in reversed(repo.tagslist()):
5891 5892 hn = hexfunc(n)
5892 5893 label = 'tags.normal'
5893 5894 tagtype = ''
5894 5895 if repo.tagtype(t) == 'local':
5895 5896 label = 'tags.local'
5896 5897 tagtype = 'local'
5897 5898
5898 5899 fm.startitem()
5899 5900 fm.write('tag', '%s', t, label=label)
5900 5901 fmt = " " * (30 - encoding.colwidth(t)) + ' %5d:%s'
5901 5902 fm.condwrite(not ui.quiet, 'rev id', fmt,
5902 5903 repo.changelog.rev(n), hn, label=label)
5903 5904 fm.condwrite(ui.verbose and tagtype, 'type', ' %s',
5904 5905 tagtype, label=label)
5905 5906 fm.plain('\n')
5906 5907 fm.end()
5907 5908
5908 5909 @command('tip',
5909 5910 [('p', 'patch', None, _('show patch')),
5910 5911 ('g', 'git', None, _('use git extended diff format')),
5911 5912 ] + templateopts,
5912 5913 _('[-p] [-g]'))
5913 5914 def tip(ui, repo, **opts):
5914 5915 """show the tip revision (DEPRECATED)
5915 5916
5916 5917 The tip revision (usually just called the tip) is the changeset
5917 5918 most recently added to the repository (and therefore the most
5918 5919 recently changed head).
5919 5920
5920 5921 If you have just made a commit, that commit will be the tip. If
5921 5922 you have just pulled changes from another repository, the tip of
5922 5923 that repository becomes the current tip. The "tip" tag is special
5923 5924 and cannot be renamed or assigned to a different changeset.
5924 5925
5925 5926 This command is deprecated, please use :hg:`heads` instead.
5926 5927
5927 5928 Returns 0 on success.
5928 5929 """
5929 5930 displayer = cmdutil.show_changeset(ui, repo, opts)
5930 5931 displayer.show(repo['tip'])
5931 5932 displayer.close()
5932 5933
5933 5934 @command('unbundle',
5934 5935 [('u', 'update', None,
5935 5936 _('update to new branch head if changesets were unbundled'))],
5936 5937 _('[-u] FILE...'))
5937 5938 def unbundle(ui, repo, fname1, *fnames, **opts):
5938 5939 """apply one or more changegroup files
5939 5940
5940 5941 Apply one or more compressed changegroup files generated by the
5941 5942 bundle command.
5942 5943
5943 5944 Returns 0 on success, 1 if an update has unresolved files.
5944 5945 """
5945 5946 fnames = (fname1,) + fnames
5946 5947
5947 5948 lock = repo.lock()
5948 5949 try:
5949 5950 for fname in fnames:
5950 5951 f = hg.openpath(ui, fname)
5951 5952 gen = exchange.readbundle(ui, f, fname)
5952 5953 modheads = changegroup.addchangegroup(repo, gen, 'unbundle',
5953 5954 'bundle:' + fname)
5954 5955 finally:
5955 5956 lock.release()
5956 5957
5957 5958 return postincoming(ui, repo, modheads, opts.get('update'), None)
5958 5959
5959 5960 @command('^update|up|checkout|co',
5960 5961 [('C', 'clean', None, _('discard uncommitted changes (no backup)')),
5961 5962 ('c', 'check', None,
5962 5963 _('update across branches if no uncommitted changes')),
5963 5964 ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
5964 5965 ('r', 'rev', '', _('revision'), _('REV'))
5965 5966 ] + mergetoolopts,
5966 5967 _('[-c] [-C] [-d DATE] [[-r] REV]'))
5967 5968 def update(ui, repo, node=None, rev=None, clean=False, date=None, check=False,
5968 5969 tool=None):
5969 5970 """update working directory (or switch revisions)
5970 5971
5971 5972 Update the repository's working directory to the specified
5972 5973 changeset. If no changeset is specified, update to the tip of the
5973 5974 current named branch and move the current bookmark (see :hg:`help
5974 5975 bookmarks`).
5975 5976
5976 5977 Update sets the working directory's parent revision to the specified
5977 5978 changeset (see :hg:`help parents`).
5978 5979
5979 5980 If the changeset is not a descendant or ancestor of the working
5980 5981 directory's parent, the update is aborted. With the -c/--check
5981 5982 option, the working directory is checked for uncommitted changes; if
5982 5983 none are found, the working directory is updated to the specified
5983 5984 changeset.
5984 5985
5985 5986 .. container:: verbose
5986 5987
5987 5988 The following rules apply when the working directory contains
5988 5989 uncommitted changes:
5989 5990
5990 5991 1. If neither -c/--check nor -C/--clean is specified, and if
5991 5992 the requested changeset is an ancestor or descendant of
5992 5993 the working directory's parent, the uncommitted changes
5993 5994 are merged into the requested changeset and the merged
5994 5995 result is left uncommitted. If the requested changeset is
5995 5996 not an ancestor or descendant (that is, it is on another
5996 5997 branch), the update is aborted and the uncommitted changes
5997 5998 are preserved.
5998 5999
5999 6000 2. With the -c/--check option, the update is aborted and the
6000 6001 uncommitted changes are preserved.
6001 6002
6002 6003 3. With the -C/--clean option, uncommitted changes are discarded and
6003 6004 the working directory is updated to the requested changeset.
6004 6005
6005 6006 To cancel an uncommitted merge (and lose your changes), use
6006 6007 :hg:`update --clean .`.
6007 6008
6008 6009 Use null as the changeset to remove the working directory (like
6009 6010 :hg:`clone -U`).
6010 6011
6011 6012 If you want to revert just one file to an older revision, use
6012 6013 :hg:`revert [-r REV] NAME`.
6013 6014
6014 6015 See :hg:`help dates` for a list of formats valid for -d/--date.
6015 6016
6016 6017 Returns 0 on success, 1 if there are unresolved files.
6017 6018 """
6018 6019 if rev and node:
6019 6020 raise util.Abort(_("please specify just one revision"))
6020 6021
6021 6022 if rev is None or rev == '':
6022 6023 rev = node
6023 6024
6024 6025 cmdutil.clearunfinished(repo)
6025 6026
6026 6027 # with no argument, we also move the current bookmark, if any
6027 6028 rev, movemarkfrom = bookmarks.calculateupdate(ui, repo, rev)
6028 6029
6029 6030 # if we defined a bookmark, we have to remember the original bookmark name
6030 6031 brev = rev
6031 6032 rev = scmutil.revsingle(repo, rev, rev).rev()
6032 6033
6033 6034 if check and clean:
6034 6035 raise util.Abort(_("cannot specify both -c/--check and -C/--clean"))
6035 6036
6036 6037 if date:
6037 6038 if rev is not None:
6038 6039 raise util.Abort(_("you can't specify a revision and a date"))
6039 6040 rev = cmdutil.finddate(ui, repo, date)
6040 6041
6041 6042 if check:
6042 6043 c = repo[None]
6043 6044 if c.dirty(merge=False, branch=False, missing=True):
6044 6045 raise util.Abort(_("uncommitted changes"))
6045 6046 if rev is None:
6046 6047 rev = repo[repo[None].branch()].rev()
6047 6048 mergemod._checkunknown(repo, repo[None], repo[rev])
6048 6049
6049 6050 repo.ui.setconfig('ui', 'forcemerge', tool, 'update')
6050 6051
6051 6052 if clean:
6052 6053 ret = hg.clean(repo, rev)
6053 6054 else:
6054 6055 ret = hg.update(repo, rev)
6055 6056
6056 6057 if not ret and movemarkfrom:
6057 6058 if bookmarks.update(repo, [movemarkfrom], repo['.'].node()):
6058 6059 ui.status(_("updating bookmark %s\n") % repo._bookmarkcurrent)
6059 6060 elif brev in repo._bookmarks:
6060 6061 bookmarks.setcurrent(repo, brev)
6061 6062 ui.status(_("(activating bookmark %s)\n") % brev)
6062 6063 elif brev:
6063 6064 if repo._bookmarkcurrent:
6064 6065 ui.status(_("(leaving bookmark %s)\n") %
6065 6066 repo._bookmarkcurrent)
6066 6067 bookmarks.unsetcurrent(repo)
6067 6068
6068 6069 return ret
6069 6070
6070 6071 @command('verify', [])
6071 6072 def verify(ui, repo):
6072 6073 """verify the integrity of the repository
6073 6074
6074 6075 Verify the integrity of the current repository.
6075 6076
6076 6077 This will perform an extensive check of the repository's
6077 6078 integrity, validating the hashes and checksums of each entry in
6078 6079 the changelog, manifest, and tracked files, as well as the
6079 6080 integrity of their crosslinks and indices.
6080 6081
6081 6082 Please see http://mercurial.selenic.com/wiki/RepositoryCorruption
6082 6083 for more information about recovery from corruption of the
6083 6084 repository.
6084 6085
6085 6086 Returns 0 on success, 1 if errors are encountered.
6086 6087 """
6087 6088 return hg.verify(repo)
6088 6089
6089 6090 @command('version', [], norepo=True)
6090 6091 def version_(ui):
6091 6092 """output version and copyright information"""
6092 6093 ui.write(_("Mercurial Distributed SCM (version %s)\n")
6093 6094 % util.version())
6094 6095 ui.status(_(
6095 6096 "(see http://mercurial.selenic.com for more information)\n"
6096 6097 "\nCopyright (C) 2005-2014 Matt Mackall and others\n"
6097 6098 "This is free software; see the source for copying conditions. "
6098 6099 "There is NO\nwarranty; "
6099 6100 "not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n"
6100 6101 ))
6101 6102
6102 6103 ui.note(_("\nEnabled extensions:\n\n"))
6103 6104 if ui.verbose:
6104 6105 # format names and versions into columns
6105 6106 names = []
6106 6107 vers = []
6107 6108 for name, module in extensions.extensions():
6108 6109 names.append(name)
6109 6110 vers.append(extensions.moduleversion(module))
6110 6111 if names:
6111 6112 maxnamelen = max(len(n) for n in names)
6112 6113 for i, name in enumerate(names):
6113 6114 ui.write(" %-*s %s\n" % (maxnamelen, name, vers[i]))
@@ -1,1077 +1,1077
1 1 # exchange.py - utility to exchange data between repos.
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from i18n import _
9 9 from node import hex, nullid
10 10 import errno, urllib
11 11 import util, scmutil, changegroup, base85, error
12 12 import discovery, phases, obsolete, bookmarks, bundle2, pushkey
13 13
14 14 def readbundle(ui, fh, fname, vfs=None):
15 15 header = changegroup.readexactly(fh, 4)
16 16
17 17 alg = None
18 18 if not fname:
19 19 fname = "stream"
20 20 if not header.startswith('HG') and header.startswith('\0'):
21 21 fh = changegroup.headerlessfixup(fh, header)
22 22 header = "HG10"
23 23 alg = 'UN'
24 24 elif vfs:
25 25 fname = vfs.join(fname)
26 26
27 27 magic, version = header[0:2], header[2:4]
28 28
29 29 if magic != 'HG':
30 30 raise util.Abort(_('%s: not a Mercurial bundle') % fname)
31 31 if version == '10':
32 32 if alg is None:
33 33 alg = changegroup.readexactly(fh, 2)
34 return changegroup.unbundle10(fh, alg)
34 return changegroup.cg1unpacker(fh, alg)
35 35 elif version == '2X':
36 36 return bundle2.unbundle20(ui, fh, header=magic + version)
37 37 else:
38 38 raise util.Abort(_('%s: unknown bundle version %s') % (fname, version))
39 39
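The magic/version dispatch in readbundle() can be sketched as a standalone fragment. This is a simplified illustration with a hypothetical parse_bundle_header name; the real function above also handles headerless streams starting with '\0' and vfs-relative paths:

```python
import io

def parse_bundle_header(fh):
    # Read the 4-byte header and split it into magic ('HG') and
    # version: '10' for classic changegroup bundles, '2X' for the
    # experimental bundle2 format.
    header = fh.read(4)
    if len(header) < 4:
        raise ValueError('stream ended unexpectedly')
    magic, version = header[:2], header[2:4]
    if magic != b'HG':
        raise ValueError('not a Mercurial bundle')
    return magic, version

print(parse_bundle_header(io.BytesIO(b'HG10UN')))  # (b'HG', b'10')
```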
40 40 def buildobsmarkerspart(bundler, markers):
41 41 """add an obsmarker part to the bundler with <markers>
42 42
43 43 No part is created if markers is empty.
44 44 Raises ValueError if the bundler doesn't support any known obsmarker format.
45 45 """
46 46 if markers:
47 47 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
48 48 version = obsolete.commonversion(remoteversions)
49 49 if version is None:
50 50 raise ValueError('bundler does not support common obsmarker format')
51 51 stream = obsolete.encodemarkers(markers, True, version=version)
52 52 return bundler.newpart('B2X:OBSMARKERS', data=stream)
53 53 return None
54 54
55 55 class pushoperation(object):
56 56 """A object that represent a single push operation
57 57
58 58 It purpose is to carry push related state and very common operation.
59 59
60 60 A new should be created at the beginning of each push and discarded
61 61 afterward.
62 62 """
63 63
64 64 def __init__(self, repo, remote, force=False, revs=None, newbranch=False):
65 65 # repo we push from
66 66 self.repo = repo
67 67 self.ui = repo.ui
68 68 # repo we push to
69 69 self.remote = remote
70 70 # force option provided
71 71 self.force = force
72 72 # revs to be pushed (None is "all")
73 73 self.revs = revs
74 74 # allow push of new branch
75 75 self.newbranch = newbranch
76 76 # did a local lock get acquired?
77 77 self.locallocked = None
78 78 # steps already performed
79 79 # (used to check which steps have already been performed through bundle2)
80 80 self.stepsdone = set()
81 81 # Integer version of the push result
82 82 # - None means nothing to push
83 83 # - 0 means HTTP error
84 84 # - 1 means we pushed and remote head count is unchanged *or*
85 85 # we have outgoing changesets but refused to push
86 86 # - other values as described by addchangegroup()
87 87 self.ret = None
88 88 # discovery.outgoing object (contains common and outgoing data)
89 89 self.outgoing = None
90 90 # all remote heads before the push
91 91 self.remoteheads = None
92 92 # testable as a boolean indicating if any nodes are missing locally.
93 93 self.incoming = None
94 94 # phase changes that must be pushed alongside the changesets
95 95 self.outdatedphases = None
96 96 # phase changes that must be pushed if changeset push fails
97 97 self.fallbackoutdatedphases = None
98 98 # outgoing obsmarkers
99 99 self.outobsmarkers = set()
100 100 # outgoing bookmarks
101 101 self.outbookmarks = []
102 102
103 103 @util.propertycache
104 104 def futureheads(self):
105 105 """future remote heads if the changeset push succeeds"""
106 106 return self.outgoing.missingheads
107 107
108 108 @util.propertycache
109 109 def fallbackheads(self):
110 110 """future remote heads if the changeset push fails"""
111 111 if self.revs is None:
112 112 # no target to push, all common heads are relevant
113 113 return self.outgoing.commonheads
114 114 unfi = self.repo.unfiltered()
115 115 # I want cheads = heads(::missingheads and ::commonheads)
116 116 # (missingheads is revs with secret changeset filtered out)
117 117 #
118 118 # This can be expressed as:
119 119 # cheads = ( (missingheads and ::commonheads)
120 120 # + (commonheads and ::missingheads))
121 121 # )
122 122 #
123 123 # while trying to push we already computed the following:
124 124 # common = (::commonheads)
125 125 # missing = ((commonheads::missingheads) - commonheads)
126 126 #
127 127 # We can pick:
128 128 # * missingheads part of common (::commonheads)
129 129 common = set(self.outgoing.common)
130 130 nm = self.repo.changelog.nodemap
131 131 cheads = [node for node in self.revs if nm[node] in common]
132 132 # and
133 133 # * commonheads parents on missing
134 134 revset = unfi.set('%ln and parents(roots(%ln))',
135 135 self.outgoing.commonheads,
136 136 self.outgoing.missing)
137 137 cheads.extend(c.node() for c in revset)
138 138 return cheads
139 139
140 140 @property
141 141 def commonheads(self):
142 142 """set of all common heads after changeset bundle push"""
143 143 if self.ret:
144 144 return self.futureheads
145 145 else:
146 146 return self.fallbackheads
147 147
148 148 def push(repo, remote, force=False, revs=None, newbranch=False):
149 149 '''Push outgoing changesets (limited by revs) from a local
150 150 repository to remote. Return an integer:
151 151 - None means nothing to push
152 152 - 0 means HTTP error
153 153 - 1 means we pushed and remote head count is unchanged *or*
154 154 we have outgoing changesets but refused to push
155 155 - other values as described by addchangegroup()
156 156 '''
157 157 pushop = pushoperation(repo, remote, force, revs, newbranch)
158 158 if pushop.remote.local():
159 159 missing = (set(pushop.repo.requirements)
160 160 - pushop.remote.local().supported)
161 161 if missing:
162 162 msg = _("required features are not"
163 163 " supported in the destination:"
164 164 " %s") % (', '.join(sorted(missing)))
165 165 raise util.Abort(msg)
166 166
167 167 # there are two ways to push to remote repo:
168 168 #
169 169 # addchangegroup assumes local user can lock remote
170 170 # repo (local filesystem, old ssh servers).
171 171 #
172 172 # unbundle assumes local user cannot lock remote repo (new ssh
173 173 # servers, http servers).
174 174
175 175 if not pushop.remote.canpush():
176 176 raise util.Abort(_("destination does not support push"))
177 177 # get local lock as we might write phase data
178 178 locallock = None
179 179 try:
180 180 locallock = pushop.repo.lock()
181 181 pushop.locallocked = True
182 182 except IOError, err:
183 183 pushop.locallocked = False
184 184 if err.errno != errno.EACCES:
185 185 raise
186 186 # source repo cannot be locked.
187 187 # We do not abort the push, but just disable the local phase
188 188 # synchronisation.
189 189 msg = 'cannot lock source repository: %s\n' % err
190 190 pushop.ui.debug(msg)
191 191 try:
192 192 pushop.repo.checkpush(pushop)
193 193 lock = None
194 194 unbundle = pushop.remote.capable('unbundle')
195 195 if not unbundle:
196 196 lock = pushop.remote.lock()
197 197 try:
198 198 _pushdiscovery(pushop)
199 199 if (pushop.repo.ui.configbool('experimental', 'bundle2-exp',
200 200 False)
201 201 and pushop.remote.capable('bundle2-exp')):
202 202 _pushbundle2(pushop)
203 203 _pushchangeset(pushop)
204 204 _pushsyncphase(pushop)
205 205 _pushobsolete(pushop)
206 206 _pushbookmark(pushop)
207 207 finally:
208 208 if lock is not None:
209 209 lock.release()
210 210 finally:
211 211 if locallock is not None:
212 212 locallock.release()
213 213
214 214 return pushop.ret
215 215
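The lock discipline in push() above (local lock acquired first and released last; remote lock only taken when the peer lacks the 'unbundle' capability) can be sketched with stand-in objects. DummyLock and push_sketch are illustrative names, not Mercurial's real lock classes:

```python
# Sketch of the nested lock discipline used by push(): the local lock
# is taken first and released last; the remote lock is only taken when
# the peer lacks the 'unbundle' capability. DummyLock and push_sketch
# are illustrative stand-ins, not Mercurial's real lock objects.
class DummyLock(object):
    def __init__(self, log, name):
        self.log = log
        self.name = name
        log.append('acquire ' + name)

    def release(self):
        self.log.append('release ' + self.name)

def push_sketch(log, remote_can_unbundle):
    locallock = DummyLock(log, 'local')
    try:
        lock = None
        if not remote_can_unbundle:
            # old-style peers require us to lock the remote repo
            lock = DummyLock(log, 'remote')
        try:
            log.append('push steps')
        finally:
            if lock is not None:
                lock.release()
    finally:
        locallock.release()

events = []
push_sketch(events, remote_can_unbundle=False)
# events: ['acquire local', 'acquire remote', 'push steps',
#          'release remote', 'release local']
```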
216 216 # list of steps to perform discovery before push
217 217 pushdiscoveryorder = []
218 218
219 219 # Mapping between step name and function
220 220 #
221 221 # This exists to help extensions wrap steps if necessary
222 222 pushdiscoverymapping = {}
223 223
224 224 def pushdiscovery(stepname):
225 225 """decorator for function performing discovery before push
226 226
227 227 The function is added to the step -> function mapping and appended to the
228 228 list of steps. Beware that decorated functions will be added in order (this
229 229 may matter).
230 230
231 231 You can only use this decorator for a new step; if you want to wrap a step
232 232 from an extension, modify the pushdiscoverymapping dictionary directly."""
233 233 def dec(func):
234 234 assert stepname not in pushdiscoverymapping
235 235 pushdiscoverymapping[stepname] = func
236 236 pushdiscoveryorder.append(stepname)
237 237 return func
238 238 return dec
239 239
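pushdiscovery() here and b2partsgenerator() below share the same registry-decorator pattern; a minimal self-contained sketch with hypothetical names:

```python
# Minimal sketch of the step-registry pattern shared by pushdiscovery()
# and b2partsgenerator(): a decorator records each function under its
# step name and preserves registration order; extensions wrap a step by
# replacing its entry in the mapping. All names here are hypothetical.
steporder = []
stepmapping = {}

def registerstep(stepname):
    def dec(func):
        assert stepname not in stepmapping
        stepmapping[stepname] = func
        steporder.append(stepname)
        return func
    return dec

@registerstep('changeset')
def discover_changesets(state):
    state.append('changeset')

@registerstep('phase')
def discover_phases(state):
    state.append('phase')

def runall(state):
    # steps run in the order they were registered
    for name in steporder:
        stepmapping[name](state)

ran = []
runall(ran)
# ran == ['changeset', 'phase']
```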
240 240 def _pushdiscovery(pushop):
241 241 """Run all discovery steps"""
242 242 for stepname in pushdiscoveryorder:
243 243 step = pushdiscoverymapping[stepname]
244 244 step(pushop)
245 245
246 246 @pushdiscovery('changeset')
247 247 def _pushdiscoverychangeset(pushop):
248 248 """discover the changeset that need to be pushed"""
249 249 unfi = pushop.repo.unfiltered()
250 250 fci = discovery.findcommonincoming
251 251 commoninc = fci(unfi, pushop.remote, force=pushop.force)
252 252 common, inc, remoteheads = commoninc
253 253 fco = discovery.findcommonoutgoing
254 254 outgoing = fco(unfi, pushop.remote, onlyheads=pushop.revs,
255 255 commoninc=commoninc, force=pushop.force)
256 256 pushop.outgoing = outgoing
257 257 pushop.remoteheads = remoteheads
258 258 pushop.incoming = inc
259 259
260 260 @pushdiscovery('phase')
261 261 def _pushdiscoveryphase(pushop):
262 262 """discover the phase that needs to be pushed
263 263
264 264 (computed for both success and failure case for changesets push)"""
265 265 outgoing = pushop.outgoing
266 266 unfi = pushop.repo.unfiltered()
267 267 remotephases = pushop.remote.listkeys('phases')
268 268 publishing = remotephases.get('publishing', False)
269 269 ana = phases.analyzeremotephases(pushop.repo,
270 270 pushop.fallbackheads,
271 271 remotephases)
272 272 pheads, droots = ana
273 273 extracond = ''
274 274 if not publishing:
275 275 extracond = ' and public()'
276 276 revset = 'heads((%%ln::%%ln) %s)' % extracond
277 277 # Get the list of all revs draft on remote but public here.
278 278 # XXX Beware that the revset breaks if droots is not strictly
279 279 # XXX roots; we may want to ensure it is, but that is costly
280 280 fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
281 281 if not outgoing.missing:
282 282 future = fallback
283 283 else:
284 284 # adds changeset we are going to push as draft
285 285 #
286 286 # should not be necessary for publishing servers, but because of an
287 287 # issue fixed in xxxxx we have to do it anyway.
288 288 fdroots = list(unfi.set('roots(%ln + %ln::)',
289 289 outgoing.missing, droots))
290 290 fdroots = [f.node() for f in fdroots]
291 291 future = list(unfi.set(revset, fdroots, pushop.futureheads))
292 292 pushop.outdatedphases = future
293 293 pushop.fallbackoutdatedphases = fallback
294 294
295 295 @pushdiscovery('obsmarker')
296 296 def _pushdiscoveryobsmarkers(pushop):
297 297 if (obsolete._enabled
298 298 and pushop.repo.obsstore
299 299 and 'obsolete' in pushop.remote.listkeys('namespaces')):
300 300 repo = pushop.repo
301 301 # very naive computation that can be quite expensive on big repos;
302 302 # however, evolution is currently slow on them anyway.
303 303 nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
304 304 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
305 305
306 306 @pushdiscovery('bookmarks')
307 307 def _pushdiscoverybookmarks(pushop):
308 308 ui = pushop.ui
309 309 repo = pushop.repo.unfiltered()
310 310 remote = pushop.remote
311 311 ui.debug("checking for updated bookmarks\n")
312 312 ancestors = ()
313 313 if pushop.revs:
314 314 revnums = map(repo.changelog.rev, pushop.revs)
315 315 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
316 316 remotebookmark = remote.listkeys('bookmarks')
317 317
318 318 comp = bookmarks.compare(repo, repo._bookmarks, remotebookmark, srchex=hex)
319 319 addsrc, adddst, advsrc, advdst, diverge, differ, invalid = comp
320 320 for b, scid, dcid in advsrc:
321 321 if not ancestors or repo[scid].rev() in ancestors:
322 322 pushop.outbookmarks.append((b, dcid, scid))
323 323
324 324 def _pushcheckoutgoing(pushop):
325 325 outgoing = pushop.outgoing
326 326 unfi = pushop.repo.unfiltered()
327 327 if not outgoing.missing:
328 328 # nothing to push
329 329 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
330 330 return False
331 331 # something to push
332 332 if not pushop.force:
333 333 # if repo.obsstore is empty --> no obsolete changesets,
334 334 # so we can skip the iteration
335 335 if unfi.obsstore:
336 336 # these messages are here for 80-char limit reasons
337 337 mso = _("push includes obsolete changeset: %s!")
338 338 mst = "push includes %s changeset: %s!"
339 339 # plain versions for i18n tool to detect them
340 340 _("push includes unstable changeset: %s!")
341 341 _("push includes bumped changeset: %s!")
342 342 _("push includes divergent changeset: %s!")
343 343 # If we are going to push, and there is at least one
344 344 # obsolete or unstable changeset in missing, then at
345 345 # least one of the missing heads will be obsolete or
346 346 # unstable, so checking only the heads is OK.
347 347 for node in outgoing.missingheads:
348 348 ctx = unfi[node]
349 349 if ctx.obsolete():
350 350 raise util.Abort(mso % ctx)
351 351 elif ctx.troubled():
352 352 raise util.Abort(_(mst)
353 353 % (ctx.troubles()[0],
354 354 ctx))
355 355 newbm = pushop.ui.configlist('bookmarks', 'pushing')
356 356 discovery.checkheads(unfi, pushop.remote, outgoing,
357 357 pushop.remoteheads,
358 358 pushop.newbranch,
359 359 bool(pushop.incoming),
360 360 newbm)
361 361 return True
362 362
363 363 # List of names of steps to perform for an outgoing bundle2, order matters.
364 364 b2partsgenorder = []
365 365
366 366 # Mapping between step name and function
367 367 #
368 368 # This exists to help extensions wrap steps if necessary
369 369 b2partsgenmapping = {}
370 370
371 371 def b2partsgenerator(stepname):
372 372 """decorator for function generating bundle2 part
373 373
374 374 The function is added to the step -> function mapping and appended to the
375 375 list of steps. Beware that decorated functions will be added in order
376 376 (this may matter).
377 377
378 378 You can only use this decorator for new steps; if you want to wrap a step
379 379 from an extension, modify the b2partsgenmapping dictionary directly."""
380 380 def dec(func):
381 381 assert stepname not in b2partsgenmapping
382 382 b2partsgenmapping[stepname] = func
383 383 b2partsgenorder.append(stepname)
384 384 return func
385 385 return dec
386 386
387 387 @b2partsgenerator('changeset')
388 388 def _pushb2ctx(pushop, bundler):
389 389 """handle changegroup push through bundle2
390 390
391 391 addchangegroup result is stored in the ``pushop.ret`` attribute.
392 392 """
393 393 if 'changesets' in pushop.stepsdone:
394 394 return
395 395 pushop.stepsdone.add('changesets')
396 396 # Send known heads to the server for race detection.
397 397 if not _pushcheckoutgoing(pushop):
398 398 return
399 399 pushop.repo.prepushoutgoinghooks(pushop.repo,
400 400 pushop.remote,
401 401 pushop.outgoing)
402 402 if not pushop.force:
403 403 bundler.newpart('B2X:CHECK:HEADS', data=iter(pushop.remoteheads))
404 cg = changegroup.getlocalbundle(pushop.repo, 'push', pushop.outgoing)
404 cg = changegroup.getlocalchangegroup(pushop.repo, 'push', pushop.outgoing)
405 405 cgpart = bundler.newpart('B2X:CHANGEGROUP', data=cg.getchunks())
406 406 def handlereply(op):
407 407 """extract addchangroup returns from server reply"""
408 408 cgreplies = op.records.getreplies(cgpart.id)
409 409 assert len(cgreplies['changegroup']) == 1
410 410 pushop.ret = cgreplies['changegroup'][0]['return']
411 411 return handlereply
412 412
413 413 @b2partsgenerator('phase')
414 414 def _pushb2phases(pushop, bundler):
415 415 """handle phase push through bundle2"""
416 416 if 'phases' in pushop.stepsdone:
417 417 return
418 418 b2caps = bundle2.bundle2caps(pushop.remote)
419 419 if not 'b2x:pushkey' in b2caps:
420 420 return
421 421 pushop.stepsdone.add('phases')
422 422 part2node = []
423 423 enc = pushkey.encode
424 424 for newremotehead in pushop.outdatedphases:
425 425 part = bundler.newpart('b2x:pushkey')
426 426 part.addparam('namespace', enc('phases'))
427 427 part.addparam('key', enc(newremotehead.hex()))
428 428 part.addparam('old', enc(str(phases.draft)))
429 429 part.addparam('new', enc(str(phases.public)))
430 430 part2node.append((part.id, newremotehead))
431 431 def handlereply(op):
432 432 for partid, node in part2node:
433 433 partrep = op.records.getreplies(partid)
434 434 results = partrep['pushkey']
435 435 assert len(results) <= 1
436 436 msg = None
437 437 if not results:
438 438 msg = _('server ignored update of %s to public!\n') % node
439 439 elif not int(results[0]['return']):
440 440 msg = _('updating %s to public failed!\n') % node
441 441 if msg is not None:
442 442 pushop.ui.warn(msg)
443 443 return handlereply
444 444
445 445 @b2partsgenerator('obsmarkers')
446 446 def _pushb2obsmarkers(pushop, bundler):
447 447 if 'obsmarkers' in pushop.stepsdone:
448 448 return
449 449 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
450 450 if obsolete.commonversion(remoteversions) is None:
451 451 return
452 452 pushop.stepsdone.add('obsmarkers')
453 453 if pushop.outobsmarkers:
454 454 buildobsmarkerspart(bundler, pushop.outobsmarkers)
455 455
456 456 @b2partsgenerator('bookmarks')
457 457 def _pushb2bookmarks(pushop, bundler):
458 458 """handle bookmark push through bundle2"""
459 459 if 'bookmarks' in pushop.stepsdone:
460 460 return
461 461 b2caps = bundle2.bundle2caps(pushop.remote)
462 462 if 'b2x:pushkey' not in b2caps:
463 463 return
464 464 pushop.stepsdone.add('bookmarks')
465 465 part2book = []
466 466 enc = pushkey.encode
467 467 for book, old, new in pushop.outbookmarks:
468 468 part = bundler.newpart('b2x:pushkey')
469 469 part.addparam('namespace', enc('bookmarks'))
470 470 part.addparam('key', enc(book))
471 471 part.addparam('old', enc(old))
472 472 part.addparam('new', enc(new))
473 473 part2book.append((part.id, book))
474 474 def handlereply(op):
475 475 for partid, book in part2book:
476 476 partrep = op.records.getreplies(partid)
477 477 results = partrep['pushkey']
478 478 assert len(results) <= 1
479 479 if not results:
480 480 pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
481 481 else:
482 482 ret = int(results[0]['return'])
483 483 if ret:
484 484 pushop.ui.status(_("updating bookmark %s\n") % book)
485 485 else:
486 486 pushop.ui.warn(_('updating bookmark %s failed!\n') % book)
487 487 return handlereply
488 488
489 489
490 490 def _pushbundle2(pushop):
491 491 """push data to the remote using bundle2
492 492
493 493 The only currently supported type of data is changegroup but this will
494 494 evolve in the future."""
495 495 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
496 496 # create reply capability
497 497 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo))
498 498 bundler.newpart('b2x:replycaps', data=capsblob)
499 499 replyhandlers = []
500 500 for partgenname in b2partsgenorder:
501 501 partgen = b2partsgenmapping[partgenname]
502 502 ret = partgen(pushop, bundler)
503 503 if callable(ret):
504 504 replyhandlers.append(ret)
505 505 # do not push if nothing to push
506 506 if bundler.nbparts <= 1:
507 507 return
508 508 stream = util.chunkbuffer(bundler.getchunks())
509 509 try:
510 510 reply = pushop.remote.unbundle(stream, ['force'], 'push')
511 511 except error.BundleValueError, exc:
512 512 raise util.Abort('missing support for %s' % exc)
513 513 try:
514 514 op = bundle2.processbundle(pushop.repo, reply)
515 515 except error.BundleValueError, exc:
516 516 raise util.Abort('missing support for %s' % exc)
517 517 for rephand in replyhandlers:
518 518 rephand(op)
519 519
520 520 def _pushchangeset(pushop):
521 521 """Make the actual push of changeset bundle to remote repo"""
522 522 if 'changesets' in pushop.stepsdone:
523 523 return
524 524 pushop.stepsdone.add('changesets')
525 525 if not _pushcheckoutgoing(pushop):
526 526 return
527 527 pushop.repo.prepushoutgoinghooks(pushop.repo,
528 528 pushop.remote,
529 529 pushop.outgoing)
530 530 outgoing = pushop.outgoing
531 531 unbundle = pushop.remote.capable('unbundle')
532 532 # TODO: get bundlecaps from remote
533 533 bundlecaps = None
534 534 # create a changegroup from local
535 535 if pushop.revs is None and not (outgoing.excluded
536 536 or pushop.repo.changelog.filteredrevs):
537 537 # push everything,
538 538 # use the fast path, no race possible on push
539 bundler = changegroup.bundle10(pushop.repo, bundlecaps)
539 bundler = changegroup.cg1packer(pushop.repo, bundlecaps)
540 540 cg = changegroup.getsubset(pushop.repo,
541 541 outgoing,
542 542 bundler,
543 543 'push',
544 544 fastpath=True)
545 545 else:
546 cg = changegroup.getlocalbundle(pushop.repo, 'push', outgoing,
546 cg = changegroup.getlocalchangegroup(pushop.repo, 'push', outgoing,
547 547 bundlecaps)
548 548
549 549 # apply changegroup to remote
550 550 if unbundle:
551 551 # local repo finds heads on server, finds out what
552 552 # revs it must push. once revs transferred, if server
553 553 # finds it has different heads (someone else won
554 554 # commit/push race), server aborts.
555 555 if pushop.force:
556 556 remoteheads = ['force']
557 557 else:
558 558 remoteheads = pushop.remoteheads
559 559 # ssh: return remote's addchangegroup()
560 560 # http: return remote's addchangegroup() or 0 for error
561 561 pushop.ret = pushop.remote.unbundle(cg, remoteheads,
562 562 pushop.repo.url())
563 563 else:
564 564 # we return an integer indicating remote head count
565 565 # change
566 566 pushop.ret = pushop.remote.addchangegroup(cg, 'push', pushop.repo.url())
567 567
568 568 def _pushsyncphase(pushop):
569 569 """synchronise phase information locally and remotely"""
570 570 cheads = pushop.commonheads
571 571 # even when we don't push, exchanging phase data is useful
572 572 remotephases = pushop.remote.listkeys('phases')
573 573 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
574 574 and remotephases # server supports phases
575 575 and pushop.ret is None # nothing was pushed
576 576 and remotephases.get('publishing', False)):
577 577 # When:
578 578 # - this is a subrepo push
579 579 # - and remote supports phases
580 580 # - and no changeset was pushed
581 581 # - and remote is publishing
582 582 # We may be in issue 3871 case!
583 583 # We drop the possible phase synchronisation done by
584 584 # courtesy to publish changesets possibly locally draft
585 585 # on the remote.
586 586 remotephases = {'publishing': 'True'}
587 587 if not remotephases: # old server or public only reply from non-publishing
588 588 _localphasemove(pushop, cheads)
589 589 # don't push any phase data as there is nothing to push
590 590 else:
591 591 ana = phases.analyzeremotephases(pushop.repo, cheads,
592 592 remotephases)
593 593 pheads, droots = ana
594 594 ### Apply remote phase on local
595 595 if remotephases.get('publishing', False):
596 596 _localphasemove(pushop, cheads)
597 597 else: # publish = False
598 598 _localphasemove(pushop, pheads)
599 599 _localphasemove(pushop, cheads, phases.draft)
600 600 ### Apply local phase on remote
601 601
602 602 if pushop.ret:
603 603 if 'phases' in pushop.stepsdone:
604 604 # phases already pushed through bundle2
605 605 return
606 606 outdated = pushop.outdatedphases
607 607 else:
608 608 outdated = pushop.fallbackoutdatedphases
609 609
610 610 pushop.stepsdone.add('phases')
611 611
612 612 # filter heads already turned public by the push
613 613 outdated = [c for c in outdated if c.node() not in pheads]
614 614 b2caps = bundle2.bundle2caps(pushop.remote)
615 615 if 'b2x:pushkey' in b2caps:
616 616 # server supports bundle2, let's do a batched push through it
617 617 #
618 618 # This will eventually be unified with the changesets bundle2 push
619 619 bundler = bundle2.bundle20(pushop.ui, b2caps)
620 620 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo))
621 621 bundler.newpart('b2x:replycaps', data=capsblob)
622 622 part2node = []
623 623 enc = pushkey.encode
624 624 for newremotehead in outdated:
625 625 part = bundler.newpart('b2x:pushkey')
626 626 part.addparam('namespace', enc('phases'))
627 627 part.addparam('key', enc(newremotehead.hex()))
628 628 part.addparam('old', enc(str(phases.draft)))
629 629 part.addparam('new', enc(str(phases.public)))
630 630 part2node.append((part.id, newremotehead))
631 631 stream = util.chunkbuffer(bundler.getchunks())
632 632 try:
633 633 reply = pushop.remote.unbundle(stream, ['force'], 'push')
634 634 op = bundle2.processbundle(pushop.repo, reply)
635 635 except error.BundleValueError, exc:
636 636 raise util.Abort('missing support for %s' % exc)
637 637 for partid, node in part2node:
638 638 partrep = op.records.getreplies(partid)
639 639 results = partrep['pushkey']
640 640 assert len(results) <= 1
641 641 msg = None
642 642 if not results:
643 643 msg = _('server ignored update of %s to public!\n') % node
644 644 elif not int(results[0]['return']):
645 645 msg = _('updating %s to public failed!\n') % node
646 646 if msg is not None:
647 647 pushop.ui.warn(msg)
648 648
649 649 else:
650 650 # fall back to independent pushkey command
651 651 for newremotehead in outdated:
652 652 r = pushop.remote.pushkey('phases',
653 653 newremotehead.hex(),
654 654 str(phases.draft),
655 655 str(phases.public))
656 656 if not r:
657 657 pushop.ui.warn(_('updating %s to public failed!\n')
658 658 % newremotehead)
659 659
660 660 def _localphasemove(pushop, nodes, phase=phases.public):
661 661 """move <nodes> to <phase> in the local source repo"""
662 662 if pushop.locallocked:
663 663 tr = pushop.repo.transaction('push-phase-sync')
664 664 try:
665 665 phases.advanceboundary(pushop.repo, tr, phase, nodes)
666 666 tr.close()
667 667 finally:
668 668 tr.release()
669 669 else:
670 670 # repo is not locked, do not change any phases!
671 671 # Informs the user that phases should have been moved when
672 672 # applicable.
673 673 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
674 674 phasestr = phases.phasenames[phase]
675 675 if actualmoves:
676 676 pushop.ui.status(_('cannot lock source repo, skipping '
677 677 'local %s phase update\n') % phasestr)
678 678
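In `_localphasemove` above, the comparison `phase < pushop.repo[n].phase()` selects only nodes whose phase would actually change. A small self-contained sketch of that filter, assuming Mercurial's numeric phase ordering (public=0, draft=1, secret=2; lower means "more public"):

```python
# Phases are ordered numerically: a node needs to move to `target` only if
# its current phase value is strictly greater than the target's.
PUBLIC, DRAFT, SECRET = 0, 1, 2

def nodes_to_move(node_phases, target):
    """return nodes whose phase would actually change (hypothetical helper)"""
    return [n for n, p in sorted(node_phases.items()) if target < p]

moves = nodes_to_move({'n1': PUBLIC, 'n2': DRAFT, 'n3': SECRET}, PUBLIC)
```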
679 679 def _pushobsolete(pushop):
680 680 """utility function to push obsolete markers to a remote"""
681 681 if 'obsmarkers' in pushop.stepsdone:
682 682 return
683 683 pushop.ui.debug('try to push obsolete markers to remote\n')
684 684 repo = pushop.repo
685 685 remote = pushop.remote
686 686 pushop.stepsdone.add('obsmarkers')
687 687 if pushop.outobsmarkers:
688 688 rslts = []
689 689 remotedata = obsolete._pushkeyescape(pushop.outobsmarkers)
690 690 for key in sorted(remotedata, reverse=True):
691 691 # reverse sort to ensure we end with dump0
692 692 data = remotedata[key]
693 693 rslts.append(remote.pushkey('obsolete', key, '', data))
694 694 if [r for r in rslts if not r]:
695 695 msg = _('failed to push some obsolete markers!\n')
696 696 repo.ui.warn(msg)
697 697
698 698 def _pushbookmark(pushop):
699 699 """Update bookmark position on remote"""
700 700 if pushop.ret == 0 or 'bookmarks' in pushop.stepsdone:
701 701 return
702 702 pushop.stepsdone.add('bookmarks')
703 703 ui = pushop.ui
704 704 remote = pushop.remote
705 705 for b, old, new in pushop.outbookmarks:
706 706 if remote.pushkey('bookmarks', b, old, new):
707 707 ui.status(_("updating bookmark %s\n") % b)
708 708 else:
709 709 ui.warn(_('updating bookmark %s failed!\n') % b)
710 710
711 711 class pulloperation(object):
712 712 """An object that represents a single pull operation
713 713
714 714 Its purpose is to carry pull-related state and very common operations.
715 715
716 716 A new one should be created at the beginning of each pull and discarded
717 717 afterward.
718 718 """
719 719
720 720 def __init__(self, repo, remote, heads=None, force=False):
721 721 # repo we pull into
722 722 self.repo = repo
723 723 # repo we pull from
724 724 self.remote = remote
725 725 # revision we try to pull (None is "all")
726 726 self.heads = heads
727 727 # do we force pull?
728 728 self.force = force
730 730 # the name of the pull transaction
730 730 self._trname = 'pull\n' + util.hidepassword(remote.url())
731 731 # hold the transaction once created
732 732 self._tr = None
733 733 # set of common changesets between local and remote before pull
734 734 self.common = None
735 735 # set of pulled heads
736 736 self.rheads = None
737 737 # list of missing changesets to fetch remotely
738 738 self.fetch = None
739 739 # result of changegroup pulling (used as return code by pull)
740 740 self.cgresult = None
741 741 # list of steps remaining to do (related to future bundle2 usage)
742 742 self.todosteps = set(['changegroup', 'phases', 'obsmarkers'])
743 743
744 744 @util.propertycache
745 745 def pulledsubset(self):
746 746 """heads of the set of changesets targeted by the pull"""
747 747 # compute target subset
748 748 if self.heads is None:
749 749 # We pulled everything possible
750 750 # sync on everything common
751 751 c = set(self.common)
752 752 ret = list(self.common)
753 753 for n in self.rheads:
754 754 if n not in c:
755 755 ret.append(n)
756 756 return ret
757 757 else:
758 758 # We pulled a specific subset
759 759 # sync on this subset
760 760 return self.heads
761 761
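The `pulledsubset` property above merges the common set with the remote heads while skipping heads already known. The same logic as a standalone sketch (the function name is illustrative):

```python
def merge_pulled_subset(common, rheads):
    """append remote heads not already in the common set, preserving order"""
    c = set(common)
    ret = list(common)
    for n in rheads:
        if n not in c:
            ret.append(n)
    return ret

subset = merge_pulled_subset(['a', 'b'], ['b', 'c'])
```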
762 762 def gettransaction(self):
763 763 """get appropriate pull transaction, creating it if needed"""
764 764 if self._tr is None:
765 765 self._tr = self.repo.transaction(self._trname)
766 766 return self._tr
767 767
768 768 def closetransaction(self):
769 769 """close transaction if created"""
770 770 if self._tr is not None:
771 771 self._tr.close()
772 772
773 773 def releasetransaction(self):
774 774 """release transaction if created"""
775 775 if self._tr is not None:
776 776 self._tr.release()
777 777
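The three transaction helpers above implement a lazy-open pattern: nothing is created until `gettransaction` is first called, and `closetransaction`/`releasetransaction` are safe no-ops otherwise. A toy model of that behaviour, with a stub class standing in for `repo.transaction` (all names here are hypothetical):

```python
class _StubTransaction:
    """stand-in for a Mercurial transaction object (hypothetical)"""
    def __init__(self):
        self.closed = False
        self.released = False
    def close(self):
        self.closed = True
    def release(self):
        self.released = True

class LazyTxHolder:
    def __init__(self):
        self._tr = None
    def gettransaction(self):
        # create on first use, then keep returning the same transaction
        if self._tr is None:
            self._tr = _StubTransaction()
        return self._tr
    def closetransaction(self):
        if self._tr is not None:
            self._tr.close()
    def releasetransaction(self):
        if self._tr is not None:
            self._tr.release()
```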
778 778 def pull(repo, remote, heads=None, force=False):
779 779 pullop = pulloperation(repo, remote, heads, force)
780 780 if pullop.remote.local():
781 781 missing = set(pullop.remote.requirements) - pullop.repo.supported
782 782 if missing:
783 783 msg = _("required features are not"
784 784 " supported in the destination:"
785 785 " %s") % (', '.join(sorted(missing)))
786 786 raise util.Abort(msg)
787 787
788 788 lock = pullop.repo.lock()
789 789 try:
790 790 _pulldiscovery(pullop)
791 791 if (pullop.repo.ui.configbool('experimental', 'bundle2-exp', False)
792 792 and pullop.remote.capable('bundle2-exp')):
793 793 _pullbundle2(pullop)
794 794 if 'changegroup' in pullop.todosteps:
795 795 _pullchangeset(pullop)
796 796 if 'phases' in pullop.todosteps:
797 797 _pullphase(pullop)
798 798 if 'obsmarkers' in pullop.todosteps:
799 799 _pullobsolete(pullop)
800 800 pullop.closetransaction()
801 801 finally:
802 802 pullop.releasetransaction()
803 803 lock.release()
804 804
805 805 return pullop.cgresult
806 806
807 807 def _pulldiscovery(pullop):
808 808 """discovery phase for the pull
809 809
810 810 Currently handles changeset discovery only; will eventually handle all discovery
811 811 at some point."""
812 812 tmp = discovery.findcommonincoming(pullop.repo.unfiltered(),
813 813 pullop.remote,
814 814 heads=pullop.heads,
815 815 force=pullop.force)
816 816 pullop.common, pullop.fetch, pullop.rheads = tmp
817 817
818 818 def _pullbundle2(pullop):
819 819 """pull data using bundle2
820 820
821 821 For now, the only supported data is the changegroup.
822 822 remotecaps = bundle2.bundle2caps(pullop.remote)
823 823 kwargs = {'bundlecaps': caps20to10(pullop.repo)}
824 824 # pulling changegroup
825 825 pullop.todosteps.remove('changegroup')
826 826
827 827 kwargs['common'] = pullop.common
828 828 kwargs['heads'] = pullop.heads or pullop.rheads
829 829 kwargs['cg'] = pullop.fetch
830 830 if 'b2x:listkeys' in remotecaps:
831 831 kwargs['listkeys'] = ['phase']
832 832 if not pullop.fetch:
833 833 pullop.repo.ui.status(_("no changes found\n"))
834 834 pullop.cgresult = 0
835 835 else:
836 836 if pullop.heads is None and list(pullop.common) == [nullid]:
837 837 pullop.repo.ui.status(_("requesting all changes\n"))
838 838 if obsolete._enabled:
839 839 remoteversions = bundle2.obsmarkersversion(remotecaps)
840 840 if obsolete.commonversion(remoteversions) is not None:
841 841 kwargs['obsmarkers'] = True
842 842 pullop.todosteps.remove('obsmarkers')
843 843 _pullbundle2extraprepare(pullop, kwargs)
844 844 if kwargs.keys() == ['format']:
845 845 return # nothing to pull
846 846 bundle = pullop.remote.getbundle('pull', **kwargs)
847 847 try:
848 848 op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
849 849 except error.BundleValueError, exc:
850 850 raise util.Abort('missing support for %s' % exc)
851 851
852 852 if pullop.fetch:
853 853 assert len(op.records['changegroup']) == 1
854 854 pullop.cgresult = op.records['changegroup'][0]['return']
855 855
856 856 # processing phases change
857 857 for namespace, value in op.records['listkeys']:
858 858 if namespace == 'phases':
859 859 _pullapplyphases(pullop, value)
860 860
861 861 def _pullbundle2extraprepare(pullop, kwargs):
862 862 """hook function so that extensions can extend the getbundle call"""
863 863 pass
864 864
865 865 def _pullchangeset(pullop):
866 866 """pull changeset from unbundle into the local repo"""
867 867 # We delay the open of the transaction as late as possible so we
868 868 # don't open a transaction for nothing or break a future useful
869 869 # rollback call
870 870 pullop.todosteps.remove('changegroup')
871 871 if not pullop.fetch:
872 872 pullop.repo.ui.status(_("no changes found\n"))
873 873 pullop.cgresult = 0
874 874 return
875 875 pullop.gettransaction()
876 876 if pullop.heads is None and list(pullop.common) == [nullid]:
877 877 pullop.repo.ui.status(_("requesting all changes\n"))
878 878 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
879 879 # issue1320, avoid a race if remote changed after discovery
880 880 pullop.heads = pullop.rheads
881 881
882 882 if pullop.remote.capable('getbundle'):
883 883 # TODO: get bundlecaps from remote
884 884 cg = pullop.remote.getbundle('pull', common=pullop.common,
885 885 heads=pullop.heads or pullop.rheads)
886 886 elif pullop.heads is None:
887 887 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
888 888 elif not pullop.remote.capable('changegroupsubset'):
889 889 raise util.Abort(_("partial pull cannot be done because "
890 890 "other repository doesn't support "
891 891 "changegroupsubset."))
892 892 else:
893 893 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
894 894 pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull',
895 895 pullop.remote.url())
896 896
897 897 def _pullphase(pullop):
898 898 # Get remote phases data from remote
899 899 remotephases = pullop.remote.listkeys('phases')
900 900 _pullapplyphases(pullop, remotephases)
901 901
902 902 def _pullapplyphases(pullop, remotephases):
903 903 """apply phase movement from observed remote state"""
904 904 pullop.todosteps.remove('phases')
905 905 publishing = bool(remotephases.get('publishing', False))
906 906 if remotephases and not publishing:
907 907 # remote is new and unpublishing
908 908 pheads, _dr = phases.analyzeremotephases(pullop.repo,
909 909 pullop.pulledsubset,
910 910 remotephases)
911 911 dheads = pullop.pulledsubset
912 912 else:
913 913 # Remote is old or publishing; all common changesets
914 914 # should be seen as public
915 915 pheads = pullop.pulledsubset
916 916 dheads = []
917 917 unfi = pullop.repo.unfiltered()
918 918 phase = unfi._phasecache.phase
919 919 rev = unfi.changelog.nodemap.get
920 920 public = phases.public
921 921 draft = phases.draft
922 922
923 923 # exclude changesets already public locally and update the others
924 924 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
925 925 if pheads:
926 926 tr = pullop.gettransaction()
927 927 phases.advanceboundary(pullop.repo, tr, public, pheads)
928 928
929 929 # exclude changesets already draft locally and update the others
930 930 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
931 931 if dheads:
932 932 tr = pullop.gettransaction()
933 933 phases.advanceboundary(pullop.repo, tr, draft, dheads)
934 934
935 935 def _pullobsolete(pullop):
936 936 """utility function to pull obsolete markers from a remote
937 937
938 938 `gettransaction` is a function that returns the pull transaction, creating
939 939 one if necessary. We return the transaction to inform the calling code that
940 940 a new transaction has been created (when applicable).
941 941
942 942 Exists mostly to allow overriding for experimentation purposes"""
943 943 pullop.todosteps.remove('obsmarkers')
944 944 tr = None
945 945 if obsolete._enabled:
946 946 pullop.repo.ui.debug('fetching remote obsolete markers\n')
947 947 remoteobs = pullop.remote.listkeys('obsolete')
948 948 if 'dump0' in remoteobs:
949 949 tr = pullop.gettransaction()
950 950 for key in sorted(remoteobs, reverse=True):
951 951 if key.startswith('dump'):
952 952 data = base85.b85decode(remoteobs[key])
953 953 pullop.repo.obsstore.mergemarkers(tr, data)
954 954 pullop.repo.invalidatevolatilesets()
955 955 return tr
956 956
957 957 def caps20to10(repo):
958 958 """return a set with appropriate options to use bundle20 during getbundle"""
959 959 caps = set(['HG2X'])
960 960 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
961 961 caps.add('bundle2=' + urllib.quote(capsblob))
962 962 return caps
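`caps20to10` above percent-quotes the capabilities blob so it can be embedded as `bundle2=<quoted-blob>` inside the caps set; `getbundle` later reverses this with `urllib.unquote`. The round trip, sketched with Python 3's `urllib.parse` (the original uses the Python 2 `urllib` module):

```python
from urllib.parse import quote, unquote

# a capabilities blob may contain newlines and other unsafe characters,
# so it is quoted before being embedded in the caps entry
capsblob = 'HG2X\nb2x:listkeys\nb2x:pushkey'
cap = 'bundle2=' + quote(capsblob)

# the receiving side strips the prefix and unquotes to recover the blob
decoded = unquote(cap[len('bundle2='):])
```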
963 963
964 964 def getbundle(repo, source, heads=None, common=None, bundlecaps=None,
965 965 **kwargs):
966 966 """return a full bundle (with potentially multiple kind of parts)
967 967
968 968 Could be a bundle HG10 or a bundle HG2X depending on bundlecaps
969 969 passed. For now, the bundle can contain only changegroup, but this will
970 970 change when more part types become available for bundle2.
971 971
972 This is different from changegroup.getbundle that only returns an HG10
972 This is different from changegroup.getchangegroup that only returns an HG10
973 973 changegroup bundle. They may eventually get reunited in the future when we
974 974 have a clearer idea of the API we want to use to query different data.
975 975
976 976 The implementation is at a very early stage and will get massive rework
977 977 when the API of bundle is refined.
978 978 """
979 979 cg = None
980 980 if kwargs.get('cg', True):
981 981 # build changegroup bundle here.
982 cg = changegroup.getbundle(repo, source, heads=heads,
983 common=common, bundlecaps=bundlecaps)
982 cg = changegroup.getchangegroup(repo, source, heads=heads,
983 common=common, bundlecaps=bundlecaps)
984 984 elif 'HG2X' not in bundlecaps:
985 985 raise ValueError(_('request for bundle10 must include changegroup'))
986 986 if bundlecaps is None or 'HG2X' not in bundlecaps:
987 987 if kwargs:
988 988 raise ValueError(_('unsupported getbundle arguments: %s')
989 989 % ', '.join(sorted(kwargs.keys())))
990 990 return cg
991 991 # very crude first implementation,
992 992 # the bundle API will change and the generation will be done lazily.
993 993 b2caps = {}
994 994 for bcaps in bundlecaps:
995 995 if bcaps.startswith('bundle2='):
996 996 blob = urllib.unquote(bcaps[len('bundle2='):])
997 997 b2caps.update(bundle2.decodecaps(blob))
998 998 bundler = bundle2.bundle20(repo.ui, b2caps)
999 999 if cg:
1000 1000 bundler.newpart('b2x:changegroup', data=cg.getchunks())
1001 1001 listkeys = kwargs.get('listkeys', ())
1002 1002 for namespace in listkeys:
1003 1003 part = bundler.newpart('b2x:listkeys')
1004 1004 part.addparam('namespace', namespace)
1005 1005 keys = repo.listkeys(namespace).items()
1006 1006 part.data = pushkey.encodekeys(keys)
1007 1007 _getbundleobsmarkerpart(bundler, repo, source, heads=heads, common=common,
1008 1008 bundlecaps=bundlecaps, **kwargs)
1009 1009 _getbundleextrapart(bundler, repo, source, heads=heads, common=common,
1010 1010 bundlecaps=bundlecaps, **kwargs)
1011 1011 return util.chunkbuffer(bundler.getchunks())
1012 1012
1013 1013 def _getbundleobsmarkerpart(bundler, repo, source, heads=None, common=None,
1014 1014 bundlecaps=None, **kwargs):
1015 1015 if kwargs.get('obsmarkers', False):
1016 1016 if heads is None:
1017 1017 heads = repo.heads()
1018 1018 subset = [c.node() for c in repo.set('::%ln', heads)]
1019 1019 markers = repo.obsstore.relevantmarkers(subset)
1020 1020 buildobsmarkerspart(bundler, markers)
1021 1021
1022 1022 def _getbundleextrapart(bundler, repo, source, heads=None, common=None,
1023 1023 bundlecaps=None, **kwargs):
1024 1024 """hook function to let extensions add parts to the requested bundle"""
1025 1025 pass
1026 1026
1027 1027 def check_heads(repo, their_heads, context):
1028 1028 """check if the heads of a repo have been modified
1029 1029
1030 1030 Used by peer for unbundling.
1031 1031 """
1032 1032 heads = repo.heads()
1033 1033 heads_hash = util.sha1(''.join(sorted(heads))).digest()
1034 1034 if not (their_heads == ['force'] or their_heads == heads or
1035 1035 their_heads == ['hashed', heads_hash]):
1036 1036 # someone else committed/pushed/unbundled while we
1037 1037 # were transferring data
1038 1038 raise error.PushRaced('repository changed while %s - '
1039 1039 'please try again' % context)
1040 1040
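`check_heads` above compares the client's view of the remote heads against the current state, accepting a literal head list, the special value `['force']`, or a SHA-1 over the sorted heads. A standalone sketch using `hashlib` in place of `util.sha1` (the helper name is hypothetical):

```python
import hashlib

def heads_match(their_heads, heads):
    """standalone version of the race check in check_heads (hypothetical)"""
    heads_hash = hashlib.sha1(b''.join(sorted(heads))).digest()
    return (their_heads == ['force']
            or their_heads == heads
            or their_heads == ['hashed', heads_hash])

# two fake 20-byte node ids standing in for real changeset hashes
current = [b'\x11' * 20, b'\x22' * 20]
```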
1041 1041 def unbundle(repo, cg, heads, source, url):
1042 1042 """Apply a bundle to a repo.
1043 1043
1044 1044 This function makes sure the repo is locked during the application and has a
1045 1045 mechanism to check that no push race occurred between the creation of the
1046 1046 bundle and its application.
1047 1047
1048 1048 If the push was raced, a PushRaced exception is raised."""
1049 1049 r = 0
1050 1050 # need a transaction when processing a bundle2 stream
1051 1051 tr = None
1052 1052 lock = repo.lock()
1053 1053 try:
1054 1054 check_heads(repo, heads, 'uploading changes')
1055 1055 # push can proceed
1056 1056 if util.safehasattr(cg, 'params'):
1057 1057 try:
1058 1058 tr = repo.transaction('unbundle')
1059 1059 tr.hookargs['bundle2-exp'] = '1'
1060 1060 r = bundle2.processbundle(repo, cg, lambda: tr).reply
1061 1061 cl = repo.unfiltered().changelog
1062 1062 p = cl.writepending() and repo.root or ""
1063 1063 repo.hook('b2x-pretransactionclose', throw=True, source=source,
1064 1064 url=url, pending=p, **tr.hookargs)
1065 1065 tr.close()
1066 1066 repo.hook('b2x-transactionclose', source=source, url=url,
1067 1067 **tr.hookargs)
1068 1068 except Exception, exc:
1069 1069 exc.duringunbundle2 = True
1070 1070 raise
1071 1071 else:
1072 1072 r = changegroup.addchangegroup(repo, cg, source, url)
1073 1073 finally:
1074 1074 if tr is not None:
1075 1075 tr.release()
1076 1076 lock.release()
1077 1077 return r
@@ -1,152 +1,152
1 1 # sshserver.py - ssh protocol server support for mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
5 5 #
6 6 # This software may be used and distributed according to the terms of the
7 7 # GNU General Public License version 2 or any later version.
8 8
9 9 import util, hook, wireproto, changegroup
10 10 import os, sys
11 11
12 12 class sshserver(wireproto.abstractserverproto):
13 13 def __init__(self, ui, repo):
14 14 self.ui = ui
15 15 self.repo = repo
16 16 self.lock = None
17 17 self.fin = ui.fin
18 18 self.fout = ui.fout
19 19
20 20 hook.redirect(True)
21 21 ui.fout = repo.ui.fout = ui.ferr
22 22
23 23 # Prevent insertion/deletion of CRs
24 24 util.setbinary(self.fin)
25 25 util.setbinary(self.fout)
26 26
27 27 def getargs(self, args):
28 28 data = {}
29 29 keys = args.split()
30 30 for n in xrange(len(keys)):
31 31 argline = self.fin.readline()[:-1]
32 32 arg, l = argline.split()
33 33 if arg not in keys:
34 34 raise util.Abort("unexpected parameter %r" % arg)
35 35 if arg == '*':
36 36 star = {}
37 37 for k in xrange(int(l)):
38 38 argline = self.fin.readline()[:-1]
39 39 arg, l = argline.split()
40 40 val = self.fin.read(int(l))
41 41 star[arg] = val
42 42 data['*'] = star
43 43 else:
44 44 val = self.fin.read(int(l))
45 45 data[arg] = val
46 46 return [data[k] for k in keys]
47 47
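`getargs` above parses a simple framing: each argument arrives as a header line `<name> <length>` followed by exactly `<length>` bytes of value (with a nested repetition for the `*` catch-all). A simplified, hypothetical reader for the non-`*` case:

```python
import io

def read_args(fp, names):
    """read one header line plus length-prefixed value per expected name"""
    data = {}
    for _ in names:
        # header line: b'<name> <length>\n'
        arg, l = fp.readline()[:-1].split()
        data[arg.decode()] = fp.read(int(l))
    return [data[n] for n in names]

stream = io.BytesIO(b'key 3\nfoovalue 2\nok')
args = read_args(stream, ['key', 'value'])
```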
48 48 def getarg(self, name):
49 49 return self.getargs(name)[0]
50 50
51 51 def getfile(self, fpout):
52 52 self.sendresponse('')
53 53 count = int(self.fin.readline())
54 54 while count:
55 55 fpout.write(self.fin.read(count))
56 56 count = int(self.fin.readline())
57 57
58 58 def redirect(self):
59 59 pass
60 60
61 61 def groupchunks(self, changegroup):
62 62 while True:
63 63 d = changegroup.read(4096)
64 64 if not d:
65 65 break
66 66 yield d
67 67
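`groupchunks` above streams a changegroup in fixed 4096-byte reads until EOF. The same pattern as a self-contained generator:

```python
import io

def groupchunks(fp, size=4096):
    """yield successive fixed-size chunks from a file-like object until EOF"""
    while True:
        d = fp.read(size)
        if not d:
            break
        yield d

chunks = list(groupchunks(io.BytesIO(b'x' * 10000)))
```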
68 68 def sendresponse(self, v):
69 69 self.fout.write("%d\n" % len(v))
70 70 self.fout.write(v)
71 71 self.fout.flush()
72 72
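`sendresponse` above frames each reply as its decimal byte length, a newline, then the payload; the peer reads the length line first and then exactly that many bytes. A minimal model against an in-memory buffer:

```python
import io

def sendresponse(fout, v):
    # length line in ASCII, then the raw payload
    # (a real server would also flush the stream here)
    fout.write(b'%d\n' % len(v))
    fout.write(v)

buf = io.BytesIO()
sendresponse(buf, b'capabilities')
```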
73 73 def sendstream(self, source):
74 74 write = self.fout.write
75 75 for chunk in source.gen:
76 76 write(chunk)
77 77 self.fout.flush()
78 78
79 79 def sendpushresponse(self, rsp):
80 80 self.sendresponse('')
81 81 self.sendresponse(str(rsp.res))
82 82
83 83 def sendpusherror(self, rsp):
84 84 self.sendresponse(rsp.res)
85 85
86 86 def sendooberror(self, rsp):
87 87 self.ui.ferr.write('%s\n-\n' % rsp.message)
88 88 self.ui.ferr.flush()
89 89 self.fout.write('\n')
90 90 self.fout.flush()
91 91
92 92 def serve_forever(self):
93 93 try:
94 94 while self.serve_one():
95 95 pass
96 96 finally:
97 97 if self.lock is not None:
98 98 self.lock.release()
99 99 sys.exit(0)
100 100
101 101 handlers = {
102 102 str: sendresponse,
103 103 wireproto.streamres: sendstream,
104 104 wireproto.pushres: sendpushresponse,
105 105 wireproto.pusherr: sendpusherror,
106 106 wireproto.ooberror: sendooberror,
107 107 }
108 108
109 109 def serve_one(self):
110 110 cmd = self.fin.readline()[:-1]
111 111 if cmd and cmd in wireproto.commands:
112 112 rsp = wireproto.dispatch(self.repo, self, cmd)
113 113 self.handlers[rsp.__class__](self, rsp)
114 114 elif cmd:
115 115 impl = getattr(self, 'do_' + cmd, None)
116 116 if impl:
117 117 r = impl()
118 118 if r is not None:
119 119 self.sendresponse(r)
120 120 else: self.sendresponse("")
121 121 return cmd != ''
122 122
123 123 def do_lock(self):
124 124 '''DEPRECATED - allowing remote client to lock repo is not safe'''
125 125
126 126 self.lock = self.repo.lock()
127 127 return ""
128 128
129 129 def do_unlock(self):
130 130 '''DEPRECATED'''
131 131
132 132 if self.lock:
133 133 self.lock.release()
134 134 self.lock = None
135 135 return ""
136 136
137 137 def do_addchangegroup(self):
138 138 '''DEPRECATED'''
139 139
140 140 if not self.lock:
141 141 self.sendresponse("not locked")
142 142 return
143 143
144 144 self.sendresponse("")
145 cg = changegroup.unbundle10(self.fin, "UN")
145 cg = changegroup.cg1unpacker(self.fin, "UN")
146 146 r = changegroup.addchangegroup(self.repo, cg, 'serve', self._client())
147 147 self.lock.release()
148 148 return str(r)
149 149
150 150 def _client(self):
151 151 client = os.environ.get('SSH_CLIENT', '').split(' ', 1)[0]
152 152 return 'remote:ssh:' + client
@@ -1,869 +1,869
1 1 # wireproto.py - generic wire protocol support functions
2 2 #
3 3 # Copyright 2005-2010 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 import urllib, tempfile, os, sys
9 9 from i18n import _
10 10 from node import bin, hex
11 11 import changegroup as changegroupmod, bundle2, pushkey as pushkeymod
12 12 import peer, error, encoding, util, store, exchange
13 13
14 14
15 15 class abstractserverproto(object):
16 16 """abstract class that summarizes the protocol API
17 17
18 18 Used as reference and documentation.
19 19 """
20 20
21 21 def getargs(self, args):
22 22 """return the value for arguments in <args>
23 23
24 24 returns a list of values (same order as <args>)"""
25 25 raise NotImplementedError()
26 26
27 27 def getfile(self, fp):
28 28 """write the whole content of a file into a file like object
29 29
30 30 The file is in the form::
31 31
32 32 (<chunk-size>\n<chunk>)+0\n
33 33
34 34 chunk size is the ASCII representation of the int.
35 35 """
36 36 raise NotImplementedError()
37 37
38 38 def redirect(self):
39 39 """may setup interception for stdout and stderr
40 40
41 41 See also the `restore` method."""
42 42 raise NotImplementedError()
43 43
44 44 # If the `redirect` function does install interception, the `restore`
45 45 # function MUST be defined. If interception is not used, this function
46 46 # MUST NOT be defined.
47 47 #
48 48 # left commented here on purpose
49 49 #
50 50 #def restore(self):
51 51 # """reinstall previous stdout and stderr and return intercepted stdout
52 52 # """
53 53 # raise NotImplementedError()
54 54
55 55 def groupchunks(self, cg):
56 56 """return 4096-byte chunks from a changegroup object
57 57
58 58 Some protocols may have compressed the contents."""
59 59 raise NotImplementedError()
60 60
61 61 # abstract batching support
62 62
63 63 class future(object):
64 64 '''placeholder for a value to be set later'''
65 65 def set(self, value):
66 66 if util.safehasattr(self, 'value'):
67 67 raise error.RepoError("future is already set")
68 68 self.value = value
69 69
70 70 class batcher(object):
71 71 '''base class for batches of commands submittable in a single request
72 72
73 73 All methods invoked on instances of this class are simply queued and
74 74 return a future for the result. Once you call submit(), all the queued
75 75 calls are performed and the results set in their respective futures.
76 76 '''
77 77 def __init__(self):
78 78 self.calls = []
79 79 def __getattr__(self, name):
80 80 def call(*args, **opts):
81 81 resref = future()
82 82 self.calls.append((name, args, opts, resref,))
83 83 return resref
84 84 return call
85 85 def submit(self):
86 86 pass
87 87
88 88 class localbatch(batcher):
89 89 '''performs the queued calls directly'''
90 90 def __init__(self, local):
91 91 batcher.__init__(self)
92 92 self.local = local
93 93 def submit(self):
94 94 for name, args, opts, resref in self.calls:
95 95 resref.set(getattr(self.local, name)(*args, **opts))
96 96
97 97 class remotebatch(batcher):
98 98 '''batches the queued calls; uses as few roundtrips as possible'''
99 99 def __init__(self, remote):
100 100 '''remote must support _submitbatch(encbatch) and
101 101 _submitone(op, encargs)'''
102 102 batcher.__init__(self)
103 103 self.remote = remote
104 104 def submit(self):
105 105 req, rsp = [], []
106 106 for name, args, opts, resref in self.calls:
107 107 mtd = getattr(self.remote, name)
108 108 batchablefn = getattr(mtd, 'batchable', None)
109 109 if batchablefn is not None:
110 110 batchable = batchablefn(mtd.im_self, *args, **opts)
111 111 encargsorres, encresref = batchable.next()
112 112 if encresref:
113 113 req.append((name, encargsorres,))
114 114 rsp.append((batchable, encresref, resref,))
115 115 else:
116 116 resref.set(encargsorres)
117 117 else:
118 118 if req:
119 119 self._submitreq(req, rsp)
120 120 req, rsp = [], []
121 121 resref.set(mtd(*args, **opts))
122 122 if req:
123 123 self._submitreq(req, rsp)
124 124 def _submitreq(self, req, rsp):
125 125 encresults = self.remote._submitbatch(req)
126 126 for encres, r in zip(encresults, rsp):
127 127 batchable, encresref, resref = r
128 128 encresref.set(encres)
129 129 resref.set(batchable.next())
130 130
131 131 def batchable(f):
132 132 '''annotation for batchable methods
133 133
134 134 Such methods must implement a coroutine as follows:
135 135
136 136 @batchable
137 137 def sample(self, one, two=None):
138 138 # Handle locally computable results first:
139 139 if not one:
140 140 yield "a local result", None
141 141 # Build list of encoded arguments suitable for your wire protocol:
142 142 encargs = [('one', encode(one),), ('two', encode(two),)]
143 143 # Create future for injection of encoded result:
144 144 encresref = future()
145 145 # Return encoded arguments and future:
146 146 yield encargs, encresref
147 147 # Assuming the future to be filled with the result from the batched
148 148 # request now. Decode it:
149 149 yield decode(encresref.value)
150 150
151 151 The decorator returns a function which wraps this coroutine as a plain
152 152 method, but adds the original method as an attribute called "batchable",
153 153 which is used by remotebatch to split the call into separate encoding and
154 154 decoding phases.
155 155 '''
156 156 def plain(*args, **opts):
157 157 batchable = f(*args, **opts)
158 158 encargsorres, encresref = batchable.next()
159 159 if not encresref:
160 160 return encargsorres # a local result in this case
161 161 self = args[0]
162 162 encresref.set(self._submitone(f.func_name, encargsorres))
163 163 return batchable.next()
164 164 setattr(plain, 'batchable', f)
165 165 return plain
166 166
167 167 # list of nodes encoding / decoding
168 168
169 169 def decodelist(l, sep=' '):
170 170 if l:
171 171 return map(bin, l.split(sep))
172 172 return []
173 173
174 174 def encodelist(l, sep=' '):
175 175 return sep.join(map(hex, l))
176 176
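The list helpers above round-trip binary nodes through hex; a standalone sketch using `binascii` in place of Mercurial's `node.bin`/`node.hex` (helper names are mine to avoid clashing with the originals):

```python
from binascii import hexlify, unhexlify

def encodenodes(nodes, sep=' '):
    # hex-encode each 20-byte binary node and join with the separator
    return sep.join(hexlify(n).decode('ascii') for n in nodes)

def decodenodes(data, sep=' '):
    # inverse of encodenodes; an empty string decodes to an empty list
    if data:
        return [unhexlify(s) for s in data.split(sep)]
    return []

nodes = [b'\x00' * 20, b'\xff' * 20]
wire = encodenodes(nodes)
assert decodenodes(wire) == nodes
assert decodenodes('') == []
```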
177 177 # batched call argument encoding
178 178
179 179 def escapearg(plain):
180 180 return (plain
181 181 .replace(':', '::')
182 182 .replace(',', ':,')
183 183 .replace(';', ':;')
184 184 .replace('=', ':='))
185 185
186 186 def unescapearg(escaped):
187 187 return (escaped
188 188 .replace(':=', '=')
189 189 .replace(':;', ';')
190 190 .replace(':,', ',')
191 191 .replace('::', ':'))
192 192
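A quick round-trip check of the batch-argument escaping above (standalone copies of the two functions). Note that `':'` is handled first when escaping and last when unescaping, since it doubles as the escape character:

```python
def escapearg(plain):
    # ':' must be escaped first, because it is the escape character itself
    return (plain.replace(':', '::')
                 .replace(',', ':,')
                 .replace(';', ':;')
                 .replace('=', ':='))

def unescapearg(escaped):
    # reverse order: unescape ':' last
    return (escaped.replace(':=', '=')
                   .replace(':;', ';')
                   .replace(':,', ',')
                   .replace('::', ':'))

sample = 'key=a,b;c:d'
assert escapearg(sample) == 'key:=a:,b:;c::d'
assert unescapearg(escapearg(sample)) == sample
```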
193 193 # mapping of options accepted by getbundle and their types
194 194 #
195 195 # Meant to be extended by extensions. It is the extension's responsibility to
196 196 # ensure such options are properly processed in exchange.getbundle.
197 197 #
198 198 # supported types are:
199 199 #
200 200 # :nodes: list of binary nodes
201 201 # :csv: list of comma-separated values
202 202 # :plain: string with no transformation needed.
203 203 gboptsmap = {'heads': 'nodes',
204 204 'common': 'nodes',
205 205 'obsmarkers': 'boolean',
206 206 'bundlecaps': 'csv',
207 207 'listkeys': 'csv',
208 208 'cg': 'boolean'}
209 209
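Each option type in the table above maps to a simple wire encoding on the client side (compare `wirepeer.getbundle` later in this file); a standalone sketch of that mapping:

```python
from binascii import hexlify

def encodegetbundleopt(keytype, value):
    # Encode one getbundle option for the wire, per the type table above.
    # (standalone sketch; Mercurial's real code uses encodelist for 'nodes')
    if keytype == 'nodes':
        return ' '.join(hexlify(n).decode('ascii') for n in value)
    elif keytype == 'csv':
        return ','.join(value)
    elif keytype == 'boolean':
        return '%i' % bool(value)
    elif keytype == 'plain':
        return value
    raise KeyError('unknown getbundle option type %s' % keytype)

assert encodegetbundleopt('boolean', True) == '1'
assert encodegetbundleopt('csv', ['a', 'b']) == 'a,b'
```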
210 210 # client side
211 211
212 212 class wirepeer(peer.peerrepository):
213 213
214 214 def batch(self):
215 215 return remotebatch(self)
216 216 def _submitbatch(self, req):
217 217 cmds = []
218 218 for op, argsdict in req:
219 219 args = ','.join('%s=%s' % p for p in argsdict.iteritems())
220 220 cmds.append('%s %s' % (op, args))
221 221 rsp = self._call("batch", cmds=';'.join(cmds))
222 222 return rsp.split(';')
223 223 def _submitone(self, op, args):
224 224 return self._call(op, **args)
225 225
226 226 @batchable
227 227 def lookup(self, key):
228 228 self.requirecap('lookup', _('look up remote revision'))
229 229 f = future()
230 230 yield {'key': encoding.fromlocal(key)}, f
231 231 d = f.value
232 232 success, data = d[:-1].split(" ", 1)
233 233 if int(success):
234 234 yield bin(data)
235 235 self._abort(error.RepoError(data))
236 236
237 237 @batchable
238 238 def heads(self):
239 239 f = future()
240 240 yield {}, f
241 241 d = f.value
242 242 try:
243 243 yield decodelist(d[:-1])
244 244 except ValueError:
245 245 self._abort(error.ResponseError(_("unexpected response:"), d))
246 246
247 247 @batchable
248 248 def known(self, nodes):
249 249 f = future()
250 250 yield {'nodes': encodelist(nodes)}, f
251 251 d = f.value
252 252 try:
253 253 yield [bool(int(b)) for b in d]
254 254 except ValueError:
255 255 self._abort(error.ResponseError(_("unexpected response:"), d))
256 256
257 257 @batchable
258 258 def branchmap(self):
259 259 f = future()
260 260 yield {}, f
261 261 d = f.value
262 262 try:
263 263 branchmap = {}
264 264 for branchpart in d.splitlines():
265 265 branchname, branchheads = branchpart.split(' ', 1)
266 266 branchname = encoding.tolocal(urllib.unquote(branchname))
267 267 branchheads = decodelist(branchheads)
268 268 branchmap[branchname] = branchheads
269 269 yield branchmap
270 270 except TypeError:
271 271 self._abort(error.ResponseError(_("unexpected response:"), d))
272 272
273 273 def branches(self, nodes):
274 274 n = encodelist(nodes)
275 275 d = self._call("branches", nodes=n)
276 276 try:
277 277 br = [tuple(decodelist(b)) for b in d.splitlines()]
278 278 return br
279 279 except ValueError:
280 280 self._abort(error.ResponseError(_("unexpected response:"), d))
281 281
282 282 def between(self, pairs):
283 283 batch = 8 # avoid giant requests
284 284 r = []
285 285 for i in xrange(0, len(pairs), batch):
286 286 n = " ".join([encodelist(p, '-') for p in pairs[i:i + batch]])
287 287 d = self._call("between", pairs=n)
288 288 try:
289 289 r.extend(l and decodelist(l) or [] for l in d.splitlines())
290 290 except ValueError:
291 291 self._abort(error.ResponseError(_("unexpected response:"), d))
292 292 return r
293 293
294 294 @batchable
295 295 def pushkey(self, namespace, key, old, new):
296 296 if not self.capable('pushkey'):
297 297 yield False, None
298 298 f = future()
299 299 self.ui.debug('preparing pushkey for "%s:%s"\n' % (namespace, key))
300 300 yield {'namespace': encoding.fromlocal(namespace),
301 301 'key': encoding.fromlocal(key),
302 302 'old': encoding.fromlocal(old),
303 303 'new': encoding.fromlocal(new)}, f
304 304 d = f.value
305 305 d, output = d.split('\n', 1)
306 306 try:
307 307 d = bool(int(d))
308 308 except ValueError:
309 309 raise error.ResponseError(
310 310 _('push failed (unexpected response):'), d)
311 311 for l in output.splitlines(True):
312 312 self.ui.status(_('remote: '), l)
313 313 yield d
314 314
315 315 @batchable
316 316 def listkeys(self, namespace):
317 317 if not self.capable('pushkey'):
318 318 yield {}, None
319 319 f = future()
320 320 self.ui.debug('preparing listkeys for "%s"\n' % namespace)
321 321 yield {'namespace': encoding.fromlocal(namespace)}, f
322 322 d = f.value
323 323 yield pushkeymod.decodekeys(d)
324 324
325 325 def stream_out(self):
326 326 return self._callstream('stream_out')
327 327
328 328 def changegroup(self, nodes, kind):
329 329 n = encodelist(nodes)
330 330 f = self._callcompressable("changegroup", roots=n)
331 return changegroupmod.unbundle10(f, 'UN')
331 return changegroupmod.cg1unpacker(f, 'UN')
332 332
333 333 def changegroupsubset(self, bases, heads, kind):
334 334 self.requirecap('changegroupsubset', _('look up remote changes'))
335 335 bases = encodelist(bases)
336 336 heads = encodelist(heads)
337 337 f = self._callcompressable("changegroupsubset",
338 338 bases=bases, heads=heads)
339 return changegroupmod.unbundle10(f, 'UN')
339 return changegroupmod.cg1unpacker(f, 'UN')
340 340
341 341 def getbundle(self, source, **kwargs):
342 342 self.requirecap('getbundle', _('look up remote changes'))
343 343 opts = {}
344 344 for key, value in kwargs.iteritems():
345 345 if value is None:
346 346 continue
347 347 keytype = gboptsmap.get(key)
348 348 if keytype is None:
349 349 assert False, 'unexpected'
350 350 elif keytype == 'nodes':
351 351 value = encodelist(value)
352 352 elif keytype == 'csv':
353 353 value = ','.join(value)
354 354 elif keytype == 'boolean':
355 355 value = '%i' % bool(value)
356 356 elif keytype != 'plain':
357 357 raise KeyError('unknown getbundle option type %s'
358 358 % keytype)
359 359 opts[key] = value
360 360 f = self._callcompressable("getbundle", **opts)
361 361 bundlecaps = kwargs.get('bundlecaps')
362 362 if bundlecaps is not None and 'HG2X' in bundlecaps:
363 363 return bundle2.unbundle20(self.ui, f)
364 364 else:
365 return changegroupmod.unbundle10(f, 'UN')
365 return changegroupmod.cg1unpacker(f, 'UN')
366 366
367 367 def unbundle(self, cg, heads, source):
368 368 '''Send cg (a readable file-like object representing the
369 369 changegroup to push, typically a chunkbuffer object) to the
370 370 remote server as a bundle.
371 371
372 372 When pushing a bundle10 stream, return an integer indicating the
373 373 result of the push (see localrepository.addchangegroup()).
374 374
375 375 When pushing a bundle20 stream, return a bundle20 stream.'''
376 376
377 377 if heads != ['force'] and self.capable('unbundlehash'):
378 378 heads = encodelist(['hashed',
379 379 util.sha1(''.join(sorted(heads))).digest()])
380 380 else:
381 381 heads = encodelist(heads)
382 382
383 383 if util.safehasattr(cg, 'deltaheader'):
384 384 # this a bundle10, do the old style call sequence
385 385 ret, output = self._callpush("unbundle", cg, heads=heads)
386 386 if ret == "":
387 387 raise error.ResponseError(
388 388 _('push failed:'), output)
389 389 try:
390 390 ret = int(ret)
391 391 except ValueError:
392 392 raise error.ResponseError(
393 393 _('push failed (unexpected response):'), ret)
394 394
395 395 for l in output.splitlines(True):
396 396 self.ui.status(_('remote: '), l)
397 397 else:
398 398 # bundle2 push. Send a stream, fetch a stream.
399 399 stream = self._calltwowaystream('unbundle', cg, heads=heads)
400 400 ret = bundle2.unbundle20(self.ui, stream)
401 401 return ret
402 402
403 403 def debugwireargs(self, one, two, three=None, four=None, five=None):
404 404 # don't pass optional arguments left at their default value
405 405 opts = {}
406 406 if three is not None:
407 407 opts['three'] = three
408 408 if four is not None:
409 409 opts['four'] = four
410 410 return self._call('debugwireargs', one=one, two=two, **opts)
411 411
412 412 def _call(self, cmd, **args):
413 413 """execute <cmd> on the server
414 414
415 415 The command is expected to return a simple string.
416 416
417 417 returns the server reply as a string."""
418 418 raise NotImplementedError()
419 419
420 420 def _callstream(self, cmd, **args):
421 421 """execute <cmd> on the server
422 422
423 423 The command is expected to return a stream.
424 424
425 425 returns the server reply as a file like object."""
426 426 raise NotImplementedError()
427 427
428 428 def _callcompressable(self, cmd, **args):
429 429 """execute <cmd> on the server
430 430
431 431 The command is expected to return a stream.
432 432
433 433 The stream may have been compressed in some implementations. This
434 434 function takes care of the decompression. This is the only difference
435 435 with _callstream.
436 436
437 437 returns the server reply as a file like object.
438 438 """
439 439 raise NotImplementedError()
440 440
441 441 def _callpush(self, cmd, fp, **args):
442 442 """execute <cmd> on the server
443 443
444 444 The command is expected to be related to a push. Push has a special
445 445 return method.
446 446
447 447 returns the server reply as a (ret, output) tuple. ret is either
448 448 empty (error) or a stringified int.
449 449 """
450 450 raise NotImplementedError()
451 451
452 452 def _calltwowaystream(self, cmd, fp, **args):
453 453 """execute <cmd> on the server
454 454
455 455 The command will send a stream to the server and get a stream in reply.
456 456 """
457 457 raise NotImplementedError()
458 458
459 459 def _abort(self, exception):
460 460 """clearly abort the wire protocol connection and raise the exception
461 461 """
462 462 raise NotImplementedError()
463 463
464 464 # server side
465 465
466 466 # wire protocol command can either return a string or one of these classes.
467 467 class streamres(object):
468 468 """wireproto reply: binary stream
469 469
470 470 The call was successful and the result is a stream.
471 471 Iterate on the `self.gen` attribute to retrieve chunks.
472 472 """
473 473 def __init__(self, gen):
474 474 self.gen = gen
475 475
476 476 class pushres(object):
477 477 """wireproto reply: success with simple integer return
478 478
479 479 The call was successful and returned an integer contained in `self.res`.
480 480 """
481 481 def __init__(self, res):
482 482 self.res = res
483 483
484 484 class pusherr(object):
485 485 """wireproto reply: failure
486 486
487 487 The call failed. The `self.res` attribute contains the error message.
488 488 """
489 489 def __init__(self, res):
490 490 self.res = res
491 491
492 492 class ooberror(object):
493 493 """wireproto reply: failure of a batch of operations
494 494
495 495 Something failed during a batch call. The error message is stored in
496 496 `self.message`.
497 497 """
498 498 def __init__(self, message):
499 499 self.message = message
500 500
501 501 def dispatch(repo, proto, command):
502 502 repo = repo.filtered("served")
503 503 func, spec = commands[command]
504 504 args = proto.getargs(spec)
505 505 return func(repo, proto, *args)
506 506
507 507 def options(cmd, keys, others):
508 508 opts = {}
509 509 for k in keys:
510 510 if k in others:
511 511 opts[k] = others[k]
512 512 del others[k]
513 513 if others:
514 514 sys.stderr.write("warning: %s ignored unexpected arguments %s\n"
515 515 % (cmd, ",".join(others)))
516 516 return opts
517 517
518 518 # list of commands
519 519 commands = {}
520 520
521 521 def wireprotocommand(name, args=''):
522 522 """decorator for wire protocol command"""
523 523 def register(func):
524 524 commands[name] = (func, args)
525 525 return func
526 526 return register
527 527
528 528 @wireprotocommand('batch', 'cmds *')
529 529 def batch(repo, proto, cmds, others):
530 530 repo = repo.filtered("served")
531 531 res = []
532 532 for pair in cmds.split(';'):
533 533 op, args = pair.split(' ', 1)
534 534 vals = {}
535 535 for a in args.split(','):
536 536 if a:
537 537 n, v = a.split('=')
538 538 vals[n] = unescapearg(v)
539 539 func, spec = commands[op]
540 540 if spec:
541 541 keys = spec.split()
542 542 data = {}
543 543 for k in keys:
544 544 if k == '*':
545 545 star = {}
546 546 for key in vals.keys():
547 547 if key not in keys:
548 548 star[key] = vals[key]
549 549 data['*'] = star
550 550 else:
551 551 data[k] = vals[k]
552 552 result = func(repo, proto, *[data[k] for k in keys])
553 553 else:
554 554 result = func(repo, proto)
555 555 if isinstance(result, ooberror):
556 556 return result
557 557 res.append(escapearg(result))
558 558 return ';'.join(res)
559 559
560 560 @wireprotocommand('between', 'pairs')
561 561 def between(repo, proto, pairs):
562 562 pairs = [decodelist(p, '-') for p in pairs.split(" ")]
563 563 r = []
564 564 for b in repo.between(pairs):
565 565 r.append(encodelist(b) + "\n")
566 566 return "".join(r)
567 567
568 568 @wireprotocommand('branchmap')
569 569 def branchmap(repo, proto):
570 570 branchmap = repo.branchmap()
571 571 heads = []
572 572 for branch, nodes in branchmap.iteritems():
573 573 branchname = urllib.quote(encoding.fromlocal(branch))
574 574 branchnodes = encodelist(nodes)
575 575 heads.append('%s %s' % (branchname, branchnodes))
576 576 return '\n'.join(heads)
577 577
578 578 @wireprotocommand('branches', 'nodes')
579 579 def branches(repo, proto, nodes):
580 580 nodes = decodelist(nodes)
581 581 r = []
582 582 for b in repo.branches(nodes):
583 583 r.append(encodelist(b) + "\n")
584 584 return "".join(r)
585 585
586 586
587 587 wireprotocaps = ['lookup', 'changegroupsubset', 'branchmap', 'pushkey',
588 588 'known', 'getbundle', 'unbundlehash', 'batch']
589 589
590 590 def _capabilities(repo, proto):
591 591 """return a list of capabilities for a repo
592 592
593 593 This function exists to allow extensions to easily wrap capabilities
594 594 computation
595 595
596 596 - returns a list: easy to alter
597 597 - changes done here will be propagated to both the `capabilities` and
598 598 `hello` commands without any other action needed.
599 599 """
600 600 # copy to prevent modification of the global list
601 601 caps = list(wireprotocaps)
602 602 if _allowstream(repo.ui):
603 603 if repo.ui.configbool('server', 'preferuncompressed', False):
604 604 caps.append('stream-preferred')
605 605 requiredformats = repo.requirements & repo.supportedformats
606 606 # if our local revlogs are just revlogv1, add 'stream' cap
607 607 if not requiredformats - set(('revlogv1',)):
608 608 caps.append('stream')
609 609 # otherwise, add 'streamreqs' detailing our local revlog format
610 610 else:
611 611 caps.append('streamreqs=%s' % ','.join(requiredformats))
612 612 if repo.ui.configbool('experimental', 'bundle2-exp', False):
613 613 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
614 614 caps.append('bundle2-exp=' + urllib.quote(capsblob))
615 615 caps.append('unbundle=%s' % ','.join(changegroupmod.bundlepriority))
616 616 caps.append('httpheader=1024')
617 617 return caps
618 618
619 619 # If you are writing an extension and consider wrapping this function,
620 620 # wrap `_capabilities` instead.
621 621 @wireprotocommand('capabilities')
622 622 def capabilities(repo, proto):
623 623 return ' '.join(_capabilities(repo, proto))
624 624
625 625 @wireprotocommand('changegroup', 'roots')
626 626 def changegroup(repo, proto, roots):
627 627 nodes = decodelist(roots)
628 628 cg = changegroupmod.changegroup(repo, nodes, 'serve')
629 629 return streamres(proto.groupchunks(cg))
630 630
631 631 @wireprotocommand('changegroupsubset', 'bases heads')
632 632 def changegroupsubset(repo, proto, bases, heads):
633 633 bases = decodelist(bases)
634 634 heads = decodelist(heads)
635 635 cg = changegroupmod.changegroupsubset(repo, bases, heads, 'serve')
636 636 return streamres(proto.groupchunks(cg))
637 637
638 638 @wireprotocommand('debugwireargs', 'one two *')
639 639 def debugwireargs(repo, proto, one, two, others):
640 640 # only accept optional args from the known set
641 641 opts = options('debugwireargs', ['three', 'four'], others)
642 642 return repo.debugwireargs(one, two, **opts)
643 643
644 644 # List of options accepted by getbundle.
645 645 #
646 646 # Meant to be extended by extensions. It is the extension's responsibility to
647 647 # ensure such options are properly processed in exchange.getbundle.
648 648 gboptslist = ['heads', 'common', 'bundlecaps']
649 649
650 650 @wireprotocommand('getbundle', '*')
651 651 def getbundle(repo, proto, others):
652 652 opts = options('getbundle', gboptsmap.keys(), others)
653 653 for k, v in opts.iteritems():
654 654 keytype = gboptsmap[k]
655 655 if keytype == 'nodes':
656 656 opts[k] = decodelist(v)
657 657 elif keytype == 'csv':
658 658 opts[k] = set(v.split(','))
659 659 elif keytype == 'boolean':
660 660 opts[k] = bool(v)
661 661 elif keytype != 'plain':
662 662 raise KeyError('unknown getbundle option type %s'
663 663 % keytype)
664 664 cg = exchange.getbundle(repo, 'serve', **opts)
665 665 return streamres(proto.groupchunks(cg))
666 666
667 667 @wireprotocommand('heads')
668 668 def heads(repo, proto):
669 669 h = repo.heads()
670 670 return encodelist(h) + "\n"
671 671
672 672 @wireprotocommand('hello')
673 673 def hello(repo, proto):
674 674 '''the hello command returns a set of lines describing various
675 675 interesting things about the server, in an RFC822-like format.
676 676 Currently the only one defined is "capabilities", which
677 677 consists of a line in the form:
678 678
679 679 capabilities: space separated list of tokens
680 680 '''
681 681 return "capabilities: %s\n" % (capabilities(repo, proto))
682 682
683 683 @wireprotocommand('listkeys', 'namespace')
684 684 def listkeys(repo, proto, namespace):
685 685 d = repo.listkeys(encoding.tolocal(namespace)).items()
686 686 return pushkeymod.encodekeys(d)
687 687
688 688 @wireprotocommand('lookup', 'key')
689 689 def lookup(repo, proto, key):
690 690 try:
691 691 k = encoding.tolocal(key)
692 692 c = repo[k]
693 693 r = c.hex()
694 694 success = 1
695 695 except Exception, inst:
696 696 r = str(inst)
697 697 success = 0
698 698 return "%s %s\n" % (success, r)
699 699
700 700 @wireprotocommand('known', 'nodes *')
701 701 def known(repo, proto, nodes, others):
702 702 return ''.join(b and "1" or "0" for b in repo.known(decodelist(nodes)))
703 703
704 704 @wireprotocommand('pushkey', 'namespace key old new')
705 705 def pushkey(repo, proto, namespace, key, old, new):
706 706 # compatibility with pre-1.8 clients which were accidentally
707 707 # sending raw binary nodes rather than utf-8-encoded hex
708 708 if len(new) == 20 and new.encode('string-escape') != new:
709 709 # looks like it could be a binary node
710 710 try:
711 711 new.decode('utf-8')
712 712 new = encoding.tolocal(new) # but cleanly decodes as UTF-8
713 713 except UnicodeDecodeError:
714 714 pass # binary, leave unmodified
715 715 else:
716 716 new = encoding.tolocal(new) # normal path
717 717
718 718 if util.safehasattr(proto, 'restore'):
719 719
720 720 proto.redirect()
721 721
722 722 try:
723 723 r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
724 724 encoding.tolocal(old), new) or False
725 725 except util.Abort:
726 726 r = False
727 727
728 728 output = proto.restore()
729 729
730 730 return '%s\n%s' % (int(r), output)
731 731
732 732 r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
733 733 encoding.tolocal(old), new)
734 734 return '%s\n' % int(r)
735 735
736 736 def _allowstream(ui):
737 737 return ui.configbool('server', 'uncompressed', True, untrusted=True)
738 738
739 739 def _walkstreamfiles(repo):
740 740 # this is its own function so extensions can override it
741 741 return repo.store.walk()
742 742
743 743 @wireprotocommand('stream_out')
744 744 def stream(repo, proto):
745 745 '''If the server supports streaming clone, it advertises the "stream"
746 746 capability with a value representing the version and flags of the repo
747 747 it is serving. The client checks whether it understands the format.
748 748
749 749 The format is simple: the server writes out a line with the number
750 750 of files, then the total number of bytes to be transferred (separated
751 751 by a space). Then, for each file, the server first writes the filename
752 752 and file size (separated by the null character), then the file contents.
753 753 '''
754 754
755 755 if not _allowstream(repo.ui):
756 756 return '1\n'
757 757
758 758 entries = []
759 759 total_bytes = 0
760 760 try:
761 761 # get consistent snapshot of repo, lock during scan
762 762 lock = repo.lock()
763 763 try:
764 764 repo.ui.debug('scanning\n')
765 765 for name, ename, size in _walkstreamfiles(repo):
766 766 if size:
767 767 entries.append((name, size))
768 768 total_bytes += size
769 769 finally:
770 770 lock.release()
771 771 except error.LockError:
772 772 return '2\n' # error: 2
773 773
774 774 def streamer(repo, entries, total):
775 775 '''stream out all metadata files in repository.'''
776 776 yield '0\n' # success
777 777 repo.ui.debug('%d files, %d bytes to transfer\n' %
778 778 (len(entries), total_bytes))
779 779 yield '%d %d\n' % (len(entries), total_bytes)
780 780
781 781 sopener = repo.sopener
782 782 oldaudit = sopener.mustaudit
783 783 debugflag = repo.ui.debugflag
784 784 sopener.mustaudit = False
785 785
786 786 try:
787 787 for name, size in entries:
788 788 if debugflag:
789 789 repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
790 790 # partially encode name over the wire for backwards compat
791 791 yield '%s\0%d\n' % (store.encodedir(name), size)
792 792 if size <= 65536:
793 793 fp = sopener(name)
794 794 try:
795 795 data = fp.read(size)
796 796 finally:
797 797 fp.close()
798 798 yield data
799 799 else:
800 800 for chunk in util.filechunkiter(sopener(name), limit=size):
801 801 yield chunk
802 802 # replace with "finally:" when support for python 2.4 has been dropped
803 803 except Exception:
804 804 sopener.mustaudit = oldaudit
805 805 raise
806 806 sopener.mustaudit = oldaudit
807 807
808 808 return streamres(streamer(repo, entries, total_bytes))
809 809
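The streaming-clone payload produced by `streamer` above can be parsed on the client with a short helper (a sketch; the function name is mine, not Mercurial's):

```python
import io

def parsestreamclone(fp):
    # Parse the stream_out payload: a status line, then "<nfiles> <nbytes>\n",
    # then for each file "<name>\0<size>\n" followed by <size> raw bytes.
    status = fp.readline()
    if status.strip() != b'0':
        raise ValueError('server refused streaming clone')
    nfiles, totalbytes = map(int, fp.readline().split())
    files = []
    for _ in range(nfiles):
        header = fp.readline()                       # "<name>\0<size>\n"
        name, size = header.rstrip(b'\n').split(b'\0')
        files.append((name, fp.read(int(size))))
    return files

payload = io.BytesIO(b'0\n1 5\na.txt\x005\nhello')
files = parsestreamclone(payload)
assert files == [(b'a.txt', b'hello')]
```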
810 810 @wireprotocommand('unbundle', 'heads')
811 811 def unbundle(repo, proto, heads):
812 812 their_heads = decodelist(heads)
813 813
814 814 try:
815 815 proto.redirect()
816 816
817 817 exchange.check_heads(repo, their_heads, 'preparing changes')
818 818
819 819 # write bundle data to temporary file because it can be big
820 820 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
821 821 fp = os.fdopen(fd, 'wb+')
822 822 r = 0
823 823 try:
824 824 proto.getfile(fp)
825 825 fp.seek(0)
826 826 gen = exchange.readbundle(repo.ui, fp, None)
827 827 r = exchange.unbundle(repo, gen, their_heads, 'serve',
828 828 proto._client())
829 829 if util.safehasattr(r, 'addpart'):
830 830 # The return looks streamable; we are in the bundle2 case and
831 831 # should return a stream.
832 832 return streamres(r.getchunks())
833 833 return pushres(r)
834 834
835 835 finally:
836 836 fp.close()
837 837 os.unlink(tempname)
838 838 except error.BundleValueError, exc:
839 839 bundler = bundle2.bundle20(repo.ui)
840 840 errpart = bundler.newpart('B2X:ERROR:UNSUPPORTEDCONTENT')
841 841 if exc.parttype is not None:
842 842 errpart.addparam('parttype', exc.parttype)
843 843 if exc.params:
844 844 errpart.addparam('params', '\0'.join(exc.params))
845 845 return streamres(bundler.getchunks())
846 846 except util.Abort, inst:
847 847 # The old code we moved used sys.stderr directly.
848 848 # We did not change it to minimise code change.
849 849 # This needs to be moved to something proper.
850 850 # Feel free to do it.
851 851 if getattr(inst, 'duringunbundle2', False):
852 852 bundler = bundle2.bundle20(repo.ui)
853 853 manargs = [('message', str(inst))]
854 854 advargs = []
855 855 if inst.hint is not None:
856 856 advargs.append(('hint', inst.hint))
857 857 bundler.addpart(bundle2.bundlepart('B2X:ERROR:ABORT',
858 858 manargs, advargs))
859 859 return streamres(bundler.getchunks())
860 860 else:
861 861 sys.stderr.write("abort: %s\n" % inst)
862 862 return pushres(0)
863 863 except error.PushRaced, exc:
864 864 if getattr(exc, 'duringunbundle2', False):
865 865 bundler = bundle2.bundle20(repo.ui)
866 866 bundler.newpart('B2X:ERROR:PUSHRACED', [('message', str(exc))])
867 867 return streamres(bundler.getchunks())
868 868 else:
869 869 return pusherr(str(exc))
@@ -1,1198 +1,1198
1 1
2 2 $ getmainid() {
3 3 > hg -R main log --template '{node}\n' --rev "$1"
4 4 > }
5 5
6 6 Create an extension to test bundle2 API
7 7
8 8 $ cat > bundle2.py << EOF
9 9 > """A small extension to test bundle2 implementation
10 10 >
11 11 > The current bundle2 implementation is far too limited to be used in any
12 12 > core code. We still need to be able to test it while it grows up.
13 13 > """
14 14 >
15 15 > import sys, os
16 16 > from mercurial import cmdutil
17 17 > from mercurial import util
18 18 > from mercurial import bundle2
19 19 > from mercurial import scmutil
20 20 > from mercurial import discovery
21 21 > from mercurial import changegroup
22 22 > from mercurial import error
23 23 > from mercurial import obsolete
24 24 >
25 25 > obsolete._enabled = True
26 26 >
27 27 > try:
28 28 > import msvcrt
29 29 > msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)
30 30 > msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
31 31 > msvcrt.setmode(sys.stderr.fileno(), os.O_BINARY)
32 32 > except ImportError:
33 33 > pass
34 34 >
35 35 > cmdtable = {}
36 36 > command = cmdutil.command(cmdtable)
37 37 >
38 38 > ELEPHANTSSONG = """Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
39 39 > Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
40 40 > Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko."""
41 41 > assert len(ELEPHANTSSONG) == 178 # a future test says 178 bytes, trust it.
42 42 >
43 43 > @bundle2.parthandler('test:song')
44 44 > def songhandler(op, part):
45 45 > """handle a "test:song" bundle2 part, printing the lyrics on stdout"""
46 46 > op.ui.write('The choir starts singing:\n')
47 47 > verses = 0
48 48 > for line in part.read().split('\n'):
49 49 > op.ui.write(' %s\n' % line)
50 50 > verses += 1
51 51 > op.records.add('song', {'verses': verses})
52 52 >
53 53 > @bundle2.parthandler('test:ping')
54 54 > def pinghandler(op, part):
55 55 > op.ui.write('received ping request (id %i)\n' % part.id)
56 56 > if op.reply is not None and 'ping-pong' in op.reply.capabilities:
57 57 > op.ui.write_err('replying to ping request (id %i)\n' % part.id)
58 58 > op.reply.newpart('test:pong', [('in-reply-to', str(part.id))])
59 59 >
60 60 > @bundle2.parthandler('test:debugreply')
61 61 > def debugreply(op, part):
62 62 > """print data about the capacity of the bundle reply"""
63 63 > if op.reply is None:
64 64 > op.ui.write('debugreply: no reply\n')
65 65 > else:
66 66 > op.ui.write('debugreply: capabilities:\n')
67 67 > for cap in sorted(op.reply.capabilities):
68 68 > op.ui.write('debugreply: %r\n' % cap)
69 69 > for val in op.reply.capabilities[cap]:
70 70 > op.ui.write('debugreply: %r\n' % val)
71 71 >
72 72 > @command('bundle2',
73 73 > [('', 'param', [], 'stream level parameter'),
74 74 > ('', 'unknown', False, 'include an unknown mandatory part in the bundle'),
75 75 > ('', 'unknownparams', False, 'include an unknown part parameters in the bundle'),
76 76 > ('', 'parts', False, 'include some arbitrary parts to the bundle'),
77 77 > ('', 'reply', False, 'produce a reply bundle'),
78 78 > ('', 'pushrace', False, 'includes a check:head part with unknown nodes'),
79 79 > ('r', 'rev', [], 'includes those changeset in the bundle'),],
80 80 > '[OUTPUTFILE]')
81 81 > def cmdbundle2(ui, repo, path=None, **opts):
82 82 > """write a bundle2 container on standard ouput"""
83 83 > bundler = bundle2.bundle20(ui)
84 84 > for p in opts['param']:
85 85 > p = p.split('=', 1)
86 86 > try:
87 87 > bundler.addparam(*p)
88 88 > except ValueError, exc:
89 89 > raise util.Abort('%s' % exc)
90 90 >
91 91 > if opts['reply']:
92 92 > capsstring = 'ping-pong\nelephants=babar,celeste\ncity%3D%21=celeste%2Cville'
93 93 > bundler.newpart('b2x:replycaps', data=capsstring)
94 94 >
95 95 > if opts['pushrace']:
96 96 > # also serve to test the assignement of data outside of init
97 97 > part = bundler.newpart('b2x:check:heads')
98 98 > part.data = '01234567890123456789'
99 99 >
100 100 > revs = opts['rev']
101 101 > if 'rev' in opts:
102 102 > revs = scmutil.revrange(repo, opts['rev'])
103 103 > if revs:
104 104 > # very crude version of a changegroup part creation
105 105 > bundled = repo.revs('%ld::%ld', revs, revs)
106 106 > headmissing = [c.node() for c in repo.set('heads(%ld)', revs)]
107 107 > headcommon = [c.node() for c in repo.set('parents(%ld) - %ld', revs, revs)]
108 108 > outgoing = discovery.outgoing(repo.changelog, headcommon, headmissing)
109 > cg = changegroup.getlocalbundle(repo, 'test:bundle2', outgoing, None)
109 > cg = changegroup.getlocalchangegroup(repo, 'test:bundle2', outgoing, None)
110 110 > bundler.newpart('b2x:changegroup', data=cg.getchunks())
111 111 >
112 112 > if opts['parts']:
113 113 > bundler.newpart('test:empty')
114 114 > # add a second one to make sure we handle multiple parts
115 115 > bundler.newpart('test:empty')
116 116 > bundler.newpart('test:song', data=ELEPHANTSSONG)
117 117 > bundler.newpart('test:debugreply')
118 118 > mathpart = bundler.newpart('test:math')
119 119 > mathpart.addparam('pi', '3.14')
120 120 > mathpart.addparam('e', '2.72')
121 121 > mathpart.addparam('cooking', 'raw', mandatory=False)
122 122 > mathpart.data = '42'
123 123 > # advisory known part with unknown mandatory param
124 124 > bundler.newpart('test:song', [('randomparam','')])
125 125 > if opts['unknown']:
126 126 > bundler.newpart('test:UNKNOWN', data='some random content')
127 127 > if opts['unknownparams']:
128 128 > bundler.newpart('test:SONG', [('randomparams', '')])
129 129 > if opts['parts']:
130 130 > bundler.newpart('test:ping')
131 131 >
132 132 > if path is None:
133 133 > file = sys.stdout
134 134 > else:
135 135 > file = open(path, 'wb')
136 136 >
137 137 > for chunk in bundler.getchunks():
138 138 > file.write(chunk)
139 139 >
140 140 > @command('unbundle2', [], '')
141 141 > def cmdunbundle2(ui, repo, replypath=None):
142 142 > """process a bundle2 stream from stdin on the current repo"""
143 143 > try:
144 144 > tr = None
145 145 > lock = repo.lock()
146 146 > tr = repo.transaction('processbundle')
147 147 > try:
148 148 > unbundler = bundle2.unbundle20(ui, sys.stdin)
149 149 > op = bundle2.processbundle(repo, unbundler, lambda: tr)
150 150 > tr.close()
151 151 > except error.BundleValueError, exc:
152 152 > raise util.Abort('missing support for %s' % exc)
153 153 > except error.PushRaced, exc:
154 154 > raise util.Abort('push race: %s' % exc)
155 155 > finally:
156 156 > if tr is not None:
157 157 > tr.release()
158 158 > lock.release()
159 159 > remains = sys.stdin.read()
160 160 > ui.write('%i unread bytes\n' % len(remains))
161 161 > if op.records['song']:
162 162 > totalverses = sum(r['verses'] for r in op.records['song'])
163 163 > ui.write('%i total verses sung\n' % totalverses)
164 164 > for rec in op.records['changegroup']:
165 165 > ui.write('addchangegroup return: %i\n' % rec['return'])
166 166 > if op.reply is not None and replypath is not None:
167 167 > file = open(replypath, 'wb')
168 168 > for chunk in op.reply.getchunks():
169 169 > file.write(chunk)
170 170 >
171 171 > @command('statbundle2', [], '')
172 172 > def cmdstatbundle2(ui, repo):
173 173 > """print statistic on the bundle2 container read from stdin"""
174 174 > unbundler = bundle2.unbundle20(ui, sys.stdin)
175 175 > try:
176 176 > params = unbundler.params
177 177 > except error.BundleValueError, exc:
178 178 > raise util.Abort('unknown parameters: %s' % exc)
179 179 > ui.write('options count: %i\n' % len(params))
180 180 > for key in sorted(params):
181 181 > ui.write('- %s\n' % key)
182 182 > value = params[key]
183 183 > if value is not None:
184 184 > ui.write(' %s\n' % value)
185 185 > count = 0
186 186 > for p in unbundler.iterparts():
187 187 > count += 1
188 188 > ui.write(' :%s:\n' % p.type)
189 189 > ui.write(' mandatory: %i\n' % len(p.mandatoryparams))
190 190 > ui.write(' advisory: %i\n' % len(p.advisoryparams))
191 191 > ui.write(' payload: %i bytes\n' % len(p.read()))
192 192 > ui.write('parts count: %i\n' % count)
193 193 > EOF
  $ cat >> $HGRCPATH << EOF
  > [extensions]
  > bundle2=$TESTTMP/bundle2.py
  > [experimental]
  > bundle2-exp=True
  > [ui]
  > ssh=python "$TESTDIR/dummyssh"
  > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline}
  > [web]
  > push_ssl = false
  > allow_push = *
  > [phases]
  > publish=False
  > EOF

The extension requires a repo (currently unused)

  $ hg init main
  $ cd main
  $ touch a
  $ hg add a
  $ hg commit -m 'a'


Empty bundle
=================

- no option
- no parts

Test bundling

  $ hg bundle2
  HG2X\x00\x00\x00\x00 (no-eol) (esc)
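
The four bytes after the HG2X magic are the 16-bit stream-parameter size (zero
here) followed by a zero part-header size that ends the stream. A minimal
sketch of reading the stream header — `read_bundle2_header` is a hypothetical
helper for illustration, not part of the test extension above:

```python
import io
import struct

def read_bundle2_header(stream):
    # hedged sketch: check the HG2X magic, read the 16-bit big-endian
    # stream-parameter size, and return the raw parameter blob
    magic = stream.read(4)
    if magic != b'HG2X':
        raise ValueError('not a HG2X bundle: %r' % magic)
    (paramssize,) = struct.unpack('>H', stream.read(2))
    return stream.read(paramssize).decode('ascii')

# the empty bundle emitted above: no parameters, then end-of-stream
print(repr(read_bundle2_header(io.BytesIO(b'HG2X\x00\x00\x00\x00'))))  # ''
```

The same helper also accepts the parameterized bundles produced later in this
test, since it only consumes the magic and the parameter blob.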

Test unbundling

  $ hg bundle2 | hg statbundle2
  options count: 0
  parts count: 0

Test that old style bundles are detected and refused

  $ hg bundle --all ../bundle.hg
  1 changesets found
  $ hg statbundle2 < ../bundle.hg
  abort: unknown bundle version 10
  [255]
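
The abort comes from the magic check: every Mercurial bundle header starts
with "HG" plus a two-character version, and only "2X" is accepted here. A
rough sketch of that detection (simplified; the real check lives in
`bundle2.unbundle20`, and old-style headers continue with a compression
marker such as UN, BZ or GZ):

```python
def checkbundleversion(header):
    # hedged sketch: 'HG10' + compression marks an old-style bundle,
    # 'HG2X' the experimental bundle2 format exercised by this test
    magic, version = header[0:2], header[2:4]
    if magic != b'HG':
        raise ValueError('not a Mercurial bundle')
    if version != b'2X':
        raise ValueError('unknown bundle version %s' % version.decode('ascii'))
    return version

try:
    checkbundleversion(b'HG10UN')  # old-style bundle header
except ValueError as exc:
    print(exc)  # unknown bundle version 10
```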

Test parameters
=================

- some options
- no parts

advisory parameters, no value
-------------------------------

Simplest possible parameter form

Test generation of a simple option

  $ hg bundle2 --param 'caution'
  HG2X\x00\x07caution\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'caution' | hg statbundle2
  options count: 1
  - caution
  parts count: 0

Test generation of multiple options

  $ hg bundle2 --param 'caution' --param 'meal'
  HG2X\x00\x0ccaution meal\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'caution' --param 'meal' | hg statbundle2
  options count: 2
  - caution
  - meal
  parts count: 0

advisory parameters, with value
-------------------------------

Test generation

  $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants'
  HG2X\x00\x1ccaution meal=vegan elephants\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants' | hg statbundle2
  options count: 3
  - caution
  - elephants
  - meal
      vegan
  parts count: 0

parameter with special char in value
---------------------------------------------------

Test generation

  $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple
  HG2X\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc)
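
Per the format description at the top of this file, stream parameters are
urlquoted `name=value` pairs joined by spaces. The serialized form above can
be reproduced with the standard library (Python 3 shown for illustration;
the test extension itself runs under Python 2):

```python
from urllib.parse import quote

# the parameter passed on the command line above
name, value = 'e|! 7/', 'babar%#==tutu'

# urlquote both halves; '/' stays unescaped with quote()'s default safe set,
# matching the serialized bytes in the expected output
print('%s=%s' % (quote(name), quote(value)))  # e%7C%21%207/=babar%25%23%3D%3Dtutu
```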

Test unbundling

  $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple | hg statbundle2
  options count: 2
  - e|! 7/
      babar%#==tutu
  - simple
  parts count: 0

Test unknown mandatory option
---------------------------------------------------

  $ hg bundle2 --param 'Gravity' | hg statbundle2
  abort: unknown parameters: Stream Parameter - Gravity
  [255]

Test debug output
---------------------------------------------------

bundling debug

  $ hg bundle2 --debug --param 'e|! 7/=babar%#==tutu' --param simple ../out.hg2
  start emission of HG2X stream
  bundle parameter: e%7C%21%207/=babar%25%23%3D%3Dtutu simple
  start of parts
  end of bundle

file content is ok

  $ cat ../out.hg2
  HG2X\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc)

unbundling debug

  $ hg statbundle2 --debug < ../out.hg2
  start processing of HG2X stream
  reading bundle2 stream parameters
  ignoring unknown parameter 'e|! 7/'
  ignoring unknown parameter 'simple'
  options count: 2
  - e|! 7/
      babar%#==tutu
  - simple
  start extraction of bundle2 parts
  part header size: 0
  end of bundle2 stream
  parts count: 0


Test buggy input
---------------------------------------------------

empty parameter name

  $ hg bundle2 --param '' --quiet
  abort: empty parameter name
  [255]

bad parameter name

  $ hg bundle2 --param 42babar
  abort: non letter first character: '42babar'
  [255]


Test part
=================

  $ hg bundle2 --parts ../parts.hg2 --debug
  start emission of HG2X stream
  bundle parameter: 
  start of parts
  bundle part: "test:empty"
  bundle part: "test:empty"
  bundle part: "test:song"
  bundle part: "test:debugreply"
  bundle part: "test:math"
  bundle part: "test:song"
  bundle part: "test:ping"
  end of bundle

  $ cat ../parts.hg2
  HG2X\x00\x00\x00\x11 (esc)
  test:empty\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11 (esc)
  test:empty\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x10\ttest:song\x00\x00\x00\x02\x00\x00\x00\x00\x00\xb2Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko (esc)
  Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.\x00\x00\x00\x00\x00\x16\x0ftest:debugreply\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00+\ttest:math\x00\x00\x00\x04\x02\x01\x02\x04\x01\x04\x07\x03pi3.14e2.72cookingraw\x00\x00\x00\x0242\x00\x00\x00\x00\x00\x1d\ttest:song\x00\x00\x00\x05\x01\x00\x0b\x00randomparam\x00\x00\x00\x00\x00\x10\ttest:ping\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)


  $ hg statbundle2 < ../parts.hg2
  options count: 0
    :test:empty:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
    :test:empty:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
    :test:song:
      mandatory: 0
      advisory: 0
      payload: 178 bytes
    :test:debugreply:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
    :test:math:
      mandatory: 2
      advisory: 1
      payload: 2 bytes
    :test:song:
      mandatory: 1
      advisory: 0
      payload: 0 bytes
    :test:ping:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
  parts count: 7
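
The part header sizes that the debug listing below reports (17 for
test:empty, 16 for test:song, 43 for test:math) can be recomputed from the
header layout: a one-byte type length, the type itself, a 32-bit part id, one
count byte each for mandatory and advisory parameters, then one size byte for
each parameter key and each value plus the key and value bytes themselves.
`partheadersize` is a hypothetical back-of-the-envelope helper, not bundle2
API:

```python
def partheadersize(parttype, params=()):
    # hedged reconstruction of the header layout seen in the binary dump:
    # 1 byte type length + type + 4-byte part id
    # + 1 byte mandatory-parameter count + 1 byte advisory-parameter count
    # + 1 size byte per parameter key and per value + the bytes themselves
    size = 1 + len(parttype) + 4 + 1 + 1
    for key, value in params:
        size += 2 + len(key) + len(value)
    return size

print(partheadersize('test:empty'))  # 17
print(partheadersize('test:song'))   # 16
print(partheadersize('test:math',
                     [('pi', '3.14'), ('e', '2.72'), ('cooking', 'raw')]))  # 43
```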

  $ hg statbundle2 --debug < ../parts.hg2
  start processing of HG2X stream
  reading bundle2 stream parameters
  options count: 0
  start extraction of bundle2 parts
  part header size: 17
  part type: "test:empty"
  part id: "0"
  part parameters: 0
    :test:empty:
      mandatory: 0
      advisory: 0
  payload chunk size: 0
      payload: 0 bytes
  part header size: 17
  part type: "test:empty"
  part id: "1"
  part parameters: 0
    :test:empty:
      mandatory: 0
      advisory: 0
  payload chunk size: 0
      payload: 0 bytes
  part header size: 16
  part type: "test:song"
  part id: "2"
  part parameters: 0
    :test:song:
      mandatory: 0
      advisory: 0
  payload chunk size: 178
  payload chunk size: 0
      payload: 178 bytes
  part header size: 22
  part type: "test:debugreply"
  part id: "3"
  part parameters: 0
    :test:debugreply:
      mandatory: 0
      advisory: 0
  payload chunk size: 0
      payload: 0 bytes
  part header size: 43
  part type: "test:math"
  part id: "4"
  part parameters: 3
    :test:math:
      mandatory: 2
      advisory: 1
  payload chunk size: 2
  payload chunk size: 0
      payload: 2 bytes
  part header size: 29
  part type: "test:song"
  part id: "5"
  part parameters: 1
    :test:song:
      mandatory: 1
      advisory: 0
  payload chunk size: 0
      payload: 0 bytes
  part header size: 16
  part type: "test:ping"
  part id: "6"
  part parameters: 0
    :test:ping:
      mandatory: 0
      advisory: 0
  payload chunk size: 0
      payload: 0 bytes
  part header size: 0
  end of bundle2 stream
  parts count: 7
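
The "payload chunk size" lines above show how a part payload is framed: each
chunk is preceded by a 32-bit big-endian length, and a zero-length chunk
terminates the payload (the 178-byte song arrives as one chunk plus the
terminator). A sketch of the writer side — `framepayload` is a hypothetical
helper, not the bundle2 API:

```python
import struct

def framepayload(payload, chunksize=4096):
    # hedged sketch: length-prefix each chunk, then emit the empty
    # terminating chunk, as in the 'payload chunk size' debug lines above
    out = b''
    for start in range(0, len(payload), chunksize):
        chunk = payload[start:start + chunksize]
        out += struct.pack('>I', len(chunk)) + chunk
    return out + struct.pack('>I', 0)

song = b'x' * 178  # same size as the elephants' song
framed = framepayload(song)
print(len(framed))  # 178 bytes + one 4-byte length + 4-byte terminator = 186
```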

Test actual unbundling of test parts
=======================================

Process the bundle

  $ hg unbundle2 --debug < ../parts.hg2
  start processing of HG2X stream
  reading bundle2 stream parameters
  start extraction of bundle2 parts
  part header size: 17
  part type: "test:empty"
  part id: "0"
  part parameters: 0
  ignoring unsupported advisory part test:empty
  payload chunk size: 0
  part header size: 17
  part type: "test:empty"
  part id: "1"
  part parameters: 0
  ignoring unsupported advisory part test:empty
  payload chunk size: 0
  part header size: 16
  part type: "test:song"
  part id: "2"
  part parameters: 0
  found a handler for part 'test:song'
  The choir starts singing:
  payload chunk size: 178
  payload chunk size: 0
      Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
      Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
      Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  part header size: 22
  part type: "test:debugreply"
  part id: "3"
  part parameters: 0
  found a handler for part 'test:debugreply'
  debugreply: no reply
  payload chunk size: 0
  part header size: 43
  part type: "test:math"
  part id: "4"
  part parameters: 3
  ignoring unsupported advisory part test:math
  payload chunk size: 2
  payload chunk size: 0
  part header size: 29
  part type: "test:song"
  part id: "5"
  part parameters: 1
  found a handler for part 'test:song'
  ignoring unsupported advisory part test:song - randomparam
  payload chunk size: 0
  part header size: 16
  part type: "test:ping"
  part id: "6"
  part parameters: 0
  found a handler for part 'test:ping'
  received ping request (id 6)
  payload chunk size: 0
  part header size: 0
  end of bundle2 stream
  0 unread bytes
  3 total verses sung

Unbundle with an unknown mandatory part
(should abort)

  $ hg bundle2 --parts --unknown ../unknown.hg2

  $ hg unbundle2 < ../unknown.hg2
  The choir starts singing:
      Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
      Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
      Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  debugreply: no reply
  0 unread bytes
  abort: missing support for test:unknown
  [255]

Unbundle with unknown mandatory part parameters
(should abort)

  $ hg bundle2 --unknownparams ../unknown.hg2

  $ hg unbundle2 < ../unknown.hg2
  0 unread bytes
  abort: missing support for test:song - randomparams
  [255]

unbundle with a reply

  $ hg bundle2 --parts --reply ../parts-reply.hg2
  $ hg unbundle2 ../reply.hg2 < ../parts-reply.hg2
  0 unread bytes
  3 total verses sung

The reply is a bundle

  $ cat ../reply.hg2
  HG2X\x00\x00\x00\x1f (esc)
  b2x:output\x00\x00\x00\x00\x00\x01\x0b\x01in-reply-to3\x00\x00\x00\xd9The choir starts singing: (esc)
      Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
      Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
      Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  \x00\x00\x00\x00\x00\x1f (esc)
  b2x:output\x00\x00\x00\x01\x00\x01\x0b\x01in-reply-to4\x00\x00\x00\xc9debugreply: capabilities: (esc)
  debugreply: 'city=!'
  debugreply:     'celeste,ville'
  debugreply: 'elephants'
  debugreply:     'babar'
  debugreply:     'celeste'
  debugreply: 'ping-pong'
  \x00\x00\x00\x00\x00\x1e\ttest:pong\x00\x00\x00\x02\x01\x00\x0b\x01in-reply-to7\x00\x00\x00\x00\x00\x1f (esc)
  b2x:output\x00\x00\x00\x03\x00\x01\x0b\x01in-reply-to7\x00\x00\x00=received ping request (id 7) (esc)
  replying to ping request (id 7)
  \x00\x00\x00\x00\x00\x00 (no-eol) (esc)

The reply is valid

  $ hg statbundle2 < ../reply.hg2
  options count: 0
    :b2x:output:
      mandatory: 0
      advisory: 1
      payload: 217 bytes
    :b2x:output:
      mandatory: 0
      advisory: 1
      payload: 201 bytes
    :test:pong:
      mandatory: 1
      advisory: 0
      payload: 0 bytes
    :b2x:output:
      mandatory: 0
      advisory: 1
      payload: 61 bytes
  parts count: 4

Unbundle the reply to get the output:

  $ hg unbundle2 < ../reply.hg2
  remote: The choir starts singing:
  remote:     Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  remote:     Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  remote:     Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  remote: debugreply: capabilities:
  remote: debugreply: 'city=!'
  remote: debugreply:     'celeste,ville'
  remote: debugreply: 'elephants'
  remote: debugreply:     'babar'
  remote: debugreply:     'celeste'
  remote: debugreply: 'ping-pong'
  remote: received ping request (id 7)
  remote: replying to ping request (id 7)
  0 unread bytes

Test push race detection

  $ hg bundle2 --pushrace ../part-race.hg2

  $ hg unbundle2 < ../part-race.hg2
  0 unread bytes
  abort: push race: repository changed while pushing - please try again
  [255]

Support for changegroup
===================================

  $ hg unbundle $TESTDIR/bundles/rebase.hg
  adding changesets
  adding manifests
  adding file changes
  added 8 changesets with 7 changes to 7 files (+3 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)

  $ hg log -G
  o  8:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com>  H
  |
  | o  7:eea13746799a draft Nicolas Dumazet <nicdumz.commits@gmail.com>  G
  |/|
  o |  6:24b6387c8c8c draft Nicolas Dumazet <nicdumz.commits@gmail.com>  F
  | |
  | o  5:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com>  E
  |/
  | o  4:32af7686d403 draft Nicolas Dumazet <nicdumz.commits@gmail.com>  D
  | |
  | o  3:5fddd98957c8 draft Nicolas Dumazet <nicdumz.commits@gmail.com>  C
  | |
  | o  2:42ccdea3bb16 draft Nicolas Dumazet <nicdumz.commits@gmail.com>  B
  |/
  o  1:cd010b8cd998 draft Nicolas Dumazet <nicdumz.commits@gmail.com>  A
  
  @  0:3903775176ed draft test  a
  

  $ hg bundle2 --debug --rev '8+7+5+4' ../rev.hg2
  4 changesets found
  list of changesets:
  32af7686d403cf45b5d95f2d70cebea587ac806a
  9520eea781bcca16c1e15acc0ba14335a0e8e5ba
  eea13746799a9e0bfd88f29d3c2e9dc9389f524f
  02de42196ebee42ef284b6780a87cdc96e8eaab6
  start emission of HG2X stream
  bundle parameter: 
  start of parts
  bundle part: "b2x:changegroup"
  bundling: 1/4 changesets (25.00%)
  bundling: 2/4 changesets (50.00%)
  bundling: 3/4 changesets (75.00%)
  bundling: 4/4 changesets (100.00%)
  bundling: 1/4 manifests (25.00%)
  bundling: 2/4 manifests (50.00%)
  bundling: 3/4 manifests (75.00%)
  bundling: 4/4 manifests (100.00%)
  bundling: D 1/3 files (33.33%)
  bundling: E 2/3 files (66.67%)
  bundling: H 3/3 files (100.00%)
  end of bundle

  $ cat ../rev.hg2
  HG2X\x00\x00\x00\x16\x0fb2x:changegroup\x00\x00\x00\x00\x00\x00\x00\x00\x06\x13\x00\x00\x00\xa42\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j_\xdd\xd9\x89W\xc8\xa5JMCm\xfe\x1d\xa9\xd8\x7f!\xa1\xb9{\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)6e1f4c47ecb533ffd0c8e52cdc88afb6cd39e20c (esc)
  \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02D (esc)
  \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01D\x00\x00\x00\xa4\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xcd\x01\x0b\x8c\xd9\x98\xf3\x98\x1aZ\x81\x15\xf9O\x8d\xa4\xabP`\x89\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)4dece9c826f69490507b98c6383a3009b295837d (esc)
  \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02E (esc)
  \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01E\x00\x00\x00\xa2\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)365b93d57fdf4814e2b5911d6bacff2b12014441 (esc)
  \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x00\x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01G\x00\x00\x00\xa4\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
  \x87\xcd\xc9n\x8e\xaa\xb6$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
  \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)8bee48edc7318541fc0013ee41b089276a8c24bf (esc)
  \x00\x00\x00f\x00\x00\x00f\x00\x00\x00\x02H (esc)
  \x00\x00\x00g\x00\x00\x00h\x00\x00\x00\x01H\x00\x00\x00\x00\x00\x00\x00\x8bn\x1fLG\xec\xb53\xff\xd0\xc8\xe5,\xdc\x88\xaf\xb6\xcd9\xe2\x0cf\xa5\xa0\x18\x17\xfd\xf5#\x9c'8\x02\xb5\xb7a\x8d\x05\x1c\x89\xe4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+D\x00c3f1ca2924c16a19b0656a84900e504e5b0aec2d (esc)
  \x00\x00\x00\x8bM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\x00}\x8c\x9d\x88\x84\x13%\xf5\xc6\xb0cq\xb3[N\x8a+\x1a\x83\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00+\x00\x00\x00\xac\x00\x00\x00+E\x009c6fd0350a6c0d0c49d4a9c5017cf07043f54e58 (esc)
  \x00\x00\x00\x8b6[\x93\xd5\x7f\xdfH\x14\xe2\xb5\x91\x1dk\xac\xff+\x12\x01DA(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xceM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00V\x00\x00\x00V\x00\x00\x00+F\x0022bfcfd62a21a3287edbd4d656218d0f525ed76a (esc)
  \x00\x00\x00\x97\x8b\xeeH\xed\xc71\x85A\xfc\x00\x13\xeeA\xb0\x89'j\x8c$\xbf(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xce\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
  \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00+\x00\x00\x00V\x00\x00\x00\x00\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+H\x008500189e74a9e0475e822093bc7db0d631aeb0b4 (esc)
  \x00\x00\x00\x00\x00\x00\x00\x05D\x00\x00\x00b\xc3\xf1\xca)$\xc1j\x19\xb0ej\x84\x90\x0ePN[ (esc)
  \xec-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02D (esc)
  \x00\x00\x00\x00\x00\x00\x00\x05E\x00\x00\x00b\x9co\xd05 (esc)
  l\r (no-eol) (esc)
  \x0cI\xd4\xa9\xc5\x01|\xf0pC\xf5NX\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02E (esc)
  \x00\x00\x00\x00\x00\x00\x00\x05H\x00\x00\x00b\x85\x00\x18\x9et\xa9\xe0G^\x82 \x93\xbc}\xb0\xd61\xae\xb0\xb4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
  \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02H (esc)
  \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)

  $ hg unbundle2 < ../rev.hg2
  adding changesets
  adding manifests
  adding file changes
  added 0 changesets with 0 changes to 3 files
  0 unread bytes
  addchangegroup return: 1

with reply

  $ hg bundle2 --rev '8+7+5+4' --reply ../rev-rr.hg2
  $ hg unbundle2 ../rev-reply.hg2 < ../rev-rr.hg2
  0 unread bytes
  addchangegroup return: 1

  $ cat ../rev-reply.hg2
  HG2X\x00\x00\x003\x15b2x:reply:changegroup\x00\x00\x00\x00\x00\x02\x0b\x01\x06\x01in-reply-to1return1\x00\x00\x00\x00\x00\x1f (esc)
  b2x:output\x00\x00\x00\x01\x00\x01\x0b\x01in-reply-to1\x00\x00\x00dadding changesets (esc)
  adding manifests
  adding file changes
  added 0 changesets with 0 changes to 3 files
  \x00\x00\x00\x00\x00\x00 (no-eol) (esc)

  $ cd ..

Real world exchange
=====================

Add more obsolescence information

  $ hg -R main debugobsolete -d '0 0' 1111111111111111111111111111111111111111 `getmainid 9520eea781bc`
  $ hg -R main debugobsolete -d '0 0' 2222222222222222222222222222222222222222 `getmainid 24b6387c8c8c`

clone --pull

  $ hg -R main phase --public cd010b8cd998
  $ hg clone main other --pull --rev 9520eea781bc
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  1 new obsolescence markers
  updating to branch default
  2 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg -R other log -G
  @  1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com>  E
  |
  o  0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com>  A
  
  $ hg -R other debugobsolete
  1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}

pull

  $ hg -R main phase --public 9520eea781bc
  $ hg -R other pull -r 24b6387c8c8c
  pulling from $TESTTMP/main (glob)
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  1 new obsolescence markers
  (run 'hg heads' to see heads, 'hg merge' to merge)
  $ hg -R other log -G
  o  2:24b6387c8c8c draft Nicolas Dumazet <nicdumz.commits@gmail.com>  F
  |
  | @  1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com>  E
  |/
  o  0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com>  A
  
  $ hg -R other debugobsolete
  1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}

pull empty (with phase movement)

  $ hg -R main phase --public 24b6387c8c8c
  $ hg -R other pull -r 24b6387c8c8c
  pulling from $TESTTMP/main (glob)
  no changes found
  $ hg -R other log -G
  o  2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com>  F
  |
  | @  1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com>  E
  |/
  o  0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com>  A
  
  $ hg -R other debugobsolete
  1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}

pull empty

  $ hg -R other pull -r 24b6387c8c8c
  pulling from $TESTTMP/main (glob)
  no changes found
  $ hg -R other log -G
  o  2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com>  F
  |
  | @  1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com>  E
  |/
  o  0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com>  A
  
  $ hg -R other debugobsolete
  1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}

add extra data to test their exchange during push

  $ hg -R main bookmark --rev eea13746799a book_eea1
  $ hg -R main debugobsolete -d '0 0' 3333333333333333333333333333333333333333 `getmainid eea13746799a`
  $ hg -R main bookmark --rev 02de42196ebe book_02de
  $ hg -R main debugobsolete -d '0 0' 4444444444444444444444444444444444444444 `getmainid 02de42196ebe`
  $ hg -R main bookmark --rev 42ccdea3bb16 book_42cc
  $ hg -R main debugobsolete -d '0 0' 5555555555555555555555555555555555555555 `getmainid 42ccdea3bb16`
  $ hg -R main bookmark --rev 5fddd98957c8 book_5fdd
  $ hg -R main debugobsolete -d '0 0' 6666666666666666666666666666666666666666 `getmainid 5fddd98957c8`
  $ hg -R main bookmark --rev 32af7686d403 book_32af
  $ hg -R main debugobsolete -d '0 0' 7777777777777777777777777777777777777777 `getmainid 32af7686d403`

  $ hg -R other bookmark --rev cd010b8cd998 book_eea1
  $ hg -R other bookmark --rev cd010b8cd998 book_02de
  $ hg -R other bookmark --rev cd010b8cd998 book_42cc
  $ hg -R other bookmark --rev cd010b8cd998 book_5fdd
  $ hg -R other bookmark --rev cd010b8cd998 book_32af

  $ hg -R main phase --public eea13746799a

push
  $ hg -R main push other --rev eea13746799a --bookmark book_eea1
  pushing to other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 0 changes to 0 files (-1 heads)
  remote: 1 new obsolescence markers
  updating bookmark book_eea1
  exporting bookmark book_eea1
  $ hg -R other log -G
  o    3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
  |\
  | o  2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com>  F
  | |
  @ |  1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com>  E
  |/
  o  0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de book_32af book_42cc book_5fdd A
  
  $ hg -R other debugobsolete
  1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}

pull over ssh

  $ hg -R other pull ssh://user@dummy/main -r 02de42196ebe --bookmark book_02de
  pulling from ssh://user@dummy/main
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
908 908 1 new obsolescence markers
909 909 updating bookmark book_02de
910 910 (run 'hg heads' to see heads, 'hg merge' to merge)
911 911 importing bookmark book_02de
912 912 $ hg -R other debugobsolete
913 913 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
914 914 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
915 915 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
916 916 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
917 917
918 918 pull over http
919 919
920 920 $ hg -R main serve -p $HGPORT -d --pid-file=main.pid -E main-error.log
921 921 $ cat main.pid >> $DAEMON_PIDS
922 922
923 923 $ hg -R other pull http://localhost:$HGPORT/ -r 42ccdea3bb16 --bookmark book_42cc
924 924 pulling from http://localhost:$HGPORT/
925 925 searching for changes
926 926 adding changesets
927 927 adding manifests
928 928 adding file changes
929 929 added 1 changesets with 1 changes to 1 files (+1 heads)
930 930 1 new obsolescence markers
931 931 updating bookmark book_42cc
932 932 (run 'hg heads .' to see heads, 'hg merge' to merge)
933 933 importing bookmark book_42cc
934 934 $ cat main-error.log
935 935 $ hg -R other debugobsolete
936 936 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
937 937 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
938 938 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
939 939 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
940 940 5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
941 941
942 942 push over ssh
943 943
944 944 $ hg -R main push ssh://user@dummy/other -r 5fddd98957c8 --bookmark book_5fdd
945 945 pushing to ssh://user@dummy/other
946 946 searching for changes
947 947 remote: adding changesets
948 948 remote: adding manifests
949 949 remote: adding file changes
950 950 remote: added 1 changesets with 1 changes to 1 files
951 951 remote: 1 new obsolescence markers
952 952 updating bookmark book_5fdd
953 953 exporting bookmark book_5fdd
954 954 $ hg -R other log -G
955 955 o 6:5fddd98957c8 draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_5fdd C
956 956 |
957 957 o 5:42ccdea3bb16 draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_42cc B
958 958 |
959 959 | o 4:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de H
960 960 | |
961 961 | | o 3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
962 962 | |/|
963 963 | o | 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
964 964 |/ /
965 965 | @ 1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
966 966 |/
967 967 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_32af A
968 968
969 969 $ hg -R other debugobsolete
970 970 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
971 971 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
972 972 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
973 973 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
974 974 5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
975 975 6666666666666666666666666666666666666666 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
976 976
977 977 push over http
978 978
979 979 $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
980 980 $ cat other.pid >> $DAEMON_PIDS
981 981
982 982 $ hg -R main phase --public 32af7686d403
983 983 $ hg -R main push http://localhost:$HGPORT2/ -r 32af7686d403 --bookmark book_32af
984 984 pushing to http://localhost:$HGPORT2/
985 985 searching for changes
986 986 remote: adding changesets
987 987 remote: adding manifests
988 988 remote: adding file changes
989 989 remote: added 1 changesets with 1 changes to 1 files
990 990 remote: 1 new obsolescence markers
991 991 updating bookmark book_32af
992 992 exporting bookmark book_32af
993 993 $ cat other-error.log
994 994
995 995 Check final content.
996 996
997 997 $ hg -R other log -G
998 998 o 7:32af7686d403 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_32af D
999 999 |
1000 1000 o 6:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_5fdd C
1001 1001 |
1002 1002 o 5:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_42cc B
1003 1003 |
1004 1004 | o 4:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de H
1005 1005 | |
1006 1006 | | o 3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
1007 1007 | |/|
1008 1008 | o | 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
1009 1009 |/ /
1010 1010 | @ 1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
1011 1011 |/
1012 1012 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
1013 1013
1014 1014 $ hg -R other debugobsolete
1015 1015 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
1016 1016 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
1017 1017 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
1018 1018 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
1019 1019 5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
1020 1020 6666666666666666666666666666666666666666 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
1021 1021 7777777777777777777777777777777777777777 32af7686d403cf45b5d95f2d70cebea587ac806a 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
1022 1022
1023 1023 Error Handling
1024 1024 ==============
1025 1025
1026 1026 Check that errors are properly returned to the client during push.
1027 1027
1028 1028 Setting up
1029 1029
1030 1030 $ cat > failpush.py << EOF
1031 1031 > """A small extension that makes push fails when using bundle2
1032 1032 >
1033 1033 > used to test error handling in bundle2
1034 1034 > """
1035 1035 >
1036 1036 > from mercurial import util
1037 1037 > from mercurial import bundle2
1038 1038 > from mercurial import exchange
1039 1039 > from mercurial import extensions
1040 1040 >
1041 1041 > def _pushbundle2failpart(pushop, bundler):
1042 1042 > reason = pushop.ui.config('failpush', 'reason', None)
1043 1043 > part = None
1044 1044 > if reason == 'abort':
1045 1045 > bundler.newpart('test:abort')
1046 1046 > if reason == 'unknown':
1047 1047 > bundler.newpart('TEST:UNKNOWN')
1048 1048 > if reason == 'race':
1049 1049 > # 20 Bytes of crap
1050 1050 > bundler.newpart('b2x:check:heads', data='01234567890123456789')
1051 1051 >
1052 1052 > @bundle2.parthandler("test:abort")
1053 1053 > def handleabort(op, part):
1054 1054 > raise util.Abort('Abandon ship!', hint="don't panic")
1055 1055 >
1056 1056 > def uisetup(ui):
1057 1057 > exchange.b2partsgenmapping['failpart'] = _pushbundle2failpart
1058 1058 > exchange.b2partsgenorder.insert(0, 'failpart')
1059 1059 >
1060 1060 > EOF
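The uisetup above works by registering the failing generator under a new name and prepending that name to the part-generation order, so the failing part is built before any regular part. A minimal stand-alone model of that registration (the dict and list here are hypothetical stand-ins for exchange.b2partsgenmapping and exchange.b2partsgenorder, not Mercurial's real objects):

```python
# Hypothetical stand-ins for exchange.b2partsgenmapping / b2partsgenorder.
b2partsgenmapping = {'changeset': lambda pushop, bundler: 'changeset part'}
b2partsgenorder = ['changeset']

def register_first(name, generator):
    # Mimic failpush's uisetup: map the name, then prepend it to the order
    # so it runs before every previously registered generator.
    b2partsgenmapping[name] = generator
    b2partsgenorder.insert(0, name)

register_first('failpart', lambda pushop, bundler: 'failing part')

# Generators run in list order, so the failing part is emitted first.
parts = [b2partsgenmapping[name](None, None) for name in b2partsgenorder]
```

Because the generator runs first, the error it injects is seen before any changeset data is transferred.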

  $ cd main
  $ hg up tip
  3 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ echo 'I' > I
  $ hg add I
  $ hg ci -m 'I'
  $ hg id
  e7ec4e813ba6 tip
  $ cd ..

  $ cat << EOF >> $HGRCPATH
  > [extensions]
  > failpush=$TESTTMP/failpush.py
  > EOF

  $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS
  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

Doing the actual push: Abort error

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = abort
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]


Doing the actual push: unknown mandatory parts

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = unknown
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: missing support for test:unknown
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: missing support for test:unknown
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: missing support for test:unknown
  [255]
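The part is named 'TEST:UNKNOWN' rather than 'test:unknown' deliberately: in bundle2, a part whose type starts with an uppercase letter is mandatory, so a receiver that does not recognize it must abort (hence "missing support for test:unknown" above), while a lowercase first letter marks the part advisory and safe to skip. A sketch of that naming rule (illustrative only, not Mercurial's actual code):

```python
def is_mandatory(parttype):
    # Bundle2 convention: an uppercase first letter makes an unknown part
    # a hard error for the receiver; lowercase means it may be ignored.
    return parttype[:1].isupper()

# 'TEST:UNKNOWN' must abort the push; 'test:unknown' could be dropped.
```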

Doing the actual push: race

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = race
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]

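The race case works because the b2x:check:heads part carries the heads the client believes the remote has; the server compares them against its current heads before applying the bundle, and the 20 bytes of garbage sent by failpush can never match a real head, so every push fails as if the repository had changed underneath it. A toy model of that check (hypothetical function, not the real server-side handler):

```python
def check_heads(advertised_heads, current_heads):
    # Model of the b2x:check:heads handler: abort when the heads recorded
    # in the part no longer match the server's current heads.
    if advertised_heads != current_heads:
        raise RuntimeError(
            'repository changed while pushing - please try again')

current = [b'\x01' * 20]             # the server's actual 20-byte head
garbage = [b'01234567890123456789']  # the "20 Bytes of crap" from failpush
check_heads(current, current)        # matching heads: no error
```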
Doing the actual push: hook abort

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason =
  > [hooks]
  > b2x-pretransactionclose.failpush = false
  > EOF

  $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS
  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  transaction abort!
  rollback completed
  abort: b2x-pretransactionclose.failpush hook exited with status 1
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: b2x-pretransactionclose.failpush hook exited with status 1
  remote: transaction abort!
  remote: rollback completed
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: b2x-pretransactionclose.failpush hook exited with status 1
  [255]
