bundle: introduce a higher level function to write bundle on disk...
marmoute - r32216:9dc36df7 default
@@ -1,1669 +1,1705 @@
1 1 # bundle2.py - generic container format to transmit arbitrary data.
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """Handling of the new bundle2 format
8 8
9 9 The goal of bundle2 is to act as an atomic packet to transmit a set of
10 10 payloads in an application agnostic way. It consists of a sequence of "parts"
11 11 that will be handed to and processed by the application layer.
12 12
13 13
14 14 General format architecture
15 15 ===========================
16 16
17 17 The format is structured as follows
18 18
19 19 - magic string
20 20 - stream level parameters
21 21 - payload parts (any number)
22 22 - end of stream marker.
23 23
24 24 The binary format
25 25 ============================
26 26
27 27 All numbers are unsigned and big-endian.
28 28
29 29 stream level parameters
30 30 ------------------------
31 31
32 32 The binary format is as follows
33 33
34 34 :params size: int32
35 35
36 36 The total number of Bytes used by the parameters
37 37
38 38 :params value: arbitrary number of Bytes
39 39
40 40 A blob of `params size` containing the serialized version of all stream level
41 41 parameters.
42 42
43 43 The blob contains a space separated list of parameters. Parameters with value
44 44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45 45
46 46 Empty names are forbidden.
47 47
48 48 A name MUST start with a letter. If this first letter is lower case, the
49 49 parameter is advisory and can be safely ignored. However, when the first
50 50 letter is capital, the parameter is mandatory and the bundling process MUST
51 51 stop if it is not able to process it.
52 52
53 53 Stream parameters use a simple textual format for two main reasons:
54 54
55 55 - Stream level parameters should remain simple and we want to discourage any
56 56 crazy usage.
57 57 - Textual data allow easy human inspection of a bundle2 header in case of
58 58 trouble.
59 59
60 60 Any application level options MUST go into a bundle2 part instead.
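For example, a stream carrying a mandatory compression parameter and a
hypothetical advisory parameter could serialize its parameter blob as
(a sketch, values are illustrative)::

    Compression=BZ somefeature=on

This blob is emitted on the wire prefixed by its int32 size.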
61 61
62 62 Payload part
63 63 ------------------------
64 64
65 65 The binary format is as follows
66 66
67 67 :header size: int32
68 68
69 69 The total number of Bytes used by the part header. When the header is empty
70 70 (size = 0) this is interpreted as the end of stream marker.
71 71
72 72 :header:
73 73
74 74 The header defines how to interpret the part. It contains two pieces of
75 75 data: the part type, and the part parameters.
76 76
77 77 The part type is used to route the part to an application level handler
78 78 that can interpret the payload.
79 79
80 80 Part parameters are passed to the application level handler. They are
81 81 meant to convey information that will help the application level handler
82 82 interpret the part payload.
83 83
84 84 The binary format of the header is as follows
85 85
86 86 :typesize: (one byte)
87 87
88 88 :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)
89 89
90 90 :partid: A 32-bit integer (unique within the bundle) that can be used to refer
91 91 to this part.
92 92
93 93 :parameters:
94 94
95 95 A part's parameters may have arbitrary content; the binary structure is::
96 96
97 97 <mandatory-count><advisory-count><param-sizes><param-data>
98 98
99 99 :mandatory-count: 1 byte, number of mandatory parameters
100 100
101 101 :advisory-count: 1 byte, number of advisory parameters
102 102
103 103 :param-sizes:
104 104
105 105 N pairs of bytes, where N is the total number of parameters. Each
106 106 pair contains (<size-of-key>, <size-of-value>) for one parameter.
107 107
108 108 :param-data:
109 109
110 110 A blob of bytes from which each parameter key and value can be
111 111 retrieved using the list of size pairs stored in the previous
112 112 field.
113 113
114 114 Mandatory parameters come first, then the advisory ones.
115 115
116 116 Each parameter's key MUST be unique within the part.
117 117
118 118 :payload:
119 119
120 120 payload is a series of `<chunksize><chunkdata>`.
121 121
122 122 `chunksize` is an int32, `chunkdata` is plain bytes (as many as
123 123 `chunksize` says). The payload part is concluded by a zero size chunk.
124 124
125 125 The current implementation always produces either zero or one chunk.
126 126 This is an implementation limitation that will ultimately be lifted.
127 127
128 128 `chunksize` can be negative to trigger special case processing. Currently,
129 129 a chunk size of -1 signals a stream interruption (see `flaginterrupt` below).
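As a concrete sketch, a minimal part carrying an advisory 'in-reply-to'
parameter would be framed roughly as (values are illustrative)::

    <int32: header size>
    <int8: len("output")> "output" <int32: part id>
    <int8: 0> <int8: 1>                  (mandatory/advisory param counts)
    <int8: len("in-reply-to")> <int8: len("1")>
    "in-reply-to" "1"
    <int32: chunk size> <chunk data> ... <int32: 0>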
130 130
131 131 Bundle processing
132 132 ============================
133 133
134 134 Each part is processed in order using a "part handler". Handlers are registered
135 135 for a certain part type.
136 136
137 137 The matching of a part to its handler is case insensitive. The case of the
138 138 part type is used to know if a part is mandatory or advisory. If the part type
139 139 contains any uppercase character it is considered mandatory. When no handler is
140 140 known for a mandatory part, the process is aborted and an exception is raised.
141 141 If the part is advisory and no handler is known, the part is ignored. When the
142 142 process is aborted, the full bundle is still read from the stream to keep the
143 143 channel usable. But none of the parts read after the abort are processed. In the
144 144 future, dropping the stream may become an option for channels we do not care to
145 145 preserve.
146 146 """
147 147
148 148 from __future__ import absolute_import
149 149
150 150 import errno
151 151 import re
152 152 import string
153 153 import struct
154 154 import sys
155 155
156 156 from .i18n import _
157 157 from . import (
158 158 changegroup,
159 159 error,
160 160 obsolete,
161 161 pushkey,
162 162 pycompat,
163 163 tags,
164 164 url,
165 165 util,
166 166 )
167 167
168 168 urlerr = util.urlerr
169 169 urlreq = util.urlreq
170 170
171 171 _pack = struct.pack
172 172 _unpack = struct.unpack
173 173
174 174 _fstreamparamsize = '>i'
175 175 _fpartheadersize = '>i'
176 176 _fparttypesize = '>B'
177 177 _fpartid = '>I'
178 178 _fpayloadsize = '>i'
179 179 _fpartparamcount = '>BB'
180 180
181 181 preferedchunksize = 4096
182 182
183 183 _parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')
184 184
185 185 def outdebug(ui, message):
186 186 """debug regarding output stream (bundling)"""
187 187 if ui.configbool('devel', 'bundle2.debug', False):
188 188 ui.debug('bundle2-output: %s\n' % message)
189 189
190 190 def indebug(ui, message):
191 191 """debug on input stream (unbundling)"""
192 192 if ui.configbool('devel', 'bundle2.debug', False):
193 193 ui.debug('bundle2-input: %s\n' % message)
194 194
195 195 def validateparttype(parttype):
196 196 """raise ValueError if a parttype contains invalid character"""
197 197 if _parttypeforbidden.search(parttype):
198 198 raise ValueError(parttype)
199 199
200 200 def _makefpartparamsizes(nbparams):
201 201 """return a struct format to read part parameter sizes
202 202
203 203 The number of parameters is variable so we need to build that format
204 204 dynamically.
205 205 """
206 206 return '>'+('BB'*nbparams)
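# For example, _makefpartparamsizes(2) returns '>BBBB': one (key size,
# value size) byte pair for each of the two parameters.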
207 207
208 208 parthandlermapping = {}
209 209
210 210 def parthandler(parttype, params=()):
211 211 """decorator that register a function as a bundle2 part handler
212 212
213 213 eg::
214 214
215 215 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
216 216 def myparttypehandler(...):
217 217 '''process a part of type "my part".'''
218 218 ...
219 219 """
220 220 validateparttype(parttype)
221 221 def _decorator(func):
222 222 lparttype = parttype.lower() # enforce lower case matching.
223 223 assert lparttype not in parthandlermapping
224 224 parthandlermapping[lparttype] = func
225 225 func.params = frozenset(params)
226 226 return func
227 227 return _decorator
228 228
229 229 class unbundlerecords(object):
230 230 """keep record of what happens during and unbundle
231 231
232 232 New records are added using `records.add('cat', obj)`. Where 'cat' is a
233 233 category of record and obj is an arbitrary object.
234 234
235 235 `records['cat']` will return all entries of this category 'cat'.
236 236
237 237 Iterating on the object itself will yield `('category', obj)` tuples
238 238 for all entries.
239 239
240 240 All iterations happen in chronological order.
241 241 """
242 242
243 243 def __init__(self):
244 244 self._categories = {}
245 245 self._sequences = []
246 246 self._replies = {}
247 247
248 248 def add(self, category, entry, inreplyto=None):
249 249 """add a new record of a given category.
250 250
251 251 The entry can then be retrieved in the list returned by
252 252 self['category']."""
253 253 self._categories.setdefault(category, []).append(entry)
254 254 self._sequences.append((category, entry))
255 255 if inreplyto is not None:
256 256 self.getreplies(inreplyto).add(category, entry)
257 257
258 258 def getreplies(self, partid):
259 259 """get the records that are replies to a specific part"""
260 260 return self._replies.setdefault(partid, unbundlerecords())
261 261
262 262 def __getitem__(self, cat):
263 263 return tuple(self._categories.get(cat, ()))
264 264
265 265 def __iter__(self):
266 266 return iter(self._sequences)
267 267
268 268 def __len__(self):
269 269 return len(self._sequences)
270 270
271 271 def __nonzero__(self):
272 272 return bool(self._sequences)
273 273
274 274 __bool__ = __nonzero__
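# Minimal usage sketch (values are illustrative):
#
#     records = unbundlerecords()
#     records.add('changegroup', {'return': 1})
#     records['changegroup']   # -> ({'return': 1},)
#     list(records)            # -> [('changegroup', {'return': 1})]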
275 275
276 276 class bundleoperation(object):
277 277 """an object that represents a single bundling process
278 278
279 279 Its purpose is to carry unbundle-related objects and states.
280 280
281 281 A new object should be created at the beginning of each bundle processing.
282 282 The object is to be returned by the processing function.
283 283
284 284 The object has very little content now; it will ultimately contain:
285 285 * an access to the repo the bundle is applied to,
286 286 * a ui object,
287 287 * a way to retrieve a transaction to add changes to the repo,
288 288 * a way to record the result of processing each part,
289 289 * a way to construct a bundle response when applicable.
290 290 """
291 291
292 292 def __init__(self, repo, transactiongetter, captureoutput=True):
293 293 self.repo = repo
294 294 self.ui = repo.ui
295 295 self.records = unbundlerecords()
296 296 self.gettransaction = transactiongetter
297 297 self.reply = None
298 298 self.captureoutput = captureoutput
299 299
300 300 class TransactionUnavailable(RuntimeError):
301 301 pass
302 302
303 303 def _notransaction():
304 304 """default method to get a transaction while processing a bundle
305 305
306 306 Raise an exception to highlight the fact that no transaction was expected
307 307 to be created"""
308 308 raise TransactionUnavailable()
309 309
310 310 def applybundle(repo, unbundler, tr, source=None, url=None, op=None):
311 311 # transform me into unbundler.apply() as soon as the freeze is lifted
312 312 tr.hookargs['bundle2'] = '1'
313 313 if source is not None and 'source' not in tr.hookargs:
314 314 tr.hookargs['source'] = source
315 315 if url is not None and 'url' not in tr.hookargs:
316 316 tr.hookargs['url'] = url
317 317 return processbundle(repo, unbundler, lambda: tr, op=op)
318 318
319 319 def processbundle(repo, unbundler, transactiongetter=None, op=None):
320 320 """This function process a bundle, apply effect to/from a repo
321 321
322 322 It iterates over each part then searches for and uses the proper handling
323 323 code to process the part. Parts are processed in order.
324 324
325 325 Unknown mandatory parts will abort the process.
326 326
327 327 It is temporarily possible to provide a prebuilt bundleoperation to the
328 328 function. This is used to ensure output is properly propagated in case of
329 329 an error during the unbundling. This output capturing part will likely be
330 330 reworked and this ability will probably go away in the process.
331 331 """
332 332 if op is None:
333 333 if transactiongetter is None:
334 334 transactiongetter = _notransaction
335 335 op = bundleoperation(repo, transactiongetter)
336 336 # todo:
337 337 # - replace this with an init function soon.
338 338 # - exception catching
339 339 unbundler.params
340 340 if repo.ui.debugflag:
341 341 msg = ['bundle2-input-bundle:']
342 342 if unbundler.params:
343 343 msg.append(' %i params' % len(unbundler.params))
344 344 if op.gettransaction is None:
345 345 msg.append(' no-transaction')
346 346 else:
347 347 msg.append(' with-transaction')
348 348 msg.append('\n')
349 349 repo.ui.debug(''.join(msg))
350 350 iterparts = enumerate(unbundler.iterparts())
351 351 part = None
352 352 nbpart = 0
353 353 try:
354 354 for nbpart, part in iterparts:
355 355 _processpart(op, part)
356 356 except Exception as exc:
357 357 # Any exceptions seeking to the end of the bundle at this point are
358 358 # almost certainly related to the underlying stream being bad.
359 359 # And, chances are that the exception we're handling is related to
360 360 # getting in that bad state. So, we swallow the seeking error and
361 361 # re-raise the original error.
362 362 seekerror = False
363 363 try:
364 364 for nbpart, part in iterparts:
365 365 # consume the bundle content
366 366 part.seek(0, 2)
367 367 except Exception:
368 368 seekerror = True
369 369
370 370 # Small hack to let caller code distinguish exceptions from bundle2
371 371 # processing from processing the old format. This is mostly
372 372 # needed to handle different return codes to unbundle according to the
373 373 # type of bundle. We should probably clean up or drop this return code
374 374 # craziness in a future version.
375 375 exc.duringunbundle2 = True
376 376 salvaged = []
377 377 replycaps = None
378 378 if op.reply is not None:
379 379 salvaged = op.reply.salvageoutput()
380 380 replycaps = op.reply.capabilities
381 381 exc._replycaps = replycaps
382 382 exc._bundle2salvagedoutput = salvaged
383 383
384 384 # Re-raising from a variable loses the original stack. So only use
385 385 # that form if we need to.
386 386 if seekerror:
387 387 raise exc
388 388 else:
389 389 raise
390 390 finally:
391 391 repo.ui.debug('bundle2-input-bundle: %i parts total\n' % nbpart)
392 392
393 393 return op
394 394
395 395 def _processpart(op, part):
396 396 """process a single part from a bundle
397 397
398 398 The part is guaranteed to have been fully consumed when the function exits
399 399 (even if an exception is raised)."""
400 400 status = 'unknown' # used by debug output
401 401 hardabort = False
402 402 try:
403 403 try:
404 404 handler = parthandlermapping.get(part.type)
405 405 if handler is None:
406 406 status = 'unsupported-type'
407 407 raise error.BundleUnknownFeatureError(parttype=part.type)
408 408 indebug(op.ui, 'found a handler for part %r' % part.type)
409 409 unknownparams = part.mandatorykeys - handler.params
410 410 if unknownparams:
411 411 unknownparams = list(unknownparams)
412 412 unknownparams.sort()
413 413 status = 'unsupported-params (%s)' % unknownparams
414 414 raise error.BundleUnknownFeatureError(parttype=part.type,
415 415 params=unknownparams)
416 416 status = 'supported'
417 417 except error.BundleUnknownFeatureError as exc:
418 418 if part.mandatory: # mandatory parts
419 419 raise
420 420 indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
421 421 return # skip to part processing
422 422 finally:
423 423 if op.ui.debugflag:
424 424 msg = ['bundle2-input-part: "%s"' % part.type]
425 425 if not part.mandatory:
426 426 msg.append(' (advisory)')
427 427 nbmp = len(part.mandatorykeys)
428 428 nbap = len(part.params) - nbmp
429 429 if nbmp or nbap:
430 430 msg.append(' (params:')
431 431 if nbmp:
432 432 msg.append(' %i mandatory' % nbmp)
433 433 if nbap:
434 434 msg.append(' %i advisory' % nbap)
435 435 msg.append(')')
436 436 msg.append(' %s\n' % status)
437 437 op.ui.debug(''.join(msg))
438 438
439 439 # handler is called outside the above try block so that we don't
440 440 # risk catching KeyErrors from anything other than the
441 441 # parthandlermapping lookup (any KeyError raised by handler()
442 442 # itself represents a defect of a different variety).
443 443 output = None
444 444 if op.captureoutput and op.reply is not None:
445 445 op.ui.pushbuffer(error=True, subproc=True)
446 446 output = ''
447 447 try:
448 448 handler(op, part)
449 449 finally:
450 450 if output is not None:
451 451 output = op.ui.popbuffer()
452 452 if output:
453 453 outpart = op.reply.newpart('output', data=output,
454 454 mandatory=False)
455 455 outpart.addparam('in-reply-to', str(part.id), mandatory=False)
456 456 # If exiting or interrupted, do not attempt to seek the stream in the
457 457 # finally block below. This makes abort faster.
458 458 except (SystemExit, KeyboardInterrupt):
459 459 hardabort = True
460 460 raise
461 461 finally:
462 462 # consume the part content to not corrupt the stream.
463 463 if not hardabort:
464 464 part.seek(0, 2)
465 465
466 466
467 467 def decodecaps(blob):
468 468 """decode a bundle2 caps bytes blob into a dictionary
469 469
470 470 The blob is a list of capabilities (one per line).
471 471 Capabilities may have values, using a line of the form::
472 472
473 473 capability=value1,value2,value3
474 474
475 475 The values are always a list."""
476 476 caps = {}
477 477 for line in blob.splitlines():
478 478 if not line:
479 479 continue
480 480 if '=' not in line:
481 481 key, vals = line, ()
482 482 else:
483 483 key, vals = line.split('=', 1)
484 484 vals = vals.split(',')
485 485 key = urlreq.unquote(key)
486 486 vals = [urlreq.unquote(v) for v in vals]
487 487 caps[key] = vals
488 488 return caps
489 489
490 490 def encodecaps(caps):
491 491 """encode a bundle2 caps dictionary into a bytes blob"""
492 492 chunks = []
493 493 for ca in sorted(caps):
494 494 vals = caps[ca]
495 495 ca = urlreq.quote(ca)
496 496 vals = [urlreq.quote(v) for v in vals]
497 497 if vals:
498 498 ca = "%s=%s" % (ca, ','.join(vals))
499 499 chunks.append(ca)
500 500 return '\n'.join(chunks)
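# Round-trip sketch (capability names and values are illustrative):
#
#     blob = encodecaps({'HG20': [], 'changegroup': ['01', '02']})
#     # blob == 'HG20\nchangegroup=01,02'
#     decodecaps(blob)
#     # -> {'HG20': [], 'changegroup': ['01', '02']}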
501 501
502 502 bundletypes = {
503 503 "": ("", 'UN'), # only when using unbundle on ssh and old http servers
504 504 # since the unification ssh accepts a header but there
505 505 # is no capability signaling it.
506 506 "HG20": (), # special-cased below
507 507 "HG10UN": ("HG10UN", 'UN'),
508 508 "HG10BZ": ("HG10", 'BZ'),
509 509 "HG10GZ": ("HG10GZ", 'GZ'),
510 510 }
511 511
512 512 # hgweb uses this list to communicate its preferred type
513 513 bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']
514 514
515 515 class bundle20(object):
516 516 """represent an outgoing bundle2 container
517 517
518 518 Use the `addparam` method to add stream level parameters and `newpart` to
519 519 populate it with parts. Then call `getchunks` to retrieve all the binary
520 520 chunks of data that compose the bundle2 container."""
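# A typical construction sequence, as a sketch (the 'BZ' engine, the
# changegroup source ``cg`` and the output file ``fp`` are illustrative
# assumptions; see ``writebundle`` below for a real caller):
#
#     bundler = bundle20(ui)
#     bundler.setcompression('BZ')
#     bundler.newpart('changegroup', data=cg.getchunks())
#     for chunk in bundler.getchunks():
#         fp.write(chunk)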
521 521
522 522 _magicstring = 'HG20'
523 523
524 524 def __init__(self, ui, capabilities=()):
525 525 self.ui = ui
526 526 self._params = []
527 527 self._parts = []
528 528 self.capabilities = dict(capabilities)
529 529 self._compengine = util.compengines.forbundletype('UN')
530 530 self._compopts = None
531 531
532 532 def setcompression(self, alg, compopts=None):
533 533 """setup core part compression to <alg>"""
534 534 if alg in (None, 'UN'):
535 535 return
536 536 assert not any(n.lower() == 'compression' for n, v in self._params)
537 537 self.addparam('Compression', alg)
538 538 self._compengine = util.compengines.forbundletype(alg)
539 539 self._compopts = compopts
540 540
541 541 @property
542 542 def nbparts(self):
543 543 """total number of parts added to the bundler"""
544 544 return len(self._parts)
545 545
546 546 # methods used to define the bundle2 content
547 547 def addparam(self, name, value=None):
548 548 """add a stream level parameter"""
549 549 if not name:
550 550 raise ValueError('empty parameter name')
551 551 if name[0] not in string.letters:
552 552 raise ValueError('non letter first character: %r' % name)
553 553 self._params.append((name, value))
554 554
555 555 def addpart(self, part):
556 556 """add a new part to the bundle2 container
557 557
558 558 Parts contain the actual application payload."""
559 559 assert part.id is None
560 560 part.id = len(self._parts) # very cheap counter
561 561 self._parts.append(part)
562 562
563 563 def newpart(self, typeid, *args, **kwargs):
564 564 """create a new part and add it to the containers
565 565
566 566 As the part is directly added to the containers. For now, this means
567 567 that any failure to properly initialize the part after calling
568 568 ``newpart`` should result in a failure of the whole bundling process.
569 569
570 570 You can still fall back to manually create and add if you need better
571 571 control."""
572 572 part = bundlepart(typeid, *args, **kwargs)
573 573 self.addpart(part)
574 574 return part
575 575
576 576 # methods used to generate the bundle2 stream
577 577 def getchunks(self):
578 578 if self.ui.debugflag:
579 579 msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
580 580 if self._params:
581 581 msg.append(' (%i params)' % len(self._params))
582 582 msg.append(' %i parts total\n' % len(self._parts))
583 583 self.ui.debug(''.join(msg))
584 584 outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
585 585 yield self._magicstring
586 586 param = self._paramchunk()
587 587 outdebug(self.ui, 'bundle parameter: %s' % param)
588 588 yield _pack(_fstreamparamsize, len(param))
589 589 if param:
590 590 yield param
591 591 for chunk in self._compengine.compressstream(self._getcorechunk(),
592 592 self._compopts):
593 593 yield chunk
594 594
595 595 def _paramchunk(self):
596 596 """return a encoded version of all stream parameters"""
597 597 blocks = []
598 598 for par, value in self._params:
599 599 par = urlreq.quote(par)
600 600 if value is not None:
601 601 value = urlreq.quote(value)
602 602 par = '%s=%s' % (par, value)
603 603 blocks.append(par)
604 604 return ' '.join(blocks)
605 605
606 606 def _getcorechunk(self):
607 607 """yield chunk for the core part of the bundle
608 608
609 609 (all but headers and parameters)"""
610 610 outdebug(self.ui, 'start of parts')
611 611 for part in self._parts:
612 612 outdebug(self.ui, 'bundle part: "%s"' % part.type)
613 613 for chunk in part.getchunks(ui=self.ui):
614 614 yield chunk
615 615 outdebug(self.ui, 'end of bundle')
616 616 yield _pack(_fpartheadersize, 0)
617 617
618 618
619 619 def salvageoutput(self):
620 620 """return a list with a copy of all output parts in the bundle
621 621
622 622 This is meant to be used during error handling to make sure we preserve
623 623 server output"""
624 624 salvaged = []
625 625 for part in self._parts:
626 626 if part.type.startswith('output'):
627 627 salvaged.append(part.copy())
628 628 return salvaged
629 629
630 630
631 631 class unpackermixin(object):
632 632 """A mixin to extract bytes and struct data from a stream"""
633 633
634 634 def __init__(self, fp):
635 635 self._fp = fp
636 636
637 637 def _unpack(self, format):
638 638 """unpack this struct format from the stream
639 639
640 640 This method is meant for internal usage by the bundle2 protocol only.
641 641 It directly manipulates the low level stream, including bundle2 level
642 642 instructions.
643 643
644 644 Do not use it to implement higher-level logic or methods."""
645 645 data = self._readexact(struct.calcsize(format))
646 646 return _unpack(format, data)
647 647
648 648 def _readexact(self, size):
649 649 """read exactly <size> bytes from the stream
650 650
651 651 This method is meant for internal usage by the bundle2 protocol only.
652 652 It directly manipulates the low level stream, including bundle2 level
653 653 instructions.
654 654
655 655 Do not use it to implement higher-level logic or methods."""
656 656 return changegroup.readexactly(self._fp, size)
657 657
658 658 def getunbundler(ui, fp, magicstring=None):
659 659 """return a valid unbundler object for a given magicstring"""
660 660 if magicstring is None:
661 661 magicstring = changegroup.readexactly(fp, 4)
662 662 magic, version = magicstring[0:2], magicstring[2:4]
663 663 if magic != 'HG':
664 664 raise error.Abort(_('not a Mercurial bundle'))
665 665 unbundlerclass = formatmap.get(version)
666 666 if unbundlerclass is None:
667 667 raise error.Abort(_('unknown bundle version %s') % version)
668 668 unbundler = unbundlerclass(ui, fp)
669 669 indebug(ui, 'start processing of %s stream' % magicstring)
670 670 return unbundler
671 671
672 672 class unbundle20(unpackermixin):
673 673 """interpret a bundle2 stream
674 674
675 675 This class is fed a binary stream and yields parts through its
676 676 `iterparts` method."""
677 677
678 678 _magicstring = 'HG20'
679 679
680 680 def __init__(self, ui, fp):
681 681 """If header is specified, we do not read it out of the stream."""
682 682 self.ui = ui
683 683 self._compengine = util.compengines.forbundletype('UN')
684 684 self._compressed = None
685 685 super(unbundle20, self).__init__(fp)
686 686
687 687 @util.propertycache
688 688 def params(self):
689 689 """dictionary of stream level parameters"""
690 690 indebug(self.ui, 'reading bundle2 stream parameters')
691 691 params = {}
692 692 paramssize = self._unpack(_fstreamparamsize)[0]
693 693 if paramssize < 0:
694 694 raise error.BundleValueError('negative bundle param size: %i'
695 695 % paramssize)
696 696 if paramssize:
697 697 params = self._readexact(paramssize)
698 698 params = self._processallparams(params)
699 699 return params
700 700
701 701 def _processallparams(self, paramsblock):
702 702 """"""
703 703 params = util.sortdict()
704 704 for p in paramsblock.split(' '):
705 705 p = p.split('=', 1)
706 706 p = [urlreq.unquote(i) for i in p]
707 707 if len(p) < 2:
708 708 p.append(None)
709 709 self._processparam(*p)
710 710 params[p[0]] = p[1]
711 711 return params
712 712
713 713
714 714 def _processparam(self, name, value):
715 715 """process a parameter, applying its effect if needed
716 716
717 717 Parameters starting with a lower case letter are advisory and will be
718 718 ignored when unknown. Those starting with an upper case letter are
719 719 mandatory; for these, this function raises a BundleUnknownFeatureError.
720 720
721 721 Note: few options are currently supported; any unknown parameter will be
722 722 either ignored or rejected.
723 723 """
724 724 if not name:
725 725 raise ValueError('empty parameter name')
726 726 if name[0] not in string.letters:
727 727 raise ValueError('non letter first character: %r' % name)
728 728 try:
729 729 handler = b2streamparamsmap[name.lower()]
730 730 except KeyError:
731 731 if name[0].islower():
732 732 indebug(self.ui, "ignoring unknown parameter %r" % name)
733 733 else:
734 734 raise error.BundleUnknownFeatureError(params=(name,))
735 735 else:
736 736 handler(self, name, value)
737 737
738 738 def _forwardchunks(self):
739 739 """utility to transfer a bundle2 as binary
740 740
741 741 This is made necessary by the fact that the 'getbundle' command over
742 742 'ssh' has no way to know when the reply ends, relying on the bundle
743 743 being interpreted to find its end. This is terrible and we are sorry,
744 744 but we needed to move forward to get generaldelta enabled.
745 745 """
746 746 yield self._magicstring
747 747 assert 'params' not in vars(self)
748 748 paramssize = self._unpack(_fstreamparamsize)[0]
749 749 if paramssize < 0:
750 750 raise error.BundleValueError('negative bundle param size: %i'
751 751 % paramssize)
752 752 yield _pack(_fstreamparamsize, paramssize)
753 753 if paramssize:
754 754 params = self._readexact(paramssize)
755 755 self._processallparams(params)
756 756 yield params
757 757 assert self._compengine.bundletype == 'UN'
758 758 # From there, payload might need to be decompressed
759 759 self._fp = self._compengine.decompressorreader(self._fp)
760 760 emptycount = 0
761 761 while emptycount < 2:
762 762 # so we can brainlessly loop
763 763 assert _fpartheadersize == _fpayloadsize
764 764 size = self._unpack(_fpartheadersize)[0]
765 765 yield _pack(_fpartheadersize, size)
766 766 if size:
767 767 emptycount = 0
768 768 else:
769 769 emptycount += 1
770 770 continue
771 771 if size == flaginterrupt:
772 772 continue
773 773 elif size < 0:
774 774 raise error.BundleValueError('negative chunk size: %i' % size)
775 775 yield self._readexact(size)
776 776
777 777
778 778 def iterparts(self):
779 779 """yield all parts contained in the stream"""
780 780 # make sure params have been loaded
781 781 self.params
782 782 # From here on, the payload needs to be decompressed
783 783 self._fp = self._compengine.decompressorreader(self._fp)
784 784 indebug(self.ui, 'start extraction of bundle2 parts')
785 785 headerblock = self._readpartheader()
786 786 while headerblock is not None:
787 787 part = unbundlepart(self.ui, headerblock, self._fp)
788 788 yield part
789 789 part.seek(0, 2)
790 790 headerblock = self._readpartheader()
791 791 indebug(self.ui, 'end of bundle2 stream')
792 792
793 793 def _readpartheader(self):
794 794 """reads a part header size and return the bytes blob
795 795
796 796 returns None if empty"""
797 797 headersize = self._unpack(_fpartheadersize)[0]
798 798 if headersize < 0:
799 799 raise error.BundleValueError('negative part header size: %i'
800 800 % headersize)
801 801 indebug(self.ui, 'part header size: %i' % headersize)
802 802 if headersize:
803 803 return self._readexact(headersize)
804 804 return None
805 805
806 806 def compressed(self):
807 807 self.params # load params
808 808 return self._compressed
809 809
810 810 def close(self):
811 811 """close underlying file"""
812 812 if util.safehasattr(self._fp, 'close'):
813 813 return self._fp.close()
814 814
815 815 formatmap = {'20': unbundle20}
816 816
817 817 b2streamparamsmap = {}
818 818
819 819 def b2streamparamhandler(name):
820 820 """register a handler for a stream level parameter"""
821 821 def decorator(func):
822 822 assert name not in formatmap
823 823 b2streamparamsmap[name] = func
824 824 return func
825 825 return decorator
826 826
827 827 @b2streamparamhandler('compression')
828 828 def processcompression(unbundler, param, value):
829 829 """read compression parameter and install payload decompression"""
830 830 if value not in util.compengines.supportedbundletypes:
831 831 raise error.BundleUnknownFeatureError(params=(param,),
832 832 values=(value,))
833 833 unbundler._compengine = util.compengines.forbundletype(value)
834 834 if value is not None:
835 835 unbundler._compressed = True
836 836
837 837 class bundlepart(object):
838 838 """A bundle2 part contains application level payload
839 839
840 840 The part `type` is used to route the part to the application level
841 841 handler.
842 842
843 843 The part payload is contained in ``part.data``. It could be raw bytes or a
844 844 generator of byte chunks.
845 845
846 846 You can add parameters to the part using the ``addparam`` method.
847 847 Parameters can be either mandatory (default) or advisory. Remote side
848 848 should be able to safely ignore the advisory ones.
849 849
850 850 Neither data nor parameters can be modified after generation has begun.
851 851 """
852 852
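# Sketch of typical part construction (mirrors the usage in ``writebundle``
# below; the '3' value is illustrative):
#
#     part = bundle.newpart('changegroup', data=cg.getchunks())
#     part.addparam('version', cg.version)
#     part.addparam('nbchanges', '3', mandatory=False)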
853 853 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
854 854 data='', mandatory=True):
855 855 validateparttype(parttype)
856 856 self.id = None
857 857 self.type = parttype
858 858 self._data = data
859 859 self._mandatoryparams = list(mandatoryparams)
860 860 self._advisoryparams = list(advisoryparams)
861 861 # checking for duplicated entries
862 862 self._seenparams = set()
863 863 for pname, __ in self._mandatoryparams + self._advisoryparams:
864 864 if pname in self._seenparams:
865 865 raise error.ProgrammingError('duplicated params: %s' % pname)
866 866 self._seenparams.add(pname)
867 867 # status of the part's generation:
868 868 # - None: not started,
869 869 # - False: currently generated,
870 870 # - True: generation done.
871 871 self._generated = None
872 872 self.mandatory = mandatory
873 873
874 874 def __repr__(self):
875 875 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
876 876 return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
877 877 % (cls, id(self), self.id, self.type, self.mandatory))
878 878
879 879 def copy(self):
880 880 """return a copy of the part
881 881
882 882 The new part has the very same content but no partid assigned yet.
883 883 Parts with generated data cannot be copied."""
884 884 assert not util.safehasattr(self.data, 'next')
885 885 return self.__class__(self.type, self._mandatoryparams,
886 886 self._advisoryparams, self._data, self.mandatory)
887 887
888 888 # methods used to define the part content
889 889 @property
890 890 def data(self):
891 891 return self._data
892 892
893 893 @data.setter
894 894 def data(self, data):
895 895 if self._generated is not None:
896 896 raise error.ReadOnlyPartError('part is being generated')
897 897 self._data = data
898 898
899 899 @property
900 900 def mandatoryparams(self):
901 901 # make it an immutable tuple to force people through ``addparam``
902 902 return tuple(self._mandatoryparams)
903 903
904 904 @property
905 905 def advisoryparams(self):
906 906 # make it an immutable tuple to force people through ``addparam``
907 907 return tuple(self._advisoryparams)
908 908
909 909 def addparam(self, name, value='', mandatory=True):
910 910 """add a parameter to the part
911 911
912 912 If 'mandatory' is set to True, the remote handler must claim support
913 913 for this parameter or the unbundling will be aborted.
914 914
915 915 The 'name' and 'value' cannot exceed 255 bytes each.
916 916 """
917 917 if self._generated is not None:
918 918 raise error.ReadOnlyPartError('part is being generated')
919 919 if name in self._seenparams:
920 920 raise ValueError('duplicated params: %s' % name)
921 921 self._seenparams.add(name)
922 922 params = self._advisoryparams
923 923 if mandatory:
924 924 params = self._mandatoryparams
925 925 params.append((name, value))
926 926
927 927 # methods used to generate the bundle2 stream
928 928 def getchunks(self, ui):
929 929 if self._generated is not None:
930 930 raise error.ProgrammingError('part can only be consumed once')
931 931 self._generated = False
932 932
933 933 if ui.debugflag:
934 934 msg = ['bundle2-output-part: "%s"' % self.type]
935 935 if not self.mandatory:
936 936 msg.append(' (advisory)')
937 937 nbmp = len(self.mandatoryparams)
938 938 nbap = len(self.advisoryparams)
939 939 if nbmp or nbap:
940 940 msg.append(' (params:')
941 941 if nbmp:
942 942 msg.append(' %i mandatory' % nbmp)
943 943 if nbap:
944 944 msg.append(' %i advisory' % nbap)
945 945 msg.append(')')
946 946 if not self.data:
947 947 msg.append(' empty payload')
948 948 elif util.safehasattr(self.data, 'next'):
949 949 msg.append(' streamed payload')
950 950 else:
951 951 msg.append(' %i bytes payload' % len(self.data))
952 952 msg.append('\n')
953 953 ui.debug(''.join(msg))
954 954
955 955 #### header
956 956 if self.mandatory:
957 957 parttype = self.type.upper()
958 958 else:
959 959 parttype = self.type.lower()
960 960 outdebug(ui, 'part %s: "%s"' % (self.id, parttype))
961 961 ## parttype
962 962 header = [_pack(_fparttypesize, len(parttype)),
963 963 parttype, _pack(_fpartid, self.id),
964 964 ]
965 965 ## parameters
966 966 # count
967 967 manpar = self.mandatoryparams
968 968 advpar = self.advisoryparams
969 969 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
970 970 # size
971 971 parsizes = []
972 972 for key, value in manpar:
973 973 parsizes.append(len(key))
974 974 parsizes.append(len(value))
975 975 for key, value in advpar:
976 976 parsizes.append(len(key))
977 977 parsizes.append(len(value))
978 978 paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
979 979 header.append(paramsizes)
980 980 # key, value
981 981 for key, value in manpar:
982 982 header.append(key)
983 983 header.append(value)
984 984 for key, value in advpar:
985 985 header.append(key)
986 986 header.append(value)
987 987 ## finalize header
988 988 headerchunk = ''.join(header)
989 989 outdebug(ui, 'header chunk size: %i' % len(headerchunk))
990 990 yield _pack(_fpartheadersize, len(headerchunk))
991 991 yield headerchunk
992 992 ## payload
993 993 try:
994 994 for chunk in self._payloadchunks():
995 995 outdebug(ui, 'payload chunk size: %i' % len(chunk))
996 996 yield _pack(_fpayloadsize, len(chunk))
997 997 yield chunk
998 998 except GeneratorExit:
999 999 # GeneratorExit means that nobody is listening for our
1000 1000 # results anyway, so just bail quickly rather than trying
1001 1001 # to produce an error part.
1002 1002 ui.debug('bundle2-generatorexit\n')
1003 1003 raise
1004 1004 except BaseException as exc:
1005 1005 # backup exception data for later
1006 1006 ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
1007 1007 % exc)
1008 1008 tb = sys.exc_info()[2]
1009 1009 msg = 'unexpected error: %s' % exc
1010 1010 interpart = bundlepart('error:abort', [('message', msg)],
1011 1011 mandatory=False)
1012 1012 interpart.id = 0
1013 1013 yield _pack(_fpayloadsize, -1)
1014 1014 for chunk in interpart.getchunks(ui=ui):
1015 1015 yield chunk
1016 1016 outdebug(ui, 'closing payload chunk')
1017 1017 # abort current part payload
1018 1018 yield _pack(_fpayloadsize, 0)
1019 1019 pycompat.raisewithtb(exc, tb)
1020 1020 # end of payload
1021 1021 outdebug(ui, 'closing payload chunk')
1022 1022 yield _pack(_fpayloadsize, 0)
1023 1023 self._generated = True
1024 1024
1025 1025 def _payloadchunks(self):
1026 1026 """yield chunks of a the part payload
1027 1027
1028 1028 Exists to handle the different methods to provide data to a part."""
1029 1029 # we only support fixed size data now.
1030 1030 # This will be improved in the future.
1031 1031 if util.safehasattr(self.data, 'next'):
1032 1032 buff = util.chunkbuffer(self.data)
1033 1033 chunk = buff.read(preferedchunksize)
1034 1034 while chunk:
1035 1035 yield chunk
1036 1036 chunk = buff.read(preferedchunksize)
1037 1037 elif len(self.data):
1038 1038 yield self.data
1039 1039
1040 1040
1041 1041 flaginterrupt = -1
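# A payload chunk size equal to flaginterrupt (-1) signals an out of band
# interruption. As a sketch of the framing, the producer side (see
# bundlepart.getchunks above) emits, in the middle of a payload:
#
#     <int32: -1> <one complete part, e.g. error:abort> <int32: 0>
#
# and the consumer side (interrupthandler below) reads and processes the
# embedded part, then resumes reading the surrounding payload.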
1042 1042
1043 1043 class interrupthandler(unpackermixin):
1044 1044 """read one part and process it with restricted capability
1045 1045
1046 1046 This allows to transmit exception raised on the producer size during part
1047 1047 iteration while the consumer is reading a part.
1048 1048
1049 1049 Part processed in this manner only have access to a ui object,"""
1050 1050
1051 1051 def __init__(self, ui, fp):
1052 1052 super(interrupthandler, self).__init__(fp)
1053 1053 self.ui = ui
1054 1054
1055 1055 def _readpartheader(self):
1056 1056 """reads a part header size and return the bytes blob
1057 1057
1058 1058 returns None if empty"""
1059 1059 headersize = self._unpack(_fpartheadersize)[0]
1060 1060 if headersize < 0:
1061 1061 raise error.BundleValueError('negative part header size: %i'
1062 1062 % headersize)
1063 1063 indebug(self.ui, 'part header size: %i\n' % headersize)
1064 1064 if headersize:
1065 1065 return self._readexact(headersize)
1066 1066 return None
1067 1067
1068 1068 def __call__(self):
1069 1069
1070 1070 self.ui.debug('bundle2-input-stream-interrupt:'
1071 1071 ' opening out of band context\n')
1072 1072 indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
1073 1073 headerblock = self._readpartheader()
1074 1074 if headerblock is None:
1075 1075 indebug(self.ui, 'no part found during interruption.')
1076 1076 return
1077 1077 part = unbundlepart(self.ui, headerblock, self._fp)
1078 1078 op = interruptoperation(self.ui)
1079 1079 _processpart(op, part)
1080 1080 self.ui.debug('bundle2-input-stream-interrupt:'
1081 1081 ' closing out of band context\n')
1082 1082
1083 1083 class interruptoperation(object):
1084 1084 """A limited operation to be use by part handler during interruption
1085 1085
1086 1086 It only have access to an ui object.
1087 1087 """
1088 1088
1089 1089 def __init__(self, ui):
1090 1090 self.ui = ui
1091 1091 self.reply = None
1092 1092 self.captureoutput = False
1093 1093
1094 1094 @property
1095 1095 def repo(self):
1096 1096 raise error.ProgrammingError('no repo access from stream interruption')
1097 1097
1098 1098 def gettransaction(self):
1099 1099 raise TransactionUnavailable('no repo access from stream interruption')
1100 1100
1101 1101 class unbundlepart(unpackermixin):
1102 1102 """a bundle part read from a bundle"""
1103 1103
1104 1104 def __init__(self, ui, header, fp):
1105 1105 super(unbundlepart, self).__init__(fp)
1106 1106 self._seekable = (util.safehasattr(fp, 'seek') and
1107 1107 util.safehasattr(fp, 'tell'))
1108 1108 self.ui = ui
1109 1109 # unbundle state attr
1110 1110 self._headerdata = header
1111 1111 self._headeroffset = 0
1112 1112 self._initialized = False
1113 1113 self.consumed = False
1114 1114 # part data
1115 1115 self.id = None
1116 1116 self.type = None
1117 1117 self.mandatoryparams = None
1118 1118 self.advisoryparams = None
1119 1119 self.params = None
1120 1120 self.mandatorykeys = ()
1121 1121 self._payloadstream = None
1122 1122 self._readheader()
1123 1123 self._mandatory = None
1124 1124 self._chunkindex = [] #(payload, file) position tuples for chunk starts
1125 1125 self._pos = 0
1126 1126
1127 1127 def _fromheader(self, size):
1128 1128 """return the next <size> byte from the header"""
1129 1129 offset = self._headeroffset
1130 1130 data = self._headerdata[offset:(offset + size)]
1131 1131 self._headeroffset = offset + size
1132 1132 return data
1133 1133
1134 1134 def _unpackheader(self, format):
1135 1135 """read given format from header
1136 1136
1137 1137 This automatically computes the size of the format to read.
1138 1138 data = self._fromheader(struct.calcsize(format))
1139 1139 return _unpack(format, data)
1140 1140
1141 1141 def _initparams(self, mandatoryparams, advisoryparams):
1142 1142 """internal function to setup all logic related parameters"""
1143 1143 # make it read only to prevent people touching it by mistake.
1144 1144 self.mandatoryparams = tuple(mandatoryparams)
1145 1145 self.advisoryparams = tuple(advisoryparams)
1146 1146 # user friendly UI
1147 1147 self.params = util.sortdict(self.mandatoryparams)
1148 1148 self.params.update(self.advisoryparams)
1149 1149 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1150 1150
1151 1151 def _payloadchunks(self, chunknum=0):
1152 1152 '''seek to specified chunk and start yielding data'''
1153 1153 if len(self._chunkindex) == 0:
1154 1154 assert chunknum == 0, 'Must start with chunk 0'
1155 1155 self._chunkindex.append((0, self._tellfp()))
1156 1156 else:
1157 1157 assert chunknum < len(self._chunkindex), \
1158 1158 'Unknown chunk %d' % chunknum
1159 1159 self._seekfp(self._chunkindex[chunknum][1])
1160 1160
1161 1161 pos = self._chunkindex[chunknum][0]
1162 1162 payloadsize = self._unpack(_fpayloadsize)[0]
1163 1163 indebug(self.ui, 'payload chunk size: %i' % payloadsize)
1164 1164 while payloadsize:
1165 1165 if payloadsize == flaginterrupt:
1166 1166 # interruption detection, the handler will now read a
1167 1167 # single part and process it.
1168 1168 interrupthandler(self.ui, self._fp)()
1169 1169 elif payloadsize < 0:
1170 1170 msg = 'negative payload chunk size: %i' % payloadsize
1171 1171 raise error.BundleValueError(msg)
1172 1172 else:
1173 1173 result = self._readexact(payloadsize)
1174 1174 chunknum += 1
1175 1175 pos += payloadsize
1176 1176 if chunknum == len(self._chunkindex):
1177 1177 self._chunkindex.append((pos, self._tellfp()))
1178 1178 yield result
1179 1179 payloadsize = self._unpack(_fpayloadsize)[0]
1180 1180 indebug(self.ui, 'payload chunk size: %i' % payloadsize)
1181 1181
1182 1182 def _findchunk(self, pos):
1183 1183 '''for a given payload position, return a chunk number and offset'''
1184 1184 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1185 1185 if ppos == pos:
1186 1186 return chunk, 0
1187 1187 elif ppos > pos:
1188 1188 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1189 1189 raise ValueError('Unknown chunk')
1190 1190
1191 1191 def _readheader(self):
1192 1192 """read the header and setup the object"""
1193 1193 typesize = self._unpackheader(_fparttypesize)[0]
1194 1194 self.type = self._fromheader(typesize)
1195 1195 indebug(self.ui, 'part type: "%s"' % self.type)
1196 1196 self.id = self._unpackheader(_fpartid)[0]
1197 1197 indebug(self.ui, 'part id: "%s"' % self.id)
1198 1198 # extract mandatory bit from type
1199 1199 self.mandatory = (self.type != self.type.lower())
1200 1200 self.type = self.type.lower()
1201 1201 ## reading parameters
1202 1202 # param count
1203 1203 mancount, advcount = self._unpackheader(_fpartparamcount)
1204 1204 indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
1205 1205 # param size
1206 1206 fparamsizes = _makefpartparamsizes(mancount + advcount)
1207 1207 paramsizes = self._unpackheader(fparamsizes)
1208 1208 # make it a list of couple again
1209 1209 paramsizes = zip(paramsizes[::2], paramsizes[1::2])
1210 1210 # split mandatory from advisory
1211 1211 mansizes = paramsizes[:mancount]
1212 1212 advsizes = paramsizes[mancount:]
1213 1213 # retrieve param value
1214 1214 manparams = []
1215 1215 for key, value in mansizes:
1216 1216 manparams.append((self._fromheader(key), self._fromheader(value)))
1217 1217 advparams = []
1218 1218 for key, value in advsizes:
1219 1219 advparams.append((self._fromheader(key), self._fromheader(value)))
1220 1220 self._initparams(manparams, advparams)
1221 1221 ## part payload
1222 1222 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1223 1223 # we read the data, tell it
1224 1224 self._initialized = True
1225 1225
1226 1226 def read(self, size=None):
1227 1227 """read payload data"""
1228 1228 if not self._initialized:
1229 1229 self._readheader()
1230 1230 if size is None:
1231 1231 data = self._payloadstream.read()
1232 1232 else:
1233 1233 data = self._payloadstream.read(size)
1234 1234 self._pos += len(data)
1235 1235 if size is None or len(data) < size:
1236 1236 if not self.consumed and self._pos:
1237 1237 self.ui.debug('bundle2-input-part: total payload size %i\n'
1238 1238 % self._pos)
1239 1239 self.consumed = True
1240 1240 return data
1241 1241
1242 1242 def tell(self):
1243 1243 return self._pos
1244 1244
1245 1245 def seek(self, offset, whence=0):
1246 1246 if whence == 0:
1247 1247 newpos = offset
1248 1248 elif whence == 1:
1249 1249 newpos = self._pos + offset
1250 1250 elif whence == 2:
1251 1251 if not self.consumed:
1252 1252 self.read()
1253 1253 newpos = self._chunkindex[-1][0] - offset
1254 1254 else:
1255 1255 raise ValueError('Unknown whence value: %r' % (whence,))
1256 1256
1257 1257 if newpos > self._chunkindex[-1][0] and not self.consumed:
1258 1258 self.read()
1259 1259 if not 0 <= newpos <= self._chunkindex[-1][0]:
1260 1260 raise ValueError('Offset out of range')
1261 1261
1262 1262 if self._pos != newpos:
1263 1263 chunk, internaloffset = self._findchunk(newpos)
1264 1264 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1265 1265 adjust = self.read(internaloffset)
1266 1266 if len(adjust) != internaloffset:
1267 1267 raise error.Abort(_('Seek failed\n'))
1268 1268 self._pos = newpos
1269 1269
1270 1270 def _seekfp(self, offset, whence=0):
1271 1271 """move the underlying file pointer
1272 1272
1273 1273 This method is meant for internal usage by the bundle2 protocol only.
1274 1274 It directly manipulates the low level stream, including bundle2 level
1275 1275 instructions.
1276 1276
1277 1277 Do not use it to implement higher-level logic or methods."""
1278 1278 if self._seekable:
1279 1279 return self._fp.seek(offset, whence)
1280 1280 else:
1281 1281 raise NotImplementedError(_('File pointer is not seekable'))
1282 1282
1283 1283 def _tellfp(self):
1284 1284 """return the file offset, or None if file is not seekable
1285 1285
1286 1286 This method is meant for internal usage by the bundle2 protocol only.
1287 1287 It directly manipulates the low level stream, including bundle2 level
1288 1288 instructions.
1289 1289
1290 1290 Do not use it to implement higher-level logic or methods."""
1291 1291 if self._seekable:
1292 1292 try:
1293 1293 return self._fp.tell()
1294 1294 except IOError as e:
1295 1295 if e.errno == errno.ESPIPE:
1296 1296 self._seekable = False
1297 1297 else:
1298 1298 raise
1299 1299 return None
1300 1300
1301 1301 # These are only the static capabilities.
1302 1302 # Check the 'getrepocaps' function for the rest.
1303 1303 capabilities = {'HG20': (),
1304 1304 'error': ('abort', 'unsupportedcontent', 'pushraced',
1305 1305 'pushkey'),
1306 1306 'listkeys': (),
1307 1307 'pushkey': (),
1308 1308 'digests': tuple(sorted(util.DIGESTS.keys())),
1309 1309 'remote-changegroup': ('http', 'https'),
1310 1310 'hgtagsfnodes': (),
1311 1311 }
1312 1312
1313 1313 def getrepocaps(repo, allowpushback=False):
1314 1314 """return the bundle2 capabilities for a given repo
1315 1315
1316 1316 Exists to allow extensions (like evolution) to mutate the capabilities.
1317 1317 """
1318 1318 caps = capabilities.copy()
1319 1319 caps['changegroup'] = tuple(sorted(
1320 1320 changegroup.supportedincomingversions(repo)))
1321 1321 if obsolete.isenabled(repo, obsolete.exchangeopt):
1322 1322 supportedformat = tuple('V%i' % v for v in obsolete.formats)
1323 1323 caps['obsmarkers'] = supportedformat
1324 1324 if allowpushback:
1325 1325 caps['pushback'] = ()
1326 1326 return caps
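# Sketch of how these capabilities travel (see ``bundle2caps`` below): the
# server side serializes getrepocaps(repo) with encodecaps and advertises
# the urlquoted blob as its 'bundle2' capability; a client recovers the
# dictionary with something like:
#
#     decodecaps(urlreq.unquote(remote.capable('bundle2')))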
1327 1327
1328 1328 def bundle2caps(remote):
1329 1329 """return the bundle capabilities of a peer as dict"""
1330 1330 raw = remote.capable('bundle2')
1331 1331 if not raw and raw != '':
1332 1332 return {}
1333 1333 capsblob = urlreq.unquote(remote.capable('bundle2'))
1334 1334 return decodecaps(capsblob)
1335 1335
1336 1336 def obsmarkersversion(caps):
1337 1337 """extract the list of supported obsmarkers versions from a bundle2caps dict
1338 1338 """
1339 1339 obscaps = caps.get('obsmarkers', ())
1340 1340 return [int(c[1:]) for c in obscaps if c.startswith('V')]
1341 1341
1342 def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
1343 vfs=None, compression=None, compopts=None):
1344 if bundletype.startswith('HG10'):
1345 cg = changegroup.getchangegroup(repo, source, outgoing, version='01')
1346 return writebundle(ui, cg, filename, bundletype, vfs=vfs,
1347 compression=compression, compopts=compopts)
1348 elif not bundletype.startswith('HG20'):
1349 raise error.ProgrammingError('unknown bundle type: %s' % bundletype)
1350
1351 bundle = bundle20(ui)
1352 bundle.setcompression(compression, compopts)
1353 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1354 chunkiter = bundle.getchunks()
1355
1356 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
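# Usage sketch for this new helper (the arguments are illustrative; here
# 'outgoing' would be a discovery.outgoing instance and 'BZ' one of the
# supported compression engines):
#
#     writenewbundle(ui, repo, 'bundle', 'dump.hg', 'HG20', outgoing,
#                    {'cg.version': '02'}, compression='BZ')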
1357
1358 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1359 # We should eventually reconcile this logic with the one behind
1360 # 'exchange.getbundle2partsgenerator'.
1361 #
1362 # The types of input from 'getbundle' and 'writenewbundle' are a bit
1363 # different right now. So we keep them separated for now for the sake of
1364 # simplicity.
1365
1366 # we always want a changegroup in such a bundle
1367 cgversion = opts.get('cg.version')
1368 if cgversion is None:
1369 cgversion = changegroup.safeversion(repo)
1370 cg = changegroup.getchangegroup(repo, source, outgoing,
1371 version=cgversion)
1372 part = bundler.newpart('changegroup', data=cg.getchunks())
1373 part.addparam('version', cg.version)
1374 if 'clcount' in cg.extras:
1375 part.addparam('nbchanges', str(cg.extras['clcount']),
1376 mandatory=False)
1377
1342 1378 def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
1343 1379 compopts=None):
1344 1380 """Write a bundle file and return its filename.
1345 1381
1346 1382 Existing files will not be overwritten.
1347 1383 If no filename is specified, a temporary file is created.
1348 1384 bz2 compression can be turned off.
1349 1385 The bundle file will be deleted in case of errors.
1350 1386 """
1351 1387
1352 1388 if bundletype == "HG20":
1353 1389 bundle = bundle20(ui)
1354 1390 bundle.setcompression(compression, compopts)
1355 1391 part = bundle.newpart('changegroup', data=cg.getchunks())
1356 1392 part.addparam('version', cg.version)
1357 1393 if 'clcount' in cg.extras:
1358 1394 part.addparam('nbchanges', str(cg.extras['clcount']),
1359 1395 mandatory=False)
1360 1396 chunkiter = bundle.getchunks()
1361 1397 else:
1362 1398 # compression argument is only for the bundle2 case
1363 1399 assert compression is None
1364 1400 if cg.version != '01':
1365 1401 raise error.Abort(_('old bundle types only supports v1 '
1366 1402 'changegroups'))
1367 1403 header, comp = bundletypes[bundletype]
1368 1404 if comp not in util.compengines.supportedbundletypes:
1369 1405 raise error.Abort(_('unknown stream compression type: %s')
1370 1406 % comp)
1371 1407 compengine = util.compengines.forbundletype(comp)
1372 1408 def chunkiter():
1373 1409 yield header
1374 1410 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1375 1411 yield chunk
1376 1412 chunkiter = chunkiter()
1377 1413
1378 1414 # parse the changegroup data, otherwise we will block
1379 1415 # in case of sshrepo because we don't know the end of the stream
1380 1416 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1381 1417
1382 1418 @parthandler('changegroup', ('version', 'nbchanges', 'treemanifest'))
1383 1419 def handlechangegroup(op, inpart):
1384 1420 """apply a changegroup part on the repo
1385 1421
1386 1422 This is a very early implementation that will be massively reworked before
1387 1423 being inflicted on any end-user.
1388 1424 """
1389 1425 # Make sure we trigger a transaction creation
1390 1426 #
1391 1427 # The addchangegroup function will get a transaction object by itself, but
1392 1428 # we need to make sure we trigger the creation of a transaction object used
1393 1429 # for the whole processing scope.
1394 1430 op.gettransaction()
1395 1431 unpackerversion = inpart.params.get('version', '01')
1396 1432 # We should raise an appropriate exception here
1397 1433 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1398 1434 # the source and url passed here are overwritten by the ones contained in
1399 1435 # the transaction.hookargs argument. So 'bundle2' is a placeholder
1400 1436 nbchangesets = None
1401 1437 if 'nbchanges' in inpart.params:
1402 1438 nbchangesets = int(inpart.params.get('nbchanges'))
1403 1439 if ('treemanifest' in inpart.params and
1404 1440 'treemanifest' not in op.repo.requirements):
1405 1441 if len(op.repo.changelog) != 0:
1406 1442 raise error.Abort(_(
1407 1443 "bundle contains tree manifests, but local repo is "
1408 1444 "non-empty and does not use tree manifests"))
1409 1445 op.repo.requirements.add('treemanifest')
1410 1446 op.repo._applyopenerreqs()
1411 1447 op.repo._writerequirements()
1412 1448 ret = cg.apply(op.repo, 'bundle2', 'bundle2', expectedtotal=nbchangesets)
1413 1449 op.records.add('changegroup', {'return': ret})
1414 1450 if op.reply is not None:
1415 1451 # This is definitely not the final form of this
1416 1452 # return. But one needs to start somewhere.
1417 1453 part = op.reply.newpart('reply:changegroup', mandatory=False)
1418 1454 part.addparam('in-reply-to', str(inpart.id), mandatory=False)
1419 1455 part.addparam('return', '%i' % ret, mandatory=False)
1420 1456 assert not inpart.read()
1421 1457
1422 1458 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1423 1459 ['digest:%s' % k for k in util.DIGESTS.keys()])
1424 1460 @parthandler('remote-changegroup', _remotechangegroupparams)
1425 1461 def handleremotechangegroup(op, inpart):
1426 1462 """apply a bundle10 on the repo, given an url and validation information
1427 1463
1428 1464 All the information about the remote bundle to import is given as
1429 1465 parameters. The parameters include:
1430 1466 - url: the url to the bundle10.
1431 1467 - size: the bundle10 file size. It is used to validate that what was
1432 1468 retrieved by the client matches the server's knowledge of the bundle.
1433 1469 - digests: a space separated list of the digest types provided as
1434 1470 parameters.
1435 1471 - digest:<digest-type>: the hexadecimal representation of the digest with
1436 1472 that name. Like the size, it is used to validate that what was retrieved
1437 1473 by the client matches what the server knows about the bundle.
1438 1474
1439 1475 When multiple digest types are given, all of them are checked.
1440 1476 """
1441 1477 try:
1442 1478 raw_url = inpart.params['url']
1443 1479 except KeyError:
1444 1480 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1445 1481 parsed_url = util.url(raw_url)
1446 1482 if parsed_url.scheme not in capabilities['remote-changegroup']:
1447 1483 raise error.Abort(_('remote-changegroup does not support %s urls') %
1448 1484 parsed_url.scheme)
1449 1485
1450 1486 try:
1451 1487 size = int(inpart.params['size'])
1452 1488 except ValueError:
1453 1489 raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
1454 1490 % 'size')
1455 1491 except KeyError:
1456 1492 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1457 1493
1458 1494 digests = {}
1459 1495 for typ in inpart.params.get('digests', '').split():
1460 1496 param = 'digest:%s' % typ
1461 1497 try:
1462 1498 value = inpart.params[param]
1463 1499 except KeyError:
1464 1500 raise error.Abort(_('remote-changegroup: missing "%s" param') %
1465 1501 param)
1466 1502 digests[typ] = value
1467 1503
1468 1504 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1469 1505
1470 1506 # Make sure we trigger a transaction creation
1471 1507 #
1472 1508 # The addchangegroup function will get a transaction object by itself, but
1473 1509 # we need to make sure we trigger the creation of a transaction object used
1474 1510 # for the whole processing scope.
1475 1511 op.gettransaction()
1476 1512 from . import exchange
1477 1513 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1478 1514 if not isinstance(cg, changegroup.cg1unpacker):
1479 1515 raise error.Abort(_('%s: not a bundle version 1.0') %
1480 1516 util.hidepassword(raw_url))
1481 1517 ret = cg.apply(op.repo, 'bundle2', 'bundle2')
1482 1518 op.records.add('changegroup', {'return': ret})
1483 1519 if op.reply is not None:
1484 1520 # This is definitely not the final form of this
1485 1521 # return. But one needs to start somewhere.
1486 1522 part = op.reply.newpart('reply:changegroup')
1487 1523 part.addparam('in-reply-to', str(inpart.id), mandatory=False)
1488 1524 part.addparam('return', '%i' % ret, mandatory=False)
1489 1525 try:
1490 1526 real_part.validate()
1491 1527 except error.Abort as e:
1492 1528 raise error.Abort(_('bundle at %s is corrupted:\n%s') %
1493 1529 (util.hidepassword(raw_url), str(e)))
1494 1530 assert not inpart.read()
1495 1531
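# Illustrative sketch (not part of bundle2.py): how the parameters consumed by
# handleremotechangegroup above could be computed for a local bundle10 file.
# The function name and arguments are hypothetical; only the shape of the
# returned dict follows the part contract (integer size as a string, a space
# separated 'digests' list, and one 'digest:<type>' entry per digest type).

import hashlib
import os

def exampleremotechangegroupparams(path, url, digesttypes=('sha1',)):
    """build a 'remote-changegroup' parameter dict for a bundle file (sketch)"""
    params = {'url': url,
              'size': '%d' % os.path.getsize(path),
              'digests': ' '.join(digesttypes)}
    with open(path, 'rb') as fp:
        data = fp.read()  # read at once for brevity; real code would stream
    for typ in digesttypes:
        params['digest:%s' % typ] = hashlib.new(typ, data).hexdigest()
    return params
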
1496 1532 @parthandler('reply:changegroup', ('return', 'in-reply-to'))
1497 1533 def handlereplychangegroup(op, inpart):
1498 1534 ret = int(inpart.params['return'])
1499 1535 replyto = int(inpart.params['in-reply-to'])
1500 1536 op.records.add('changegroup', {'return': ret}, replyto)
1501 1537
1502 1538 @parthandler('check:heads')
1503 1539 def handlecheckheads(op, inpart):
1504 1540 """check that the heads of the repo did not change
1505 1541
1506 1542 This is used to detect a push race when using unbundle.
1507 1543 This replaces the "heads" argument of unbundle."""
1508 1544 h = inpart.read(20)
1509 1545 heads = []
1510 1546 while len(h) == 20:
1511 1547 heads.append(h)
1512 1548 h = inpart.read(20)
1513 1549 assert not h
1514 1550 # Trigger a transaction so that we are guaranteed to have the lock now.
1515 1551 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1516 1552 op.gettransaction()
1517 1553 if sorted(heads) != sorted(op.repo.heads()):
1518 1554 raise error.PushRaced('repository changed while pushing - '
1519 1555 'please try again')
1520 1556
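# Illustrative sketch (not part of bundle2.py): the 'check:heads' payload read
# by handlecheckheads above is just the concatenation of the 20-byte binary
# head nodes known to the pusher, with no separator or count. Packing and
# re-reading it can be modelled with hypothetical helpers:

def examplepackheads(heads):
    """concatenate 20-byte binary nodes into a check:heads payload (sketch)"""
    assert all(len(h) == 20 for h in heads)
    return ''.join(heads)

def exampleunpackheads(payload):
    """split a check:heads payload back into 20-byte nodes (sketch)"""
    return [payload[i:i + 20] for i in range(0, len(payload), 20)]

# The handler compares sorted(exampleunpackheads(payload)) against
# sorted(repo.heads()) and raises PushRaced when they differ.
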
1521 1557 @parthandler('output')
1522 1558 def handleoutput(op, inpart):
1523 1559 """forward output captured on the server to the client"""
1524 1560 for line in inpart.read().splitlines():
1525 1561 op.ui.status(_('remote: %s\n') % line)
1526 1562
1527 1563 @parthandler('replycaps')
1528 1564 def handlereplycaps(op, inpart):
1529 1565 """Notify that a reply bundle should be created
1530 1566
1531 1567 The payload contains the capabilities information for the reply"""
1532 1568 caps = decodecaps(inpart.read())
1533 1569 if op.reply is None:
1534 1570 op.reply = bundle20(op.ui, caps)
1535 1571
1536 1572 class AbortFromPart(error.Abort):
1537 1573 """Sub-class of Abort that denotes an error from a bundle2 part."""
1538 1574
1539 1575 @parthandler('error:abort', ('message', 'hint'))
1540 1576 def handleerrorabort(op, inpart):
1541 1577 """Used to transmit abort error over the wire"""
1542 1578 raise AbortFromPart(inpart.params['message'],
1543 1579 hint=inpart.params.get('hint'))
1544 1580
1545 1581 @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
1546 1582 'in-reply-to'))
1547 1583 def handleerrorpushkey(op, inpart):
1548 1584 """Used to transmit failure of a mandatory pushkey over the wire"""
1549 1585 kwargs = {}
1550 1586 for name in ('namespace', 'key', 'new', 'old', 'ret'):
1551 1587 value = inpart.params.get(name)
1552 1588 if value is not None:
1553 1589 kwargs[name] = value
1554 1590 raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)
1555 1591
1556 1592 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
1557 1593 def handleerrorunsupportedcontent(op, inpart):
1558 1594 """Used to transmit unknown content error over the wire"""
1559 1595 kwargs = {}
1560 1596 parttype = inpart.params.get('parttype')
1561 1597 if parttype is not None:
1562 1598 kwargs['parttype'] = parttype
1563 1599 params = inpart.params.get('params')
1564 1600 if params is not None:
1565 1601 kwargs['params'] = params.split('\0')
1566 1602
1567 1603 raise error.BundleUnknownFeatureError(**kwargs)
1568 1604
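# Illustrative sketch (not part of bundle2.py): the 'params' value handled by
# handleerrorunsupportedcontent above is a single string holding the offending
# parameter names joined by NUL bytes, which is why the handler splits on
# '\0'. The encoding side could be modelled as:

def examplepackunsupportedparams(paramnames):
    """join parameter names with NUL for an error:unsupportedcontent part"""
    return '\0'.join(paramnames)

# e.g. examplepackunsupportedparams(['version', 'nbchanges']) round-trips
# through the handler's params.split('\0') unchanged.
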
1569 1605 @parthandler('error:pushraced', ('message',))
1570 1606 def handleerrorpushraced(op, inpart):
1571 1607 """Used to transmit push race error over the wire"""
1572 1608 raise error.ResponseError(_('push failed:'), inpart.params['message'])
1573 1609
1574 1610 @parthandler('listkeys', ('namespace',))
1575 1611 def handlelistkeys(op, inpart):
1576 1612 """retrieve pushkey namespace content stored in a bundle2"""
1577 1613 namespace = inpart.params['namespace']
1578 1614 r = pushkey.decodekeys(inpart.read())
1579 1615 op.records.add('listkeys', (namespace, r))
1580 1616
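# Illustrative sketch (not part of bundle2.py): handlelistkeys above delegates
# to pushkey.decodekeys. Assuming the usual pushkey wire encoding of one
# tab-separated "key<TAB>value" pair per line, a simplified stand-in decoder
# would be:

def exampledecodekeys(data):
    """decode a listkeys payload into a dict (simplified sketch)"""
    result = {}
    for line in data.splitlines():
        key, value = line.split('\t', 1)
        result[key] = value
    return result

# e.g. exampledecodekeys('book-a\t0123abcd\nbook-b\tdeadbeef')
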
1581 1617 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
1582 1618 def handlepushkey(op, inpart):
1583 1619 """process a pushkey request"""
1584 1620 dec = pushkey.decode
1585 1621 namespace = dec(inpart.params['namespace'])
1586 1622 key = dec(inpart.params['key'])
1587 1623 old = dec(inpart.params['old'])
1588 1624 new = dec(inpart.params['new'])
1589 1625 # Grab the transaction to ensure that we have the lock before performing the
1590 1626 # pushkey.
1591 1627 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1592 1628 op.gettransaction()
1593 1629 ret = op.repo.pushkey(namespace, key, old, new)
1594 1630 record = {'namespace': namespace,
1595 1631 'key': key,
1596 1632 'old': old,
1597 1633 'new': new}
1598 1634 op.records.add('pushkey', record)
1599 1635 if op.reply is not None:
1600 1636 rpart = op.reply.newpart('reply:pushkey')
1601 1637 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1602 1638 rpart.addparam('return', '%i' % ret, mandatory=False)
1603 1639 if inpart.mandatory and not ret:
1604 1640 kwargs = {}
1605 1641 for key in ('namespace', 'key', 'new', 'old', 'ret'):
1606 1642 if key in inpart.params:
1607 1643 kwargs[key] = inpart.params[key]
1608 1644 raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)
1609 1645
1610 1646 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
1611 1647 def handlepushkeyreply(op, inpart):
1612 1648 """retrieve the result of a pushkey request"""
1613 1649 ret = int(inpart.params['return'])
1614 1650 partid = int(inpart.params['in-reply-to'])
1615 1651 op.records.add('pushkey', {'return': ret}, partid)
1616 1652
1617 1653 @parthandler('obsmarkers')
1618 1654 def handleobsmarker(op, inpart):
1619 1655 """add a stream of obsmarkers to the repo"""
1620 1656 tr = op.gettransaction()
1621 1657 markerdata = inpart.read()
1622 1658 if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
1623 1659 op.ui.write(('obsmarker-exchange: %i bytes received\n')
1624 1660 % len(markerdata))
1625 1661 # The mergemarkers call will crash if marker creation is not enabled.
1626 1662 # we want to avoid this if the part is advisory.
1627 1663 if not inpart.mandatory and op.repo.obsstore.readonly:
1628 1664 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
1629 1665 return
1630 1666 new = op.repo.obsstore.mergemarkers(tr, markerdata)
1631 1667 if new:
1632 1668 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
1633 1669 op.records.add('obsmarkers', {'new': new})
1634 1670 if op.reply is not None:
1635 1671 rpart = op.reply.newpart('reply:obsmarkers')
1636 1672 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1637 1673 rpart.addparam('new', '%i' % new, mandatory=False)
1638 1674
1639 1675
1640 1676 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
1641 1677 def handleobsmarkerreply(op, inpart):
1642 1678 """retrieve the result of an obsmarkers request"""
1643 1679 ret = int(inpart.params['new'])
1644 1680 partid = int(inpart.params['in-reply-to'])
1645 1681 op.records.add('obsmarkers', {'new': ret}, partid)
1646 1682
1647 1683 @parthandler('hgtagsfnodes')
1648 1684 def handlehgtagsfnodes(op, inpart):
1649 1685 """Applies .hgtags fnodes cache entries to the local repo.
1650 1686
1651 1687 Payload is pairs of 20 byte changeset nodes and filenodes.
1652 1688 """
1653 1689 # Grab the transaction to ensure that we have the lock at this point.
1654 1690 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1655 1691 op.gettransaction()
1656 1692 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
1657 1693
1658 1694 count = 0
1659 1695 while True:
1660 1696 node = inpart.read(20)
1661 1697 fnode = inpart.read(20)
1662 1698 if len(node) < 20 or len(fnode) < 20:
1663 1699 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
1664 1700 break
1665 1701 cache.setfnode(node, fnode)
1666 1702 count += 1
1667 1703
1668 1704 cache.write()
1669 1705 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
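
# Illustrative sketch (not part of bundle2.py): the 'hgtagsfnodes' payload read
# above is a flat sequence of (changeset node, .hgtags filenode) pairs, each
# member exactly 20 bytes, with no separator or count. Building such a payload
# from hypothetical (node, fnode) tuples could look like:

def examplepackfnodes(pairs):
    """serialize (node, fnode) pairs into an hgtagsfnodes payload (sketch)"""
    chunks = []
    for node, fnode in pairs:
        assert len(node) == 20 and len(fnode) == 20
        chunks.append(node)
        chunks.append(fnode)
    return ''.join(chunks)

# The handler consumes the payload 40 bytes at a time (two read(20) calls) and
# stops at the first short read, which is why no explicit length is needed.
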
@@ -1,5517 +1,5518 b''
1 1 # commands.py - command processing for mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import difflib
11 11 import errno
12 12 import os
13 13 import re
14 14
15 15 from .i18n import _
16 16 from .node import (
17 17 hex,
18 18 nullid,
19 19 nullrev,
20 20 short,
21 21 )
22 22 from . import (
23 23 archival,
24 24 bookmarks,
25 25 bundle2,
26 26 changegroup,
27 27 cmdutil,
28 28 copies,
29 29 destutil,
30 30 dirstateguard,
31 31 discovery,
32 32 encoding,
33 33 error,
34 34 exchange,
35 35 extensions,
36 36 graphmod,
37 37 hbisect,
38 38 help,
39 39 hg,
40 40 lock as lockmod,
41 41 merge as mergemod,
42 42 obsolete,
43 43 patch,
44 44 phases,
45 45 pycompat,
46 46 rcutil,
47 47 revsetlang,
48 48 scmutil,
49 49 server,
50 50 sshserver,
51 51 streamclone,
52 52 tags as tagsmod,
53 53 templatekw,
54 54 ui as uimod,
55 55 util,
56 56 )
57 57
58 58 release = lockmod.release
59 59
60 60 table = {}
61 61
62 62 command = cmdutil.command(table)
63 63
64 64 # label constants
65 65 # until 3.5, bookmarks.current was the advertised name, not
66 66 # bookmarks.active, so we must use both to avoid breaking old
67 67 # custom styles
68 68 activebookmarklabel = 'bookmarks.active bookmarks.current'
69 69
70 70 # common command options
71 71
72 72 globalopts = [
73 73 ('R', 'repository', '',
74 74 _('repository root directory or name of overlay bundle file'),
75 75 _('REPO')),
76 76 ('', 'cwd', '',
77 77 _('change working directory'), _('DIR')),
78 78 ('y', 'noninteractive', None,
79 79 _('do not prompt, automatically pick the first choice for all prompts')),
80 80 ('q', 'quiet', None, _('suppress output')),
81 81 ('v', 'verbose', None, _('enable additional output')),
82 82 ('', 'color', '',
83 83 # i18n: 'always', 'auto', 'never', and 'debug' are keywords
84 84 # and should not be translated
85 85 _("when to colorize (boolean, always, auto, never, or debug)"),
86 86 _('TYPE')),
87 87 ('', 'config', [],
88 88 _('set/override config option (use \'section.name=value\')'),
89 89 _('CONFIG')),
90 90 ('', 'debug', None, _('enable debugging output')),
91 91 ('', 'debugger', None, _('start debugger')),
92 92 ('', 'encoding', encoding.encoding, _('set the charset encoding'),
93 93 _('ENCODE')),
94 94 ('', 'encodingmode', encoding.encodingmode,
95 95 _('set the charset encoding mode'), _('MODE')),
96 96 ('', 'traceback', None, _('always print a traceback on exception')),
97 97 ('', 'time', None, _('time how long the command takes')),
98 98 ('', 'profile', None, _('print command execution profile')),
99 99 ('', 'version', None, _('output version information and exit')),
100 100 ('h', 'help', None, _('display help and exit')),
101 101 ('', 'hidden', False, _('consider hidden changesets')),
102 102 ('', 'pager', 'auto',
103 103 _("when to paginate (boolean, always, auto, or never)"), _('TYPE')),
104 104 ]
105 105
106 106 dryrunopts = [('n', 'dry-run', None,
107 107 _('do not perform actions, just print output'))]
108 108
109 109 remoteopts = [
110 110 ('e', 'ssh', '',
111 111 _('specify ssh command to use'), _('CMD')),
112 112 ('', 'remotecmd', '',
113 113 _('specify hg command to run on the remote side'), _('CMD')),
114 114 ('', 'insecure', None,
115 115 _('do not verify server certificate (ignoring web.cacerts config)')),
116 116 ]
117 117
118 118 walkopts = [
119 119 ('I', 'include', [],
120 120 _('include names matching the given patterns'), _('PATTERN')),
121 121 ('X', 'exclude', [],
122 122 _('exclude names matching the given patterns'), _('PATTERN')),
123 123 ]
124 124
125 125 commitopts = [
126 126 ('m', 'message', '',
127 127 _('use text as commit message'), _('TEXT')),
128 128 ('l', 'logfile', '',
129 129 _('read commit message from file'), _('FILE')),
130 130 ]
131 131
132 132 commitopts2 = [
133 133 ('d', 'date', '',
134 134 _('record the specified date as commit date'), _('DATE')),
135 135 ('u', 'user', '',
136 136 _('record the specified user as committer'), _('USER')),
137 137 ]
138 138
139 139 # hidden for now
140 140 formatteropts = [
141 141 ('T', 'template', '',
142 142 _('display with template (EXPERIMENTAL)'), _('TEMPLATE')),
143 143 ]
144 144
145 145 templateopts = [
146 146 ('', 'style', '',
147 147 _('display using template map file (DEPRECATED)'), _('STYLE')),
148 148 ('T', 'template', '',
149 149 _('display with template'), _('TEMPLATE')),
150 150 ]
151 151
152 152 logopts = [
153 153 ('p', 'patch', None, _('show patch')),
154 154 ('g', 'git', None, _('use git extended diff format')),
155 155 ('l', 'limit', '',
156 156 _('limit number of changes displayed'), _('NUM')),
157 157 ('M', 'no-merges', None, _('do not show merges')),
158 158 ('', 'stat', None, _('output diffstat-style summary of changes')),
159 159 ('G', 'graph', None, _("show the revision DAG")),
160 160 ] + templateopts
161 161
162 162 diffopts = [
163 163 ('a', 'text', None, _('treat all files as text')),
164 164 ('g', 'git', None, _('use git extended diff format')),
165 165 ('', 'binary', None, _('generate binary diffs in git mode (default)')),
166 166 ('', 'nodates', None, _('omit dates from diff headers'))
167 167 ]
168 168
169 169 diffwsopts = [
170 170 ('w', 'ignore-all-space', None,
171 171 _('ignore white space when comparing lines')),
172 172 ('b', 'ignore-space-change', None,
173 173 _('ignore changes in the amount of white space')),
174 174 ('B', 'ignore-blank-lines', None,
175 175 _('ignore changes whose lines are all blank')),
176 176 ]
177 177
178 178 diffopts2 = [
179 179 ('', 'noprefix', None, _('omit a/ and b/ prefixes from filenames')),
180 180 ('p', 'show-function', None, _('show which function each change is in')),
181 181 ('', 'reverse', None, _('produce a diff that undoes the changes')),
182 182 ] + diffwsopts + [
183 183 ('U', 'unified', '',
184 184 _('number of lines of context to show'), _('NUM')),
185 185 ('', 'stat', None, _('output diffstat-style summary of changes')),
186 186 ('', 'root', '', _('produce diffs relative to subdirectory'), _('DIR')),
187 187 ]
188 188
189 189 mergetoolopts = [
190 190 ('t', 'tool', '', _('specify merge tool')),
191 191 ]
192 192
193 193 similarityopts = [
194 194 ('s', 'similarity', '',
195 195 _('guess renamed files by similarity (0<=s<=100)'), _('SIMILARITY'))
196 196 ]
197 197
198 198 subrepoopts = [
199 199 ('S', 'subrepos', None,
200 200 _('recurse into subrepositories'))
201 201 ]
202 202
203 203 debugrevlogopts = [
204 204 ('c', 'changelog', False, _('open changelog')),
205 205 ('m', 'manifest', False, _('open manifest')),
206 206 ('', 'dir', '', _('open directory manifest')),
207 207 ]
208 208
209 209 # Commands start here, listed alphabetically
210 210
211 211 @command('^add',
212 212 walkopts + subrepoopts + dryrunopts,
213 213 _('[OPTION]... [FILE]...'),
214 214 inferrepo=True)
215 215 def add(ui, repo, *pats, **opts):
216 216 """add the specified files on the next commit
217 217
218 218 Schedule files to be version controlled and added to the
219 219 repository.
220 220
221 221 The files will be added to the repository at the next commit. To
222 222 undo an add before that, see :hg:`forget`.
223 223
224 224 If no names are given, add all files to the repository (except
225 225 files matching ``.hgignore``).
226 226
227 227 .. container:: verbose
228 228
229 229 Examples:
230 230
231 231 - New (unknown) files are added
232 232 automatically by :hg:`add`::
233 233
234 234 $ ls
235 235 foo.c
236 236 $ hg status
237 237 ? foo.c
238 238 $ hg add
239 239 adding foo.c
240 240 $ hg status
241 241 A foo.c
242 242
243 243 - Specific files to be added can be specified::
244 244
245 245 $ ls
246 246 bar.c foo.c
247 247 $ hg status
248 248 ? bar.c
249 249 ? foo.c
250 250 $ hg add bar.c
251 251 $ hg status
252 252 A bar.c
253 253 ? foo.c
254 254
255 255 Returns 0 if all files are successfully added.
256 256 """
257 257
258 258 m = scmutil.match(repo[None], pats, pycompat.byteskwargs(opts))
259 259 rejected = cmdutil.add(ui, repo, m, "", False, **opts)
260 260 return rejected and 1 or 0
261 261
262 262 @command('addremove',
263 263 similarityopts + subrepoopts + walkopts + dryrunopts,
264 264 _('[OPTION]... [FILE]...'),
265 265 inferrepo=True)
266 266 def addremove(ui, repo, *pats, **opts):
267 267 """add all new files, delete all missing files
268 268
269 269 Add all new files and remove all missing files from the
270 270 repository.
271 271
272 272 Unless names are given, new files are ignored if they match any of
273 273 the patterns in ``.hgignore``. As with add, these changes take
274 274 effect at the next commit.
275 275
276 276 Use the -s/--similarity option to detect renamed files. This
277 277 option takes a percentage between 0 (disabled) and 100 (files must
278 278 be identical) as its parameter. With a parameter greater than 0,
279 279 this compares every removed file with every added file and records
280 280 those similar enough as renames. Detecting renamed files this way
281 281 can be expensive. After using this option, :hg:`status -C` can be
282 282 used to check which files were identified as moved or renamed. If
283 283 not specified, -s/--similarity defaults to 100 and only renames of
284 284 identical files are detected.
285 285
286 286 .. container:: verbose
287 287
288 288 Examples:
289 289
290 290 - A number of files (bar.c and foo.c) are new,
291 291 while foobar.c has been removed (without using :hg:`remove`)
292 292 from the repository::
293 293
294 294 $ ls
295 295 bar.c foo.c
296 296 $ hg status
297 297 ! foobar.c
298 298 ? bar.c
299 299 ? foo.c
300 300 $ hg addremove
301 301 adding bar.c
302 302 adding foo.c
303 303 removing foobar.c
304 304 $ hg status
305 305 A bar.c
306 306 A foo.c
307 307 R foobar.c
308 308
309 309 - A file foobar.c was moved to foo.c without using :hg:`rename`.
310 310 Afterwards, it was edited slightly::
311 311
312 312 $ ls
313 313 foo.c
314 314 $ hg status
315 315 ! foobar.c
316 316 ? foo.c
317 317 $ hg addremove --similarity 90
318 318 removing foobar.c
319 319 adding foo.c
320 320 recording removal of foobar.c as rename to foo.c (94% similar)
321 321 $ hg status -C
322 322 A foo.c
323 323 foobar.c
324 324 R foobar.c
325 325
326 326 Returns 0 if all files are successfully added.
327 327 """
328 328 opts = pycompat.byteskwargs(opts)
329 329 try:
330 330 sim = float(opts.get('similarity') or 100)
331 331 except ValueError:
332 332 raise error.Abort(_('similarity must be a number'))
333 333 if sim < 0 or sim > 100:
334 334 raise error.Abort(_('similarity must be between 0 and 100'))
335 335 matcher = scmutil.match(repo[None], pats, opts)
336 336 return scmutil.addremove(repo, matcher, "", opts, similarity=sim / 100.0)
337 337
338 338 @command('^annotate|blame',
339 339 [('r', 'rev', '', _('annotate the specified revision'), _('REV')),
340 340 ('', 'follow', None,
341 341 _('follow copies/renames and list the filename (DEPRECATED)')),
342 342 ('', 'no-follow', None, _("don't follow copies and renames")),
343 343 ('a', 'text', None, _('treat all files as text')),
344 344 ('u', 'user', None, _('list the author (long with -v)')),
345 345 ('f', 'file', None, _('list the filename')),
346 346 ('d', 'date', None, _('list the date (short with -q)')),
347 347 ('n', 'number', None, _('list the revision number (default)')),
348 348 ('c', 'changeset', None, _('list the changeset')),
349 349 ('l', 'line-number', None, _('show line number at the first appearance'))
350 350 ] + diffwsopts + walkopts + formatteropts,
351 351 _('[-r REV] [-f] [-a] [-u] [-d] [-n] [-c] [-l] FILE...'),
352 352 inferrepo=True)
353 353 def annotate(ui, repo, *pats, **opts):
354 354 """show changeset information by line for each file
355 355
356 356 List changes in files, showing the revision id responsible for
357 357 each line.
358 358
359 359 This command is useful for discovering when a change was made and
360 360 by whom.
361 361
362 362 If you include --file, --user, or --date, the revision number is
363 363 suppressed unless you also include --number.
364 364
365 365 Without the -a/--text option, annotate will avoid processing files
366 366 it detects as binary. With -a, annotate will annotate the file
367 367 anyway, although the results will probably be neither useful
368 368 nor desirable.
369 369
370 370 Returns 0 on success.
371 371 """
372 372 opts = pycompat.byteskwargs(opts)
373 373 if not pats:
374 374 raise error.Abort(_('at least one filename or pattern is required'))
375 375
376 376 if opts.get('follow'):
377 377 # --follow is deprecated and now just an alias for -f/--file
378 378 # to mimic the behavior of Mercurial before version 1.5
379 379 opts['file'] = True
380 380
381 381 ctx = scmutil.revsingle(repo, opts.get('rev'))
382 382
383 383 fm = ui.formatter('annotate', opts)
384 384 if ui.quiet:
385 385 datefunc = util.shortdate
386 386 else:
387 387 datefunc = util.datestr
388 388 if ctx.rev() is None:
389 389 def hexfn(node):
390 390 if node is None:
391 391 return None
392 392 else:
393 393 return fm.hexfunc(node)
394 394 if opts.get('changeset'):
395 395 # omit "+" suffix which is appended to node hex
396 396 def formatrev(rev):
397 397 if rev is None:
398 398 return '%d' % ctx.p1().rev()
399 399 else:
400 400 return '%d' % rev
401 401 else:
402 402 def formatrev(rev):
403 403 if rev is None:
404 404 return '%d+' % ctx.p1().rev()
405 405 else:
406 406 return '%d ' % rev
407 407 def formathex(hex):
408 408 if hex is None:
409 409 return '%s+' % fm.hexfunc(ctx.p1().node())
410 410 else:
411 411 return '%s ' % hex
412 412 else:
413 413 hexfn = fm.hexfunc
414 414 formatrev = formathex = str
415 415
416 416 opmap = [('user', ' ', lambda x: x[0].user(), ui.shortuser),
417 417 ('number', ' ', lambda x: x[0].rev(), formatrev),
418 418 ('changeset', ' ', lambda x: hexfn(x[0].node()), formathex),
419 419 ('date', ' ', lambda x: x[0].date(), util.cachefunc(datefunc)),
420 420 ('file', ' ', lambda x: x[0].path(), str),
421 421 ('line_number', ':', lambda x: x[1], str),
422 422 ]
423 423 fieldnamemap = {'number': 'rev', 'changeset': 'node'}
424 424
425 425 if (not opts.get('user') and not opts.get('changeset')
426 426 and not opts.get('date') and not opts.get('file')):
427 427 opts['number'] = True
428 428
429 429 linenumber = opts.get('line_number') is not None
430 430 if linenumber and (not opts.get('changeset')) and (not opts.get('number')):
431 431 raise error.Abort(_('at least one of -n/-c is required for -l'))
432 432
433 433 ui.pager('annotate')
434 434
435 435 if fm.isplain():
436 436 def makefunc(get, fmt):
437 437 return lambda x: fmt(get(x))
438 438 else:
439 439 def makefunc(get, fmt):
440 440 return get
441 441 funcmap = [(makefunc(get, fmt), sep) for op, sep, get, fmt in opmap
442 442 if opts.get(op)]
443 443 funcmap[0] = (funcmap[0][0], '') # no separator in front of first column
444 444 fields = ' '.join(fieldnamemap.get(op, op) for op, sep, get, fmt in opmap
445 445 if opts.get(op))
446 446
447 447 def bad(x, y):
448 448 raise error.Abort("%s: %s" % (x, y))
449 449
450 450 m = scmutil.match(ctx, pats, opts, badfn=bad)
451 451
452 452 follow = not opts.get('no_follow')
453 453 diffopts = patch.difffeatureopts(ui, opts, section='annotate',
454 454 whitespace=True)
455 455 for abs in ctx.walk(m):
456 456 fctx = ctx[abs]
457 457 if not opts.get('text') and fctx.isbinary():
458 458 fm.plain(_("%s: binary file\n") % ((pats and m.rel(abs)) or abs))
459 459 continue
460 460
461 461 lines = fctx.annotate(follow=follow, linenumber=linenumber,
462 462 diffopts=diffopts)
463 463 if not lines:
464 464 continue
465 465 formats = []
466 466 pieces = []
467 467
468 468 for f, sep in funcmap:
469 469 l = [f(n) for n, dummy in lines]
470 470 if fm.isplain():
471 471 sizes = [encoding.colwidth(x) for x in l]
472 472 ml = max(sizes)
473 473 formats.append([sep + ' ' * (ml - w) + '%s' for w in sizes])
474 474 else:
475 475 formats.append(['%s' for x in l])
476 476 pieces.append(l)
477 477
478 478 for f, p, l in zip(zip(*formats), zip(*pieces), lines):
479 479 fm.startitem()
480 480 fm.write(fields, "".join(f), *p)
481 481 fm.write('line', ": %s", l[1])
482 482
483 483 if not lines[-1][1].endswith('\n'):
484 484 fm.plain('\n')
485 485
486 486 fm.end()
487 487
488 488 @command('archive',
489 489 [('', 'no-decode', None, _('do not pass files through decoders')),
490 490 ('p', 'prefix', '', _('directory prefix for files in archive'),
491 491 _('PREFIX')),
492 492 ('r', 'rev', '', _('revision to distribute'), _('REV')),
493 493 ('t', 'type', '', _('type of distribution to create'), _('TYPE')),
494 494 ] + subrepoopts + walkopts,
495 495 _('[OPTION]... DEST'))
496 496 def archive(ui, repo, dest, **opts):
497 497 '''create an unversioned archive of a repository revision
498 498
499 499 By default, the revision used is the parent of the working
500 500 directory; use -r/--rev to specify a different revision.
501 501
502 502 The archive type is automatically detected based on file
503 503 extension (to override, use -t/--type).
504 504
505 505 .. container:: verbose
506 506
507 507 Examples:
508 508
509 509 - create a zip file containing the 1.0 release::
510 510
511 511 hg archive -r 1.0 project-1.0.zip
512 512
513 513 - create a tarball excluding .hg files::
514 514
515 515 hg archive project.tar.gz -X ".hg*"
516 516
517 517 Valid types are:
518 518
519 519 :``files``: a directory full of files (default)
520 520 :``tar``: tar archive, uncompressed
521 521 :``tbz2``: tar archive, compressed using bzip2
522 522 :``tgz``: tar archive, compressed using gzip
523 523 :``uzip``: zip archive, uncompressed
524 524 :``zip``: zip archive, compressed using deflate
525 525
526 526 The exact name of the destination archive or directory is given
527 527 using a format string; see :hg:`help export` for details.
528 528
529 529 Each member added to an archive file has a directory prefix
530 530 prepended. Use -p/--prefix to specify a format string for the
531 531 prefix. The default is the basename of the archive, with suffixes
532 532 removed.
533 533
534 534 Returns 0 on success.
535 535 '''
536 536
537 537 opts = pycompat.byteskwargs(opts)
538 538 ctx = scmutil.revsingle(repo, opts.get('rev'))
539 539 if not ctx:
540 540 raise error.Abort(_('no working directory: please specify a revision'))
541 541 node = ctx.node()
542 542 dest = cmdutil.makefilename(repo, dest, node)
543 543 if os.path.realpath(dest) == repo.root:
544 544 raise error.Abort(_('repository root cannot be destination'))
545 545
546 546 kind = opts.get('type') or archival.guesskind(dest) or 'files'
547 547 prefix = opts.get('prefix')
548 548
549 549 if dest == '-':
550 550 if kind == 'files':
551 551 raise error.Abort(_('cannot archive plain files to stdout'))
552 552 dest = cmdutil.makefileobj(repo, dest)
553 553 if not prefix:
554 554 prefix = os.path.basename(repo.root) + '-%h'
555 555
556 556 prefix = cmdutil.makefilename(repo, prefix, node)
557 557 matchfn = scmutil.match(ctx, [], opts)
558 558 archival.archive(repo, dest, node, kind, not opts.get('no_decode'),
559 559 matchfn, prefix, subrepos=opts.get('subrepos'))
560 560
561 561 @command('backout',
562 562 [('', 'merge', None, _('merge with old dirstate parent after backout')),
563 563 ('', 'commit', None,
564 564 _('commit if no conflicts were encountered (DEPRECATED)')),
565 565 ('', 'no-commit', None, _('do not commit')),
566 566 ('', 'parent', '',
567 567 _('parent to choose when backing out merge (DEPRECATED)'), _('REV')),
568 568 ('r', 'rev', '', _('revision to backout'), _('REV')),
569 569 ('e', 'edit', False, _('invoke editor on commit messages')),
570 570 ] + mergetoolopts + walkopts + commitopts + commitopts2,
571 571 _('[OPTION]... [-r] REV'))
572 572 def backout(ui, repo, node=None, rev=None, **opts):
573 573 '''reverse effect of earlier changeset
574 574
575 575 Prepare a new changeset with the effect of REV undone in the
576 576 current working directory. If no conflicts were encountered,
577 577 it will be committed immediately.
578 578
579 579 If REV is the parent of the working directory, then this new changeset
580 580 is committed automatically (unless --no-commit is specified).
581 581
582 582 .. note::
583 583
584 584 :hg:`backout` cannot be used to fix either an unwanted or
585 585 incorrect merge.
586 586
587 587 .. container:: verbose
588 588
589 589 Examples:
590 590
591 591 - Reverse the effect of the parent of the working directory.
592 592 This backout will be committed immediately::
593 593
594 594 hg backout -r .
595 595
596 596 - Reverse the effect of previous bad revision 23::
597 597
598 598 hg backout -r 23
599 599
600 600 - Reverse the effect of previous bad revision 23 and
601 601 leave changes uncommitted::
602 602
603 603 hg backout -r 23 --no-commit
604 604 hg commit -m "Backout revision 23"
605 605
606 606 By default, the pending changeset will have one parent,
607 607 maintaining a linear history. With --merge, the pending
608 608 changeset will instead have two parents: the old parent of the
609 609 working directory and a new child of REV that simply undoes REV.
610 610
611 611 Before version 1.7, the behavior without --merge was equivalent
612 612 to specifying --merge followed by :hg:`update --clean .` to
613 613 cancel the merge and leave the child of REV as a head to be
614 614 merged separately.
615 615
616 616 See :hg:`help dates` for a list of formats valid for -d/--date.
617 617
618 618 See :hg:`help revert` for a way to restore files to the state
619 619 of another revision.
620 620
621 621 Returns 0 on success, 1 if nothing to backout or there are unresolved
622 622 files.
623 623 '''
624 624 wlock = lock = None
625 625 try:
626 626 wlock = repo.wlock()
627 627 lock = repo.lock()
628 628 return _dobackout(ui, repo, node, rev, **opts)
629 629 finally:
630 630 release(lock, wlock)
631 631
632 632 def _dobackout(ui, repo, node=None, rev=None, **opts):
633 633 opts = pycompat.byteskwargs(opts)
634 634 if opts.get('commit') and opts.get('no_commit'):
635 635 raise error.Abort(_("cannot use --commit with --no-commit"))
636 636 if opts.get('merge') and opts.get('no_commit'):
637 637 raise error.Abort(_("cannot use --merge with --no-commit"))
638 638
639 639 if rev and node:
640 640 raise error.Abort(_("please specify just one revision"))
641 641
642 642 if not rev:
643 643 rev = node
644 644
645 645 if not rev:
646 646 raise error.Abort(_("please specify a revision to backout"))
647 647
648 648 date = opts.get('date')
649 649 if date:
650 650 opts['date'] = util.parsedate(date)
651 651
652 652 cmdutil.checkunfinished(repo)
653 653 cmdutil.bailifchanged(repo)
654 654 node = scmutil.revsingle(repo, rev).node()
655 655
656 656 op1, op2 = repo.dirstate.parents()
657 657 if not repo.changelog.isancestor(node, op1):
658 658 raise error.Abort(_('cannot backout change that is not an ancestor'))
659 659
660 660 p1, p2 = repo.changelog.parents(node)
661 661 if p1 == nullid:
662 662 raise error.Abort(_('cannot backout a change with no parents'))
663 663 if p2 != nullid:
664 664 if not opts.get('parent'):
665 665 raise error.Abort(_('cannot backout a merge changeset'))
666 666 p = repo.lookup(opts['parent'])
667 667 if p not in (p1, p2):
668 668 raise error.Abort(_('%s is not a parent of %s') %
669 669 (short(p), short(node)))
670 670 parent = p
671 671 else:
672 672 if opts.get('parent'):
673 673 raise error.Abort(_('cannot use --parent on non-merge changeset'))
674 674 parent = p1
675 675
676 676 # the backout should appear on the same branch
677 677 branch = repo.dirstate.branch()
678 678 bheads = repo.branchheads(branch)
679 679 rctx = scmutil.revsingle(repo, hex(parent))
680 680 if not opts.get('merge') and op1 != node:
681 681 dsguard = dirstateguard.dirstateguard(repo, 'backout')
682 682 try:
683 683 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
684 684 'backout')
685 685 stats = mergemod.update(repo, parent, True, True, node, False)
686 686 repo.setparents(op1, op2)
687 687 dsguard.close()
688 688 hg._showstats(repo, stats)
689 689 if stats[3]:
690 690 repo.ui.status(_("use 'hg resolve' to retry unresolved "
691 691 "file merges\n"))
692 692 return 1
693 693 finally:
694 694 ui.setconfig('ui', 'forcemerge', '', '')
695 695 lockmod.release(dsguard)
696 696 else:
697 697 hg.clean(repo, node, show_stats=False)
698 698 repo.dirstate.setbranch(branch)
699 699 cmdutil.revert(ui, repo, rctx, repo.dirstate.parents())
700 700
701 701 if opts.get('no_commit'):
702 702 msg = _("changeset %s backed out, "
703 703 "don't forget to commit.\n")
704 704 ui.status(msg % short(node))
705 705 return 0
706 706
707 707 def commitfunc(ui, repo, message, match, opts):
708 708 editform = 'backout'
709 709 e = cmdutil.getcommiteditor(editform=editform,
710 710 **pycompat.strkwargs(opts))
711 711 if not message:
712 712 # we don't translate commit messages
713 713 message = "Backed out changeset %s" % short(node)
714 714 e = cmdutil.getcommiteditor(edit=True, editform=editform)
715 715 return repo.commit(message, opts.get('user'), opts.get('date'),
716 716 match, editor=e)
717 717 newnode = cmdutil.commit(ui, repo, commitfunc, [], opts)
718 718 if not newnode:
719 719 ui.status(_("nothing changed\n"))
720 720 return 1
721 721 cmdutil.commitstatus(repo, newnode, branch, bheads)
722 722
723 723 def nice(node):
724 724 return '%d:%s' % (repo.changelog.rev(node), short(node))
725 725 ui.status(_('changeset %s backs out changeset %s\n') %
726 726 (nice(repo.changelog.tip()), nice(node)))
727 727 if opts.get('merge') and op1 != node:
728 728 hg.clean(repo, op1, show_stats=False)
729 729 ui.status(_('merging with changeset %s\n')
730 730 % nice(repo.changelog.tip()))
731 731 try:
732 732 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
733 733 'backout')
734 734 return hg.merge(repo, hex(repo.changelog.tip()))
735 735 finally:
736 736 ui.setconfig('ui', 'forcemerge', '', '')
737 737 return 0
738 738
739 739 @command('bisect',
740 740 [('r', 'reset', False, _('reset bisect state')),
741 741 ('g', 'good', False, _('mark changeset good')),
742 742 ('b', 'bad', False, _('mark changeset bad')),
743 743 ('s', 'skip', False, _('skip testing changeset')),
744 744 ('e', 'extend', False, _('extend the bisect range')),
745 745 ('c', 'command', '', _('use command to check changeset state'), _('CMD')),
746 746 ('U', 'noupdate', False, _('do not update to target'))],
747 747 _("[-gbsr] [-U] [-c CMD] [REV]"))
748 748 def bisect(ui, repo, rev=None, extra=None, command=None,
749 749 reset=None, good=None, bad=None, skip=None, extend=None,
750 750 noupdate=None):
751 751 """subdivision search of changesets
752 752
753 753 This command helps to find changesets which introduce problems. To
754 754 use, mark the earliest changeset you know exhibits the problem as
755 755 bad, then mark the latest changeset which is free from the problem
756 756 as good. Bisect will update your working directory to a revision
757 757 for testing (unless the -U/--noupdate option is specified). Once
758 758 you have performed tests, mark the working directory as good or
759 759 bad, and bisect will either update to another candidate changeset
760 760 or announce that it has found the bad revision.
761 761
762 762 As a shortcut, you can also use the revision argument to mark a
763 763 revision as good or bad without checking it out first.
764 764
765 765 If you supply a command, it will be used for automatic bisection.
766 766 The environment variable HG_NODE will contain the ID of the
767 767 changeset being tested. The exit status of the command will be
768 768 used to mark revisions as good or bad: status 0 means good, 125
769 769 means to skip the revision, 127 (command not found) will abort the
770 770 bisection, and any other non-zero exit status means the revision
771 771 is bad.
772 772
773 773 .. container:: verbose
774 774
775 775 Some examples:
776 776
777 777 - start a bisection with known bad revision 34, and good revision 12::
778 778
779 779 hg bisect --bad 34
780 780 hg bisect --good 12
781 781
782 782 - advance the current bisection by marking current revision as good or
783 783 bad::
784 784
785 785 hg bisect --good
786 786 hg bisect --bad
787 787
788 788 - mark the current revision, or a known revision, to be skipped (e.g. if
789 789 that revision is not usable because of another issue)::
790 790
791 791 hg bisect --skip
792 792 hg bisect --skip 23
793 793
794 794 - skip all revisions that do not touch directories ``foo`` or ``bar``::
795 795
796 796 hg bisect --skip "!( file('path:foo') & file('path:bar') )"
797 797
798 798 - forget the current bisection::
799 799
800 800 hg bisect --reset
801 801
802 802 - use 'make && make tests' to automatically find the first broken
803 803 revision::
804 804
805 805 hg bisect --reset
806 806 hg bisect --bad 34
807 807 hg bisect --good 12
808 808 hg bisect --command "make && make tests"
809 809
810 810 - see all changesets whose states are already known in the current
811 811 bisection::
812 812
813 813 hg log -r "bisect(pruned)"
814 814
815 815 - see the changeset currently being bisected (especially useful
816 816 if running with -U/--noupdate)::
817 817
818 818 hg log -r "bisect(current)"
819 819
820 820 - see all changesets that took part in the current bisection::
821 821
822 822 hg log -r "bisect(range)"
823 823
824 824 - you can even get a nice graph::
825 825
826 826 hg log --graph -r "bisect(range)"
827 827
828 828 See :hg:`help revisions.bisect` for more about the `bisect()` predicate.
829 829
830 830 Returns 0 on success.
831 831 """
832 832 # backward compatibility
833 833 if rev in "good bad reset init".split():
834 834 ui.warn(_("(use of 'hg bisect <cmd>' is deprecated)\n"))
835 835 cmd, rev, extra = rev, extra, None
836 836 if cmd == "good":
837 837 good = True
838 838 elif cmd == "bad":
839 839 bad = True
840 840 else:
841 841 reset = True
842 842 elif extra or good + bad + skip + reset + extend + bool(command) > 1:
843 843 raise error.Abort(_('incompatible arguments'))
844 844
845 845 if reset:
846 846 hbisect.resetstate(repo)
847 847 return
848 848
849 849 state = hbisect.load_state(repo)
850 850
851 851 # update state
852 852 if good or bad or skip:
853 853 if rev:
854 854 nodes = [repo.lookup(i) for i in scmutil.revrange(repo, [rev])]
855 855 else:
856 856 nodes = [repo.lookup('.')]
857 857 if good:
858 858 state['good'] += nodes
859 859 elif bad:
860 860 state['bad'] += nodes
861 861 elif skip:
862 862 state['skip'] += nodes
863 863 hbisect.save_state(repo, state)
864 864 if not (state['good'] and state['bad']):
865 865 return
866 866
867 867 def mayupdate(repo, node, show_stats=True):
868 868 """commonly used update sequence"""
869 869 if noupdate:
870 870 return
871 871 cmdutil.checkunfinished(repo)
872 872 cmdutil.bailifchanged(repo)
873 873 return hg.clean(repo, node, show_stats=show_stats)
874 874
875 875 displayer = cmdutil.show_changeset(ui, repo, {})
876 876
877 877 if command:
878 878 changesets = 1
879 879 if noupdate:
880 880 try:
881 881 node = state['current'][0]
882 882 except LookupError:
883 883 raise error.Abort(_('current bisect revision is unknown - '
884 884 'start a new bisect to fix'))
885 885 else:
886 886 node, p2 = repo.dirstate.parents()
887 887 if p2 != nullid:
888 888 raise error.Abort(_('current bisect revision is a merge'))
889 889 if rev:
890 890 node = repo[scmutil.revsingle(repo, rev, node)].node()
891 891 try:
892 892 while changesets:
893 893 # update state
894 894 state['current'] = [node]
895 895 hbisect.save_state(repo, state)
896 896 status = ui.system(command, environ={'HG_NODE': hex(node)},
897 897 blockedtag='bisect_check')
898 898 if status == 125:
899 899 transition = "skip"
900 900 elif status == 0:
901 901 transition = "good"
902 902 # status < 0 means process was killed
903 903 elif status == 127:
904 904 raise error.Abort(_("failed to execute %s") % command)
905 905 elif status < 0:
906 906 raise error.Abort(_("%s killed") % command)
907 907 else:
908 908 transition = "bad"
909 909 state[transition].append(node)
910 910 ctx = repo[node]
911 911 ui.status(_('changeset %d:%s: %s\n') % (ctx, ctx, transition))
912 912 hbisect.checkstate(state)
913 913 # bisect
914 914 nodes, changesets, bgood = hbisect.bisect(repo.changelog, state)
915 915 # update to next check
916 916 node = nodes[0]
917 917 mayupdate(repo, node, show_stats=False)
918 918 finally:
919 919 state['current'] = [node]
920 920 hbisect.save_state(repo, state)
921 921 hbisect.printresult(ui, repo, state, displayer, nodes, bgood)
922 922 return
923 923
924 924 hbisect.checkstate(state)
925 925
926 926 # actually bisect
927 927 nodes, changesets, good = hbisect.bisect(repo.changelog, state)
928 928 if extend:
929 929 if not changesets:
930 930 extendnode = hbisect.extendrange(repo, state, nodes, good)
931 931 if extendnode is not None:
932 932 ui.write(_("Extending search to changeset %d:%s\n")
933 933 % (extendnode.rev(), extendnode))
934 934 state['current'] = [extendnode.node()]
935 935 hbisect.save_state(repo, state)
936 936 return mayupdate(repo, extendnode.node())
937 937 raise error.Abort(_("nothing to extend"))
938 938
939 939 if changesets == 0:
940 940 hbisect.printresult(ui, repo, state, displayer, nodes, good)
941 941 else:
942 942 assert len(nodes) == 1 # only a single node can be tested next
943 943 node = nodes[0]
944 944 # compute the approximate number of remaining tests
945 945 tests, size = 0, 2
946 946 while size <= changesets:
947 947 tests, size = tests + 1, size * 2
948 948 rev = repo.changelog.rev(node)
949 949 ui.write(_("Testing changeset %d:%s "
950 950 "(%d changesets remaining, ~%d tests)\n")
951 951 % (rev, short(node), changesets, tests))
952 952 state['current'] = [node]
953 953 hbisect.save_state(repo, state)
954 954 return mayupdate(repo, node)
955 955
956 956 @command('bookmarks|bookmark',
957 957 [('f', 'force', False, _('force')),
958 958 ('r', 'rev', '', _('revision for bookmark action'), _('REV')),
959 959 ('d', 'delete', False, _('delete a given bookmark')),
960 960 ('m', 'rename', '', _('rename a given bookmark'), _('OLD')),
961 961 ('i', 'inactive', False, _('mark a bookmark inactive')),
962 962 ] + formatteropts,
963 963 _('hg bookmarks [OPTIONS]... [NAME]...'))
964 964 def bookmark(ui, repo, *names, **opts):
965 965 '''create a new bookmark or list existing bookmarks
966 966
967 967 Bookmarks are labels on changesets to help track lines of development.
968 968 Bookmarks are unversioned and can be moved, renamed and deleted.
969 969 Deleting or moving a bookmark has no effect on the associated changesets.
970 970
971 971 Creating or updating to a bookmark causes it to be marked as 'active'.
972 972 The active bookmark is indicated with a '*'.
973 973 When a commit is made, the active bookmark will advance to the new commit.
974 974 A plain :hg:`update` will also advance an active bookmark, if possible.
975 975 Updating away from a bookmark will cause it to be deactivated.
976 976
977 977 Bookmarks can be pushed and pulled between repositories (see
978 978 :hg:`help push` and :hg:`help pull`). If a shared bookmark has
979 979 diverged, a new 'divergent bookmark' of the form 'name@path' will
980 980 be created. Using :hg:`merge` will resolve the divergence.
981 981
982 982 A bookmark named '@' has the special property that :hg:`clone` will
983 983 check it out by default if it exists.
984 984
985 985 .. container:: verbose
986 986
987 987 Examples:
988 988
989 989 - create an active bookmark for a new line of development::
990 990
991 991 hg book new-feature
992 992
993 993 - create an inactive bookmark as a place marker::
994 994
995 995 hg book -i reviewed
996 996
997 997 - create an inactive bookmark on another changeset::
998 998
999 999 hg book -r .^ tested
1000 1000
1001 1001 - rename bookmark turkey to dinner::
1002 1002
1003 1003 hg book -m turkey dinner
1004 1004
1005 1005 - move the '@' bookmark from another branch::
1006 1006
1007 1007 hg book -f @
1008 1008 '''
1009 1009 opts = pycompat.byteskwargs(opts)
1010 1010 force = opts.get('force')
1011 1011 rev = opts.get('rev')
1012 1012 delete = opts.get('delete')
1013 1013 rename = opts.get('rename')
1014 1014 inactive = opts.get('inactive')
1015 1015
1016 1016 def checkformat(mark):
1017 1017 mark = mark.strip()
1018 1018 if not mark:
1019 1019 raise error.Abort(_("bookmark names cannot consist entirely of "
1020 1020 "whitespace"))
1021 1021 scmutil.checknewlabel(repo, mark, 'bookmark')
1022 1022 return mark
1023 1023
1024 1024 def checkconflict(repo, mark, cur, force=False, target=None):
1025 1025 if mark in marks and not force:
1026 1026 if target:
1027 1027 if marks[mark] == target and target == cur:
1028 1028 # re-activating a bookmark
1029 1029 return
1030 1030 anc = repo.changelog.ancestors([repo[target].rev()])
1031 1031 bmctx = repo[marks[mark]]
1032 1032 divs = [repo[b].node() for b in marks
1033 1033 if b.split('@', 1)[0] == mark.split('@', 1)[0]]
1034 1034
1035 1035 # allow resolving a single divergent bookmark even if moving
1036 1036 # the bookmark across branches when a revision is specified
1037 1037 # that contains a divergent bookmark
1038 1038 if bmctx.rev() not in anc and target in divs:
1039 1039 bookmarks.deletedivergent(repo, [target], mark)
1040 1040 return
1041 1041
1042 1042 deletefrom = [b for b in divs
1043 1043 if repo[b].rev() in anc or b == target]
1044 1044 bookmarks.deletedivergent(repo, deletefrom, mark)
1045 1045 if bookmarks.validdest(repo, bmctx, repo[target]):
1046 1046 ui.status(_("moving bookmark '%s' forward from %s\n") %
1047 1047 (mark, short(bmctx.node())))
1048 1048 return
1049 1049 raise error.Abort(_("bookmark '%s' already exists "
1050 1050 "(use -f to force)") % mark)
1051 1051 if ((mark in repo.branchmap() or mark == repo.dirstate.branch())
1052 1052 and not force):
1053 1053 raise error.Abort(
1054 1054 _("a bookmark cannot have the name of an existing branch"))
1055 1055
1056 1056 if delete and rename:
1057 1057 raise error.Abort(_("--delete and --rename are incompatible"))
1058 1058 if delete and rev:
1059 1059 raise error.Abort(_("--rev is incompatible with --delete"))
1060 1060 if rename and rev:
1061 1061 raise error.Abort(_("--rev is incompatible with --rename"))
1062 1062 if not names and (delete or rev):
1063 1063 raise error.Abort(_("bookmark name required"))
1064 1064
1065 1065 if delete or rename or names or inactive:
1066 1066 wlock = lock = tr = None
1067 1067 try:
1068 1068 wlock = repo.wlock()
1069 1069 lock = repo.lock()
1070 1070 cur = repo.changectx('.').node()
1071 1071 marks = repo._bookmarks
1072 1072 if delete:
1073 1073 tr = repo.transaction('bookmark')
1074 1074 for mark in names:
1075 1075 if mark not in marks:
1076 1076 raise error.Abort(_("bookmark '%s' does not exist") %
1077 1077 mark)
1078 1078 if mark == repo._activebookmark:
1079 1079 bookmarks.deactivate(repo)
1080 1080 del marks[mark]
1081 1081
1082 1082 elif rename:
1083 1083 tr = repo.transaction('bookmark')
1084 1084 if not names:
1085 1085 raise error.Abort(_("new bookmark name required"))
1086 1086 elif len(names) > 1:
1087 1087 raise error.Abort(_("only one new bookmark name allowed"))
1088 1088 mark = checkformat(names[0])
1089 1089 if rename not in marks:
1090 1090 raise error.Abort(_("bookmark '%s' does not exist")
1091 1091 % rename)
1092 1092 checkconflict(repo, mark, cur, force)
1093 1093 marks[mark] = marks[rename]
1094 1094 if repo._activebookmark == rename and not inactive:
1095 1095 bookmarks.activate(repo, mark)
1096 1096 del marks[rename]
1097 1097 elif names:
1098 1098 tr = repo.transaction('bookmark')
1099 1099 newact = None
1100 1100 for mark in names:
1101 1101 mark = checkformat(mark)
1102 1102 if newact is None:
1103 1103 newact = mark
1104 1104 if inactive and mark == repo._activebookmark:
1105 1105 bookmarks.deactivate(repo)
1106 1106 return
1107 1107 tgt = cur
1108 1108 if rev:
1109 1109 tgt = scmutil.revsingle(repo, rev).node()
1110 1110 checkconflict(repo, mark, cur, force, tgt)
1111 1111 marks[mark] = tgt
1112 1112 if not inactive and cur == marks[newact] and not rev:
1113 1113 bookmarks.activate(repo, newact)
1114 1114 elif cur != tgt and newact == repo._activebookmark:
1115 1115 bookmarks.deactivate(repo)
1116 1116 elif inactive:
1117 1117 if len(marks) == 0:
1118 1118 ui.status(_("no bookmarks set\n"))
1119 1119 elif not repo._activebookmark:
1120 1120 ui.status(_("no active bookmark\n"))
1121 1121 else:
1122 1122 bookmarks.deactivate(repo)
1123 1123 if tr is not None:
1124 1124 marks.recordchange(tr)
1125 1125 tr.close()
1126 1126 finally:
1127 1127 lockmod.release(tr, lock, wlock)
1128 1128 else: # show bookmarks
1129 1129 fm = ui.formatter('bookmarks', opts)
1130 1130 hexfn = fm.hexfunc
1131 1131 marks = repo._bookmarks
1132 1132 if len(marks) == 0 and fm.isplain():
1133 1133 ui.status(_("no bookmarks set\n"))
1134 1134 for bmark, n in sorted(marks.iteritems()):
1135 1135 active = repo._activebookmark
1136 1136 if bmark == active:
1137 1137 prefix, label = '*', activebookmarklabel
1138 1138 else:
1139 1139 prefix, label = ' ', ''
1140 1140
1141 1141 fm.startitem()
1142 1142 if not ui.quiet:
1143 1143 fm.plain(' %s ' % prefix, label=label)
1144 1144 fm.write('bookmark', '%s', bmark, label=label)
1145 1145 pad = " " * (25 - encoding.colwidth(bmark))
1146 1146 fm.condwrite(not ui.quiet, 'rev node', pad + ' %d:%s',
1147 1147 repo.changelog.rev(n), hexfn(n), label=label)
1148 1148 fm.data(active=(bmark == active))
1149 1149 fm.plain('\n')
1150 1150 fm.end()
1151 1151
1152 1152 @command('branch',
1153 1153 [('f', 'force', None,
1154 1154 _('set branch name even if it shadows an existing branch')),
1155 1155 ('C', 'clean', None, _('reset branch name to parent branch name'))],
1156 1156 _('[-fC] [NAME]'))
1157 1157 def branch(ui, repo, label=None, **opts):
1158 1158 """set or show the current branch name
1159 1159
1160 1160 .. note::
1161 1161
1162 1162 Branch names are permanent and global. Use :hg:`bookmark` to create a
1163 1163 light-weight bookmark instead. See :hg:`help glossary` for more
1164 1164 information about named branches and bookmarks.
1165 1165
1166 1166 With no argument, show the current branch name. With one argument,
1167 1167 set the working directory branch name (the branch will not exist
1168 1168 in the repository until the next commit). Standard practice
1169 1169 recommends that primary development take place on the 'default'
1170 1170 branch.
1171 1171
1172 1172 Unless -f/--force is specified, branch will not let you set a
1173 1173 branch name that already exists.
1174 1174
1175 1175 Use -C/--clean to reset the working directory branch to that of
1176 1176 the parent of the working directory, negating a previous branch
1177 1177 change.
1178 1178
1179 1179 Use the command :hg:`update` to switch to an existing branch. Use
1180 1180 :hg:`commit --close-branch` to mark this branch head as closed.
1181 1181 When all heads of a branch are closed, the branch will be
1182 1182 considered closed.
1183 1183
1184 1184 Returns 0 on success.
1185 1185 """
1186 1186 opts = pycompat.byteskwargs(opts)
1187 1187 if label:
1188 1188 label = label.strip()
1189 1189
1190 1190 if not opts.get('clean') and not label:
1191 1191 ui.write("%s\n" % repo.dirstate.branch())
1192 1192 return
1193 1193
1194 1194 with repo.wlock():
1195 1195 if opts.get('clean'):
1196 1196 label = repo[None].p1().branch()
1197 1197 repo.dirstate.setbranch(label)
1198 1198 ui.status(_('reset working directory to branch %s\n') % label)
1199 1199 elif label:
1200 1200 if not opts.get('force') and label in repo.branchmap():
1201 1201 if label not in [p.branch() for p in repo[None].parents()]:
1202 1202 raise error.Abort(_('a branch of the same name already'
1203 1203 ' exists'),
1204 1204 # i18n: "it" refers to an existing branch
1205 1205 hint=_("use 'hg update' to switch to it"))
1206 1206 scmutil.checknewlabel(repo, label, 'branch')
1207 1207 repo.dirstate.setbranch(label)
1208 1208 ui.status(_('marked working directory as branch %s\n') % label)
1209 1209
1210 1210 # find any open named branches aside from default
1211 1211 others = [n for n, h, t, c in repo.branchmap().iterbranches()
1212 1212 if n != "default" and not c]
1213 1213 if not others:
1214 1214 ui.status(_('(branches are permanent and global, '
1215 1215 'did you want a bookmark?)\n'))
1216 1216
1217 1217 @command('branches',
1218 1218 [('a', 'active', False,
1219 1219 _('show only branches that have unmerged heads (DEPRECATED)')),
1220 1220 ('c', 'closed', False, _('show normal and closed branches')),
1221 1221 ] + formatteropts,
1222 1222 _('[-c]'))
1223 1223 def branches(ui, repo, active=False, closed=False, **opts):
1224 1224 """list repository named branches
1225 1225
1226 1226 List the repository's named branches, indicating which ones are
1227 1227 inactive. If -c/--closed is specified, also list branches which have
1228 1228 been marked closed (see :hg:`commit --close-branch`).
1229 1229
1230 1230 Use the command :hg:`update` to switch to an existing branch.
1231 1231
1232 1232 Returns 0.
1233 1233 """
1234 1234
1235 1235 opts = pycompat.byteskwargs(opts)
1236 1236 ui.pager('branches')
1237 1237 fm = ui.formatter('branches', opts)
1238 1238 hexfunc = fm.hexfunc
1239 1239
1240 1240 allheads = set(repo.heads())
1241 1241 branches = []
1242 1242 for tag, heads, tip, isclosed in repo.branchmap().iterbranches():
1243 1243 isactive = not isclosed and bool(set(heads) & allheads)
1244 1244 branches.append((tag, repo[tip], isactive, not isclosed))
1245 1245 branches.sort(key=lambda i: (i[2], i[1].rev(), i[0], i[3]),
1246 1246 reverse=True)
1247 1247
1248 1248 for tag, ctx, isactive, isopen in branches:
1249 1249 if active and not isactive:
1250 1250 continue
1251 1251 if isactive:
1252 1252 label = 'branches.active'
1253 1253 notice = ''
1254 1254 elif not isopen:
1255 1255 if not closed:
1256 1256 continue
1257 1257 label = 'branches.closed'
1258 1258 notice = _(' (closed)')
1259 1259 else:
1260 1260 label = 'branches.inactive'
1261 1261 notice = _(' (inactive)')
1262 1262 current = (tag == repo.dirstate.branch())
1263 1263 if current:
1264 1264 label = 'branches.current'
1265 1265
1266 1266 fm.startitem()
1267 1267 fm.write('branch', '%s', tag, label=label)
1268 1268 rev = ctx.rev()
1269 1269 padsize = max(31 - len(str(rev)) - encoding.colwidth(tag), 0)
1270 1270 fmt = ' ' * padsize + ' %d:%s'
1271 1271 fm.condwrite(not ui.quiet, 'rev node', fmt, rev, hexfunc(ctx.node()),
1272 1272 label='log.changeset changeset.%s' % ctx.phasestr())
1273 1273 fm.context(ctx=ctx)
1274 1274 fm.data(active=isactive, closed=not isopen, current=current)
1275 1275 if not ui.quiet:
1276 1276 fm.plain(notice)
1277 1277 fm.plain('\n')
1278 1278 fm.end()
1279 1279
1280 1280 @command('bundle',
1281 1281 [('f', 'force', None, _('run even when the destination is unrelated')),
1282 1282 ('r', 'rev', [], _('a changeset intended to be added to the destination'),
1283 1283 _('REV')),
1284 1284 ('b', 'branch', [], _('a specific branch you would like to bundle'),
1285 1285 _('BRANCH')),
1286 1286 ('', 'base', [],
1287 1287 _('a base changeset assumed to be available at the destination'),
1288 1288 _('REV')),
1289 1289 ('a', 'all', None, _('bundle all changesets in the repository')),
1290 1290 ('t', 'type', 'bzip2', _('bundle compression type to use'), _('TYPE')),
1291 1291 ] + remoteopts,
1292 1292 _('[-f] [-t BUNDLESPEC] [-a] [-r REV]... [--base REV]... FILE [DEST]'))
1293 1293 def bundle(ui, repo, fname, dest=None, **opts):
1294 1294 """create a bundle file
1295 1295
1296 1296 Generate a bundle file containing data to be added to a repository.
1297 1297
1298 1298 To create a bundle containing all changesets, use -a/--all
1299 1299 (or --base null). Otherwise, hg assumes the destination already has
1300 1300 the changesets you list with --base. If --base is not used, hg instead
1301 1301 compares against the repository at DEST (or at default-push/default
1302 1302 when no destination is given) to decide which changesets to bundle.
1303 1303
1304 1304 You can change the bundle format with the -t/--type option. See
1305 1305 :hg:`help bundlespec` for documentation on this format. By default,
1306 1306 the most appropriate format is used and compression defaults to
1307 1307 bzip2.
1308 1308
1309 1309 The bundle file can then be transferred using conventional means
1310 1310 and applied to another repository with the unbundle or pull
1311 1311 command. This is useful when direct push and pull are not
1312 1312 available or when exporting an entire repository is undesirable.
1313 1313
1314 1314 Applying bundles preserves all changeset contents including
1315 1315 permissions, copy/rename information, and revision history.
1316 1316
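    .. container:: verbose

      Examples (illustrative sketches; the .hg file names are placeholders):

      - bundle every changeset in the repository::

          hg bundle --all everything.hg

      - bundle the changesets missing from the default push target::

          hg bundle changes.hg

      - bundle the changesets that are ancestors of tip but not of the
        stable branch::

          hg bundle -r tip --base stable changes.hg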
1317 1317 Returns 0 on success, 1 if no changes found.
1318 1318 """
1319 1319 opts = pycompat.byteskwargs(opts)
1320 1320 revs = None
1321 1321 if 'rev' in opts:
1322 1322 revstrings = opts['rev']
1323 1323 revs = scmutil.revrange(repo, revstrings)
1324 1324 if revstrings and not revs:
1325 1325 raise error.Abort(_('no commits to bundle'))
1326 1326
1327 1327 bundletype = opts.get('type', 'bzip2').lower()
1328 1328 try:
1329 1329 bcompression, cgversion, params = exchange.parsebundlespec(
1330 1330 repo, bundletype, strict=False)
1331 1331 except error.UnsupportedBundleSpecification as e:
1332 1332 raise error.Abort(str(e),
1333 1333 hint=_("see 'hg help bundlespec' for supported "
1334 1334 "values for --type"))
1335 1335
1336 1336 # Packed bundles are a pseudo bundle format for now.
1337 1337 if cgversion == 's1':
1338 1338 raise error.Abort(_('packed bundles cannot be produced by "hg bundle"'),
1339 1339 hint=_("use 'hg debugcreatestreamclonebundle'"))
1340 1340
1341 1341 if opts.get('all'):
1342 1342 if dest:
1343 1343 raise error.Abort(_("--all is incompatible with specifying "
1344 1344 "a destination"))
1345 1345 if opts.get('base'):
1346 1346 ui.warn(_("ignoring --base because --all was specified\n"))
1347 1347 base = ['null']
1348 1348 else:
1349 1349 base = scmutil.revrange(repo, opts.get('base'))
1350 1350 if cgversion not in changegroup.supportedoutgoingversions(repo):
1351 1351 raise error.Abort(_("repository does not support bundle version %s") %
1352 1352 cgversion)
1353 1353
1354 1354 if base:
1355 1355 if dest:
1356 1356 raise error.Abort(_("--base is incompatible with specifying "
1357 1357 "a destination"))
1358 1358 common = [repo.lookup(rev) for rev in base]
1359 1359 heads = revs and map(repo.lookup, revs) or None
1360 1360 outgoing = discovery.outgoing(repo, common, heads)
1361 1361 else:
1362 1362 dest = ui.expandpath(dest or 'default-push', dest or 'default')
1363 1363 dest, branches = hg.parseurl(dest, opts.get('branch'))
1364 1364 other = hg.peer(repo, opts, dest)
1365 1365 revs, checkout = hg.addbranchrevs(repo, repo, branches, revs)
1366 1366 heads = revs and map(repo.lookup, revs) or revs
1367 1367 outgoing = discovery.findcommonoutgoing(repo, other,
1368 1368 onlyheads=heads,
1369 1369 force=opts.get('force'),
1370 1370 portable=True)
1371 1371
1372 1372 if not outgoing.missing:
1373 1373 scmutil.nochangesfound(ui, repo, not base and outgoing.excluded)
1374 1374 return 1
1375 1375
1376 1376 if cgversion == '01': #bundle1
1377 1377 if bcompression is None:
1378 1378 bcompression = 'UN'
1379 1379 bversion = 'HG10' + bcompression
1380 1380 bcompression = None
1381 1381 elif cgversion in ('02', '03'):
1382 1382 bversion = 'HG20'
1383 1383 else:
1384 1384 raise error.ProgrammingError(
1385 1385 'bundle: unexpected changegroup version %s' % cgversion)
1386 1386
1387 1387 # TODO compression options should be derived from bundlespec parsing.
1388 1388 # This is a temporary hack to allow adjusting bundle compression
1389 1389 # level without a) formalizing the bundlespec changes to declare it
1390 1390 # b) introducing a command flag.
1391 1391 compopts = {}
1392 1392 complevel = ui.configint('experimental', 'bundlecomplevel')
1393 1393 if complevel is not None:
1394 1394 compopts['level'] = complevel
1395 1395
1396 cg = changegroup.getchangegroup(repo, 'bundle', outgoing, version=cgversion)
1397
1398 bundle2.writebundle(ui, cg, fname, bversion, compression=bcompression,
1396
1397 contentopts = {'cg.version': cgversion}
1398 bundle2.writenewbundle(ui, repo, 'bundle', fname, bversion, outgoing,
1399 contentopts, compression=bcompression,
1399 1400 compopts=compopts)
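    # bundle2.writenewbundle() replaces the getchangegroup() +
    # bundle2.writebundle() pair removed above: as used here it is expected
    # to build the bundle contents itself from `outgoing` and the content
    # options (currently only 'cg.version') and then write the result to
    # `fname`.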
1400 1401
1401 1402 @command('cat',
1402 1403 [('o', 'output', '',
1403 1404 _('print output to file with formatted name'), _('FORMAT')),
1404 1405 ('r', 'rev', '', _('print the given revision'), _('REV')),
1405 1406 ('', 'decode', None, _('apply any matching decode filter')),
1406 1407 ] + walkopts,
1407 1408 _('[OPTION]... FILE...'),
1408 1409 inferrepo=True)
1409 1410 def cat(ui, repo, file1, *pats, **opts):
1410 1411 """output the current or given revision of files
1411 1412
1412 1413 Print the specified files as they were at the given revision. If
1413 1414 no revision is given, the parent of the working directory is used.
1414 1415
1415 1416 Output may be to a file, in which case the name of the file is
1416 1417 given using a format string. The formatting rules are as follows:
1417 1418
1418 1419 :``%%``: literal "%" character
1419 1420 :``%s``: basename of file being printed
1420 1421 :``%d``: dirname of file being printed, or '.' if in repository root
1421 1422 :``%p``: root-relative path name of file being printed
1422 1423 :``%H``: changeset hash (40 hexadecimal digits)
1423 1424 :``%R``: changeset revision number
1424 1425 :``%h``: short-form changeset hash (12 hexadecimal digits)
1425 1426 :``%r``: zero-padded changeset revision number
1426 1427 :``%b``: basename of the exporting repository
1427 1428
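    .. container:: verbose

      Examples (illustrative; file names are placeholders):

      - print a file as of the working directory's parent::

          hg cat foo.c

      - save each listed file under a name built with the format rules
        above::

          hg cat -r . -o "%s-%r.txt" foo.txt bar.txt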
1428 1429 Returns 0 on success.
1429 1430 """
1430 1431 ctx = scmutil.revsingle(repo, opts.get('rev'))
1431 1432 m = scmutil.match(ctx, (file1,) + pats, opts)
1432 1433
1433 1434 ui.pager('cat')
1434 1435 return cmdutil.cat(ui, repo, ctx, m, '', **opts)
1435 1436
1436 1437 @command('^clone',
1437 1438 [('U', 'noupdate', None, _('the clone will include an empty working '
1438 1439 'directory (only a repository)')),
1439 1440 ('u', 'updaterev', '', _('revision, tag, or branch to check out'),
1440 1441 _('REV')),
1441 1442 ('r', 'rev', [], _('include the specified changeset'), _('REV')),
1442 1443 ('b', 'branch', [], _('clone only the specified branch'), _('BRANCH')),
1443 1444 ('', 'pull', None, _('use pull protocol to copy metadata')),
1444 1445 ('', 'uncompressed', None, _('use uncompressed transfer (fast over LAN)')),
1445 1446 ] + remoteopts,
1446 1447 _('[OPTION]... SOURCE [DEST]'),
1447 1448 norepo=True)
1448 1449 def clone(ui, source, dest=None, **opts):
1449 1450 """make a copy of an existing repository
1450 1451
1451 1452 Create a copy of an existing repository in a new directory.
1452 1453
1453 1454 If no destination directory name is specified, it defaults to the
1454 1455 basename of the source.
1455 1456
1456 1457 The location of the source is added to the new repository's
1457 1458 ``.hg/hgrc`` file, as the default to be used for future pulls.
1458 1459
1459 1460 Only local paths and ``ssh://`` URLs are supported as
1460 1461 destinations. For ``ssh://`` destinations, no working directory or
1461 1462 ``.hg/hgrc`` will be created on the remote side.
1462 1463
1463 1464 If the source repository has a bookmark called '@' set, that
1464 1465 revision will be checked out in the new repository by default.
1465 1466
1466 1467 To check out a particular version, use -u/--update, or
1467 1468 -U/--noupdate to create a clone with no working directory.
1468 1469
1469 1470 To pull only a subset of changesets, specify one or more revisions
1470 1471 identifiers with -r/--rev or branches with -b/--branch. The
1471 1472 resulting clone will contain only the specified changesets and
1472 1473 their ancestors. These options (or 'clone src#rev dest') imply
1473 1474 --pull, even for local source repositories.
1474 1475
1475 1476 .. note::
1476 1477
1477 1478 Specifying a tag will include the tagged changeset but not the
1478 1479 changeset containing the tag.
1479 1480
1480 1481 .. container:: verbose
1481 1482
1482 1483 For efficiency, hardlinks are used for cloning whenever the
1483 1484 source and destination are on the same filesystem (note this
1484 1485 applies only to the repository data, not to the working
1485 1486 directory). Some filesystems, such as AFS, implement hardlinking
1486 1487 incorrectly, but do not report errors. In these cases, use the
1487 1488 --pull option to avoid hardlinking.
1488 1489
1489 1490 In some cases, you can clone repositories and the working
1490 1491 directory using full hardlinks with ::
1491 1492
1492 1493 $ cp -al REPO REPOCLONE
1493 1494
1494 1495 This is the fastest way to clone, but it is not always safe. The
1495 1496 operation is not atomic (making sure REPO is not modified during
1496 1497 the operation is up to you) and you have to make sure your
1497 1498 editor breaks hardlinks (Emacs and most Linux Kernel tools do
1498 1499 so). Also, this is not compatible with certain extensions that
1499 1500 place their metadata under the .hg directory, such as mq.
1500 1501
1501 1502 Mercurial will update the working directory to the first applicable
1502 1503 revision from this list:
1503 1504
1504 1505 a) null if -U or the source repository has no changesets
1505 1506 b) if -u . and the source repository is local, the first parent of
1506 1507 the source repository's working directory
1507 1508 c) the changeset specified with -u (if a branch name, this means the
1508 1509 latest head of that branch)
1509 1510 d) the changeset specified with -r
1510 1511 e) the tipmost head specified with -b
1511 1512 f) the tipmost head specified with the url#branch source syntax
1512 1513 g) the revision marked with the '@' bookmark, if present
1513 1514 h) the tipmost head of the default branch
1514 1515 i) tip
1515 1516
1516 1517 When cloning from servers that support it, Mercurial may fetch
1517 1518 pre-generated data from a server-advertised URL. When this is done,
1518 1519 hooks operating on incoming changesets and changegroups may fire twice,
1519 1520 once for the bundle fetched from the URL and another for any additional
1520 1521 data not fetched from this URL. In addition, if an error occurs, the
1521 1522 repository may be rolled back to a partial clone. This behavior may
1522 1523 change in future releases. See :hg:`help -e clonebundles` for more.
1523 1524
1524 1525 Examples:
1525 1526
1526 1527 - clone a remote repository to a new directory named hg/::
1527 1528
1528 1529 hg clone https://www.mercurial-scm.org/repo/hg/
1529 1530
1530 1531 - create a lightweight local clone::
1531 1532
1532 1533 hg clone project/ project-feature/
1533 1534
1534 1535 - clone from an absolute path on an ssh server (note double-slash)::
1535 1536
1536 1537 hg clone ssh://user@server//home/projects/alpha/
1537 1538
1538 1539 - do a high-speed clone over a LAN while checking out a
1539 1540 specified version::
1540 1541
1541 1542 hg clone --uncompressed http://server/repo -u 1.5
1542 1543
1543 1544 - create a repository without changesets after a particular revision::
1544 1545
1545 1546 hg clone -r 04e544 experimental/ good/
1546 1547
1547 1548 - clone (and track) a particular named branch::
1548 1549
1549 1550 hg clone https://www.mercurial-scm.org/repo/hg/#stable
1550 1551
1551 1552 See :hg:`help urls` for details on specifying URLs.
1552 1553
1553 1554 Returns 0 on success.
1554 1555 """
1555 1556 opts = pycompat.byteskwargs(opts)
1556 1557 if opts.get('noupdate') and opts.get('updaterev'):
1557 1558 raise error.Abort(_("cannot specify both --noupdate and --updaterev"))
1558 1559
1559 1560 r = hg.clone(ui, opts, source, dest,
1560 1561 pull=opts.get('pull'),
1561 1562 stream=opts.get('uncompressed'),
1562 1563 rev=opts.get('rev'),
1563 1564 update=opts.get('updaterev') or not opts.get('noupdate'),
1564 1565 branch=opts.get('branch'),
1565 1566 shareopts=opts.get('shareopts'))
1566 1567
1567 1568 return r is None
1568 1569
1569 1570 @command('^commit|ci',
1570 1571 [('A', 'addremove', None,
1571 1572 _('mark new/missing files as added/removed before committing')),
1572 1573 ('', 'close-branch', None,
1573 1574 _('mark a branch head as closed')),
1574 1575 ('', 'amend', None, _('amend the parent of the working directory')),
1575 1576 ('s', 'secret', None, _('use the secret phase for committing')),
1576 1577 ('e', 'edit', None, _('invoke editor on commit messages')),
1577 1578 ('i', 'interactive', None, _('use interactive mode')),
1578 1579 ] + walkopts + commitopts + commitopts2 + subrepoopts,
1579 1580 _('[OPTION]... [FILE]...'),
1580 1581 inferrepo=True)
1581 1582 def commit(ui, repo, *pats, **opts):
1582 1583 """commit the specified files or all outstanding changes
1583 1584
1584 1585 Commit changes to the given files into the repository. Unlike a
1585 1586 centralized SCM, this is a local operation. See
1586 1587 :hg:`push` for a way to actively distribute your changes.
1587 1588
1588 1589 If a list of files is omitted, all changes reported by :hg:`status`
1589 1590 will be committed.
1590 1591
1591 1592 If you are committing the result of a merge, do not provide any
1592 1593 filenames or -I/-X filters.
1593 1594
1594 1595 If no commit message is specified, Mercurial starts your
1595 1596 configured editor where you can enter a message. In case your
1596 1597 commit fails, you will find a backup of your message in
1597 1598 ``.hg/last-message.txt``.
1598 1599
1599 1600 The --close-branch flag can be used to mark the current branch
1600 1601 head closed. When all heads of a branch are closed, the branch
1601 1602 will be considered closed and no longer listed.
1602 1603
1603 1604 The --amend flag can be used to amend the parent of the
1604 1605 working directory with a new commit that contains the changes
1605 1606 in the parent in addition to those currently reported by :hg:`status`,
1606 1607 if there are any. The old commit is stored in a backup bundle in
1607 1608 ``.hg/strip-backup`` (see :hg:`help bundle` and :hg:`help unbundle`
1608 1609 on how to restore it).
1609 1610
1610 1611 Message, user and date are taken from the amended commit unless
1611 1612 specified. When a message isn't specified on the command line,
1612 1613 the editor will open with the message of the amended commit.
1613 1614
1614 1615 It is not possible to amend public changesets (see :hg:`help phases`)
1615 1616 or changesets that have children.
1616 1617
1617 1618 See :hg:`help dates` for a list of formats valid for -d/--date.
1618 1619
1619 1620 Returns 0 on success, 1 if nothing changed.
1620 1621
1621 1622 .. container:: verbose
1622 1623
1623 1624 Examples:
1624 1625
1625 1626 - commit all files ending in .py::
1626 1627
1627 1628 hg commit --include "set:**.py"
1628 1629
1629 1630 - commit all non-binary files::
1630 1631
1631 1632 hg commit --exclude "set:binary()"
1632 1633
1633 1634 - amend the current commit and set the date to now::
1634 1635
1635 1636 hg commit --amend --date now
1636 1637 """
1637 1638 wlock = lock = None
1638 1639 try:
1639 1640 wlock = repo.wlock()
1640 1641 lock = repo.lock()
1641 1642 return _docommit(ui, repo, *pats, **opts)
1642 1643 finally:
1643 1644 release(lock, wlock)
1644 1645
1645 1646 def _docommit(ui, repo, *pats, **opts):
1646 1647 if opts.get(r'interactive'):
1647 1648 opts.pop(r'interactive')
1648 1649 ret = cmdutil.dorecord(ui, repo, commit, None, False,
1649 1650 cmdutil.recordfilter, *pats,
1650 1651 **opts)
1651 1652 # ret can be 0 (no changes to record) or the value returned by
1652 1653 # commit(), 1 if nothing changed or None on success.
1653 1654 return 1 if ret == 0 else ret
1654 1655
1655 1656 opts = pycompat.byteskwargs(opts)
1656 1657 if opts.get('subrepos'):
1657 1658 if opts.get('amend'):
1658 1659 raise error.Abort(_('cannot amend with --subrepos'))
1659 1660 # Let --subrepos on the command line override config setting.
1660 1661 ui.setconfig('ui', 'commitsubrepos', True, 'commit')
1661 1662
1662 1663 cmdutil.checkunfinished(repo, commit=True)
1663 1664
1664 1665 branch = repo[None].branch()
1665 1666 bheads = repo.branchheads(branch)
1666 1667
1667 1668 extra = {}
1668 1669 if opts.get('close_branch'):
1669 1670 extra['close'] = 1
1670 1671
1671 1672 if not bheads:
1672 1673 raise error.Abort(_('can only close branch heads'))
1673 1674 elif opts.get('amend'):
1674 1675 if repo[None].parents()[0].p1().branch() != branch and \
1675 1676 repo[None].parents()[0].p2().branch() != branch:
1676 1677 raise error.Abort(_('can only close branch heads'))
1677 1678
1678 1679 if opts.get('amend'):
1679 1680 if ui.configbool('ui', 'commitsubrepos'):
1680 1681 raise error.Abort(_('cannot amend with ui.commitsubrepos enabled'))
1681 1682
1682 1683 old = repo['.']
1683 1684 if not old.mutable():
1684 1685 raise error.Abort(_('cannot amend public changesets'))
1685 1686 if len(repo[None].parents()) > 1:
1686 1687 raise error.Abort(_('cannot amend while merging'))
1687 1688 allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
1688 1689 if not allowunstable and old.children():
1689 1690 raise error.Abort(_('cannot amend changeset with children'))
1690 1691
1691 1692 # Currently histedit gets confused if an amend happens while histedit
1692 1693 # is in progress. Since we have a checkunfinished command, we are
1693 1694 # temporarily honoring it.
1694 1695 #
1695 1696 # Note: eventually this guard will be removed. Please do not expect
1696 1697 # this behavior to remain.
1697 1698 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1698 1699 cmdutil.checkunfinished(repo)
1699 1700
1700 1701 # commitfunc is used only for temporary amend commit by cmdutil.amend
1701 1702 def commitfunc(ui, repo, message, match, opts):
1702 1703 return repo.commit(message,
1703 1704 opts.get('user') or old.user(),
1704 1705 opts.get('date') or old.date(),
1705 1706 match,
1706 1707 extra=extra)
1707 1708
1708 1709 node = cmdutil.amend(ui, repo, commitfunc, old, extra, pats, opts)
1709 1710 if node == old.node():
1710 1711 ui.status(_("nothing changed\n"))
1711 1712 return 1
1712 1713 else:
1713 1714 def commitfunc(ui, repo, message, match, opts):
1714 1715 overrides = {}
1715 1716 if opts.get('secret'):
1716 1717 overrides[('phases', 'new-commit')] = 'secret'
1717 1718
1718 1719 baseui = repo.baseui
1719 1720 with baseui.configoverride(overrides, 'commit'):
1720 1721 with ui.configoverride(overrides, 'commit'):
1721 1722 editform = cmdutil.mergeeditform(repo[None],
1722 1723 'commit.normal')
1723 1724 editor = cmdutil.getcommiteditor(
1724 1725 editform=editform, **pycompat.strkwargs(opts))
1725 1726 return repo.commit(message,
1726 1727 opts.get('user'),
1727 1728 opts.get('date'),
1728 1729 match,
1729 1730 editor=editor,
1730 1731 extra=extra)
1731 1732
1732 1733 node = cmdutil.commit(ui, repo, commitfunc, pats, opts)
1733 1734
1734 1735 if not node:
1735 1736 stat = cmdutil.postcommitstatus(repo, pats, opts)
1736 1737 if stat[3]:
1737 1738 ui.status(_("nothing changed (%d missing files, see "
1738 1739 "'hg status')\n") % len(stat[3]))
1739 1740 else:
1740 1741 ui.status(_("nothing changed\n"))
1741 1742 return 1
1742 1743
1743 1744 cmdutil.commitstatus(repo, node, branch, bheads, opts)
1744 1745
1745 1746 @command('config|showconfig|debugconfig',
1746 1747 [('u', 'untrusted', None, _('show untrusted configuration options')),
1747 1748 ('e', 'edit', None, _('edit user config')),
1748 1749 ('l', 'local', None, _('edit repository config')),
1749 1750 ('g', 'global', None, _('edit global config'))] + formatteropts,
1750 1751 _('[-u] [NAME]...'),
1751 1752 optionalrepo=True)
1752 1753 def config(ui, repo, *values, **opts):
1753 1754 """show combined config settings from all hgrc files
1754 1755
1755 1756 With no arguments, print names and values of all config items.
1756 1757
1757 1758 With one argument of the form section.name, print just the value
1758 1759 of that config item.
1759 1760
1760 1761 With multiple arguments, print names and values of all config
1761 1762 items with matching section names.
1762 1763
1763 1764 With --edit, start an editor on the user-level config file. With
1764 1765 --global, edit the system-wide config file. With --local, edit the
1765 1766 repository-level config file.
1766 1767
1767 1768 With --debug, the source (filename and line number) is printed
1768 1769 for each config item.
1769 1770
1770 1771 See :hg:`help config` for more information about config files.
1771 1772
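    .. container:: verbose

      Examples (illustrative):

      - print the value of a single item::

          hg config ui.username

      - print every item in the ``extensions`` section::

          hg config extensions

      - open the user-level configuration file in an editor::

          hg config --edit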
1772 1773 Returns 0 on success, 1 if NAME does not exist.
1773 1774
1774 1775 """
1775 1776
1776 1777 opts = pycompat.byteskwargs(opts)
1777 1778 if opts.get('edit') or opts.get('local') or opts.get('global'):
1778 1779 if opts.get('local') and opts.get('global'):
1779 1780 raise error.Abort(_("can't use --local and --global together"))
1780 1781
1781 1782 if opts.get('local'):
1782 1783 if not repo:
1783 1784 raise error.Abort(_("can't use --local outside a repository"))
1784 1785 paths = [repo.vfs.join('hgrc')]
1785 1786 elif opts.get('global'):
1786 1787 paths = rcutil.systemrcpath()
1787 1788 else:
1788 1789 paths = rcutil.userrcpath()
1789 1790
1790 1791 for f in paths:
1791 1792 if os.path.exists(f):
1792 1793 break
1793 1794 else:
1794 1795 if opts.get('global'):
1795 1796 samplehgrc = uimod.samplehgrcs['global']
1796 1797 elif opts.get('local'):
1797 1798 samplehgrc = uimod.samplehgrcs['local']
1798 1799 else:
1799 1800 samplehgrc = uimod.samplehgrcs['user']
1800 1801
1801 1802 f = paths[0]
1802 1803 fp = open(f, "w")
1803 1804 fp.write(samplehgrc)
1804 1805 fp.close()
1805 1806
1806 1807 editor = ui.geteditor()
1807 1808 ui.system("%s \"%s\"" % (editor, f),
1808 1809 onerr=error.Abort, errprefix=_("edit failed"),
1809 1810 blockedtag='config_edit')
1810 1811 return
1811 1812 ui.pager('config')
1812 1813 fm = ui.formatter('config', opts)
1813 1814 for t, f in rcutil.rccomponents():
1814 1815 if t == 'path':
1815 1816 ui.debug('read config from: %s\n' % f)
1816 1817 elif t == 'items':
1817 1818 for section, name, value, source in f:
1818 1819 ui.debug('set config by: %s\n' % source)
1819 1820 else:
1820 1821 raise error.ProgrammingError('unknown rctype: %s' % t)
1821 1822 untrusted = bool(opts.get('untrusted'))
1822 1823 if values:
1823 1824 sections = [v for v in values if '.' not in v]
1824 1825 items = [v for v in values if '.' in v]
1825 1826 if len(items) > 1 or items and sections:
1826 1827 raise error.Abort(_('only one config item permitted'))
1827 1828 matched = False
1828 1829 for section, name, value in ui.walkconfig(untrusted=untrusted):
1829 1830 source = ui.configsource(section, name, untrusted)
1830 1831 value = pycompat.bytestr(value)
1831 1832 if fm.isplain():
1832 1833 source = source or 'none'
1833 1834 value = value.replace('\n', '\\n')
1834 1835 entryname = section + '.' + name
1835 1836 if values:
1836 1837 for v in values:
1837 1838 if v == section:
1838 1839 fm.startitem()
1839 1840 fm.condwrite(ui.debugflag, 'source', '%s: ', source)
1840 1841 fm.write('name value', '%s=%s\n', entryname, value)
1841 1842 matched = True
1842 1843 elif v == entryname:
1843 1844 fm.startitem()
1844 1845 fm.condwrite(ui.debugflag, 'source', '%s: ', source)
1845 1846 fm.write('value', '%s\n', value)
1846 1847 fm.data(name=entryname)
1847 1848 matched = True
1848 1849 else:
1849 1850 fm.startitem()
1850 1851 fm.condwrite(ui.debugflag, 'source', '%s: ', source)
1851 1852 fm.write('name value', '%s=%s\n', entryname, value)
1852 1853 matched = True
1853 1854 fm.end()
1854 1855 if matched:
1855 1856 return 0
1856 1857 return 1
1857 1858
1858 1859 @command('copy|cp',
1859 1860 [('A', 'after', None, _('record a copy that has already occurred')),
1860 1861 ('f', 'force', None, _('forcibly copy over an existing managed file')),
1861 1862 ] + walkopts + dryrunopts,
1862 1863 _('[OPTION]... [SOURCE]... DEST'))
1863 1864 def copy(ui, repo, *pats, **opts):
1864 1865 """mark files as copied for the next commit
1865 1866
1866 1867 Mark dest as having copies of source files. If dest is a
1867 1868 directory, copies are put in that directory. If dest is a file,
1868 1869 the source must be a single file.
1869 1870
1870 1871 By default, this command copies the contents of files as they
1871 1872 exist in the working directory. If invoked with -A/--after, the
1872 1873 operation is recorded, but no copying is performed.
1873 1874
1874 1875 This command takes effect with the next commit. To undo a copy
1875 1876 before that, see :hg:`revert`.
1876 1877
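    .. container:: verbose

      Examples (illustrative; file names are placeholders):

      - copy a file and record the copy for the next commit::

          hg copy foo.c bar.c

      - record a copy that was already made outside Mercurial::

          hg copy --after foo.c bar.c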
1877 1878 Returns 0 on success, 1 if errors are encountered.
1878 1879 """
1879 1880 opts = pycompat.byteskwargs(opts)
1880 1881 with repo.wlock(False):
1881 1882 return cmdutil.copy(ui, repo, pats, opts)
1882 1883
1883 1884 @command('^diff',
1884 1885 [('r', 'rev', [], _('revision'), _('REV')),
1885 1886 ('c', 'change', '', _('change made by revision'), _('REV'))
1886 1887 ] + diffopts + diffopts2 + walkopts + subrepoopts,
1887 1888 _('[OPTION]... ([-c REV] | [-r REV1 [-r REV2]]) [FILE]...'),
1888 1889 inferrepo=True)
1889 1890 def diff(ui, repo, *pats, **opts):
1890 1891 """diff repository (or selected files)
1891 1892
1892 1893 Show differences between revisions for the specified files.
1893 1894
1894 1895 Differences between files are shown using the unified diff format.
1895 1896
1896 1897 .. note::
1897 1898
1898 1899 :hg:`diff` may generate unexpected results for merges, as it will
1899 1900 default to comparing against the working directory's first
1900 1901 parent changeset if no revisions are specified.
1901 1902
1902 1903 When two revision arguments are given, then changes are shown
1903 1904 between those revisions. If only one revision is specified then
1904 1905 that revision is compared to the working directory, and, when no
1905 1906 revisions are specified, the working directory files are compared
1906 1907 to its first parent.
1907 1908
1908 1909 Alternatively you can specify -c/--change with a revision to see
1909 1910 the changes in that changeset relative to its first parent.
1910 1911
1911 1912 Without the -a/--text option, diff will avoid generating diffs of
1912 1913 files it detects as binary. With -a, diff will generate a diff
1913 1914 anyway, probably with undesirable results.
1914 1915
1915 1916 Use the -g/--git option to generate diffs in the git extended diff
1916 1917 format. For more information, read :hg:`help diffs`.
1917 1918
1918 1919 .. container:: verbose
1919 1920
1920 1921 Examples:
1921 1922
1922 1923 - compare a file in the current working directory to its parent::
1923 1924
1924 1925 hg diff foo.c
1925 1926
1926 1927 - compare two historical versions of a directory, with rename info::
1927 1928
1928 1929 hg diff --git -r 1.0:1.2 lib/
1929 1930
1930 1931 - get change stats relative to the last change on some date::
1931 1932
1932 1933 hg diff --stat -r "date('may 2')"
1933 1934
1934 1935 - diff all newly-added files that contain a keyword::
1935 1936
1936 1937 hg diff "set:added() and grep(GNU)"
1937 1938
1938 1939 - compare a revision and its parents::
1939 1940
1940 1941 hg diff -c 9353 # compare against first parent
1941 1942 hg diff -r 9353^:9353 # same using revset syntax
1942 1943 hg diff -r 9353^2:9353 # compare against the second parent
1943 1944
1944 1945 Returns 0 on success.
1945 1946 """
1946 1947
1947 1948 opts = pycompat.byteskwargs(opts)
1948 1949 revs = opts.get('rev')
1949 1950 change = opts.get('change')
1950 1951 stat = opts.get('stat')
1951 1952 reverse = opts.get('reverse')
1952 1953
1953 1954 if revs and change:
1954 1955 msg = _('cannot specify --rev and --change at the same time')
1955 1956 raise error.Abort(msg)
1956 1957 elif change:
1957 1958 node2 = scmutil.revsingle(repo, change, None).node()
1958 1959 node1 = repo[node2].p1().node()
1959 1960 else:
1960 1961 node1, node2 = scmutil.revpair(repo, revs)
1961 1962
1962 1963 if reverse:
1963 1964 node1, node2 = node2, node1
1964 1965
1965 1966 diffopts = patch.diffallopts(ui, opts)
1966 1967 m = scmutil.match(repo[node2], pats, opts)
1967 1968 ui.pager('diff')
1968 1969 cmdutil.diffordiffstat(ui, repo, diffopts, node1, node2, m, stat=stat,
1969 1970 listsubrepos=opts.get('subrepos'),
1970 1971 root=opts.get('root'))
1971 1972
1972 1973 @command('^export',
1973 1974 [('o', 'output', '',
1974 1975 _('print output to file with formatted name'), _('FORMAT')),
1975 1976 ('', 'switch-parent', None, _('diff against the second parent')),
1976 1977 ('r', 'rev', [], _('revisions to export'), _('REV')),
1977 1978 ] + diffopts,
1978 1979 _('[OPTION]... [-o OUTFILESPEC] [-r] [REV]...'))
1979 1980 def export(ui, repo, *changesets, **opts):
1980 1981 """dump the header and diffs for one or more changesets
1981 1982
1982 1983 Print the changeset header and diffs for one or more revisions.
1983 1984 If no revision is given, the parent of the working directory is used.
1984 1985
1985 1986 The information shown in the changeset header is: author, date,
1986 1987 branch name (if non-default), changeset hash, parent(s) and commit
1987 1988 comment.
1988 1989
1989 1990 .. note::
1990 1991
1991 1992 :hg:`export` may generate unexpected diff output for merge
1992 1993 changesets, as it will compare the merge changeset against its
1993 1994 first parent only.
1994 1995
1995 1996 Output may be to a file, in which case the name of the file is
1996 1997 given using a format string. The formatting rules are as follows:
1997 1998
1998 1999 :``%%``: literal "%" character
1999 2000 :``%H``: changeset hash (40 hexadecimal digits)
2000 2001 :``%N``: number of patches being generated
2001 2002 :``%R``: changeset revision number
2002 2003 :``%b``: basename of the exporting repository
2003 2004 :``%h``: short-form changeset hash (12 hexadecimal digits)
2004 2005 :``%m``: first line of the commit message (only alphanumeric characters)
2005 2006 :``%n``: zero-padded sequence number, starting at 1
2006 2007 :``%r``: zero-padded changeset revision number
2007 2008
2008 2009 Without the -a/--text option, export will avoid generating diffs
2009 2010 of files it detects as binary. With -a, export will generate a
2010 2011 diff anyway, probably with undesirable results.
2011 2012
2012 2013 Use the -g/--git option to generate diffs in the git extended diff
2013 2014 format. See :hg:`help diffs` for more information.
2014 2015
2015 2016 With the --switch-parent option, the diff will be against the
2016 2017 second parent. This can be useful for reviewing a merge.
2017 2018
2018 2019 .. container:: verbose
2019 2020
2020 2021 Examples:
2021 2022
2022 2023 - use export and import to transplant a bugfix to the current
2023 2024 branch::
2024 2025
2025 2026 hg export -r 9353 | hg import -
2026 2027
2027 2028 - export all the changesets between two revisions to a file with
2028 2029 rename information::
2029 2030
2030 2031 hg export --git -r 123:150 > changes.txt
2031 2032
2032 2033 - split outgoing changes into a series of patches with
2033 2034 descriptive names::
2034 2035
2035 2036 hg export -r "outgoing()" -o "%n-%m.patch"
2036 2037
2037 2038 Returns 0 on success.
2038 2039 """
2039 2040 opts = pycompat.byteskwargs(opts)
2040 2041 changesets += tuple(opts.get('rev', []))
2041 2042 if not changesets:
2042 2043 changesets = ['.']
2043 2044 revs = scmutil.revrange(repo, changesets)
2044 2045 if not revs:
2045 2046 raise error.Abort(_("export requires at least one changeset"))
2046 2047 if len(revs) > 1:
2047 2048 ui.note(_('exporting patches:\n'))
2048 2049 else:
2049 2050 ui.note(_('exporting patch:\n'))
2050 2051 ui.pager('export')
2051 2052 cmdutil.export(repo, revs, template=opts.get('output'),
2052 2053 switch_parent=opts.get('switch_parent'),
2053 2054 opts=patch.diffallopts(ui, opts))
2054 2055
2055 2056 @command('files',
2056 2057 [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
2057 2058 ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
2058 2059 ] + walkopts + formatteropts + subrepoopts,
2059 2060 _('[OPTION]... [FILE]...'))
2060 2061 def files(ui, repo, *pats, **opts):
2061 2062 """list tracked files
2062 2063
2063 2064 Print files under Mercurial control in the working directory or the
2064 2065 specified revision that match the given patterns (excluding removed files).
2065 2066 Files can be specified as filenames or filesets.
2066 2067
2067 2068 If no files are given to match, this command prints the names
2068 2069 of all files under Mercurial control.
2069 2070
2070 2071 .. container:: verbose
2071 2072
2072 2073 Examples:
2073 2074
2074 2075 - list all files under the current directory::
2075 2076
2076 2077 hg files .
2077 2078
2078 2079 - show sizes and flags for the current revision::
2079 2080
2080 2081 hg files -vr .
2081 2082
2082 2083 - list all files named README::
2083 2084
2084 2085 hg files -I "**/README"
2085 2086
2086 2087 - list all binary files::
2087 2088
2088 2089 hg files "set:binary()"
2089 2090
2090 2091 - find files containing a regular expression::
2091 2092
2092 2093 hg files "set:grep('bob')"
2093 2094
2094 2095 - search tracked file contents with xargs and grep::
2095 2096
2096 2097 hg files -0 | xargs -0 grep foo
2097 2098
2098 2099 See :hg:`help patterns` and :hg:`help filesets` for more information
2099 2100 on specifying file patterns.
2100 2101
2101 2102 Returns 0 if a match is found, 1 otherwise.
2102 2103
2103 2104 """
2104 2105
2105 2106 opts = pycompat.byteskwargs(opts)
2106 2107 ctx = scmutil.revsingle(repo, opts.get('rev'), None)
2107 2108
2108 2109 end = '\n'
2109 2110 if opts.get('print0'):
2110 2111 end = '\0'
2111 2112 fmt = '%s' + end
2112 2113
2113 2114 m = scmutil.match(ctx, pats, opts)
2114 2115 ui.pager('files')
2115 2116 with ui.formatter('files', opts) as fm:
2116 2117 return cmdutil.files(ui, ctx, m, fm, fmt, opts.get('subrepos'))
2117 2118
2118 2119 @command('^forget', walkopts, _('[OPTION]... FILE...'), inferrepo=True)
2119 2120 def forget(ui, repo, *pats, **opts):
2120 2121 """forget the specified files on the next commit
2121 2122
2122 2123 Mark the specified files so they will no longer be tracked
2123 2124 after the next commit.
2124 2125
2125 2126 This only removes files from the current branch, not from the
2126 2127 entire project history, and it does not delete them from the
2127 2128 working directory.
2128 2129
2129 2130 To delete the file from the working directory, see :hg:`remove`.
2130 2131
2131 2132 To undo a forget before the next commit, see :hg:`add`.
2132 2133
2133 2134 .. container:: verbose
2134 2135
2135 2136 Examples:
2136 2137
2137 2138 - forget newly-added binary files::
2138 2139
2139 2140 hg forget "set:added() and binary()"
2140 2141
2141 2142 - forget files that would be excluded by .hgignore::
2142 2143
2143 2144 hg forget "set:hgignore()"
2144 2145
2145 2146 Returns 0 on success.
2146 2147 """
2147 2148
2148 2149 opts = pycompat.byteskwargs(opts)
2149 2150 if not pats:
2150 2151 raise error.Abort(_('no files specified'))
2151 2152
2152 2153 m = scmutil.match(repo[None], pats, opts)
2153 2154 rejected = cmdutil.forget(ui, repo, m, prefix="", explicitonly=False)[0]
2154 2155 return rejected and 1 or 0
2155 2156
2156 2157 @command(
2157 2158 'graft',
2158 2159 [('r', 'rev', [], _('revisions to graft'), _('REV')),
2159 2160 ('c', 'continue', False, _('resume interrupted graft')),
2160 2161 ('e', 'edit', False, _('invoke editor on commit messages')),
2161 2162 ('', 'log', None, _('append graft info to log message')),
2162 2163 ('f', 'force', False, _('force graft')),
2163 2164 ('D', 'currentdate', False,
2164 2165 _('record the current date as commit date')),
2165 2166 ('U', 'currentuser', False,
2166 2167 _('record the current user as committer'))]
2167 2168 + commitopts2 + mergetoolopts + dryrunopts,
2168 2169 _('[OPTION]... [-r REV]... REV...'))
2169 2170 def graft(ui, repo, *revs, **opts):
2170 2171 '''copy changes from other branches onto the current branch
2171 2172
2172 2173 This command uses Mercurial's merge logic to copy individual
2173 2174 changes from other branches without merging branches in the
2174 2175 history graph. This is sometimes known as 'backporting' or
2175 2176 'cherry-picking'. By default, graft will copy user, date, and
2176 2177 description from the source changesets.
2177 2178
2178 2179 Changesets that are ancestors of the current revision, that have
2179 2180 already been grafted, or that are merges will be skipped.
2180 2181
2181 2182 If --log is specified, log messages will have a comment appended
2182 2183 of the form::
2183 2184
2184 2185 (grafted from CHANGESETHASH)
2185 2186
2186 2187 If --force is specified, revisions will be grafted even if they
2187 2188 are already ancestors of or have been grafted to the destination.
2188 2189 This is useful when the revisions have since been backed out.
2189 2190
2190 2191 If a graft merge results in conflicts, the graft process is
2191 2192 interrupted so that the current merge can be manually resolved.
2192 2193 Once all conflicts are addressed, the graft process can be
2193 2194 continued with the -c/--continue option.
2194 2195
2195 2196 .. note::
2196 2197
2197 2198 The -c/--continue option does not reapply earlier options, except
2198 2199 for --force.
2199 2200
2200 2201 .. container:: verbose
2201 2202
2202 2203 Examples:
2203 2204
2204 2205 - copy a single change to the stable branch and edit its description::
2205 2206
2206 2207 hg update stable
2207 2208 hg graft --edit 9393
2208 2209
2209 2210 - graft a range of changesets with one exception, updating dates::
2210 2211
2211 2212 hg graft -D "2085::2093 and not 2091"
2212 2213
2213 2214 - continue a graft after resolving conflicts::
2214 2215
2215 2216 hg graft -c
2216 2217
2217 2218 - show the source of a grafted changeset::
2218 2219
2219 2220 hg log --debug -r .
2220 2221
2221 2222 - show revisions sorted by date::
2222 2223
2223 2224 hg log -r "sort(all(), date)"
2224 2225
2225 2226 See :hg:`help revisions` for more about specifying revisions.
2226 2227
2227 2228 Returns 0 on successful completion.
2228 2229 '''
2229 2230 with repo.wlock():
2230 2231 return _dograft(ui, repo, *revs, **opts)
2231 2232
2232 2233 def _dograft(ui, repo, *revs, **opts):
2233 2234 opts = pycompat.byteskwargs(opts)
2234 2235 if revs and opts.get('rev'):
2235 2236 ui.warn(_('warning: inconsistent use of --rev might give unexpected '
2236 2237 'revision ordering!\n'))
2237 2238
2238 2239 revs = list(revs)
2239 2240 revs.extend(opts.get('rev'))
2240 2241
2241 2242 if not opts.get('user') and opts.get('currentuser'):
2242 2243 opts['user'] = ui.username()
2243 2244 if not opts.get('date') and opts.get('currentdate'):
2244 2245 opts['date'] = "%d %d" % util.makedate()
2245 2246
2246 2247 editor = cmdutil.getcommiteditor(editform='graft',
2247 2248 **pycompat.strkwargs(opts))
2248 2249
2249 2250 cont = False
2250 2251 if opts.get('continue'):
2251 2252 cont = True
2252 2253 if revs:
2253 2254 raise error.Abort(_("can't specify --continue and revisions"))
2254 2255 # read in unfinished revisions
2255 2256 try:
2256 2257 nodes = repo.vfs.read('graftstate').splitlines()
2257 2258 revs = [repo[node].rev() for node in nodes]
2258 2259 except IOError as inst:
2259 2260 if inst.errno != errno.ENOENT:
2260 2261 raise
2261 2262 cmdutil.wrongtooltocontinue(repo, _('graft'))
2262 2263 else:
2263 2264 cmdutil.checkunfinished(repo)
2264 2265 cmdutil.bailifchanged(repo)
2265 2266 if not revs:
2266 2267 raise error.Abort(_('no revisions specified'))
2267 2268 revs = scmutil.revrange(repo, revs)
2268 2269
2269 2270 skipped = set()
2270 2271 # check for merges
2271 2272 for rev in repo.revs('%ld and merge()', revs):
2272 2273 ui.warn(_('skipping ungraftable merge revision %s\n') % rev)
2273 2274 skipped.add(rev)
2274 2275 revs = [r for r in revs if r not in skipped]
2275 2276 if not revs:
2276 2277 return -1
2277 2278
2278 2279 # Don't check in the --continue case, in effect retaining --force across
2279 2280 # --continues. That's because without --force, any revisions we decided to
2280 2281 # skip would have been filtered out here, so they wouldn't have made their
2281 2282 # way to the graftstate. With --force, any revisions we would have otherwise
2282 2283 # skipped would not have been filtered out, and if they hadn't been applied
2283 2284 # already, they'd have been in the graftstate.
2284 2285 if not (cont or opts.get('force')):
2285 2286 # check for ancestors of dest branch
2286 2287 crev = repo['.'].rev()
2287 2288 ancestors = repo.changelog.ancestors([crev], inclusive=True)
2288 2289 # XXX make this lazy in the future
2289 2290 # don't mutate while iterating, create a copy
2290 2291 for rev in list(revs):
2291 2292 if rev in ancestors:
2292 2293 ui.warn(_('skipping ancestor revision %d:%s\n') %
2293 2294 (rev, repo[rev]))
2294 2295 # XXX remove on list is slow
2295 2296 revs.remove(rev)
2296 2297 if not revs:
2297 2298 return -1
2298 2299
2299 2300 # analyze revs for earlier grafts
2300 2301 ids = {}
2301 2302 for ctx in repo.set("%ld", revs):
2302 2303 ids[ctx.hex()] = ctx.rev()
2303 2304 n = ctx.extra().get('source')
2304 2305 if n:
2305 2306 ids[n] = ctx.rev()
2306 2307
2307 2308 # check ancestors for earlier grafts
2308 2309 ui.debug('scanning for duplicate grafts\n')
2309 2310
2310 2311 for rev in repo.changelog.findmissingrevs(revs, [crev]):
2311 2312 ctx = repo[rev]
2312 2313 n = ctx.extra().get('source')
2313 2314 if n in ids:
2314 2315 try:
2315 2316 r = repo[n].rev()
2316 2317 except error.RepoLookupError:
2317 2318 r = None
2318 2319 if r in revs:
2319 2320 ui.warn(_('skipping revision %d:%s '
2320 2321 '(already grafted to %d:%s)\n')
2321 2322 % (r, repo[r], rev, ctx))
2322 2323 revs.remove(r)
2323 2324 elif ids[n] in revs:
2324 2325 if r is None:
2325 2326 ui.warn(_('skipping already grafted revision %d:%s '
2326 2327 '(%d:%s also has unknown origin %s)\n')
2327 2328 % (ids[n], repo[ids[n]], rev, ctx, n[:12]))
2328 2329 else:
2329 2330 ui.warn(_('skipping already grafted revision %d:%s '
2330 2331 '(%d:%s also has origin %d:%s)\n')
2331 2332 % (ids[n], repo[ids[n]], rev, ctx, r, n[:12]))
2332 2333 revs.remove(ids[n])
2333 2334 elif ctx.hex() in ids:
2334 2335 r = ids[ctx.hex()]
2335 2336 ui.warn(_('skipping already grafted revision %d:%s '
2336 2337 '(was grafted from %d:%s)\n') %
2337 2338 (r, repo[r], rev, ctx))
2338 2339 revs.remove(r)
2339 2340 if not revs:
2340 2341 return -1
2341 2342
2342 2343 for pos, ctx in enumerate(repo.set("%ld", revs)):
2343 2344 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
2344 2345 ctx.description().split('\n', 1)[0])
2345 2346 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
2346 2347 if names:
2347 2348 desc += ' (%s)' % ' '.join(names)
2348 2349 ui.status(_('grafting %s\n') % desc)
2349 2350 if opts.get('dry_run'):
2350 2351 continue
2351 2352
2352 2353 source = ctx.extra().get('source')
2353 2354 extra = {}
2354 2355 if source:
2355 2356 extra['source'] = source
2356 2357 extra['intermediate-source'] = ctx.hex()
2357 2358 else:
2358 2359 extra['source'] = ctx.hex()
2359 2360 user = ctx.user()
2360 2361 if opts.get('user'):
2361 2362 user = opts['user']
2362 2363 date = ctx.date()
2363 2364 if opts.get('date'):
2364 2365 date = opts['date']
2365 2366 message = ctx.description()
2366 2367 if opts.get('log'):
2367 2368 message += '\n(grafted from %s)' % ctx.hex()
2368 2369
2369 2370 # we don't merge the first commit when continuing
2370 2371 if not cont:
2371 2372 # perform the graft merge with p1(rev) as 'ancestor'
2372 2373 try:
2373 2374 # ui.forcemerge is an internal variable, do not document
2374 2375 repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
2375 2376 'graft')
2376 2377 stats = mergemod.graft(repo, ctx, ctx.p1(),
2377 2378 ['local', 'graft'])
2378 2379 finally:
2379 2380 repo.ui.setconfig('ui', 'forcemerge', '', 'graft')
2380 2381 # report any conflicts
2381 2382 if stats and stats[3] > 0:
2382 2383 # write out state for --continue
2383 2384 nodelines = [repo[rev].hex() + "\n" for rev in revs[pos:]]
2384 2385 repo.vfs.write('graftstate', ''.join(nodelines))
2385 2386 extra = ''
2386 2387 if opts.get('user'):
2387 2388 extra += ' --user %s' % util.shellquote(opts['user'])
2388 2389 if opts.get('date'):
2389 2390 extra += ' --date %s' % util.shellquote(opts['date'])
2390 2391 if opts.get('log'):
2391 2392 extra += ' --log'
2392 2393 hint = _("use 'hg resolve' and 'hg graft --continue%s'") % extra
2393 2394 raise error.Abort(
2394 2395 _("unresolved conflicts, can't continue"),
2395 2396 hint=hint)
2396 2397 else:
2397 2398 cont = False
2398 2399
2399 2400 # commit
2400 2401 node = repo.commit(text=message, user=user,
2401 2402 date=date, extra=extra, editor=editor)
2402 2403 if node is None:
2403 2404 ui.warn(
2404 2405 _('note: graft of %d:%s created no changes to commit\n') %
2405 2406 (ctx.rev(), ctx))
2406 2407
2407 2408 # remove state when we complete successfully
2408 2409 if not opts.get('dry_run'):
2409 2410 repo.vfs.unlinkpath('graftstate', ignoremissing=True)
2410 2411
2411 2412 return 0
2412 2413
2413 2414 @command('grep',
2414 2415 [('0', 'print0', None, _('end fields with NUL')),
2415 2416 ('', 'all', None, _('print all revisions that match')),
2416 2417 ('a', 'text', None, _('treat all files as text')),
2417 2418 ('f', 'follow', None,
2418 2419 _('follow changeset history,'
2419 2420 ' or file history across copies and renames')),
2420 2421 ('i', 'ignore-case', None, _('ignore case when matching')),
2421 2422 ('l', 'files-with-matches', None,
2422 2423 _('print only filenames and revisions that match')),
2423 2424 ('n', 'line-number', None, _('print matching line numbers')),
2424 2425 ('r', 'rev', [],
2425 2426 _('only search files changed within revision range'), _('REV')),
2426 2427 ('u', 'user', None, _('list the author (long with -v)')),
2427 2428 ('d', 'date', None, _('list the date (short with -q)')),
2428 2429 ] + formatteropts + walkopts,
2429 2430 _('[OPTION]... PATTERN [FILE]...'),
2430 2431 inferrepo=True)
2431 2432 def grep(ui, repo, pattern, *pats, **opts):
2432 2433 """search revision history for a pattern in specified files
2433 2434
2434 2435 Search revision history for a regular expression in the specified
2435 2436 files or the entire project.
2436 2437
2437 2438 By default, grep prints the most recent revision number for each
2438 2439 file in which it finds a match. To get it to print every revision
2439 2440 that contains a change in match status ("-" for a match that becomes
2440 2441 a non-match, or "+" for a non-match that becomes a match), use the
2441 2442 --all flag.
2442 2443
2443 2444 PATTERN can be any Python (roughly Perl-compatible) regular
2444 2445 expression.
2445 2446
2446 2447 If no FILEs are specified (and -f/--follow isn't set), all files in
2447 2448 the repository are searched, including those that don't exist in the
2448 2449 current branch or have been deleted in a prior changeset.
2449 2450
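    .. container:: verbose

      Examples (illustrative; the pattern and paths are placeholders):

      - print matching lines together with their line numbers::

          hg grep -n TODO

      - print every revision in which a line's match status changed::

          hg grep --all TODO src/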
2450 2451 Returns 0 if a match is found, 1 otherwise.
2451 2452 """
2452 2453 opts = pycompat.byteskwargs(opts)
2453 2454 reflags = re.M
2454 2455 if opts.get('ignore_case'):
2455 2456 reflags |= re.I
2456 2457 try:
2457 2458 regexp = util.re.compile(pattern, reflags)
2458 2459 except re.error as inst:
2459 2460 ui.warn(_("grep: invalid match pattern: %s\n") % inst)
2460 2461 return 1
2461 2462 sep, eol = ':', '\n'
2462 2463 if opts.get('print0'):
2463 2464 sep = eol = '\0'
2464 2465
2465 2466 getfile = util.lrucachefunc(repo.file)
2466 2467
2467 2468 def matchlines(body):
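        # For each regexp match in `body`, yield a tuple of
        # (line number, start column, end column, text of the matching line),
        # where the columns are offsets within that line.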
2468 2469 begin = 0
2469 2470 linenum = 0
2470 2471 while begin < len(body):
2471 2472 match = regexp.search(body, begin)
2472 2473 if not match:
2473 2474 break
2474 2475 mstart, mend = match.span()
2475 2476 linenum += body.count('\n', begin, mstart) + 1
2476 2477 lstart = body.rfind('\n', begin, mstart) + 1 or begin
2477 2478 begin = body.find('\n', mend) + 1 or len(body) + 1
2478 2479 lend = begin - 1
2479 2480 yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]
2480 2481
2481 2482 class linestate(object):
2482 2483 def __init__(self, line, linenum, colstart, colend):
2483 2484 self.line = line
2484 2485 self.linenum = linenum
2485 2486 self.colstart = colstart
2486 2487 self.colend = colend
2487 2488
2488 2489 def __hash__(self):
2489 2490 return hash((self.linenum, self.line))
2490 2491
2491 2492 def __eq__(self, other):
2492 2493 return self.line == other.line
2493 2494
2494 2495 def findpos(self):
2495 2496 """Iterate all (start, end) indices of matches"""
2496 2497 yield self.colstart, self.colend
2497 2498 p = self.colend
2498 2499 while p < len(self.line):
2499 2500 m = regexp.search(self.line, p)
2500 2501 if not m:
2501 2502 break
2502 2503 yield m.span()
2503 2504 p = m.end()
2504 2505
2505 2506 matches = {}
2506 2507 copies = {}
2507 2508 def grepbody(fn, rev, body):
2508 2509 matches[rev].setdefault(fn, [])
2509 2510 m = matches[rev][fn]
2510 2511 for lnum, cstart, cend, line in matchlines(body):
2511 2512 s = linestate(line, lnum, cstart, cend)
2512 2513 m.append(s)
2513 2514
2514 2515 def difflinestates(a, b):
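        # Compare the parent's match states `a` with the child's `b` and
        # yield ('-', state) for matches that disappear and ('+', state)
        # for matches that appear; 'replace' opcodes yield both.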
2515 2516 sm = difflib.SequenceMatcher(None, a, b)
2516 2517 for tag, alo, ahi, blo, bhi in sm.get_opcodes():
2517 2518 if tag == 'insert':
2518 2519 for i in xrange(blo, bhi):
2519 2520 yield ('+', b[i])
2520 2521 elif tag == 'delete':
2521 2522 for i in xrange(alo, ahi):
2522 2523 yield ('-', a[i])
2523 2524 elif tag == 'replace':
2524 2525 for i in xrange(alo, ahi):
2525 2526 yield ('-', a[i])
2526 2527 for i in xrange(blo, bhi):
2527 2528 yield ('+', b[i])
2528 2529
2529 2530 def display(fm, fn, ctx, pstates, states):
2530 2531 rev = ctx.rev()
2531 2532 if fm.isplain():
2532 2533 formatuser = ui.shortuser
2533 2534 else:
2534 2535 formatuser = str
2535 2536 if ui.quiet:
2536 2537 datefmt = '%Y-%m-%d'
2537 2538 else:
2538 2539 datefmt = '%a %b %d %H:%M:%S %Y %1%2'
2539 2540 found = False
2540 2541 @util.cachefunc
2541 2542 def binary():
2542 2543 flog = getfile(fn)
2543 2544 return util.binary(flog.read(ctx.filenode(fn)))
2544 2545
2545 2546 fieldnamemap = {'filename': 'file', 'linenumber': 'line_number'}
2546 2547 if opts.get('all'):
2547 2548 iter = difflinestates(pstates, states)
2548 2549 else:
2549 2550 iter = [('', l) for l in states]
2550 2551 for change, l in iter:
2551 2552 fm.startitem()
2552 2553 fm.data(node=fm.hexfunc(ctx.node()))
2553 2554 cols = [
2554 2555 ('filename', fn, True),
2555 2556 ('rev', rev, True),
2556 2557 ('linenumber', l.linenum, opts.get('line_number')),
2557 2558 ]
2558 2559 if opts.get('all'):
2559 2560 cols.append(('change', change, True))
2560 2561 cols.extend([
2561 2562 ('user', formatuser(ctx.user()), opts.get('user')),
2562 2563 ('date', fm.formatdate(ctx.date(), datefmt), opts.get('date')),
2563 2564 ])
2564 2565 lastcol = next(name for name, data, cond in reversed(cols) if cond)
2565 2566 for name, data, cond in cols:
2566 2567 field = fieldnamemap.get(name, name)
2567 2568 fm.condwrite(cond, field, '%s', data, label='grep.%s' % name)
2568 2569 if cond and name != lastcol:
2569 2570 fm.plain(sep, label='grep.sep')
2570 2571 if not opts.get('files_with_matches'):
2571 2572 fm.plain(sep, label='grep.sep')
2572 2573 if not opts.get('text') and binary():
2573 2574 fm.plain(_(" Binary file matches"))
2574 2575 else:
2575 2576 displaymatches(fm.nested('texts'), l)
2576 2577 fm.plain(eol)
2577 2578 found = True
2578 2579 if opts.get('files_with_matches'):
2579 2580 break
2580 2581 return found
2581 2582
2582 2583 def displaymatches(fm, l):
2583 2584 p = 0
2584 2585 for s, e in l.findpos():
2585 2586 if p < s:
2586 2587 fm.startitem()
2587 2588 fm.write('text', '%s', l.line[p:s])
2588 2589 fm.data(matched=False)
2589 2590 fm.startitem()
2590 2591 fm.write('text', '%s', l.line[s:e], label='grep.match')
2591 2592 fm.data(matched=True)
2592 2593 p = e
2593 2594 if p < len(l.line):
2594 2595 fm.startitem()
2595 2596 fm.write('text', '%s', l.line[p:])
2596 2597 fm.data(matched=False)
2597 2598 fm.end()
2598 2599
2599 2600 skip = {}
2600 2601 revfiles = {}
2601 2602 matchfn = scmutil.match(repo[None], pats, opts)
2602 2603 found = False
2603 2604 follow = opts.get('follow')
2604 2605
2605 2606 def prep(ctx, fns):
2606 2607 rev = ctx.rev()
2607 2608 pctx = ctx.p1()
2608 2609 parent = pctx.rev()
2609 2610 matches.setdefault(rev, {})
2610 2611 matches.setdefault(parent, {})
2611 2612 files = revfiles.setdefault(rev, [])
2612 2613 for fn in fns:
2613 2614 flog = getfile(fn)
2614 2615 try:
2615 2616 fnode = ctx.filenode(fn)
2616 2617 except error.LookupError:
2617 2618 continue
2618 2619
2619 2620 copied = flog.renamed(fnode)
2620 2621 copy = follow and copied and copied[0]
2621 2622 if copy:
2622 2623 copies.setdefault(rev, {})[fn] = copy
2623 2624 if fn in skip:
2624 2625 if copy:
2625 2626 skip[copy] = True
2626 2627 continue
2627 2628 files.append(fn)
2628 2629
2629 2630 if fn not in matches[rev]:
2630 2631 grepbody(fn, rev, flog.read(fnode))
2631 2632
2632 2633 pfn = copy or fn
2633 2634 if pfn not in matches[parent]:
2634 2635 try:
2635 2636 fnode = pctx.filenode(pfn)
2636 2637 grepbody(pfn, parent, flog.read(fnode))
2637 2638 except error.LookupError:
2638 2639 pass
2639 2640
2640 2641 ui.pager('grep')
2641 2642 fm = ui.formatter('grep', opts)
2642 2643 for ctx in cmdutil.walkchangerevs(repo, matchfn, opts, prep):
2643 2644 rev = ctx.rev()
2644 2645 parent = ctx.p1().rev()
2645 2646 for fn in sorted(revfiles.get(rev, [])):
2646 2647 states = matches[rev][fn]
2647 2648 copy = copies.get(rev, {}).get(fn)
2648 2649 if fn in skip:
2649 2650 if copy:
2650 2651 skip[copy] = True
2651 2652 continue
2652 2653 pstates = matches.get(parent, {}).get(copy or fn, [])
2653 2654 if pstates or states:
2654 2655 r = display(fm, fn, ctx, pstates, states)
2655 2656 found = found or r
2656 2657 if r and not opts.get('all'):
2657 2658 skip[fn] = True
2658 2659 if copy:
2659 2660 skip[copy] = True
2660 2661 del matches[rev]
2661 2662 del revfiles[rev]
2662 2663 fm.end()
2663 2664
2664 2665 return not found
2665 2666
2666 2667 @command('heads',
2667 2668 [('r', 'rev', '',
2668 2669 _('show only heads which are descendants of STARTREV'), _('STARTREV')),
2669 2670 ('t', 'topo', False, _('show topological heads only')),
2670 2671 ('a', 'active', False, _('show active branchheads only (DEPRECATED)')),
2671 2672 ('c', 'closed', False, _('show normal and closed branch heads')),
2672 2673 ] + templateopts,
2673 2674 _('[-ct] [-r STARTREV] [REV]...'))
2674 2675 def heads(ui, repo, *branchrevs, **opts):
2675 2676 """show branch heads
2676 2677
2677 2678 With no arguments, show all open branch heads in the repository.
2678 2679 Branch heads are changesets that have no descendants on the
2679 2680 same branch. They are where development generally takes place and
2680 2681 are the usual targets for update and merge operations.
2681 2682
2682 2683 If one or more REVs are given, only open branch heads on the
2683 2684 branches associated with the specified changesets are shown. This
2684 2685 means that you can use :hg:`heads .` to see the heads on the
2685 2686 currently checked-out branch.
2686 2687
2687 2688 If -c/--closed is specified, also show branch heads marked closed
2688 2689 (see :hg:`commit --close-branch`).
2689 2690
2690 2691 If STARTREV is specified, only those heads that are descendants of
2691 2692 STARTREV will be displayed.
2692 2693
2693 2694 If -t/--topo is specified, named branch mechanics will be ignored and only
2694 2695 topological heads (changesets with no children) will be shown.
2695 2696
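    .. container:: verbose

      Examples (illustrative):

      - show the open heads of the branch you are currently on::

          hg heads .

      - show all topological heads, ignoring named branches::

          hg heads --topo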
2696 2697 Returns 0 if matching heads are found, 1 if not.
2697 2698 """
2698 2699
2699 2700 opts = pycompat.byteskwargs(opts)
2700 2701 start = None
2701 2702 if 'rev' in opts:
2702 2703 start = scmutil.revsingle(repo, opts['rev'], None).node()
2703 2704
2704 2705 if opts.get('topo'):
2705 2706 heads = [repo[h] for h in repo.heads(start)]
2706 2707 else:
2707 2708 heads = []
2708 2709 for branch in repo.branchmap():
2709 2710 heads += repo.branchheads(branch, start, opts.get('closed'))
2710 2711 heads = [repo[h] for h in heads]
2711 2712
2712 2713 if branchrevs:
2713 2714 branches = set(repo[br].branch() for br in branchrevs)
2714 2715 heads = [h for h in heads if h.branch() in branches]
2715 2716
2716 2717 if opts.get('active') and branchrevs:
2717 2718 dagheads = repo.heads(start)
2718 2719 heads = [h for h in heads if h.node() in dagheads]
2719 2720
2720 2721 if branchrevs:
2721 2722 haveheads = set(h.branch() for h in heads)
2722 2723 if branches - haveheads:
2723 2724 headless = ', '.join(b for b in branches - haveheads)
2724 2725 msg = _('no open branch heads found on branches %s')
2725 2726 if opts.get('rev'):
2726 2727 msg += _(' (started at %s)') % opts['rev']
2727 2728 ui.warn((msg + '\n') % headless)
2728 2729
2729 2730 if not heads:
2730 2731 return 1
2731 2732
2732 2733 ui.pager('heads')
2733 2734 heads = sorted(heads, key=lambda x: -x.rev())
2734 2735 displayer = cmdutil.show_changeset(ui, repo, opts)
2735 2736 for ctx in heads:
2736 2737 displayer.show(ctx)
2737 2738 displayer.close()
2738 2739
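# Illustrative invocations of the command above (added for clarity; the
# branch name "stable" is hypothetical):
#
#   $ hg heads             # all open branch heads in the repository
#   $ hg heads .           # heads of the currently checked-out branch
#   $ hg heads -t          # topological heads only, ignoring named branches
#   $ hg heads -c stable   # include closed heads of the "stable" branch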
2739 2740 @command('help',
2740 2741 [('e', 'extension', None, _('show only help for extensions')),
2741 2742 ('c', 'command', None, _('show only help for commands')),
2742 2743 ('k', 'keyword', None, _('show topics matching keyword')),
2743 2744 ('s', 'system', [], _('show help for specific platform(s)')),
2744 2745 ],
2745 2746 _('[-ecks] [TOPIC]'),
2746 2747 norepo=True)
2747 2748 def help_(ui, name=None, **opts):
2748 2749 """show help for a given topic or a help overview
2749 2750
2750 2751 With no arguments, print a list of commands with short help messages.
2751 2752
2752 2753 Given a topic, extension, or command name, print help for that
2753 2754 topic.
2754 2755
2755 2756 Returns 0 if successful.
2756 2757 """
2757 2758
2758 2759 keep = opts.get(r'system') or []
2759 2760 if len(keep) == 0:
2760 2761 if pycompat.sysplatform.startswith('win'):
2761 2762 keep.append('windows')
2762 2763 elif pycompat.sysplatform == 'OpenVMS':
2763 2764 keep.append('vms')
2764 2765 elif pycompat.sysplatform == 'plan9':
2765 2766 keep.append('plan9')
2766 2767 else:
2767 2768 keep.append('unix')
2768 2769 keep.append(pycompat.sysplatform.lower())
2769 2770 if ui.verbose:
2770 2771 keep.append('verbose')
2771 2772
2772 2773 formatted = help.formattedhelp(ui, name, keep=keep, **opts)
2773 2774 ui.pager('help')
2774 2775 ui.write(formatted)
2775 2776
2776 2777
2777 2778 @command('identify|id',
2778 2779 [('r', 'rev', '',
2779 2780 _('identify the specified revision'), _('REV')),
2780 2781 ('n', 'num', None, _('show local revision number')),
2781 2782 ('i', 'id', None, _('show global revision id')),
2782 2783 ('b', 'branch', None, _('show branch')),
2783 2784 ('t', 'tags', None, _('show tags')),
2784 2785 ('B', 'bookmarks', None, _('show bookmarks')),
2785 2786 ] + remoteopts,
2786 2787 _('[-nibtB] [-r REV] [SOURCE]'),
2787 2788 optionalrepo=True)
2788 2789 def identify(ui, repo, source=None, rev=None,
2789 2790 num=None, id=None, branch=None, tags=None, bookmarks=None, **opts):
2790 2791 """identify the working directory or specified revision
2791 2792
2792 2793 Print a summary identifying the repository state at REV using one or
2793 2794 two parent hash identifiers, followed by a "+" if the working
2794 2795 directory has uncommitted changes, the branch name (if not default),
2795 2796 a list of tags, and a list of bookmarks.
2796 2797
2797 2798 When REV is not given, print a summary of the current state of the
2798 2799 repository.
2799 2800
2800 2801 Specifying a path to a repository root or Mercurial bundle will
2801 2802 cause lookup to operate on that repository/bundle.
2802 2803
2803 2804 .. container:: verbose
2804 2805
2805 2806 Examples:
2806 2807
2807 2808 - generate a build identifier for the working directory::
2808 2809
2809 2810 hg id --id > build-id.dat
2810 2811
2811 2812 - find the revision corresponding to a tag::
2812 2813
2813 2814 hg id -n -r 1.3
2814 2815
2815 2816 - check the most recent revision of a remote repository::
2816 2817
2817 2818 hg id -r tip https://www.mercurial-scm.org/repo/hg/
2818 2819
2819 2820 See :hg:`log` for generating more information about specific revisions,
2820 2821 including full hash identifiers.
2821 2822
2822 2823 Returns 0 if successful.
2823 2824 """
2824 2825
2825 2826 opts = pycompat.byteskwargs(opts)
2826 2827 if not repo and not source:
2827 2828 raise error.Abort(_("there is no Mercurial repository here "
2828 2829 "(.hg not found)"))
2829 2830
2830 2831 if ui.debugflag:
2831 2832 hexfunc = hex
2832 2833 else:
2833 2834 hexfunc = short
2834 2835 default = not (num or id or branch or tags or bookmarks)
2835 2836 output = []
2836 2837 revs = []
2837 2838
2838 2839 if source:
2839 2840 source, branches = hg.parseurl(ui.expandpath(source))
2840 2841 peer = hg.peer(repo or ui, opts, source) # only pass ui when no repo
2841 2842 repo = peer.local()
2842 2843 revs, checkout = hg.addbranchrevs(repo, peer, branches, None)
2843 2844
2844 2845 if not repo:
2845 2846 if num or branch or tags:
2846 2847 raise error.Abort(
2847 2848 _("can't query remote revision number, branch, or tags"))
2848 2849 if not rev and revs:
2849 2850 rev = revs[0]
2850 2851 if not rev:
2851 2852 rev = "tip"
2852 2853
2853 2854 remoterev = peer.lookup(rev)
2854 2855 if default or id:
2855 2856 output = [hexfunc(remoterev)]
2856 2857
2857 2858 def getbms():
2858 2859 bms = []
2859 2860
2860 2861 if 'bookmarks' in peer.listkeys('namespaces'):
2861 2862 hexremoterev = hex(remoterev)
2862 2863 bms = [bm for bm, bmr in peer.listkeys('bookmarks').iteritems()
2863 2864 if bmr == hexremoterev]
2864 2865
2865 2866 return sorted(bms)
2866 2867
2867 2868 if bookmarks:
2868 2869 output.extend(getbms())
2869 2870 elif default and not ui.quiet:
2870 2871 # multiple bookmarks for a single parent separated by '/'
2871 2872 bm = '/'.join(getbms())
2872 2873 if bm:
2873 2874 output.append(bm)
2874 2875 else:
2875 2876 ctx = scmutil.revsingle(repo, rev, None)
2876 2877
2877 2878 if ctx.rev() is None:
2878 2879 ctx = repo[None]
2879 2880 parents = ctx.parents()
2880 2881 taglist = []
2881 2882 for p in parents:
2882 2883 taglist.extend(p.tags())
2883 2884
2884 2885 changed = ""
2885 2886 if default or id or num:
2886 2887 if (any(repo.status())
2887 2888 or any(ctx.sub(s).dirty() for s in ctx.substate)):
2888 2889 changed = '+'
2889 2890 if default or id:
2890 2891 output = ["%s%s" %
2891 2892 ('+'.join([hexfunc(p.node()) for p in parents]), changed)]
2892 2893 if num:
2893 2894 output.append("%s%s" %
2894 2895 ('+'.join([str(p.rev()) for p in parents]), changed))
2895 2896 else:
2896 2897 if default or id:
2897 2898 output = [hexfunc(ctx.node())]
2898 2899 if num:
2899 2900 output.append(str(ctx.rev()))
2900 2901 taglist = ctx.tags()
2901 2902
2902 2903 if default and not ui.quiet:
2903 2904 b = ctx.branch()
2904 2905 if b != 'default':
2905 2906 output.append("(%s)" % b)
2906 2907
2907 2908 # multiple tags for a single parent separated by '/'
2908 2909 t = '/'.join(taglist)
2909 2910 if t:
2910 2911 output.append(t)
2911 2912
2912 2913 # multiple bookmarks for a single parent separated by '/'
2913 2914 bm = '/'.join(ctx.bookmarks())
2914 2915 if bm:
2915 2916 output.append(bm)
2916 2917 else:
2917 2918 if branch:
2918 2919 output.append(ctx.branch())
2919 2920
2920 2921 if tags:
2921 2922 output.extend(taglist)
2922 2923
2923 2924 if bookmarks:
2924 2925 output.extend(ctx.bookmarks())
2925 2926
2926 2927 ui.write("%s\n" % ' '.join(output))
2927 2928
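# Illustrative output of the command above (hash, branch and tag are made up):
#
#   $ hg id
#   9dc36df7a6a8+ (stable) tip
#
# i.e. the short node hash, "+" if the working directory is dirty, the branch
# name when it is not "default", then any tags and bookmarks.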
2928 2929 @command('import|patch',
2929 2930 [('p', 'strip', 1,
2930 2931 _('directory strip option for patch. This has the same '
2931 2932 'meaning as the corresponding patch option'), _('NUM')),
2932 2933 ('b', 'base', '', _('base path (DEPRECATED)'), _('PATH')),
2933 2934 ('e', 'edit', False, _('invoke editor on commit messages')),
2934 2935 ('f', 'force', None,
2935 2936 _('skip check for outstanding uncommitted changes (DEPRECATED)')),
2936 2937 ('', 'no-commit', None,
2937 2938 _("don't commit, just update the working directory")),
2938 2939 ('', 'bypass', None,
2939 2940 _("apply patch without touching the working directory")),
2940 2941 ('', 'partial', None,
2941 2942 _('commit even if some hunks fail')),
2942 2943 ('', 'exact', None,
2943 2944 _('abort if patch would apply lossily')),
2944 2945 ('', 'prefix', '',
2945 2946 _('apply patch to subdirectory'), _('DIR')),
2946 2947 ('', 'import-branch', None,
2947 2948 _('use any branch information in patch (implied by --exact)'))] +
2948 2949 commitopts + commitopts2 + similarityopts,
2949 2950 _('[OPTION]... PATCH...'))
2950 2951 def import_(ui, repo, patch1=None, *patches, **opts):
2951 2952 """import an ordered set of patches
2952 2953
2953 2954 Import a list of patches and commit them individually (unless
2954 2955 --no-commit is specified).
2955 2956
2956 2957 To read a patch from standard input (stdin), use "-" as the patch
2957 2958 name. If a URL is specified, the patch will be downloaded from
2958 2959 there.
2959 2960
2960 2961 Import first applies changes to the working directory (unless
2961 2962 --bypass is specified); it will abort if there are outstanding
2962 2963 changes.
2963 2964
2964 2965 Use --bypass to apply and commit patches directly to the
2965 2966 repository, without affecting the working directory. Without
2966 2967 --exact, patches will be applied on top of the working directory
2967 2968 parent revision.
2968 2969
2969 2970 You can import a patch straight from a mail message. Even patches
2970 2971 as attachments work (to use the body part, it must have type
2971 2972 text/plain or text/x-patch). The From and Subject headers of the
2972 2973 email message are used as the default committer and commit message.
2973 2974 All text/plain body parts before the first diff are added to the
2974 2975 commit message.
2975 2976
2976 2977 If the imported patch was generated by :hg:`export`, user and
2977 2978 description from patch override values from message headers and
2978 2979 body. Values given on command line with -m/--message and -u/--user
2979 2980 override these.
2980 2981
2981 2982 If --exact is specified, import will set the working directory to
2982 2983 the parent of each patch before applying it, and will abort if the
2983 2984 resulting changeset has a different ID than the one recorded in
2984 2985 the patch. This will guard against various ways that portable
2985 2986 patch formats and mail systems might fail to transfer Mercurial
2986 2987 data or metadata. See :hg:`bundle` for lossless transmission.
2987 2988
2988 2989 Use --partial to ensure a changeset will be created from the patch
2989 2990 even if some hunks fail to apply. Hunks that fail to apply will be
2990 2991 written to a <target-file>.rej file. Conflicts can then be resolved
2991 2992 by hand before :hg:`commit --amend` is run to update the created
2992 2993 changeset. This flag exists to let people import patches that
2993 2994 partially apply without losing the associated metadata (author,
2994 2995 date, description, ...).
2995 2996
2996 2997 .. note::
2997 2998
2998 2999 When no hunks apply cleanly, :hg:`import --partial` will create
2999 3000 an empty changeset, importing only the patch metadata.
3000 3001
3001 3002 With -s/--similarity, hg will attempt to discover renames and
3002 3003 copies in the patch in the same way as :hg:`addremove`.
3003 3004
3004 3005 It is possible to use external patch programs to perform the patch
3005 3006 by setting the ``ui.patch`` configuration option. For the default
3006 3007 internal tool, the fuzz can also be configured via ``patch.fuzz``.
3007 3008 See :hg:`help config` for more information about configuration
3008 3009 files and how to use these options.
3009 3010
3010 3011 See :hg:`help dates` for a list of formats valid for -d/--date.
3011 3012
3012 3013 .. container:: verbose
3013 3014
3014 3015 Examples:
3015 3016
3016 3017 - import a traditional patch from a website and detect renames::
3017 3018
3018 3019 hg import -s 80 http://example.com/bugfix.patch
3019 3020
3020 3021 - import a changeset from an hgweb server::
3021 3022
3022 3023 hg import https://www.mercurial-scm.org/repo/hg/rev/5ca8c111e9aa
3023 3024
3024 3025 - import all the patches in a Unix-style mbox::
3025 3026
3026 3027 hg import incoming-patches.mbox
3027 3028
3028 3029 - import patches from stdin::
3029 3030
3030 3031 hg import -
3031 3032
3032 3033 - attempt to exactly restore an exported changeset (not always
3033 3034 possible)::
3034 3035
3035 3036 hg import --exact proposed-fix.patch
3036 3037
3037 3038 - use an external tool to apply a patch which is too fuzzy for
3038 3039 the default internal tool::
3039 3040
3040 3041 hg import --config ui.patch="patch --merge" fuzzy.patch
3041 3042
3042 3043 - change the default fuzz factor from 2 to a less strict 7::
3043 3044
3044 3045 hg import --config patch.fuzz=7 fuzz.patch
3045 3046
3046 3047 Returns 0 on success, 1 on partial success (see --partial).
3047 3048 """
3048 3049
3049 3050 opts = pycompat.byteskwargs(opts)
3050 3051 if not patch1:
3051 3052 raise error.Abort(_('need at least one patch to import'))
3052 3053
3053 3054 patches = (patch1,) + patches
3054 3055
3055 3056 date = opts.get('date')
3056 3057 if date:
3057 3058 opts['date'] = util.parsedate(date)
3058 3059
3059 3060 exact = opts.get('exact')
3060 3061 update = not opts.get('bypass')
3061 3062 if not update and opts.get('no_commit'):
3062 3063 raise error.Abort(_('cannot use --no-commit with --bypass'))
3063 3064 try:
3064 3065 sim = float(opts.get('similarity') or 0)
3065 3066 except ValueError:
3066 3067 raise error.Abort(_('similarity must be a number'))
3067 3068 if sim < 0 or sim > 100:
3068 3069 raise error.Abort(_('similarity must be between 0 and 100'))
3069 3070 if sim and not update:
3070 3071 raise error.Abort(_('cannot use --similarity with --bypass'))
3071 3072 if exact:
3072 3073 if opts.get('edit'):
3073 3074 raise error.Abort(_('cannot use --exact with --edit'))
3074 3075 if opts.get('prefix'):
3075 3076 raise error.Abort(_('cannot use --exact with --prefix'))
3076 3077
3077 3078 base = opts["base"]
3078 3079 wlock = dsguard = lock = tr = None
3079 3080 msgs = []
3080 3081 ret = 0
3081 3082
3082 3083
3083 3084 try:
3084 3085 wlock = repo.wlock()
3085 3086
3086 3087 if update:
3087 3088 cmdutil.checkunfinished(repo)
3088 3089 if (exact or not opts.get('force')):
3089 3090 cmdutil.bailifchanged(repo)
3090 3091
3091 3092 if not opts.get('no_commit'):
3092 3093 lock = repo.lock()
3093 3094 tr = repo.transaction('import')
3094 3095 else:
3095 3096 dsguard = dirstateguard.dirstateguard(repo, 'import')
3096 3097 parents = repo[None].parents()
3097 3098 for patchurl in patches:
3098 3099 if patchurl == '-':
3099 3100 ui.status(_('applying patch from stdin\n'))
3100 3101 patchfile = ui.fin
3101 3102 patchurl = 'stdin' # for error message
3102 3103 else:
3103 3104 patchurl = os.path.join(base, patchurl)
3104 3105 ui.status(_('applying %s\n') % patchurl)
3105 3106 patchfile = hg.openpath(ui, patchurl)
3106 3107
3107 3108 haspatch = False
3108 3109 for hunk in patch.split(patchfile):
3109 3110 (msg, node, rej) = cmdutil.tryimportone(ui, repo, hunk,
3110 3111 parents, opts,
3111 3112 msgs, hg.clean)
3112 3113 if msg:
3113 3114 haspatch = True
3114 3115 ui.note(msg + '\n')
3115 3116 if update or exact:
3116 3117 parents = repo[None].parents()
3117 3118 else:
3118 3119 parents = [repo[node]]
3119 3120 if rej:
3120 3121 ui.write_err(_("patch applied partially\n"))
3121 3122 ui.write_err(_("(fix the .rej files and run "
3122 3123 "`hg commit --amend`)\n"))
3123 3124 ret = 1
3124 3125 break
3125 3126
3126 3127 if not haspatch:
3127 3128 raise error.Abort(_('%s: no diffs found') % patchurl)
3128 3129
3129 3130 if tr:
3130 3131 tr.close()
3131 3132 if msgs:
3132 3133 repo.savecommitmessage('\n* * *\n'.join(msgs))
3133 3134 if dsguard:
3134 3135 dsguard.close()
3135 3136 return ret
3136 3137 finally:
3137 3138 if tr:
3138 3139 tr.release()
3139 3140 release(lock, dsguard, wlock)
3140 3141
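# Illustrative --partial workflow for the command above, as described in its
# docstring (file names are hypothetical):
#
#   $ hg import --partial feature.patch   # exits 1 when some hunks fail
#   $ $EDITOR src/main.c src/main.c.rej   # resolve the rejected hunks by hand
#   $ hg commit --amend                   # fold the fixes into the changeset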
3141 3142 @command('incoming|in',
3142 3143 [('f', 'force', None,
3143 3144 _('run even if remote repository is unrelated')),
3144 3145 ('n', 'newest-first', None, _('show newest record first')),
3145 3146 ('', 'bundle', '',
3146 3147 _('file to store the bundles into'), _('FILE')),
3147 3148 ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
3148 3149 ('B', 'bookmarks', False, _("compare bookmarks")),
3149 3150 ('b', 'branch', [],
3150 3151 _('a specific branch you would like to pull'), _('BRANCH')),
3151 3152 ] + logopts + remoteopts + subrepoopts,
3152 3153 _('[-p] [-n] [-M] [-f] [-r REV]... [--bundle FILENAME] [SOURCE]'))
3153 3154 def incoming(ui, repo, source="default", **opts):
3154 3155 """show new changesets found in source
3155 3156
3156 3157 Show new changesets found in the specified path/URL or the default
3157 3158 pull location. These are the changesets that would have been pulled
3158 3159 if a pull at the time you issued this command.
3159 3160
3160 3161 See pull for valid source format details.
3161 3162
3162 3163 .. container:: verbose
3163 3164
3164 3165 With -B/--bookmarks, the result of bookmark comparison between
3165 3166 local and remote repositories is displayed. With -v/--verbose,
3166 3167 status is also displayed for each bookmark like below::
3167 3168
3168 3169 BM1 01234567890a added
3169 3170 BM2 1234567890ab advanced
3170 3171 BM3 234567890abc diverged
3171 3172 BM4 34567890abcd changed
3172 3173
3173 3174 The action taken locally when pulling depends on the
3174 3175 status of each bookmark:
3175 3176
3176 3177 :``added``: pull will create it
3177 3178 :``advanced``: pull will update it
3178 3179 :``diverged``: pull will create a divergent bookmark
3179 3180 :``changed``: result depends on remote changesets
3180 3181
3181 3182 From the point of view of pulling behavior, bookmarks
3182 3183 existing only in the remote repository are treated as ``added``,
3183 3184 even if they are in fact locally deleted.
3184 3185
3185 3186 .. container:: verbose
3186 3187
3187 3188 For a remote repository, using --bundle avoids downloading the
3188 3189 changesets twice if the incoming run is followed by a pull.
3189 3190
3190 3191 Examples:
3191 3192
3192 3193 - show incoming changes with patches and full description::
3193 3194
3194 3195 hg incoming -vp
3195 3196
3196 3197 - show incoming changes excluding merges, store a bundle::
3197 3198
3198 3199 hg in -vpM --bundle incoming.hg
3199 3200 hg pull incoming.hg
3200 3201
3201 3202 - briefly list changes inside a bundle::
3202 3203
3203 3204 hg in changes.hg -T "{desc|firstline}\\n"
3204 3205
3205 3206 Returns 0 if there are incoming changes, 1 otherwise.
3206 3207 """
3207 3208 opts = pycompat.byteskwargs(opts)
3208 3209 if opts.get('graph'):
3209 3210 cmdutil.checkunsupportedgraphflags([], opts)
3210 3211 def display(other, chlist, displayer):
3211 3212 revdag = cmdutil.graphrevs(other, chlist, opts)
3212 3213 cmdutil.displaygraph(ui, repo, revdag, displayer,
3213 3214 graphmod.asciiedges)
3214 3215
3215 3216 hg._incoming(display, lambda: 1, ui, repo, source, opts, buffered=True)
3216 3217 return 0
3217 3218
3218 3219 if opts.get('bundle') and opts.get('subrepos'):
3219 3220 raise error.Abort(_('cannot combine --bundle and --subrepos'))
3220 3221
3221 3222 if opts.get('bookmarks'):
3222 3223 source, branches = hg.parseurl(ui.expandpath(source),
3223 3224 opts.get('branch'))
3224 3225 other = hg.peer(repo, opts, source)
3225 3226 if 'bookmarks' not in other.listkeys('namespaces'):
3226 3227 ui.warn(_("remote doesn't support bookmarks\n"))
3227 3228 return 0
3228 3229 ui.pager('incoming')
3229 3230 ui.status(_('comparing with %s\n') % util.hidepassword(source))
3230 3231 return bookmarks.incoming(ui, repo, other)
3231 3232
3232 3233 repo._subtoppath = ui.expandpath(source)
3233 3234 try:
3234 3235 return hg.incoming(ui, repo, source, opts)
3235 3236 finally:
3236 3237 del repo._subtoppath
3237 3238
3238 3239
3239 3240 @command('^init', remoteopts, _('[-e CMD] [--remotecmd CMD] [DEST]'),
3240 3241 norepo=True)
3241 3242 def init(ui, dest=".", **opts):
3242 3243 """create a new repository in the given directory
3243 3244
3244 3245 Initialize a new repository in the given directory. If the given
3245 3246 directory does not exist, it will be created.
3246 3247
3247 3248 If no directory is given, the current directory is used.
3248 3249
3249 3250 It is possible to specify an ``ssh://`` URL as the destination.
3250 3251 See :hg:`help urls` for more information.
3251 3252
3252 3253 Returns 0 on success.
3253 3254 """
3254 3255 opts = pycompat.byteskwargs(opts)
3255 3256 hg.peer(ui, opts, ui.expandpath(dest), create=True)
3256 3257
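# Illustrative invocations of the command above (paths and host are
# hypothetical):
#
#   $ hg init                           # initialize the current directory
#   $ hg init project                   # create ./project and initialize it
#   $ hg init ssh://user@host/project   # create a repository on a remote host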
3257 3258 @command('locate',
3258 3259 [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
3259 3260 ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
3260 3261 ('f', 'fullpath', None, _('print complete paths from the filesystem root')),
3261 3262 ] + walkopts,
3262 3263 _('[OPTION]... [PATTERN]...'))
3263 3264 def locate(ui, repo, *pats, **opts):
3264 3265 """locate files matching specific patterns (DEPRECATED)
3265 3266
3266 3267 Print files under Mercurial control in the working directory whose
3267 3268 names match the given patterns.
3268 3269
3269 3270 By default, this command searches all directories in the working
3270 3271 directory. To search just the current directory and its
3271 3272 subdirectories, use "--include .".
3272 3273
3273 3274 If no patterns are given to match, this command prints the names
3274 3275 of all files under Mercurial control in the working directory.
3275 3276
3276 3277 If you want to feed the output of this command into the "xargs"
3277 3278 command, use the -0 option to both this command and "xargs". This
3278 3279 will avoid the problem of "xargs" treating single filenames that
3279 3280 contain whitespace as multiple filenames.
3280 3281
3281 3282 See :hg:`help files` for a more versatile command.
3282 3283
3283 3284 Returns 0 if a match is found, 1 otherwise.
3284 3285 """
3285 3286 opts = pycompat.byteskwargs(opts)
3286 3287 if opts.get('print0'):
3287 3288 end = '\0'
3288 3289 else:
3289 3290 end = '\n'
3290 3291 rev = scmutil.revsingle(repo, opts.get('rev'), None).node()
3291 3292
3292 3293 ret = 1
3293 3294 ctx = repo[rev]
3294 3295 m = scmutil.match(ctx, pats, opts, default='relglob',
3295 3296 badfn=lambda x, y: False)
3296 3297
3297 3298 ui.pager('locate')
3298 3299 for abs in ctx.matches(m):
3299 3300 if opts.get('fullpath'):
3300 3301 ui.write(repo.wjoin(abs), end)
3301 3302 else:
3302 3303 ui.write(((pats and m.rel(abs)) or abs), end)
3303 3304 ret = 0
3304 3305
3305 3306 return ret
3306 3307
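# Illustrative use of -0/--print0 with xargs, as suggested in the docstring
# above (the pattern is hypothetical):
#
#   $ hg locate -0 'glob:**.py' | xargs -0 wc -l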
3307 3308 @command('^log|history',
3308 3309 [('f', 'follow', None,
3309 3310 _('follow changeset history, or file history across copies and renames')),
3310 3311 ('', 'follow-first', None,
3311 3312 _('only follow the first parent of merge changesets (DEPRECATED)')),
3312 3313 ('d', 'date', '', _('show revisions matching date spec'), _('DATE')),
3313 3314 ('C', 'copies', None, _('show copied files')),
3314 3315 ('k', 'keyword', [],
3315 3316 _('do case-insensitive search for a given text'), _('TEXT')),
3316 3317 ('r', 'rev', [], _('show the specified revision or revset'), _('REV')),
3317 3318 ('', 'removed', None, _('include revisions where files were removed')),
3318 3319 ('m', 'only-merges', None, _('show only merges (DEPRECATED)')),
3319 3320 ('u', 'user', [], _('revisions committed by user'), _('USER')),
3320 3321 ('', 'only-branch', [],
3321 3322 _('show only changesets within the given named branch (DEPRECATED)'),
3322 3323 _('BRANCH')),
3323 3324 ('b', 'branch', [],
3324 3325 _('show changesets within the given named branch'), _('BRANCH')),
3325 3326 ('P', 'prune', [],
3326 3327 _('do not display revision or any of its ancestors'), _('REV')),
3327 3328 ] + logopts + walkopts,
3328 3329 _('[OPTION]... [FILE]'),
3329 3330 inferrepo=True)
3330 3331 def log(ui, repo, *pats, **opts):
3331 3332 """show revision history of entire repository or files
3332 3333
3333 3334 Print the revision history of the specified files or the entire
3334 3335 project.
3335 3336
3336 3337 If no revision range is specified, the default is ``tip:0`` unless
3337 3338 --follow is set, in which case the working directory parent is
3338 3339 used as the starting revision.
3339 3340
3340 3341 File history is shown without following rename or copy history of
3341 3342 files. Use -f/--follow with a filename to follow history across
3342 3343 renames and copies. --follow without a filename will only show
3343 3344 ancestors or descendants of the starting revision.
3344 3345
3345 3346 By default this command prints revision number and changeset id,
3346 3347 tags, non-trivial parents, user, date and time, and a summary for
3347 3348 each commit. When the -v/--verbose switch is used, the list of
3348 3349 changed files and full commit message are shown.
3349 3350
3350 3351 With --graph the revisions are shown as an ASCII art DAG with the most
3351 3352 recent changeset at the top.
3352 3353 'o' is a changeset, '@' is a working directory parent, 'x' is obsolete,
3353 3354 and '+' represents a fork where the changeset from the lines below is a
3354 3355 parent of the 'o' merge on the same line.
3355 3356 Paths in the DAG are represented with '|', '/' and so forth. ':' in place
3356 3357 of a '|' indicates one or more revisions in a path are omitted.
3357 3358
3358 3359 .. note::
3359 3360
3360 3361 :hg:`log --patch` may generate unexpected diff output for merge
3361 3362 changesets, as it will only compare the merge changeset against
3362 3363 its first parent. Also, only files different from BOTH parents
3363 3364 will appear in files:.
3364 3365
3365 3366 .. note::
3366 3367
3367 3368 For performance reasons, :hg:`log FILE` may omit duplicate changes
3368 3369 made on branches and will not show removals or mode changes. To
3369 3370 see all such changes, use the --removed switch.
3370 3371
3371 3372 .. container:: verbose
3372 3373
3373 3374 Some examples:
3374 3375
3375 3376 - changesets with full descriptions and file lists::
3376 3377
3377 3378 hg log -v
3378 3379
3379 3380 - changesets ancestral to the working directory::
3380 3381
3381 3382 hg log -f
3382 3383
3383 3384 - last 10 commits on the current branch::
3384 3385
3385 3386 hg log -l 10 -b .
3386 3387
3387 3388 - changesets showing all modifications of a file, including removals::
3388 3389
3389 3390 hg log --removed file.c
3390 3391
3391 3392 - all changesets that touch a directory, with diffs, excluding merges::
3392 3393
3393 3394 hg log -Mp lib/
3394 3395
3395 3396 - all revision numbers that match a keyword::
3396 3397
3397 3398 hg log -k bug --template "{rev}\\n"
3398 3399
3399 3400 - the full hash identifier of the working directory parent::
3400 3401
3401 3402 hg log -r . --template "{node}\\n"
3402 3403
3403 3404 - list available log templates::
3404 3405
3405 3406 hg log -T list
3406 3407
3407 3408 - check if a given changeset is included in a tagged release::
3408 3409
3409 3410 hg log -r "a21ccf and ancestor(1.9)"
3410 3411
3411 3412 - find all changesets by some user in a date range::
3412 3413
3413 3414 hg log -k alice -d "may 2008 to jul 2008"
3414 3415
3415 3416 - summary of all changesets after the last tag::
3416 3417
3417 3418 hg log -r "last(tagged())::" --template "{desc|firstline}\\n"
3418 3419
3419 3420 See :hg:`help dates` for a list of formats valid for -d/--date.
3420 3421
3421 3422 See :hg:`help revisions` for more about specifying and ordering
3422 3423 revisions.
3423 3424
3424 3425 See :hg:`help templates` for more about pre-packaged styles and
3425 3426 specifying custom templates.
3426 3427
3427 3428 Returns 0 on success.
3428 3429
3429 3430 """
3430 3431 opts = pycompat.byteskwargs(opts)
3431 3432 if opts.get('follow') and opts.get('rev'):
3432 3433 opts['rev'] = [revsetlang.formatspec('reverse(::%lr)', opts.get('rev'))]
3433 3434 del opts['follow']
3434 3435
3435 3436 if opts.get('graph'):
3436 3437 return cmdutil.graphlog(ui, repo, pats, opts)
3437 3438
3438 3439 revs, expr, filematcher = cmdutil.getlogrevs(repo, pats, opts)
3439 3440 limit = cmdutil.loglimit(opts)
3440 3441 count = 0
3441 3442
3442 3443 getrenamed = None
3443 3444 if opts.get('copies'):
3444 3445 endrev = None
3445 3446 if opts.get('rev'):
3446 3447 endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
3447 3448 getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
3448 3449
3449 3450 ui.pager('log')
3450 3451 displayer = cmdutil.show_changeset(ui, repo, opts, buffered=True)
3451 3452 for rev in revs:
3452 3453 if count == limit:
3453 3454 break
3454 3455 ctx = repo[rev]
3455 3456 copies = None
3456 3457 if getrenamed is not None and rev:
3457 3458 copies = []
3458 3459 for fn in ctx.files():
3459 3460 rename = getrenamed(fn, rev)
3460 3461 if rename:
3461 3462 copies.append((fn, rename[0]))
3462 3463 if filematcher:
3463 3464 revmatchfn = filematcher(ctx.rev())
3464 3465 else:
3465 3466 revmatchfn = None
3466 3467 displayer.show(ctx, copies=copies, matchfn=revmatchfn)
3467 3468 if displayer.flush(ctx):
3468 3469 count += 1
3469 3470
3470 3471 displayer.close()
3471 3472
3472 3473 @command('manifest',
3473 3474 [('r', 'rev', '', _('revision to display'), _('REV')),
3474 3475 ('', 'all', False, _("list files from all revisions"))]
3475 3476 + formatteropts,
3476 3477 _('[-r REV]'))
3477 3478 def manifest(ui, repo, node=None, rev=None, **opts):
3478 3479 """output the current or given revision of the project manifest
3479 3480
3480 3481 Print a list of version controlled files for the given revision.
3481 3482 If no revision is given, the first parent of the working directory
3482 3483 is used, or the null revision if no revision is checked out.
3483 3484
3484 3485 With -v, print file permissions, symlink and executable bits.
3485 3486 With --debug, print file revision hashes.
3486 3487
3487 3488 If option --all is specified, the list of all files from all revisions
3488 3489 is printed. This includes deleted and renamed files.
3489 3490
3490 3491 Returns 0 on success.
3491 3492 """
3492 3493 opts = pycompat.byteskwargs(opts)
3493 3494 fm = ui.formatter('manifest', opts)
3494 3495
3495 3496 if opts.get('all'):
3496 3497 if rev or node:
3497 3498 raise error.Abort(_("can't specify a revision with --all"))
3498 3499
3499 3500 res = []
3500 3501 prefix = "data/"
3501 3502 suffix = ".i"
3502 3503 plen = len(prefix)
3503 3504 slen = len(suffix)
3504 3505 with repo.lock():
3505 3506 for fn, b, size in repo.store.datafiles():
3506 3507 if size != 0 and fn[-slen:] == suffix and fn[:plen] == prefix:
3507 3508 res.append(fn[plen:-slen])
3508 3509 ui.pager('manifest')
3509 3510 for f in res:
3510 3511 fm.startitem()
3511 3512 fm.write("path", '%s\n', f)
3512 3513 fm.end()
3513 3514 return
3514 3515
3515 3516 if rev and node:
3516 3517 raise error.Abort(_("please specify just one revision"))
3517 3518
3518 3519 if not node:
3519 3520 node = rev
3520 3521
3521 3522 char = {'l': '@', 'x': '*', '': ''}
3522 3523 mode = {'l': '644', 'x': '755', '': '644'}
3523 3524 ctx = scmutil.revsingle(repo, node)
3524 3525 mf = ctx.manifest()
3525 3526 ui.pager('manifest')
3526 3527 for f in ctx:
3527 3528 fm.startitem()
3528 3529 fl = ctx[f].flags()
3529 3530 fm.condwrite(ui.debugflag, 'hash', '%s ', hex(mf[f]))
3530 3531 fm.condwrite(ui.verbose, 'mode type', '%s %1s ', mode[fl], char[fl])
3531 3532 fm.write('path', '%s\n', f)
3532 3533 fm.end()
3533 3534
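# Illustrative invocations of the command above (the tag "1.0" is
# hypothetical):
#
#   $ hg manifest            # files tracked in the working directory parent
#   $ hg manifest -r 1.0     # files tracked at revision 1.0
#   $ hg manifest --all      # every file name ever tracked in the repository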
3534 3535 @command('^merge',
3535 3536 [('f', 'force', None,
3536 3537 _('force a merge including outstanding changes (DEPRECATED)')),
3537 3538 ('r', 'rev', '', _('revision to merge'), _('REV')),
3538 3539 ('P', 'preview', None,
3539 3540 _('review revisions to merge (no merge is performed)'))
3540 3541 ] + mergetoolopts,
3541 3542 _('[-P] [[-r] REV]'))
3542 3543 def merge(ui, repo, node=None, **opts):
3543 3544 """merge another revision into working directory
3544 3545
3545 3546 The current working directory is updated with all changes made in
3546 3547 the requested revision since the last common predecessor revision.
3547 3548
3548 3549 Files that changed between either parent are marked as changed for
3549 3550 the next commit and a commit must be performed before any further
3550 3551 updates to the repository are allowed. The next commit will have
3551 3552 two parents.
3552 3553
3553 3554 ``--tool`` can be used to specify the merge tool used for file
3554 3555 merges. It overrides the HGMERGE environment variable and your
3555 3556 configuration files. See :hg:`help merge-tools` for options.
3556 3557
3557 3558 If no revision is specified, the working directory's parent is a
3558 3559 head revision, and the current branch contains exactly one other
3559 3560 head, the other head is merged with by default. Otherwise, an
3560 3561 explicit revision with which to merge must be provided.
3561 3562
3562 3563 See :hg:`help resolve` for information on handling file conflicts.
3563 3564
3564 3565 To undo an uncommitted merge, use :hg:`update --clean .` which
3565 3566 will check out a clean copy of the original merge parent, losing
3566 3567 all changes.
3567 3568
3568 3569 Returns 0 on success, 1 if there are unresolved files.
3569 3570 """
3570 3571
3571 3572 opts = pycompat.byteskwargs(opts)
3572 3573 if opts.get('rev') and node:
3573 3574 raise error.Abort(_("please specify just one revision"))
3574 3575 if not node:
3575 3576 node = opts.get('rev')
3576 3577
3577 3578 if node:
3578 3579 node = scmutil.revsingle(repo, node).node()
3579 3580
3580 3581 if not node:
3581 3582 node = repo[destutil.destmerge(repo)].node()
3582 3583
3583 3584 if opts.get('preview'):
3584 3585 # find nodes that are ancestors of p2 but not of p1
3585 3586 p1 = repo.lookup('.')
3586 3587 p2 = repo.lookup(node)
3587 3588 nodes = repo.changelog.findmissing(common=[p1], heads=[p2])
3588 3589
3589 3590 displayer = cmdutil.show_changeset(ui, repo, opts)
3590 3591 for node in nodes:
3591 3592 displayer.show(repo[node])
3592 3593 displayer.close()
3593 3594 return 0
3594 3595
3595 3596 try:
3596 3597 # ui.forcemerge is an internal variable, do not document
3597 3598 repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''), 'merge')
3598 3599 force = opts.get('force')
3599 3600 labels = ['working copy', 'merge rev']
3600 3601 return hg.merge(repo, node, force=force, mergeforce=force,
3601 3602 labels=labels)
3602 3603 finally:
3603 3604 ui.setconfig('ui', 'forcemerge', '', 'merge')
3604 3605
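# Illustrative merge sequence for the command above (the revision name
# "stable" is hypothetical):
#
#   $ hg merge -P stable            # preview the changesets that would merge
#   $ hg merge stable               # perform the merge
#   $ hg commit -m 'merge stable'   # record the two-parent changeset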
3605 3606 @command('outgoing|out',
3606 3607 [('f', 'force', None, _('run even when the destination is unrelated')),
3607 3608 ('r', 'rev', [],
3608 3609 _('a changeset intended to be included in the destination'), _('REV')),
3609 3610 ('n', 'newest-first', None, _('show newest record first')),
3610 3611 ('B', 'bookmarks', False, _('compare bookmarks')),
3611 3612 ('b', 'branch', [], _('a specific branch you would like to push'),
3612 3613 _('BRANCH')),
3613 3614 ] + logopts + remoteopts + subrepoopts,
3614 3615 _('[-M] [-p] [-n] [-f] [-r REV]... [DEST]'))
3615 3616 def outgoing(ui, repo, dest=None, **opts):
3616 3617 """show changesets not found in the destination
3617 3618
3618 3619 Show changesets not found in the specified destination repository
3619 3620 or the default push location. These are the changesets that would
3620 3621 be pushed if a push was requested.
3621 3622
3622 3623 See pull for details of valid destination formats.
3623 3624
3624 3625 .. container:: verbose
3625 3626
3626 3627 With -B/--bookmarks, the result of bookmark comparison between
3627 3628 local and remote repositories is displayed. With -v/--verbose,
3628 3629 status is also displayed for each bookmark like below::
3629 3630
3630 3631 BM1 01234567890a added
3631 3632 BM2 deleted
3632 3633 BM3 234567890abc advanced
3633 3634 BM4 34567890abcd diverged
3634 3635 BM5 4567890abcde changed
3635 3636
3636 3637 The action taken when pushing depends on the
3637 3638 status of each bookmark:
3638 3639
3639 3640 :``added``: push with ``-B`` will create it
3640 3641 :``deleted``: push with ``-B`` will delete it
3641 3642 :``advanced``: push will update it
3642 3643 :``diverged``: push with ``-B`` will update it
3643 3644 :``changed``: push with ``-B`` will update it
3644 3645
3645 3646 From the point of view of pushing behavior, bookmarks
3646 3647 existing only in the remote repository are treated as
3647 3648 ``deleted``, even if they are in fact added remotely.
3648 3649
3649 3650 Returns 0 if there are outgoing changes, 1 otherwise.
3650 3651 """
3651 3652 opts = pycompat.byteskwargs(opts)
3652 3653 if opts.get('graph'):
3653 3654 cmdutil.checkunsupportedgraphflags([], opts)
3654 3655 o, other = hg._outgoing(ui, repo, dest, opts)
3655 3656 if not o:
3656 3657 cmdutil.outgoinghooks(ui, repo, other, opts, o)
3657 3658 return
3658 3659
3659 3660 revdag = cmdutil.graphrevs(repo, o, opts)
3660 3661 ui.pager('outgoing')
3661 3662 displayer = cmdutil.show_changeset(ui, repo, opts, buffered=True)
3662 3663 cmdutil.displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges)
3663 3664 cmdutil.outgoinghooks(ui, repo, other, opts, o)
3664 3665 return 0
3665 3666
3666 3667 if opts.get('bookmarks'):
3667 3668 dest = ui.expandpath(dest or 'default-push', dest or 'default')
3668 3669 dest, branches = hg.parseurl(dest, opts.get('branch'))
3669 3670 other = hg.peer(repo, opts, dest)
3670 3671 if 'bookmarks' not in other.listkeys('namespaces'):
3671 3672 ui.warn(_("remote doesn't support bookmarks\n"))
3672 3673 return 0
3673 3674 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
3674 3675 ui.pager('outgoing')
3675 3676 return bookmarks.outgoing(ui, repo, other)
3676 3677
3677 3678 repo._subtoppath = ui.expandpath(dest or 'default-push', dest or 'default')
3678 3679 try:
3679 3680 return hg.outgoing(ui, repo, dest, opts)
3680 3681 finally:
3681 3682 del repo._subtoppath
3682 3683
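# Illustrative invocations of the command above (the path alias "upstream" is
# hypothetical):
#
#   $ hg outgoing               # changesets missing from the default push path
#   $ hg outgoing -B upstream   # compare bookmarks with the "upstream" path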
3683 3684 @command('parents',
3684 3685 [('r', 'rev', '', _('show parents of the specified revision'), _('REV')),
3685 3686 ] + templateopts,
3686 3687 _('[-r REV] [FILE]'),
3687 3688 inferrepo=True)
3688 3689 def parents(ui, repo, file_=None, **opts):
3689 3690 """show the parents of the working directory or revision (DEPRECATED)
3690 3691
3691 3692 Print the working directory's parent revisions. If a revision is
3692 3693 given via -r/--rev, the parent of that revision will be printed.
3693 3694 If a file argument is given, the revision in which the file was
3694 3695 last changed (before the working directory revision or the
3695 3696 argument to --rev if given) is printed.
3696 3697
3697 3698 This command is equivalent to::
3698 3699
3699 3700 hg log -r "p1()+p2()" or
3700 3701 hg log -r "p1(REV)+p2(REV)" or
3701 3702 hg log -r "max(::p1() and file(FILE))+max(::p2() and file(FILE))" or
3702 3703 hg log -r "max(::p1(REV) and file(FILE))+max(::p2(REV) and file(FILE))"
3703 3704
3704 3705 See :hg:`summary` and :hg:`help revsets` for related information.
3705 3706
3706 3707 Returns 0 on success.
3707 3708 """
3708 3709
3709 3710 opts = pycompat.byteskwargs(opts)
3710 3711 ctx = scmutil.revsingle(repo, opts.get('rev'), None)
3711 3712
3712 3713 if file_:
3713 3714 m = scmutil.match(ctx, (file_,), opts)
3714 3715 if m.anypats() or len(m.files()) != 1:
3715 3716 raise error.Abort(_('can only specify an explicit filename'))
3716 3717 file_ = m.files()[0]
3717 3718 filenodes = []
3718 3719 for cp in ctx.parents():
3719 3720 if not cp:
3720 3721 continue
3721 3722 try:
3722 3723 filenodes.append(cp.filenode(file_))
3723 3724 except error.LookupError:
3724 3725 pass
3725 3726 if not filenodes:
3726 3727 raise error.Abort(_("'%s' not found in manifest!") % file_)
3727 3728 p = []
3728 3729 for fn in filenodes:
3729 3730 fctx = repo.filectx(file_, fileid=fn)
3730 3731 p.append(fctx.node())
3731 3732 else:
3732 3733 p = [cp.node() for cp in ctx.parents()]
3733 3734
3734 3735 displayer = cmdutil.show_changeset(ui, repo, opts)
3735 3736 for n in p:
3736 3737 if n != nullid:
3737 3738 displayer.show(repo[n])
3738 3739 displayer.close()
3739 3740
3740 3741 @command('paths', formatteropts, _('[NAME]'), optionalrepo=True)
3741 3742 def paths(ui, repo, search=None, **opts):
3742 3743 """show aliases for remote repositories
3743 3744
3744 3745 Show definition of symbolic path name NAME. If no name is given,
3745 3746 show definition of all available names.
3746 3747
3747 3748 Option -q/--quiet suppresses all output when searching for NAME
3748 3749 and shows only the path names when listing all definitions.
3749 3750
3750 3751 Path names are defined in the [paths] section of your
3751 3752 configuration file and in ``/etc/mercurial/hgrc``. If run inside a
3752 3753 repository, ``.hg/hgrc`` is used, too.
3753 3754
3754 3755 The path names ``default`` and ``default-push`` have a special
3755 3756 meaning. When performing a push or pull operation, they are used
3756 3757 as fallbacks if no location is specified on the command-line.
3757 3758 When ``default-push`` is set, it will be used for push and
3758 3759 ``default`` will be used for pull; otherwise ``default`` is used
3759 3760 as the fallback for both. When cloning a repository, the clone
3760 3761 source is written as ``default`` in ``.hg/hgrc``.
3761 3762
3762 3763 .. note::
3763 3764
3764 3765 ``default`` and ``default-push`` apply to all inbound (e.g.
3765 3766 :hg:`incoming`) and outbound (e.g. :hg:`outgoing`, :hg:`email`
3766 3767 and :hg:`bundle`) operations.
3767 3768
3768 3769 See :hg:`help urls` for more information.
3769 3770
3770 3771 Returns 0 on success.
3771 3772 """
3772 3773
3773 3774 opts = pycompat.byteskwargs(opts)
3774 3775 ui.pager('paths')
3775 3776 if search:
3776 3777 pathitems = [(name, path) for name, path in ui.paths.iteritems()
3777 3778 if name == search]
3778 3779 else:
3779 3780 pathitems = sorted(ui.paths.iteritems())
3780 3781
3781 3782 fm = ui.formatter('paths', opts)
3782 3783 if fm.isplain():
3783 3784 hidepassword = util.hidepassword
3784 3785 else:
3785 3786 hidepassword = str
3786 3787 if ui.quiet:
3787 3788 namefmt = '%s\n'
3788 3789 else:
3789 3790 namefmt = '%s = '
3790 3791 showsubopts = not search and not ui.quiet
3791 3792
3792 3793 for name, path in pathitems:
3793 3794 fm.startitem()
3794 3795 fm.condwrite(not search, 'name', namefmt, name)
3795 3796 fm.condwrite(not ui.quiet, 'url', '%s\n', hidepassword(path.rawloc))
3796 3797 for subopt, value in sorted(path.suboptions.items()):
3797 3798 assert subopt not in ('name', 'url')
3798 3799 if showsubopts:
3799 3800 fm.plain('%s:%s = ' % (name, subopt))
3800 3801 fm.condwrite(showsubopts, subopt, '%s\n', value)
3801 3802
3802 3803 fm.end()
3803 3804
3804 3805 if search and not pathitems:
3805 3806 if not ui.quiet:
3806 3807 ui.warn(_("not found!\n"))
3807 3808 return 1
3808 3809 else:
3809 3810 return 0
3810 3811
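# Illustrative output of the command above (aliases and URLs are made up):
#
#   $ hg paths
#   default = https://example.com/main-repo
#   upstream = ssh://hg@example.com/main-repo
#   $ hg paths default          # print only the "default" definition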
3811 3812 @command('phase',
3812 3813 [('p', 'public', False, _('set changeset phase to public')),
3813 3814 ('d', 'draft', False, _('set changeset phase to draft')),
3814 3815 ('s', 'secret', False, _('set changeset phase to secret')),
3815 3816 ('f', 'force', False, _('allow to move boundary backward')),
3816 3817 ('r', 'rev', [], _('target revision'), _('REV')),
3817 3818 ],
3818 3819 _('[-p|-d|-s] [-f] [-r] [REV...]'))
3819 3820 def phase(ui, repo, *revs, **opts):
3820 3821 """set or show the current phase name
3821 3822
3822 3823 With no argument, show the phase name of the current revision(s).
3823 3824
3824 3825 With one of -p/--public, -d/--draft or -s/--secret, change the
3825 3826 phase value of the specified revisions.
3826 3827
3827 3828 Unless -f/--force is specified, :hg:`phase` won't move changesets from a
3828 3829 lower phase to a higher phase. Phases are ordered as follows::
3829 3830
3830 3831 public < draft < secret
3831 3832
3832 3833 Returns 0 on success, 1 if some phases could not be changed.
3833 3834
3834 3835 (For more information about the phases concept, see :hg:`help phases`.)
3835 3836 """
3836 3837 opts = pycompat.byteskwargs(opts)
3837 3838 # search for a unique phase argument
3838 3839 targetphase = None
3839 3840 for idx, name in enumerate(phases.phasenames):
3840 3841 if opts[name]:
3841 3842 if targetphase is not None:
3842 3843 raise error.Abort(_('only one phase can be specified'))
3843 3844 targetphase = idx
3844 3845
3845 3846 # look for specified revision
3846 3847 revs = list(revs)
3847 3848 revs.extend(opts['rev'])
3848 3849 if not revs:
3849 3850 # display both parents as the second parent phase can influence
3850 3851 # the phase of a merge commit
3851 3852 revs = [c.rev() for c in repo[None].parents()]
3852 3853
3853 3854 revs = scmutil.revrange(repo, revs)
3854 3855
3855 3856 lock = None
3856 3857 ret = 0
3857 3858 if targetphase is None:
3858 3859 # display
3859 3860 for r in revs:
3860 3861 ctx = repo[r]
3861 3862 ui.write('%i: %s\n' % (ctx.rev(), ctx.phasestr()))
3862 3863 else:
3863 3864 tr = None
3864 3865 lock = repo.lock()
3865 3866 try:
3866 3867 tr = repo.transaction("phase")
3867 3868 # set phase
3868 3869 if not revs:
3869 3870 raise error.Abort(_('empty revision set'))
3870 3871 nodes = [repo[r].node() for r in revs]
3871 3872 # moving revisions from public to draft may hide them
3872 3873 # We have to check result on an unfiltered repository
3873 3874 unfi = repo.unfiltered()
3874 3875 getphase = unfi._phasecache.phase
3875 3876 olddata = [getphase(unfi, r) for r in unfi]
3876 3877 phases.advanceboundary(repo, tr, targetphase, nodes)
3877 3878 if opts['force']:
3878 3879 phases.retractboundary(repo, tr, targetphase, nodes)
3879 3880 tr.close()
3880 3881 finally:
3881 3882 if tr is not None:
3882 3883 tr.release()
3883 3884 lock.release()
3884 3885 getphase = unfi._phasecache.phase
3885 3886 newdata = [getphase(unfi, r) for r in unfi]
3886 3887 changes = sum(newdata[r] != olddata[r] for r in unfi)
3887 3888 cl = unfi.changelog
3888 3889 rejected = [n for n in nodes
3889 3890 if newdata[cl.rev(n)] < targetphase]
3890 3891 if rejected:
3891 3892 ui.warn(_('cannot move %i changesets to a higher '
3892 3893 'phase, use --force\n') % len(rejected))
3893 3894 ret = 1
3894 3895 if changes:
3895 3896 msg = _('phase changed for %i changesets\n') % changes
3896 3897 if ret:
3897 3898 ui.status(msg)
3898 3899 else:
3899 3900 ui.note(msg)
3900 3901 else:
3901 3902 ui.warn(_('no phases changed\n'))
3902 3903 return ret
3903 3904
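# Illustrative invocations of the command above (revision numbers are
# hypothetical):
#
#   $ hg phase -r .               # show the phase of the working copy parent
#   $ hg phase --draft -r 5       # move a secret revision 5 forward to draft
#   $ hg phase -f --secret -r 5   # force it back to the secret phase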
3904 3905 def postincoming(ui, repo, modheads, optupdate, checkout, brev):
3905 3906 """Run after a changegroup has been added via pull/unbundle
3906 3907
3907 3908 This takes the following arguments:
3908 3909
3909 3910 :modheads: change of heads by pull/unbundle
3910 3911 :optupdate: updating working directory is needed or not
3911 3912 :checkout: update destination revision (or None to default destination)
3912 3913 :brev: a name, which might be a bookmark to be activated after updating
3913 3914 """
3914 3915 if modheads == 0:
3915 3916 return
3916 3917 if optupdate:
3917 3918 try:
3918 3919 return hg.updatetotally(ui, repo, checkout, brev)
3919 3920 except error.UpdateAbort as inst:
3920 3921 msg = _("not updating: %s") % str(inst)
3921 3922 hint = inst.hint
3922 3923 raise error.UpdateAbort(msg, hint=hint)
3923 3924 if modheads > 1:
3924 3925 currentbranchheads = len(repo.branchheads())
3925 3926 if currentbranchheads == modheads:
3926 3927 ui.status(_("(run 'hg heads' to see heads, 'hg merge' to merge)\n"))
3927 3928 elif currentbranchheads > 1:
3928 3929 ui.status(_("(run 'hg heads .' to see heads, 'hg merge' to "
3929 3930 "merge)\n"))
3930 3931 else:
3931 3932 ui.status(_("(run 'hg heads' to see heads)\n"))
3932 3933 else:
3933 3934 ui.status(_("(run 'hg update' to get a working copy)\n"))
3934 3935
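# Sketch of how the helper above is typically driven; this mirrors the call
# sites in pull() below and in unbundle (not a new API):
#
#   modheads = exchange.pull(repo, other, heads=revs).cgresult
#   return postincoming(ui, repo, modheads, opts.get('update'), checkout, brev)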
3935 3936 @command('^pull',
3936 3937 [('u', 'update', None,
3937 3938 _('update to new branch head if changesets were pulled')),
3938 3939 ('f', 'force', None, _('run even when remote repository is unrelated')),
3939 3940 ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
3940 3941 ('B', 'bookmark', [], _("bookmark to pull"), _('BOOKMARK')),
3941 3942 ('b', 'branch', [], _('a specific branch you would like to pull'),
3942 3943 _('BRANCH')),
3943 3944 ] + remoteopts,
3944 3945 _('[-u] [-f] [-r REV]... [-e CMD] [--remotecmd CMD] [SOURCE]'))
3945 3946 def pull(ui, repo, source="default", **opts):
3946 3947 """pull changes from the specified source
3947 3948
3948 3949 Pull changes from a remote repository to a local one.
3949 3950
3950 3951 This finds all changes from the repository at the specified path
3951 3952 or URL and adds them to a local repository (the current one unless
3952 3953 -R is specified). By default, this does not update the copy of the
3953 3954 project in the working directory.
3954 3955
3955 3956 Use :hg:`incoming` if you want to see what would have been added
3956 3957 by a pull at the time you issued this command. If you then decide
3957 3958 to add those changes to the repository, you should use :hg:`pull
3958 3959 -r X` where ``X`` is the last changeset listed by :hg:`incoming`.
3959 3960
3960 3961 If SOURCE is omitted, the 'default' path will be used.
3961 3962 See :hg:`help urls` for more information.
3962 3963
3963 3964 Specifying a bookmark as ``.`` is equivalent to specifying the active
3964 3965 bookmark's name.
3965 3966
3966 3967 Returns 0 on success, 1 if an update had unresolved files.
3967 3968 """
3968 3969
3969 3970 opts = pycompat.byteskwargs(opts)
3970 3971 if ui.configbool('commands', 'update.requiredest') and opts.get('update'):
3971 3972 msg = _('update destination required by configuration')
3972 3973 hint = _('use hg pull followed by hg update DEST')
3973 3974 raise error.Abort(msg, hint=hint)
3974 3975
3975 3976 source, branches = hg.parseurl(ui.expandpath(source), opts.get('branch'))
3976 3977 ui.status(_('pulling from %s\n') % util.hidepassword(source))
3977 3978 other = hg.peer(repo, opts, source)
3978 3979 try:
3979 3980 revs, checkout = hg.addbranchrevs(repo, other, branches,
3980 3981 opts.get('rev'))
3981 3982
3982 3983
3983 3984 pullopargs = {}
3984 3985 if opts.get('bookmark'):
3985 3986 if not revs:
3986 3987 revs = []
3987 3988 # The list of bookmarks used here is not the one used to actually
3988 3989 # update the bookmark name. This can result in the revision pulled
3989 3990 # not ending up with the name of the bookmark because of a race
3990 3991 # condition on the server. (See issue 4689 for details)
3991 3992 remotebookmarks = other.listkeys('bookmarks')
3992 3993 pullopargs['remotebookmarks'] = remotebookmarks
3993 3994 for b in opts['bookmark']:
3994 3995 b = repo._bookmarks.expandname(b)
3995 3996 if b not in remotebookmarks:
3996 3997 raise error.Abort(_('remote bookmark %s not found!') % b)
3997 3998 revs.append(remotebookmarks[b])
3998 3999
3999 4000 if revs:
4000 4001 try:
4001 4002 # When 'rev' is a bookmark name, we cannot guarantee that it
4002 4003 # will be updated with that name because of a race condition
4003 4004 # server side. (See issue 4689 for details)
4004 4005 oldrevs = revs
4005 4006 revs = [] # actually, nodes
4006 4007 for r in oldrevs:
4007 4008 node = other.lookup(r)
4008 4009 revs.append(node)
4009 4010 if r == checkout:
4010 4011 checkout = node
4011 4012 except error.CapabilityError:
4012 4013 err = _("other repository doesn't support revision lookup, "
4013 4014 "so a rev cannot be specified.")
4014 4015 raise error.Abort(err)
4015 4016
4016 4017 pullopargs.update(opts.get('opargs', {}))
4017 4018 modheads = exchange.pull(repo, other, heads=revs,
4018 4019 force=opts.get('force'),
4019 4020 bookmarks=opts.get('bookmark', ()),
4020 4021 opargs=pullopargs).cgresult
4021 4022
4022 4023 # brev is a name, which might be a bookmark to be activated at
4023 4024 # the end of the update. In other words, it is an explicit
4024 4025 # destination of the update
4025 4026 brev = None
4026 4027
4027 4028 if checkout:
4028 4029 checkout = str(repo.changelog.rev(checkout))
4029 4030
4030 4031 # order below depends on implementation of
4031 4032 # hg.addbranchrevs(). opts['bookmark'] is ignored,
4032 4033 # because 'checkout' is determined without it.
4033 4034 if opts.get('rev'):
4034 4035 brev = opts['rev'][0]
4035 4036 elif opts.get('branch'):
4036 4037 brev = opts['branch'][0]
4037 4038 else:
4038 4039 brev = branches[0]
4039 4040 repo._subtoppath = source
4040 4041 try:
4041 4042 ret = postincoming(ui, repo, modheads, opts.get('update'),
4042 4043 checkout, brev)
4043 4044
4044 4045 finally:
4045 4046 del repo._subtoppath
4046 4047
4047 4048 finally:
4048 4049 other.close()
4049 4050 return ret
4050 4051
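# Illustrative invocations of the command above (the names "1.4" and
# "feature" are hypothetical):
#
#   $ hg pull -u           # pull from the default path and update
#   $ hg pull -r 1.4       # pull only changesets up to the "1.4" changeset
#   $ hg pull -B feature   # pull the "feature" bookmark and its ancestors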
4051 4052 @command('^push',
4052 4053 [('f', 'force', None, _('force push')),
4053 4054 ('r', 'rev', [],
4054 4055 _('a changeset intended to be included in the destination'),
4055 4056 _('REV')),
4056 4057 ('B', 'bookmark', [], _("bookmark to push"), _('BOOKMARK')),
4057 4058 ('b', 'branch', [],
4058 4059 _('a specific branch you would like to push'), _('BRANCH')),
4059 4060 ('', 'new-branch', False, _('allow pushing a new branch')),
4060 4061 ] + remoteopts,
4061 4062 _('[-f] [-r REV]... [-e CMD] [--remotecmd CMD] [DEST]'))
4062 4063 def push(ui, repo, dest=None, **opts):
4063 4064 """push changes to the specified destination
4064 4065
4065 4066 Push changesets from the local repository to the specified
4066 4067 destination.
4067 4068
4068 4069 This operation is symmetrical to pull: it is identical to a pull
4069 4070 in the destination repository from the current one.
4070 4071
4071 4072 By default, push will not allow creation of new heads at the
4072 4073 destination, since multiple heads would make it unclear which head
4073 4074 to use. In this situation, it is recommended to pull and merge
4074 4075 before pushing.
4075 4076
4076 4077 Use --new-branch if you want to allow push to create a new named
4077 4078 branch that is not present at the destination. This allows you to
4078 4079 only create a new branch without forcing other changes.
4079 4080
4080 4081 .. note::
4081 4082
4082 4083 Extra care should be taken with the -f/--force option,
4083 4084 which will push all new heads on all branches, an action which will
4084 4085 almost always cause confusion for collaborators.
4085 4086
4086 4087 If -r/--rev is used, the specified revision and all its ancestors
4087 4088 will be pushed to the remote repository.
4088 4089
4089 4090 If -B/--bookmark is used, the specified bookmarked revision, its
4090 4091 ancestors, and the bookmark will be pushed to the remote
4091 4092 repository. Specifying ``.`` is equivalent to specifying the active
4092 4093 bookmark's name.
4093 4094
4094 4095 Please see :hg:`help urls` for important details about ``ssh://``
4095 4096 URLs. If DESTINATION is omitted, a default path will be used.
4096 4097
4097 4098 Returns 0 if push was successful, 1 if nothing to push.
4098 4099 """
4099 4100
4100 4101 opts = pycompat.byteskwargs(opts)
4101 4102 if opts.get('bookmark'):
4102 4103 ui.setconfig('bookmarks', 'pushing', opts['bookmark'], 'push')
4103 4104 for b in opts['bookmark']:
4104 4105 # translate -B options to -r so changesets get pushed
4105 4106 b = repo._bookmarks.expandname(b)
4106 4107 if b in repo._bookmarks:
4107 4108 opts.setdefault('rev', []).append(b)
4108 4109 else:
4109 4110 # if we try to push a deleted bookmark, translate it to null
4110 4111 # this lets simultaneous -r, -b options continue working
4111 4112 opts.setdefault('rev', []).append("null")
4112 4113
4113 4114 path = ui.paths.getpath(dest, default=('default-push', 'default'))
4114 4115 if not path:
4115 4116 raise error.Abort(_('default repository not configured!'),
4116 4117 hint=_("see 'hg help config.paths'"))
4117 4118 dest = path.pushloc or path.loc
4118 4119 branches = (path.branch, opts.get('branch') or [])
4119 4120 ui.status(_('pushing to %s\n') % util.hidepassword(dest))
4120 4121 revs, checkout = hg.addbranchrevs(repo, repo, branches, opts.get('rev'))
4121 4122 other = hg.peer(repo, opts, dest)
4122 4123
4123 4124 if revs:
4124 4125 revs = [repo.lookup(r) for r in scmutil.revrange(repo, revs)]
4125 4126 if not revs:
4126 4127 raise error.Abort(_("specified revisions evaluate to an empty set"),
4127 4128 hint=_("use different revision arguments"))
4128 4129 elif path.pushrev:
4129 4130 # It doesn't make any sense to specify ancestor revisions. So limit
4130 4131 # to DAG heads to make discovery simpler.
4131 4132 expr = revsetlang.formatspec('heads(%r)', path.pushrev)
4132 4133 revs = scmutil.revrange(repo, [expr])
4133 4134 revs = [repo[rev].node() for rev in revs]
4134 4135 if not revs:
4135 4136 raise error.Abort(_('default push revset for path evaluates to an '
4136 4137 'empty set'))
4137 4138
4138 4139 repo._subtoppath = dest
4139 4140 try:
4140 4141 # push subrepos depth-first for coherent ordering
4141 4142 c = repo['']
4142 4143 subs = c.substate # only repos that are committed
4143 4144 for s in sorted(subs):
4144 4145 result = c.sub(s).push(opts)
4145 4146 if result == 0:
4146 4147 return not result
4147 4148 finally:
4148 4149 del repo._subtoppath
4149 4150 pushop = exchange.push(repo, other, opts.get('force'), revs=revs,
4150 4151 newbranch=opts.get('new_branch'),
4151 4152 bookmarks=opts.get('bookmark', ()),
4152 4153 opargs=opts.get('opargs'))
4153 4154
4154 4155 result = not pushop.cgresult
4155 4156
4156 4157 if pushop.bkresult is not None:
4157 4158 if pushop.bkresult == 2:
4158 4159 result = 2
4159 4160 elif not result and pushop.bkresult:
4160 4161 result = 2
4161 4162
4162 4163 return result
4163 4164
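# Illustrative sketch, not part of commands.py: how a wrapper script might
# drive `hg push` and interpret its exit status, based on the docstring
# above (0 = changesets pushed, 1 = nothing to push; the code above also
# returns 2 when only the bookmark push fails). The helper name is
# hypothetical.
def _example_push(dest=None):
    import subprocess
    cmd = ['hg', 'push'] + ([dest] if dest else [])
    rc = subprocess.call(cmd)
    if rc == 0:
        return 'changesets pushed'
    if rc == 1:
        return 'nothing to push'
    if rc == 2:
        return 'bookmark push failed'
    return 'push aborted (exit %d)' % rc
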
4164 4165 @command('recover', [])
4165 4166 def recover(ui, repo):
4166 4167 """roll back an interrupted transaction
4167 4168
4168 4169 Recover from an interrupted commit or pull.
4169 4170
4170 4171 This command tries to fix the repository status after an
4171 4172 interrupted operation. It should only be necessary when Mercurial
4172 4173 suggests it.
4173 4174
4174 4175 Returns 0 if successful, 1 if nothing to recover or verify fails.
4175 4176 """
4176 4177 if repo.recover():
4177 4178 return hg.verify(repo)
4178 4179 return 1
4179 4180
4180 4181 @command('^remove|rm',
4181 4182 [('A', 'after', None, _('record delete for missing files')),
4182 4183 ('f', 'force', None,
4183 4184 _('forget added files, delete modified files')),
4184 4185 ] + subrepoopts + walkopts,
4185 4186 _('[OPTION]... FILE...'),
4186 4187 inferrepo=True)
4187 4188 def remove(ui, repo, *pats, **opts):
4188 4189 """remove the specified files on the next commit
4189 4190
4190 4191 Schedule the indicated files for removal from the current branch.
4191 4192
4192 4193 This command schedules the files to be removed at the next commit.
4193 4194 To undo a remove before that, see :hg:`revert`. To undo added
4194 4195 files, see :hg:`forget`.
4195 4196
4196 4197 .. container:: verbose
4197 4198
4198 4199 -A/--after can be used to remove only files that have already
4199 4200 been deleted, -f/--force can be used to force deletion, and -Af
4200 4201 can be used to remove files from the next revision without
4201 4202 deleting them from the working directory.
4202 4203
4203 4204 The following table details the behavior of remove for different
4204 4205 file states (columns) and option combinations (rows). The file
4205 4206 states are Added [A], Clean [C], Modified [M] and Missing [!]
4206 4207 (as reported by :hg:`status`). The actions are Warn, Remove
4207 4208 (from branch) and Delete (from disk):
4208 4209
4209 4210 ========= == == == ==
4210 4211 opt/state A C M !
4211 4212 ========= == == == ==
4212 4213 none W RD W R
4213 4214 -f R RD RD R
4214 4215 -A W W W R
4215 4216 -Af R R R R
4216 4217 ========= == == == ==
4217 4218
4218 4219 .. note::
4219 4220
4220 4221 :hg:`remove` never deletes files in Added [A] state from the
4221 4222 working directory, not even if ``--force`` is specified.
4222 4223
4223 4224 Returns 0 on success, 1 if any warnings encountered.
4224 4225 """
4225 4226
4226 4227 opts = pycompat.byteskwargs(opts)
4227 4228 after, force = opts.get('after'), opts.get('force')
4228 4229 if not pats and not after:
4229 4230 raise error.Abort(_('no files specified'))
4230 4231
4231 4232 m = scmutil.match(repo[None], pats, opts)
4232 4233 subrepos = opts.get('subrepos')
4233 4234 return cmdutil.remove(ui, repo, m, "", after, force, subrepos)
4234 4235
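# Illustrative sketch, not part of commands.py: untracking files while
# keeping them on disk, i.e. the `-Af` row of the table in the docstring
# above. The helper name is hypothetical.
def _example_untrack(*files):
    import subprocess
    # -Af removes the files from the next revision without deleting them
    # from the working directory.
    return subprocess.call(['hg', 'remove', '-Af'] + list(files))
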
4235 4236 @command('rename|move|mv',
4236 4237 [('A', 'after', None, _('record a rename that has already occurred')),
4237 4238 ('f', 'force', None, _('forcibly copy over an existing managed file')),
4238 4239 ] + walkopts + dryrunopts,
4239 4240 _('[OPTION]... SOURCE... DEST'))
4240 4241 def rename(ui, repo, *pats, **opts):
4241 4242 """rename files; equivalent of copy + remove
4242 4243
4243 4244 Mark dest as copies of sources; mark sources for deletion. If dest
4244 4245 is a directory, copies are put in that directory. If dest is a
4245 4246 file, there can only be one source.
4246 4247
4247 4248 By default, this command copies the contents of files as they
4248 4249 exist in the working directory. If invoked with -A/--after, the
4249 4250 operation is recorded, but no copying is performed.
4250 4251
4251 4252 This command takes effect at the next commit. To undo a rename
4252 4253 before that, see :hg:`revert`.
4253 4254
4254 4255 Returns 0 on success, 1 if errors are encountered.
4255 4256 """
4256 4257 opts = pycompat.byteskwargs(opts)
4257 4258 with repo.wlock(False):
4258 4259 return cmdutil.copy(ui, repo, pats, opts, rename=True)
4259 4260
4260 4261 @command('resolve',
4261 4262 [('a', 'all', None, _('select all unresolved files')),
4262 4263 ('l', 'list', None, _('list state of files needing merge')),
4263 4264 ('m', 'mark', None, _('mark files as resolved')),
4264 4265 ('u', 'unmark', None, _('mark files as unresolved')),
4265 4266 ('n', 'no-status', None, _('hide status prefix'))]
4266 4267 + mergetoolopts + walkopts + formatteropts,
4267 4268 _('[OPTION]... [FILE]...'),
4268 4269 inferrepo=True)
4269 4270 def resolve(ui, repo, *pats, **opts):
4270 4271 """redo merges or set/view the merge status of files
4271 4272
4272 4273 Merges with unresolved conflicts are often the result of
4273 4274 non-interactive merging using the ``internal:merge`` configuration
4274 4275 setting, or a command-line merge tool like ``diff3``. The resolve
4275 4276 command is used to manage the files involved in a merge, after
4276 4277 :hg:`merge` has been run, and before :hg:`commit` is run (i.e. the
4277 4278 working directory must have two parents). See :hg:`help
4278 4279 merge-tools` for information on configuring merge tools.
4279 4280
4280 4281 The resolve command can be used in the following ways:
4281 4282
4282 4283 - :hg:`resolve [--tool TOOL] FILE...`: attempt to re-merge the specified
4283 4284 files, discarding any previous merge attempts. Re-merging is not
4284 4285 performed for files already marked as resolved. Use ``--all/-a``
4285 4286 to select all unresolved files. ``--tool`` can be used to specify
4286 4287 the merge tool used for the given files. It overrides the HGMERGE
4287 4288 environment variable and your configuration files. Previous file
4288 4289 contents are saved with a ``.orig`` suffix.
4289 4290
4290 4291 - :hg:`resolve -m [FILE]`: mark a file as having been resolved
4291 4292 (e.g. after having manually fixed-up the files). The default is
4292 4293 to mark all unresolved files.
4293 4294
4294 4295 - :hg:`resolve -u [FILE]...`: mark a file as unresolved. The
4295 4296 default is to mark all resolved files.
4296 4297
4297 4298 - :hg:`resolve -l`: list files which had or still have conflicts.
4298 4299 In the printed list, ``U`` = unresolved and ``R`` = resolved.
4299 4300 You can use ``set:unresolved()`` or ``set:resolved()`` to filter
4300 4301 the list. See :hg:`help filesets` for details.
4301 4302
4302 4303 .. note::
4303 4304
4304 4305 Mercurial will not let you commit files with unresolved merge
4305 4306 conflicts. You must use :hg:`resolve -m ...` before you can
4306 4307 commit after a conflicting merge.
4307 4308
4308 4309 Returns 0 on success, 1 if any files fail a resolve attempt.
4309 4310 """
4310 4311
4311 4312 opts = pycompat.byteskwargs(opts)
4312 4313 flaglist = 'all mark unmark list no_status'.split()
4313 4314 all, mark, unmark, show, nostatus = \
4314 4315 [opts.get(o) for o in flaglist]
4315 4316
4316 4317 if (show and (mark or unmark)) or (mark and unmark):
4317 4318 raise error.Abort(_("too many options specified"))
4318 4319 if pats and all:
4319 4320 raise error.Abort(_("can't specify --all and patterns"))
4320 4321 if not (all or pats or show or mark or unmark):
4321 4322 raise error.Abort(_('no files or directories specified'),
4322 4323 hint=('use --all to re-merge all unresolved files'))
4323 4324
4324 4325 if show:
4325 4326 ui.pager('resolve')
4326 4327 fm = ui.formatter('resolve', opts)
4327 4328 ms = mergemod.mergestate.read(repo)
4328 4329 m = scmutil.match(repo[None], pats, opts)
4329 4330 for f in ms:
4330 4331 if not m(f):
4331 4332 continue
4332 4333 l = 'resolve.' + {'u': 'unresolved', 'r': 'resolved',
4333 4334 'd': 'driverresolved'}[ms[f]]
4334 4335 fm.startitem()
4335 4336 fm.condwrite(not nostatus, 'status', '%s ', ms[f].upper(), label=l)
4336 4337 fm.write('path', '%s\n', f, label=l)
4337 4338 fm.end()
4338 4339 return 0
4339 4340
4340 4341 with repo.wlock():
4341 4342 ms = mergemod.mergestate.read(repo)
4342 4343
4343 4344 if not (ms.active() or repo.dirstate.p2() != nullid):
4344 4345 raise error.Abort(
4345 4346 _('resolve command not applicable when not merging'))
4346 4347
4347 4348 wctx = repo[None]
4348 4349
4349 4350 if ms.mergedriver and ms.mdstate() == 'u':
4350 4351 proceed = mergemod.driverpreprocess(repo, ms, wctx)
4351 4352 ms.commit()
4352 4353 # allow mark and unmark to go through
4353 4354 if not mark and not unmark and not proceed:
4354 4355 return 1
4355 4356
4356 4357 m = scmutil.match(wctx, pats, opts)
4357 4358 ret = 0
4358 4359 didwork = False
4359 4360 runconclude = False
4360 4361
4361 4362 tocomplete = []
4362 4363 for f in ms:
4363 4364 if not m(f):
4364 4365 continue
4365 4366
4366 4367 didwork = True
4367 4368
4368 4369 # don't let driver-resolved files be marked, and run the conclude
4369 4370 # step if asked to resolve
4370 4371 if ms[f] == "d":
4371 4372 exact = m.exact(f)
4372 4373 if mark:
4373 4374 if exact:
4374 4375 ui.warn(_('not marking %s as it is driver-resolved\n')
4375 4376 % f)
4376 4377 elif unmark:
4377 4378 if exact:
4378 4379 ui.warn(_('not unmarking %s as it is driver-resolved\n')
4379 4380 % f)
4380 4381 else:
4381 4382 runconclude = True
4382 4383 continue
4383 4384
4384 4385 if mark:
4385 4386 ms.mark(f, "r")
4386 4387 elif unmark:
4387 4388 ms.mark(f, "u")
4388 4389 else:
4389 4390 # backup pre-resolve (merge uses .orig for its own purposes)
4390 4391 a = repo.wjoin(f)
4391 4392 try:
4392 4393 util.copyfile(a, a + ".resolve")
4393 4394 except (IOError, OSError) as inst:
4394 4395 if inst.errno != errno.ENOENT:
4395 4396 raise
4396 4397
4397 4398 try:
4398 4399 # preresolve file
4399 4400 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
4400 4401 'resolve')
4401 4402 complete, r = ms.preresolve(f, wctx)
4402 4403 if not complete:
4403 4404 tocomplete.append(f)
4404 4405 elif r:
4405 4406 ret = 1
4406 4407 finally:
4407 4408 ui.setconfig('ui', 'forcemerge', '', 'resolve')
4408 4409 ms.commit()
4409 4410
4410 4411 # replace filemerge's .orig file with our resolve file, but only
4411 4412 # for merges that are complete
4412 4413 if complete:
4413 4414 try:
4414 4415 util.rename(a + ".resolve",
4415 4416 scmutil.origpath(ui, repo, a))
4416 4417 except OSError as inst:
4417 4418 if inst.errno != errno.ENOENT:
4418 4419 raise
4419 4420
4420 4421 for f in tocomplete:
4421 4422 try:
4422 4423 # resolve file
4423 4424 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
4424 4425 'resolve')
4425 4426 r = ms.resolve(f, wctx)
4426 4427 if r:
4427 4428 ret = 1
4428 4429 finally:
4429 4430 ui.setconfig('ui', 'forcemerge', '', 'resolve')
4430 4431 ms.commit()
4431 4432
4432 4433 # replace filemerge's .orig file with our resolve file
4433 4434 a = repo.wjoin(f)
4434 4435 try:
4435 4436 util.rename(a + ".resolve", scmutil.origpath(ui, repo, a))
4436 4437 except OSError as inst:
4437 4438 if inst.errno != errno.ENOENT:
4438 4439 raise
4439 4440
4440 4441 ms.commit()
4441 4442 ms.recordactions()
4442 4443
4443 4444 if not didwork and pats:
4444 4445 hint = None
4445 4446 if not any([p for p in pats if p.find(':') >= 0]):
4446 4447 pats = ['path:%s' % p for p in pats]
4447 4448 m = scmutil.match(wctx, pats, opts)
4448 4449 for f in ms:
4449 4450 if not m(f):
4450 4451 continue
4451 4452 flags = ''.join(['-%s ' % o[0] for o in flaglist
4452 4453 if opts.get(o)])
4453 4454 hint = _("(try: hg resolve %s%s)\n") % (
4454 4455 flags,
4455 4456 ' '.join(pats))
4456 4457 break
4457 4458 ui.warn(_("arguments do not match paths that need resolving\n"))
4458 4459 if hint:
4459 4460 ui.warn(hint)
4460 4461 elif ms.mergedriver and ms.mdstate() != 's':
4461 4462 # run conclude step when either a driver-resolved file is requested
4462 4463 # or there are no driver-resolved files
4463 4464 # we can't use 'ret' to determine whether any files are unresolved
4464 4465 # because we might not have tried to resolve some
4465 4466 if ((runconclude or not list(ms.driverresolved()))
4466 4467 and not list(ms.unresolved())):
4467 4468 proceed = mergemod.driverconclude(repo, ms, wctx)
4468 4469 ms.commit()
4469 4470 if not proceed:
4470 4471 return 1
4471 4472
4472 4473 # Nudge users into finishing an unfinished operation
4473 4474 unresolvedf = list(ms.unresolved())
4474 4475 driverresolvedf = list(ms.driverresolved())
4475 4476 if not unresolvedf and not driverresolvedf:
4476 4477 ui.status(_('(no more unresolved files)\n'))
4477 4478 cmdutil.checkafterresolved(repo)
4478 4479 elif not unresolvedf:
4479 4480 ui.status(_('(no more unresolved files -- '
4480 4481 'run "hg resolve --all" to conclude)\n'))
4481 4482
4482 4483 return ret
4483 4484
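# Illustrative sketch, not part of commands.py: collecting the files that
# still need merging from `hg resolve -l`, whose lines are "U <path>" or
# "R <path>" as described in the docstring above. The helper name is
# hypothetical.
def _example_unresolved_files():
    import subprocess
    out = subprocess.check_output(['hg', 'resolve', '-l'])
    unresolved = []
    for line in out.splitlines():
        status, _sep, path = line.partition(b' ')
        if status == b'U':
            unresolved.append(path)
    return unresolved
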
4484 4485 @command('revert',
4485 4486 [('a', 'all', None, _('revert all changes when no arguments given')),
4486 4487 ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
4487 4488 ('r', 'rev', '', _('revert to the specified revision'), _('REV')),
4488 4489 ('C', 'no-backup', None, _('do not save backup copies of files')),
4489 4490 ('i', 'interactive', None,
4490 4491 _('interactively select the changes (EXPERIMENTAL)')),
4491 4492 ] + walkopts + dryrunopts,
4492 4493 _('[OPTION]... [-r REV] [NAME]...'))
4493 4494 def revert(ui, repo, *pats, **opts):
4494 4495 """restore files to their checkout state
4495 4496
4496 4497 .. note::
4497 4498
4498 4499 To check out earlier revisions, you should use :hg:`update REV`.
4499 4500 To cancel an uncommitted merge (and lose your changes),
4500 4501 use :hg:`update --clean .`.
4501 4502
4502 4503 With no revision specified, revert the specified files or directories
4503 4504 to the contents they had in the parent of the working directory.
4504 4505 This restores the contents of files to an unmodified
4505 4506 state and unschedules adds, removes, copies, and renames. If the
4506 4507 working directory has two parents, you must explicitly specify a
4507 4508 revision.
4508 4509
4509 4510 Using the -r/--rev or -d/--date options, revert the given files or
4510 4511 directories to their states as of a specific revision. Because
4511 4512 revert does not change the working directory parents, this will
4512 4513 cause these files to appear modified. This can be helpful to "back
4513 4514 out" some or all of an earlier change. See :hg:`backout` for a
4514 4515 related method.
4515 4516
4516 4517 Modified files are saved with a .orig suffix before reverting.
4517 4518 To disable these backups, use --no-backup. It is possible to store
4518 4519 the backup files in a custom directory relative to the root of the
4519 4520 repository by setting the ``ui.origbackuppath`` configuration
4520 4521 option.
4521 4522
4522 4523 See :hg:`help dates` for a list of formats valid for -d/--date.
4523 4524
4524 4525 See :hg:`help backout` for a way to reverse the effect of an
4525 4526 earlier changeset.
4526 4527
4527 4528 Returns 0 on success.
4528 4529 """
4529 4530
4530 4531 if opts.get("date"):
4531 4532 if opts.get("rev"):
4532 4533 raise error.Abort(_("you can't specify a revision and a date"))
4533 4534 opts["rev"] = cmdutil.finddate(ui, repo, opts["date"])
4534 4535
4535 4536 parent, p2 = repo.dirstate.parents()
4536 4537 if not opts.get('rev') and p2 != nullid:
4537 4538 # revert after merge is a trap for new users (issue2915)
4538 4539 raise error.Abort(_('uncommitted merge with no revision specified'),
4539 4540 hint=_("use 'hg update' or see 'hg help revert'"))
4540 4541
4541 4542 ctx = scmutil.revsingle(repo, opts.get('rev'))
4542 4543
4543 4544 if (not (pats or opts.get('include') or opts.get('exclude') or
4544 4545 opts.get('all') or opts.get('interactive'))):
4545 4546 msg = _("no files or directories specified")
4546 4547 if p2 != nullid:
4547 4548 hint = _("uncommitted merge, use --all to discard all changes,"
4548 4549 " or 'hg update -C .' to abort the merge")
4549 4550 raise error.Abort(msg, hint=hint)
4550 4551 dirty = any(repo.status())
4551 4552 node = ctx.node()
4552 4553 if node != parent:
4553 4554 if dirty:
4554 4555 hint = _("uncommitted changes, use --all to discard all"
4555 4556 " changes, or 'hg update %s' to update") % ctx.rev()
4556 4557 else:
4557 4558 hint = _("use --all to revert all files,"
4558 4559 " or 'hg update %s' to update") % ctx.rev()
4559 4560 elif dirty:
4560 4561 hint = _("uncommitted changes, use --all to discard all changes")
4561 4562 else:
4562 4563 hint = _("use --all to revert all files")
4563 4564 raise error.Abort(msg, hint=hint)
4564 4565
4565 4566 return cmdutil.revert(ui, repo, ctx, (parent, p2), *pats, **opts)
4566 4567
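# Illustrative sketch, not part of commands.py: restoring a single file to
# its contents at a given revision without keeping a .orig backup, using
# the -r/--rev and -C/--no-backup options documented above. The helper name
# is hypothetical.
def _example_revert_file(path, rev):
    import subprocess
    return subprocess.call(['hg', 'revert', '-r', rev, '--no-backup', path])
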
4567 4568 @command('rollback', dryrunopts +
4568 4569 [('f', 'force', False, _('ignore safety measures'))])
4569 4570 def rollback(ui, repo, **opts):
4570 4571 """roll back the last transaction (DANGEROUS) (DEPRECATED)
4571 4572
4572 4573 Please use :hg:`commit --amend` instead of rollback to correct
4573 4574 mistakes in the last commit.
4574 4575
4575 4576 This command should be used with care. There is only one level of
4576 4577 rollback, and there is no way to undo a rollback. It will also
4577 4578 restore the dirstate at the time of the last transaction, losing
4578 4579 any dirstate changes since that time. This command does not alter
4579 4580 the working directory.
4580 4581
4581 4582 Transactions are used to encapsulate the effects of all commands
4582 4583 that create new changesets or propagate existing changesets into a
4583 4584 repository.
4584 4585
4585 4586 .. container:: verbose
4586 4587
4587 4588 For example, the following commands are transactional, and their
4588 4589 effects can be rolled back:
4589 4590
4590 4591 - commit
4591 4592 - import
4592 4593 - pull
4593 4594 - push (with this repository as the destination)
4594 4595 - unbundle
4595 4596
4596 4597 To avoid permanent data loss, rollback will refuse to roll back a
4597 4598 commit transaction if it isn't checked out. Use --force to
4598 4599 override this protection.
4599 4600
4600 4601 The rollback command can be entirely disabled by setting the
4601 4602 ``ui.rollback`` configuration setting to false. If you're here
4602 4603 because you want to use rollback and it's disabled, you can
4603 4604 re-enable the command by setting ``ui.rollback`` to true.
4604 4605
4605 4606 This command is not intended for use on public repositories. Once
4606 4607 changes are visible for pull by other users, rolling a transaction
4607 4608 back locally is ineffective (someone else may already have pulled
4608 4609 the changes). Furthermore, a race is possible with readers of the
4609 4610 repository; for example an in-progress pull from the repository
4610 4611 may fail if a rollback is performed.
4611 4612
4612 4613 Returns 0 on success, 1 if no rollback data is available.
4613 4614 """
4614 4615 if not ui.configbool('ui', 'rollback', True):
4615 4616 raise error.Abort(_('rollback is disabled because it is unsafe'),
4616 4617 hint=('see `hg help -v rollback` for information'))
4617 4618 return repo.rollback(dryrun=opts.get(r'dry_run'),
4618 4619 force=opts.get(r'force'))
4619 4620
4620 4621 @command('root', [])
4621 4622 def root(ui, repo):
4622 4623 """print the root (top) of the current working directory
4623 4624
4624 4625 Print the root directory of the current repository.
4625 4626
4626 4627 Returns 0 on success.
4627 4628 """
4628 4629 ui.write(repo.root + "\n")
4629 4630
4630 4631 @command('^serve',
4631 4632 [('A', 'accesslog', '', _('name of access log file to write to'),
4632 4633 _('FILE')),
4633 4634 ('d', 'daemon', None, _('run server in background')),
4634 4635 ('', 'daemon-postexec', [], _('used internally by daemon mode')),
4635 4636 ('E', 'errorlog', '', _('name of error log file to write to'), _('FILE')),
4636 4637 # use string type, then we can check if something was passed
4637 4638 ('p', 'port', '', _('port to listen on (default: 8000)'), _('PORT')),
4638 4639 ('a', 'address', '', _('address to listen on (default: all interfaces)'),
4639 4640 _('ADDR')),
4640 4641 ('', 'prefix', '', _('prefix path to serve from (default: server root)'),
4641 4642 _('PREFIX')),
4642 4643 ('n', 'name', '',
4643 4644 _('name to show in web pages (default: working directory)'), _('NAME')),
4644 4645 ('', 'web-conf', '',
4645 4646 _("name of the hgweb config file (see 'hg help hgweb')"), _('FILE')),
4646 4647 ('', 'webdir-conf', '', _('name of the hgweb config file (DEPRECATED)'),
4647 4648 _('FILE')),
4648 4649 ('', 'pid-file', '', _('name of file to write process ID to'), _('FILE')),
4649 4650 ('', 'stdio', None, _('for remote clients (ADVANCED)')),
4650 4651 ('', 'cmdserver', '', _('for remote clients (ADVANCED)'), _('MODE')),
4651 4652 ('t', 'templates', '', _('web templates to use'), _('TEMPLATE')),
4652 4653 ('', 'style', '', _('template style to use'), _('STYLE')),
4653 4654 ('6', 'ipv6', None, _('use IPv6 in addition to IPv4')),
4654 4655 ('', 'certificate', '', _('SSL certificate file'), _('FILE'))]
4655 4656 + subrepoopts,
4656 4657 _('[OPTION]...'),
4657 4658 optionalrepo=True)
4658 4659 def serve(ui, repo, **opts):
4659 4660 """start stand-alone webserver
4660 4661
4661 4662 Start a local HTTP repository browser and pull server. You can use
4662 4663 this for ad-hoc sharing and browsing of repositories. It is
4663 4664 recommended to use a real web server to serve a repository for
4664 4665 longer periods of time.
4665 4666
4666 4667 Please note that the server does not implement access control.
4667 4668 This means that, by default, anybody can read from the server and
4668 4669 nobody can write to it. Set the ``web.allow_push``
4669 4670 option to ``*`` to allow everybody to push to the server. You
4670 4671 should use a real web server if you need to authenticate users.
4671 4672
4672 4673 By default, the server logs accesses to stdout and errors to
4673 4674 stderr. Use the -A/--accesslog and -E/--errorlog options to log to
4674 4675 files.
4675 4676
4676 4677 To have the server choose a free port number to listen on, specify
4677 4678 a port number of 0; in this case, the server will print the port
4678 4679 number it uses.
4679 4680
4680 4681 Returns 0 on success.
4681 4682 """
4682 4683
4683 4684 opts = pycompat.byteskwargs(opts)
4684 4685 if opts["stdio"] and opts["cmdserver"]:
4685 4686 raise error.Abort(_("cannot use --stdio with --cmdserver"))
4686 4687
4687 4688 if opts["stdio"]:
4688 4689 if repo is None:
4689 4690 raise error.RepoError(_("there is no Mercurial repository here"
4690 4691 " (.hg not found)"))
4691 4692 s = sshserver.sshserver(ui, repo)
4692 4693 s.serve_forever()
4693 4694
4694 4695 service = server.createservice(ui, repo, opts)
4695 4696 return server.runservice(opts, initfn=service.init, runfn=service.run)
4696 4697
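# Illustrative sketch, not part of commands.py: starting `hg serve` in the
# background on a fixed port with a pid file, using the -d, -p and
# --pid-file options from the command table above. The helper name is
# hypothetical.
def _example_serve_background(port='8000', pidfile='hg-serve.pid'):
    import subprocess
    return subprocess.call(['hg', 'serve', '-d', '-p', port,
                            '--pid-file', pidfile])
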
4697 4698 @command('^status|st',
4698 4699 [('A', 'all', None, _('show status of all files')),
4699 4700 ('m', 'modified', None, _('show only modified files')),
4700 4701 ('a', 'added', None, _('show only added files')),
4701 4702 ('r', 'removed', None, _('show only removed files')),
4702 4703 ('d', 'deleted', None, _('show only deleted (but tracked) files')),
4703 4704 ('c', 'clean', None, _('show only files without changes')),
4704 4705 ('u', 'unknown', None, _('show only unknown (not tracked) files')),
4705 4706 ('i', 'ignored', None, _('show only ignored files')),
4706 4707 ('n', 'no-status', None, _('hide status prefix')),
4707 4708 ('C', 'copies', None, _('show source of copied files')),
4708 4709 ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
4709 4710 ('', 'rev', [], _('show difference from revision'), _('REV')),
4710 4711 ('', 'change', '', _('list the changed files of a revision'), _('REV')),
4711 4712 ] + walkopts + subrepoopts + formatteropts,
4712 4713 _('[OPTION]... [FILE]...'),
4713 4714 inferrepo=True)
4714 4715 def status(ui, repo, *pats, **opts):
4715 4716 """show changed files in the working directory
4716 4717
4717 4718 Show status of files in the repository. If names are given, only
4718 4719 files that match are shown. Files that are clean or ignored or
4719 4720 the source of a copy/move operation are not listed unless
4720 4721 -c/--clean, -i/--ignored, -C/--copies or -A/--all are given.
4721 4722 Unless options described with "show only ..." are given, the
4722 4723 options -mardu are used.
4723 4724
4724 4725 Option -q/--quiet hides untracked (unknown and ignored) files
4725 4726 unless explicitly requested with -u/--unknown or -i/--ignored.
4726 4727
4727 4728 .. note::
4728 4729
4729 4730 :hg:`status` may appear to disagree with diff if permissions have
4730 4731 changed or a merge has occurred. The standard diff format does
4731 4732 not report permission changes and diff only reports changes
4732 4733 relative to one merge parent.
4733 4734
4734 4735 If one revision is given, it is used as the base revision.
4735 4736 If two revisions are given, the differences between them are
4736 4737 shown. The --change option can also be used as a shortcut to list
4737 4738 the changed files of a revision from its first parent.
4738 4739
4739 4740 The codes used to show the status of files are::
4740 4741
4741 4742 M = modified
4742 4743 A = added
4743 4744 R = removed
4744 4745 C = clean
4745 4746 ! = missing (deleted by non-hg command, but still tracked)
4746 4747 ? = not tracked
4747 4748 I = ignored
4748 4749 = origin of the previous file (with --copies)
4749 4750
4750 4751 .. container:: verbose
4751 4752
4752 4753 Examples:
4753 4754
4754 4755 - show changes in the working directory relative to a
4755 4756 changeset::
4756 4757
4757 4758 hg status --rev 9353
4758 4759
4759 4760 - show changes in the working directory relative to the
4760 4761 current directory (see :hg:`help patterns` for more information)::
4761 4762
4762 4763 hg status re:
4763 4764
4764 4765 - show all changes including copies in an existing changeset::
4765 4766
4766 4767 hg status --copies --change 9353
4767 4768
4768 4769 - get a NUL separated list of added files, suitable for xargs::
4769 4770
4770 4771 hg status -an0
4771 4772
4772 4773 Returns 0 on success.
4773 4774 """
4774 4775
4775 4776 opts = pycompat.byteskwargs(opts)
4776 4777 revs = opts.get('rev')
4777 4778 change = opts.get('change')
4778 4779
4779 4780 if revs and change:
4780 4781 msg = _('cannot specify --rev and --change at the same time')
4781 4782 raise error.Abort(msg)
4782 4783 elif change:
4783 4784 node2 = scmutil.revsingle(repo, change, None).node()
4784 4785 node1 = repo[node2].p1().node()
4785 4786 else:
4786 4787 node1, node2 = scmutil.revpair(repo, revs)
4787 4788
4788 4789 if pats or ui.configbool('commands', 'status.relative'):
4789 4790 cwd = repo.getcwd()
4790 4791 else:
4791 4792 cwd = ''
4792 4793
4793 4794 if opts.get('print0'):
4794 4795 end = '\0'
4795 4796 else:
4796 4797 end = '\n'
4797 4798 copy = {}
4798 4799 states = 'modified added removed deleted unknown ignored clean'.split()
4799 4800 show = [k for k in states if opts.get(k)]
4800 4801 if opts.get('all'):
4801 4802 show += ui.quiet and (states[:4] + ['clean']) or states
4802 4803 if not show:
4803 4804 if ui.quiet:
4804 4805 show = states[:4]
4805 4806 else:
4806 4807 show = states[:5]
4807 4808
4808 4809 m = scmutil.match(repo[node2], pats, opts)
4809 4810 stat = repo.status(node1, node2, m,
4810 4811 'ignored' in show, 'clean' in show, 'unknown' in show,
4811 4812 opts.get('subrepos'))
4812 4813 changestates = zip(states, pycompat.iterbytestr('MAR!?IC'), stat)
4813 4814
4814 4815 if (opts.get('all') or opts.get('copies')
4815 4816 or ui.configbool('ui', 'statuscopies')) and not opts.get('no_status'):
4816 4817 copy = copies.pathcopies(repo[node1], repo[node2], m)
4817 4818
4818 4819 ui.pager('status')
4819 4820 fm = ui.formatter('status', opts)
4820 4821 fmt = '%s' + end
4821 4822 showchar = not opts.get('no_status')
4822 4823
4823 4824 for state, char, files in changestates:
4824 4825 if state in show:
4825 4826 label = 'status.' + state
4826 4827 for f in files:
4827 4828 fm.startitem()
4828 4829 fm.condwrite(showchar, 'status', '%s ', char, label=label)
4829 4830 fm.write('path', fmt, repo.pathto(f, cwd), label=label)
4830 4831 if f in copy:
4831 4832 fm.write("copy", ' %s' + end, repo.pathto(copy[f], cwd),
4832 4833 label='status.copied')
4833 4834 fm.end()
4834 4835
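# Illustrative sketch, not part of commands.py: consuming the NUL-separated
# list of added files produced by `hg status -an0`, the xargs-friendly form
# shown in the docstring above. The helper name is hypothetical.
def _example_added_files():
    import subprocess
    out = subprocess.check_output(['hg', 'status', '-an0'])
    # the output is NUL-terminated, so drop the trailing empty entry
    return [f for f in out.split(b'\0') if f]
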
4835 4836 @command('^summary|sum',
4836 4837 [('', 'remote', None, _('check for push and pull'))], '[--remote]')
4837 4838 def summary(ui, repo, **opts):
4838 4839 """summarize working directory state
4839 4840
4840 4841 This generates a brief summary of the working directory state,
4841 4842 including parents, branch, commit status, phase and available updates.
4842 4843
4843 4844 With the --remote option, this will check the default paths for
4844 4845 incoming and outgoing changes. This can be time-consuming.
4845 4846
4846 4847 Returns 0 on success.
4847 4848 """
4848 4849
4849 4850 opts = pycompat.byteskwargs(opts)
4850 4851 ui.pager('summary')
4851 4852 ctx = repo[None]
4852 4853 parents = ctx.parents()
4853 4854 pnode = parents[0].node()
4854 4855 marks = []
4855 4856
4856 4857 ms = None
4857 4858 try:
4858 4859 ms = mergemod.mergestate.read(repo)
4859 4860 except error.UnsupportedMergeRecords as e:
4860 4861 s = ' '.join(e.recordtypes)
4861 4862 ui.warn(
4862 4863 _('warning: merge state has unsupported record types: %s\n') % s)
4863 4864 unresolved = 0
4864 4865 else:
4865 4866 unresolved = [f for f in ms if ms[f] == 'u']
4866 4867
4867 4868 for p in parents:
4868 4869 # label with log.changeset (instead of log.parent) since this
4869 4870 # shows a working directory parent *changeset*:
4870 4871 # i18n: column positioning for "hg summary"
4871 4872 ui.write(_('parent: %d:%s ') % (p.rev(), p),
4872 4873 label=cmdutil._changesetlabels(p))
4873 4874 ui.write(' '.join(p.tags()), label='log.tag')
4874 4875 if p.bookmarks():
4875 4876 marks.extend(p.bookmarks())
4876 4877 if p.rev() == -1:
4877 4878 if not len(repo):
4878 4879 ui.write(_(' (empty repository)'))
4879 4880 else:
4880 4881 ui.write(_(' (no revision checked out)'))
4881 4882 if p.obsolete():
4882 4883 ui.write(_(' (obsolete)'))
4883 4884 if p.troubled():
4884 4885 ui.write(' ('
4885 4886 + ', '.join(ui.label(trouble, 'trouble.%s' % trouble)
4886 4887 for trouble in p.troubles())
4887 4888 + ')')
4888 4889 ui.write('\n')
4889 4890 if p.description():
4890 4891 ui.status(' ' + p.description().splitlines()[0].strip() + '\n',
4891 4892 label='log.summary')
4892 4893
4893 4894 branch = ctx.branch()
4894 4895 bheads = repo.branchheads(branch)
4895 4896 # i18n: column positioning for "hg summary"
4896 4897 m = _('branch: %s\n') % branch
4897 4898 if branch != 'default':
4898 4899 ui.write(m, label='log.branch')
4899 4900 else:
4900 4901 ui.status(m, label='log.branch')
4901 4902
4902 4903 if marks:
4903 4904 active = repo._activebookmark
4904 4905 # i18n: column positioning for "hg summary"
4905 4906 ui.write(_('bookmarks:'), label='log.bookmark')
4906 4907 if active is not None:
4907 4908 if active in marks:
4908 4909 ui.write(' *' + active, label=activebookmarklabel)
4909 4910 marks.remove(active)
4910 4911 else:
4911 4912 ui.write(' [%s]' % active, label=activebookmarklabel)
4912 4913 for m in marks:
4913 4914 ui.write(' ' + m, label='log.bookmark')
4914 4915 ui.write('\n', label='log.bookmark')
4915 4916
4916 4917 status = repo.status(unknown=True)
4917 4918
4918 4919 c = repo.dirstate.copies()
4919 4920 copied, renamed = [], []
4920 4921 for d, s in c.iteritems():
4921 4922 if s in status.removed:
4922 4923 status.removed.remove(s)
4923 4924 renamed.append(d)
4924 4925 else:
4925 4926 copied.append(d)
4926 4927 if d in status.added:
4927 4928 status.added.remove(d)
4928 4929
4929 4930 subs = [s for s in ctx.substate if ctx.sub(s).dirty()]
4930 4931
4931 4932 labels = [(ui.label(_('%d modified'), 'status.modified'), status.modified),
4932 4933 (ui.label(_('%d added'), 'status.added'), status.added),
4933 4934 (ui.label(_('%d removed'), 'status.removed'), status.removed),
4934 4935 (ui.label(_('%d renamed'), 'status.copied'), renamed),
4935 4936 (ui.label(_('%d copied'), 'status.copied'), copied),
4936 4937 (ui.label(_('%d deleted'), 'status.deleted'), status.deleted),
4937 4938 (ui.label(_('%d unknown'), 'status.unknown'), status.unknown),
4938 4939 (ui.label(_('%d unresolved'), 'resolve.unresolved'), unresolved),
4939 4940 (ui.label(_('%d subrepos'), 'status.modified'), subs)]
4940 4941 t = []
4941 4942 for l, s in labels:
4942 4943 if s:
4943 4944 t.append(l % len(s))
4944 4945
4945 4946 t = ', '.join(t)
4946 4947 cleanworkdir = False
4947 4948
4948 4949 if repo.vfs.exists('graftstate'):
4949 4950 t += _(' (graft in progress)')
4950 4951 if repo.vfs.exists('updatestate'):
4951 4952 t += _(' (interrupted update)')
4952 4953 elif len(parents) > 1:
4953 4954 t += _(' (merge)')
4954 4955 elif branch != parents[0].branch():
4955 4956 t += _(' (new branch)')
4956 4957 elif (parents[0].closesbranch() and
4957 4958 pnode in repo.branchheads(branch, closed=True)):
4958 4959 t += _(' (head closed)')
4959 4960 elif not (status.modified or status.added or status.removed or renamed or
4960 4961 copied or subs):
4961 4962 t += _(' (clean)')
4962 4963 cleanworkdir = True
4963 4964 elif pnode not in bheads:
4964 4965 t += _(' (new branch head)')
4965 4966
4966 4967 if parents:
4967 4968 pendingphase = max(p.phase() for p in parents)
4968 4969 else:
4969 4970 pendingphase = phases.public
4970 4971
4971 4972 if pendingphase > phases.newcommitphase(ui):
4972 4973 t += ' (%s)' % phases.phasenames[pendingphase]
4973 4974
4974 4975 if cleanworkdir:
4975 4976 # i18n: column positioning for "hg summary"
4976 4977 ui.status(_('commit: %s\n') % t.strip())
4977 4978 else:
4978 4979 # i18n: column positioning for "hg summary"
4979 4980 ui.write(_('commit: %s\n') % t.strip())
4980 4981
4981 4982 # all ancestors of branch heads - all ancestors of parent = new csets
4982 4983 new = len(repo.changelog.findmissing([pctx.node() for pctx in parents],
4983 4984 bheads))
4984 4985
4985 4986 if new == 0:
4986 4987 # i18n: column positioning for "hg summary"
4987 4988 ui.status(_('update: (current)\n'))
4988 4989 elif pnode not in bheads:
4989 4990 # i18n: column positioning for "hg summary"
4990 4991 ui.write(_('update: %d new changesets (update)\n') % new)
4991 4992 else:
4992 4993 # i18n: column positioning for "hg summary"
4993 4994 ui.write(_('update: %d new changesets, %d branch heads (merge)\n') %
4994 4995 (new, len(bheads)))
4995 4996
4996 4997 t = []
4997 4998 draft = len(repo.revs('draft()'))
4998 4999 if draft:
4999 5000 t.append(_('%d draft') % draft)
5000 5001 secret = len(repo.revs('secret()'))
5001 5002 if secret:
5002 5003 t.append(_('%d secret') % secret)
5003 5004
5004 5005 if draft or secret:
5005 5006 ui.status(_('phases: %s\n') % ', '.join(t))
5006 5007
5007 5008 if obsolete.isenabled(repo, obsolete.createmarkersopt):
5008 5009 for trouble in ("unstable", "divergent", "bumped"):
5009 5010 numtrouble = len(repo.revs(trouble + "()"))
5010 5011 # We write all the possibilities to ease translation
5011 5012 troublemsg = {
5012 5013 "unstable": _("unstable: %d changesets"),
5013 5014 "divergent": _("divergent: %d changesets"),
5014 5015 "bumped": _("bumped: %d changesets"),
5015 5016 }
5016 5017 if numtrouble > 0:
5017 5018 ui.status(troublemsg[trouble] % numtrouble + "\n")
5018 5019
5019 5020 cmdutil.summaryhooks(ui, repo)
5020 5021
5021 5022 if opts.get('remote'):
5022 5023 needsincoming, needsoutgoing = True, True
5023 5024 else:
5024 5025 needsincoming, needsoutgoing = False, False
5025 5026 for i, o in cmdutil.summaryremotehooks(ui, repo, opts, None):
5026 5027 if i:
5027 5028 needsincoming = True
5028 5029 if o:
5029 5030 needsoutgoing = True
5030 5031 if not needsincoming and not needsoutgoing:
5031 5032 return
5032 5033
5033 5034 def getincoming():
5034 5035 source, branches = hg.parseurl(ui.expandpath('default'))
5035 5036 sbranch = branches[0]
5036 5037 try:
5037 5038 other = hg.peer(repo, {}, source)
5038 5039 except error.RepoError:
5039 5040 if opts.get('remote'):
5040 5041 raise
5041 5042 return source, sbranch, None, None, None
5042 5043 revs, checkout = hg.addbranchrevs(repo, other, branches, None)
5043 5044 if revs:
5044 5045 revs = [other.lookup(rev) for rev in revs]
5045 5046 ui.debug('comparing with %s\n' % util.hidepassword(source))
5046 5047 repo.ui.pushbuffer()
5047 5048 commoninc = discovery.findcommonincoming(repo, other, heads=revs)
5048 5049 repo.ui.popbuffer()
5049 5050 return source, sbranch, other, commoninc, commoninc[1]
5050 5051
5051 5052 if needsincoming:
5052 5053 source, sbranch, sother, commoninc, incoming = getincoming()
5053 5054 else:
5054 5055 source = sbranch = sother = commoninc = incoming = None
5055 5056
5056 5057 def getoutgoing():
5057 5058 dest, branches = hg.parseurl(ui.expandpath('default-push', 'default'))
5058 5059 dbranch = branches[0]
5059 5060 revs, checkout = hg.addbranchrevs(repo, repo, branches, None)
5060 5061 if source != dest:
5061 5062 try:
5062 5063 dother = hg.peer(repo, {}, dest)
5063 5064 except error.RepoError:
5064 5065 if opts.get('remote'):
5065 5066 raise
5066 5067 return dest, dbranch, None, None
5067 5068 ui.debug('comparing with %s\n' % util.hidepassword(dest))
5068 5069 elif sother is None:
5069 5070 # there is no explicit destination peer, but source one is invalid
5070 5071 return dest, dbranch, None, None
5071 5072 else:
5072 5073 dother = sother
5073 5074 if (source != dest or (sbranch is not None and sbranch != dbranch)):
5074 5075 common = None
5075 5076 else:
5076 5077 common = commoninc
5077 5078 if revs:
5078 5079 revs = [repo.lookup(rev) for rev in revs]
5079 5080 repo.ui.pushbuffer()
5080 5081 outgoing = discovery.findcommonoutgoing(repo, dother, onlyheads=revs,
5081 5082 commoninc=common)
5082 5083 repo.ui.popbuffer()
5083 5084 return dest, dbranch, dother, outgoing
5084 5085
5085 5086 if needsoutgoing:
5086 5087 dest, dbranch, dother, outgoing = getoutgoing()
5087 5088 else:
5088 5089 dest = dbranch = dother = outgoing = None
5089 5090
5090 5091 if opts.get('remote'):
5091 5092 t = []
5092 5093 if incoming:
5093 5094 t.append(_('1 or more incoming'))
5094 5095 o = outgoing.missing
5095 5096 if o:
5096 5097 t.append(_('%d outgoing') % len(o))
5097 5098 other = dother or sother
5098 5099 if 'bookmarks' in other.listkeys('namespaces'):
5099 5100 counts = bookmarks.summary(repo, other)
5100 5101 if counts[0] > 0:
5101 5102 t.append(_('%d incoming bookmarks') % counts[0])
5102 5103 if counts[1] > 0:
5103 5104 t.append(_('%d outgoing bookmarks') % counts[1])
5104 5105
5105 5106 if t:
5106 5107 # i18n: column positioning for "hg summary"
5107 5108 ui.write(_('remote: %s\n') % (', '.join(t)))
5108 5109 else:
5109 5110 # i18n: column positioning for "hg summary"
5110 5111 ui.status(_('remote: (synced)\n'))
5111 5112
5112 5113 cmdutil.summaryremotehooks(ui, repo, opts,
5113 5114 ((source, sbranch, sother, commoninc),
5114 5115 (dest, dbranch, dother, outgoing)))
5115 5116
5116 5117 @command('tag',
5117 5118 [('f', 'force', None, _('force tag')),
5118 5119 ('l', 'local', None, _('make the tag local')),
5119 5120 ('r', 'rev', '', _('revision to tag'), _('REV')),
5120 5121 ('', 'remove', None, _('remove a tag')),
5121 5122 # -l/--local is already there, commitopts cannot be used
5122 5123 ('e', 'edit', None, _('invoke editor on commit messages')),
5123 5124 ('m', 'message', '', _('use text as commit message'), _('TEXT')),
5124 5125 ] + commitopts2,
5125 5126 _('[-f] [-l] [-m TEXT] [-d DATE] [-u USER] [-r REV] NAME...'))
5126 5127 def tag(ui, repo, name1, *names, **opts):
5127 5128 """add one or more tags for the current or given revision
5128 5129
5129 5130 Name a particular revision using <name>.
5130 5131
5131 5132 Tags are used to name particular revisions of the repository and are
5132 5133 very useful to compare different revisions, to go back to significant
5133 5134 earlier versions or to mark branch points as releases, etc. Changing
5134 5135 an existing tag is normally disallowed; use -f/--force to override.
5135 5136
5136 5137 If no revision is given, the parent of the working directory is
5137 5138 used.
5138 5139
5139 5140 To facilitate version control, distribution, and merging of tags,
5140 5141 they are stored as a file named ".hgtags" which is managed similarly
5141 5142 to other project files and can be hand-edited if necessary. This
5142 5143 also means that tagging creates a new commit. The file
5143 5144 ".hg/localtags" is used for local tags (not shared among
5144 5145 repositories).
5145 5146
5146 5147 Tag commits are usually made at the head of a branch. If the parent
5147 5148 of the working directory is not a branch head, :hg:`tag` aborts; use
5148 5149 -f/--force to force the tag commit to be based on a non-head
5149 5150 changeset.
5150 5151
5151 5152 See :hg:`help dates` for a list of formats valid for -d/--date.
5152 5153
5153 5154 Since tag names have priority over branch names during revision
5154 5155 lookup, using an existing branch name as a tag name is discouraged.
5155 5156
5156 5157 Returns 0 on success.
5157 5158 """
5158 5159 opts = pycompat.byteskwargs(opts)
5159 5160 wlock = lock = None
5160 5161 try:
5161 5162 wlock = repo.wlock()
5162 5163 lock = repo.lock()
5163 5164 rev_ = "."
5164 5165 names = [t.strip() for t in (name1,) + names]
5165 5166 if len(names) != len(set(names)):
5166 5167 raise error.Abort(_('tag names must be unique'))
5167 5168 for n in names:
5168 5169 scmutil.checknewlabel(repo, n, 'tag')
5169 5170 if not n:
5170 5171 raise error.Abort(_('tag names cannot consist entirely of '
5171 5172 'whitespace'))
5172 5173 if opts.get('rev') and opts.get('remove'):
5173 5174 raise error.Abort(_("--rev and --remove are incompatible"))
5174 5175 if opts.get('rev'):
5175 5176 rev_ = opts['rev']
5176 5177 message = opts.get('message')
5177 5178 if opts.get('remove'):
5178 5179 if opts.get('local'):
5179 5180 expectedtype = 'local'
5180 5181 else:
5181 5182 expectedtype = 'global'
5182 5183
5183 5184 for n in names:
5184 5185 if not repo.tagtype(n):
5185 5186 raise error.Abort(_("tag '%s' does not exist") % n)
5186 5187 if repo.tagtype(n) != expectedtype:
5187 5188 if expectedtype == 'global':
5188 5189 raise error.Abort(_("tag '%s' is not a global tag") % n)
5189 5190 else:
5190 5191 raise error.Abort(_("tag '%s' is not a local tag") % n)
5191 5192 rev_ = 'null'
5192 5193 if not message:
5193 5194 # we don't translate commit messages
5194 5195 message = 'Removed tag %s' % ', '.join(names)
5195 5196 elif not opts.get('force'):
5196 5197 for n in names:
5197 5198 if n in repo.tags():
5198 5199 raise error.Abort(_("tag '%s' already exists "
5199 5200 "(use -f to force)") % n)
5200 5201 if not opts.get('local'):
5201 5202 p1, p2 = repo.dirstate.parents()
5202 5203 if p2 != nullid:
5203 5204 raise error.Abort(_('uncommitted merge'))
5204 5205 bheads = repo.branchheads()
5205 5206 if not opts.get('force') and bheads and p1 not in bheads:
5206 5207 raise error.Abort(_('working directory is not at a branch head '
5207 5208 '(use -f to force)'))
5208 5209 r = scmutil.revsingle(repo, rev_).node()
5209 5210
5210 5211 if not message:
5211 5212 # we don't translate commit messages
5212 5213 message = ('Added tag %s for changeset %s' %
5213 5214 (', '.join(names), short(r)))
5214 5215
5215 5216 date = opts.get('date')
5216 5217 if date:
5217 5218 date = util.parsedate(date)
5218 5219
5219 5220 if opts.get('remove'):
5220 5221 editform = 'tag.remove'
5221 5222 else:
5222 5223 editform = 'tag.add'
5223 5224 editor = cmdutil.getcommiteditor(editform=editform,
5224 5225 **pycompat.strkwargs(opts))
5225 5226
5226 5227 # don't allow tagging the null rev
5227 5228 if (not opts.get('remove') and
5228 5229 scmutil.revsingle(repo, rev_).rev() == nullrev):
5229 5230 raise error.Abort(_("cannot tag null revision"))
5230 5231
5231 5232 tagsmod.tag(repo, names, r, message, opts.get('local'),
5232 5233 opts.get('user'), date, editor=editor)
5233 5234 finally:
5234 5235 release(lock, wlock)
5235 5236
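# Illustrative sketch, not part of commands.py: tagging a specific revision
# with a commit message, using the -r and -m options documented above. The
# helper name is hypothetical.
def _example_tag(name, rev='.', message=None):
    import subprocess
    cmd = ['hg', 'tag', '-r', rev]
    if message:
        cmd += ['-m', message]
    cmd.append(name)
    return subprocess.call(cmd)
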
5236 5237 @command('tags', formatteropts, '')
5237 5238 def tags(ui, repo, **opts):
5238 5239 """list repository tags
5239 5240
5240 5241 This lists both regular and local tags. When the -v/--verbose
5241 5242 switch is used, a third column "local" is printed for local tags.
5242 5243 When the -q/--quiet switch is used, only the tag name is printed.
5243 5244
5244 5245 Returns 0 on success.
5245 5246 """
5246 5247
5247 5248 opts = pycompat.byteskwargs(opts)
5248 5249 ui.pager('tags')
5249 5250 fm = ui.formatter('tags', opts)
5250 5251 hexfunc = fm.hexfunc
5251 5252 tagtype = ""
5252 5253
5253 5254 for t, n in reversed(repo.tagslist()):
5254 5255 hn = hexfunc(n)
5255 5256 label = 'tags.normal'
5256 5257 tagtype = ''
5257 5258 if repo.tagtype(t) == 'local':
5258 5259 label = 'tags.local'
5259 5260 tagtype = 'local'
5260 5261
5261 5262 fm.startitem()
5262 5263 fm.write('tag', '%s', t, label=label)
5263 5264 fmt = " " * (30 - encoding.colwidth(t)) + ' %5d:%s'
5264 5265 fm.condwrite(not ui.quiet, 'rev node', fmt,
5265 5266 repo.changelog.rev(n), hn, label=label)
5266 5267 fm.condwrite(ui.verbose and tagtype, 'type', ' %s',
5267 5268 tagtype, label=label)
5268 5269 fm.plain('\n')
5269 5270 fm.end()
5270 5271
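# Illustrative sketch, not part of commands.py: with -q/--quiet only the
# tag names are printed (see the docstring above), which keeps the output
# trivial to consume from a script. The helper name is hypothetical.
def _example_tag_names():
    import subprocess
    out = subprocess.check_output(['hg', 'tags', '-q'])
    return [t for t in out.splitlines() if t]
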
5271 5272 @command('tip',
5272 5273 [('p', 'patch', None, _('show patch')),
5273 5274 ('g', 'git', None, _('use git extended diff format')),
5274 5275 ] + templateopts,
5275 5276 _('[-p] [-g]'))
5276 5277 def tip(ui, repo, **opts):
5277 5278 """show the tip revision (DEPRECATED)
5278 5279
5279 5280 The tip revision (usually just called the tip) is the changeset
5280 5281 most recently added to the repository (and therefore the most
5281 5282 recently changed head).
5282 5283
5283 5284 If you have just made a commit, that commit will be the tip. If
5284 5285 you have just pulled changes from another repository, the tip of
5285 5286 that repository becomes the current tip. The "tip" tag is special
5286 5287 and cannot be renamed or assigned to a different changeset.
5287 5288
5288 5289 This command is deprecated; please use :hg:`heads` instead.
5289 5290
5290 5291 Returns 0 on success.
5291 5292 """
5292 5293 opts = pycompat.byteskwargs(opts)
5293 5294 displayer = cmdutil.show_changeset(ui, repo, opts)
5294 5295 displayer.show(repo['tip'])
5295 5296 displayer.close()
5296 5297
5297 5298 @command('unbundle',
5298 5299 [('u', 'update', None,
5299 5300 _('update to new branch head if changesets were unbundled'))],
5300 5301 _('[-u] FILE...'))
5301 5302 def unbundle(ui, repo, fname1, *fnames, **opts):
5302 5303 """apply one or more bundle files
5303 5304
5304 5305 Apply one or more bundle files generated by :hg:`bundle`.
5305 5306
5306 5307 Returns 0 on success, 1 if an update has unresolved files.
5307 5308 """
5308 5309 fnames = (fname1,) + fnames
5309 5310
5310 5311 with repo.lock():
5311 5312 for fname in fnames:
5312 5313 f = hg.openpath(ui, fname)
5313 5314 gen = exchange.readbundle(ui, f, fname)
5314 5315 if isinstance(gen, bundle2.unbundle20):
5315 5316 tr = repo.transaction('unbundle')
5316 5317 try:
5317 5318 op = bundle2.applybundle(repo, gen, tr, source='unbundle',
5318 5319 url='bundle:' + fname)
5319 5320 tr.close()
5320 5321 except error.BundleUnknownFeatureError as exc:
5321 5322 raise error.Abort(_('%s: unknown bundle feature, %s')
5322 5323 % (fname, exc),
5323 5324 hint=_("see https://mercurial-scm.org/"
5324 5325 "wiki/BundleFeature for more "
5325 5326 "information"))
5326 5327 finally:
5327 5328 if tr:
5328 5329 tr.release()
5329 5330 changes = [r.get('return', 0)
5330 5331 for r in op.records['changegroup']]
5331 5332 modheads = changegroup.combineresults(changes)
5332 5333 elif isinstance(gen, streamclone.streamcloneapplier):
5333 5334 raise error.Abort(
5334 5335 _('packed bundles cannot be applied with '
5335 5336 '"hg unbundle"'),
5336 5337 hint=_('use "hg debugapplystreamclonebundle"'))
5337 5338 else:
5338 5339 modheads = gen.apply(repo, 'unbundle', 'bundle:' + fname)
5339 5340
5340 5341 return postincoming(ui, repo, modheads, opts.get(r'update'), None, None)
5341 5342
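# Illustrative sketch, not part of commands.py: applying one or more bundle
# files and updating to the new branch head, mirroring the documented CLI
# form `hg unbundle -u FILE...`. The helper name is hypothetical.
def _example_unbundle(*bundles):
    import subprocess
    return subprocess.call(['hg', 'unbundle', '-u'] + list(bundles))
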
5342 5343 @command('^update|up|checkout|co',
5343 5344 [('C', 'clean', None, _('discard uncommitted changes (no backup)')),
5344 5345 ('c', 'check', None, _('require clean working directory')),
5345 5346 ('m', 'merge', None, _('merge uncommitted changes')),
5346 5347 ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
5347 5348 ('r', 'rev', '', _('revision'), _('REV'))
5348 5349 ] + mergetoolopts,
5349 5350 _('[-C|-c|-m] [-d DATE] [[-r] REV]'))
5350 5351 def update(ui, repo, node=None, rev=None, clean=False, date=None, check=False,
5351 5352 merge=None, tool=None):
5352 5353 """update working directory (or switch revisions)
5353 5354
5354 5355 Update the repository's working directory to the specified
5355 5356 changeset. If no changeset is specified, update to the tip of the
5356 5357 current named branch and move the active bookmark (see :hg:`help
5357 5358 bookmarks`).
5358 5359
5359 5360 Update sets the working directory's parent revision to the specified
5360 5361 changeset (see :hg:`help parents`).
5361 5362
5362 5363 If the changeset is not a descendant or ancestor of the working
5363 5364 directory's parent and there are uncommitted changes, the update is
5364 5365 aborted. With the -c/--check option, the working directory is checked
5365 5366 for uncommitted changes; if none are found, the working directory is
5366 5367 updated to the specified changeset.
5367 5368
5368 5369 .. container:: verbose
5369 5370
5370 5371 The -C/--clean, -c/--check, and -m/--merge options control what
5371 5372 happens if the working directory contains uncommitted changes.
5372 5373 At most one of them can be specified.
5373 5374
5374 5375 1. If no option is specified, and if
5375 5376 the requested changeset is an ancestor or descendant of
5376 5377 the working directory's parent, the uncommitted changes
5377 5378 are merged into the requested changeset and the merged
5378 5379 result is left uncommitted. If the requested changeset is
5379 5380 not an ancestor or descendant (that is, it is on another
5380 5381 branch), the update is aborted and the uncommitted changes
5381 5382 are preserved.
5382 5383
5383 5384 2. With the -m/--merge option, the update is allowed even if the
5384 5385 requested changeset is not an ancestor or descendant of
5385 5386 the working directory's parent.
5386 5387
5387 5388 3. With the -c/--check option, the update is aborted and the
5388 5389 uncommitted changes are preserved.
5389 5390
5390 5391 4. With the -C/--clean option, uncommitted changes are discarded and
5391 5392 the working directory is updated to the requested changeset.
5392 5393
5393 5394 To cancel an uncommitted merge (and lose your changes), use
5394 5395 :hg:`update --clean .`.
5395 5396
5396 5397 Use null as the changeset to remove the working directory (like
5397 5398 :hg:`clone -U`).
5398 5399
5399 5400 If you want to revert just one file to an older revision, use
5400 5401 :hg:`revert [-r REV] NAME`.
5401 5402
5402 5403 See :hg:`help dates` for a list of formats valid for -d/--date.
5403 5404
5404 5405 Returns 0 on success, 1 if there are unresolved files.
5405 5406 """
5406 5407 if rev and node:
5407 5408 raise error.Abort(_("please specify just one revision"))
5408 5409
5409 5410 if ui.configbool('commands', 'update.requiredest'):
5410 5411 if not node and not rev and not date:
5411 5412 raise error.Abort(_('you must specify a destination'),
5412 5413 hint=_('for example: hg update ".::"'))
5413 5414
5414 5415 if rev is None or rev == '':
5415 5416 rev = node
5416 5417
5417 5418 if date and rev is not None:
5418 5419 raise error.Abort(_("you can't specify a revision and a date"))
5419 5420
5420 5421 if len([x for x in (clean, check, merge) if x]) > 1:
5421 5422 raise error.Abort(_("can only specify one of -C/--clean, -c/--check, "
5422 5423 "or -m/--merge"))
5423 5424
5424 5425 updatecheck = None
5425 5426 if check:
5426 5427 updatecheck = 'abort'
5427 5428 elif merge:
5428 5429 updatecheck = 'none'
5429 5430
5430 5431 with repo.wlock():
5431 5432 cmdutil.clearunfinished(repo)
5432 5433
5433 5434 if date:
5434 5435 rev = cmdutil.finddate(ui, repo, date)
5435 5436
5436 5437 # if we defined a bookmark, we have to remember the original name
5437 5438 brev = rev
5438 5439 rev = scmutil.revsingle(repo, rev, rev).rev()
5439 5440
5440 5441 repo.ui.setconfig('ui', 'forcemerge', tool, 'update')
5441 5442
5442 5443 return hg.updatetotally(ui, repo, rev, brev, clean=clean,
5443 5444 updatecheck=updatecheck)
5444 5445
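# Illustrative sketch, not part of commands.py: a wrapper that enforces the
# same rule as the code above (-C/--clean, -c/--check and -m/--merge are
# mutually exclusive) before shelling out to `hg update`. The helper name
# is hypothetical.
def _example_update(rev, clean=False, check=False, merge=False):
    import subprocess
    if sum(map(bool, (clean, check, merge))) > 1:
        raise ValueError('only one of clean/check/merge may be set')
    cmd = ['hg', 'update', '-r', rev]
    if clean:
        cmd.append('--clean')
    elif check:
        cmd.append('--check')
    elif merge:
        cmd.append('--merge')
    # returns 0 on success, 1 if there are unresolved files
    return subprocess.call(cmd)
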
5445 5446 @command('verify', [])
5446 5447 def verify(ui, repo):
5447 5448 """verify the integrity of the repository
5448 5449
5449 5450 Verify the integrity of the current repository.
5450 5451
5451 5452 This will perform an extensive check of the repository's
5452 5453 integrity, validating the hashes and checksums of each entry in
5453 5454 the changelog, manifest, and tracked files, as well as the
5454 5455 integrity of their crosslinks and indices.
5455 5456
5456 5457 Please see https://mercurial-scm.org/wiki/RepositoryCorruption
5457 5458 for more information about recovery from corruption of the
5458 5459 repository.
5459 5460
5460 5461 Returns 0 on success, 1 if errors are encountered.
5461 5462 """
5462 5463 return hg.verify(repo)
5463 5464
5464 5465 @command('version', [] + formatteropts, norepo=True)
5465 5466 def version_(ui, **opts):
5466 5467 """output version and copyright information"""
5467 5468 opts = pycompat.byteskwargs(opts)
5468 5469 if ui.verbose:
5469 5470 ui.pager('version')
5470 5471 fm = ui.formatter("version", opts)
5471 5472 fm.startitem()
5472 5473 fm.write("ver", _("Mercurial Distributed SCM (version %s)\n"),
5473 5474 util.version())
5474 5475 license = _(
5475 5476 "(see https://mercurial-scm.org for more information)\n"
5476 5477 "\nCopyright (C) 2005-2017 Matt Mackall and others\n"
5477 5478 "This is free software; see the source for copying conditions. "
5478 5479 "There is NO\nwarranty; "
5479 5480 "not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n"
5480 5481 )
5481 5482 if not ui.quiet:
5482 5483 fm.plain(license)
5483 5484
5484 5485 if ui.verbose:
5485 5486 fm.plain(_("\nEnabled extensions:\n\n"))
5486 5487 # format names and versions into columns
5487 5488 names = []
5488 5489 vers = []
5489 5490 isinternals = []
5490 5491 for name, module in extensions.extensions():
5491 5492 names.append(name)
5492 5493 vers.append(extensions.moduleversion(module) or None)
5493 5494 isinternals.append(extensions.ismoduleinternal(module))
5494 5495 fn = fm.nested("extensions")
5495 5496 if names:
5496 5497 namefmt = " %%-%ds " % max(len(n) for n in names)
5497 5498 places = [_("external"), _("internal")]
5498 5499 for n, v, p in zip(names, vers, isinternals):
5499 5500 fn.startitem()
5500 5501 fn.condwrite(ui.verbose, "name", namefmt, n)
5501 5502 if ui.verbose:
5502 5503 fn.plain("%s " % places[p])
5503 5504 fn.data(bundled=p)
5504 5505 fn.condwrite(ui.verbose and v, "ver", "%s", v)
5505 5506 if ui.verbose:
5506 5507 fn.plain("\n")
5507 5508 fn.end()
5508 5509 fm.end()
5509 5510
5510 5511 def loadcmdtable(ui, name, cmdtable):
5511 5512 """Load command functions from specified cmdtable
5512 5513 """
5513 5514 overrides = [cmd for cmd in cmdtable if cmd in table]
5514 5515 if overrides:
5515 5516 ui.warn(_("extension '%s' overrides commands: %s\n")
5516 5517 % (name, " ".join(overrides)))
5517 5518 table.update(cmdtable)