bundle2: record changegroup data in 'op.records' (API)...
Martin von Zweigbergk
r33030:3e102a8d default
@@ -1,1778 +1,1784 b''
1 1 # bundle2.py - generic container format to transmit arbitrary data.
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """Handling of the new bundle2 format
8 8
9 9 The goal of bundle2 is to act as an atomic packet to transmit a set of
10 10 payloads in an application-agnostic way. It consists of a sequence of "parts"
11 11 that will be handed to and processed by the application layer.
12 12
13 13
14 14 General format architecture
15 15 ===========================
16 16
17 17 The format is structured as follows
18 18
19 19 - magic string
20 20 - stream level parameters
21 21 - payload parts (any number)
22 22 - end of stream marker.
23 23
24 24 The binary format
25 25 ============================
26 26
27 27 All numbers are unsigned and big-endian.
28 28
29 29 stream level parameters
30 30 ------------------------
31 31
32 32 The binary format is as follows
33 33
34 34 :params size: int32
35 35
36 36 The total number of bytes used by the parameters
37 37
38 38 :params value: arbitrary number of bytes
39 39
40 40 A blob of `params size` containing the serialized version of all stream level
41 41 parameters.
42 42
43 43 The blob contains a space separated list of parameters. Parameters with value
44 44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45 45
46 46 Empty names are obviously forbidden.
47 47
48 48 Names MUST start with a letter. If the first letter is lower case, the
49 49 parameter is advisory and can be safely ignored. However, when the first
50 50 letter is capital, the parameter is mandatory and the bundling process MUST
51 51 stop if it is not able to process it.
52 52
53 53 Stream parameters use a simple textual format for two main reasons:
54 54
55 55 - Stream level parameters should remain simple and we want to discourage any
56 56 crazy usage.
57 57 - Textual data allows easy human inspection of a bundle2 header in case of
58 58 trouble.
59 59
60 60 Any application-level options MUST go into a bundle2 part instead.
61 61
62 62 Payload part
63 63 ------------------------
64 64
65 65 The binary format is as follows
66 66
67 67 :header size: int32
68 68
69 69 The total number of bytes used by the part header. When the header is empty
70 70 (size = 0), this is interpreted as the end of stream marker.
71 71
72 72 :header:
73 73
74 74 The header defines how to interpret the part. It contains two pieces of
75 75 data: the part type and the part parameters.
76 76
77 77 The part type is used to route to an application-level handler that can
78 78 interpret the payload.
79 79
80 80 Part parameters are passed to the application-level handler. They are
81 81 meant to convey information that will help the application-level object
82 82 interpret the part payload.
83 83
84 84 The binary format of the header is as follows
85 85
86 86 :typesize: (one byte)
87 87
88 88 :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)
89 89
90 90 :partid: A 32-bit integer (unique in the bundle) that can be used to refer
91 91 to this part.
92 92
93 93 :parameters:
94 94
95 95 Part parameters may have arbitrary content; the binary structure is::
96 96
97 97 <mandatory-count><advisory-count><param-sizes><param-data>
98 98
99 99 :mandatory-count: 1 byte, number of mandatory parameters
100 100
101 101 :advisory-count: 1 byte, number of advisory parameters
102 102
103 103 :param-sizes:
104 104
105 105 N pairs of bytes, where N is the total number of parameters. Each
106 106 pair contains (<size-of-key>, <size-of-value>) for one parameter.
107 107
108 108 :param-data:
109 109
110 110 A blob of bytes from which each parameter key and value can be
111 111 retrieved using the list of size pairs stored in the previous
112 112 field.
113 113
114 114 Mandatory parameters come first, then the advisory ones.
115 115
116 116 Each parameter's key MUST be unique within the part.
117 117
118 118 :payload:
119 119
120 120 payload is a series of `<chunksize><chunkdata>`.
121 121
122 122 `chunksize` is an int32, `chunkdata` are plain bytes (as many as
123 123 `chunksize` says). The payload part is concluded by a zero-size chunk.
124 124
125 125 The current implementation always produces either zero or one chunk.
126 126 This is an implementation limitation that will ultimately be lifted.
127 127
128 128 `chunksize` can be negative to trigger special case processing. A chunksize
129 129 of -1 signals a stream interruption (see `flaginterrupt` below).
130 130
131 131 Bundle processing
132 132 ============================
133 133
134 134 Each part is processed in order using a "part handler". Handlers are
135 135 registered for a certain part type.
136 136
137 137 The matching of a part to its handler is case insensitive. The case of the
138 138 part type is used to know if a part is mandatory or advisory. If the part type
139 139 contains any uppercase char, it is considered mandatory. When no handler is
140 140 known for a Mandatory part, the process is aborted and an exception is raised.
141 141 If the part is advisory and no handler is known, the part is ignored. When the
142 142 process is aborted, the full bundle is still read from the stream to keep the
143 143 channel usable. But none of the parts read after an abort are processed. In the
144 144 future, dropping the stream may become an option for channels we do not care to
145 145 preserve.
146 146 """
147 147
148 148 from __future__ import absolute_import
149 149
150 150 import errno
151 151 import re
152 152 import string
153 153 import struct
154 154 import sys
155 155
156 156 from .i18n import _
157 157 from . import (
158 158 changegroup,
159 159 error,
160 160 obsolete,
161 161 pushkey,
162 162 pycompat,
163 163 tags,
164 164 url,
165 165 util,
166 166 )
167 167
168 168 urlerr = util.urlerr
169 169 urlreq = util.urlreq
170 170
171 171 _pack = struct.pack
172 172 _unpack = struct.unpack
173 173
174 174 _fstreamparamsize = '>i'
175 175 _fpartheadersize = '>i'
176 176 _fparttypesize = '>B'
177 177 _fpartid = '>I'
178 178 _fpayloadsize = '>i'
179 179 _fpartparamcount = '>BB'
180 180
181 181 preferedchunksize = 4096
182 182
183 183 _parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')
184 184
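# Illustrative sketch (not part of the original module): building the stream
# level parameter block described in the module docstring. Parameters are
# urlquoted, space separated, and prefixed by an int32 size. The helper name
# is hypothetical.
def _exampleencodestreamparams(params):
    """encode a list of (name, value) pairs into <int32 size><blob> bytes"""
    blocks = []
    for name, value in params:
        if value is None:
            blocks.append(urlreq.quote(name))
        else:
            blocks.append('%s=%s' % (urlreq.quote(name), urlreq.quote(value)))
    blob = ' '.join(blocks)
    return _pack(_fstreamparamsize, len(blob)) + blob
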
185 185 def outdebug(ui, message):
186 186 """debug regarding output stream (bundling)"""
187 187 if ui.configbool('devel', 'bundle2.debug', False):
188 188 ui.debug('bundle2-output: %s\n' % message)
189 189
190 190 def indebug(ui, message):
191 191 """debug on input stream (unbundling)"""
192 192 if ui.configbool('devel', 'bundle2.debug', False):
193 193 ui.debug('bundle2-input: %s\n' % message)
194 194
195 195 def validateparttype(parttype):
196 196 """raise ValueError if a parttype contains invalid character"""
197 197 if _parttypeforbidden.search(parttype):
198 198 raise ValueError(parttype)
199 199
200 200 def _makefpartparamsizes(nbparams):
201 201 """return a struct format to read part parameter sizes
202 202
203 203 The number of parameters is variable, so we need to build that format
204 204 dynamically.
205 205 """
206 206 return '>'+('BB'*nbparams)
207 207
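# For example, a part carrying two parameters yields the format '>BBBB': one
# (key-size, value-size) byte pair per parameter, unpacked in a single call::
#
#   sizes = _unpack(_makefpartparamsizes(2), data)  # four one-byte values
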
208 208 parthandlermapping = {}
209 209
210 210 def parthandler(parttype, params=()):
211 211 """decorator that register a function as a bundle2 part handler
212 212
213 213 eg::
214 214
215 215 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
216 216 def myparttypehandler(...):
217 217 '''process a part of type "my part".'''
218 218 ...
219 219 """
220 220 validateparttype(parttype)
221 221 def _decorator(func):
222 222 lparttype = parttype.lower() # enforce lower case matching.
223 223 assert lparttype not in parthandlermapping
224 224 parthandlermapping[lparttype] = func
225 225 func.params = frozenset(params)
226 226 return func
227 227 return _decorator
228 228
229 229 class unbundlerecords(object):
230 230 """keep record of what happens during and unbundle
231 231
232 232 New records are added using `records.add('cat', obj)`, where 'cat' is a
233 233 category of record and obj is an arbitrary object.
234 234
235 235 `records['cat']` will return all entries of this category 'cat'.
236 236
237 237 Iterating on the object itself will yield `('category', obj)` tuples
238 238 for all entries.
239 239
240 240 All iteration happens in chronological order.
241 241 """
242 242
243 243 def __init__(self):
244 244 self._categories = {}
245 245 self._sequences = []
246 246 self._replies = {}
247 247
248 248 def add(self, category, entry, inreplyto=None):
249 249 """add a new record of a given category.
250 250
251 251 The entry can then be retrieved in the list returned by
252 252 self['category']."""
253 253 self._categories.setdefault(category, []).append(entry)
254 254 self._sequences.append((category, entry))
255 255 if inreplyto is not None:
256 256 self.getreplies(inreplyto).add(category, entry)
257 257
258 258 def getreplies(self, partid):
259 259 """get the records that are replies to a specific part"""
260 260 return self._replies.setdefault(partid, unbundlerecords())
261 261
262 262 def __getitem__(self, cat):
263 263 return tuple(self._categories.get(cat, ()))
264 264
265 265 def __iter__(self):
266 266 return iter(self._sequences)
267 267
268 268 def __len__(self):
269 269 return len(self._sequences)
270 270
271 271 def __nonzero__(self):
272 272 return bool(self._sequences)
273 273
274 274 __bool__ = __nonzero__
275 275
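# Illustrative sketch (not part of the original module): typical use of
# unbundlerecords, following its docstring.
def _examplerecords():
    records = unbundlerecords()
    records.add('changegroup', {'return': 1})
    records.add('pushkey', {'namespace': 'phases'}, inreplyto=0)
    assert records['changegroup'] == ({'return': 1},)
    assert len(records.getreplies(0)) == 1
    # chronological iteration: [('changegroup', ...), ('pushkey', ...)]
    return list(records)
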
276 276 class bundleoperation(object):
277 277 """an object that represents a single bundling process
278 278
279 279 Its purpose is to carry unbundle-related objects and states.
280 280
281 281 A new object should be created at the beginning of each bundle processing.
282 282 The object is to be returned by the processing function.
283 283
284 284 The object has very little content now; it will ultimately contain:
285 285 * an access to the repo the bundle is applied to,
286 286 * a ui object,
287 287 * a way to retrieve a transaction to add changes to the repo,
288 288 * a way to record the result of processing each part,
289 289 * a way to construct a bundle response when applicable.
290 290 """
291 291
292 292 def __init__(self, repo, transactiongetter, captureoutput=True):
293 293 self.repo = repo
294 294 self.ui = repo.ui
295 295 self.records = unbundlerecords()
296 296 self.gettransaction = transactiongetter
297 297 self.reply = None
298 298 self.captureoutput = captureoutput
299 299
300 300 class TransactionUnavailable(RuntimeError):
301 301 pass
302 302
303 303 def _notransaction():
304 304 """default method to get a transaction while processing a bundle
305 305
306 306 Raise an exception to highlight the fact that no transaction was expected
307 307 to be created"""
308 308 raise TransactionUnavailable()
309 309
310 310 def applybundle(repo, unbundler, tr, source=None, url=None, op=None):
311 311 # transform me into unbundler.apply() as soon as the freeze is lifted
312 312 tr.hookargs['bundle2'] = '1'
313 313 if source is not None and 'source' not in tr.hookargs:
314 314 tr.hookargs['source'] = source
315 315 if url is not None and 'url' not in tr.hookargs:
316 316 tr.hookargs['url'] = url
317 317 return processbundle(repo, unbundler, lambda: tr, op=op)
318 318
319 319 def processbundle(repo, unbundler, transactiongetter=None, op=None):
320 320 """This function process a bundle, apply effect to/from a repo
321 321
322 322 It iterates over each part then searches for and uses the proper handling
323 323 code to process the part. Parts are processed in order.
324 324
325 325 An unknown Mandatory part will abort the process.
326 326
327 327 It is temporarily possible to provide a prebuilt bundleoperation to the
328 328 function. This is used to ensure output is properly propagated in case of
329 329 an error during the unbundling. This output capturing part will likely be
330 330 reworked and this ability will probably go away in the process.
331 331 """
332 332 if op is None:
333 333 if transactiongetter is None:
334 334 transactiongetter = _notransaction
335 335 op = bundleoperation(repo, transactiongetter)
336 336 # todo:
337 337 # - replace this with an init function soon.
338 338 # - exception catching
339 339 unbundler.params # load stream level params
340 340 if repo.ui.debugflag:
341 341 msg = ['bundle2-input-bundle:']
342 342 if unbundler.params:
343 343 msg.append(' %i params' % len(unbundler.params))
344 344 if op.gettransaction is None or op.gettransaction is _notransaction:
345 345 msg.append(' no-transaction')
346 346 else:
347 347 msg.append(' with-transaction')
348 348 msg.append('\n')
349 349 repo.ui.debug(''.join(msg))
350 350 iterparts = enumerate(unbundler.iterparts())
351 351 part = None
352 352 nbpart = 0
353 353 try:
354 354 for nbpart, part in iterparts:
355 355 _processpart(op, part)
356 356 except Exception as exc:
357 357 # Any exceptions seeking to the end of the bundle at this point are
358 358 # almost certainly related to the underlying stream being bad.
359 359 # And, chances are that the exception we're handling is related to
360 360 # getting in that bad state. So, we swallow the seeking error and
361 361 # re-raise the original error.
362 362 seekerror = False
363 363 try:
364 364 for nbpart, part in iterparts:
365 365 # consume the bundle content
366 366 part.seek(0, 2)
367 367 except Exception:
368 368 seekerror = True
369 369
370 370 # Small hack to let caller code distinguish exceptions from bundle2
371 371 # processing from processing the old format. This is mostly
372 372 # needed to handle different return codes to unbundle according to the
373 373 # type of bundle. We should probably clean up or drop this return code
374 374 # craziness in a future version.
375 375 exc.duringunbundle2 = True
376 376 salvaged = []
377 377 replycaps = None
378 378 if op.reply is not None:
379 379 salvaged = op.reply.salvageoutput()
380 380 replycaps = op.reply.capabilities
381 381 exc._replycaps = replycaps
382 382 exc._bundle2salvagedoutput = salvaged
383 383
384 384 # Re-raising from a variable loses the original stack. So only use
385 385 # that form if we need to.
386 386 if seekerror:
387 387 raise exc
388 388 else:
389 389 raise
390 390 finally:
391 391 repo.ui.debug('bundle2-input-bundle: %i parts total\n' % nbpart)
392 392
393 393 return op
394 394
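# Illustrative sketch (not part of the original module): how a caller might
# feed a bundle2 (HG20) stream to this machinery. 'repo' and 'fp' are assumed
# to come from the surrounding application.
def _exampleapply(repo, fp):
    unbundler = getunbundler(repo.ui, fp)
    with repo.transaction('unbundle') as tr:
        op = applybundle(repo, unbundler, tr, source='example')
    # each changegroup part leaves one record behind (see handlechangegroup)
    return op.records['changegroup']
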
395 395 def _processpart(op, part):
396 396 """process a single part from a bundle
397 397
398 398 The part is guaranteed to have been fully consumed when the function exits
399 399 (even if an exception is raised)."""
400 400 status = 'unknown' # used by debug output
401 401 hardabort = False
402 402 try:
403 403 try:
404 404 handler = parthandlermapping.get(part.type)
405 405 if handler is None:
406 406 status = 'unsupported-type'
407 407 raise error.BundleUnknownFeatureError(parttype=part.type)
408 408 indebug(op.ui, 'found a handler for part %r' % part.type)
409 409 unknownparams = part.mandatorykeys - handler.params
410 410 if unknownparams:
411 411 unknownparams = list(unknownparams)
412 412 unknownparams.sort()
413 413 status = 'unsupported-params (%s)' % unknownparams
414 414 raise error.BundleUnknownFeatureError(parttype=part.type,
415 415 params=unknownparams)
416 416 status = 'supported'
417 417 except error.BundleUnknownFeatureError as exc:
418 418 if part.mandatory: # mandatory parts
419 419 raise
420 420 indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
421 421 return # skip to part processing
422 422 finally:
423 423 if op.ui.debugflag:
424 424 msg = ['bundle2-input-part: "%s"' % part.type]
425 425 if not part.mandatory:
426 426 msg.append(' (advisory)')
427 427 nbmp = len(part.mandatorykeys)
428 428 nbap = len(part.params) - nbmp
429 429 if nbmp or nbap:
430 430 msg.append(' (params:')
431 431 if nbmp:
432 432 msg.append(' %i mandatory' % nbmp)
433 433 if nbap:
434 434 msg.append(' %i advisory' % nbap)
435 435 msg.append(')')
436 436 msg.append(' %s\n' % status)
437 437 op.ui.debug(''.join(msg))
438 438
439 439 # handler is called outside the above try block so that we don't
440 440 # risk catching KeyErrors from anything other than the
441 441 # parthandlermapping lookup (any KeyError raised by handler()
442 442 # itself represents a defect of a different variety).
443 443 output = None
444 444 if op.captureoutput and op.reply is not None:
445 445 op.ui.pushbuffer(error=True, subproc=True)
446 446 output = ''
447 447 try:
448 448 handler(op, part)
449 449 finally:
450 450 if output is not None:
451 451 output = op.ui.popbuffer()
452 452 if output:
453 453 outpart = op.reply.newpart('output', data=output,
454 454 mandatory=False)
455 455 outpart.addparam('in-reply-to', str(part.id), mandatory=False)
456 456 # If exiting or interrupted, do not attempt to seek the stream in the
457 457 # finally block below. This makes abort faster.
458 458 except (SystemExit, KeyboardInterrupt):
459 459 hardabort = True
460 460 raise
461 461 finally:
462 462 # consume the part content to not corrupt the stream.
463 463 if not hardabort:
464 464 part.seek(0, 2)
465 465
466 466
467 467 def decodecaps(blob):
468 468 """decode a bundle2 caps bytes blob into a dictionary
469 469
470 470 The blob is a list of capabilities (one per line)
471 471 Capabilities may have values using a line of the form::
472 472
473 473 capability=value1,value2,value3
474 474
475 475 The values are always a list."""
476 476 caps = {}
477 477 for line in blob.splitlines():
478 478 if not line:
479 479 continue
480 480 if '=' not in line:
481 481 key, vals = line, ()
482 482 else:
483 483 key, vals = line.split('=', 1)
484 484 vals = vals.split(',')
485 485 key = urlreq.unquote(key)
486 486 vals = [urlreq.unquote(v) for v in vals]
487 487 caps[key] = vals
488 488 return caps
489 489
490 490 def encodecaps(caps):
491 491 """encode a bundle2 caps dictionary into a bytes blob"""
492 492 chunks = []
493 493 for ca in sorted(caps):
494 494 vals = caps[ca]
495 495 ca = urlreq.quote(ca)
496 496 vals = [urlreq.quote(v) for v in vals]
497 497 if vals:
498 498 ca = "%s=%s" % (ca, ','.join(vals))
499 499 chunks.append(ca)
500 500 return '\n'.join(chunks)
501 501
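# Illustrative sketch (not part of the original module): the two helpers
# above round-trip a capabilities blob.
def _examplecaps():
    blob = 'HG20\nchangegroup=01,02'
    caps = decodecaps(blob)  # {'HG20': [], 'changegroup': ['01', '02']}
    assert caps['changegroup'] == ['01', '02']
    assert encodecaps(caps) == blob  # keys are sorted, values rejoined
    return caps
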
502 502 bundletypes = {
503 503 "": ("", 'UN'), # only when using unbundle on ssh and old http servers
504 504 # since the unification ssh accepts a header but there
505 505 # is no capability signaling it.
506 506 "HG20": (), # special-cased below
507 507 "HG10UN": ("HG10UN", 'UN'),
508 508 "HG10BZ": ("HG10", 'BZ'),
509 509 "HG10GZ": ("HG10GZ", 'GZ'),
510 510 }
511 511
512 512 # hgweb uses this list to communicate its preferred type
513 513 bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']
514 514
515 515 class bundle20(object):
516 516 """represent an outgoing bundle2 container
517 517
518 518 Use the `addparam` method to add stream level parameters and `newpart` to
519 519 populate it. Then call `getchunks` to retrieve all the binary chunks of
520 520 data that compose the bundle2 container."""
521 521
522 522 _magicstring = 'HG20'
523 523
524 524 def __init__(self, ui, capabilities=()):
525 525 self.ui = ui
526 526 self._params = []
527 527 self._parts = []
528 528 self.capabilities = dict(capabilities)
529 529 self._compengine = util.compengines.forbundletype('UN')
530 530 self._compopts = None
531 531
532 532 def setcompression(self, alg, compopts=None):
533 533 """setup core part compression to <alg>"""
534 534 if alg in (None, 'UN'):
535 535 return
536 536 assert not any(n.lower() == 'compression' for n, v in self._params)
537 537 self.addparam('Compression', alg)
538 538 self._compengine = util.compengines.forbundletype(alg)
539 539 self._compopts = compopts
540 540
541 541 @property
542 542 def nbparts(self):
543 543 """total number of parts added to the bundler"""
544 544 return len(self._parts)
545 545
546 546 # methods used to define the bundle2 content
547 547 def addparam(self, name, value=None):
548 548 """add a stream level parameter"""
549 549 if not name:
550 550 raise ValueError('empty parameter name')
551 551 if name[0] not in string.letters:
552 552 raise ValueError('non letter first character: %r' % name)
553 553 self._params.append((name, value))
554 554
555 555 def addpart(self, part):
556 556 """add a new part to the bundle2 container
557 557
558 558 Parts contain the actual applicative payload."""
559 559 assert part.id is None
560 560 part.id = len(self._parts) # very cheap counter
561 561 self._parts.append(part)
562 562
563 563 def newpart(self, typeid, *args, **kwargs):
564 564 """create a new part and add it to the containers
565 565
566 566 As the part is directly added to the containers. For now, this means
567 567 that any failure to properly initialize the part after calling
568 568 ``newpart`` should result in a failure of the whole bundling process.
569 569
570 570 You can still fall back to manually create and add if you need better
571 571 control."""
572 572 part = bundlepart(typeid, *args, **kwargs)
573 573 self.addpart(part)
574 574 return part
575 575
576 576 # methods used to generate the bundle2 stream
577 577 def getchunks(self):
578 578 if self.ui.debugflag:
579 579 msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
580 580 if self._params:
581 581 msg.append(' (%i params)' % len(self._params))
582 582 msg.append(' %i parts total\n' % len(self._parts))
583 583 self.ui.debug(''.join(msg))
584 584 outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
585 585 yield self._magicstring
586 586 param = self._paramchunk()
587 587 outdebug(self.ui, 'bundle parameter: %s' % param)
588 588 yield _pack(_fstreamparamsize, len(param))
589 589 if param:
590 590 yield param
591 591 for chunk in self._compengine.compressstream(self._getcorechunk(),
592 592 self._compopts):
593 593 yield chunk
594 594
595 595 def _paramchunk(self):
596 596 """return a encoded version of all stream parameters"""
597 597 blocks = []
598 598 for par, value in self._params:
599 599 par = urlreq.quote(par)
600 600 if value is not None:
601 601 value = urlreq.quote(value)
602 602 par = '%s=%s' % (par, value)
603 603 blocks.append(par)
604 604 return ' '.join(blocks)
605 605
606 606 def _getcorechunk(self):
607 607 """yield chunk for the core part of the bundle
608 608
609 609 (all but headers and parameters)"""
610 610 outdebug(self.ui, 'start of parts')
611 611 for part in self._parts:
612 612 outdebug(self.ui, 'bundle part: "%s"' % part.type)
613 613 for chunk in part.getchunks(ui=self.ui):
614 614 yield chunk
615 615 outdebug(self.ui, 'end of bundle')
616 616 yield _pack(_fpartheadersize, 0)
617 617
618 618
619 619 def salvageoutput(self):
620 620 """return a list with a copy of all output parts in the bundle
621 621
622 622 This is meant to be used during error handling to make sure we preserve
623 623 server output"""
624 624 salvaged = []
625 625 for part in self._parts:
626 626 if part.type.startswith('output'):
627 627 salvaged.append(part.copy())
628 628 return salvaged
629 629
630 630
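# Illustrative sketch (not part of the original module): assembling a tiny
# bundle2 container. A real 'ui' object is assumed to be available.
def _examplebundle20(ui):
    bundler = bundle20(ui)
    part = bundler.newpart('output', data='hello', mandatory=False)
    part.addparam('note', 'example', mandatory=False)
    # magic string, parameter block, one part, then the end of stream marker
    return ''.join(bundler.getchunks())
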
631 631 class unpackermixin(object):
632 632 """A mixin to extract bytes and struct data from a stream"""
633 633
634 634 def __init__(self, fp):
635 635 self._fp = fp
636 636
637 637 def _unpack(self, format):
638 638 """unpack this struct format from the stream
639 639
640 640 This method is meant for internal usage by the bundle2 protocol only.
641 641 They directly manipulate the low level stream including bundle2 level
642 642 instruction.
643 643
644 644 Do not use it to implement higher-level logic or methods."""
645 645 data = self._readexact(struct.calcsize(format))
646 646 return _unpack(format, data)
647 647
648 648 def _readexact(self, size):
649 649 """read exactly <size> bytes from the stream
650 650
651 651 This method is meant for internal usage by the bundle2 protocol only.
652 652 It directly manipulates the low level stream, including bundle2 level
653 653 instructions.
654 654
655 655 Do not use it to implement higher-level logic or methods."""
656 656 return changegroup.readexactly(self._fp, size)
657 657
658 658 def getunbundler(ui, fp, magicstring=None):
659 659 """return a valid unbundler object for a given magicstring"""
660 660 if magicstring is None:
661 661 magicstring = changegroup.readexactly(fp, 4)
662 662 magic, version = magicstring[0:2], magicstring[2:4]
663 663 if magic != 'HG':
664 664 raise error.Abort(_('not a Mercurial bundle'))
665 665 unbundlerclass = formatmap.get(version)
666 666 if unbundlerclass is None:
667 667 raise error.Abort(_('unknown bundle version %s') % version)
668 668 unbundler = unbundlerclass(ui, fp)
669 669 indebug(ui, 'start processing of %s stream' % magicstring)
670 670 return unbundler
671 671
672 672 class unbundle20(unpackermixin):
673 673 """interpret a bundle2 stream
674 674
675 675 This class is fed with a binary stream and yields parts through its
676 676 `iterparts` method.
677 677
678 678 _magicstring = 'HG20'
679 679
680 680 def __init__(self, ui, fp):
681 681 """If header is specified, we do not read it out of the stream."""
682 682 self.ui = ui
683 683 self._compengine = util.compengines.forbundletype('UN')
684 684 self._compressed = None
685 685 super(unbundle20, self).__init__(fp)
686 686
687 687 @util.propertycache
688 688 def params(self):
689 689 """dictionary of stream level parameters"""
690 690 indebug(self.ui, 'reading bundle2 stream parameters')
691 691 params = {}
692 692 paramssize = self._unpack(_fstreamparamsize)[0]
693 693 if paramssize < 0:
694 694 raise error.BundleValueError('negative bundle param size: %i'
695 695 % paramssize)
696 696 if paramssize:
697 697 params = self._readexact(paramssize)
698 698 params = self._processallparams(params)
699 699 return params
700 700
701 701 def _processallparams(self, paramsblock):
702 702 """"""
703 703 params = util.sortdict()
704 704 for p in paramsblock.split(' '):
705 705 p = p.split('=', 1)
706 706 p = [urlreq.unquote(i) for i in p]
707 707 if len(p) < 2:
708 708 p.append(None)
709 709 self._processparam(*p)
710 710 params[p[0]] = p[1]
711 711 return params
712 712
713 713
714 714 def _processparam(self, name, value):
715 715 """process a parameter, applying its effect if needed
716 716
717 717 Parameters starting with a lower case letter are advisory and will be
718 718 ignored when unknown. Those starting with an upper case letter are
719 719 mandatory; this function raises BundleUnknownFeatureError when unknown.
720 720
721 721 Note: no options are currently supported. Any input will either be
722 722 ignored or fail.
723 723 """
724 724 if not name:
725 725 raise ValueError('empty parameter name')
726 726 if name[0] not in string.letters:
727 727 raise ValueError('non letter first character: %r' % name)
728 728 try:
729 729 handler = b2streamparamsmap[name.lower()]
730 730 except KeyError:
731 731 if name[0].islower():
732 732 indebug(self.ui, "ignoring unknown parameter %r" % name)
733 733 else:
734 734 raise error.BundleUnknownFeatureError(params=(name,))
735 735 else:
736 736 handler(self, name, value)
737 737
738 738 def _forwardchunks(self):
739 739 """utility to transfer a bundle2 as binary
740 740
741 741 This is made necessary by the fact that the 'getbundle' command over
742 742 'ssh' has no way to know when the reply ends, relying on the bundle to
743 743 be interpreted to find its end. This is terrible and we are sorry, but
744 744 we needed to move forward to get general delta enabled.
745 745 """
746 746 yield self._magicstring
747 747 assert 'params' not in vars(self)
748 748 paramssize = self._unpack(_fstreamparamsize)[0]
749 749 if paramssize < 0:
750 750 raise error.BundleValueError('negative bundle param size: %i'
751 751 % paramssize)
752 752 yield _pack(_fstreamparamsize, paramssize)
753 753 if paramssize:
754 754 params = self._readexact(paramssize)
755 755 self._processallparams(params)
756 756 yield params
757 757 assert self._compengine.bundletype == 'UN'
758 758 # From there, payload might need to be decompressed
759 759 self._fp = self._compengine.decompressorreader(self._fp)
760 760 emptycount = 0
761 761 while emptycount < 2:
762 762 # so we can brainlessly loop
763 763 assert _fpartheadersize == _fpayloadsize
764 764 size = self._unpack(_fpartheadersize)[0]
765 765 yield _pack(_fpartheadersize, size)
766 766 if size:
767 767 emptycount = 0
768 768 else:
769 769 emptycount += 1
770 770 continue
771 771 if size == flaginterrupt:
772 772 continue
773 773 elif size < 0:
774 774 raise error.BundleValueError('negative chunk size: %i' % size)
775 775 yield self._readexact(size)
776 776
777 777
778 778 def iterparts(self):
779 779 """yield all parts contained in the stream"""
780 780 # make sure params have been loaded
781 781 self.params
782 782 # From there, payload needs to be decompressed
783 783 self._fp = self._compengine.decompressorreader(self._fp)
784 784 indebug(self.ui, 'start extraction of bundle2 parts')
785 785 headerblock = self._readpartheader()
786 786 while headerblock is not None:
787 787 part = unbundlepart(self.ui, headerblock, self._fp)
788 788 yield part
789 789 part.seek(0, 2)
790 790 headerblock = self._readpartheader()
791 791 indebug(self.ui, 'end of bundle2 stream')
792 792
793 793 def _readpartheader(self):
794 794 """reads a part header size and return the bytes blob
795 795
796 796 returns None if empty"""
797 797 headersize = self._unpack(_fpartheadersize)[0]
798 798 if headersize < 0:
799 799 raise error.BundleValueError('negative part header size: %i'
800 800 % headersize)
801 801 indebug(self.ui, 'part header size: %i' % headersize)
802 802 if headersize:
803 803 return self._readexact(headersize)
804 804 return None
805 805
806 806 def compressed(self):
807 807 self.params # load params
808 808 return self._compressed
809 809
810 810 def close(self):
811 811 """close underlying file"""
812 812 if util.safehasattr(self._fp, 'close'):
813 813 return self._fp.close()
814 814
815 815 formatmap = {'20': unbundle20}
816 816
817 817 b2streamparamsmap = {}
818 818
819 819 def b2streamparamhandler(name):
820 820 """register a handler for a stream level parameter"""
821 821 def decorator(func):
822 822 assert name not in formatmap
823 823 b2streamparamsmap[name] = func
824 824 return func
825 825 return decorator
826 826
827 827 @b2streamparamhandler('compression')
828 828 def processcompression(unbundler, param, value):
829 829 """read compression parameter and install payload decompression"""
830 830 if value not in util.compengines.supportedbundletypes:
831 831 raise error.BundleUnknownFeatureError(params=(param,),
832 832 values=(value,))
833 833 unbundler._compengine = util.compengines.forbundletype(value)
834 834 if value is not None:
835 835 unbundler._compressed = True
836 836
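# Illustrative sketch (not part of the original module): registering a
# handler for a hypothetical stream level parameter. The lower case name
# makes it advisory, so receivers that do not know it simply ignore it.
@b2streamparamhandler('exampleparam')
def _processexampleparam(unbundler, param, value):
    indebug(unbundler.ui, 'example stream parameter: %s=%s' % (param, value))
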
837 837 class bundlepart(object):
838 838 """A bundle2 part contains application level payload
839 839
840 840 The part `type` is used to route the part to the application level
841 841 handler.
842 842
843 843 The part payload is contained in ``part.data``. It could be raw bytes or a
844 844 generator of byte chunks.
845 845
846 846 You can add parameters to the part using the ``addparam`` method.
847 847 Parameters can be either mandatory (default) or advisory. Remote side
848 848 should be able to safely ignore the advisory ones.
849 849
850 850 Both data and parameters cannot be modified after the generation has begun.
851 851 """
852 852
853 853 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
854 854 data='', mandatory=True):
855 855 validateparttype(parttype)
856 856 self.id = None
857 857 self.type = parttype
858 858 self._data = data
859 859 self._mandatoryparams = list(mandatoryparams)
860 860 self._advisoryparams = list(advisoryparams)
861 861 # checking for duplicated entries
862 862 self._seenparams = set()
863 863 for pname, __ in self._mandatoryparams + self._advisoryparams:
864 864 if pname in self._seenparams:
865 865 raise error.ProgrammingError('duplicated params: %s' % pname)
866 866 self._seenparams.add(pname)
867 867 # status of the part's generation:
868 868 # - None: not started,
869 869 # - False: currently being generated,
870 870 # - True: generation done.
871 871 self._generated = None
872 872 self.mandatory = mandatory
873 873
874 874 def __repr__(self):
875 875 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
876 876 return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
877 877 % (cls, id(self), self.id, self.type, self.mandatory))
878 878
879 879 def copy(self):
880 880 """return a copy of the part
881 881
882 882 The new part has the very same content but no partid assigned yet.
883 883 Parts with generated data cannot be copied."""
884 884 assert not util.safehasattr(self.data, 'next')
885 885 return self.__class__(self.type, self._mandatoryparams,
886 886 self._advisoryparams, self._data, self.mandatory)
887 887
888 888 # methods used to define the part content
889 889 @property
890 890 def data(self):
891 891 return self._data
892 892
893 893 @data.setter
894 894 def data(self, data):
895 895 if self._generated is not None:
896 896 raise error.ReadOnlyPartError('part is being generated')
897 897 self._data = data
898 898
899 899 @property
900 900 def mandatoryparams(self):
901 901 # make it an immutable tuple to force people through ``addparam``
902 902 return tuple(self._mandatoryparams)
903 903
904 904 @property
905 905 def advisoryparams(self):
906 906 # make it an immutable tuple to force people through ``addparam``
907 907 return tuple(self._advisoryparams)
908 908
909 909 def addparam(self, name, value='', mandatory=True):
910 910 """add a parameter to the part
911 911
912 912 If 'mandatory' is set to True, the remote handler must claim support
913 913 for this parameter or the unbundling will be aborted.
914 914
915 915 The 'name' and 'value' cannot exceed 255 bytes each.
916 916 """
917 917 if self._generated is not None:
918 918 raise error.ReadOnlyPartError('part is being generated')
919 919 if name in self._seenparams:
920 920 raise ValueError('duplicated params: %s' % name)
921 921 self._seenparams.add(name)
922 922 params = self._advisoryparams
923 923 if mandatory:
924 924 params = self._mandatoryparams
925 925 params.append((name, value))
926 926
927 927 # methods used to generate the bundle2 stream
928 928 def getchunks(self, ui):
929 929 if self._generated is not None:
930 930 raise error.ProgrammingError('part can only be consumed once')
931 931 self._generated = False
932 932
933 933 if ui.debugflag:
934 934 msg = ['bundle2-output-part: "%s"' % self.type]
935 935 if not self.mandatory:
936 936 msg.append(' (advisory)')
937 937 nbmp = len(self.mandatoryparams)
938 938 nbap = len(self.advisoryparams)
939 939 if nbmp or nbap:
940 940 msg.append(' (params:')
941 941 if nbmp:
942 942 msg.append(' %i mandatory' % nbmp)
943 943 if nbap:
944 944 msg.append(' %i advisory' % nbap)
945 945 msg.append(')')
946 946 if not self.data:
947 947 msg.append(' empty payload')
948 948 elif util.safehasattr(self.data, 'next'):
949 949 msg.append(' streamed payload')
950 950 else:
951 951 msg.append(' %i bytes payload' % len(self.data))
952 952 msg.append('\n')
953 953 ui.debug(''.join(msg))
954 954
955 955 #### header
956 956 if self.mandatory:
957 957 parttype = self.type.upper()
958 958 else:
959 959 parttype = self.type.lower()
960 960 outdebug(ui, 'part %s: "%s"' % (self.id, parttype))
961 961 ## parttype
962 962 header = [_pack(_fparttypesize, len(parttype)),
963 963 parttype, _pack(_fpartid, self.id),
964 964 ]
965 965 ## parameters
966 966 # count
967 967 manpar = self.mandatoryparams
968 968 advpar = self.advisoryparams
969 969 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
970 970 # size
971 971 parsizes = []
972 972 for key, value in manpar:
973 973 parsizes.append(len(key))
974 974 parsizes.append(len(value))
975 975 for key, value in advpar:
976 976 parsizes.append(len(key))
977 977 parsizes.append(len(value))
978 978 paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
979 979 header.append(paramsizes)
980 980 # key, value
981 981 for key, value in manpar:
982 982 header.append(key)
983 983 header.append(value)
984 984 for key, value in advpar:
985 985 header.append(key)
986 986 header.append(value)
987 987 ## finalize header
988 988 headerchunk = ''.join(header)
989 989 outdebug(ui, 'header chunk size: %i' % len(headerchunk))
990 990 yield _pack(_fpartheadersize, len(headerchunk))
991 991 yield headerchunk
992 992 ## payload
993 993 try:
994 994 for chunk in self._payloadchunks():
995 995 outdebug(ui, 'payload chunk size: %i' % len(chunk))
996 996 yield _pack(_fpayloadsize, len(chunk))
997 997 yield chunk
998 998 except GeneratorExit:
999 999 # GeneratorExit means that nobody is listening for our
1000 1000 # results anyway, so just bail quickly rather than trying
1001 1001 # to produce an error part.
1002 1002 ui.debug('bundle2-generatorexit\n')
1003 1003 raise
1004 1004 except BaseException as exc:
1005 1005 # backup exception data for later
1006 1006 ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
1007 1007 % exc)
1008 1008 tb = sys.exc_info()[2]
1009 1009 msg = 'unexpected error: %s' % exc
1010 1010 interpart = bundlepart('error:abort', [('message', msg)],
1011 1011 mandatory=False)
1012 1012 interpart.id = 0
1013 1013 yield _pack(_fpayloadsize, -1)
1014 1014 for chunk in interpart.getchunks(ui=ui):
1015 1015 yield chunk
1016 1016 outdebug(ui, 'closing payload chunk')
1017 1017 # abort current part payload
1018 1018 yield _pack(_fpayloadsize, 0)
1019 1019 pycompat.raisewithtb(exc, tb)
1020 1020 # end of payload
1021 1021 outdebug(ui, 'closing payload chunk')
1022 1022 yield _pack(_fpayloadsize, 0)
1023 1023 self._generated = True
1024 1024
1025 1025 def _payloadchunks(self):
1026 1026 """yield chunks of a the part payload
1027 1027
1028 1028 Exists to handle the different methods to provide data to a part."""
1029 1029 # we only support fixed size data now.
1030 1030 # This will be improved in the future.
1031 1031 if util.safehasattr(self.data, 'next'):
1032 1032 buff = util.chunkbuffer(self.data)
1033 1033 chunk = buff.read(preferedchunksize)
1034 1034 while chunk:
1035 1035 yield chunk
1036 1036 chunk = buff.read(preferedchunksize)
1037 1037 elif len(self.data):
1038 1038 yield self.data
1039 1039
1040 1040
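# Illustrative sketch (not part of the original module): the payload framing
# emitted by getchunks() above, one <int32 size><data> pair per chunk with a
# zero size chunk as terminator.
def _exampleframepayload(data):
    return (_pack(_fpayloadsize, len(data)) + data
            + _pack(_fpayloadsize, 0))
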
1041 1041 flaginterrupt = -1
1042 1042
1043 1043 class interrupthandler(unpackermixin):
1044 1044 """read one part and process it with restricted capability
1045 1045
1046 1046 This allows transmitting exceptions raised on the producer side during
1047 1047 part iteration while the consumer is reading a part.
1048 1048
1049 1049 Parts processed in this manner only have access to a ui object."""
1050 1050
1051 1051 def __init__(self, ui, fp):
1052 1052 super(interrupthandler, self).__init__(fp)
1053 1053 self.ui = ui
1054 1054
1055 1055 def _readpartheader(self):
1056 1056 """reads a part header size and return the bytes blob
1057 1057
1058 1058 returns None if empty"""
1059 1059 headersize = self._unpack(_fpartheadersize)[0]
1060 1060 if headersize < 0:
1061 1061 raise error.BundleValueError('negative part header size: %i'
1062 1062 % headersize)
1063 1063 indebug(self.ui, 'part header size: %i\n' % headersize)
1064 1064 if headersize:
1065 1065 return self._readexact(headersize)
1066 1066 return None
1067 1067
1068 1068 def __call__(self):
1069 1069
1070 1070 self.ui.debug('bundle2-input-stream-interrupt:'
1071 1071 ' opening out of band context\n')
1072 1072 indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
1073 1073 headerblock = self._readpartheader()
1074 1074 if headerblock is None:
1075 1075 indebug(self.ui, 'no part found during interruption.')
1076 1076 return
1077 1077 part = unbundlepart(self.ui, headerblock, self._fp)
1078 1078 op = interruptoperation(self.ui)
1079 1079 _processpart(op, part)
1080 1080 self.ui.debug('bundle2-input-stream-interrupt:'
1081 1081 ' closing out of band context\n')
1082 1082
1083 1083 class interruptoperation(object):
1084 1084 """A limited operation to be use by part handler during interruption
1085 1085
1086 1086 It only have access to an ui object.
1087 1087 """
1088 1088
1089 1089 def __init__(self, ui):
1090 1090 self.ui = ui
1091 1091 self.reply = None
1092 1092 self.captureoutput = False
1093 1093
1094 1094 @property
1095 1095 def repo(self):
1096 1096 raise error.ProgrammingError('no repo access from stream interruption')
1097 1097
1098 1098 def gettransaction(self):
1099 1099 raise TransactionUnavailable('no repo access from stream interruption')
1100 1100
1101 1101 class unbundlepart(unpackermixin):
1102 1102 """a bundle part read from a bundle"""
1103 1103
1104 1104 def __init__(self, ui, header, fp):
1105 1105 super(unbundlepart, self).__init__(fp)
1106 1106 self._seekable = (util.safehasattr(fp, 'seek') and
1107 1107 util.safehasattr(fp, 'tell'))
1108 1108 self.ui = ui
1109 1109 # unbundle state attr
1110 1110 self._headerdata = header
1111 1111 self._headeroffset = 0
1112 1112 self._initialized = False
1113 1113 self.consumed = False
1114 1114 # part data
1115 1115 self.id = None
1116 1116 self.type = None
1117 1117 self.mandatoryparams = None
1118 1118 self.advisoryparams = None
1119 1119 self.params = None
1120 1120 self.mandatorykeys = ()
1121 1121 self._payloadstream = None
1122 1122 self._readheader()
1123 1123 self._mandatory = None
1124 1124 self._chunkindex = [] # (payload, file) position tuples for chunk starts
1125 1125 self._pos = 0
1126 1126
1127 1127 def _fromheader(self, size):
1128 1128 """return the next <size> byte from the header"""
1129 1129 offset = self._headeroffset
1130 1130 data = self._headerdata[offset:(offset + size)]
1131 1131 self._headeroffset = offset + size
1132 1132 return data
1133 1133
1134 1134 def _unpackheader(self, format):
1135 1135 """read given format from header
1136 1136
1137 1137 This automatically computes the size of the format to read.
1138 1138 data = self._fromheader(struct.calcsize(format))
1139 1139 return _unpack(format, data)
1140 1140
1141 1141 def _initparams(self, mandatoryparams, advisoryparams):
1142 1142 """internal function to setup all logic related parameters"""
1143 1143 # make it read only to prevent people touching it by mistake.
1144 1144 self.mandatoryparams = tuple(mandatoryparams)
1145 1145 self.advisoryparams = tuple(advisoryparams)
1146 1146 # user friendly UI
1147 1147 self.params = util.sortdict(self.mandatoryparams)
1148 1148 self.params.update(self.advisoryparams)
1149 1149 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1150 1150
1151 1151 def _payloadchunks(self, chunknum=0):
1152 1152 '''seek to specified chunk and start yielding data'''
1153 1153 if len(self._chunkindex) == 0:
1154 1154 assert chunknum == 0, 'Must start with chunk 0'
1155 1155 self._chunkindex.append((0, self._tellfp()))
1156 1156 else:
1157 1157 assert chunknum < len(self._chunkindex), \
1158 1158 'Unknown chunk %d' % chunknum
1159 1159 self._seekfp(self._chunkindex[chunknum][1])
1160 1160
1161 1161 pos = self._chunkindex[chunknum][0]
1162 1162 payloadsize = self._unpack(_fpayloadsize)[0]
1163 1163 indebug(self.ui, 'payload chunk size: %i' % payloadsize)
1164 1164 while payloadsize:
1165 1165 if payloadsize == flaginterrupt:
1166 1166 # interruption detection, the handler will now read a
1167 1167 # single part and process it.
1168 1168 interrupthandler(self.ui, self._fp)()
1169 1169 elif payloadsize < 0:
1170 1170 msg = 'negative payload chunk size: %i' % payloadsize
1171 1171 raise error.BundleValueError(msg)
1172 1172 else:
1173 1173 result = self._readexact(payloadsize)
1174 1174 chunknum += 1
1175 1175 pos += payloadsize
1176 1176 if chunknum == len(self._chunkindex):
1177 1177 self._chunkindex.append((pos, self._tellfp()))
1178 1178 yield result
1179 1179 payloadsize = self._unpack(_fpayloadsize)[0]
1180 1180 indebug(self.ui, 'payload chunk size: %i' % payloadsize)
1181 1181
1182 1182 def _findchunk(self, pos):
1183 1183 '''for a given payload position, return a chunk number and offset'''
1184 1184 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1185 1185 if ppos == pos:
1186 1186 return chunk, 0
1187 1187 elif ppos > pos:
1188 1188 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1189 1189 raise ValueError('Unknown chunk')
1190 1190
1191 1191 def _readheader(self):
1192 1192 """read the header and setup the object"""
1193 1193 typesize = self._unpackheader(_fparttypesize)[0]
1194 1194 self.type = self._fromheader(typesize)
1195 1195 indebug(self.ui, 'part type: "%s"' % self.type)
1196 1196 self.id = self._unpackheader(_fpartid)[0]
1197 1197 indebug(self.ui, 'part id: "%s"' % self.id)
1198 1198 # extract mandatory bit from type
1199 1199 self.mandatory = (self.type != self.type.lower())
1200 1200 self.type = self.type.lower()
1201 1201 ## reading parameters
1202 1202 # param count
1203 1203 mancount, advcount = self._unpackheader(_fpartparamcount)
1204 1204 indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
1205 1205 # param size
1206 1206 fparamsizes = _makefpartparamsizes(mancount + advcount)
1207 1207 paramsizes = self._unpackheader(fparamsizes)
1208 1208 # make it a list of pairs again
1209 1209 paramsizes = zip(paramsizes[::2], paramsizes[1::2])
1210 1210 # split mandatory from advisory
1211 1211 mansizes = paramsizes[:mancount]
1212 1212 advsizes = paramsizes[mancount:]
1213 1213 # retrieve param value
1214 1214 manparams = []
1215 1215 for key, value in mansizes:
1216 1216 manparams.append((self._fromheader(key), self._fromheader(value)))
1217 1217 advparams = []
1218 1218 for key, value in advsizes:
1219 1219 advparams.append((self._fromheader(key), self._fromheader(value)))
1220 1220 self._initparams(manparams, advparams)
1221 1221 ## part payload
1222 1222 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1223 1223 # the header has been read; note it
1224 1224 self._initialized = True
1225 1225
1226 1226 def read(self, size=None):
1227 1227 """read payload data"""
1228 1228 if not self._initialized:
1229 1229 self._readheader()
1230 1230 if size is None:
1231 1231 data = self._payloadstream.read()
1232 1232 else:
1233 1233 data = self._payloadstream.read(size)
1234 1234 self._pos += len(data)
1235 1235 if size is None or len(data) < size:
1236 1236 if not self.consumed and self._pos:
1237 1237 self.ui.debug('bundle2-input-part: total payload size %i\n'
1238 1238 % self._pos)
1239 1239 self.consumed = True
1240 1240 return data
1241 1241
1242 1242 def tell(self):
1243 1243 return self._pos
1244 1244
1245 1245 def seek(self, offset, whence=0):
1246 1246 if whence == 0:
1247 1247 newpos = offset
1248 1248 elif whence == 1:
1249 1249 newpos = self._pos + offset
1250 1250 elif whence == 2:
1251 1251 if not self.consumed:
1252 1252 self.read()
1253 1253 newpos = self._chunkindex[-1][0] - offset
1254 1254 else:
1255 1255 raise ValueError('Unknown whence value: %r' % (whence,))
1256 1256
1257 1257 if newpos > self._chunkindex[-1][0] and not self.consumed:
1258 1258 self.read()
1259 1259 if not 0 <= newpos <= self._chunkindex[-1][0]:
1260 1260 raise ValueError('Offset out of range')
1261 1261
1262 1262 if self._pos != newpos:
1263 1263 chunk, internaloffset = self._findchunk(newpos)
1264 1264 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1265 1265 adjust = self.read(internaloffset)
1266 1266 if len(adjust) != internaloffset:
1267 1267 raise error.Abort(_('Seek failed\n'))
1268 1268 self._pos = newpos
1269 1269
1270 1270 def _seekfp(self, offset, whence=0):
1271 1271 """move the underlying file pointer
1272 1272
1273 1273 This method is meant for internal usage by the bundle2 protocol only.
1274 1274 It directly manipulates the low level stream, including bundle2 level
1275 1275 instructions.
1276 1276
1277 1277 Do not use it to implement higher-level logic or methods."""
1278 1278 if self._seekable:
1279 1279 return self._fp.seek(offset, whence)
1280 1280 else:
1281 1281 raise NotImplementedError(_('File pointer is not seekable'))
1282 1282
1283 1283 def _tellfp(self):
1284 1284 """return the file offset, or None if file is not seekable
1285 1285
1286 1286 This method is meant for internal usage by the bundle2 protocol only.
1287 1287 It directly manipulates the low level stream, including bundle2 level
1288 1288 instructions.
1289 1289
1290 1290 Do not use it to implement higher-level logic or methods."""
1291 1291 if self._seekable:
1292 1292 try:
1293 1293 return self._fp.tell()
1294 1294 except IOError as e:
1295 1295 if e.errno == errno.ESPIPE:
1296 1296 self._seekable = False
1297 1297 else:
1298 1298 raise
1299 1299 return None
1300 1300
1301 1301 # These are only the static capabilities.
1302 1302 # Check the 'getrepocaps' function for the rest.
1303 1303 capabilities = {'HG20': (),
1304 1304 'error': ('abort', 'unsupportedcontent', 'pushraced',
1305 1305 'pushkey'),
1306 1306 'listkeys': (),
1307 1307 'pushkey': (),
1308 1308 'digests': tuple(sorted(util.DIGESTS.keys())),
1309 1309 'remote-changegroup': ('http', 'https'),
1310 1310 'hgtagsfnodes': (),
1311 1311 }
1312 1312
1313 1313 def getrepocaps(repo, allowpushback=False):
1314 1314 """return the bundle2 capabilities for a given repo
1315 1315
1316 1316 Exists to allow extensions (like evolution) to mutate the capabilities.
1317 1317 """
1318 1318 caps = capabilities.copy()
1319 1319 caps['changegroup'] = tuple(sorted(
1320 1320 changegroup.supportedincomingversions(repo)))
1321 1321 if obsolete.isenabled(repo, obsolete.exchangeopt):
1322 1322 supportedformat = tuple('V%i' % v for v in obsolete.formats)
1323 1323 caps['obsmarkers'] = supportedformat
1324 1324 if allowpushback:
1325 1325 caps['pushback'] = ()
1326 1326 cpmode = repo.ui.config('server', 'concurrent-push-mode', 'strict')
1327 1327 if cpmode == 'check-related':
1328 1328 caps['checkheads'] = ('related',)
1329 1329 return caps
1330 1330
1331 1331 def bundle2caps(remote):
1332 1332 """return the bundle capabilities of a peer as dict"""
1333 1333 raw = remote.capable('bundle2')
1334 1334 if not raw and raw != '':
1335 1335 return {}
1336 1336 capsblob = urlreq.unquote(remote.capable('bundle2'))
1337 1337 return decodecaps(capsblob)
1338 1338
1339 1339 def obsmarkersversion(caps):
1340 1340 """extract the list of supported obsmarkers versions from a bundle2caps dict
1341 1341 """
1342 1342 obscaps = caps.get('obsmarkers', ())
1343 1343 return [int(c[1:]) for c in obscaps if c.startswith('V')]
1344 1344
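# For example, a peer advertising {'obsmarkers': ('V0', 'V1')} yields::
#
#   obsmarkersversion({'obsmarkers': ('V0', 'V1')}) == [0, 1]
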
1345 1345 def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
1346 1346 vfs=None, compression=None, compopts=None):
1347 1347 if bundletype.startswith('HG10'):
1348 1348 cg = changegroup.getchangegroup(repo, source, outgoing, version='01')
1349 1349 return writebundle(ui, cg, filename, bundletype, vfs=vfs,
1350 1350 compression=compression, compopts=compopts)
1351 1351 elif not bundletype.startswith('HG20'):
1352 1352 raise error.ProgrammingError('unknown bundle type: %s' % bundletype)
1353 1353
1354 1354 caps = {}
1355 1355 if 'obsolescence' in opts:
1356 1356 caps['obsmarkers'] = ('V1',)
1357 1357 bundle = bundle20(ui, caps)
1358 1358 bundle.setcompression(compression, compopts)
1359 1359 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1360 1360 chunkiter = bundle.getchunks()
1361 1361
1362 1362 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1363 1363
1364 1364 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1365 1365 # We should eventually reconcile this logic with the one behind
1366 1366 # 'exchange.getbundle2partsgenerator'.
1367 1367 #
1368 1368 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1369 1369 # different right now. So we keep them separated for now for the sake of
1370 1370 # simplicity.
1371 1371
1372 1372 # we always want a changegroup in such bundle
1373 1373 cgversion = opts.get('cg.version')
1374 1374 if cgversion is None:
1375 1375 cgversion = changegroup.safeversion(repo)
1376 1376 cg = changegroup.getchangegroup(repo, source, outgoing,
1377 1377 version=cgversion)
1378 1378 part = bundler.newpart('changegroup', data=cg.getchunks())
1379 1379 part.addparam('version', cg.version)
1380 1380 if 'clcount' in cg.extras:
1381 1381 part.addparam('nbchanges', str(cg.extras['clcount']),
1382 1382 mandatory=False)
1383 1383
1384 1384 addparttagsfnodescache(repo, bundler, outgoing)
1385 1385
1386 1386 if opts.get('obsolescence', False):
1387 1387 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1388 1388 buildobsmarkerspart(bundler, obsmarkers)
1389 1389
1390 1390 def addparttagsfnodescache(repo, bundler, outgoing):
1391 1391 # we include the tags fnode cache for the bundle changeset
1392 1392 # (as an optional part)
1393 1393 cache = tags.hgtagsfnodescache(repo.unfiltered())
1394 1394 chunks = []
1395 1395
1396 1396 # .hgtags fnodes are only relevant for head changesets. While we could
1397 1397 # transfer values for all known nodes, there will likely be little to
1398 1398 # no benefit.
1399 1399 #
1400 1400 # We don't bother using a generator to produce output data because
1401 1401 # a) we only have 40 bytes per head and even esoteric numbers of heads
1402 1402 # consume little memory (1M heads is 40MB) b) we don't want to send the
1403 1403 # part if we don't have entries and knowing if we have entries requires
1404 1404 # cache lookups.
1405 1405 for node in outgoing.missingheads:
1406 1406 # Don't compute missing, as this may slow down serving.
1407 1407 fnode = cache.getfnode(node, computemissing=False)
1408 1408 if fnode is not None:
1409 1409 chunks.extend([node, fnode])
1410 1410
1411 1411 if chunks:
1412 1412 bundler.newpart('hgtagsfnodes', data=''.join(chunks))
1413 1413
1414 1414 def buildobsmarkerspart(bundler, markers):
1415 1415 """add an obsmarker part to the bundler with <markers>
1416 1416
1417 1417 No part is created if markers is empty.
1418 1418 Raises ValueError if the bundler doesn't support any known obsmarker format.
1419 1419 """
1420 1420 if not markers:
1421 1421 return None
1422 1422
1423 1423 remoteversions = obsmarkersversion(bundler.capabilities)
1424 1424 version = obsolete.commonversion(remoteversions)
1425 1425 if version is None:
1426 1426 raise ValueError('bundler does not support common obsmarker format')
1427 1427 stream = obsolete.encodemarkers(markers, True, version=version)
1428 1428 return bundler.newpart('obsmarkers', data=stream)
1429 1429
1430 1430 def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
1431 1431 compopts=None):
1432 1432 """Write a bundle file and return its filename.
1433 1433
1434 1434 Existing files will not be overwritten.
1435 1435 If no filename is specified, a temporary file is created.
1436 1436 bz2 compression can be turned off.
1437 1437 The bundle file will be deleted in case of errors.
1438 1438 """
1439 1439
1440 1440 if bundletype == "HG20":
1441 1441 bundle = bundle20(ui)
1442 1442 bundle.setcompression(compression, compopts)
1443 1443 part = bundle.newpart('changegroup', data=cg.getchunks())
1444 1444 part.addparam('version', cg.version)
1445 1445 if 'clcount' in cg.extras:
1446 1446 part.addparam('nbchanges', str(cg.extras['clcount']),
1447 1447 mandatory=False)
1448 1448 chunkiter = bundle.getchunks()
1449 1449 else:
1450 1450 # compression argument is only for the bundle2 case
1451 1451 assert compression is None
1452 1452 if cg.version != '01':
1453 1453 raise error.Abort(_('old bundle types only support v1 '
1454 1454 'changegroups'))
1455 1455 header, comp = bundletypes[bundletype]
1456 1456 if comp not in util.compengines.supportedbundletypes:
1457 1457 raise error.Abort(_('unknown stream compression type: %s')
1458 1458 % comp)
1459 1459 compengine = util.compengines.forbundletype(comp)
1460 1460 def chunkiter():
1461 1461 yield header
1462 1462 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1463 1463 yield chunk
1464 1464 chunkiter = chunkiter()
1465 1465
1466 1466 # parse the changegroup data, otherwise we will block
1467 1467 # in case of sshrepo because we don't know the end of the stream
1468 1468 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1469 1469
1470 1470 @parthandler('changegroup', ('version', 'nbchanges', 'treemanifest'))
1471 1471 def handlechangegroup(op, inpart):
1472 1472 """apply a changegroup part on the repo
1473 1473
1474 1474 This is a very early implementation that will be massively reworked
1475 1475 before being inflicted on any end-user.
1476 1476 """
1477 1477 tr = op.gettransaction()
1478 1478 unpackerversion = inpart.params.get('version', '01')
1479 1479 # We should raise an appropriate exception here
1480 1480 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1481 1481 # the source and url passed here are overwritten by the ones contained in
1482 1482 # the transaction.hookargs argument. So 'bundle2' is a placeholder
1483 1483 nbchangesets = None
1484 1484 if 'nbchanges' in inpart.params:
1485 1485 nbchangesets = int(inpart.params.get('nbchanges'))
1486 1486 if ('treemanifest' in inpart.params and
1487 1487 'treemanifest' not in op.repo.requirements):
1488 1488 if len(op.repo.changelog) != 0:
1489 1489 raise error.Abort(_(
1490 1490 "bundle contains tree manifests, but local repo is "
1491 1491 "non-empty and does not use tree manifests"))
1492 1492 op.repo.requirements.add('treemanifest')
1493 1493 op.repo._applyopenerreqs()
1494 1494 op.repo._writerequirements()
1495 ret = cg.apply(op.repo, tr, 'bundle2', 'bundle2',
1495 ret, addednodes = cg.apply(op.repo, tr, 'bundle2', 'bundle2',
1496 1496 expectedtotal=nbchangesets)
1497 op.records.add('changegroup', {'return': ret})
1497 op.records.add('changegroup', {
1498 'return': ret,
1499 'addednodes': addednodes,
1500 })
1498 1501 if op.reply is not None:
1499 1502 # This is definitely not the final form of this
1500 1503 # return. But one needs to start somewhere.
1501 1504 part = op.reply.newpart('reply:changegroup', mandatory=False)
1502 1505 part.addparam('in-reply-to', str(inpart.id), mandatory=False)
1503 1506 part.addparam('return', '%i' % ret, mandatory=False)
1504 1507 assert not inpart.read()
1505 1508
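With this change the 'changegroup' record now carries the list of added nodes alongside the return code. A minimal sketch of how a caller could consume the new record layout once processing finishes (assuming op is a completed bundle operation; the reporting itself is illustrative):

    def summarizechangegroups(op):
        # one record per changegroup part handled above
        for record in op.records['changegroup']:
            ret = record['return']
            nodes = record.get('addednodes') or []
            op.ui.note('changegroup result %d, %d nodes added\n'
                       % (ret, len(nodes)))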
1506 1509 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1507 1510 ['digest:%s' % k for k in util.DIGESTS.keys()])
1508 1511 @parthandler('remote-changegroup', _remotechangegroupparams)
1509 1512 def handleremotechangegroup(op, inpart):
1510 1513 """apply a bundle10 on the repo, given an url and validation information
1511 1514
1512 1515 All the information about the remote bundle to import are given as
1513 1516 parameters. The parameters include:
1514 1517 - url: the url to the bundle10.
1515 1518 - size: the bundle10 file size. It is used to validate that what was
1516 1519 retrieved by the client matches the server's knowledge about the bundle.
1517 1520 - digests: a space separated list of the digest types provided as
1518 1521 parameters.
1519 1522 - digest:<digest-type>: the hexadecimal representation of the digest with
1520 1523 that name. Like the size, it is used to validate that what was retrieved by
1521 1524 the client matches what the server knows about the bundle.
1522 1525
1523 1526 When multiple digest types are given, all of them are checked.
1524 1527 """
1525 1528 try:
1526 1529 raw_url = inpart.params['url']
1527 1530 except KeyError:
1528 1531 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1529 1532 parsed_url = util.url(raw_url)
1530 1533 if parsed_url.scheme not in capabilities['remote-changegroup']:
1531 1534 raise error.Abort(_('remote-changegroup does not support %s urls') %
1532 1535 parsed_url.scheme)
1533 1536
1534 1537 try:
1535 1538 size = int(inpart.params['size'])
1536 1539 except ValueError:
1537 1540 raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
1538 1541 % 'size')
1539 1542 except KeyError:
1540 1543 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1541 1544
1542 1545 digests = {}
1543 1546 for typ in inpart.params.get('digests', '').split():
1544 1547 param = 'digest:%s' % typ
1545 1548 try:
1546 1549 value = inpart.params[param]
1547 1550 except KeyError:
1548 1551 raise error.Abort(_('remote-changegroup: missing "%s" param') %
1549 1552 param)
1550 1553 digests[typ] = value
1551 1554
1552 1555 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1553 1556
1554 1557 tr = op.gettransaction()
1555 1558 from . import exchange
1556 1559 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1557 1560 if not isinstance(cg, changegroup.cg1unpacker):
1558 1561 raise error.Abort(_('%s: not a bundle version 1.0') %
1559 1562 util.hidepassword(raw_url))
1560 ret = cg.apply(op.repo, tr, 'bundle2', 'bundle2')
1561 op.records.add('changegroup', {'return': ret})
1563 ret, addednodes = cg.apply(op.repo, tr, 'bundle2', 'bundle2')
1564 op.records.add('changegroup', {
1565 'return': ret,
1566 'addednodes': addednodes,
1567 })
1562 1568 if op.reply is not None:
1563 1569 # This is definitely not the final form of this
1564 1570 # return. But one needs to start somewhere.
1565 1571 part = op.reply.newpart('reply:changegroup')
1566 1572 part.addparam('in-reply-to', str(inpart.id), mandatory=False)
1567 1573 part.addparam('return', '%i' % ret, mandatory=False)
1568 1574 try:
1569 1575 real_part.validate()
1570 1576 except error.Abort as e:
1571 1577 raise error.Abort(_('bundle at %s is corrupted:\n%s') %
1572 1578 (util.hidepassword(raw_url), str(e)))
1573 1579 assert not inpart.read()
1574 1580
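The handler above pulls the bundle location and its validation data out of the part parameters. For reference, a sketch of the matching parameter set a sender could attach (the URL is hypothetical; 'sha1' is one of the types in util.DIGESTS):

    import hashlib

    payload = b'...bundle10 bytes...'
    params = {
        'url': 'https://example.com/pull.hg',   # where to fetch the bundle
        'size': str(len(payload)),              # checked by digestchecker
        'digests': 'sha1',                      # space-separated type list
        'digest:sha1': hashlib.sha1(payload).hexdigest(),
    }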
1575 1581 @parthandler('reply:changegroup', ('return', 'in-reply-to'))
1576 1582 def handlereplychangegroup(op, inpart):
1577 1583 ret = int(inpart.params['return'])
1578 1584 replyto = int(inpart.params['in-reply-to'])
1579 1585 op.records.add('changegroup', {'return': ret}, replyto)
1580 1586
1581 1587 @parthandler('check:heads')
1582 1588 def handlecheckheads(op, inpart):
1583 1589 """check that head of the repo did not change
1584 1590
1585 1591 This is used to detect a push race when using unbundle.
1586 1592 This replaces the "heads" argument of unbundle."""
1587 1593 h = inpart.read(20)
1588 1594 heads = []
1589 1595 while len(h) == 20:
1590 1596 heads.append(h)
1591 1597 h = inpart.read(20)
1592 1598 assert not h
1593 1599 # Trigger a transaction so that we are guaranteed to have the lock now.
1594 1600 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1595 1601 op.gettransaction()
1596 1602 if sorted(heads) != sorted(op.repo.heads()):
1597 1603 raise error.PushRaced('repository changed while pushing - '
1598 1604 'please try again')
1599 1605
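The 'check:heads' payload is simply a concatenation of 20-byte binary node hashes, which the read loop above consumes until the stream runs dry. The same loop works against any file-like object:

    import io

    blob = b'\xaa' * 20 + b'\xbb' * 20   # two fabricated head nodes
    part = io.BytesIO(blob)
    heads = []
    h = part.read(20)
    while len(h) == 20:
        heads.append(h)
        h = part.read(20)
    assert not h and len(heads) == 2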
1600 1606 @parthandler('check:updated-heads')
1601 1607 def handlecheckupdatedheads(op, inpart):
1602 1608 """check for race on the heads touched by a push
1603 1609
1604 1610 This is similar to 'check:heads' but focuses on the heads actually updated
1605 1611 during the push. If other activity happens on unrelated heads, it is
1606 1612 ignored.
1607 1613
1608 1614 This allows servers with high traffic to avoid push contention as long as
1609 1615 only unrelated parts of the graph are involved."""
1610 1616 h = inpart.read(20)
1611 1617 heads = []
1612 1618 while len(h) == 20:
1613 1619 heads.append(h)
1614 1620 h = inpart.read(20)
1615 1621 assert not h
1616 1622 # trigger a transaction so that we are guaranteed to have the lock now.
1617 1623 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1618 1624 op.gettransaction()
1619 1625
1620 1626 currentheads = set()
1621 1627 for ls in op.repo.branchmap().itervalues():
1622 1628 currentheads.update(ls)
1623 1629
1624 1630 for h in heads:
1625 1631 if h not in currentheads:
1626 1632 raise error.PushRaced('repository changed while pushing - '
1627 1633 'please try again')
1628 1634
1629 1635 @parthandler('output')
1630 1636 def handleoutput(op, inpart):
1631 1637 """forward output captured on the server to the client"""
1632 1638 for line in inpart.read().splitlines():
1633 1639 op.ui.status(_('remote: %s\n') % line)
1634 1640
1635 1641 @parthandler('replycaps')
1636 1642 def handlereplycaps(op, inpart):
1637 1643 """Notify that a reply bundle should be created
1638 1644
1639 1645 The payload contains the capabilities information for the reply"""
1640 1646 caps = decodecaps(inpart.read())
1641 1647 if op.reply is None:
1642 1648 op.reply = bundle20(op.ui, caps)
1643 1649
1644 1650 class AbortFromPart(error.Abort):
1645 1651 """Sub-class of Abort that denotes an error from a bundle2 part."""
1646 1652
1647 1653 @parthandler('error:abort', ('message', 'hint'))
1648 1654 def handleerrorabort(op, inpart):
1649 1655 """Used to transmit abort error over the wire"""
1650 1656 raise AbortFromPart(inpart.params['message'],
1651 1657 hint=inpart.params.get('hint'))
1652 1658
1653 1659 @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
1654 1660 'in-reply-to'))
1655 1661 def handleerrorpushkey(op, inpart):
1656 1662 """Used to transmit failure of a mandatory pushkey over the wire"""
1657 1663 kwargs = {}
1658 1664 for name in ('namespace', 'key', 'new', 'old', 'ret'):
1659 1665 value = inpart.params.get(name)
1660 1666 if value is not None:
1661 1667 kwargs[name] = value
1662 1668 raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)
1663 1669
1664 1670 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
1665 1671 def handleerrorunsupportedcontent(op, inpart):
1666 1672 """Used to transmit unknown content error over the wire"""
1667 1673 kwargs = {}
1668 1674 parttype = inpart.params.get('parttype')
1669 1675 if parttype is not None:
1670 1676 kwargs['parttype'] = parttype
1671 1677 params = inpart.params.get('params')
1672 1678 if params is not None:
1673 1679 kwargs['params'] = params.split('\0')
1674 1680
1675 1681 raise error.BundleUnknownFeatureError(**kwargs)
1676 1682
1677 1683 @parthandler('error:pushraced', ('message',))
1678 1684 def handleerrorpushraced(op, inpart):
1679 1685 """Used to transmit push race error over the wire"""
1680 1686 raise error.ResponseError(_('push failed:'), inpart.params['message'])
1681 1687
1682 1688 @parthandler('listkeys', ('namespace',))
1683 1689 def handlelistkeys(op, inpart):
1684 1690 """retrieve pushkey namespace content stored in a bundle2"""
1685 1691 namespace = inpart.params['namespace']
1686 1692 r = pushkey.decodekeys(inpart.read())
1687 1693 op.records.add('listkeys', (namespace, r))
1688 1694
1689 1695 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
1690 1696 def handlepushkey(op, inpart):
1691 1697 """process a pushkey request"""
1692 1698 dec = pushkey.decode
1693 1699 namespace = dec(inpart.params['namespace'])
1694 1700 key = dec(inpart.params['key'])
1695 1701 old = dec(inpart.params['old'])
1696 1702 new = dec(inpart.params['new'])
1697 1703 # Grab the transaction to ensure that we have the lock before performing the
1698 1704 # pushkey.
1699 1705 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1700 1706 op.gettransaction()
1701 1707 ret = op.repo.pushkey(namespace, key, old, new)
1702 1708 record = {'namespace': namespace,
1703 1709 'key': key,
1704 1710 'old': old,
1705 1711 'new': new}
1706 1712 op.records.add('pushkey', record)
1707 1713 if op.reply is not None:
1708 1714 rpart = op.reply.newpart('reply:pushkey')
1709 1715 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1710 1716 rpart.addparam('return', '%i' % ret, mandatory=False)
1711 1717 if inpart.mandatory and not ret:
1712 1718 kwargs = {}
1713 1719 for key in ('namespace', 'key', 'new', 'old', 'ret'):
1714 1720 if key in inpart.params:
1715 1721 kwargs[key] = inpart.params[key]
1716 1722 raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)
1717 1723
1718 1724 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
1719 1725 def handlepushkeyreply(op, inpart):
1720 1726 """retrieve the result of a pushkey request"""
1721 1727 ret = int(inpart.params['return'])
1722 1728 partid = int(inpart.params['in-reply-to'])
1723 1729 op.records.add('pushkey', {'return': ret}, partid)
1724 1730
1725 1731 @parthandler('obsmarkers')
1726 1732 def handleobsmarker(op, inpart):
1727 1733 """add a stream of obsmarkers to the repo"""
1728 1734 tr = op.gettransaction()
1729 1735 markerdata = inpart.read()
1730 1736 if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
1731 1737 op.ui.write(('obsmarker-exchange: %i bytes received\n')
1732 1738 % len(markerdata))
1733 1739 # The mergemarkers call will crash if marker creation is not enabled.
1734 1740 # we want to avoid this if the part is advisory.
1735 1741 if not inpart.mandatory and op.repo.obsstore.readonly:
1736 1742 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
1737 1743 return
1738 1744 new = op.repo.obsstore.mergemarkers(tr, markerdata)
1739 1745 op.repo.invalidatevolatilesets()
1740 1746 if new:
1741 1747 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
1742 1748 op.records.add('obsmarkers', {'new': new})
1743 1749 if op.reply is not None:
1744 1750 rpart = op.reply.newpart('reply:obsmarkers')
1745 1751 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1746 1752 rpart.addparam('new', '%i' % new, mandatory=False)
1747 1753
1748 1754
1749 1755 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
1750 1756 def handleobsmarkerreply(op, inpart):
1751 1757 """retrieve the result of a pushkey request"""
1752 1758 ret = int(inpart.params['new'])
1753 1759 partid = int(inpart.params['in-reply-to'])
1754 1760 op.records.add('obsmarkers', {'new': ret}, partid)
1755 1761
1756 1762 @parthandler('hgtagsfnodes')
1757 1763 def handlehgtagsfnodes(op, inpart):
1758 1764 """Applies .hgtags fnodes cache entries to the local repo.
1759 1765
1760 1766 Payload is pairs of 20 byte changeset nodes and filenodes.
1761 1767 """
1762 1768 # Grab the transaction so we ensure that we have the lock at this point.
1763 1769 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1764 1770 op.gettransaction()
1765 1771 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
1766 1772
1767 1773 count = 0
1768 1774 while True:
1769 1775 node = inpart.read(20)
1770 1776 fnode = inpart.read(20)
1771 1777 if len(node) < 20 or len(fnode) < 20:
1772 1778 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
1773 1779 break
1774 1780 cache.setfnode(node, fnode)
1775 1781 count += 1
1776 1782
1777 1783 cache.write()
1778 1784 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
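The 'hgtagsfnodes' payload is a flat sequence of (changeset node, filenode) pairs, 40 bytes per entry, with any short trailing data ignored as above. A self-contained round trip of that layout (node values fabricated):

    import io

    entries = [(b'\x01' * 20, b'\x02' * 20)]
    payload = b''.join(node + fnode for node, fnode in entries)

    decoded = []
    stream = io.BytesIO(payload)
    while True:
        node, fnode = stream.read(20), stream.read(20)
        if len(node) < 20 or len(fnode) < 20:
            break                         # incomplete trailing data
        decoded.append((node, fnode))
    assert decoded == entries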
@@ -1,1025 +1,1026 b''
1 1 # changegroup.py - Mercurial changegroup manipulation functions
2 2 #
3 3 # Copyright 2006 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import os
11 11 import struct
12 12 import tempfile
13 13 import weakref
14 14
15 15 from .i18n import _
16 16 from .node import (
17 17 hex,
18 18 nullrev,
19 19 short,
20 20 )
21 21
22 22 from . import (
23 23 dagutil,
24 24 discovery,
25 25 error,
26 26 mdiff,
27 27 phases,
28 28 pycompat,
29 29 util,
30 30 )
31 31
32 32 _CHANGEGROUPV1_DELTA_HEADER = "20s20s20s20s"
33 33 _CHANGEGROUPV2_DELTA_HEADER = "20s20s20s20s20s"
34 34 _CHANGEGROUPV3_DELTA_HEADER = ">20s20s20s20s20sH"
35 35
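These delta headers are plain struct format strings: cg1 packs four 20-byte nodes (node, p1, p2, linknode), cg2 inserts the delta base node, and cg3 appends a big-endian flags field. A quick sanity check with cg1's format:

    import struct

    fmt = '20s20s20s20s'                 # _CHANGEGROUPV1_DELTA_HEADER
    assert struct.calcsize(fmt) == 80
    raw = struct.pack(fmt, b'n' * 20, b'p' * 20, b'q' * 20, b'c' * 20)
    node, p1, p2, linknode = struct.unpack(fmt, raw)
    assert node == b'n' * 20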
36 36 def readexactly(stream, n):
37 37 '''read n bytes from stream.read and abort if less was available'''
38 38 s = stream.read(n)
39 39 if len(s) < n:
40 40 raise error.Abort(_("stream ended unexpectedly"
41 41 " (got %d bytes, expected %d)")
42 42 % (len(s), n))
43 43 return s
44 44
45 45 def getchunk(stream):
46 46 """return the next chunk from stream as a string"""
47 47 d = readexactly(stream, 4)
48 48 l = struct.unpack(">l", d)[0]
49 49 if l <= 4:
50 50 if l:
51 51 raise error.Abort(_("invalid chunk length %d") % l)
52 52 return ""
53 53 return readexactly(stream, l - 4)
54 54
55 55 def chunkheader(length):
56 56 """return a changegroup chunk header (string)"""
57 57 return struct.pack(">l", length + 4)
58 58
59 59 def closechunk():
60 60 """return a changegroup chunk header (string) for a zero-length chunk"""
61 61 return struct.pack(">l", 0)
62 62
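Together getchunk, chunkheader and closechunk define the wire framing: each chunk is prefixed with a big-endian int32 length that counts the 4-byte prefix itself, and a zero length terminates a group. A self-contained round trip of that framing:

    import io
    import struct

    def frame(payload):
        # length prefix includes its own 4 bytes, as in chunkheader()
        return struct.pack('>l', len(payload) + 4) + payload

    stream = io.BytesIO(frame(b'hello') + struct.pack('>l', 0))
    chunks = []
    while True:
        l = struct.unpack('>l', stream.read(4))[0]
        if l <= 4:
            break                        # zero-length chunk ends the group
        chunks.append(stream.read(l - 4))
    assert chunks == [b'hello']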
63 63 def combineresults(results):
64 64 """logic to combine 0 or more addchangegroup results into one"""
65 65 changedheads = 0
66 66 result = 1
67 67 for ret in results:
68 68 # If any changegroup result is 0, return 0
69 69 if ret == 0:
70 70 result = 0
71 71 break
72 72 if ret < -1:
73 73 changedheads += ret + 1
74 74 elif ret > 1:
75 75 changedheads += ret - 1
76 76 if changedheads > 0:
77 77 result = 1 + changedheads
78 78 elif changedheads < 0:
79 79 result = -1 + changedheads
80 80 return result
81 81
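combineresults folds several per-changegroup return codes (the encoding documented on apply() below: 0 failure, 1 no head change, 1+n heads added, -1-n heads removed) into a single code of the same shape. For instance, using the function as defined above:

    assert combineresults([3, 2]) == 4    # +2 heads and +1 head -> +3
    assert combineresults([1, 1]) == 1    # head count never changed
    assert combineresults([2, 0]) == 0    # any failure wins
    assert combineresults([-2, 1]) == -2  # net one head removed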
82 82 def writechunks(ui, chunks, filename, vfs=None):
83 83 """Write chunks to a file and return its filename.
84 84
85 85 The stream is assumed to be a bundle file.
86 86 Existing files will not be overwritten.
87 87 If no filename is specified, a temporary file is created.
88 88 """
89 89 fh = None
90 90 cleanup = None
91 91 try:
92 92 if filename:
93 93 if vfs:
94 94 fh = vfs.open(filename, "wb")
95 95 else:
96 96 # Increase default buffer size because default is usually
97 97 # small (4k is common on Linux).
98 98 fh = open(filename, "wb", 131072)
99 99 else:
100 100 fd, filename = tempfile.mkstemp(prefix="hg-bundle-", suffix=".hg")
101 101 fh = os.fdopen(fd, pycompat.sysstr("wb"))
102 102 cleanup = filename
103 103 for c in chunks:
104 104 fh.write(c)
105 105 cleanup = None
106 106 return filename
107 107 finally:
108 108 if fh is not None:
109 109 fh.close()
110 110 if cleanup is not None:
111 111 if filename and vfs:
112 112 vfs.unlink(cleanup)
113 113 else:
114 114 os.unlink(cleanup)
115 115
116 116 class cg1unpacker(object):
117 117 """Unpacker for cg1 changegroup streams.
118 118
119 119 A changegroup unpacker handles the framing of the revision data in
120 120 the wire format. Most consumers will want to use the apply()
121 121 method to add the changes from the changegroup to a repository.
122 122
123 123 If you're forwarding a changegroup unmodified to another consumer,
124 124 use getchunks(), which returns an iterator of changegroup
125 125 chunks. This is mostly useful for cases where you need to know the
126 126 data stream has ended by observing the end of the changegroup.
127 127
128 128 deltachunk() is useful only if you're applying delta data. Most
129 129 consumers should prefer apply() instead.
130 130
131 131 A few other public methods exist. Those are used only for
132 132 bundlerepo and some debug commands - their use is discouraged.
133 133 """
134 134 deltaheader = _CHANGEGROUPV1_DELTA_HEADER
135 135 deltaheadersize = struct.calcsize(deltaheader)
136 136 version = '01'
137 137 _grouplistcount = 1 # One list of files after the manifests
138 138
139 139 def __init__(self, fh, alg, extras=None):
140 140 if alg is None:
141 141 alg = 'UN'
142 142 if alg not in util.compengines.supportedbundletypes:
143 143 raise error.Abort(_('unknown stream compression type: %s')
144 144 % alg)
145 145 if alg == 'BZ':
146 146 alg = '_truncatedBZ'
147 147
148 148 compengine = util.compengines.forbundletype(alg)
149 149 self._stream = compengine.decompressorreader(fh)
150 150 self._type = alg
151 151 self.extras = extras or {}
152 152 self.callback = None
153 153
154 154 # These methods (compressed, read, seek, tell) all appear to only
155 155 # be used by bundlerepo, but it's a little hard to tell.
156 156 def compressed(self):
157 157 return self._type is not None and self._type != 'UN'
158 158 def read(self, l):
159 159 return self._stream.read(l)
160 160 def seek(self, pos):
161 161 return self._stream.seek(pos)
162 162 def tell(self):
163 163 return self._stream.tell()
164 164 def close(self):
165 165 return self._stream.close()
166 166
167 167 def _chunklength(self):
168 168 d = readexactly(self._stream, 4)
169 169 l = struct.unpack(">l", d)[0]
170 170 if l <= 4:
171 171 if l:
172 172 raise error.Abort(_("invalid chunk length %d") % l)
173 173 return 0
174 174 if self.callback:
175 175 self.callback()
176 176 return l - 4
177 177
178 178 def changelogheader(self):
179 179 """v10 does not have a changelog header chunk"""
180 180 return {}
181 181
182 182 def manifestheader(self):
183 183 """v10 does not have a manifest header chunk"""
184 184 return {}
185 185
186 186 def filelogheader(self):
187 187 """return the header of the filelogs chunk, v10 only has the filename"""
188 188 l = self._chunklength()
189 189 if not l:
190 190 return {}
191 191 fname = readexactly(self._stream, l)
192 192 return {'filename': fname}
193 193
194 194 def _deltaheader(self, headertuple, prevnode):
195 195 node, p1, p2, cs = headertuple
196 196 if prevnode is None:
197 197 deltabase = p1
198 198 else:
199 199 deltabase = prevnode
200 200 flags = 0
201 201 return node, p1, p2, deltabase, cs, flags
202 202
203 203 def deltachunk(self, prevnode):
204 204 l = self._chunklength()
205 205 if not l:
206 206 return {}
207 207 headerdata = readexactly(self._stream, self.deltaheadersize)
208 208 header = struct.unpack(self.deltaheader, headerdata)
209 209 delta = readexactly(self._stream, l - self.deltaheadersize)
210 210 node, p1, p2, deltabase, cs, flags = self._deltaheader(header, prevnode)
211 211 return {'node': node, 'p1': p1, 'p2': p2, 'cs': cs,
212 212 'deltabase': deltabase, 'delta': delta, 'flags': flags}
213 213
214 214 def getchunks(self):
215 215 """returns all the chunks contains in the bundle
216 216
217 217 Used when you need to forward the binary stream to a file or another
218 218 network API. To do so, it parse the changegroup data, otherwise it will
219 219 block in case of sshrepo because it don't know the end of the stream.
220 220 """
221 221 # an empty chunkgroup is the end of the changegroup
222 222 # a changegroup has at least 2 chunkgroups (changelog and manifest).
223 223 # after that, changegroup versions 1 and 2 have a series of groups
224 224 # with one group per file. changegroup 3 has a series of directory
225 225 # manifests before the files.
226 226 count = 0
227 227 emptycount = 0
228 228 while emptycount < self._grouplistcount:
229 229 empty = True
230 230 count += 1
231 231 while True:
232 232 chunk = getchunk(self)
233 233 if not chunk:
234 234 if empty and count > 2:
235 235 emptycount += 1
236 236 break
237 237 empty = False
238 238 yield chunkheader(len(chunk))
239 239 pos = 0
240 240 while pos < len(chunk):
241 241 next = pos + 2**20
242 242 yield chunk[pos:next]
243 243 pos = next
244 244 yield closechunk()
245 245
246 246 def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
247 247 # We know that we'll never have more manifests than we had
248 248 # changesets.
249 249 self.callback = prog(_('manifests'), numchanges)
250 250 # no need to check for empty manifest group here:
251 251 # if the result of the merge of 1 and 2 is the same in 3 and 4,
252 252 # no new manifest will be created and the manifest group will
253 253 # be empty during the pull
254 254 self.manifestheader()
255 255 repo.manifestlog._revlog.addgroup(self, revmap, trp)
256 256 repo.ui.progress(_('manifests'), None)
257 257 self.callback = None
258 258
259 259 def apply(self, repo, tr, srctype, url, emptyok=False,
260 260 targetphase=phases.draft, expectedtotal=None):
261 261 """Add the changegroup returned by source.read() to this repo.
262 262 srctype is a string like 'push', 'pull', or 'unbundle'. url is
263 263 the URL of the repo where this changegroup is coming from.
264 264
265 265 Return a tuple (ret, addednodes); ret is an integer summarizing the change:
266 266 - nothing changed or no source: 0
267 267 - more heads than before: 1+added heads (2..n)
268 268 - fewer heads than before: -1-removed heads (-2..-n)
269 269 - number of heads stays the same: 1
270 270 """
271 271 repo = repo.unfiltered()
272 272 def csmap(x):
273 273 repo.ui.debug("add changeset %s\n" % short(x))
274 274 return len(cl)
275 275
276 276 def revmap(x):
277 277 return cl.rev(x)
278 278
279 279 changesets = files = revisions = 0
280 280
281 281 try:
282 282 # The transaction may already carry source information. In this
283 283 # case we use the top level data. We overwrite the argument
284 284 # because we need to use the top level value (if they exist)
285 285 # in this function.
286 286 srctype = tr.hookargs.setdefault('source', srctype)
287 287 url = tr.hookargs.setdefault('url', url)
288 288 repo.hook('prechangegroup', throw=True, **tr.hookargs)
289 289
290 290 # write changelog data to temp files so concurrent readers
291 291 # will not see an inconsistent view
292 292 cl = repo.changelog
293 293 cl.delayupdate(tr)
294 294 oldheads = set(cl.heads())
295 295
296 296 trp = weakref.proxy(tr)
297 297 # pull off the changeset group
298 298 repo.ui.status(_("adding changesets\n"))
299 299 clstart = len(cl)
300 300 class prog(object):
301 301 def __init__(self, step, total):
302 302 self._step = step
303 303 self._total = total
304 304 self._count = 1
305 305 def __call__(self):
306 306 repo.ui.progress(self._step, self._count, unit=_('chunks'),
307 307 total=self._total)
308 308 self._count += 1
309 309 self.callback = prog(_('changesets'), expectedtotal)
310 310
311 311 efiles = set()
312 312 def onchangelog(cl, node):
313 313 efiles.update(cl.readfiles(node))
314 314
315 315 self.changelogheader()
316 316 cgnodes = cl.addgroup(self, csmap, trp, addrevisioncb=onchangelog)
317 317 efiles = len(efiles)
318 318
319 319 if not (cgnodes or emptyok):
320 320 raise error.Abort(_("received changelog group is empty"))
321 321 clend = len(cl)
322 322 changesets = clend - clstart
323 323 repo.ui.progress(_('changesets'), None)
324 324 self.callback = None
325 325
326 326 # pull off the manifest group
327 327 repo.ui.status(_("adding manifests\n"))
328 328 self._unpackmanifests(repo, revmap, trp, prog, changesets)
329 329
330 330 needfiles = {}
331 331 if repo.ui.configbool('server', 'validate', default=False):
332 332 cl = repo.changelog
333 333 ml = repo.manifestlog
334 334 # validate incoming csets have their manifests
335 335 for cset in xrange(clstart, clend):
336 336 mfnode = cl.changelogrevision(cset).manifest
337 337 mfest = ml[mfnode].readdelta()
338 338 # store file cgnodes we must see
339 339 for f, n in mfest.iteritems():
340 340 needfiles.setdefault(f, set()).add(n)
341 341
342 342 # process the files
343 343 repo.ui.status(_("adding file changes\n"))
344 344 newrevs, newfiles = _addchangegroupfiles(
345 345 repo, self, revmap, trp, efiles, needfiles)
346 346 revisions += newrevs
347 347 files += newfiles
348 348
349 349 deltaheads = 0
350 350 if oldheads:
351 351 heads = cl.heads()
352 352 deltaheads = len(heads) - len(oldheads)
353 353 for h in heads:
354 354 if h not in oldheads and repo[h].closesbranch():
355 355 deltaheads -= 1
356 356 htext = ""
357 357 if deltaheads:
358 358 htext = _(" (%+d heads)") % deltaheads
359 359
360 360 repo.ui.status(_("added %d changesets"
361 361 " with %d changes to %d files%s\n")
362 362 % (changesets, revisions, files, htext))
363 363 repo.invalidatevolatilesets()
364 364
365 365 if changesets > 0:
366 366 if 'node' not in tr.hookargs:
367 367 tr.hookargs['node'] = hex(cl.node(clstart))
368 368 tr.hookargs['node_last'] = hex(cl.node(clend - 1))
369 369 hookargs = dict(tr.hookargs)
370 370 else:
371 371 hookargs = dict(tr.hookargs)
372 372 hookargs['node'] = hex(cl.node(clstart))
373 373 hookargs['node_last'] = hex(cl.node(clend - 1))
374 374 repo.hook('pretxnchangegroup', throw=True, **hookargs)
375 375
376 376 added = [cl.node(r) for r in xrange(clstart, clend)]
377 377 if srctype in ('push', 'serve'):
378 378 # Old servers can not push the boundary themselves.
379 379 # New servers won't push the boundary if the changeset already
380 380 # exists locally as secret
381 381 #
382 382 # We should not use added here but the list of all changes in
383 383 # the bundle
384 384 if repo.publishing():
385 385 phases.advanceboundary(repo, tr, phases.public, cgnodes)
386 386 else:
387 387 # Those changesets have been pushed from the
388 388 # outside, their phases are going to be pushed
389 389 # alongside. Therefore `targetphase` is
390 390 # ignored.
391 391 phases.advanceboundary(repo, tr, phases.draft, cgnodes)
392 392 phases.retractboundary(repo, tr, phases.draft, added)
393 393 elif srctype != 'strip':
394 394 # publishing only alters behavior during push
395 395 #
396 396 # strip should not touch boundary at all
397 397 phases.retractboundary(repo, tr, targetphase, added)
398 398
399 399 if changesets > 0:
400 400
401 401 def runhooks():
402 402 # These hooks run when the lock releases, not when the
403 403 # transaction closes. So it's possible for the changelog
404 404 # to have changed since we last saw it.
405 405 if clstart >= len(repo):
406 406 return
407 407
408 408 repo.hook("changegroup", **hookargs)
409 409
410 410 for n in added:
411 411 args = hookargs.copy()
412 412 args['node'] = hex(n)
413 413 del args['node_last']
414 414 repo.hook("incoming", **args)
415 415
416 416 newheads = [h for h in repo.heads()
417 417 if h not in oldheads]
418 418 repo.ui.log("incoming",
419 419 "%s incoming changes - new heads: %s\n",
420 420 len(added),
421 421 ', '.join([hex(c[:6]) for c in newheads]))
422 422
423 423 tr.addpostclose('changegroup-runhooks-%020i' % clstart,
424 424 lambda tr: repo._afterlock(runhooks))
425 425 finally:
426 426 repo.ui.flush()
427 427 # never return 0 here:
428 428 if deltaheads < 0:
429 return deltaheads - 1
429 ret = deltaheads - 1
430 430 else:
431 return deltaheads + 1
431 ret = deltaheads + 1
432 return ret, added
432 433
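apply() now hands back (ret, added) instead of the bare integer, which is what the bundle2 handlers unpack. The integer half still uses the encoding from the docstring; a small illustrative decoder:

    def describecgresult(ret):
        # interprets the integer half of apply()'s (ret, added) return
        if ret == 0:
            return 'nothing changed (or no source)'
        if ret == 1:
            return 'head count unchanged'
        if ret > 1:
            return '%d heads added' % (ret - 1)
        return '%d heads removed' % (-1 - ret)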
433 434 class cg2unpacker(cg1unpacker):
434 435 """Unpacker for cg2 streams.
435 436
436 437 cg2 streams add support for generaldelta, so the delta header
437 438 format is slightly different. All other features about the data
438 439 remain the same.
439 440 """
440 441 deltaheader = _CHANGEGROUPV2_DELTA_HEADER
441 442 deltaheadersize = struct.calcsize(deltaheader)
442 443 version = '02'
443 444
444 445 def _deltaheader(self, headertuple, prevnode):
445 446 node, p1, p2, deltabase, cs = headertuple
446 447 flags = 0
447 448 return node, p1, p2, deltabase, cs, flags
448 449
449 450 class cg3unpacker(cg2unpacker):
450 451 """Unpacker for cg3 streams.
451 452
452 453 cg3 streams add support for exchanging treemanifests and revlog
453 454 flags. It adds the revlog flags to the delta header and an empty chunk
454 455 separating manifests and files.
455 456 """
456 457 deltaheader = _CHANGEGROUPV3_DELTA_HEADER
457 458 deltaheadersize = struct.calcsize(deltaheader)
458 459 version = '03'
459 460 _grouplistcount = 2 # One list of manifests and one list of files
460 461
461 462 def _deltaheader(self, headertuple, prevnode):
462 463 node, p1, p2, deltabase, cs, flags = headertuple
463 464 return node, p1, p2, deltabase, cs, flags
464 465
465 466 def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
466 467 super(cg3unpacker, self)._unpackmanifests(repo, revmap, trp, prog,
467 468 numchanges)
468 469 for chunkdata in iter(self.filelogheader, {}):
469 470 # If we get here, there are directory manifests in the changegroup
470 471 d = chunkdata["filename"]
471 472 repo.ui.debug("adding %s revisions\n" % d)
472 473 dirlog = repo.manifestlog._revlog.dirlog(d)
473 474 if not dirlog.addgroup(self, revmap, trp):
474 475 raise error.Abort(_("received dir revlog group is empty"))
475 476
476 477 class headerlessfixup(object):
477 478 def __init__(self, fh, h):
478 479 self._h = h
479 480 self._fh = fh
480 481 def read(self, n):
481 482 if self._h:
482 483 d, self._h = self._h[:n], self._h[n:]
483 484 if len(d) < n:
484 485 d += readexactly(self._fh, n - len(d))
485 486 return d
486 487 return readexactly(self._fh, n)
487 488
488 489 class cg1packer(object):
489 490 deltaheader = _CHANGEGROUPV1_DELTA_HEADER
490 491 version = '01'
491 492 def __init__(self, repo, bundlecaps=None):
492 493 """Given a source repo, construct a bundler.
493 494
494 495 bundlecaps is optional and can be used to specify the set of
495 496 capabilities which can be used to build the bundle. While bundlecaps is
496 497 unused in core Mercurial, extensions rely on this feature to communicate
497 498 capabilities to customize the changegroup packer.
498 499 """
499 500 # Set of capabilities we can use to build the bundle.
500 501 if bundlecaps is None:
501 502 bundlecaps = set()
502 503 self._bundlecaps = bundlecaps
503 504 # experimental config: bundle.reorder
504 505 reorder = repo.ui.config('bundle', 'reorder', 'auto')
505 506 if reorder == 'auto':
506 507 reorder = None
507 508 else:
508 509 reorder = util.parsebool(reorder)
509 510 self._repo = repo
510 511 self._reorder = reorder
511 512 self._progress = repo.ui.progress
512 513 if self._repo.ui.verbose and not self._repo.ui.debugflag:
513 514 self._verbosenote = self._repo.ui.note
514 515 else:
515 516 self._verbosenote = lambda s: None
516 517
517 518 def close(self):
518 519 return closechunk()
519 520
520 521 def fileheader(self, fname):
521 522 return chunkheader(len(fname)) + fname
522 523
523 524 # Extracted both for clarity and for overriding in extensions.
524 525 def _sortgroup(self, revlog, nodelist, lookup):
525 526 """Sort nodes for change group and turn them into revnums."""
526 527 # for generaldelta revlogs, we linearize the revs; this will both be
527 528 # much quicker and generate a much smaller bundle
528 529 if (revlog._generaldelta and self._reorder is None) or self._reorder:
529 530 dag = dagutil.revlogdag(revlog)
530 531 return dag.linearize(set(revlog.rev(n) for n in nodelist))
531 532 else:
532 533 return sorted([revlog.rev(n) for n in nodelist])
533 534
534 535 def group(self, nodelist, revlog, lookup, units=None):
535 536 """Calculate a delta group, yielding a sequence of changegroup chunks
536 537 (strings).
537 538
538 539 Given a list of changeset revs, return a set of deltas and
539 540 metadata corresponding to nodes. The first delta is
540 541 first parent(nodelist[0]) -> nodelist[0], the receiver is
541 542 guaranteed to have this parent as it has all history before
542 543 these changesets. In the case firstparent is nullrev the
543 544 changegroup starts with a full revision.
544 545
545 546 If units is not None, progress detail will be generated; units specifies
546 547 the type of revlog that is touched (changelog, manifest, etc.).
547 548 """
548 549 # if we don't have any revisions touched by these changesets, bail
549 550 if len(nodelist) == 0:
550 551 yield self.close()
551 552 return
552 553
553 554 revs = self._sortgroup(revlog, nodelist, lookup)
554 555
555 556 # add the parent of the first rev
556 557 p = revlog.parentrevs(revs[0])[0]
557 558 revs.insert(0, p)
558 559
559 560 # build deltas
560 561 total = len(revs) - 1
561 562 msgbundling = _('bundling')
562 563 for r in xrange(len(revs) - 1):
563 564 if units is not None:
564 565 self._progress(msgbundling, r + 1, unit=units, total=total)
565 566 prev, curr = revs[r], revs[r + 1]
566 567 linknode = lookup(revlog.node(curr))
567 568 for c in self.revchunk(revlog, curr, prev, linknode):
568 569 yield c
569 570
570 571 if units is not None:
571 572 self._progress(msgbundling, None)
572 573 yield self.close()
573 574
574 575 # filter any nodes that claim to be part of the known set
575 576 def prune(self, revlog, missing, commonrevs):
576 577 rr, rl = revlog.rev, revlog.linkrev
577 578 return [n for n in missing if rl(rr(n)) not in commonrevs]
578 579
579 580 def _packmanifests(self, dir, mfnodes, lookuplinknode):
580 581 """Pack flat manifests into a changegroup stream."""
581 582 assert not dir
582 583 for chunk in self.group(mfnodes, self._repo.manifestlog._revlog,
583 584 lookuplinknode, units=_('manifests')):
584 585 yield chunk
585 586
586 587 def _manifestsdone(self):
587 588 return ''
588 589
589 590 def generate(self, commonrevs, clnodes, fastpathlinkrev, source):
590 591 '''yield a sequence of changegroup chunks (strings)'''
591 592 repo = self._repo
592 593 cl = repo.changelog
593 594
594 595 clrevorder = {}
595 596 mfs = {} # needed manifests
596 597 fnodes = {} # needed file nodes
597 598 changedfiles = set()
598 599
599 600 # Callback for the changelog, used to collect changed files and manifest
600 601 # nodes.
601 602 # Returns the linkrev node (identity in the changelog case).
602 603 def lookupcl(x):
603 604 c = cl.read(x)
604 605 clrevorder[x] = len(clrevorder)
605 606 n = c[0]
606 607 # record the first changeset introducing this manifest version
607 608 mfs.setdefault(n, x)
608 609 # Record a complete list of potentially-changed files in
609 610 # this manifest.
610 611 changedfiles.update(c[3])
611 612 return x
612 613
613 614 self._verbosenote(_('uncompressed size of bundle content:\n'))
614 615 size = 0
615 616 for chunk in self.group(clnodes, cl, lookupcl, units=_('changesets')):
616 617 size += len(chunk)
617 618 yield chunk
618 619 self._verbosenote(_('%8.i (changelog)\n') % size)
619 620
620 621 # We need to make sure that the linkrev in the changegroup refers to
621 622 # the first changeset that introduced the manifest or file revision.
622 623 # The fastpath is usually safer than the slowpath, because the filelogs
623 624 # are walked in revlog order.
624 625 #
625 626 # When taking the slowpath with reorder=None and the manifest revlog
626 627 # uses generaldelta, the manifest may be walked in the "wrong" order.
627 628 # Without 'clrevorder', we would get an incorrect linkrev (see fix in
628 629 # cc0ff93d0c0c).
629 630 #
630 631 # When taking the fastpath, we are only vulnerable to reordering
631 632 # of the changelog itself. The changelog never uses generaldelta, so
632 633 # it is only reordered when reorder=True. To handle this case, we
633 634 # simply take the slowpath, which already has the 'clrevorder' logic.
634 635 # This was also fixed in cc0ff93d0c0c.
635 636 fastpathlinkrev = fastpathlinkrev and not self._reorder
636 637 # Treemanifests don't work correctly with fastpathlinkrev
637 638 # either, because we don't discover which directory nodes to
638 639 # send along with files. This could probably be fixed.
639 640 fastpathlinkrev = fastpathlinkrev and (
640 641 'treemanifest' not in repo.requirements)
641 642
642 643 for chunk in self.generatemanifests(commonrevs, clrevorder,
643 644 fastpathlinkrev, mfs, fnodes):
644 645 yield chunk
645 646 mfs.clear()
646 647 clrevs = set(cl.rev(x) for x in clnodes)
647 648
648 649 if not fastpathlinkrev:
649 650 def linknodes(unused, fname):
650 651 return fnodes.get(fname, {})
651 652 else:
652 653 cln = cl.node
653 654 def linknodes(filerevlog, fname):
654 655 llr = filerevlog.linkrev
655 656 fln = filerevlog.node
656 657 revs = ((r, llr(r)) for r in filerevlog)
657 658 return dict((fln(r), cln(lr)) for r, lr in revs if lr in clrevs)
658 659
659 660 for chunk in self.generatefiles(changedfiles, linknodes, commonrevs,
660 661 source):
661 662 yield chunk
662 663
663 664 yield self.close()
664 665
665 666 if clnodes:
666 667 repo.hook('outgoing', node=hex(clnodes[0]), source=source)
667 668
668 669 def generatemanifests(self, commonrevs, clrevorder, fastpathlinkrev, mfs,
669 670 fnodes):
670 671 repo = self._repo
671 672 mfl = repo.manifestlog
672 673 dirlog = mfl._revlog.dirlog
673 674 tmfnodes = {'': mfs}
674 675
675 676 # Callback for the manifest, used to collect linkrevs for filelog
676 677 # revisions.
677 678 # Returns the linkrev node (collected in lookupcl).
678 679 def makelookupmflinknode(dir):
679 680 if fastpathlinkrev:
680 681 assert not dir
681 682 return mfs.__getitem__
682 683
683 684 def lookupmflinknode(x):
684 685 """Callback for looking up the linknode for manifests.
685 686
686 687 Returns the linkrev node for the specified manifest.
687 688
688 689 SIDE EFFECT:
689 690
690 691 1) fclnodes gets populated with the list of relevant
691 692 file nodes if we're not using fastpathlinkrev
692 693 2) When treemanifests are in use, collects treemanifest nodes
693 694 to send
694 695
695 696 Note that this means manifests must be completely sent to
696 697 the client before you can trust the list of files and
697 698 treemanifests to send.
698 699 """
699 700 clnode = tmfnodes[dir][x]
700 701 mdata = mfl.get(dir, x).readfast(shallow=True)
701 702 for p, n, fl in mdata.iterentries():
702 703 if fl == 't': # subdirectory manifest
703 704 subdir = dir + p + '/'
704 705 tmfclnodes = tmfnodes.setdefault(subdir, {})
705 706 tmfclnode = tmfclnodes.setdefault(n, clnode)
706 707 if clrevorder[clnode] < clrevorder[tmfclnode]:
707 708 tmfclnodes[n] = clnode
708 709 else:
709 710 f = dir + p
710 711 fclnodes = fnodes.setdefault(f, {})
711 712 fclnode = fclnodes.setdefault(n, clnode)
712 713 if clrevorder[clnode] < clrevorder[fclnode]:
713 714 fclnodes[n] = clnode
714 715 return clnode
715 716 return lookupmflinknode
716 717
717 718 size = 0
718 719 while tmfnodes:
719 720 dir = min(tmfnodes)
720 721 nodes = tmfnodes[dir]
721 722 prunednodes = self.prune(dirlog(dir), nodes, commonrevs)
722 723 if not dir or prunednodes:
723 724 for x in self._packmanifests(dir, prunednodes,
724 725 makelookupmflinknode(dir)):
725 726 size += len(x)
726 727 yield x
727 728 del tmfnodes[dir]
728 729 self._verbosenote(_('%8.i (manifests)\n') % size)
729 730 yield self._manifestsdone()
730 731
731 732 # The 'source' parameter is useful for extensions
732 733 def generatefiles(self, changedfiles, linknodes, commonrevs, source):
733 734 repo = self._repo
734 735 progress = self._progress
735 736 msgbundling = _('bundling')
736 737
737 738 total = len(changedfiles)
738 739 # for progress output
739 740 msgfiles = _('files')
740 741 for i, fname in enumerate(sorted(changedfiles)):
741 742 filerevlog = repo.file(fname)
742 743 if not filerevlog:
743 744 raise error.Abort(_("empty or missing revlog for %s") % fname)
744 745
745 746 linkrevnodes = linknodes(filerevlog, fname)
746 747 # Lookup for filenodes, we collected the linkrev nodes above in the
747 748 # fastpath case and with lookupmf in the slowpath case.
748 749 def lookupfilelog(x):
749 750 return linkrevnodes[x]
750 751
751 752 filenodes = self.prune(filerevlog, linkrevnodes, commonrevs)
752 753 if filenodes:
753 754 progress(msgbundling, i + 1, item=fname, unit=msgfiles,
754 755 total=total)
755 756 h = self.fileheader(fname)
756 757 size = len(h)
757 758 yield h
758 759 for chunk in self.group(filenodes, filerevlog, lookupfilelog):
759 760 size += len(chunk)
760 761 yield chunk
761 762 self._verbosenote(_('%8.i %s\n') % (size, fname))
762 763 progress(msgbundling, None)
763 764
764 765 def deltaparent(self, revlog, rev, p1, p2, prev):
765 766 return prev
766 767
767 768 def revchunk(self, revlog, rev, prev, linknode):
768 769 node = revlog.node(rev)
769 770 p1, p2 = revlog.parentrevs(rev)
770 771 base = self.deltaparent(revlog, rev, p1, p2, prev)
771 772
772 773 prefix = ''
773 774 if revlog.iscensored(base) or revlog.iscensored(rev):
774 775 try:
775 776 delta = revlog.revision(node, raw=True)
776 777 except error.CensoredNodeError as e:
777 778 delta = e.tombstone
778 779 if base == nullrev:
779 780 prefix = mdiff.trivialdiffheader(len(delta))
780 781 else:
781 782 baselen = revlog.rawsize(base)
782 783 prefix = mdiff.replacediffheader(baselen, len(delta))
783 784 elif base == nullrev:
784 785 delta = revlog.revision(node, raw=True)
785 786 prefix = mdiff.trivialdiffheader(len(delta))
786 787 else:
787 788 delta = revlog.revdiff(base, rev)
788 789 p1n, p2n = revlog.parents(node)
789 790 basenode = revlog.node(base)
790 791 flags = revlog.flags(rev)
791 792 meta = self.builddeltaheader(node, p1n, p2n, basenode, linknode, flags)
792 793 meta += prefix
793 794 l = len(meta) + len(delta)
794 795 yield chunkheader(l)
795 796 yield meta
796 797 yield delta
797 798 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
798 799 # do nothing with basenode, it is implicitly the previous one in HG10
799 800 # do nothing with flags, it is implicitly 0 for cg1 and cg2
800 801 return struct.pack(self.deltaheader, node, p1n, p2n, linknode)
801 802
802 803 class cg2packer(cg1packer):
803 804 version = '02'
804 805 deltaheader = _CHANGEGROUPV2_DELTA_HEADER
805 806
806 807 def __init__(self, repo, bundlecaps=None):
807 808 super(cg2packer, self).__init__(repo, bundlecaps)
808 809 if self._reorder is None:
809 810 # Since generaldelta is directly supported by cg2, reordering
810 811 # generally doesn't help, so we disable it by default (treating
811 812 # bundle.reorder=auto just like bundle.reorder=False).
812 813 self._reorder = False
813 814
814 815 def deltaparent(self, revlog, rev, p1, p2, prev):
815 816 dp = revlog.deltaparent(rev)
816 817 if dp == nullrev and revlog.storedeltachains:
817 818 # Avoid sending full revisions when delta parent is null. Pick prev
818 819 # in that case. It's tempting to pick p1 in this case, as p1 will
819 820 # be smaller in the common case. However, computing a delta against
820 821 # p1 may require resolving the raw text of p1, which could be
821 822 # expensive. The revlog caches should have prev cached, meaning
822 823 # less CPU for changegroup generation. There is likely room to add
823 824 # a flag and/or config option to control this behavior.
824 825 return prev
825 826 elif dp == nullrev:
826 827 # revlog is configured to use full snapshot for a reason,
827 828 # stick to full snapshot.
828 829 return nullrev
829 830 elif dp not in (p1, p2, prev):
830 831 # Pick prev when we can't be sure remote has the base revision.
831 832 return prev
832 833 else:
833 834 return dp
834 835
835 836 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
836 837 # Do nothing with flags, it is implicitly 0 in cg1 and cg2
837 838 return struct.pack(self.deltaheader, node, p1n, p2n, basenode, linknode)
838 839
839 840 class cg3packer(cg2packer):
840 841 version = '03'
841 842 deltaheader = _CHANGEGROUPV3_DELTA_HEADER
842 843
843 844 def _packmanifests(self, dir, mfnodes, lookuplinknode):
844 845 if dir:
845 846 yield self.fileheader(dir)
846 847
847 848 dirlog = self._repo.manifestlog._revlog.dirlog(dir)
848 849 for chunk in self.group(mfnodes, dirlog, lookuplinknode,
849 850 units=_('manifests')):
850 851 yield chunk
851 852
852 853 def _manifestsdone(self):
853 854 return self.close()
854 855
855 856 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
856 857 return struct.pack(
857 858 self.deltaheader, node, p1n, p2n, basenode, linknode, flags)
858 859
859 860 _packermap = {'01': (cg1packer, cg1unpacker),
860 861 # cg2 adds support for exchanging generaldelta
861 862 '02': (cg2packer, cg2unpacker),
862 863 # cg3 adds support for exchanging revlog flags and treemanifests
863 864 '03': (cg3packer, cg3unpacker),
864 865 }
865 866
866 867 def allsupportedversions(repo):
867 868 versions = set(_packermap.keys())
868 869 if not (repo.ui.configbool('experimental', 'changegroup3') or
869 870 repo.ui.configbool('experimental', 'treemanifest') or
870 871 'treemanifest' in repo.requirements):
871 872 versions.discard('03')
872 873 return versions
873 874
874 875 # Changegroup versions that can be applied to the repo
875 876 def supportedincomingversions(repo):
876 877 return allsupportedversions(repo)
877 878
878 879 # Changegroup versions that can be created from the repo
879 880 def supportedoutgoingversions(repo):
880 881 versions = allsupportedversions(repo)
881 882 if 'treemanifest' in repo.requirements:
882 883 # Versions 01 and 02 support only flat manifests and it's just too
883 884 # expensive to convert between the flat manifest and tree manifest on
884 885 # the fly. Since tree manifests are hashed differently, all of history
885 886 # would have to be converted. Instead, we simply don't even pretend to
886 887 # support versions 01 and 02.
887 888 versions.discard('01')
888 889 versions.discard('02')
889 890 return versions
890 891
891 892 def safeversion(repo):
892 893 # Finds the smallest version that it's safe to assume clients of the repo
893 894 # will support. For example, all hg versions that support generaldelta also
894 895 # support changegroup 02.
895 896 versions = supportedoutgoingversions(repo)
896 897 if 'generaldelta' in repo.requirements:
897 898 versions.discard('01')
898 899 assert versions
899 900 return min(versions)
900 901
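safeversion deliberately takes the minimum of the remaining versions: any client new enough for generaldelta also understands changegroup 02, so '01' can be dropped first. The same set arithmetic in isolation (requirements values illustrative):

    requirements = {'revlogv1', 'generaldelta'}   # sample repo requirements
    versions = {'01', '02'}                       # producible versions
    if 'generaldelta' in requirements:
        versions.discard('01')
    assert min(versions) == '02'                  # smallest safe version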
901 902 def getbundler(version, repo, bundlecaps=None):
902 903 assert version in supportedoutgoingversions(repo)
903 904 return _packermap[version][0](repo, bundlecaps)
904 905
905 906 def getunbundler(version, fh, alg, extras=None):
906 907 return _packermap[version][1](fh, alg, extras=extras)
907 908
908 909 def _changegroupinfo(repo, nodes, source):
909 910 if repo.ui.verbose or source == 'bundle':
910 911 repo.ui.status(_("%d changesets found\n") % len(nodes))
911 912 if repo.ui.debugflag:
912 913 repo.ui.debug("list of changesets:\n")
913 914 for node in nodes:
914 915 repo.ui.debug("%s\n" % hex(node))
915 916
916 917 def getsubsetraw(repo, outgoing, bundler, source, fastpath=False):
917 918 repo = repo.unfiltered()
918 919 commonrevs = outgoing.common
919 920 csets = outgoing.missing
920 921 heads = outgoing.missingheads
921 922 # We go through the fast path if we get told to, or if all (unfiltered)
922 923 # heads have been requested (since we then know that all linkrevs will
923 924 # be pulled by the client).
924 925 heads.sort()
925 926 fastpathlinkrev = fastpath or (
926 927 repo.filtername is None and heads == sorted(repo.heads()))
927 928
928 929 repo.hook('preoutgoing', throw=True, source=source)
929 930 _changegroupinfo(repo, csets, source)
930 931 return bundler.generate(commonrevs, csets, fastpathlinkrev, source)
931 932
932 933 def getsubset(repo, outgoing, bundler, source, fastpath=False):
933 934 gengroup = getsubsetraw(repo, outgoing, bundler, source, fastpath)
934 935 return getunbundler(bundler.version, util.chunkbuffer(gengroup), None,
935 936 {'clcount': len(outgoing.missing)})
936 937
937 938 def changegroupsubset(repo, roots, heads, source, version='01'):
938 939 """Compute a changegroup consisting of all the nodes that are
939 940 descendants of any of the roots and ancestors of any of the heads.
940 941 Return a chunkbuffer object whose read() method will return
941 942 successive changegroup chunks.
942 943
943 944 It is fairly complex as determining which filenodes and which
944 945 manifest nodes need to be included for the changeset to be complete
945 946 is non-trivial.
946 947
947 948 Another wrinkle is doing the reverse, figuring out which changeset in
948 949 the changegroup a particular filenode or manifestnode belongs to.
949 950 """
950 951 outgoing = discovery.outgoing(repo, missingroots=roots, missingheads=heads)
951 952 bundler = getbundler(version, repo)
952 953 return getsubset(repo, outgoing, bundler, source)
953 954
954 955 def getlocalchangegroupraw(repo, source, outgoing, bundlecaps=None,
955 956 version='01'):
956 957 """Like getbundle, but taking a discovery.outgoing as an argument.
957 958
958 959 This is only implemented for local repos and reuses potentially
959 960 precomputed sets in outgoing. Returns a raw changegroup generator."""
960 961 if not outgoing.missing:
961 962 return None
962 963 bundler = getbundler(version, repo, bundlecaps)
963 964 return getsubsetraw(repo, outgoing, bundler, source)
964 965
965 966 def getchangegroup(repo, source, outgoing, bundlecaps=None,
966 967 version='01'):
967 968 """Like getbundle, but taking a discovery.outgoing as an argument.
968 969
969 970 This is only implemented for local repos and reuses potentially
970 971 precomputed sets in outgoing."""
971 972 if not outgoing.missing:
972 973 return None
973 974 bundler = getbundler(version, repo, bundlecaps)
974 975 return getsubset(repo, outgoing, bundler, source)
975 976
976 977 def getlocalchangegroup(repo, *args, **kwargs):
977 978 repo.ui.deprecwarn('getlocalchangegroup is deprecated, use getchangegroup',
978 979 '4.3')
979 980 return getchangegroup(repo, *args, **kwargs)
980 981
981 982 def changegroup(repo, basenodes, source):
982 983 # to avoid a race we use changegroupsubset() (issue1320)
983 984 return changegroupsubset(repo, basenodes, repo.heads(), source)
984 985
985 986 def _addchangegroupfiles(repo, source, revmap, trp, expectedfiles, needfiles):
986 987 revisions = 0
987 988 files = 0
988 989 for chunkdata in iter(source.filelogheader, {}):
989 990 files += 1
990 991 f = chunkdata["filename"]
991 992 repo.ui.debug("adding %s revisions\n" % f)
992 993 repo.ui.progress(_('files'), files, unit=_('files'),
993 994 total=expectedfiles)
994 995 fl = repo.file(f)
995 996 o = len(fl)
996 997 try:
997 998 if not fl.addgroup(source, revmap, trp):
998 999 raise error.Abort(_("received file revlog group is empty"))
999 1000 except error.CensoredBaseError as e:
1000 1001 raise error.Abort(_("received delta base is censored: %s") % e)
1001 1002 revisions += len(fl) - o
1002 1003 if f in needfiles:
1003 1004 needs = needfiles[f]
1004 1005 for new in xrange(o, len(fl)):
1005 1006 n = fl.node(new)
1006 1007 if n in needs:
1007 1008 needs.remove(n)
1008 1009 else:
1009 1010 raise error.Abort(
1010 1011 _("received spurious file revlog entry"))
1011 1012 if not needs:
1012 1013 del needfiles[f]
1013 1014 repo.ui.progress(_('files'), None)
1014 1015
1015 1016 for f, needs in needfiles.iteritems():
1016 1017 fl = repo.file(f)
1017 1018 for n in needs:
1018 1019 try:
1019 1020 fl.rev(n)
1020 1021 except error.LookupError:
1021 1022 raise error.Abort(
1022 1023 _('missing file data for %s:%s - run hg verify') %
1023 1024 (f, hex(n)))
1024 1025
1025 1026 return revisions, files
@@ -1,5400 +1,5400 b''
1 1 # commands.py - command processing for mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import difflib
11 11 import errno
12 12 import os
13 13 import re
14 14 import sys
15 15
16 16 from .i18n import _
17 17 from .node import (
18 18 hex,
19 19 nullid,
20 20 nullrev,
21 21 short,
22 22 )
23 23 from . import (
24 24 archival,
25 25 bookmarks,
26 26 bundle2,
27 27 changegroup,
28 28 cmdutil,
29 29 copies,
30 30 debugcommands as debugcommandsmod,
31 31 destutil,
32 32 dirstateguard,
33 33 discovery,
34 34 encoding,
35 35 error,
36 36 exchange,
37 37 extensions,
38 38 formatter,
39 39 graphmod,
40 40 hbisect,
41 41 help,
42 42 hg,
43 43 lock as lockmod,
44 44 merge as mergemod,
45 45 obsolete,
46 46 patch,
47 47 phases,
48 48 pycompat,
49 49 rcutil,
50 50 registrar,
51 51 revsetlang,
52 52 scmutil,
53 53 server,
54 54 sshserver,
55 55 streamclone,
56 56 tags as tagsmod,
57 57 templatekw,
58 58 ui as uimod,
59 59 util,
60 60 )
61 61
62 62 release = lockmod.release
63 63
64 64 table = {}
65 65 table.update(debugcommandsmod.command._table)
66 66
67 67 command = registrar.command(table)
68 68
69 69 # common command options
70 70
71 71 globalopts = [
72 72 ('R', 'repository', '',
73 73 _('repository root directory or name of overlay bundle file'),
74 74 _('REPO')),
75 75 ('', 'cwd', '',
76 76 _('change working directory'), _('DIR')),
77 77 ('y', 'noninteractive', None,
78 78 _('do not prompt, automatically pick the first choice for all prompts')),
79 79 ('q', 'quiet', None, _('suppress output')),
80 80 ('v', 'verbose', None, _('enable additional output')),
81 81 ('', 'color', '',
82 82 # i18n: 'always', 'auto', 'never', and 'debug' are keywords
83 83 # and should not be translated
84 84 _("when to colorize (boolean, always, auto, never, or debug)"),
85 85 _('TYPE')),
86 86 ('', 'config', [],
87 87 _('set/override config option (use \'section.name=value\')'),
88 88 _('CONFIG')),
89 89 ('', 'debug', None, _('enable debugging output')),
90 90 ('', 'debugger', None, _('start debugger')),
91 91 ('', 'encoding', encoding.encoding, _('set the charset encoding'),
92 92 _('ENCODE')),
93 93 ('', 'encodingmode', encoding.encodingmode,
94 94 _('set the charset encoding mode'), _('MODE')),
95 95 ('', 'traceback', None, _('always print a traceback on exception')),
96 96 ('', 'time', None, _('time how long the command takes')),
97 97 ('', 'profile', None, _('print command execution profile')),
98 98 ('', 'version', None, _('output version information and exit')),
99 99 ('h', 'help', None, _('display help and exit')),
100 100 ('', 'hidden', False, _('consider hidden changesets')),
101 101 ('', 'pager', 'auto',
102 102 _("when to paginate (boolean, always, auto, or never)"), _('TYPE')),
103 103 ]
104 104
105 105 dryrunopts = cmdutil.dryrunopts
106 106 remoteopts = cmdutil.remoteopts
107 107 walkopts = cmdutil.walkopts
108 108 commitopts = cmdutil.commitopts
109 109 commitopts2 = cmdutil.commitopts2
110 110 formatteropts = cmdutil.formatteropts
111 111 templateopts = cmdutil.templateopts
112 112 logopts = cmdutil.logopts
113 113 diffopts = cmdutil.diffopts
114 114 diffwsopts = cmdutil.diffwsopts
115 115 diffopts2 = cmdutil.diffopts2
116 116 mergetoolopts = cmdutil.mergetoolopts
117 117 similarityopts = cmdutil.similarityopts
118 118 subrepoopts = cmdutil.subrepoopts
119 119 debugrevlogopts = cmdutil.debugrevlogopts
120 120
121 121 # Commands start here, listed alphabetically
122 122
123 123 @command('^add',
124 124 walkopts + subrepoopts + dryrunopts,
125 125 _('[OPTION]... [FILE]...'),
126 126 inferrepo=True)
127 127 def add(ui, repo, *pats, **opts):
128 128 """add the specified files on the next commit
129 129
130 130 Schedule files to be version controlled and added to the
131 131 repository.
132 132
133 133 The files will be added to the repository at the next commit. To
134 134 undo an add before that, see :hg:`forget`.
135 135
136 136 If no names are given, add all files to the repository (except
137 137 files matching ``.hgignore``).
138 138
139 139 .. container:: verbose
140 140
141 141 Examples:
142 142
143 143 - New (unknown) files are added
144 144 automatically by :hg:`add`::
145 145
146 146 $ ls
147 147 foo.c
148 148 $ hg status
149 149 ? foo.c
150 150 $ hg add
151 151 adding foo.c
152 152 $ hg status
153 153 A foo.c
154 154
155 155 - Specific files to be added can be specified::
156 156
157 157 $ ls
158 158 bar.c foo.c
159 159 $ hg status
160 160 ? bar.c
161 161 ? foo.c
162 162 $ hg add bar.c
163 163 $ hg status
164 164 A bar.c
165 165 ? foo.c
166 166
167 167 Returns 0 if all files are successfully added.
168 168 """
169 169
170 170 m = scmutil.match(repo[None], pats, pycompat.byteskwargs(opts))
171 171 rejected = cmdutil.add(ui, repo, m, "", False, **opts)
172 172 return rejected and 1 or 0
173 173
174 174 @command('addremove',
175 175 similarityopts + subrepoopts + walkopts + dryrunopts,
176 176 _('[OPTION]... [FILE]...'),
177 177 inferrepo=True)
178 178 def addremove(ui, repo, *pats, **opts):
179 179 """add all new files, delete all missing files
180 180
181 181 Add all new files and remove all missing files from the
182 182 repository.
183 183
184 184 Unless names are given, new files are ignored if they match any of
185 185 the patterns in ``.hgignore``. As with add, these changes take
186 186 effect at the next commit.
187 187
188 188 Use the -s/--similarity option to detect renamed files. This
189 189 option takes a percentage between 0 (disabled) and 100 (files must
190 190 be identical) as its parameter. With a parameter greater than 0,
191 191 this compares every removed file with every added file and records
192 192 those similar enough as renames. Detecting renamed files this way
193 193 can be expensive. After using this option, :hg:`status -C` can be
194 194 used to check which files were identified as moved or renamed. If
195 195 not specified, -s/--similarity defaults to 100 and only renames of
196 196 identical files are detected.
197 197
198 198 .. container:: verbose
199 199
200 200 Examples:
201 201
202 202 - A number of files (bar.c and foo.c) are new,
203 203 while foobar.c has been removed (without using :hg:`remove`)
204 204 from the repository::
205 205
206 206 $ ls
207 207 bar.c foo.c
208 208 $ hg status
209 209 ! foobar.c
210 210 ? bar.c
211 211 ? foo.c
212 212 $ hg addremove
213 213 adding bar.c
214 214 adding foo.c
215 215 removing foobar.c
216 216 $ hg status
217 217 A bar.c
218 218 A foo.c
219 219 R foobar.c
220 220
221 221 - A file foobar.c was moved to foo.c without using :hg:`rename`.
222 222 Afterwards, it was edited slightly::
223 223
224 224 $ ls
225 225 foo.c
226 226 $ hg status
227 227 ! foobar.c
228 228 ? foo.c
229 229 $ hg addremove --similarity 90
230 230 removing foobar.c
231 231 adding foo.c
232 232 recording removal of foobar.c as rename to foo.c (94% similar)
233 233 $ hg status -C
234 234 A foo.c
235 235 foobar.c
236 236 R foobar.c
237 237
238 238 Returns 0 if all files are successfully added.
239 239 """
240 240 opts = pycompat.byteskwargs(opts)
241 241 try:
242 242 sim = float(opts.get('similarity') or 100)
243 243 except ValueError:
244 244 raise error.Abort(_('similarity must be a number'))
245 245 if sim < 0 or sim > 100:
246 246 raise error.Abort(_('similarity must be between 0 and 100'))
247 247 matcher = scmutil.match(repo[None], pats, opts)
248 248 return scmutil.addremove(repo, matcher, "", opts, similarity=sim / 100.0)
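
# Editor's note: the help text above describes -s/--similarity as comparing
# every removed file against every added file. A minimal, self-contained
# sketch of such a score (Mercurial's real logic lives in similar.py; the
# name below is the editor's own illustration, not the actual API):
def _similarityscore_sketch(old_data, new_data):
    import difflib
    # fraction of matching content, scaled to the 0-100 range that the
    # -s/--similarity threshold is compared against
    return difflib.SequenceMatcher(None, old_data, new_data).ratio() * 100
# e.g. _similarityscore_sketch(b'int main()\n', b'int main(void)\n') ~= 85,
# so the pair would be recorded as a rename with -s 80 but not with -s 90.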
249 249
250 250 @command('^annotate|blame',
251 251 [('r', 'rev', '', _('annotate the specified revision'), _('REV')),
252 252 ('', 'follow', None,
253 253 _('follow copies/renames and list the filename (DEPRECATED)')),
254 254 ('', 'no-follow', None, _("don't follow copies and renames")),
255 255 ('a', 'text', None, _('treat all files as text')),
256 256 ('u', 'user', None, _('list the author (long with -v)')),
257 257 ('f', 'file', None, _('list the filename')),
258 258 ('d', 'date', None, _('list the date (short with -q)')),
259 259 ('n', 'number', None, _('list the revision number (default)')),
260 260 ('c', 'changeset', None, _('list the changeset')),
261 261 ('l', 'line-number', None, _('show line number at the first appearance')),
262 262 ('', 'skip', [], _('revision to not display (EXPERIMENTAL)'), _('REV')),
263 263 ] + diffwsopts + walkopts + formatteropts,
264 264 _('[-r REV] [-f] [-a] [-u] [-d] [-n] [-c] [-l] FILE...'),
265 265 inferrepo=True)
266 266 def annotate(ui, repo, *pats, **opts):
267 267 """show changeset information by line for each file
268 268
269 269 List changes in files, showing the revision id responsible for
270 270 each line.
271 271
272 272 This command is useful for discovering when a change was made and
273 273 by whom.
274 274
275 275 If you include --file, --user, or --date, the revision number is
276 276 suppressed unless you also include --number.
277 277
278 278 Without the -a/--text option, annotate will avoid processing files
279 279 it detects as binary. With -a, annotate will annotate the file
280 280 anyway, although the results will probably be neither useful
281 281 nor desirable.
282 282
283 283 Returns 0 on success.
284 284 """
285 285 opts = pycompat.byteskwargs(opts)
286 286 if not pats:
287 287 raise error.Abort(_('at least one filename or pattern is required'))
288 288
289 289 if opts.get('follow'):
290 290 # --follow is deprecated and now just an alias for -f/--file
291 291 # to mimic the behavior of Mercurial before version 1.5
292 292 opts['file'] = True
293 293
294 294 ctx = scmutil.revsingle(repo, opts.get('rev'))
295 295
296 296 rootfm = ui.formatter('annotate', opts)
297 297 if ui.quiet:
298 298 datefunc = util.shortdate
299 299 else:
300 300 datefunc = util.datestr
301 301 if ctx.rev() is None:
302 302 def hexfn(node):
303 303 if node is None:
304 304 return None
305 305 else:
306 306 return rootfm.hexfunc(node)
307 307 if opts.get('changeset'):
308 308 # omit "+" suffix which is appended to node hex
309 309 def formatrev(rev):
310 310 if rev is None:
311 311 return '%d' % ctx.p1().rev()
312 312 else:
313 313 return '%d' % rev
314 314 else:
315 315 def formatrev(rev):
316 316 if rev is None:
317 317 return '%d+' % ctx.p1().rev()
318 318 else:
319 319 return '%d ' % rev
320 320 def formathex(hex):
321 321 if hex is None:
322 322 return '%s+' % rootfm.hexfunc(ctx.p1().node())
323 323 else:
324 324 return '%s ' % hex
325 325 else:
326 326 hexfn = rootfm.hexfunc
327 327 formatrev = formathex = str
328 328
329 329 opmap = [('user', ' ', lambda x: x[0].user(), ui.shortuser),
330 330 ('number', ' ', lambda x: x[0].rev(), formatrev),
331 331 ('changeset', ' ', lambda x: hexfn(x[0].node()), formathex),
332 332 ('date', ' ', lambda x: x[0].date(), util.cachefunc(datefunc)),
333 333 ('file', ' ', lambda x: x[0].path(), str),
334 334 ('line_number', ':', lambda x: x[1], str),
335 335 ]
336 336 fieldnamemap = {'number': 'rev', 'changeset': 'node'}
337 337
338 338 if (not opts.get('user') and not opts.get('changeset')
339 339 and not opts.get('date') and not opts.get('file')):
340 340 opts['number'] = True
341 341
342 342 linenumber = opts.get('line_number') is not None
343 343 if linenumber and (not opts.get('changeset')) and (not opts.get('number')):
344 344 raise error.Abort(_('at least one of -n/-c is required for -l'))
345 345
346 346 ui.pager('annotate')
347 347
348 348 if rootfm.isplain():
349 349 def makefunc(get, fmt):
350 350 return lambda x: fmt(get(x))
351 351 else:
352 352 def makefunc(get, fmt):
353 353 return get
354 354 funcmap = [(makefunc(get, fmt), sep) for op, sep, get, fmt in opmap
355 355 if opts.get(op)]
356 356 funcmap[0] = (funcmap[0][0], '') # no separator in front of first column
357 357 fields = ' '.join(fieldnamemap.get(op, op) for op, sep, get, fmt in opmap
358 358 if opts.get(op))
359 359
360 360 def bad(x, y):
361 361 raise error.Abort("%s: %s" % (x, y))
362 362
363 363 m = scmutil.match(ctx, pats, opts, badfn=bad)
364 364
365 365 follow = not opts.get('no_follow')
366 366 diffopts = patch.difffeatureopts(ui, opts, section='annotate',
367 367 whitespace=True)
368 368 skiprevs = opts.get('skip')
369 369 if skiprevs:
370 370 skiprevs = scmutil.revrange(repo, skiprevs)
371 371
372 372 for abs in ctx.walk(m):
373 373 fctx = ctx[abs]
374 374 rootfm.startitem()
375 375 rootfm.data(abspath=abs, path=m.rel(abs))
376 376 if not opts.get('text') and fctx.isbinary():
377 377 rootfm.plain(_("%s: binary file\n")
378 378 % ((pats and m.rel(abs)) or abs))
379 379 continue
380 380
381 381 fm = rootfm.nested('lines')
382 382 lines = fctx.annotate(follow=follow, linenumber=linenumber,
383 383 skiprevs=skiprevs, diffopts=diffopts)
384 384 if not lines:
385 385 fm.end()
386 386 continue
387 387 formats = []
388 388 pieces = []
389 389
390 390 for f, sep in funcmap:
391 391 l = [f(n) for n, dummy in lines]
392 392 if fm.isplain():
393 393 sizes = [encoding.colwidth(x) for x in l]
394 394 ml = max(sizes)
395 395 formats.append([sep + ' ' * (ml - w) + '%s' for w in sizes])
396 396 else:
397 397 formats.append(['%s' for x in l])
398 398 pieces.append(l)
399 399
400 400 for f, p, l in zip(zip(*formats), zip(*pieces), lines):
401 401 fm.startitem()
402 402 fm.write(fields, "".join(f), *p)
403 403 fm.write('line', ": %s", l[1])
404 404
405 405 if not lines[-1][1].endswith('\n'):
406 406 fm.plain('\n')
407 407 fm.end()
408 408
409 409 rootfm.end()
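
# Editor's note: a small illustration (editor's sketch, simplified from the
# funcmap/formats logic above) of how plain annotate output right-aligns
# each column to the width of its widest cell; the real code measures
# display width with encoding.colwidth rather than len.
def _padcolumn_sketch(values):
    width = max(len(v) for v in values)
    return [' ' * (width - len(v)) + v for v in values]
# _padcolumn_sketch(['7', '42', '123']) -> ['  7', ' 42', '123']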
410 410
411 411 @command('archive',
412 412 [('', 'no-decode', None, _('do not pass files through decoders')),
413 413 ('p', 'prefix', '', _('directory prefix for files in archive'),
414 414 _('PREFIX')),
415 415 ('r', 'rev', '', _('revision to distribute'), _('REV')),
416 416 ('t', 'type', '', _('type of distribution to create'), _('TYPE')),
417 417 ] + subrepoopts + walkopts,
418 418 _('[OPTION]... DEST'))
419 419 def archive(ui, repo, dest, **opts):
420 420 '''create an unversioned archive of a repository revision
421 421
422 422 By default, the revision used is the parent of the working
423 423 directory; use -r/--rev to specify a different revision.
424 424
425 425 The archive type is automatically detected based on file
426 426 extension (to override, use -t/--type).
427 427
428 428 .. container:: verbose
429 429
430 430 Examples:
431 431
432 432 - create a zip file containing the 1.0 release::
433 433
434 434 hg archive -r 1.0 project-1.0.zip
435 435
436 436 - create a tarball excluding .hg files::
437 437
438 438 hg archive project.tar.gz -X ".hg*"
439 439
440 440 Valid types are:
441 441
442 442 :``files``: a directory full of files (default)
443 443 :``tar``: tar archive, uncompressed
444 444 :``tbz2``: tar archive, compressed using bzip2
445 445 :``tgz``: tar archive, compressed using gzip
446 446 :``uzip``: zip archive, uncompressed
447 447 :``zip``: zip archive, compressed using deflate
448 448
449 449 The exact name of the destination archive or directory is given
450 450 using a format string; see :hg:`help export` for details.
451 451
452 452 Each member added to an archive file has a directory prefix
453 453 prepended. Use -p/--prefix to specify a format string for the
454 454 prefix. The default is the basename of the archive, with suffixes
455 455 removed.
456 456
457 457 Returns 0 on success.
458 458 '''
459 459
460 460 opts = pycompat.byteskwargs(opts)
461 461 ctx = scmutil.revsingle(repo, opts.get('rev'))
462 462 if not ctx:
463 463 raise error.Abort(_('no working directory: please specify a revision'))
464 464 node = ctx.node()
465 465 dest = cmdutil.makefilename(repo, dest, node)
466 466 if os.path.realpath(dest) == repo.root:
467 467 raise error.Abort(_('repository root cannot be destination'))
468 468
469 469 kind = opts.get('type') or archival.guesskind(dest) or 'files'
470 470 prefix = opts.get('prefix')
471 471
472 472 if dest == '-':
473 473 if kind == 'files':
474 474 raise error.Abort(_('cannot archive plain files to stdout'))
475 475 dest = cmdutil.makefileobj(repo, dest)
476 476 if not prefix:
477 477 prefix = os.path.basename(repo.root) + '-%h'
478 478
479 479 prefix = cmdutil.makefilename(repo, prefix, node)
480 480 matchfn = scmutil.match(ctx, [], opts)
481 481 archival.archive(repo, dest, node, kind, not opts.get('no_decode'),
482 482 matchfn, prefix, subrepos=opts.get('subrepos'))
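
# Editor's note: the archive type is guessed from the destination's file
# extension (archival.guesskind). A simplified sketch under that assumption;
# the mapping below is the editor's restatement of the documented types.
def _guesskind_sketch(dest):
    exts = [('.tar.gz', 'tgz'), ('.tar.bz2', 'tbz2'), ('.tgz', 'tgz'),
            ('.tbz2', 'tbz2'), ('.tar', 'tar'), ('.zip', 'zip')]
    for ext, kind in exts:  # longest suffixes first
        if dest.endswith(ext):
            return kind
    return None  # caller falls back to -t/--type or 'files'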
483 483
484 484 @command('backout',
485 485 [('', 'merge', None, _('merge with old dirstate parent after backout')),
486 486 ('', 'commit', None,
487 487 _('commit if no conflicts were encountered (DEPRECATED)')),
488 488 ('', 'no-commit', None, _('do not commit')),
489 489 ('', 'parent', '',
490 490 _('parent to choose when backing out merge (DEPRECATED)'), _('REV')),
491 491 ('r', 'rev', '', _('revision to backout'), _('REV')),
492 492 ('e', 'edit', False, _('invoke editor on commit messages')),
493 493 ] + mergetoolopts + walkopts + commitopts + commitopts2,
494 494 _('[OPTION]... [-r] REV'))
495 495 def backout(ui, repo, node=None, rev=None, **opts):
496 496 '''reverse effect of earlier changeset
497 497
498 498 Prepare a new changeset with the effect of REV undone in the
499 499 current working directory. If no conflicts were encountered,
500 500 it will be committed immediately.
501 501
502 502 If REV is the parent of the working directory, then this new changeset
503 503 is committed automatically (unless --no-commit is specified).
504 504
505 505 .. note::
506 506
507 507 :hg:`backout` cannot be used to fix either an unwanted or
508 508 incorrect merge.
509 509
510 510 .. container:: verbose
511 511
512 512 Examples:
513 513
514 514 - Reverse the effect of the parent of the working directory.
515 515 This backout will be committed immediately::
516 516
517 517 hg backout -r .
518 518
519 519 - Reverse the effect of previous bad revision 23::
520 520
521 521 hg backout -r 23
522 522
523 523 - Reverse the effect of previous bad revision 23 and
524 524 leave changes uncommitted::
525 525
526 526 hg backout -r 23 --no-commit
527 527 hg commit -m "Backout revision 23"
528 528
529 529 By default, the pending changeset will have one parent,
530 530 maintaining a linear history. With --merge, the pending
531 531 changeset will instead have two parents: the old parent of the
532 532 working directory and a new child of REV that simply undoes REV.
533 533
534 534 Before version 1.7, the behavior without --merge was equivalent
535 535 to specifying --merge followed by :hg:`update --clean .` to
536 536 cancel the merge and leave the child of REV as a head to be
537 537 merged separately.
538 538
539 539 See :hg:`help dates` for a list of formats valid for -d/--date.
540 540
541 541 See :hg:`help revert` for a way to restore files to the state
542 542 of another revision.
543 543
544 544 Returns 0 on success, 1 if nothing to backout or there are unresolved
545 545 files.
546 546 '''
547 547 wlock = lock = None
548 548 try:
549 549 wlock = repo.wlock()
550 550 lock = repo.lock()
551 551 return _dobackout(ui, repo, node, rev, **opts)
552 552 finally:
553 553 release(lock, wlock)
554 554
555 555 def _dobackout(ui, repo, node=None, rev=None, **opts):
556 556 opts = pycompat.byteskwargs(opts)
557 557 if opts.get('commit') and opts.get('no_commit'):
558 558 raise error.Abort(_("cannot use --commit with --no-commit"))
559 559 if opts.get('merge') and opts.get('no_commit'):
560 560 raise error.Abort(_("cannot use --merge with --no-commit"))
561 561
562 562 if rev and node:
563 563 raise error.Abort(_("please specify just one revision"))
564 564
565 565 if not rev:
566 566 rev = node
567 567
568 568 if not rev:
569 569 raise error.Abort(_("please specify a revision to backout"))
570 570
571 571 date = opts.get('date')
572 572 if date:
573 573 opts['date'] = util.parsedate(date)
574 574
575 575 cmdutil.checkunfinished(repo)
576 576 cmdutil.bailifchanged(repo)
577 577 node = scmutil.revsingle(repo, rev).node()
578 578
579 579 op1, op2 = repo.dirstate.parents()
580 580 if not repo.changelog.isancestor(node, op1):
581 581 raise error.Abort(_('cannot backout change that is not an ancestor'))
582 582
583 583 p1, p2 = repo.changelog.parents(node)
584 584 if p1 == nullid:
585 585 raise error.Abort(_('cannot backout a change with no parents'))
586 586 if p2 != nullid:
587 587 if not opts.get('parent'):
588 588 raise error.Abort(_('cannot backout a merge changeset'))
589 589 p = repo.lookup(opts['parent'])
590 590 if p not in (p1, p2):
591 591 raise error.Abort(_('%s is not a parent of %s') %
592 592 (short(p), short(node)))
593 593 parent = p
594 594 else:
595 595 if opts.get('parent'):
596 596 raise error.Abort(_('cannot use --parent on non-merge changeset'))
597 597 parent = p1
598 598
599 599 # the backout should appear on the same branch
600 600 branch = repo.dirstate.branch()
601 601 bheads = repo.branchheads(branch)
602 602 rctx = scmutil.revsingle(repo, hex(parent))
603 603 if not opts.get('merge') and op1 != node:
604 604 dsguard = dirstateguard.dirstateguard(repo, 'backout')
605 605 try:
606 606 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
607 607 'backout')
608 608 stats = mergemod.update(repo, parent, True, True, node, False)
609 609 repo.setparents(op1, op2)
610 610 dsguard.close()
611 611 hg._showstats(repo, stats)
612 612 if stats[3]:
613 613 repo.ui.status(_("use 'hg resolve' to retry unresolved "
614 614 "file merges\n"))
615 615 return 1
616 616 finally:
617 617 ui.setconfig('ui', 'forcemerge', '', '')
618 618 lockmod.release(dsguard)
619 619 else:
620 620 hg.clean(repo, node, show_stats=False)
621 621 repo.dirstate.setbranch(branch)
622 622 cmdutil.revert(ui, repo, rctx, repo.dirstate.parents())
623 623
624 624 if opts.get('no_commit'):
625 625 msg = _("changeset %s backed out, "
626 626 "don't forget to commit.\n")
627 627 ui.status(msg % short(node))
628 628 return 0
629 629
630 630 def commitfunc(ui, repo, message, match, opts):
631 631 editform = 'backout'
632 632 e = cmdutil.getcommiteditor(editform=editform,
633 633 **pycompat.strkwargs(opts))
634 634 if not message:
635 635 # we don't translate commit messages
636 636 message = "Backed out changeset %s" % short(node)
637 637 e = cmdutil.getcommiteditor(edit=True, editform=editform)
638 638 return repo.commit(message, opts.get('user'), opts.get('date'),
639 639 match, editor=e)
640 640 newnode = cmdutil.commit(ui, repo, commitfunc, [], opts)
641 641 if not newnode:
642 642 ui.status(_("nothing changed\n"))
643 643 return 1
644 644 cmdutil.commitstatus(repo, newnode, branch, bheads)
645 645
646 646 def nice(node):
647 647 return '%d:%s' % (repo.changelog.rev(node), short(node))
648 648 ui.status(_('changeset %s backs out changeset %s\n') %
649 649 (nice(repo.changelog.tip()), nice(node)))
650 650 if opts.get('merge') and op1 != node:
651 651 hg.clean(repo, op1, show_stats=False)
652 652 ui.status(_('merging with changeset %s\n')
653 653 % nice(repo.changelog.tip()))
654 654 try:
655 655 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
656 656 'backout')
657 657 return hg.merge(repo, hex(repo.changelog.tip()))
658 658 finally:
659 659 ui.setconfig('ui', 'forcemerge', '', '')
660 660 return 0
661 661
662 662 @command('bisect',
663 663 [('r', 'reset', False, _('reset bisect state')),
664 664 ('g', 'good', False, _('mark changeset good')),
665 665 ('b', 'bad', False, _('mark changeset bad')),
666 666 ('s', 'skip', False, _('skip testing changeset')),
667 667 ('e', 'extend', False, _('extend the bisect range')),
668 668 ('c', 'command', '', _('use command to check changeset state'), _('CMD')),
669 669 ('U', 'noupdate', False, _('do not update to target'))],
670 670 _("[-gbsr] [-U] [-c CMD] [REV]"))
671 671 def bisect(ui, repo, rev=None, extra=None, command=None,
672 672 reset=None, good=None, bad=None, skip=None, extend=None,
673 673 noupdate=None):
674 674 """subdivision search of changesets
675 675
676 676 This command helps to find changesets which introduce problems. To
677 677 use, mark the earliest changeset you know exhibits the problem as
678 678 bad, then mark the latest changeset which is free from the problem
679 679 as good. Bisect will update your working directory to a revision
680 680 for testing (unless the -U/--noupdate option is specified). Once
681 681 you have performed tests, mark the working directory as good or
682 682 bad, and bisect will either update to another candidate changeset
683 683 or announce that it has found the bad revision.
684 684
685 685 As a shortcut, you can also use the revision argument to mark a
686 686 revision as good or bad without checking it out first.
687 687
688 688 If you supply a command, it will be used for automatic bisection.
689 689 The environment variable HG_NODE will contain the ID of the
690 690 changeset being tested. The exit status of the command will be
691 691 used to mark revisions as good or bad: status 0 means good, 125
692 692 means to skip the revision, 127 (command not found) will abort the
693 693 bisection, and any other non-zero exit status means the revision
694 694 is bad.
695 695
696 696 .. container:: verbose
697 697
698 698 Some examples:
699 699
700 700 - start a bisection with known bad revision 34, and good revision 12::
701 701
702 702 hg bisect --bad 34
703 703 hg bisect --good 12
704 704
705 705 - advance the current bisection by marking current revision as good or
706 706 bad::
707 707
708 708 hg bisect --good
709 709 hg bisect --bad
710 710
711 711 - mark the current revision, or a known revision, to be skipped (e.g. if
712 712 that revision is not usable because of another issue)::
713 713
714 714 hg bisect --skip
715 715 hg bisect --skip 23
716 716
717 717 - skip all revisions that do not touch directories ``foo`` or ``bar``::
718 718
719 719 hg bisect --skip "!( file('path:foo') & file('path:bar') )"
720 720
721 721 - forget the current bisection::
722 722
723 723 hg bisect --reset
724 724
725 725 - use 'make && make tests' to automatically find the first broken
726 726 revision::
727 727
728 728 hg bisect --reset
729 729 hg bisect --bad 34
730 730 hg bisect --good 12
731 731 hg bisect --command "make && make tests"
732 732
733 733 - see all changesets whose states are already known in the current
734 734 bisection::
735 735
736 736 hg log -r "bisect(pruned)"
737 737
738 738 - see the changeset currently being bisected (especially useful
739 739 if running with -U/--noupdate)::
740 740
741 741 hg log -r "bisect(current)"
742 742
743 743 - see all changesets that took part in the current bisection::
744 744
745 745 hg log -r "bisect(range)"
746 746
747 747 - you can even get a nice graph::
748 748
749 749 hg log --graph -r "bisect(range)"
750 750
751 751 See :hg:`help revisions.bisect` for more about the `bisect()` predicate.
752 752
753 753 Returns 0 on success.
754 754 """
755 755 # backward compatibility
756 756 if rev in "good bad reset init".split():
757 757 ui.warn(_("(use of 'hg bisect <cmd>' is deprecated)\n"))
758 758 cmd, rev, extra = rev, extra, None
759 759 if cmd == "good":
760 760 good = True
761 761 elif cmd == "bad":
762 762 bad = True
763 763 else:
764 764 reset = True
765 765 elif extra:
766 766 raise error.Abort(_('incompatible arguments'))
767 767
768 768 incompatibles = {
769 769 '--bad': bad,
770 770 '--command': bool(command),
771 771 '--extend': extend,
772 772 '--good': good,
773 773 '--reset': reset,
774 774 '--skip': skip,
775 775 }
776 776
777 777 enabled = [x for x in incompatibles if incompatibles[x]]
778 778
779 779 if len(enabled) > 1:
780 780 raise error.Abort(_('%s and %s are incompatible') %
781 781 tuple(sorted(enabled)[0:2]))
782 782
783 783 if reset:
784 784 hbisect.resetstate(repo)
785 785 return
786 786
787 787 state = hbisect.load_state(repo)
788 788
789 789 # update state
790 790 if good or bad or skip:
791 791 if rev:
792 792 nodes = [repo.lookup(i) for i in scmutil.revrange(repo, [rev])]
793 793 else:
794 794 nodes = [repo.lookup('.')]
795 795 if good:
796 796 state['good'] += nodes
797 797 elif bad:
798 798 state['bad'] += nodes
799 799 elif skip:
800 800 state['skip'] += nodes
801 801 hbisect.save_state(repo, state)
802 802 if not (state['good'] and state['bad']):
803 803 return
804 804
805 805 def mayupdate(repo, node, show_stats=True):
806 806 """commonly used update sequence"""
807 807 if noupdate:
808 808 return
809 809 cmdutil.checkunfinished(repo)
810 810 cmdutil.bailifchanged(repo)
811 811 return hg.clean(repo, node, show_stats=show_stats)
812 812
813 813 displayer = cmdutil.show_changeset(ui, repo, {})
814 814
815 815 if command:
816 816 changesets = 1
817 817 if noupdate:
818 818 try:
819 819 node = state['current'][0]
820 820 except LookupError:
821 821 raise error.Abort(_('current bisect revision is unknown - '
822 822 'start a new bisect to fix'))
823 823 else:
824 824 node, p2 = repo.dirstate.parents()
825 825 if p2 != nullid:
826 826 raise error.Abort(_('current bisect revision is a merge'))
827 827 if rev:
828 828 node = repo[scmutil.revsingle(repo, rev, node)].node()
829 829 try:
830 830 while changesets:
831 831 # update state
832 832 state['current'] = [node]
833 833 hbisect.save_state(repo, state)
834 834 status = ui.system(command, environ={'HG_NODE': hex(node)},
835 835 blockedtag='bisect_check')
836 836 if status == 125:
837 837 transition = "skip"
838 838 elif status == 0:
839 839 transition = "good"
840 840 # status < 0 means process was killed
841 841 elif status == 127:
842 842 raise error.Abort(_("failed to execute %s") % command)
843 843 elif status < 0:
844 844 raise error.Abort(_("%s killed") % command)
845 845 else:
846 846 transition = "bad"
847 847 state[transition].append(node)
848 848 ctx = repo[node]
849 849 ui.status(_('changeset %d:%s: %s\n') % (ctx.rev(), ctx, transition))
850 850 hbisect.checkstate(state)
851 851 # bisect
852 852 nodes, changesets, bgood = hbisect.bisect(repo.changelog, state)
853 853 # update to next check
854 854 node = nodes[0]
855 855 mayupdate(repo, node, show_stats=False)
856 856 finally:
857 857 state['current'] = [node]
858 858 hbisect.save_state(repo, state)
859 859 hbisect.printresult(ui, repo, state, displayer, nodes, bgood)
860 860 return
861 861
862 862 hbisect.checkstate(state)
863 863
864 864 # actually bisect
865 865 nodes, changesets, good = hbisect.bisect(repo.changelog, state)
866 866 if extend:
867 867 if not changesets:
868 868 extendnode = hbisect.extendrange(repo, state, nodes, good)
869 869 if extendnode is not None:
870 870 ui.write(_("Extending search to changeset %d:%s\n")
871 871 % (extendnode.rev(), extendnode))
872 872 state['current'] = [extendnode.node()]
873 873 hbisect.save_state(repo, state)
874 874 return mayupdate(repo, extendnode.node())
875 875 raise error.Abort(_("nothing to extend"))
876 876
877 877 if changesets == 0:
878 878 hbisect.printresult(ui, repo, state, displayer, nodes, good)
879 879 else:
880 880 assert len(nodes) == 1 # only a single node can be tested next
881 881 node = nodes[0]
882 882 # compute the approximate number of remaining tests
883 883 tests, size = 0, 2
884 884 while size <= changesets:
885 885 tests, size = tests + 1, size * 2
886 886 rev = repo.changelog.rev(node)
887 887 ui.write(_("Testing changeset %d:%s "
888 888 "(%d changesets remaining, ~%d tests)\n")
889 889 % (rev, short(node), changesets, tests))
890 890 state['current'] = [node]
891 891 hbisect.save_state(repo, state)
892 892 return mayupdate(repo, node)
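
# Editor's note: a standalone restatement (editor's sketch) of the
# exit-status protocol documented above and applied per revision in the
# --command loop: 0 is good, 125 skips, 127 and negative statuses abort,
# and anything else is bad.
def _bisecttransition_sketch(status):
    if status == 125:
        return 'skip'    # revision cannot be tested, try a neighbour
    if status == 127:
        raise RuntimeError('command not found: aborting bisection')
    if status < 0:
        raise RuntimeError('command was killed: aborting bisection')
    return 'good' if status == 0 else 'bad'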
893 893
894 894 @command('bookmarks|bookmark',
895 895 [('f', 'force', False, _('force')),
896 896 ('r', 'rev', '', _('revision for bookmark action'), _('REV')),
897 897 ('d', 'delete', False, _('delete a given bookmark')),
898 898 ('m', 'rename', '', _('rename a given bookmark'), _('OLD')),
899 899 ('i', 'inactive', False, _('mark a bookmark inactive')),
900 900 ] + formatteropts,
901 901 _('hg bookmarks [OPTIONS]... [NAME]...'))
902 902 def bookmark(ui, repo, *names, **opts):
903 903 '''create a new bookmark or list existing bookmarks
904 904
905 905 Bookmarks are labels on changesets to help track lines of development.
906 906 Bookmarks are unversioned and can be moved, renamed and deleted.
907 907 Deleting or moving a bookmark has no effect on the associated changesets.
908 908
909 909 Creating or updating to a bookmark causes it to be marked as 'active'.
910 910 The active bookmark is indicated with a '*'.
911 911 When a commit is made, the active bookmark will advance to the new commit.
912 912 A plain :hg:`update` will also advance an active bookmark, if possible.
913 913 Updating away from a bookmark will cause it to be deactivated.
914 914
915 915 Bookmarks can be pushed and pulled between repositories (see
916 916 :hg:`help push` and :hg:`help pull`). If a shared bookmark has
917 917 diverged, a new 'divergent bookmark' of the form 'name@path' will
918 918 be created. Using :hg:`merge` will resolve the divergence.
919 919
920 920 A bookmark named '@' has the special property that :hg:`clone` will
921 921 check it out by default if it exists.
922 922
923 923 .. container:: verbose
924 924
925 925 Examples:
926 926
927 927 - create an active bookmark for a new line of development::
928 928
929 929 hg book new-feature
930 930
931 931 - create an inactive bookmark as a place marker::
932 932
933 933 hg book -i reviewed
934 934
935 935 - create an inactive bookmark on another changeset::
936 936
937 937 hg book -r .^ tested
938 938
939 939 - rename bookmark turkey to dinner::
940 940
941 941 hg book -m turkey dinner
942 942
943 943 - move the '@' bookmark from another branch::
944 944
945 945 hg book -f @
946 946 '''
947 947 opts = pycompat.byteskwargs(opts)
948 948 force = opts.get('force')
949 949 rev = opts.get('rev')
950 950 delete = opts.get('delete')
951 951 rename = opts.get('rename')
952 952 inactive = opts.get('inactive')
953 953
954 954 if delete and rename:
955 955 raise error.Abort(_("--delete and --rename are incompatible"))
956 956 if delete and rev:
957 957 raise error.Abort(_("--rev is incompatible with --delete"))
958 958 if rename and rev:
959 959 raise error.Abort(_("--rev is incompatible with --rename"))
960 960 if not names and (delete or rev):
961 961 raise error.Abort(_("bookmark name required"))
962 962
963 963 if delete or rename or names or inactive:
964 964 with repo.wlock(), repo.lock(), repo.transaction('bookmark') as tr:
965 965 if delete:
966 966 bookmarks.delete(repo, tr, names)
967 967 elif rename:
968 968 if not names:
969 969 raise error.Abort(_("new bookmark name required"))
970 970 elif len(names) > 1:
971 971 raise error.Abort(_("only one new bookmark name allowed"))
972 972 bookmarks.rename(repo, tr, rename, names[0], force, inactive)
973 973 elif names:
974 974 bookmarks.addbookmarks(repo, tr, names, rev, force, inactive)
975 975 elif inactive:
976 976 if len(repo._bookmarks) == 0:
977 977 ui.status(_("no bookmarks set\n"))
978 978 elif not repo._activebookmark:
979 979 ui.status(_("no active bookmark\n"))
980 980 else:
981 981 bookmarks.deactivate(repo)
982 982 else: # show bookmarks
983 983 bookmarks.printbookmarks(ui, repo, **opts)
984 984
985 985 @command('branch',
986 986 [('f', 'force', None,
987 987 _('set branch name even if it shadows an existing branch')),
988 988 ('C', 'clean', None, _('reset branch name to parent branch name'))],
989 989 _('[-fC] [NAME]'))
990 990 def branch(ui, repo, label=None, **opts):
991 991 """set or show the current branch name
992 992
993 993 .. note::
994 994
995 995 Branch names are permanent and global. Use :hg:`bookmark` to create a
996 996 light-weight bookmark instead. See :hg:`help glossary` for more
997 997 information about named branches and bookmarks.
998 998
999 999 With no argument, show the current branch name. With one argument,
1000 1000 set the working directory branch name (the branch will not exist
1001 1001 in the repository until the next commit). Standard practice
1002 1002 recommends that primary development take place on the 'default'
1003 1003 branch.
1004 1004
1005 1005 Unless -f/--force is specified, branch will not let you set a
1006 1006 branch name that already exists.
1007 1007
1008 1008 Use -C/--clean to reset the working directory branch to that of
1009 1009 the parent of the working directory, negating a previous branch
1010 1010 change.
1011 1011
1012 1012 Use the command :hg:`update` to switch to an existing branch. Use
1013 1013 :hg:`commit --close-branch` to mark this branch head as closed.
1014 1014 When all heads of a branch are closed, the branch will be
1015 1015 considered closed.
1016 1016
1017 1017 Returns 0 on success.
1018 1018 """
1019 1019 opts = pycompat.byteskwargs(opts)
1020 1020 if label:
1021 1021 label = label.strip()
1022 1022
1023 1023 if not opts.get('clean') and not label:
1024 1024 ui.write("%s\n" % repo.dirstate.branch())
1025 1025 return
1026 1026
1027 1027 with repo.wlock():
1028 1028 if opts.get('clean'):
1029 1029 label = repo[None].p1().branch()
1030 1030 repo.dirstate.setbranch(label)
1031 1031 ui.status(_('reset working directory to branch %s\n') % label)
1032 1032 elif label:
1033 1033 if not opts.get('force') and label in repo.branchmap():
1034 1034 if label not in [p.branch() for p in repo[None].parents()]:
1035 1035 raise error.Abort(_('a branch of the same name already'
1036 1036 ' exists'),
1037 1037 # i18n: "it" refers to an existing branch
1038 1038 hint=_("use 'hg update' to switch to it"))
1039 1039 scmutil.checknewlabel(repo, label, 'branch')
1040 1040 repo.dirstate.setbranch(label)
1041 1041 ui.status(_('marked working directory as branch %s\n') % label)
1042 1042
1043 1043 # find any open named branches aside from default
1044 1044 others = [n for n, h, t, c in repo.branchmap().iterbranches()
1045 1045 if n != "default" and not c]
1046 1046 if not others:
1047 1047 ui.status(_('(branches are permanent and global, '
1048 1048 'did you want a bookmark?)\n'))
1049 1049
1050 1050 @command('branches',
1051 1051 [('a', 'active', False,
1052 1052 _('show only branches that have unmerged heads (DEPRECATED)')),
1053 1053 ('c', 'closed', False, _('show normal and closed branches')),
1054 1054 ] + formatteropts,
1055 1055 _('[-c]'))
1056 1056 def branches(ui, repo, active=False, closed=False, **opts):
1057 1057 """list repository named branches
1058 1058
1059 1059 List the repository's named branches, indicating which ones are
1060 1060 inactive. If -c/--closed is specified, also list branches which have
1061 1061 been marked closed (see :hg:`commit --close-branch`).
1062 1062
1063 1063 Use the command :hg:`update` to switch to an existing branch.
1064 1064
1065 1065 Returns 0.
1066 1066 """
1067 1067
1068 1068 opts = pycompat.byteskwargs(opts)
1069 1069 ui.pager('branches')
1070 1070 fm = ui.formatter('branches', opts)
1071 1071 hexfunc = fm.hexfunc
1072 1072
1073 1073 allheads = set(repo.heads())
1074 1074 branches = []
1075 1075 for tag, heads, tip, isclosed in repo.branchmap().iterbranches():
1076 1076 isactive = not isclosed and bool(set(heads) & allheads)
1077 1077 branches.append((tag, repo[tip], isactive, not isclosed))
1078 1078 branches.sort(key=lambda i: (i[2], i[1].rev(), i[0], i[3]),
1079 1079 reverse=True)
1080 1080
1081 1081 for tag, ctx, isactive, isopen in branches:
1082 1082 if active and not isactive:
1083 1083 continue
1084 1084 if isactive:
1085 1085 label = 'branches.active'
1086 1086 notice = ''
1087 1087 elif not isopen:
1088 1088 if not closed:
1089 1089 continue
1090 1090 label = 'branches.closed'
1091 1091 notice = _(' (closed)')
1092 1092 else:
1093 1093 label = 'branches.inactive'
1094 1094 notice = _(' (inactive)')
1095 1095 current = (tag == repo.dirstate.branch())
1096 1096 if current:
1097 1097 label = 'branches.current'
1098 1098
1099 1099 fm.startitem()
1100 1100 fm.write('branch', '%s', tag, label=label)
1101 1101 rev = ctx.rev()
1102 1102 padsize = max(31 - len(str(rev)) - encoding.colwidth(tag), 0)
1103 1103 fmt = ' ' * padsize + ' %d:%s'
1104 1104 fm.condwrite(not ui.quiet, 'rev node', fmt, rev, hexfunc(ctx.node()),
1105 1105 label='log.changeset changeset.%s' % ctx.phasestr())
1106 1106 fm.context(ctx=ctx)
1107 1107 fm.data(active=isactive, closed=not isopen, current=current)
1108 1108 if not ui.quiet:
1109 1109 fm.plain(notice)
1110 1110 fm.plain('\n')
1111 1111 fm.end()
1112 1112
1113 1113 @command('bundle',
1114 1114 [('f', 'force', None, _('run even when the destination is unrelated')),
1115 1115 ('r', 'rev', [], _('a changeset intended to be added to the destination'),
1116 1116 _('REV')),
1117 1117 ('b', 'branch', [], _('a specific branch you would like to bundle'),
1118 1118 _('BRANCH')),
1119 1119 ('', 'base', [],
1120 1120 _('a base changeset assumed to be available at the destination'),
1121 1121 _('REV')),
1122 1122 ('a', 'all', None, _('bundle all changesets in the repository')),
1123 1123 ('t', 'type', 'bzip2', _('bundle compression type to use'), _('TYPE')),
1124 1124 ] + remoteopts,
1125 1125 _('[-f] [-t BUNDLESPEC] [-a] [-r REV]... [--base REV]... FILE [DEST]'))
1126 1126 def bundle(ui, repo, fname, dest=None, **opts):
1127 1127 """create a bundle file
1128 1128
1129 1129 Generate a bundle file containing data to be added to a repository.
1130 1130
1131 1131 To create a bundle containing all changesets, use -a/--all
1132 1132 (or --base null). Otherwise, hg assumes the destination will have
1133 1133 all the nodes you specify with --base parameters. With neither, hg
1134 1134 bundles the changesets missing from the destination repository, or
1135 1135 from default-push/default if no destination is specified.
1136 1136
1137 1137 You can change bundle format with the -t/--type option. See
1138 1138 :hg:`help bundlespec` for documentation on this format. By default,
1139 1139 the most appropriate format is used and compression defaults to
1140 1140 bzip2.
1141 1141
1142 1142 The bundle file can then be transferred using conventional means
1143 1143 and applied to another repository with the unbundle or pull
1144 1144 command. This is useful when direct push and pull are not
1145 1145 available or when exporting an entire repository is undesirable.
1146 1146
1147 1147 Applying bundles preserves all changeset contents including
1148 1148 permissions, copy/rename information, and revision history.
1149 1149
1150 1150 Returns 0 on success, 1 if no changes found.
1151 1151 """
1152 1152 opts = pycompat.byteskwargs(opts)
1153 1153 revs = None
1154 1154 if 'rev' in opts:
1155 1155 revstrings = opts['rev']
1156 1156 revs = scmutil.revrange(repo, revstrings)
1157 1157 if revstrings and not revs:
1158 1158 raise error.Abort(_('no commits to bundle'))
1159 1159
1160 1160 bundletype = opts.get('type', 'bzip2').lower()
1161 1161 try:
1162 1162 bcompression, cgversion, params = exchange.parsebundlespec(
1163 1163 repo, bundletype, strict=False)
1164 1164 except error.UnsupportedBundleSpecification as e:
1165 1165 raise error.Abort(str(e),
1166 1166 hint=_("see 'hg help bundlespec' for supported "
1167 1167 "values for --type"))
1168 1168
1169 1169 # Packed bundles are a pseudo bundle format for now.
1170 1170 if cgversion == 's1':
1171 1171 raise error.Abort(_('packed bundles cannot be produced by "hg bundle"'),
1172 1172 hint=_("use 'hg debugcreatestreamclonebundle'"))
1173 1173
1174 1174 if opts.get('all'):
1175 1175 if dest:
1176 1176 raise error.Abort(_("--all is incompatible with specifying "
1177 1177 "a destination"))
1178 1178 if opts.get('base'):
1179 1179 ui.warn(_("ignoring --base because --all was specified\n"))
1180 1180 base = ['null']
1181 1181 else:
1182 1182 base = scmutil.revrange(repo, opts.get('base'))
1183 1183 if cgversion not in changegroup.supportedoutgoingversions(repo):
1184 1184 raise error.Abort(_("repository does not support bundle version %s") %
1185 1185 cgversion)
1186 1186
1187 1187 if base:
1188 1188 if dest:
1189 1189 raise error.Abort(_("--base is incompatible with specifying "
1190 1190 "a destination"))
1191 1191 common = [repo.lookup(rev) for rev in base]
1192 1192 heads = revs and map(repo.lookup, revs) or None
1193 1193 outgoing = discovery.outgoing(repo, common, heads)
1194 1194 else:
1195 1195 dest = ui.expandpath(dest or 'default-push', dest or 'default')
1196 1196 dest, branches = hg.parseurl(dest, opts.get('branch'))
1197 1197 other = hg.peer(repo, opts, dest)
1198 1198 revs, checkout = hg.addbranchrevs(repo, repo, branches, revs)
1199 1199 heads = revs and map(repo.lookup, revs) or revs
1200 1200 outgoing = discovery.findcommonoutgoing(repo, other,
1201 1201 onlyheads=heads,
1202 1202 force=opts.get('force'),
1203 1203 portable=True)
1204 1204
1205 1205 if not outgoing.missing:
1206 1206 scmutil.nochangesfound(ui, repo, not base and outgoing.excluded)
1207 1207 return 1
1208 1208
1209 1209 if cgversion == '01': #bundle1
1210 1210 if bcompression is None:
1211 1211 bcompression = 'UN'
1212 1212 bversion = 'HG10' + bcompression
1213 1213 bcompression = None
1214 1214 elif cgversion in ('02', '03'):
1215 1215 bversion = 'HG20'
1216 1216 else:
1217 1217 raise error.ProgrammingError(
1218 1218 'bundle: unexpected changegroup version %s' % cgversion)
1219 1219
1220 1220 # TODO compression options should be derived from bundlespec parsing.
1221 1221 # This is a temporary hack to allow adjusting bundle compression
1222 1222 # level without a) formalizing the bundlespec changes to declare it
1223 1223 # b) introducing a command flag.
1224 1224 compopts = {}
1225 1225 complevel = ui.configint('experimental', 'bundlecomplevel')
1226 1226 if complevel is not None:
1227 1227 compopts['level'] = complevel
1228 1228
1229 1229
1230 1230 contentopts = {'cg.version': cgversion}
1231 1231 if repo.ui.configbool('experimental', 'evolution.bundle-obsmarker', False):
1232 1232 contentopts['obsolescence'] = True
1233 1233 bundle2.writenewbundle(ui, repo, 'bundle', fname, bversion, outgoing,
1234 1234 contentopts, compression=bcompression,
1235 1235 compopts=compopts)
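
# Editor's note: a compact sketch (editor's names) of the header selection
# above: changegroup version '01' produces a bundle1 'HG10' header with the
# compression folded in, while '02'/'03' produce a bundle2 'HG20' header.
def _bundleheader_sketch(cgversion, bcompression):
    if cgversion == '01':                       # bundle1
        return 'HG10' + (bcompression or 'UN')  # e.g. 'HG10BZ', 'HG10UN'
    if cgversion in ('02', '03'):               # bundle2
        return 'HG20'
    raise ValueError('unexpected changegroup version %s' % cgversion)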
1236 1236
1237 1237 @command('cat',
1238 1238 [('o', 'output', '',
1239 1239 _('print output to file with formatted name'), _('FORMAT')),
1240 1240 ('r', 'rev', '', _('print the given revision'), _('REV')),
1241 1241 ('', 'decode', None, _('apply any matching decode filter')),
1242 1242 ] + walkopts + formatteropts,
1243 1243 _('[OPTION]... FILE...'),
1244 1244 inferrepo=True)
1245 1245 def cat(ui, repo, file1, *pats, **opts):
1246 1246 """output the current or given revision of files
1247 1247
1248 1248 Print the specified files as they were at the given revision. If
1249 1249 no revision is given, the parent of the working directory is used.
1250 1250
1251 1251 Output may be to a file, in which case the name of the file is
1252 1252 given using a format string. The formatting rules are as follows:
1253 1253
1254 1254 :``%%``: literal "%" character
1255 1255 :``%s``: basename of file being printed
1256 1256 :``%d``: dirname of file being printed, or '.' if in repository root
1257 1257 :``%p``: root-relative path name of file being printed
1258 1258 :``%H``: changeset hash (40 hexadecimal digits)
1259 1259 :``%R``: changeset revision number
1260 1260 :``%h``: short-form changeset hash (12 hexadecimal digits)
1261 1261 :``%r``: zero-padded changeset revision number
1262 1262 :``%b``: basename of the exporting repository
1263 1263
1264 1264 Returns 0 on success.
1265 1265 """
1266 1266 ctx = scmutil.revsingle(repo, opts.get('rev'))
1267 1267 m = scmutil.match(ctx, (file1,) + pats, opts)
1268 1268 fntemplate = opts.pop('output', '')
1269 1269 if cmdutil.isstdiofilename(fntemplate):
1270 1270 fntemplate = ''
1271 1271
1272 1272 if fntemplate:
1273 1273 fm = formatter.nullformatter(ui, 'cat')
1274 1274 else:
1275 1275 ui.pager('cat')
1276 1276 fm = ui.formatter('cat', opts)
1277 1277 with fm:
1278 1278 return cmdutil.cat(ui, repo, ctx, m, fm, fntemplate, '', **opts)
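
# Editor's note: an illustrative expansion of the -o/--output format keys
# documented above (a naive left-to-right substitution; the real
# implementation is cmdutil.makefilename, which scans the string once).
def _catfilename_sketch(fmt, path='src/foo.c', rev=42, shorthash='abcdef123456'):
    import os
    for key, value in [('%s', os.path.basename(path)),      # 'foo.c'
                       ('%d', os.path.dirname(path) or '.'),  # 'src'
                       ('%p', path),                          # root-relative
                       ('%R', str(rev)),
                       ('%h', shorthash)]:
        fmt = fmt.replace(key, value)
    return fmt
# _catfilename_sketch('%d/%s.r%R') -> 'src/foo.c.r42'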
1279 1279
1280 1280 @command('^clone',
1281 1281 [('U', 'noupdate', None, _('the clone will include an empty working '
1282 1282 'directory (only a repository)')),
1283 1283 ('u', 'updaterev', '', _('revision, tag, or branch to check out'),
1284 1284 _('REV')),
1285 1285 ('r', 'rev', [], _('include the specified changeset'), _('REV')),
1286 1286 ('b', 'branch', [], _('clone only the specified branch'), _('BRANCH')),
1287 1287 ('', 'pull', None, _('use pull protocol to copy metadata')),
1288 1288 ('', 'uncompressed', None, _('use uncompressed transfer (fast over LAN)')),
1289 1289 ] + remoteopts,
1290 1290 _('[OPTION]... SOURCE [DEST]'),
1291 1291 norepo=True)
1292 1292 def clone(ui, source, dest=None, **opts):
1293 1293 """make a copy of an existing repository
1294 1294
1295 1295 Create a copy of an existing repository in a new directory.
1296 1296
1297 1297 If no destination directory name is specified, it defaults to the
1298 1298 basename of the source.
1299 1299
1300 1300 The location of the source is added to the new repository's
1301 1301 ``.hg/hgrc`` file, as the default to be used for future pulls.
1302 1302
1303 1303 Only local paths and ``ssh://`` URLs are supported as
1304 1304 destinations. For ``ssh://`` destinations, no working directory or
1305 1305 ``.hg/hgrc`` will be created on the remote side.
1306 1306
1307 1307 If the source repository has a bookmark called '@' set, that
1308 1308 revision will be checked out in the new repository by default.
1309 1309
1310 1310 To check out a particular version, use -u/--update, or
1311 1311 -U/--noupdate to create a clone with no working directory.
1312 1312
1313 1313 To pull only a subset of changesets, specify one or more revisions
1314 1314 identifiers with -r/--rev or branches with -b/--branch. The
1315 1315 resulting clone will contain only the specified changesets and
1316 1316 their ancestors. These options (or 'clone src#rev dest') imply
1317 1317 --pull, even for local source repositories.
1318 1318
1319 1319 .. note::
1320 1320
1321 1321 Specifying a tag will include the tagged changeset but not the
1322 1322 changeset containing the tag.
1323 1323
1324 1324 .. container:: verbose
1325 1325
1326 1326 For efficiency, hardlinks are used for cloning whenever the
1327 1327 source and destination are on the same filesystem (note this
1328 1328 applies only to the repository data, not to the working
1329 1329 directory). Some filesystems, such as AFS, implement hardlinking
1330 1330 incorrectly, but do not report errors. In these cases, use the
1331 1331 --pull option to avoid hardlinking.
1332 1332
1333 1333 In some cases, you can clone repositories and the working
1334 1334 directory using full hardlinks with ::
1335 1335
1336 1336 $ cp -al REPO REPOCLONE
1337 1337
1338 1338 This is the fastest way to clone, but it is not always safe. The
1339 1339 operation is not atomic (making sure REPO is not modified during
1340 1340 the operation is up to you) and you have to make sure your
1341 1341 editor breaks hardlinks (Emacs and most Linux Kernel tools do
1342 1342 so). Also, this is not compatible with certain extensions that
1343 1343 place their metadata under the .hg directory, such as mq.
1344 1344
1345 1345 Mercurial will update the working directory to the first applicable
1346 1346 revision from this list:
1347 1347
1348 1348 a) null if -U or the source repository has no changesets
1349 1349 b) if -u . and the source repository is local, the first parent of
1350 1350 the source repository's working directory
1351 1351 c) the changeset specified with -u (if a branch name, this means the
1352 1352 latest head of that branch)
1353 1353 d) the changeset specified with -r
1354 1354 e) the tipmost head specified with -b
1355 1355 f) the tipmost head specified with the url#branch source syntax
1356 1356 g) the revision marked with the '@' bookmark, if present
1357 1357 h) the tipmost head of the default branch
1358 1358 i) tip
1359 1359
1360 1360 When cloning from servers that support it, Mercurial may fetch
1361 1361 pre-generated data from a server-advertised URL. When this is done,
1362 1362 hooks operating on incoming changesets and changegroups may fire twice,
1363 1363 once for the bundle fetched from the URL and another for any additional
1364 1364 data not fetched from this URL. In addition, if an error occurs, the
1365 1365 repository may be rolled back to a partial clone. This behavior may
1366 1366 change in future releases. See :hg:`help -e clonebundles` for more.
1367 1367
1368 1368 Examples:
1369 1369
1370 1370 - clone a remote repository to a new directory named hg/::
1371 1371
1372 1372 hg clone https://www.mercurial-scm.org/repo/hg/
1373 1373
1374 1374 - create a lightweight local clone::
1375 1375
1376 1376 hg clone project/ project-feature/
1377 1377
1378 1378 - clone from an absolute path on an ssh server (note double-slash)::
1379 1379
1380 1380 hg clone ssh://user@server//home/projects/alpha/
1381 1381
1382 1382 - do a high-speed clone over a LAN while checking out a
1383 1383 specified version::
1384 1384
1385 1385 hg clone --uncompressed http://server/repo -u 1.5
1386 1386
1387 1387 - create a repository without changesets after a particular revision::
1388 1388
1389 1389 hg clone -r 04e544 experimental/ good/
1390 1390
1391 1391 - clone (and track) a particular named branch::
1392 1392
1393 1393 hg clone https://www.mercurial-scm.org/repo/hg/#stable
1394 1394
1395 1395 See :hg:`help urls` for details on specifying URLs.
1396 1396
1397 1397 Returns 0 on success.
1398 1398 """
1399 1399 opts = pycompat.byteskwargs(opts)
1400 1400 if opts.get('noupdate') and opts.get('updaterev'):
1401 1401 raise error.Abort(_("cannot specify both --noupdate and --updaterev"))
1402 1402
1403 1403 r = hg.clone(ui, opts, source, dest,
1404 1404 pull=opts.get('pull'),
1405 1405 stream=opts.get('uncompressed'),
1406 1406 rev=opts.get('rev'),
1407 1407 update=opts.get('updaterev') or not opts.get('noupdate'),
1408 1408 branch=opts.get('branch'),
1409 1409 shareopts=opts.get('shareopts'))
1410 1410
1411 1411 return r is None
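
# Editor's note: the help text above gives an ordered candidate list a)..i)
# for the revision checked out after a clone. A schematic restatement
# (editor's sketch): the first candidate that applies wins.
def _cloneupdate_sketch(candidates):
    # candidates: (label, applies) pairs in the documented order, e.g.
    # [('null: -U given', False), ('-u REV', True), ('@ bookmark', True)]
    for label, applies in candidates:
        if applies:
            return label
    return 'tip'  # the final fallback in the documented list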
1412 1412
1413 1413 @command('^commit|ci',
1414 1414 [('A', 'addremove', None,
1415 1415 _('mark new/missing files as added/removed before committing')),
1416 1416 ('', 'close-branch', None,
1417 1417 _('mark a branch head as closed')),
1418 1418 ('', 'amend', None, _('amend the parent of the working directory')),
1419 1419 ('s', 'secret', None, _('use the secret phase for committing')),
1420 1420 ('e', 'edit', None, _('invoke editor on commit messages')),
1421 1421 ('i', 'interactive', None, _('use interactive mode')),
1422 1422 ] + walkopts + commitopts + commitopts2 + subrepoopts,
1423 1423 _('[OPTION]... [FILE]...'),
1424 1424 inferrepo=True)
1425 1425 def commit(ui, repo, *pats, **opts):
1426 1426 """commit the specified files or all outstanding changes
1427 1427
1428 1428 Commit changes to the given files into the repository. Unlike a
1429 1429 centralized SCM, this is a local operation. See
1430 1430 :hg:`push` for a way to actively distribute your changes.
1431 1431
1432 1432 If a list of files is omitted, all changes reported by :hg:`status`
1433 1433 will be committed.
1434 1434
1435 1435 If you are committing the result of a merge, do not provide any
1436 1436 filenames or -I/-X filters.
1437 1437
1438 1438 If no commit message is specified, Mercurial starts your
1439 1439 configured editor where you can enter a message. In case your
1440 1440 commit fails, you will find a backup of your message in
1441 1441 ``.hg/last-message.txt``.
1442 1442
1443 1443 The --close-branch flag can be used to mark the current branch
1444 1444 head closed. When all heads of a branch are closed, the branch
1445 1445 will be considered closed and no longer listed.
1446 1446
1447 1447 The --amend flag can be used to amend the parent of the
1448 1448 working directory with a new commit that contains the changes
1449 1449 in the parent in addition to those currently reported by :hg:`status`,
1450 1450 if there are any. The old commit is stored in a backup bundle in
1451 1451 ``.hg/strip-backup`` (see :hg:`help bundle` and :hg:`help unbundle`
1452 1452 for how to restore it).
1453 1453
1454 1454 Message, user and date are taken from the amended commit unless
1455 1455 specified. When a message isn't specified on the command line,
1456 1456 the editor will open with the message of the amended commit.
1457 1457
1458 1458 It is not possible to amend public changesets (see :hg:`help phases`)
1459 1459 or changesets that have children.
1460 1460
1461 1461 See :hg:`help dates` for a list of formats valid for -d/--date.
1462 1462
1463 1463 Returns 0 on success, 1 if nothing changed.
1464 1464
1465 1465 .. container:: verbose
1466 1466
1467 1467 Examples:
1468 1468
1469 1469 - commit all files ending in .py::
1470 1470
1471 1471 hg commit --include "set:**.py"
1472 1472
1473 1473 - commit all non-binary files::
1474 1474
1475 1475 hg commit --exclude "set:binary()"
1476 1476
1477 1477 - amend the current commit and set the date to now::
1478 1478
1479 1479 hg commit --amend --date now
1480 1480 """
1481 1481 wlock = lock = None
1482 1482 try:
1483 1483 wlock = repo.wlock()
1484 1484 lock = repo.lock()
1485 1485 return _docommit(ui, repo, *pats, **opts)
1486 1486 finally:
1487 1487 release(lock, wlock)
1488 1488
1489 1489 def _docommit(ui, repo, *pats, **opts):
1490 1490 if opts.get(r'interactive'):
1491 1491 opts.pop(r'interactive')
1492 1492 ret = cmdutil.dorecord(ui, repo, commit, None, False,
1493 1493 cmdutil.recordfilter, *pats,
1494 1494 **opts)
1495 1495 # ret can be 0 (no changes to record) or the value returned by
1496 1496 # commit(), 1 if nothing changed or None on success.
1497 1497 return 1 if ret == 0 else ret
1498 1498
1499 1499 opts = pycompat.byteskwargs(opts)
1500 1500 if opts.get('subrepos'):
1501 1501 if opts.get('amend'):
1502 1502 raise error.Abort(_('cannot amend with --subrepos'))
1503 1503 # Let --subrepos on the command line override config setting.
1504 1504 ui.setconfig('ui', 'commitsubrepos', True, 'commit')
1505 1505
1506 1506 cmdutil.checkunfinished(repo, commit=True)
1507 1507
1508 1508 branch = repo[None].branch()
1509 1509 bheads = repo.branchheads(branch)
1510 1510
1511 1511 extra = {}
1512 1512 if opts.get('close_branch'):
1513 1513 extra['close'] = 1
1514 1514
1515 1515 if not bheads:
1516 1516 raise error.Abort(_('can only close branch heads'))
1517 1517 elif opts.get('amend'):
1518 1518 if repo[None].parents()[0].p1().branch() != branch and \
1519 1519 repo[None].parents()[0].p2().branch() != branch:
1520 1520 raise error.Abort(_('can only close branch heads'))
1521 1521
1522 1522 if opts.get('amend'):
1523 1523 if ui.configbool('ui', 'commitsubrepos'):
1524 1524 raise error.Abort(_('cannot amend with ui.commitsubrepos enabled'))
1525 1525
1526 1526 old = repo['.']
1527 1527 if not old.mutable():
1528 1528 raise error.Abort(_('cannot amend public changesets'))
1529 1529 if len(repo[None].parents()) > 1:
1530 1530 raise error.Abort(_('cannot amend while merging'))
1531 1531 allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
1532 1532 if not allowunstable and old.children():
1533 1533 raise error.Abort(_('cannot amend changeset with children'))
1534 1534
1535 1535 # Currently histedit gets confused if an amend happens while histedit
1536 1536 # is in progress. Since we have a checkunfinished command, we are
1537 1537 # temporarily honoring it.
1538 1538 #
1539 1539 # Note: eventually this guard will be removed. Please do not expect
1540 1540 # this behavior to remain.
1541 1541 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
1542 1542 cmdutil.checkunfinished(repo)
1543 1543
1544 1544 # commitfunc is used only for temporary amend commit by cmdutil.amend
1545 1545 def commitfunc(ui, repo, message, match, opts):
1546 1546 return repo.commit(message,
1547 1547 opts.get('user') or old.user(),
1548 1548 opts.get('date') or old.date(),
1549 1549 match,
1550 1550 extra=extra)
1551 1551
1552 1552 node = cmdutil.amend(ui, repo, commitfunc, old, extra, pats, opts)
1553 1553 if node == old.node():
1554 1554 ui.status(_("nothing changed\n"))
1555 1555 return 1
1556 1556 else:
1557 1557 def commitfunc(ui, repo, message, match, opts):
1558 1558 overrides = {}
1559 1559 if opts.get('secret'):
1560 1560 overrides[('phases', 'new-commit')] = 'secret'
1561 1561
1562 1562 baseui = repo.baseui
1563 1563 with baseui.configoverride(overrides, 'commit'):
1564 1564 with ui.configoverride(overrides, 'commit'):
1565 1565 editform = cmdutil.mergeeditform(repo[None],
1566 1566 'commit.normal')
1567 1567 editor = cmdutil.getcommiteditor(
1568 1568 editform=editform, **pycompat.strkwargs(opts))
1569 1569 return repo.commit(message,
1570 1570 opts.get('user'),
1571 1571 opts.get('date'),
1572 1572 match,
1573 1573 editor=editor,
1574 1574 extra=extra)
1575 1575
1576 1576 node = cmdutil.commit(ui, repo, commitfunc, pats, opts)
1577 1577
1578 1578 if not node:
1579 1579 stat = cmdutil.postcommitstatus(repo, pats, opts)
1580 1580 if stat[3]:
1581 1581 ui.status(_("nothing changed (%d missing files, see "
1582 1582 "'hg status')\n") % len(stat[3]))
1583 1583 else:
1584 1584 ui.status(_("nothing changed\n"))
1585 1585 return 1
1586 1586
1587 1587 cmdutil.commitstatus(repo, node, branch, bheads, opts)
1588 1588
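# NOTE (editor): a minimal standalone sketch of the scoped-override pattern
# used by the --secret branch above, where ('phases', 'new-commit') is
# temporarily forced to 'secret' for the duration of the commit via
# ui.configoverride(). ConfigStack is a hypothetical stand-in, not part of
# Mercurial's API.
import contextlib

class ConfigStack(object):
    def __init__(self):
        self._values = {('phases', 'new-commit'): 'draft'}

    def get(self, section, name):
        return self._values.get((section, name))

    @contextlib.contextmanager
    def override(self, overrides):
        # Save the shadowed values, apply the overrides, and restore them
        # on exit, mirroring ui.configoverride() semantics.
        saved = dict((k, self._values.get(k)) for k in overrides)
        self._values.update(overrides)
        try:
            yield
        finally:
            self._values.update(saved)

def _example_secret_commit():
    cfg = ConfigStack()
    with cfg.override({('phases', 'new-commit'): 'secret'}):
        assert cfg.get('phases', 'new-commit') == 'secret'
    assert cfg.get('phases', 'new-commit') == 'draft'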
1589 1589 @command('config|showconfig|debugconfig',
1590 1590 [('u', 'untrusted', None, _('show untrusted configuration options')),
1591 1591 ('e', 'edit', None, _('edit user config')),
1592 1592 ('l', 'local', None, _('edit repository config')),
1593 1593 ('g', 'global', None, _('edit global config'))] + formatteropts,
1594 1594 _('[-u] [NAME]...'),
1595 1595 optionalrepo=True)
1596 1596 def config(ui, repo, *values, **opts):
1597 1597 """show combined config settings from all hgrc files
1598 1598
1599 1599 With no arguments, print names and values of all config items.
1600 1600
1601 1601 With one argument of the form section.name, print just the value
1602 1602 of that config item.
1603 1603
1604 1604 With multiple arguments, print names and values of all config
1605 1605 items with matching section names.
1606 1606
1607 1607 With --edit, start an editor on the user-level config file. With
1608 1608 --global, edit the system-wide config file. With --local, edit the
1609 1609 repository-level config file.
1610 1610
1611 1611 With --debug, the source (filename and line number) is printed
1612 1612 for each config item.
1613 1613
1614 1614 See :hg:`help config` for more information about config files.
1615 1615
1616 1616 Returns 0 on success, 1 if NAME does not exist.
1617 1617
1618 1618 """
1619 1619
1620 1620 opts = pycompat.byteskwargs(opts)
1621 1621 if opts.get('edit') or opts.get('local') or opts.get('global'):
1622 1622 if opts.get('local') and opts.get('global'):
1623 1623 raise error.Abort(_("can't use --local and --global together"))
1624 1624
1625 1625 if opts.get('local'):
1626 1626 if not repo:
1627 1627 raise error.Abort(_("can't use --local outside a repository"))
1628 1628 paths = [repo.vfs.join('hgrc')]
1629 1629 elif opts.get('global'):
1630 1630 paths = rcutil.systemrcpath()
1631 1631 else:
1632 1632 paths = rcutil.userrcpath()
1633 1633
1634 1634 for f in paths:
1635 1635 if os.path.exists(f):
1636 1636 break
1637 1637 else:
1638 1638 if opts.get('global'):
1639 1639 samplehgrc = uimod.samplehgrcs['global']
1640 1640 elif opts.get('local'):
1641 1641 samplehgrc = uimod.samplehgrcs['local']
1642 1642 else:
1643 1643 samplehgrc = uimod.samplehgrcs['user']
1644 1644
1645 1645 f = paths[0]
1646 1646 fp = open(f, "w")
1647 1647 fp.write(samplehgrc)
1648 1648 fp.close()
1649 1649
1650 1650 editor = ui.geteditor()
1651 1651 ui.system("%s \"%s\"" % (editor, f),
1652 1652 onerr=error.Abort, errprefix=_("edit failed"),
1653 1653 blockedtag='config_edit')
1654 1654 return
1655 1655 ui.pager('config')
1656 1656 fm = ui.formatter('config', opts)
1657 1657 for t, f in rcutil.rccomponents():
1658 1658 if t == 'path':
1659 1659 ui.debug('read config from: %s\n' % f)
1660 1660 elif t == 'items':
1661 1661 for section, name, value, source in f:
1662 1662 ui.debug('set config by: %s\n' % source)
1663 1663 else:
1664 1664 raise error.ProgrammingError('unknown rctype: %s' % t)
1665 1665 untrusted = bool(opts.get('untrusted'))
1666 1666 if values:
1667 1667 sections = [v for v in values if '.' not in v]
1668 1668 items = [v for v in values if '.' in v]
1669 1669 if len(items) > 1 or (items and sections):
1670 1670 raise error.Abort(_('only one config item permitted'))
1671 1671 matched = False
1672 1672 for section, name, value in ui.walkconfig(untrusted=untrusted):
1673 1673 source = ui.configsource(section, name, untrusted)
1674 1674 value = pycompat.bytestr(value)
1675 1675 if fm.isplain():
1676 1676 source = source or 'none'
1677 1677 value = value.replace('\n', '\\n')
1678 1678 entryname = section + '.' + name
1679 1679 if values:
1680 1680 for v in values:
1681 1681 if v == section:
1682 1682 fm.startitem()
1683 1683 fm.condwrite(ui.debugflag, 'source', '%s: ', source)
1684 1684 fm.write('name value', '%s=%s\n', entryname, value)
1685 1685 matched = True
1686 1686 elif v == entryname:
1687 1687 fm.startitem()
1688 1688 fm.condwrite(ui.debugflag, 'source', '%s: ', source)
1689 1689 fm.write('value', '%s\n', value)
1690 1690 fm.data(name=entryname)
1691 1691 matched = True
1692 1692 else:
1693 1693 fm.startitem()
1694 1694 fm.condwrite(ui.debugflag, 'source', '%s: ', source)
1695 1695 fm.write('name value', '%s=%s\n', entryname, value)
1696 1696 matched = True
1697 1697 fm.end()
1698 1698 if matched:
1699 1699 return 0
1700 1700 return 1
1701 1701
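# NOTE (editor): a rough standalone sketch of the argument filtering in
# config() above -- values without a dot select whole sections, a single
# dotted value selects one section.name item, and mixing them (or giving
# two items) aborts. ui.walkconfig() is simulated with a plain list.
def _example_config_filter(values):
    entries = [  # (section, name, value) triples, as ui.walkconfig() yields
        ('ui', 'username', 'alice'),
        ('ui', 'editor', 'vi'),
        ('diff', 'git', 'true'),
    ]
    sections = [v for v in values if '.' not in v]
    items = [v for v in values if '.' in v]
    if len(items) > 1 or (items and sections):
        raise ValueError('only one config item permitted')
    matched = []
    for section, name, value in entries:
        entryname = '%s.%s' % (section, name)
        if section in sections or entryname in items:
            matched.append('%s=%s' % (entryname, value))
    return matched

# _example_config_filter(['ui']) -> ['ui.username=alice', 'ui.editor=vi']
# _example_config_filter(['diff.git']) -> ['diff.git=true']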
1702 1702 @command('copy|cp',
1703 1703 [('A', 'after', None, _('record a copy that has already occurred')),
1704 1704 ('f', 'force', None, _('forcibly copy over an existing managed file')),
1705 1705 ] + walkopts + dryrunopts,
1706 1706 _('[OPTION]... [SOURCE]... DEST'))
1707 1707 def copy(ui, repo, *pats, **opts):
1708 1708 """mark files as copied for the next commit
1709 1709
1710 1710 Mark dest as having copies of source files. If dest is a
1711 1711 directory, copies are put in that directory. If dest is a file,
1712 1712 the source must be a single file.
1713 1713
1714 1714 By default, this command copies the contents of files as they
1715 1715 exist in the working directory. If invoked with -A/--after, the
1716 1716 operation is recorded, but no copying is performed.
1717 1717
1718 1718 This command takes effect with the next commit. To undo a copy
1719 1719 before that, see :hg:`revert`.
1720 1720
1721 1721 Returns 0 on success, 1 if errors are encountered.
1722 1722 """
1723 1723 opts = pycompat.byteskwargs(opts)
1724 1724 with repo.wlock(False):
1725 1725 return cmdutil.copy(ui, repo, pats, opts)
1726 1726
1727 1727 @command('debugcommands', [], _('[COMMAND]'), norepo=True)
1728 1728 def debugcommands(ui, cmd='', *args):
1729 1729 """list all available commands and options"""
1730 1730 for cmd, vals in sorted(table.iteritems()):
1731 1731 cmd = cmd.split('|')[0].strip('^')
1732 1732 opts = ', '.join([i[1] for i in vals[1]])
1733 1733 ui.write('%s: %s\n' % (cmd, opts))
1734 1734
1735 1735 @command('debugcomplete',
1736 1736 [('o', 'options', None, _('show the command options'))],
1737 1737 _('[-o] CMD'),
1738 1738 norepo=True)
1739 1739 def debugcomplete(ui, cmd='', **opts):
1740 1740 """returns the completion list associated with the given command"""
1741 1741
1742 1742 if opts.get('options'):
1743 1743 options = []
1744 1744 otables = [globalopts]
1745 1745 if cmd:
1746 1746 aliases, entry = cmdutil.findcmd(cmd, table, False)
1747 1747 otables.append(entry[1])
1748 1748 for t in otables:
1749 1749 for o in t:
1750 1750 if "(DEPRECATED)" in o[3]:
1751 1751 continue
1752 1752 if o[0]:
1753 1753 options.append('-%s' % o[0])
1754 1754 options.append('--%s' % o[1])
1755 1755 ui.write("%s\n" % "\n".join(options))
1756 1756 return
1757 1757
1758 1758 cmdlist, unused_allcmds = cmdutil.findpossible(cmd, table)
1759 1759 if ui.verbose:
1760 1760 cmdlist = [' '.join(c[0]) for c in cmdlist.values()]
1761 1761 ui.write("%s\n" % "\n".join(sorted(cmdlist)))
1762 1762
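# NOTE (editor): a small standalone sketch of the option flattening in
# debugcomplete() above -- each options table row contributes '-x' for its
# short name (when present) and '--xxx' for its long name, with rows whose
# help text is marked (DEPRECATED) skipped.
def _example_completeopts(tables):
    options = []
    for t in tables:
        for shortname, longname, default, helptext in t:
            if "(DEPRECATED)" in helptext:
                continue
            if shortname:
                options.append('-%s' % shortname)
            options.append('--%s' % longname)
    return options

# _example_completeopts([[('n', 'dry-run', None, 'do not perform actions')]])
#   -> ['-n', '--dry-run']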
1763 1763 @command('^diff',
1764 1764 [('r', 'rev', [], _('revision'), _('REV')),
1765 1765 ('c', 'change', '', _('change made by revision'), _('REV'))
1766 1766 ] + diffopts + diffopts2 + walkopts + subrepoopts,
1767 1767 _('[OPTION]... ([-c REV] | [-r REV1 [-r REV2]]) [FILE]...'),
1768 1768 inferrepo=True)
1769 1769 def diff(ui, repo, *pats, **opts):
1770 1770 """diff repository (or selected files)
1771 1771
1772 1772 Show differences between revisions for the specified files.
1773 1773
1774 1774 Differences between files are shown using the unified diff format.
1775 1775
1776 1776 .. note::
1777 1777
1778 1778 :hg:`diff` may generate unexpected results for merges, as it will
1779 1779 default to comparing against the working directory's first
1780 1780 parent changeset if no revisions are specified.
1781 1781
1782 1782 When two revision arguments are given, then changes are shown
1783 1783 between those revisions. If only one revision is specified then
1784 1784 that revision is compared to the working directory, and, when no
1785 1785 revisions are specified, the working directory files are compared
1786 1786 to its first parent.
1787 1787
1788 1788 Alternatively you can specify -c/--change with a revision to see
1789 1789 the changes in that changeset relative to its first parent.
1790 1790
1791 1791 Without the -a/--text option, diff will avoid generating diffs of
1792 1792 files it detects as binary. With -a, diff will generate a diff
1793 1793 anyway, probably with undesirable results.
1794 1794
1795 1795 Use the -g/--git option to generate diffs in the git extended diff
1796 1796 format. For more information, read :hg:`help diffs`.
1797 1797
1798 1798 .. container:: verbose
1799 1799
1800 1800 Examples:
1801 1801
1802 1802 - compare a file in the current working directory to its parent::
1803 1803
1804 1804 hg diff foo.c
1805 1805
1806 1806 - compare two historical versions of a directory, with rename info::
1807 1807
1808 1808 hg diff --git -r 1.0:1.2 lib/
1809 1809
1810 1810 - get change stats relative to the last change on some date::
1811 1811
1812 1812 hg diff --stat -r "date('may 2')"
1813 1813
1814 1814 - diff all newly-added files that contain a keyword::
1815 1815
1816 1816 hg diff "set:added() and grep(GNU)"
1817 1817
1818 1818 - compare a revision and its parents::
1819 1819
1820 1820 hg diff -c 9353 # compare against first parent
1821 1821 hg diff -r 9353^:9353 # same using revset syntax
1822 1822 hg diff -r 9353^2:9353 # compare against the second parent
1823 1823
1824 1824 Returns 0 on success.
1825 1825 """
1826 1826
1827 1827 opts = pycompat.byteskwargs(opts)
1828 1828 revs = opts.get('rev')
1829 1829 change = opts.get('change')
1830 1830 stat = opts.get('stat')
1831 1831 reverse = opts.get('reverse')
1832 1832
1833 1833 if revs and change:
1834 1834 msg = _('cannot specify --rev and --change at the same time')
1835 1835 raise error.Abort(msg)
1836 1836 elif change:
1837 1837 node2 = scmutil.revsingle(repo, change, None).node()
1838 1838 node1 = repo[node2].p1().node()
1839 1839 else:
1840 1840 node1, node2 = scmutil.revpair(repo, revs)
1841 1841
1842 1842 if reverse:
1843 1843 node1, node2 = node2, node1
1844 1844
1845 1845 diffopts = patch.diffallopts(ui, opts)
1846 1846 m = scmutil.match(repo[node2], pats, opts)
1847 1847 ui.pager('diff')
1848 1848 cmdutil.diffordiffstat(ui, repo, diffopts, node1, node2, m, stat=stat,
1849 1849 listsubrepos=opts.get('subrepos'),
1850 1850 root=opts.get('root'))
1851 1851
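# NOTE (editor): a toy sketch of how diff() above reduces (--rev, --change,
# --reverse) to a (node1, node2) pair, using integers for revisions; the
# parentof callable stands in for repo[node2].p1() and the revs fallback
# for scmutil.revpair().
def _example_diff_endpoints(revs=None, change=None, reverse=False,
                            parentof=lambda r: r - 1):
    if revs and change is not None:
        raise ValueError('cannot specify --rev and --change at the same time')
    elif change is not None:
        node2 = change
        node1 = parentof(node2)        # compare a revision to its first parent
    else:
        node1, node2 = revs or (0, 1)  # scmutil.revpair() in the real code
    if reverse:
        node1, node2 = node2, node1
    return node1, node2

# _example_diff_endpoints(change=5) -> (4, 5)
# _example_diff_endpoints(revs=(2, 7), reverse=True) -> (7, 2)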
1852 1852 @command('^export',
1853 1853 [('o', 'output', '',
1854 1854 _('print output to file with formatted name'), _('FORMAT')),
1855 1855 ('', 'switch-parent', None, _('diff against the second parent')),
1856 1856 ('r', 'rev', [], _('revisions to export'), _('REV')),
1857 1857 ] + diffopts,
1858 1858 _('[OPTION]... [-o OUTFILESPEC] [-r] [REV]...'))
1859 1859 def export(ui, repo, *changesets, **opts):
1860 1860 """dump the header and diffs for one or more changesets
1861 1861
1862 1862 Print the changeset header and diffs for one or more revisions.
1863 1863 If no revision is given, the parent of the working directory is used.
1864 1864
1865 1865 The information shown in the changeset header is: author, date,
1866 1866 branch name (if non-default), changeset hash, parent(s) and commit
1867 1867 comment.
1868 1868
1869 1869 .. note::
1870 1870
1871 1871 :hg:`export` may generate unexpected diff output for merge
1872 1872 changesets, as it will compare the merge changeset against its
1873 1873 first parent only.
1874 1874
1875 1875 Output may be to a file, in which case the name of the file is
1876 1876 given using a format string. The formatting rules are as follows:
1877 1877
1878 1878 :``%%``: literal "%" character
1879 1879 :``%H``: changeset hash (40 hexadecimal digits)
1880 1880 :``%N``: number of patches being generated
1881 1881 :``%R``: changeset revision number
1882 1882 :``%b``: basename of the exporting repository
1883 1883 :``%h``: short-form changeset hash (12 hexadecimal digits)
1884 1884 :``%m``: first line of the commit message (only alphanumeric characters)
1885 1885 :``%n``: zero-padded sequence number, starting at 1
1886 1886 :``%r``: zero-padded changeset revision number
1887 1887
1888 1888 Without the -a/--text option, export will avoid generating diffs
1889 1889 of files it detects as binary. With -a, export will generate a
1890 1890 diff anyway, probably with undesirable results.
1891 1891
1892 1892 Use the -g/--git option to generate diffs in the git extended diff
1893 1893 format. See :hg:`help diffs` for more information.
1894 1894
1895 1895 With the --switch-parent option, the diff will be against the
1896 1896 second parent. It can be useful to review a merge.
1897 1897
1898 1898 .. container:: verbose
1899 1899
1900 1900 Examples:
1901 1901
1902 1902 - use export and import to transplant a bugfix to the current
1903 1903 branch::
1904 1904
1905 1905 hg export -r 9353 | hg import -
1906 1906
1907 1907 - export all the changesets between two revisions to a file with
1908 1908 rename information::
1909 1909
1910 1910 hg export --git -r 123:150 > changes.txt
1911 1911
1912 1912 - split outgoing changes into a series of patches with
1913 1913 descriptive names::
1914 1914
1915 1915 hg export -r "outgoing()" -o "%n-%m.patch"
1916 1916
1917 1917 Returns 0 on success.
1918 1918 """
1919 1919 opts = pycompat.byteskwargs(opts)
1920 1920 changesets += tuple(opts.get('rev', []))
1921 1921 if not changesets:
1922 1922 changesets = ['.']
1923 1923 revs = scmutil.revrange(repo, changesets)
1924 1924 if not revs:
1925 1925 raise error.Abort(_("export requires at least one changeset"))
1926 1926 if len(revs) > 1:
1927 1927 ui.note(_('exporting patches:\n'))
1928 1928 else:
1929 1929 ui.note(_('exporting patch:\n'))
1930 1930 ui.pager('export')
1931 1931 cmdutil.export(repo, revs, fntemplate=opts.get('output'),
1932 1932 switch_parent=opts.get('switch_parent'),
1933 1933 opts=patch.diffallopts(ui, opts))
1934 1934
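# NOTE (editor): a simplified standalone sketch of the FORMAT escapes
# documented in export() above (handled by cmdutil.makefilename() in the
# real code); only a few of the rules are covered here.
def _example_exportname(fmt, node, seqno, total):
    expand = {
        '%': '%',             # literal "%" character
        'H': node,            # changeset hash (40 hexadecimal digits)
        'h': node[:12],       # short-form changeset hash
        'n': '%02d' % seqno,  # zero-padded sequence number, starting at 1
        'N': '%d' % total,    # number of patches being generated
    }
    out, i = [], 0
    while i < len(fmt):
        if fmt[i] == '%' and i + 1 < len(fmt):
            out.append(expand.get(fmt[i + 1], ''))
            i += 2
        else:
            out.append(fmt[i])
            i += 1
    return ''.join(out)

# _example_exportname('%n-of-%N-%h.patch', 'a' * 40, 1, 3)
#   -> '01-of-3-aaaaaaaaaaaa.patch'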
1935 1935 @command('files',
1936 1936 [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
1937 1937 ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
1938 1938 ] + walkopts + formatteropts + subrepoopts,
1939 1939 _('[OPTION]... [FILE]...'))
1940 1940 def files(ui, repo, *pats, **opts):
1941 1941 """list tracked files
1942 1942
1943 1943 Print files under Mercurial control in the working directory or
1944 1944 specified revision for given files (excluding removed files).
1945 1945 Files can be specified as filenames or filesets.
1946 1946
1947 1947 If no files are given to match, this command prints the names
1948 1948 of all files under Mercurial control.
1949 1949
1950 1950 .. container:: verbose
1951 1951
1952 1952 Examples:
1953 1953
1954 1954 - list all files under the current directory::
1955 1955
1956 1956 hg files .
1957 1957
1958 1958 - show sizes and flags for the current revision::
1959 1959
1960 1960 hg files -vr .
1961 1961
1962 1962 - list all files named README::
1963 1963
1964 1964 hg files -I "**/README"
1965 1965
1966 1966 - list all binary files::
1967 1967
1968 1968 hg files "set:binary()"
1969 1969
1970 1970 - find files containing a regular expression::
1971 1971
1972 1972 hg files "set:grep('bob')"
1973 1973
1974 1974 - search tracked file contents with xargs and grep::
1975 1975
1976 1976 hg files -0 | xargs -0 grep foo
1977 1977
1978 1978 See :hg:`help patterns` and :hg:`help filesets` for more information
1979 1979 on specifying file patterns.
1980 1980
1981 1981 Returns 0 if a match is found, 1 otherwise.
1982 1982
1983 1983 """
1984 1984
1985 1985 opts = pycompat.byteskwargs(opts)
1986 1986 ctx = scmutil.revsingle(repo, opts.get('rev'), None)
1987 1987
1988 1988 end = '\n'
1989 1989 if opts.get('print0'):
1990 1990 end = '\0'
1991 1991 fmt = '%s' + end
1992 1992
1993 1993 m = scmutil.match(ctx, pats, opts)
1994 1994 ui.pager('files')
1995 1995 with ui.formatter('files', opts) as fm:
1996 1996 return cmdutil.files(ui, ctx, m, fm, fmt, opts.get('subrepos'))
1997 1997
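# NOTE (editor): a tiny standalone sketch of the --print0 framing above --
# swapping the '\n' terminator for '\0' is what lets
# `hg files -0 | xargs -0 ...` survive filenames containing whitespace.
def _example_print0(names, print0=False):
    end = '\0' if print0 else '\n'
    fmt = '%s' + end
    return ''.join(fmt % n for n in names)

# _example_print0(['a b.txt', 'c.txt'], print0=True) -> 'a b.txt\x00c.txt\x00'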
1998 1998 @command('^forget', walkopts, _('[OPTION]... FILE...'), inferrepo=True)
1999 1999 def forget(ui, repo, *pats, **opts):
2000 2000 """forget the specified files on the next commit
2001 2001
2002 2002 Mark the specified files so they will no longer be tracked
2003 2003 after the next commit.
2004 2004
2005 2005 This only removes files from the current branch, not from the
2006 2006 entire project history, and it does not delete them from the
2007 2007 working directory.
2008 2008
2009 2009 To delete the file from the working directory, see :hg:`remove`.
2010 2010
2011 2011 To undo a forget before the next commit, see :hg:`add`.
2012 2012
2013 2013 .. container:: verbose
2014 2014
2015 2015 Examples:
2016 2016
2017 2017 - forget newly-added binary files::
2018 2018
2019 2019 hg forget "set:added() and binary()"
2020 2020
2021 2021 - forget files that would be excluded by .hgignore::
2022 2022
2023 2023 hg forget "set:hgignore()"
2024 2024
2025 2025 Returns 0 on success.
2026 2026 """
2027 2027
2028 2028 opts = pycompat.byteskwargs(opts)
2029 2029 if not pats:
2030 2030 raise error.Abort(_('no files specified'))
2031 2031
2032 2032 m = scmutil.match(repo[None], pats, opts)
2033 2033 rejected = cmdutil.forget(ui, repo, m, prefix="", explicitonly=False)[0]
2034 2034 return 1 if rejected else 0
2035 2035
2036 2036 @command(
2037 2037 'graft',
2038 2038 [('r', 'rev', [], _('revisions to graft'), _('REV')),
2039 2039 ('c', 'continue', False, _('resume interrupted graft')),
2040 2040 ('e', 'edit', False, _('invoke editor on commit messages')),
2041 2041 ('', 'log', None, _('append graft info to log message')),
2042 2042 ('f', 'force', False, _('force graft')),
2043 2043 ('D', 'currentdate', False,
2044 2044 _('record the current date as commit date')),
2045 2045 ('U', 'currentuser', False,
2046 2046 _('record the current user as committer'))]
2047 2047 + commitopts2 + mergetoolopts + dryrunopts,
2048 2048 _('[OPTION]... [-r REV]... REV...'))
2049 2049 def graft(ui, repo, *revs, **opts):
2050 2050 '''copy changes from other branches onto the current branch
2051 2051
2052 2052 This command uses Mercurial's merge logic to copy individual
2053 2053 changes from other branches without merging branches in the
2054 2054 history graph. This is sometimes known as 'backporting' or
2055 2055 'cherry-picking'. By default, graft will copy user, date, and
2056 2056 description from the source changesets.
2057 2057
2058 2058 Changesets that are ancestors of the current revision, that have
2059 2059 already been grafted, or that are merges will be skipped.
2060 2060
2061 2061 If --log is specified, log messages will have a comment appended
2062 2062 of the form::
2063 2063
2064 2064 (grafted from CHANGESETHASH)
2065 2065
2066 2066 If --force is specified, revisions will be grafted even if they
2067 2067 are already ancestors of or have been grafted to the destination.
2068 2068 This is useful when the revisions have since been backed out.
2069 2069
2070 2070 If a graft merge results in conflicts, the graft process is
2071 2071 interrupted so that the current merge can be manually resolved.
2072 2072 Once all conflicts are addressed, the graft process can be
2073 2073 continued with the -c/--continue option.
2074 2074
2075 2075 .. note::
2076 2076
2077 2077 The -c/--continue option does not reapply earlier options, except
2078 2078 for --force.
2079 2079
2080 2080 .. container:: verbose
2081 2081
2082 2082 Examples:
2083 2083
2084 2084 - copy a single change to the stable branch and edit its description::
2085 2085
2086 2086 hg update stable
2087 2087 hg graft --edit 9393
2088 2088
2089 2089 - graft a range of changesets with one exception, updating dates::
2090 2090
2091 2091 hg graft -D "2085::2093 and not 2091"
2092 2092
2093 2093 - continue a graft after resolving conflicts::
2094 2094
2095 2095 hg graft -c
2096 2096
2097 2097 - show the source of a grafted changeset::
2098 2098
2099 2099 hg log --debug -r .
2100 2100
2101 2101 - show revisions sorted by date::
2102 2102
2103 2103 hg log -r "sort(all(), date)"
2104 2104
2105 2105 See :hg:`help revisions` for more about specifying revisions.
2106 2106
2107 2107 Returns 0 on successful completion.
2108 2108 '''
2109 2109 with repo.wlock():
2110 2110 return _dograft(ui, repo, *revs, **opts)
2111 2111
2112 2112 def _dograft(ui, repo, *revs, **opts):
2113 2113 opts = pycompat.byteskwargs(opts)
2114 2114 if revs and opts.get('rev'):
2115 2115 ui.warn(_('warning: inconsistent use of --rev might give unexpected '
2116 2116 'revision ordering!\n'))
2117 2117
2118 2118 revs = list(revs)
2119 2119 revs.extend(opts.get('rev'))
2120 2120
2121 2121 if not opts.get('user') and opts.get('currentuser'):
2122 2122 opts['user'] = ui.username()
2123 2123 if not opts.get('date') and opts.get('currentdate'):
2124 2124 opts['date'] = "%d %d" % util.makedate()
2125 2125
2126 2126 editor = cmdutil.getcommiteditor(editform='graft',
2127 2127 **pycompat.strkwargs(opts))
2128 2128
2129 2129 cont = False
2130 2130 if opts.get('continue'):
2131 2131 cont = True
2132 2132 if revs:
2133 2133 raise error.Abort(_("can't specify --continue and revisions"))
2134 2134 # read in unfinished revisions
2135 2135 try:
2136 2136 nodes = repo.vfs.read('graftstate').splitlines()
2137 2137 revs = [repo[node].rev() for node in nodes]
2138 2138 except IOError as inst:
2139 2139 if inst.errno != errno.ENOENT:
2140 2140 raise
2141 2141 cmdutil.wrongtooltocontinue(repo, _('graft'))
2142 2142 else:
2143 2143 cmdutil.checkunfinished(repo)
2144 2144 cmdutil.bailifchanged(repo)
2145 2145 if not revs:
2146 2146 raise error.Abort(_('no revisions specified'))
2147 2147 revs = scmutil.revrange(repo, revs)
2148 2148
2149 2149 skipped = set()
2150 2150 # check for merges
2151 2151 for rev in repo.revs('%ld and merge()', revs):
2152 2152 ui.warn(_('skipping ungraftable merge revision %s\n') % rev)
2153 2153 skipped.add(rev)
2154 2154 revs = [r for r in revs if r not in skipped]
2155 2155 if not revs:
2156 2156 return -1
2157 2157
2158 2158 # Don't check in the --continue case, in effect retaining --force across
2159 2159 # --continues. That's because without --force, any revisions we decided to
2160 2160 # skip would have been filtered out here, so they wouldn't have made their
2161 2161 # way to the graftstate. With --force, any revisions we would have otherwise
2162 2162 # skipped would not have been filtered out, and if they hadn't been applied
2163 2163 # already, they'd have been in the graftstate.
2164 2164 if not (cont or opts.get('force')):
2165 2165 # check for ancestors of dest branch
2166 2166 crev = repo['.'].rev()
2167 2167 ancestors = repo.changelog.ancestors([crev], inclusive=True)
2168 2168 # XXX make this lazy in the future
2169 2169 # don't mutate while iterating, create a copy
2170 2170 for rev in list(revs):
2171 2171 if rev in ancestors:
2172 2172 ui.warn(_('skipping ancestor revision %d:%s\n') %
2173 2173 (rev, repo[rev]))
2174 2174 # XXX remove on list is slow
2175 2175 revs.remove(rev)
2176 2176 if not revs:
2177 2177 return -1
2178 2178
2179 2179 # analyze revs for earlier grafts
2180 2180 ids = {}
2181 2181 for ctx in repo.set("%ld", revs):
2182 2182 ids[ctx.hex()] = ctx.rev()
2183 2183 n = ctx.extra().get('source')
2184 2184 if n:
2185 2185 ids[n] = ctx.rev()
2186 2186
2187 2187 # check ancestors for earlier grafts
2188 2188 ui.debug('scanning for duplicate grafts\n')
2189 2189
2190 2190 # The only changesets we can be sure don't contain grafts of any
2191 2191 # revs are the ones that are common ancestors of *all* revs:
2192 2192 for rev in repo.revs('only(%d,ancestor(%ld))', crev, revs):
2193 2193 ctx = repo[rev]
2194 2194 n = ctx.extra().get('source')
2195 2195 if n in ids:
2196 2196 try:
2197 2197 r = repo[n].rev()
2198 2198 except error.RepoLookupError:
2199 2199 r = None
2200 2200 if r in revs:
2201 2201 ui.warn(_('skipping revision %d:%s '
2202 2202 '(already grafted to %d:%s)\n')
2203 2203 % (r, repo[r], rev, ctx))
2204 2204 revs.remove(r)
2205 2205 elif ids[n] in revs:
2206 2206 if r is None:
2207 2207 ui.warn(_('skipping already grafted revision %d:%s '
2208 2208 '(%d:%s also has unknown origin %s)\n')
2209 2209 % (ids[n], repo[ids[n]], rev, ctx, n[:12]))
2210 2210 else:
2211 2211 ui.warn(_('skipping already grafted revision %d:%s '
2212 2212 '(%d:%s also has origin %d:%s)\n')
2213 2213 % (ids[n], repo[ids[n]], rev, ctx, r, n[:12]))
2214 2214 revs.remove(ids[n])
2215 2215 elif ctx.hex() in ids:
2216 2216 r = ids[ctx.hex()]
2217 2217 ui.warn(_('skipping already grafted revision %d:%s '
2218 2218 '(was grafted from %d:%s)\n') %
2219 2219 (r, repo[r], rev, ctx))
2220 2220 revs.remove(r)
2221 2221 if not revs:
2222 2222 return -1
2223 2223
2224 2224 for pos, ctx in enumerate(repo.set("%ld", revs)):
2225 2225 desc = '%d:%s "%s"' % (ctx.rev(), ctx,
2226 2226 ctx.description().split('\n', 1)[0])
2227 2227 names = repo.nodetags(ctx.node()) + repo.nodebookmarks(ctx.node())
2228 2228 if names:
2229 2229 desc += ' (%s)' % ' '.join(names)
2230 2230 ui.status(_('grafting %s\n') % desc)
2231 2231 if opts.get('dry_run'):
2232 2232 continue
2233 2233
2234 2234 source = ctx.extra().get('source')
2235 2235 extra = {}
2236 2236 if source:
2237 2237 extra['source'] = source
2238 2238 extra['intermediate-source'] = ctx.hex()
2239 2239 else:
2240 2240 extra['source'] = ctx.hex()
2241 2241 user = ctx.user()
2242 2242 if opts.get('user'):
2243 2243 user = opts['user']
2244 2244 date = ctx.date()
2245 2245 if opts.get('date'):
2246 2246 date = opts['date']
2247 2247 message = ctx.description()
2248 2248 if opts.get('log'):
2249 2249 message += '\n(grafted from %s)' % ctx.hex()
2250 2250
2251 2251 # we don't merge the first commit when continuing
2252 2252 if not cont:
2253 2253 # perform the graft merge with p1(rev) as 'ancestor'
2254 2254 try:
2255 2255 # ui.forcemerge is an internal variable, do not document
2256 2256 repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
2257 2257 'graft')
2258 2258 stats = mergemod.graft(repo, ctx, ctx.p1(),
2259 2259 ['local', 'graft'])
2260 2260 finally:
2261 2261 repo.ui.setconfig('ui', 'forcemerge', '', 'graft')
2262 2262 # report any conflicts
2263 2263 if stats and stats[3] > 0:
2264 2264 # write out state for --continue
2265 2265 nodelines = [repo[rev].hex() + "\n" for rev in revs[pos:]]
2266 2266 repo.vfs.write('graftstate', ''.join(nodelines))
2267 2267 extra = ''
2268 2268 if opts.get('user'):
2269 2269 extra += ' --user %s' % util.shellquote(opts['user'])
2270 2270 if opts.get('date'):
2271 2271 extra += ' --date %s' % util.shellquote(opts['date'])
2272 2272 if opts.get('log'):
2273 2273 extra += ' --log'
2274 2274 hint = _("use 'hg resolve' and 'hg graft --continue%s'") % extra
2275 2275 raise error.Abort(
2276 2276 _("unresolved conflicts, can't continue"),
2277 2277 hint=hint)
2278 2278 else:
2279 2279 cont = False
2280 2280
2281 2281 # commit
2282 2282 node = repo.commit(text=message, user=user,
2283 2283 date=date, extra=extra, editor=editor)
2284 2284 if node is None:
2285 2285 ui.warn(
2286 2286 _('note: graft of %d:%s created no changes to commit\n') %
2287 2287 (ctx.rev(), ctx))
2288 2288
2289 2289 # remove state when we complete successfully
2290 2290 if not opts.get('dry_run'):
2291 2291 repo.vfs.unlinkpath('graftstate', ignoremissing=True)
2292 2292
2293 2293 return 0
2294 2294
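# NOTE (editor): a small sketch of how the extra-handling block in
# _dograft() above records provenance -- extra['source'] always names the
# original changeset, and extra['intermediate-source'] is added when a graft
# is itself grafted, which is what the duplicate scan keys on.
def _example_graft_extra(ctx_hex, ctx_source=None):
    # ctx_source is extra.get('source') of the changeset being grafted
    extra = {}
    if ctx_source:
        extra['source'] = ctx_source            # keep the original root
        extra['intermediate-source'] = ctx_hex  # remember this hop too
    else:
        extra['source'] = ctx_hex
    return extra

# first graft: _example_graft_extra('abc') -> {'source': 'abc'}
# re-graft:    _example_graft_extra('def', 'abc')
#              -> {'source': 'abc', 'intermediate-source': 'def'}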
2295 2295 @command('grep',
2296 2296 [('0', 'print0', None, _('end fields with NUL')),
2297 2297 ('', 'all', None, _('print all revisions that match')),
2298 2298 ('a', 'text', None, _('treat all files as text')),
2299 2299 ('f', 'follow', None,
2300 2300 _('follow changeset history,'
2301 2301 ' or file history across copies and renames')),
2302 2302 ('i', 'ignore-case', None, _('ignore case when matching')),
2303 2303 ('l', 'files-with-matches', None,
2304 2304 _('print only filenames and revisions that match')),
2305 2305 ('n', 'line-number', None, _('print matching line numbers')),
2306 2306 ('r', 'rev', [],
2307 2307 _('only search files changed within revision range'), _('REV')),
2308 2308 ('u', 'user', None, _('list the author (long with -v)')),
2309 2309 ('d', 'date', None, _('list the date (short with -q)')),
2310 2310 ] + formatteropts + walkopts,
2311 2311 _('[OPTION]... PATTERN [FILE]...'),
2312 2312 inferrepo=True)
2313 2313 def grep(ui, repo, pattern, *pats, **opts):
2314 2314 """search revision history for a pattern in specified files
2315 2315
2316 2316 Search revision history for a regular expression in the specified
2317 2317 files or the entire project.
2318 2318
2319 2319 By default, grep prints the most recent revision number for each
2320 2320 file in which it finds a match. To get it to print every revision
2321 2321 that contains a change in match status ("-" for a match that becomes
2322 2322 a non-match, or "+" for a non-match that becomes a match), use the
2323 2323 --all flag.
2324 2324
2325 2325 PATTERN can be any Python (roughly Perl-compatible) regular
2326 2326 expression.
2327 2327
2328 2328 If no FILEs are specified (and -f/--follow isn't set), all files in
2329 2329 the repository are searched, including those that don't exist in the
2330 2330 current branch or have been deleted in a prior changeset.
2331 2331
2332 2332 Returns 0 if a match is found, 1 otherwise.
2333 2333 """
2334 2334 opts = pycompat.byteskwargs(opts)
2335 2335 reflags = re.M
2336 2336 if opts.get('ignore_case'):
2337 2337 reflags |= re.I
2338 2338 try:
2339 2339 regexp = util.re.compile(pattern, reflags)
2340 2340 except re.error as inst:
2341 2341 ui.warn(_("grep: invalid match pattern: %s\n") % inst)
2342 2342 return 1
2343 2343 sep, eol = ':', '\n'
2344 2344 if opts.get('print0'):
2345 2345 sep = eol = '\0'
2346 2346
2347 2347 getfile = util.lrucachefunc(repo.file)
2348 2348
2349 2349 def matchlines(body):
2350 2350 begin = 0
2351 2351 linenum = 0
2352 2352 while begin < len(body):
2353 2353 match = regexp.search(body, begin)
2354 2354 if not match:
2355 2355 break
2356 2356 mstart, mend = match.span()
2357 2357 linenum += body.count('\n', begin, mstart) + 1
2358 2358 lstart = body.rfind('\n', begin, mstart) + 1 or begin
2359 2359 begin = body.find('\n', mend) + 1 or len(body) + 1
2360 2360 lend = begin - 1
2361 2361 yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]
2362 2362
2363 2363 class linestate(object):
2364 2364 def __init__(self, line, linenum, colstart, colend):
2365 2365 self.line = line
2366 2366 self.linenum = linenum
2367 2367 self.colstart = colstart
2368 2368 self.colend = colend
2369 2369
2370 2370 def __hash__(self):
2371 2371 return hash((self.linenum, self.line))
2372 2372
2373 2373 def __eq__(self, other):
2374 2374 return self.line == other.line
2375 2375
2376 2376 def findpos(self):
2377 2377 """Iterate all (start, end) indices of matches"""
2378 2378 yield self.colstart, self.colend
2379 2379 p = self.colend
2380 2380 while p < len(self.line):
2381 2381 m = regexp.search(self.line, p)
2382 2382 if not m:
2383 2383 break
2384 2384 yield m.span()
2385 2385 p = m.end()
2386 2386
2387 2387 matches = {}
2388 2388 copies = {}
2389 2389 def grepbody(fn, rev, body):
2390 2390 matches[rev].setdefault(fn, [])
2391 2391 m = matches[rev][fn]
2392 2392 for lnum, cstart, cend, line in matchlines(body):
2393 2393 s = linestate(line, lnum, cstart, cend)
2394 2394 m.append(s)
2395 2395
2396 2396 def difflinestates(a, b):
2397 2397 sm = difflib.SequenceMatcher(None, a, b)
2398 2398 for tag, alo, ahi, blo, bhi in sm.get_opcodes():
2399 2399 if tag == 'insert':
2400 2400 for i in xrange(blo, bhi):
2401 2401 yield ('+', b[i])
2402 2402 elif tag == 'delete':
2403 2403 for i in xrange(alo, ahi):
2404 2404 yield ('-', a[i])
2405 2405 elif tag == 'replace':
2406 2406 for i in xrange(alo, ahi):
2407 2407 yield ('-', a[i])
2408 2408 for i in xrange(blo, bhi):
2409 2409 yield ('+', b[i])
2410 2410
2411 2411 def display(fm, fn, ctx, pstates, states):
2412 2412 rev = ctx.rev()
2413 2413 if fm.isplain():
2414 2414 formatuser = ui.shortuser
2415 2415 else:
2416 2416 formatuser = str
2417 2417 if ui.quiet:
2418 2418 datefmt = '%Y-%m-%d'
2419 2419 else:
2420 2420 datefmt = '%a %b %d %H:%M:%S %Y %1%2'
2421 2421 found = False
2422 2422 @util.cachefunc
2423 2423 def binary():
2424 2424 flog = getfile(fn)
2425 2425 return util.binary(flog.read(ctx.filenode(fn)))
2426 2426
2427 2427 fieldnamemap = {'filename': 'file', 'linenumber': 'line_number'}
2428 2428 if opts.get('all'):
2429 2429 iter = difflinestates(pstates, states)
2430 2430 else:
2431 2431 iter = [('', l) for l in states]
2432 2432 for change, l in iter:
2433 2433 fm.startitem()
2434 2434 fm.data(node=fm.hexfunc(ctx.node()))
2435 2435 cols = [
2436 2436 ('filename', fn, True),
2437 2437 ('rev', rev, True),
2438 2438 ('linenumber', l.linenum, opts.get('line_number')),
2439 2439 ]
2440 2440 if opts.get('all'):
2441 2441 cols.append(('change', change, True))
2442 2442 cols.extend([
2443 2443 ('user', formatuser(ctx.user()), opts.get('user')),
2444 2444 ('date', fm.formatdate(ctx.date(), datefmt), opts.get('date')),
2445 2445 ])
2446 2446 lastcol = next(name for name, data, cond in reversed(cols) if cond)
2447 2447 for name, data, cond in cols:
2448 2448 field = fieldnamemap.get(name, name)
2449 2449 fm.condwrite(cond, field, '%s', data, label='grep.%s' % name)
2450 2450 if cond and name != lastcol:
2451 2451 fm.plain(sep, label='grep.sep')
2452 2452 if not opts.get('files_with_matches'):
2453 2453 fm.plain(sep, label='grep.sep')
2454 2454 if not opts.get('text') and binary():
2455 2455 fm.plain(_(" Binary file matches"))
2456 2456 else:
2457 2457 displaymatches(fm.nested('texts'), l)
2458 2458 fm.plain(eol)
2459 2459 found = True
2460 2460 if opts.get('files_with_matches'):
2461 2461 break
2462 2462 return found
2463 2463
2464 2464 def displaymatches(fm, l):
2465 2465 p = 0
2466 2466 for s, e in l.findpos():
2467 2467 if p < s:
2468 2468 fm.startitem()
2469 2469 fm.write('text', '%s', l.line[p:s])
2470 2470 fm.data(matched=False)
2471 2471 fm.startitem()
2472 2472 fm.write('text', '%s', l.line[s:e], label='grep.match')
2473 2473 fm.data(matched=True)
2474 2474 p = e
2475 2475 if p < len(l.line):
2476 2476 fm.startitem()
2477 2477 fm.write('text', '%s', l.line[p:])
2478 2478 fm.data(matched=False)
2479 2479 fm.end()
2480 2480
2481 2481 skip = {}
2482 2482 revfiles = {}
2483 2483 matchfn = scmutil.match(repo[None], pats, opts)
2484 2484 found = False
2485 2485 follow = opts.get('follow')
2486 2486
2487 2487 def prep(ctx, fns):
2488 2488 rev = ctx.rev()
2489 2489 pctx = ctx.p1()
2490 2490 parent = pctx.rev()
2491 2491 matches.setdefault(rev, {})
2492 2492 matches.setdefault(parent, {})
2493 2493 files = revfiles.setdefault(rev, [])
2494 2494 for fn in fns:
2495 2495 flog = getfile(fn)
2496 2496 try:
2497 2497 fnode = ctx.filenode(fn)
2498 2498 except error.LookupError:
2499 2499 continue
2500 2500
2501 2501 copied = flog.renamed(fnode)
2502 2502 copy = follow and copied and copied[0]
2503 2503 if copy:
2504 2504 copies.setdefault(rev, {})[fn] = copy
2505 2505 if fn in skip:
2506 2506 if copy:
2507 2507 skip[copy] = True
2508 2508 continue
2509 2509 files.append(fn)
2510 2510
2511 2511 if fn not in matches[rev]:
2512 2512 grepbody(fn, rev, flog.read(fnode))
2513 2513
2514 2514 pfn = copy or fn
2515 2515 if pfn not in matches[parent]:
2516 2516 try:
2517 2517 fnode = pctx.filenode(pfn)
2518 2518 grepbody(pfn, parent, flog.read(fnode))
2519 2519 except error.LookupError:
2520 2520 pass
2521 2521
2522 2522 ui.pager('grep')
2523 2523 fm = ui.formatter('grep', opts)
2524 2524 for ctx in cmdutil.walkchangerevs(repo, matchfn, opts, prep):
2525 2525 rev = ctx.rev()
2526 2526 parent = ctx.p1().rev()
2527 2527 for fn in sorted(revfiles.get(rev, [])):
2528 2528 states = matches[rev][fn]
2529 2529 copy = copies.get(rev, {}).get(fn)
2530 2530 if fn in skip:
2531 2531 if copy:
2532 2532 skip[copy] = True
2533 2533 continue
2534 2534 pstates = matches.get(parent, {}).get(copy or fn, [])
2535 2535 if pstates or states:
2536 2536 r = display(fm, fn, ctx, pstates, states)
2537 2537 found = found or r
2538 2538 if r and not opts.get('all'):
2539 2539 skip[fn] = True
2540 2540 if copy:
2541 2541 skip[copy] = True
2542 2542 del matches[rev]
2543 2543 del revfiles[rev]
2544 2544 fm.end()
2545 2545
2546 2546 return not found
2547 2547
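# NOTE (editor): an equivalent standalone sketch of matchlines() above,
# which converts the byte offsets of regex matches into
# (line number, column start, column end, line text) tuples using the plain
# re module (aliased to avoid shadowing this module's own import).
import re as _re

def _example_matchlines(pattern, body):
    regexp = _re.compile(pattern)
    begin = linenum = 0
    while begin < len(body):
        match = regexp.search(body, begin)
        if not match:
            break
        mstart, mend = match.span()
        linenum += body.count('\n', begin, mstart) + 1
        lstart = body.rfind('\n', begin, mstart) + 1 or begin
        begin = body.find('\n', mend) + 1 or len(body) + 1
        lend = begin - 1
        yield linenum, mstart - lstart, mend - lstart, body[lstart:lend]

# list(_example_matchlines('foo', 'bar\nfoo baz\n')) -> [(2, 0, 3, 'foo baz')]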
2548 2548 @command('heads',
2549 2549 [('r', 'rev', '',
2550 2550 _('show only heads which are descendants of STARTREV'), _('STARTREV')),
2551 2551 ('t', 'topo', False, _('show topological heads only')),
2552 2552 ('a', 'active', False, _('show active branchheads only (DEPRECATED)')),
2553 2553 ('c', 'closed', False, _('show normal and closed branch heads')),
2554 2554 ] + templateopts,
2555 2555 _('[-ct] [-r STARTREV] [REV]...'))
2556 2556 def heads(ui, repo, *branchrevs, **opts):
2557 2557 """show branch heads
2558 2558
2559 2559 With no arguments, show all open branch heads in the repository.
2560 2560 Branch heads are changesets that have no descendants on the
2561 2561 same branch. They are where development generally takes place and
2562 2562 are the usual targets for update and merge operations.
2563 2563
2564 2564 If one or more REVs are given, only open branch heads on the
2565 2565 branches associated with the specified changesets are shown. This
2566 2566 means that you can use :hg:`heads .` to see the heads on the
2567 2567 currently checked-out branch.
2568 2568
2569 2569 If -c/--closed is specified, also show branch heads marked closed
2570 2570 (see :hg:`commit --close-branch`).
2571 2571
2572 2572 If STARTREV is specified, only those heads that are descendants of
2573 2573 STARTREV will be displayed.
2574 2574
2575 2575 If -t/--topo is specified, named branch mechanics will be ignored and only
2576 2576 topological heads (changesets with no children) will be shown.
2577 2577
2578 2578 Returns 0 if matching heads are found, 1 if not.
2579 2579 """
2580 2580
2581 2581 opts = pycompat.byteskwargs(opts)
2582 2582 start = None
2583 2583 if 'rev' in opts:
2584 2584 start = scmutil.revsingle(repo, opts['rev'], None).node()
2585 2585
2586 2586 if opts.get('topo'):
2587 2587 heads = [repo[h] for h in repo.heads(start)]
2588 2588 else:
2589 2589 heads = []
2590 2590 for branch in repo.branchmap():
2591 2591 heads += repo.branchheads(branch, start, opts.get('closed'))
2592 2592 heads = [repo[h] for h in heads]
2593 2593
2594 2594 if branchrevs:
2595 2595 branches = set(repo[br].branch() for br in branchrevs)
2596 2596 heads = [h for h in heads if h.branch() in branches]
2597 2597
2598 2598 if opts.get('active') and branchrevs:
2599 2599 dagheads = repo.heads(start)
2600 2600 heads = [h for h in heads if h.node() in dagheads]
2601 2601
2602 2602 if branchrevs:
2603 2603 haveheads = set(h.branch() for h in heads)
2604 2604 if branches - haveheads:
2605 2605 headless = ', '.join(b for b in branches - haveheads)
2606 2606 msg = _('no open branch heads found on branches %s')
2607 2607 if opts.get('rev'):
2608 2608 msg += _(' (started at %s)') % opts['rev']
2609 2609 ui.warn((msg + '\n') % headless)
2610 2610
2611 2611 if not heads:
2612 2612 return 1
2613 2613
2614 2614 ui.pager('heads')
2615 2615 heads = sorted(heads, key=lambda x: -x.rev())
2616 2616 displayer = cmdutil.show_changeset(ui, repo, opts)
2617 2617 for ctx in heads:
2618 2618 displayer.show(ctx)
2619 2619 displayer.close()
2620 2620
2621 2621 @command('help',
2622 2622 [('e', 'extension', None, _('show only help for extensions')),
2623 2623 ('c', 'command', None, _('show only help for commands')),
2624 2624 ('k', 'keyword', None, _('show topics matching keyword')),
2625 2625 ('s', 'system', [], _('show help for specific platform(s)')),
2626 2626 ],
2627 2627 _('[-ecks] [TOPIC]'),
2628 2628 norepo=True)
2629 2629 def help_(ui, name=None, **opts):
2630 2630 """show help for a given topic or a help overview
2631 2631
2632 2632 With no arguments, print a list of commands with short help messages.
2633 2633
2634 2634 Given a topic, extension, or command name, print help for that
2635 2635 topic.
2636 2636
2637 2637 Returns 0 if successful.
2638 2638 """
2639 2639
2640 2640 keep = opts.get(r'system') or []
2641 2641 if len(keep) == 0:
2642 2642 if pycompat.sysplatform.startswith('win'):
2643 2643 keep.append('windows')
2644 2644 elif pycompat.sysplatform == 'OpenVMS':
2645 2645 keep.append('vms')
2646 2646 elif pycompat.sysplatform == 'plan9':
2647 2647 keep.append('plan9')
2648 2648 else:
2649 2649 keep.append('unix')
2650 2650 keep.append(pycompat.sysplatform.lower())
2651 2651 if ui.verbose:
2652 2652 keep.append('verbose')
2653 2653
2654 2654 commands = sys.modules[__name__]
2655 2655 formatted = help.formattedhelp(ui, commands, name, keep=keep, **opts)
2656 2656 ui.pager('help')
2657 2657 ui.write(formatted)
2658 2658
2659 2659
2660 2660 @command('identify|id',
2661 2661 [('r', 'rev', '',
2662 2662 _('identify the specified revision'), _('REV')),
2663 2663 ('n', 'num', None, _('show local revision number')),
2664 2664 ('i', 'id', None, _('show global revision id')),
2665 2665 ('b', 'branch', None, _('show branch')),
2666 2666 ('t', 'tags', None, _('show tags')),
2667 2667 ('B', 'bookmarks', None, _('show bookmarks')),
2668 2668 ] + remoteopts,
2669 2669 _('[-nibtB] [-r REV] [SOURCE]'),
2670 2670 optionalrepo=True)
2671 2671 def identify(ui, repo, source=None, rev=None,
2672 2672 num=None, id=None, branch=None, tags=None, bookmarks=None, **opts):
2673 2673 """identify the working directory or specified revision
2674 2674
2675 2675 Print a summary identifying the repository state at REV using one or
2676 2676 two parent hash identifiers, followed by a "+" if the working
2677 2677 directory has uncommitted changes, the branch name (if not default),
2678 2678 a list of tags, and a list of bookmarks.
2679 2679
2680 2680 When REV is not given, print a summary of the current state of the
2681 2681 repository.
2682 2682
2683 2683 Specifying a path to a repository root or Mercurial bundle will
2684 2684 cause lookup to operate on that repository/bundle.
2685 2685
2686 2686 .. container:: verbose
2687 2687
2688 2688 Examples:
2689 2689
2690 2690 - generate a build identifier for the working directory::
2691 2691
2692 2692 hg id --id > build-id.dat
2693 2693
2694 2694 - find the revision corresponding to a tag::
2695 2695
2696 2696 hg id -n -r 1.3
2697 2697
2698 2698 - check the most recent revision of a remote repository::
2699 2699
2700 2700 hg id -r tip https://www.mercurial-scm.org/repo/hg/
2701 2701
2702 2702 See :hg:`log` for generating more information about specific revisions,
2703 2703 including full hash identifiers.
2704 2704
2705 2705 Returns 0 if successful.
2706 2706 """
2707 2707
2708 2708 opts = pycompat.byteskwargs(opts)
2709 2709 if not repo and not source:
2710 2710 raise error.Abort(_("there is no Mercurial repository here "
2711 2711 "(.hg not found)"))
2712 2712
2713 2713 if ui.debugflag:
2714 2714 hexfunc = hex
2715 2715 else:
2716 2716 hexfunc = short
2717 2717 default = not (num or id or branch or tags or bookmarks)
2718 2718 output = []
2719 2719 revs = []
2720 2720
2721 2721 if source:
2722 2722 source, branches = hg.parseurl(ui.expandpath(source))
2723 2723 peer = hg.peer(repo or ui, opts, source) # only pass ui when no repo
2724 2724 repo = peer.local()
2725 2725 revs, checkout = hg.addbranchrevs(repo, peer, branches, None)
2726 2726
2727 2727 if not repo:
2728 2728 if num or branch or tags:
2729 2729 raise error.Abort(
2730 2730 _("can't query remote revision number, branch, or tags"))
2731 2731 if not rev and revs:
2732 2732 rev = revs[0]
2733 2733 if not rev:
2734 2734 rev = "tip"
2735 2735
2736 2736 remoterev = peer.lookup(rev)
2737 2737 if default or id:
2738 2738 output = [hexfunc(remoterev)]
2739 2739
2740 2740 def getbms():
2741 2741 bms = []
2742 2742
2743 2743 if 'bookmarks' in peer.listkeys('namespaces'):
2744 2744 hexremoterev = hex(remoterev)
2745 2745 bms = [bm for bm, bmr in peer.listkeys('bookmarks').iteritems()
2746 2746 if bmr == hexremoterev]
2747 2747
2748 2748 return sorted(bms)
2749 2749
2750 2750 if bookmarks:
2751 2751 output.extend(getbms())
2752 2752 elif default and not ui.quiet:
2753 2753 # multiple bookmarks for a single parent separated by '/'
2754 2754 bm = '/'.join(getbms())
2755 2755 if bm:
2756 2756 output.append(bm)
2757 2757 else:
2758 2758 ctx = scmutil.revsingle(repo, rev, None)
2759 2759
2760 2760 if ctx.rev() is None:
2761 2761 ctx = repo[None]
2762 2762 parents = ctx.parents()
2763 2763 taglist = []
2764 2764 for p in parents:
2765 2765 taglist.extend(p.tags())
2766 2766
2767 2767 changed = ""
2768 2768 if default or id or num:
2769 2769 if (any(repo.status())
2770 2770 or any(ctx.sub(s).dirty() for s in ctx.substate)):
2771 2771 changed = '+'
2772 2772 if default or id:
2773 2773 output = ["%s%s" %
2774 2774 ('+'.join([hexfunc(p.node()) for p in parents]), changed)]
2775 2775 if num:
2776 2776 output.append("%s%s" %
2777 2777 ('+'.join(["%d" % p.rev() for p in parents]), changed))
2778 2778 else:
2779 2779 if default or id:
2780 2780 output = [hexfunc(ctx.node())]
2781 2781 if num:
2782 2782 output.append(pycompat.bytestr(ctx.rev()))
2783 2783 taglist = ctx.tags()
2784 2784
2785 2785 if default and not ui.quiet:
2786 2786 b = ctx.branch()
2787 2787 if b != 'default':
2788 2788 output.append("(%s)" % b)
2789 2789
2790 2790 # multiple tags for a single parent separated by '/'
2791 2791 t = '/'.join(taglist)
2792 2792 if t:
2793 2793 output.append(t)
2794 2794
2795 2795 # multiple bookmarks for a single parent separated by '/'
2796 2796 bm = '/'.join(ctx.bookmarks())
2797 2797 if bm:
2798 2798 output.append(bm)
2799 2799 else:
2800 2800 if branch:
2801 2801 output.append(ctx.branch())
2802 2802
2803 2803 if tags:
2804 2804 output.extend(taglist)
2805 2805
2806 2806 if bookmarks:
2807 2807 output.extend(ctx.bookmarks())
2808 2808
2809 2809 ui.write("%s\n" % ' '.join(output))
2810 2810
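# NOTE (editor): a toy sketch of the default-output assembly for a local
# working directory in identify() above -- parent hashes joined with '+', a
# trailing '+' when anything is dirty, then branch/tags/bookmarks; shortnode
# stands in for the hex/short helpers.
def _example_identify(parents, dirty=False, branch='default',
                      tags=(), bookmarks=()):
    shortnode = lambda n: n[:12]
    output = ['%s%s' % ('+'.join(shortnode(p) for p in parents),
                        '+' if dirty else '')]
    if branch != 'default':
        output.append('(%s)' % branch)  # the default branch is left implicit
    if tags:
        output.append('/'.join(tags))       # multiple tags joined by '/'
    if bookmarks:
        output.append('/'.join(bookmarks))  # bookmarks likewise
    return ' '.join(output)

# _example_identify(['a' * 40], dirty=True, tags=['tip'])
#   -> 'aaaaaaaaaaaa+ tip'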
2811 2811 @command('import|patch',
2812 2812 [('p', 'strip', 1,
2813 2813 _('directory strip option for patch. This has the same '
2814 2814 'meaning as the corresponding patch option'), _('NUM')),
2815 2815 ('b', 'base', '', _('base path (DEPRECATED)'), _('PATH')),
2816 2816 ('e', 'edit', False, _('invoke editor on commit messages')),
2817 2817 ('f', 'force', None,
2818 2818 _('skip check for outstanding uncommitted changes (DEPRECATED)')),
2819 2819 ('', 'no-commit', None,
2820 2820 _("don't commit, just update the working directory")),
2821 2821 ('', 'bypass', None,
2822 2822 _("apply patch without touching the working directory")),
2823 2823 ('', 'partial', None,
2824 2824 _('commit even if some hunks fail')),
2825 2825 ('', 'exact', None,
2826 2826 _('abort if patch would apply lossily')),
2827 2827 ('', 'prefix', '',
2828 2828 _('apply patch to subdirectory'), _('DIR')),
2829 2829 ('', 'import-branch', None,
2830 2830 _('use any branch information in patch (implied by --exact)'))] +
2831 2831 commitopts + commitopts2 + similarityopts,
2832 2832 _('[OPTION]... PATCH...'))
2833 2833 def import_(ui, repo, patch1=None, *patches, **opts):
2834 2834 """import an ordered set of patches
2835 2835
2836 2836 Import a list of patches and commit them individually (unless
2837 2837 --no-commit is specified).
2838 2838
2839 2839 To read a patch from standard input (stdin), use "-" as the patch
2840 2840 name. If a URL is specified, the patch will be downloaded from
2841 2841 there.
2842 2842
2843 2843 Import first applies changes to the working directory (unless
2844 2844 --bypass is specified) and will abort if there are outstanding
2845 2845 changes.
2846 2846
2847 2847 Use --bypass to apply and commit patches directly to the
2848 2848 repository, without affecting the working directory. Without
2849 2849 --exact, patches will be applied on top of the working directory
2850 2850 parent revision.
2851 2851
2852 2852 You can import a patch straight from a mail message. Even patches
2853 2853 as attachments work (to use the body part, it must have type
2854 2854 text/plain or text/x-patch). From and Subject headers of the email
2855 2855 message are used as the default committer and commit message. All
2856 2856 text/plain body parts before first diff are added to the commit
2857 2857 message.
2858 2858
2859 2859 If the imported patch was generated by :hg:`export`, user and
2860 2860 description from patch override values from message headers and
2861 2861 body. Values given on command line with -m/--message and -u/--user
2862 2862 override these.
2863 2863
2864 2864 If --exact is specified, import will set the working directory to
2865 2865 the parent of each patch before applying it, and will abort if the
2866 2866 resulting changeset has a different ID than the one recorded in
2867 2867 the patch. This will guard against various ways that portable
2868 2868 patch formats and mail systems might fail to transfer Mercurial
2869 2869 data or metadata. See :hg:`bundle` for lossless transmission.
2870 2870
2871 2871 Use --partial to ensure a changeset will be created from the patch
2872 2872 even if some hunks fail to apply. Hunks that fail to apply will be
2873 2873 written to a <target-file>.rej file. Conflicts can then be resolved
2874 2874 by hand before :hg:`commit --amend` is run to update the created
2875 2875 changeset. This flag exists to let people import patches that
2876 2876 partially apply without losing the associated metadata (author,
2877 2877 date, description, ...).
2878 2878
2879 2879 .. note::
2880 2880
2881 2881 When no hunks apply cleanly, :hg:`import --partial` will create
2882 2882 an empty changeset, importing only the patch metadata.
2883 2883
2884 2884 With -s/--similarity, hg will attempt to discover renames and
2885 2885 copies in the patch in the same way as :hg:`addremove`.
2886 2886
2887 2887 It is possible to use external patch programs to perform the patch
2888 2888 by setting the ``ui.patch`` configuration option. For the default
2889 2889 internal tool, the fuzz can also be configured via ``patch.fuzz``.
2890 2890 See :hg:`help config` for more information about configuration
2891 2891 files and how to use these options.
2892 2892
2893 2893 See :hg:`help dates` for a list of formats valid for -d/--date.
2894 2894
2895 2895 .. container:: verbose
2896 2896
2897 2897 Examples:
2898 2898
2899 2899 - import a traditional patch from a website and detect renames::
2900 2900
2901 2901 hg import -s 80 http://example.com/bugfix.patch
2902 2902
2903 2903 - import a changeset from an hgweb server::
2904 2904
2905 2905 hg import https://www.mercurial-scm.org/repo/hg/rev/5ca8c111e9aa
2906 2906
2907 2907 - import all the patches in a Unix-style mbox::
2908 2908
2909 2909 hg import incoming-patches.mbox
2910 2910
2911 2911 - import patches from stdin::
2912 2912
2913 2913 hg import -
2914 2914
2915 2915 - attempt to exactly restore an exported changeset (not always
2916 2916 possible)::
2917 2917
2918 2918 hg import --exact proposed-fix.patch
2919 2919
2920 2920 - use an external tool to apply a patch which is too fuzzy for
2921 2921 the default internal tool::
2922 2922
2923 2923 hg import --config ui.patch="patch --merge" fuzzy.patch
2924 2924
2925 2925 - change the default fuzz from 2 to a less strict 7::
2926 2926
2927 2927 hg import --config patch.fuzz=7 fuzz.patch
2928 2928
2929 2929 Returns 0 on success, 1 on partial success (see --partial).
2930 2930 """
2931 2931
2932 2932 opts = pycompat.byteskwargs(opts)
2933 2933 if not patch1:
2934 2934 raise error.Abort(_('need at least one patch to import'))
2935 2935
2936 2936 patches = (patch1,) + patches
2937 2937
2938 2938 date = opts.get('date')
2939 2939 if date:
2940 2940 opts['date'] = util.parsedate(date)
2941 2941
2942 2942 exact = opts.get('exact')
2943 2943 update = not opts.get('bypass')
2944 2944 if not update and opts.get('no_commit'):
2945 2945 raise error.Abort(_('cannot use --no-commit with --bypass'))
2946 2946 try:
2947 2947 sim = float(opts.get('similarity') or 0)
2948 2948 except ValueError:
2949 2949 raise error.Abort(_('similarity must be a number'))
2950 2950 if sim < 0 or sim > 100:
2951 2951 raise error.Abort(_('similarity must be between 0 and 100'))
2952 2952 if sim and not update:
2953 2953 raise error.Abort(_('cannot use --similarity with --bypass'))
2954 2954 if exact:
2955 2955 if opts.get('edit'):
2956 2956 raise error.Abort(_('cannot use --exact with --edit'))
2957 2957 if opts.get('prefix'):
2958 2958 raise error.Abort(_('cannot use --exact with --prefix'))
2959 2959
2960 2960 base = opts["base"]
2961 2961 wlock = dsguard = lock = tr = None
2962 2962 msgs = []
2963 2963 ret = 0
2964 2964
2965 2965
2966 2966 try:
2967 2967 wlock = repo.wlock()
2968 2968
2969 2969 if update:
2970 2970 cmdutil.checkunfinished(repo)
2971 2971 if (exact or not opts.get('force')):
2972 2972 cmdutil.bailifchanged(repo)
2973 2973
2974 2974 if not opts.get('no_commit'):
2975 2975 lock = repo.lock()
2976 2976 tr = repo.transaction('import')
2977 2977 else:
2978 2978 dsguard = dirstateguard.dirstateguard(repo, 'import')
2979 2979 parents = repo[None].parents()
2980 2980 for patchurl in patches:
2981 2981 if patchurl == '-':
2982 2982 ui.status(_('applying patch from stdin\n'))
2983 2983 patchfile = ui.fin
2984 2984 patchurl = 'stdin' # for error message
2985 2985 else:
2986 2986 patchurl = os.path.join(base, patchurl)
2987 2987 ui.status(_('applying %s\n') % patchurl)
2988 2988 patchfile = hg.openpath(ui, patchurl)
2989 2989
2990 2990 haspatch = False
2991 2991 for hunk in patch.split(patchfile):
2992 2992 (msg, node, rej) = cmdutil.tryimportone(ui, repo, hunk,
2993 2993 parents, opts,
2994 2994 msgs, hg.clean)
2995 2995 if msg:
2996 2996 haspatch = True
2997 2997 ui.note(msg + '\n')
2998 2998 if update or exact:
2999 2999 parents = repo[None].parents()
3000 3000 else:
3001 3001 parents = [repo[node]]
3002 3002 if rej:
3003 3003 ui.write_err(_("patch applied partially\n"))
3004 3004 ui.write_err(_("(fix the .rej files and run "
3005 3005 "`hg commit --amend`)\n"))
3006 3006 ret = 1
3007 3007 break
3008 3008
3009 3009 if not haspatch:
3010 3010 raise error.Abort(_('%s: no diffs found') % patchurl)
3011 3011
3012 3012 if tr:
3013 3013 tr.close()
3014 3014 if msgs:
3015 3015 repo.savecommitmessage('\n* * *\n'.join(msgs))
3016 3016 if dsguard:
3017 3017 dsguard.close()
3018 3018 return ret
3019 3019 finally:
3020 3020 if tr:
3021 3021 tr.release()
3022 3022 release(lock, dsguard, wlock)
3023 3023
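# NOTE (editor): a minimal sketch of the transaction shape used by import_()
# above -- close() on success inside the try, release() unconditionally in
# the finally, where release() is a no-op once closed and a rollback
# otherwise. maketransaction stands in for repo.transaction('import').
def _example_guarded(work, maketransaction):
    tr = None
    try:
        tr = maketransaction()
        ret = work()
        tr.close()        # success: make the changes permanent
        return ret
    finally:
        if tr:
            tr.release()  # rolls back unless close() already ran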
3024 3024 @command('incoming|in',
3025 3025 [('f', 'force', None,
3026 3026 _('run even if remote repository is unrelated')),
3027 3027 ('n', 'newest-first', None, _('show newest record first')),
3028 3028 ('', 'bundle', '',
3029 3029 _('file to store the bundles into'), _('FILE')),
3030 3030 ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
3031 3031 ('B', 'bookmarks', False, _("compare bookmarks")),
3032 3032 ('b', 'branch', [],
3033 3033 _('a specific branch you would like to pull'), _('BRANCH')),
3034 3034 ] + logopts + remoteopts + subrepoopts,
3035 3035 _('[-p] [-n] [-M] [-f] [-r REV]... [--bundle FILENAME] [SOURCE]'))
3036 3036 def incoming(ui, repo, source="default", **opts):
3037 3037 """show new changesets found in source
3038 3038
3039 3039 Show new changesets found in the specified path/URL or the default
3040 3040 pull location. These are the changesets that would have been pulled
3041 3041 had you run a pull at the time you issued this command.
3042 3042
3043 3043 See :hg:`pull` for valid source format details.
3044 3044
3045 3045 .. container:: verbose
3046 3046
3047 3047 With -B/--bookmarks, the result of bookmark comparison between
3048 3048 local and remote repositories is displayed. With -v/--verbose,
3049 3049 status is also displayed for each bookmark like below::
3050 3050
3051 3051 BM1 01234567890a added
3052 3052 BM2 1234567890ab advanced
3053 3053 BM3 234567890abc diverged
3054 3054 BM4 34567890abcd changed
3055 3055
3056 3056 The action taken locally when pulling depends on the
3057 3057 status of each bookmark:
3058 3058
3059 3059 :``added``: pull will create it
3060 3060 :``advanced``: pull will update it
3061 3061 :``diverged``: pull will create a divergent bookmark
3062 3062 :``changed``: result depends on remote changesets
3063 3063
3064 3064 From the point of view of pulling behavior, bookmarks
3065 3065 existing only in the remote repository are treated as ``added``,
3066 3066 even if they have in fact been locally deleted.
3067 3067
3068 3068 .. container:: verbose
3069 3069
3070 3070 For a remote repository, using --bundle avoids downloading the
3071 3071 changesets twice if the incoming is followed by a pull.
3072 3072
3073 3073 Examples:
3074 3074
3075 3075 - show incoming changes with patches and full description::
3076 3076
3077 3077 hg incoming -vp
3078 3078
3079 3079 - show incoming changes excluding merges, store a bundle::
3080 3080
3081 3081 hg in -vpM --bundle incoming.hg
3082 3082 hg pull incoming.hg
3083 3083
3084 3084 - briefly list changes inside a bundle::
3085 3085
3086 3086 hg in changes.hg -T "{desc|firstline}\\n"
3087 3087
3088 3088 Returns 0 if there are incoming changes, 1 otherwise.
3089 3089 """
3090 3090 opts = pycompat.byteskwargs(opts)
3091 3091 if opts.get('graph'):
3092 3092 cmdutil.checkunsupportedgraphflags([], opts)
3093 3093 def display(other, chlist, displayer):
3094 3094 revdag = cmdutil.graphrevs(other, chlist, opts)
3095 3095 cmdutil.displaygraph(ui, repo, revdag, displayer,
3096 3096 graphmod.asciiedges)
3097 3097
3098 3098 hg._incoming(display, lambda: 1, ui, repo, source, opts, buffered=True)
3099 3099 return 0
3100 3100
3101 3101 if opts.get('bundle') and opts.get('subrepos'):
3102 3102 raise error.Abort(_('cannot combine --bundle and --subrepos'))
3103 3103
3104 3104 if opts.get('bookmarks'):
3105 3105 source, branches = hg.parseurl(ui.expandpath(source),
3106 3106 opts.get('branch'))
3107 3107 other = hg.peer(repo, opts, source)
3108 3108 if 'bookmarks' not in other.listkeys('namespaces'):
3109 3109 ui.warn(_("remote doesn't support bookmarks\n"))
3110 3110 return 0
3111 3111 ui.pager('incoming')
3112 3112 ui.status(_('comparing with %s\n') % util.hidepassword(source))
3113 3113 return bookmarks.incoming(ui, repo, other)
3114 3114
3115 3115 repo._subtoppath = ui.expandpath(source)
3116 3116 try:
3117 3117 return hg.incoming(ui, repo, source, opts)
3118 3118 finally:
3119 3119 del repo._subtoppath
3120 3120
3121 3121
3122 3122 @command('^init', remoteopts, _('[-e CMD] [--remotecmd CMD] [DEST]'),
3123 3123 norepo=True)
3124 3124 def init(ui, dest=".", **opts):
3125 3125 """create a new repository in the given directory
3126 3126
3127 3127 Initialize a new repository in the given directory. If the given
3128 3128 directory does not exist, it will be created.
3129 3129
3130 3130 If no directory is given, the current directory is used.
3131 3131
3132 3132 It is possible to specify an ``ssh://`` URL as the destination.
3133 3133 See :hg:`help urls` for more information.
3134 3134
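For example, to create a repository on a remote host over SSH
(hypothetical user, host, and path)::

hg init ssh://user@example.com/repos/project
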
3135 3135 Returns 0 on success.
3136 3136 """
3137 3137 opts = pycompat.byteskwargs(opts)
3138 3138 hg.peer(ui, opts, ui.expandpath(dest), create=True)
3139 3139
3140 3140 @command('locate',
3141 3141 [('r', 'rev', '', _('search the repository as it is in REV'), _('REV')),
3142 3142 ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
3143 3143 ('f', 'fullpath', None, _('print complete paths from the filesystem root')),
3144 3144 ] + walkopts,
3145 3145 _('[OPTION]... [PATTERN]...'))
3146 3146 def locate(ui, repo, *pats, **opts):
3147 3147 """locate files matching specific patterns (DEPRECATED)
3148 3148
3149 3149 Print files under Mercurial control in the working directory whose
3150 3150 names match the given patterns.
3151 3151
3152 3152 By default, this command searches all directories in the working
3153 3153 directory. To search just the current directory and its
3154 3154 subdirectories, use "--include .".
3155 3155
3156 3156 If no patterns are given to match, this command prints the names
3157 3157 of all files under Mercurial control in the working directory.
3158 3158
3159 3159 If you want to feed the output of this command into the "xargs"
3160 3160 command, use the -0 option to both this command and "xargs". This
3161 3161 will avoid the problem of "xargs" treating single filenames that
3162 3162 contain whitespace as multiple filenames.
3163 3163
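For example, to count lines in all tracked Python files (the
pattern and the xargs-driven command are illustrative only)::

hg locate -0 "*.py" | xargs -0 wc -l
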
3164 3164 See :hg:`help files` for a more versatile command.
3165 3165
3166 3166 Returns 0 if a match is found, 1 otherwise.
3167 3167 """
3168 3168 opts = pycompat.byteskwargs(opts)
3169 3169 if opts.get('print0'):
3170 3170 end = '\0'
3171 3171 else:
3172 3172 end = '\n'
3173 3173 rev = scmutil.revsingle(repo, opts.get('rev'), None).node()
3174 3174
3175 3175 ret = 1
3176 3176 ctx = repo[rev]
3177 3177 m = scmutil.match(ctx, pats, opts, default='relglob',
3178 3178 badfn=lambda x, y: False)
3179 3179
3180 3180 ui.pager('locate')
3181 3181 for abs in ctx.matches(m):
3182 3182 if opts.get('fullpath'):
3183 3183 ui.write(repo.wjoin(abs), end)
3184 3184 else:
3185 3185 ui.write(((pats and m.rel(abs)) or abs), end)
3186 3186 ret = 0
3187 3187
3188 3188 return ret
3189 3189
3190 3190 @command('^log|history',
3191 3191 [('f', 'follow', None,
3192 3192 _('follow changeset history, or file history across copies and renames')),
3193 3193 ('', 'follow-first', None,
3194 3194 _('only follow the first parent of merge changesets (DEPRECATED)')),
3195 3195 ('d', 'date', '', _('show revisions matching date spec'), _('DATE')),
3196 3196 ('C', 'copies', None, _('show copied files')),
3197 3197 ('k', 'keyword', [],
3198 3198 _('do case-insensitive search for a given text'), _('TEXT')),
3199 3199 ('r', 'rev', [], _('show the specified revision or revset'), _('REV')),
3200 3200 ('', 'removed', None, _('include revisions where files were removed')),
3201 3201 ('m', 'only-merges', None, _('show only merges (DEPRECATED)')),
3202 3202 ('u', 'user', [], _('revisions committed by user'), _('USER')),
3203 3203 ('', 'only-branch', [],
3204 3204 _('show only changesets within the given named branch (DEPRECATED)'),
3205 3205 _('BRANCH')),
3206 3206 ('b', 'branch', [],
3207 3207 _('show changesets within the given named branch'), _('BRANCH')),
3208 3208 ('P', 'prune', [],
3209 3209 _('do not display revision or any of its ancestors'), _('REV')),
3210 3210 ] + logopts + walkopts,
3211 3211 _('[OPTION]... [FILE]'),
3212 3212 inferrepo=True)
3213 3213 def log(ui, repo, *pats, **opts):
3214 3214 """show revision history of entire repository or files
3215 3215
3216 3216 Print the revision history of the specified files or the entire
3217 3217 project.
3218 3218
3219 3219 If no revision range is specified, the default is ``tip:0`` unless
3220 3220 --follow is set, in which case the working directory parent is
3221 3221 used as the starting revision.
3222 3222
3223 3223 File history is shown without following rename or copy history of
3224 3224 files. Use -f/--follow with a filename to follow history across
3225 3225 renames and copies. --follow without a filename will only show
3226 3226 ancestors or descendants of the starting revision.
3227 3227
3228 3228 By default this command prints revision number and changeset id,
3229 3229 tags, non-trivial parents, user, date and time, and a summary for
3230 3230 each commit. When the -v/--verbose switch is used, the list of
3231 3231 changed files and full commit message are shown.
3232 3232
3233 3233 With --graph the revisions are shown as an ASCII art DAG with the most
3234 3234 recent changeset at the top.
3235 3235 'o' is a changeset, '@' is a working directory parent, 'x' is obsolete,
3236 3236 and '+' represents a fork where the changeset from the lines below is a
3237 3237 parent of the 'o' merge on the same line.
3238 3238 Paths in the DAG are represented with '|', '/' and so forth. ':' in place
3239 3239 of a '|' indicates one or more revisions in a path are omitted.
3240 3240
3241 3241 .. note::
3242 3242
3243 3243 :hg:`log --patch` may generate unexpected diff output for merge
3244 3244 changesets, as it will only compare the merge changeset against
3245 3245 its first parent. Also, only files different from BOTH parents
3246 3246 will appear in files:.
3247 3247
3248 3248 .. note::
3249 3249
3250 3250 For performance reasons, :hg:`log FILE` may omit duplicate changes
3251 3251 made on branches and will not show removals or mode changes. To
3252 3252 see all such changes, use the --removed switch.
3253 3253
3254 3254 .. container:: verbose
3255 3255
3256 3256 Some examples:
3257 3257
3258 3258 - changesets with full descriptions and file lists::
3259 3259
3260 3260 hg log -v
3261 3261
3262 3262 - changesets ancestral to the working directory::
3263 3263
3264 3264 hg log -f
3265 3265
3266 3266 - last 10 commits on the current branch::
3267 3267
3268 3268 hg log -l 10 -b .
3269 3269
3270 3270 - changesets showing all modifications of a file, including removals::
3271 3271
3272 3272 hg log --removed file.c
3273 3273
3274 3274 - all changesets that touch a directory, with diffs, excluding merges::
3275 3275
3276 3276 hg log -Mp lib/
3277 3277
3278 3278 - all revision numbers that match a keyword::
3279 3279
3280 3280 hg log -k bug --template "{rev}\\n"
3281 3281
3282 3282 - the full hash identifier of the working directory parent::
3283 3283
3284 3284 hg log -r . --template "{node}\\n"
3285 3285
3286 3286 - list available log templates::
3287 3287
3288 3288 hg log -T list
3289 3289
3290 3290 - check if a given changeset is included in a tagged release::
3291 3291
3292 3292 hg log -r "a21ccf and ancestor(1.9)"
3293 3293
3294 3294 - find all changesets by some user in a date range::
3295 3295
3296 3296 hg log -k alice -d "may 2008 to jul 2008"
3297 3297
3298 3298 - summary of all changesets after the last tag::
3299 3299
3300 3300 hg log -r "last(tagged())::" --template "{desc|firstline}\\n"
3301 3301
3302 3302 See :hg:`help dates` for a list of formats valid for -d/--date.
3303 3303
3304 3304 See :hg:`help revisions` for more about specifying and ordering
3305 3305 revisions.
3306 3306
3307 3307 See :hg:`help templates` for more about pre-packaged styles and
3308 3308 specifying custom templates.
3309 3309
3310 3310 Returns 0 on success.
3311 3311
3312 3312 """
3313 3313 opts = pycompat.byteskwargs(opts)
3314 3314 if opts.get('follow') and opts.get('rev'):
3315 3315 opts['rev'] = [revsetlang.formatspec('reverse(::%lr)', opts.get('rev'))]
3316 3316 del opts['follow']
3317 3317
3318 3318 if opts.get('graph'):
3319 3319 return cmdutil.graphlog(ui, repo, pats, opts)
3320 3320
3321 3321 revs, expr, filematcher = cmdutil.getlogrevs(repo, pats, opts)
3322 3322 limit = cmdutil.loglimit(opts)
3323 3323 count = 0
3324 3324
3325 3325 getrenamed = None
3326 3326 if opts.get('copies'):
3327 3327 endrev = None
3328 3328 if opts.get('rev'):
3329 3329 endrev = scmutil.revrange(repo, opts.get('rev')).max() + 1
3330 3330 getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
3331 3331
3332 3332 ui.pager('log')
3333 3333 displayer = cmdutil.show_changeset(ui, repo, opts, buffered=True)
3334 3334 for rev in revs:
3335 3335 if count == limit:
3336 3336 break
3337 3337 ctx = repo[rev]
3338 3338 copies = None
3339 3339 if getrenamed is not None and rev:
3340 3340 copies = []
3341 3341 for fn in ctx.files():
3342 3342 rename = getrenamed(fn, rev)
3343 3343 if rename:
3344 3344 copies.append((fn, rename[0]))
3345 3345 if filematcher:
3346 3346 revmatchfn = filematcher(ctx.rev())
3347 3347 else:
3348 3348 revmatchfn = None
3349 3349 displayer.show(ctx, copies=copies, matchfn=revmatchfn)
3350 3350 if displayer.flush(ctx):
3351 3351 count += 1
3352 3352
3353 3353 displayer.close()
3354 3354
3355 3355 @command('manifest',
3356 3356 [('r', 'rev', '', _('revision to display'), _('REV')),
3357 3357 ('', 'all', False, _("list files from all revisions"))]
3358 3358 + formatteropts,
3359 3359 _('[-r REV]'))
3360 3360 def manifest(ui, repo, node=None, rev=None, **opts):
3361 3361 """output the current or given revision of the project manifest
3362 3362
3363 3363 Print a list of version controlled files for the given revision.
3364 3364 If no revision is given, the first parent of the working directory
3365 3365 is used, or the null revision if no revision is checked out.
3366 3366
3367 3367 With -v, print file permissions, symlink and executable bits.
3368 3368 With --debug, print file revision hashes.
3369 3369
3370 3370 If option --all is specified, the list of all files from all revisions
3371 3371 is printed. This includes deleted and renamed files.
3372 3372
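For example, to list the files tracked in a hypothetical ``1.0`` tag::

hg manifest -r 1.0
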
3373 3373 Returns 0 on success.
3374 3374 """
3375 3375 opts = pycompat.byteskwargs(opts)
3376 3376 fm = ui.formatter('manifest', opts)
3377 3377
3378 3378 if opts.get('all'):
3379 3379 if rev or node:
3380 3380 raise error.Abort(_("can't specify a revision with --all"))
3381 3381
3382 3382 res = []
3383 3383 prefix = "data/"
3384 3384 suffix = ".i"
3385 3385 plen = len(prefix)
3386 3386 slen = len(suffix)
3387 3387 with repo.lock():
3388 3388 for fn, b, size in repo.store.datafiles():
3389 3389 if size != 0 and fn[-slen:] == suffix and fn[:plen] == prefix:
3390 3390 res.append(fn[plen:-slen])
3391 3391 ui.pager('manifest')
3392 3392 for f in res:
3393 3393 fm.startitem()
3394 3394 fm.write("path", '%s\n', f)
3395 3395 fm.end()
3396 3396 return
3397 3397
3398 3398 if rev and node:
3399 3399 raise error.Abort(_("please specify just one revision"))
3400 3400
3401 3401 if not node:
3402 3402 node = rev
3403 3403
3404 3404 char = {'l': '@', 'x': '*', '': ''}
3405 3405 mode = {'l': '644', 'x': '755', '': '644'}
3406 3406 ctx = scmutil.revsingle(repo, node)
3407 3407 mf = ctx.manifest()
3408 3408 ui.pager('manifest')
3409 3409 for f in ctx:
3410 3410 fm.startitem()
3411 3411 fl = ctx[f].flags()
3412 3412 fm.condwrite(ui.debugflag, 'hash', '%s ', hex(mf[f]))
3413 3413 fm.condwrite(ui.verbose, 'mode type', '%s %1s ', mode[fl], char[fl])
3414 3414 fm.write('path', '%s\n', f)
3415 3415 fm.end()
3416 3416
3417 3417 @command('^merge',
3418 3418 [('f', 'force', None,
3419 3419 _('force a merge including outstanding changes (DEPRECATED)')),
3420 3420 ('r', 'rev', '', _('revision to merge'), _('REV')),
3421 3421 ('P', 'preview', None,
3422 3422 _('review revisions to merge (no merge is performed)'))
3423 3423 ] + mergetoolopts,
3424 3424 _('[-P] [[-r] REV]'))
3425 3425 def merge(ui, repo, node=None, **opts):
3426 3426 """merge another revision into working directory
3427 3427
3428 3428 The current working directory is updated with all changes made in
3429 3429 the requested revision since the last common predecessor revision.
3430 3430
3431 3431 Files that changed between either parent are marked as changed for
3432 3432 the next commit and a commit must be performed before any further
3433 3433 updates to the repository are allowed. The next commit will have
3434 3434 two parents.
3435 3435
3436 3436 ``--tool`` can be used to specify the merge tool used for file
3437 3437 merges. It overrides the HGMERGE environment variable and your
3438 3438 configuration files. See :hg:`help merge-tools` for options.
3439 3439
3440 3440 If no revision is specified, the working directory's parent is a
3441 3441 head revision, and the current branch contains exactly one other
3442 3442 head, the other head is merged by default. Otherwise, an
3443 3443 explicit revision with which to merge must be provided.
3444 3444
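For example, to review what merging a hypothetical ``stable`` head
would bring in, without performing the merge::

hg merge -P stable
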
3445 3445 See :hg:`help resolve` for information on handling file conflicts.
3446 3446
3447 3447 To undo an uncommitted merge, use :hg:`update --clean .` which
3448 3448 will check out a clean copy of the original merge parent, losing
3449 3449 all changes.
3450 3450
3451 3451 Returns 0 on success, 1 if there are unresolved files.
3452 3452 """
3453 3453
3454 3454 opts = pycompat.byteskwargs(opts)
3455 3455 if opts.get('rev') and node:
3456 3456 raise error.Abort(_("please specify just one revision"))
3457 3457 if not node:
3458 3458 node = opts.get('rev')
3459 3459
3460 3460 if node:
3461 3461 node = scmutil.revsingle(repo, node).node()
3462 3462
3463 3463 if not node:
3464 3464 node = repo[destutil.destmerge(repo)].node()
3465 3465
3466 3466 if opts.get('preview'):
3467 3467 # find nodes that are ancestors of p2 but not of p1
3468 3468 p1 = repo.lookup('.')
3469 3469 p2 = repo.lookup(node)
3470 3470 nodes = repo.changelog.findmissing(common=[p1], heads=[p2])
3471 3471
3472 3472 displayer = cmdutil.show_changeset(ui, repo, opts)
3473 3473 for node in nodes:
3474 3474 displayer.show(repo[node])
3475 3475 displayer.close()
3476 3476 return 0
3477 3477
3478 3478 try:
3479 3479 # ui.forcemerge is an internal variable, do not document
3480 3480 repo.ui.setconfig('ui', 'forcemerge', opts.get('tool', ''), 'merge')
3481 3481 force = opts.get('force')
3482 3482 labels = ['working copy', 'merge rev']
3483 3483 return hg.merge(repo, node, force=force, mergeforce=force,
3484 3484 labels=labels)
3485 3485 finally:
3486 3486 ui.setconfig('ui', 'forcemerge', '', 'merge')
3487 3487
3488 3488 @command('outgoing|out',
3489 3489 [('f', 'force', None, _('run even when the destination is unrelated')),
3490 3490 ('r', 'rev', [],
3491 3491 _('a changeset intended to be included in the destination'), _('REV')),
3492 3492 ('n', 'newest-first', None, _('show newest record first')),
3493 3493 ('B', 'bookmarks', False, _('compare bookmarks')),
3494 3494 ('b', 'branch', [], _('a specific branch you would like to push'),
3495 3495 _('BRANCH')),
3496 3496 ] + logopts + remoteopts + subrepoopts,
3497 3497 _('[-M] [-p] [-n] [-f] [-r REV]... [DEST]'))
3498 3498 def outgoing(ui, repo, dest=None, **opts):
3499 3499 """show changesets not found in the destination
3500 3500
3501 3501 Show changesets not found in the specified destination repository
3502 3502 or the default push location. These are the changesets that would
3503 3503 be pushed if a push was requested.
3504 3504
3505 3505 See pull for details of valid destination formats.
3506 3506
3507 3507 .. container:: verbose
3508 3508
3509 3509 With -B/--bookmarks, the result of bookmark comparison between
3510 3510 local and remote repositories is displayed. With -v/--verbose,
3511 3511 status is also displayed for each bookmark like below::
3512 3512
3513 3513 BM1 01234567890a added
3514 3514 BM2 deleted
3515 3515 BM3 234567890abc advanced
3516 3516 BM4 34567890abcd diverged
3517 3517 BM5 4567890abcde changed
3518 3518
3519 3519 The action taken when pushing depends on the
3520 3520 status of each bookmark:
3521 3521
3522 3522 :``added``: push with ``-B`` will create it
3523 3523 :``deleted``: push with ``-B`` will delete it
3524 3524 :``advanced``: push will update it
3525 3525 :``diverged``: push with ``-B`` will update it
3526 3526 :``changed``: push with ``-B`` will update it
3527 3527
3528 3528 From the point of view of pushing behavior, bookmarks
3529 3529 existing only in the remote repository are treated as
3530 3530 ``deleted``, even if they were in fact added remotely.
3531 3531
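For example, to compare bookmarks with the default push location::

hg outgoing -B
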
3532 3532 Returns 0 if there are outgoing changes, 1 otherwise.
3533 3533 """
3534 3534 opts = pycompat.byteskwargs(opts)
3535 3535 if opts.get('graph'):
3536 3536 cmdutil.checkunsupportedgraphflags([], opts)
3537 3537 o, other = hg._outgoing(ui, repo, dest, opts)
3538 3538 if not o:
3539 3539 cmdutil.outgoinghooks(ui, repo, other, opts, o)
3540 3540 return
3541 3541
3542 3542 revdag = cmdutil.graphrevs(repo, o, opts)
3543 3543 ui.pager('outgoing')
3544 3544 displayer = cmdutil.show_changeset(ui, repo, opts, buffered=True)
3545 3545 cmdutil.displaygraph(ui, repo, revdag, displayer, graphmod.asciiedges)
3546 3546 cmdutil.outgoinghooks(ui, repo, other, opts, o)
3547 3547 return 0
3548 3548
3549 3549 if opts.get('bookmarks'):
3550 3550 dest = ui.expandpath(dest or 'default-push', dest or 'default')
3551 3551 dest, branches = hg.parseurl(dest, opts.get('branch'))
3552 3552 other = hg.peer(repo, opts, dest)
3553 3553 if 'bookmarks' not in other.listkeys('namespaces'):
3554 3554 ui.warn(_("remote doesn't support bookmarks\n"))
3555 3555 return 0
3556 3556 ui.status(_('comparing with %s\n') % util.hidepassword(dest))
3557 3557 ui.pager('outgoing')
3558 3558 return bookmarks.outgoing(ui, repo, other)
3559 3559
3560 3560 repo._subtoppath = ui.expandpath(dest or 'default-push', dest or 'default')
3561 3561 try:
3562 3562 return hg.outgoing(ui, repo, dest, opts)
3563 3563 finally:
3564 3564 del repo._subtoppath
3565 3565
3566 3566 @command('parents',
3567 3567 [('r', 'rev', '', _('show parents of the specified revision'), _('REV')),
3568 3568 ] + templateopts,
3569 3569 _('[-r REV] [FILE]'),
3570 3570 inferrepo=True)
3571 3571 def parents(ui, repo, file_=None, **opts):
3572 3572 """show the parents of the working directory or revision (DEPRECATED)
3573 3573
3574 3574 Print the working directory's parent revisions. If a revision is
3575 3575 given via -r/--rev, the parent of that revision will be printed.
3576 3576 If a file argument is given, the revision in which the file was
3577 3577 last changed (before the working directory revision or the
3578 3578 argument to --rev if given) is printed.
3579 3579
3580 3580 This command is equivalent to::
3581 3581
3582 3582 hg log -r "p1()+p2()" or
3583 3583 hg log -r "p1(REV)+p2(REV)" or
3584 3584 hg log -r "max(::p1() and file(FILE))+max(::p2() and file(FILE))" or
3585 3585 hg log -r "max(::p1(REV) and file(FILE))+max(::p2(REV) and file(FILE))"
3586 3586
3587 3587 See :hg:`summary` and :hg:`help revsets` for related information.
3588 3588
3589 3589 Returns 0 on success.
3590 3590 """
3591 3591
3592 3592 opts = pycompat.byteskwargs(opts)
3593 3593 ctx = scmutil.revsingle(repo, opts.get('rev'), None)
3594 3594
3595 3595 if file_:
3596 3596 m = scmutil.match(ctx, (file_,), opts)
3597 3597 if m.anypats() or len(m.files()) != 1:
3598 3598 raise error.Abort(_('can only specify an explicit filename'))
3599 3599 file_ = m.files()[0]
3600 3600 filenodes = []
3601 3601 for cp in ctx.parents():
3602 3602 if not cp:
3603 3603 continue
3604 3604 try:
3605 3605 filenodes.append(cp.filenode(file_))
3606 3606 except error.LookupError:
3607 3607 pass
3608 3608 if not filenodes:
3609 3609 raise error.Abort(_("'%s' not found in manifest!") % file_)
3610 3610 p = []
3611 3611 for fn in filenodes:
3612 3612 fctx = repo.filectx(file_, fileid=fn)
3613 3613 p.append(fctx.node())
3614 3614 else:
3615 3615 p = [cp.node() for cp in ctx.parents()]
3616 3616
3617 3617 displayer = cmdutil.show_changeset(ui, repo, opts)
3618 3618 for n in p:
3619 3619 if n != nullid:
3620 3620 displayer.show(repo[n])
3621 3621 displayer.close()
3622 3622
3623 3623 @command('paths', formatteropts, _('[NAME]'), optionalrepo=True)
3624 3624 def paths(ui, repo, search=None, **opts):
3625 3625 """show aliases for remote repositories
3626 3626
3627 3627 Show definition of symbolic path name NAME. If no name is given,
3628 3628 show definition of all available names.
3629 3629
3630 3630 Option -q/--quiet suppresses all output when searching for NAME
3631 3631 and shows only the path names when listing all definitions.
3632 3632
3633 3633 Path names are defined in the [paths] section of your
3634 3634 configuration file and in ``/etc/mercurial/hgrc``. If run inside a
3635 3635 repository, ``.hg/hgrc`` is used, too.
3636 3636
3637 3637 The path names ``default`` and ``default-push`` have a special
3638 3638 meaning. When performing a push or pull operation, they are used
3639 3639 as fallbacks if no location is specified on the command-line.
3640 3640 When ``default-push`` is set, it will be used for push and
3641 3641 ``default`` will be used for pull; otherwise ``default`` is used
3642 3642 as the fallback for both. When cloning a repository, the clone
3643 3643 source is written as ``default`` in ``.hg/hgrc``.
3644 3644
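For example, a minimal ``[paths]`` section (names and URLs are
hypothetical)::

[paths]
default = https://hg.example.com/project
staging = ssh://hg@staging.example.com/project
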
3645 3645 .. note::
3646 3646
3647 3647 ``default`` and ``default-push`` apply to all inbound (e.g.
3648 3648 :hg:`incoming`) and outbound (e.g. :hg:`outgoing`, :hg:`email`
3649 3649 and :hg:`bundle`) operations.
3650 3650
3651 3651 See :hg:`help urls` for more information.
3652 3652
3653 3653 Returns 0 on success.
3654 3654 """
3655 3655
3656 3656 opts = pycompat.byteskwargs(opts)
3657 3657 ui.pager('paths')
3658 3658 if search:
3659 3659 pathitems = [(name, path) for name, path in ui.paths.iteritems()
3660 3660 if name == search]
3661 3661 else:
3662 3662 pathitems = sorted(ui.paths.iteritems())
3663 3663
3664 3664 fm = ui.formatter('paths', opts)
3665 3665 if fm.isplain():
3666 3666 hidepassword = util.hidepassword
3667 3667 else:
3668 3668 hidepassword = str
3669 3669 if ui.quiet:
3670 3670 namefmt = '%s\n'
3671 3671 else:
3672 3672 namefmt = '%s = '
3673 3673 showsubopts = not search and not ui.quiet
3674 3674
3675 3675 for name, path in pathitems:
3676 3676 fm.startitem()
3677 3677 fm.condwrite(not search, 'name', namefmt, name)
3678 3678 fm.condwrite(not ui.quiet, 'url', '%s\n', hidepassword(path.rawloc))
3679 3679 for subopt, value in sorted(path.suboptions.items()):
3680 3680 assert subopt not in ('name', 'url')
3681 3681 if showsubopts:
3682 3682 fm.plain('%s:%s = ' % (name, subopt))
3683 3683 fm.condwrite(showsubopts, subopt, '%s\n', value)
3684 3684
3685 3685 fm.end()
3686 3686
3687 3687 if search and not pathitems:
3688 3688 if not ui.quiet:
3689 3689 ui.warn(_("not found!\n"))
3690 3690 return 1
3691 3691 else:
3692 3692 return 0
3693 3693
3694 3694 @command('phase',
3695 3695 [('p', 'public', False, _('set changeset phase to public')),
3696 3696 ('d', 'draft', False, _('set changeset phase to draft')),
3697 3697 ('s', 'secret', False, _('set changeset phase to secret')),
3698 3698 ('f', 'force', False, _('allow to move boundary backward')),
3699 3699 ('r', 'rev', [], _('target revision'), _('REV')),
3700 3700 ],
3701 3701 _('[-p|-d|-s] [-f] [-r] [REV...]'))
3702 3702 def phase(ui, repo, *revs, **opts):
3703 3703 """set or show the current phase name
3704 3704
3705 3705 With no argument, show the phase name of the current revision(s).
3706 3706
3707 3707 With one of -p/--public, -d/--draft or -s/--secret, change the
3708 3708 phase value of the specified revisions.
3709 3709
3710 3710 Unless -f/--force is specified, :hg:`phase` won't move changesets from a
3711 3711 lower phase to a higher phase. Phases are ordered as follows::
3712 3712
3713 3713 public < draft < secret
3714 3714
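For example, to make the working directory parent and all its
ancestors public::

hg phase --public -r .
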
3715 3715 Returns 0 on success, 1 if some phases could not be changed.
3716 3716
3717 3717 (For more information about the phases concept, see :hg:`help phases`.)
3718 3718 """
3719 3719 opts = pycompat.byteskwargs(opts)
3720 3720 # search for a unique phase argument
3721 3721 targetphase = None
3722 3722 for idx, name in enumerate(phases.phasenames):
3723 3723 if opts[name]:
3724 3724 if targetphase is not None:
3725 3725 raise error.Abort(_('only one phase can be specified'))
3726 3726 targetphase = idx
3727 3727
3728 3728 # look for specified revision
3729 3729 revs = list(revs)
3730 3730 revs.extend(opts['rev'])
3731 3731 if not revs:
3732 3732 # display both parents as the second parent phase can influence
3733 3733 # the phase of a merge commit
3734 3734 revs = [c.rev() for c in repo[None].parents()]
3735 3735
3736 3736 revs = scmutil.revrange(repo, revs)
3737 3737
3738 3738 lock = None
3739 3739 ret = 0
3740 3740 if targetphase is None:
3741 3741 # display
3742 3742 for r in revs:
3743 3743 ctx = repo[r]
3744 3744 ui.write('%i: %s\n' % (ctx.rev(), ctx.phasestr()))
3745 3745 else:
3746 3746 tr = None
3747 3747 lock = repo.lock()
3748 3748 try:
3749 3749 tr = repo.transaction("phase")
3750 3750 # set phase
3751 3751 if not revs:
3752 3752 raise error.Abort(_('empty revision set'))
3753 3753 nodes = [repo[r].node() for r in revs]
3754 3754 # moving revisions from public to draft may hide them
3755 3755 # We have to check result on an unfiltered repository
3756 3756 unfi = repo.unfiltered()
3757 3757 getphase = unfi._phasecache.phase
3758 3758 olddata = [getphase(unfi, r) for r in unfi]
3759 3759 phases.advanceboundary(repo, tr, targetphase, nodes)
3760 3760 if opts['force']:
3761 3761 phases.retractboundary(repo, tr, targetphase, nodes)
3762 3762 tr.close()
3763 3763 finally:
3764 3764 if tr is not None:
3765 3765 tr.release()
3766 3766 lock.release()
3767 3767 getphase = unfi._phasecache.phase
3768 3768 newdata = [getphase(unfi, r) for r in unfi]
3769 3769 changes = sum(newdata[r] != olddata[r] for r in unfi)
3770 3770 cl = unfi.changelog
3771 3771 rejected = [n for n in nodes
3772 3772 if newdata[cl.rev(n)] < targetphase]
3773 3773 if rejected:
3774 3774 ui.warn(_('cannot move %i changesets to a higher '
3775 3775 'phase, use --force\n') % len(rejected))
3776 3776 ret = 1
3777 3777 if changes:
3778 3778 msg = _('phase changed for %i changesets\n') % changes
3779 3779 if ret:
3780 3780 ui.status(msg)
3781 3781 else:
3782 3782 ui.note(msg)
3783 3783 else:
3784 3784 ui.warn(_('no phases changed\n'))
3785 3785 return ret
3786 3786
3787 3787 def postincoming(ui, repo, modheads, optupdate, checkout, brev):
3788 3788 """Run after a changegroup has been added via pull/unbundle
3789 3789
3790 3790 This takes arguments below:
3791 3791
3792 3792 :modheads: change of heads by pull/unbundle
3793 3793 :optupdate: whether updating the working directory is needed
3794 3794 :checkout: update destination revision (or None for the default destination)
3795 3795 :brev: a name, which might be a bookmark to be activated after updating
3796 3796 """
3797 3797 if modheads == 0:
3798 3798 return
3799 3799 if optupdate:
3800 3800 try:
3801 3801 return hg.updatetotally(ui, repo, checkout, brev)
3802 3802 except error.UpdateAbort as inst:
3803 3803 msg = _("not updating: %s") % str(inst)
3804 3804 hint = inst.hint
3805 3805 raise error.UpdateAbort(msg, hint=hint)
3806 3806 if modheads > 1:
3807 3807 currentbranchheads = len(repo.branchheads())
3808 3808 if currentbranchheads == modheads:
3809 3809 ui.status(_("(run 'hg heads' to see heads, 'hg merge' to merge)\n"))
3810 3810 elif currentbranchheads > 1:
3811 3811 ui.status(_("(run 'hg heads .' to see heads, 'hg merge' to "
3812 3812 "merge)\n"))
3813 3813 else:
3814 3814 ui.status(_("(run 'hg heads' to see heads)\n"))
3815 3815 else:
3816 3816 ui.status(_("(run 'hg update' to get a working copy)\n"))
3817 3817
3818 3818 @command('^pull',
3819 3819 [('u', 'update', None,
3820 3820 _('update to new branch head if changesets were pulled')),
3821 3821 ('f', 'force', None, _('run even when remote repository is unrelated')),
3822 3822 ('r', 'rev', [], _('a remote changeset intended to be added'), _('REV')),
3823 3823 ('B', 'bookmark', [], _("bookmark to pull"), _('BOOKMARK')),
3824 3824 ('b', 'branch', [], _('a specific branch you would like to pull'),
3825 3825 _('BRANCH')),
3826 3826 ] + remoteopts,
3827 3827 _('[-u] [-f] [-r REV]... [-e CMD] [--remotecmd CMD] [SOURCE]'))
3828 3828 def pull(ui, repo, source="default", **opts):
3829 3829 """pull changes from the specified source
3830 3830
3831 3831 Pull changes from a remote repository to a local one.
3832 3832
3833 3833 This finds all changes from the repository at the specified path
3834 3834 or URL and adds them to a local repository (the current one unless
3835 3835 -R is specified). By default, this does not update the copy of the
3836 3836 project in the working directory.
3837 3837
3838 3838 Use :hg:`incoming` if you want to see what would have been added
3839 3839 by a pull at the time you issued this command. If you then decide
3840 3840 to add those changes to the repository, you should use :hg:`pull
3841 3841 -r X` where ``X`` is the last changeset listed by :hg:`incoming`.
3842 3842
3843 3843 If SOURCE is omitted, the 'default' path will be used.
3844 3844 See :hg:`help urls` for more information.
3845 3845
3846 3846 Specifying a bookmark as ``.`` is equivalent to specifying the active
3847 3847 bookmark's name.
3848 3848
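For example, to pull the changesets reachable from a hypothetical
bookmark ``feature``, along with the bookmark itself::

hg pull -B feature
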
3849 3849 Returns 0 on success, 1 if an update had unresolved files.
3850 3850 """
3851 3851
3852 3852 opts = pycompat.byteskwargs(opts)
3853 3853 if ui.configbool('commands', 'update.requiredest') and opts.get('update'):
3854 3854 msg = _('update destination required by configuration')
3855 3855 hint = _('use hg pull followed by hg update DEST')
3856 3856 raise error.Abort(msg, hint=hint)
3857 3857
3858 3858 source, branches = hg.parseurl(ui.expandpath(source), opts.get('branch'))
3859 3859 ui.status(_('pulling from %s\n') % util.hidepassword(source))
3860 3860 other = hg.peer(repo, opts, source)
3861 3861 try:
3862 3862 revs, checkout = hg.addbranchrevs(repo, other, branches,
3863 3863 opts.get('rev'))
3864 3864
3865 3865
3866 3866 pullopargs = {}
3867 3867 if opts.get('bookmark'):
3868 3868 if not revs:
3869 3869 revs = []
3870 3870 # The list of bookmarks used here is not the one used to actually
3871 3871 # update the bookmark name. This can result in the revision pulled
3872 3872 # not ending up with the name of the bookmark because of a race
3873 3873 # condition on the server. (See issue 4689 for details)
3874 3874 remotebookmarks = other.listkeys('bookmarks')
3875 3875 pullopargs['remotebookmarks'] = remotebookmarks
3876 3876 for b in opts['bookmark']:
3877 3877 b = repo._bookmarks.expandname(b)
3878 3878 if b not in remotebookmarks:
3879 3879 raise error.Abort(_('remote bookmark %s not found!') % b)
3880 3880 revs.append(remotebookmarks[b])
3881 3881
3882 3882 if revs:
3883 3883 try:
3884 3884 # When 'rev' is a bookmark name, we cannot guarantee that it
3885 3885 # will be updated with that name because of a race condition
3886 3886 # server side. (See issue 4689 for details)
3887 3887 oldrevs = revs
3888 3888 revs = [] # actually, nodes
3889 3889 for r in oldrevs:
3890 3890 node = other.lookup(r)
3891 3891 revs.append(node)
3892 3892 if r == checkout:
3893 3893 checkout = node
3894 3894 except error.CapabilityError:
3895 3895 err = _("other repository doesn't support revision lookup, "
3896 3896 "so a rev cannot be specified.")
3897 3897 raise error.Abort(err)
3898 3898
3899 3899 pullopargs.update(opts.get('opargs', {}))
3900 3900 modheads = exchange.pull(repo, other, heads=revs,
3901 3901 force=opts.get('force'),
3902 3902 bookmarks=opts.get('bookmark', ()),
3903 3903 opargs=pullopargs).cgresult
3904 3904
3905 3905 # brev is a name, which might be a bookmark to be activated at
3906 3906 # the end of the update. In other words, it is an explicit
3907 3907 # destination of the update
3908 3908 brev = None
3909 3909
3910 3910 if checkout:
3911 3911 checkout = str(repo.changelog.rev(checkout))
3912 3912
3913 3913 # order below depends on implementation of
3914 3914 # hg.addbranchrevs(). opts['bookmark'] is ignored,
3915 3915 # because 'checkout' is determined without it.
3916 3916 if opts.get('rev'):
3917 3917 brev = opts['rev'][0]
3918 3918 elif opts.get('branch'):
3919 3919 brev = opts['branch'][0]
3920 3920 else:
3921 3921 brev = branches[0]
3922 3922 repo._subtoppath = source
3923 3923 try:
3924 3924 ret = postincoming(ui, repo, modheads, opts.get('update'),
3925 3925 checkout, brev)
3926 3926
3927 3927 finally:
3928 3928 del repo._subtoppath
3929 3929
3930 3930 finally:
3931 3931 other.close()
3932 3932 return ret
3933 3933
3934 3934 @command('^push',
3935 3935 [('f', 'force', None, _('force push')),
3936 3936 ('r', 'rev', [],
3937 3937 _('a changeset intended to be included in the destination'),
3938 3938 _('REV')),
3939 3939 ('B', 'bookmark', [], _("bookmark to push"), _('BOOKMARK')),
3940 3940 ('b', 'branch', [],
3941 3941 _('a specific branch you would like to push'), _('BRANCH')),
3942 3942 ('', 'new-branch', False, _('allow pushing a new branch')),
3943 3943 ] + remoteopts,
3944 3944 _('[-f] [-r REV]... [-e CMD] [--remotecmd CMD] [DEST]'))
3945 3945 def push(ui, repo, dest=None, **opts):
3946 3946 """push changes to the specified destination
3947 3947
3948 3948 Push changesets from the local repository to the specified
3949 3949 destination.
3950 3950
3951 3951 This operation is symmetrical to pull: it is identical to a pull
3952 3952 in the destination repository from the current one.
3953 3953
3954 3954 By default, push will not allow creation of new heads at the
3955 3955 destination, since multiple heads would make it unclear which head
3956 3956 to use. In this situation, it is recommended to pull and merge
3957 3957 before pushing.
3958 3958
3959 3959 Use --new-branch if you want to allow push to create a new named
3960 3960 branch that is not present at the destination. This allows you to
3961 3961 only create a new branch without forcing other changes.
3962 3962
3963 3963 .. note::
3964 3964
3965 3965 Extra care should be taken with the -f/--force option,
3966 3966 which will push all new heads on all branches, an action which will
3967 3967 almost always cause confusion for collaborators.
3968 3968
3969 3969 If -r/--rev is used, the specified revision and all its ancestors
3970 3970 will be pushed to the remote repository.
3971 3971
3972 3972 If -B/--bookmark is used, the specified bookmarked revision, its
3973 3973 ancestors, and the bookmark will be pushed to the remote
3974 3974 repository. Specifying ``.`` is equivalent to specifying the active
3975 3975 bookmark's name.
3976 3976
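For example, to push a hypothetical bookmark ``feature`` along with
its changesets::

hg push -B feature
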
3977 3977 Please see :hg:`help urls` for important details about ``ssh://``
3978 3978 URLs. If DESTINATION is omitted, a default path will be used.
3979 3979
3980 3980 Returns 0 if push was successful, 1 if nothing to push.
3981 3981 """
3982 3982
3983 3983 opts = pycompat.byteskwargs(opts)
3984 3984 if opts.get('bookmark'):
3985 3985 ui.setconfig('bookmarks', 'pushing', opts['bookmark'], 'push')
3986 3986 for b in opts['bookmark']:
3987 3987 # translate -B options to -r so changesets get pushed
3988 3988 b = repo._bookmarks.expandname(b)
3989 3989 if b in repo._bookmarks:
3990 3990 opts.setdefault('rev', []).append(b)
3991 3991 else:
3992 3992 # if we try to push a deleted bookmark, translate it to null
3993 3993 # this lets simultaneous -r, -b options continue working
3994 3994 opts.setdefault('rev', []).append("null")
3995 3995
3996 3996 path = ui.paths.getpath(dest, default=('default-push', 'default'))
3997 3997 if not path:
3998 3998 raise error.Abort(_('default repository not configured!'),
3999 3999 hint=_("see 'hg help config.paths'"))
4000 4000 dest = path.pushloc or path.loc
4001 4001 branches = (path.branch, opts.get('branch') or [])
4002 4002 ui.status(_('pushing to %s\n') % util.hidepassword(dest))
4003 4003 revs, checkout = hg.addbranchrevs(repo, repo, branches, opts.get('rev'))
4004 4004 other = hg.peer(repo, opts, dest)
4005 4005
4006 4006 if revs:
4007 4007 revs = [repo.lookup(r) for r in scmutil.revrange(repo, revs)]
4008 4008 if not revs:
4009 4009 raise error.Abort(_("specified revisions evaluate to an empty set"),
4010 4010 hint=_("use different revision arguments"))
4011 4011 elif path.pushrev:
4012 4012 # It doesn't make any sense to specify ancestor revisions. So limit
4013 4013 # to DAG heads to make discovery simpler.
4014 4014 expr = revsetlang.formatspec('heads(%r)', path.pushrev)
4015 4015 revs = scmutil.revrange(repo, [expr])
4016 4016 revs = [repo[rev].node() for rev in revs]
4017 4017 if not revs:
4018 4018 raise error.Abort(_('default push revset for path evaluates to an '
4019 4019 'empty set'))
4020 4020
4021 4021 repo._subtoppath = dest
4022 4022 try:
4023 4023 # push subrepos depth-first for coherent ordering
4024 4024 c = repo['']
4025 4025 subs = c.substate # only repos that are committed
4026 4026 for s in sorted(subs):
4027 4027 result = c.sub(s).push(opts)
4028 4028 if result == 0:
4029 4029 return not result
4030 4030 finally:
4031 4031 del repo._subtoppath
4032 4032 pushop = exchange.push(repo, other, opts.get('force'), revs=revs,
4033 4033 newbranch=opts.get('new_branch'),
4034 4034 bookmarks=opts.get('bookmark', ()),
4035 4035 opargs=opts.get('opargs'))
4036 4036
4037 4037 result = not pushop.cgresult
4038 4038
4039 4039 if pushop.bkresult is not None:
4040 4040 if pushop.bkresult == 2:
4041 4041 result = 2
4042 4042 elif not result and pushop.bkresult:
4043 4043 result = 2
4044 4044
4045 4045 return result
4046 4046
4047 4047 @command('recover', [])
4048 4048 def recover(ui, repo):
4049 4049 """roll back an interrupted transaction
4050 4050
4051 4051 Recover from an interrupted commit or pull.
4052 4052
4053 4053 This command tries to fix the repository status after an
4054 4054 interrupted operation. It should only be necessary when Mercurial
4055 4055 suggests it.
4056 4056
4057 4057 Returns 0 if successful, 1 if nothing to recover or verify fails.
4058 4058 """
4059 4059 if repo.recover():
4060 4060 return hg.verify(repo)
4061 4061 return 1
4062 4062
4063 4063 @command('^remove|rm',
4064 4064 [('A', 'after', None, _('record delete for missing files')),
4065 4065 ('f', 'force', None,
4066 4066 _('forget added files, delete modified files')),
4067 4067 ] + subrepoopts + walkopts,
4068 4068 _('[OPTION]... FILE...'),
4069 4069 inferrepo=True)
4070 4070 def remove(ui, repo, *pats, **opts):
4071 4071 """remove the specified files on the next commit
4072 4072
4073 4073 Schedule the indicated files for removal from the current branch.
4074 4074
4075 4075 This command schedules the files to be removed at the next commit.
4076 4076 To undo a remove before that, see :hg:`revert`. To undo added
4077 4077 files, see :hg:`forget`.
4078 4078
4079 4079 .. container:: verbose
4080 4080
4081 4081 -A/--after can be used to remove only files that have already
4082 4082 been deleted, -f/--force can be used to force deletion, and -Af
4083 4083 can be used to remove files from the next revision without
4084 4084 deleting them from the working directory.
4085 4085
4086 4086 The following table details the behavior of remove for different
4087 4087 file states (columns) and option combinations (rows). The file
4088 4088 states are Added [A], Clean [C], Modified [M] and Missing [!]
4089 4089 (as reported by :hg:`status`). The actions are Warn, Remove
4090 4090 (from branch) and Delete (from disk):
4091 4091
4092 4092 ========= == == == ==
4093 4093 opt/state A C M !
4094 4094 ========= == == == ==
4095 4095 none W RD W R
4096 4096 -f R RD RD R
4097 4097 -A W W W R
4098 4098 -Af R R R R
4099 4099 ========= == == == ==
4100 4100
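For example, to stop tracking a hypothetical ``build.log`` without
deleting it from the working directory::

hg remove -Af build.log
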
4101 4101 .. note::
4102 4102
4103 4103 :hg:`remove` never deletes files in Added [A] state from the
4104 4104 working directory, not even if ``--force`` is specified.
4105 4105
4106 4106 Returns 0 on success, 1 if any warnings encountered.
4107 4107 """
4108 4108
4109 4109 opts = pycompat.byteskwargs(opts)
4110 4110 after, force = opts.get('after'), opts.get('force')
4111 4111 if not pats and not after:
4112 4112 raise error.Abort(_('no files specified'))
4113 4113
4114 4114 m = scmutil.match(repo[None], pats, opts)
4115 4115 subrepos = opts.get('subrepos')
4116 4116 return cmdutil.remove(ui, repo, m, "", after, force, subrepos)
4117 4117
4118 4118 @command('rename|move|mv',
4119 4119 [('A', 'after', None, _('record a rename that has already occurred')),
4120 4120 ('f', 'force', None, _('forcibly copy over an existing managed file')),
4121 4121 ] + walkopts + dryrunopts,
4122 4122 _('[OPTION]... SOURCE... DEST'))
4123 4123 def rename(ui, repo, *pats, **opts):
4124 4124 """rename files; equivalent of copy + remove
4125 4125
4126 4126 Mark dest as copies of sources; mark sources for deletion. If dest
4127 4127 is a directory, copies are put in that directory. If dest is a
4128 4128 file, there can only be one source.
4129 4129
4130 4130 By default, this command copies the contents of files as they
4131 4131 exist in the working directory. If invoked with -A/--after, the
4132 4132 operation is recorded, but no copying is performed.
4133 4133
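For example, to record a rename already performed outside Mercurial
(hypothetical filenames)::

hg rename -A old.c new.c
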
4134 4134 This command takes effect at the next commit. To undo a rename
4135 4135 before that, see :hg:`revert`.
4136 4136
4137 4137 Returns 0 on success, 1 if errors are encountered.
4138 4138 """
4139 4139 opts = pycompat.byteskwargs(opts)
4140 4140 with repo.wlock(False):
4141 4141 return cmdutil.copy(ui, repo, pats, opts, rename=True)
4142 4142
4143 4143 @command('resolve',
4144 4144 [('a', 'all', None, _('select all unresolved files')),
4145 4145 ('l', 'list', None, _('list state of files needing merge')),
4146 4146 ('m', 'mark', None, _('mark files as resolved')),
4147 4147 ('u', 'unmark', None, _('mark files as unresolved')),
4148 4148 ('n', 'no-status', None, _('hide status prefix'))]
4149 4149 + mergetoolopts + walkopts + formatteropts,
4150 4150 _('[OPTION]... [FILE]...'),
4151 4151 inferrepo=True)
4152 4152 def resolve(ui, repo, *pats, **opts):
4153 4153 """redo merges or set/view the merge status of files
4154 4154
4155 4155 Merges with unresolved conflicts are often the result of
4156 4156 non-interactive merging using the ``internal:merge`` configuration
4157 4157 setting, or a command-line merge tool like ``diff3``. The resolve
4158 4158 command is used to manage the files involved in a merge, after
4159 4159 :hg:`merge` has been run, and before :hg:`commit` is run (i.e. the
4160 4160 working directory must have two parents). See :hg:`help
4161 4161 merge-tools` for information on configuring merge tools.
4162 4162
4163 4163 The resolve command can be used in the following ways:
4164 4164
4165 4165 - :hg:`resolve [--tool TOOL] FILE...`: attempt to re-merge the specified
4166 4166 files, discarding any previous merge attempts. Re-merging is not
4167 4167 performed for files already marked as resolved. Use ``--all/-a``
4168 4168 to select all unresolved files. ``--tool`` can be used to specify
4169 4169 the merge tool used for the given files. It overrides the HGMERGE
4170 4170 environment variable and your configuration files. Previous file
4171 4171 contents are saved with a ``.orig`` suffix.
4172 4172
4173 4173 - :hg:`resolve -m [FILE]`: mark a file as having been resolved
4174 4174 (e.g. after having manually fixed-up the files). The default is
4175 4175 to mark all unresolved files.
4176 4176
4177 4177 - :hg:`resolve -u [FILE]...`: mark a file as unresolved. The
4178 4178 default is to mark all resolved files.
4179 4179
4180 4180 - :hg:`resolve -l`: list files which had or still have conflicts.
4181 4181 In the printed list, ``U`` = unresolved and ``R`` = resolved.
4182 4182 You can use ``set:unresolved()`` or ``set:resolved()`` to filter
4183 4183 the list. See :hg:`help filesets` for details.
4184 4184
4185 4185 .. note::
4186 4186
4187 4187 Mercurial will not let you commit files with unresolved merge
4188 4188 conflicts. You must use :hg:`resolve -m ...` before you can
4189 4189 commit after a conflicting merge.
4190 4190
4191 4191 Returns 0 on success, 1 if any files fail a resolve attempt.
4192 4192 """
4193 4193
4194 4194 opts = pycompat.byteskwargs(opts)
4195 4195 flaglist = 'all mark unmark list no_status'.split()
4196 4196 all, mark, unmark, show, nostatus = \
4197 4197 [opts.get(o) for o in flaglist]
4198 4198
4199 4199 if (show and (mark or unmark)) or (mark and unmark):
4200 4200 raise error.Abort(_("too many options specified"))
4201 4201 if pats and all:
4202 4202 raise error.Abort(_("can't specify --all and patterns"))
4203 4203 if not (all or pats or show or mark or unmark):
4204 4204 raise error.Abort(_('no files or directories specified'),
4205 4205 hint=('use --all to re-merge all unresolved files'))
4206 4206
4207 4207 if show:
4208 4208 ui.pager('resolve')
4209 4209 fm = ui.formatter('resolve', opts)
4210 4210 ms = mergemod.mergestate.read(repo)
4211 4211 m = scmutil.match(repo[None], pats, opts)
4212 4212 for f in ms:
4213 4213 if not m(f):
4214 4214 continue
4215 4215 l = 'resolve.' + {'u': 'unresolved', 'r': 'resolved',
4216 4216 'd': 'driverresolved'}[ms[f]]
4217 4217 fm.startitem()
4218 4218 fm.condwrite(not nostatus, 'status', '%s ', ms[f].upper(), label=l)
4219 4219 fm.write('path', '%s\n', f, label=l)
4220 4220 fm.end()
4221 4221 return 0
4222 4222
4223 4223 with repo.wlock():
4224 4224 ms = mergemod.mergestate.read(repo)
4225 4225
4226 4226 if not (ms.active() or repo.dirstate.p2() != nullid):
4227 4227 raise error.Abort(
4228 4228 _('resolve command not applicable when not merging'))
4229 4229
4230 4230 wctx = repo[None]
4231 4231
4232 4232 if ms.mergedriver and ms.mdstate() == 'u':
4233 4233 proceed = mergemod.driverpreprocess(repo, ms, wctx)
4234 4234 ms.commit()
4235 4235 # allow mark and unmark to go through
4236 4236 if not mark and not unmark and not proceed:
4237 4237 return 1
4238 4238
4239 4239 m = scmutil.match(wctx, pats, opts)
4240 4240 ret = 0
4241 4241 didwork = False
4242 4242 runconclude = False
4243 4243
4244 4244 tocomplete = []
4245 4245 for f in ms:
4246 4246 if not m(f):
4247 4247 continue
4248 4248
4249 4249 didwork = True
4250 4250
4251 4251 # don't let driver-resolved files be marked, and run the conclude
4252 4252 # step if asked to resolve
4253 4253 if ms[f] == "d":
4254 4254 exact = m.exact(f)
4255 4255 if mark:
4256 4256 if exact:
4257 4257 ui.warn(_('not marking %s as it is driver-resolved\n')
4258 4258 % f)
4259 4259 elif unmark:
4260 4260 if exact:
4261 4261 ui.warn(_('not unmarking %s as it is driver-resolved\n')
4262 4262 % f)
4263 4263 else:
4264 4264 runconclude = True
4265 4265 continue
4266 4266
4267 4267 if mark:
4268 4268 ms.mark(f, "r")
4269 4269 elif unmark:
4270 4270 ms.mark(f, "u")
4271 4271 else:
4272 4272 # backup pre-resolve (merge uses .orig for its own purposes)
4273 4273 a = repo.wjoin(f)
4274 4274 try:
4275 4275 util.copyfile(a, a + ".resolve")
4276 4276 except (IOError, OSError) as inst:
4277 4277 if inst.errno != errno.ENOENT:
4278 4278 raise
4279 4279
4280 4280 try:
4281 4281 # preresolve file
4282 4282 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
4283 4283 'resolve')
4284 4284 complete, r = ms.preresolve(f, wctx)
4285 4285 if not complete:
4286 4286 tocomplete.append(f)
4287 4287 elif r:
4288 4288 ret = 1
4289 4289 finally:
4290 4290 ui.setconfig('ui', 'forcemerge', '', 'resolve')
4291 4291 ms.commit()
4292 4292
4293 4293 # replace filemerge's .orig file with our resolve file, but only
4294 4294 # for merges that are complete
4295 4295 if complete:
4296 4296 try:
4297 4297 util.rename(a + ".resolve",
4298 4298 scmutil.origpath(ui, repo, a))
4299 4299 except OSError as inst:
4300 4300 if inst.errno != errno.ENOENT:
4301 4301 raise
4302 4302
4303 4303 for f in tocomplete:
4304 4304 try:
4305 4305 # resolve file
4306 4306 ui.setconfig('ui', 'forcemerge', opts.get('tool', ''),
4307 4307 'resolve')
4308 4308 r = ms.resolve(f, wctx)
4309 4309 if r:
4310 4310 ret = 1
4311 4311 finally:
4312 4312 ui.setconfig('ui', 'forcemerge', '', 'resolve')
4313 4313 ms.commit()
4314 4314
4315 4315 # replace filemerge's .orig file with our resolve file
4316 4316 a = repo.wjoin(f)
4317 4317 try:
4318 4318 util.rename(a + ".resolve", scmutil.origpath(ui, repo, a))
4319 4319 except OSError as inst:
4320 4320 if inst.errno != errno.ENOENT:
4321 4321 raise
4322 4322
4323 4323 ms.commit()
4324 4324 ms.recordactions()
4325 4325
4326 4326 if not didwork and pats:
4327 4327 hint = None
4328 4328 if not any([p for p in pats if p.find(':') >= 0]):
4329 4329 pats = ['path:%s' % p for p in pats]
4330 4330 m = scmutil.match(wctx, pats, opts)
4331 4331 for f in ms:
4332 4332 if not m(f):
4333 4333 continue
4334 4334 flags = ''.join(['-%s ' % o[0] for o in flaglist
4335 4335 if opts.get(o)])
4336 4336 hint = _("(try: hg resolve %s%s)\n") % (
4337 4337 flags,
4338 4338 ' '.join(pats))
4339 4339 break
4340 4340 ui.warn(_("arguments do not match paths that need resolving\n"))
4341 4341 if hint:
4342 4342 ui.warn(hint)
4343 4343 elif ms.mergedriver and ms.mdstate() != 's':
4344 4344 # run conclude step when either a driver-resolved file is requested
4345 4345 # or there are no driver-resolved files
4346 4346 # we can't use 'ret' to determine whether any files are unresolved
4347 4347 # because we might not have tried to resolve some
4348 4348 if ((runconclude or not list(ms.driverresolved()))
4349 4349 and not list(ms.unresolved())):
4350 4350 proceed = mergemod.driverconclude(repo, ms, wctx)
4351 4351 ms.commit()
4352 4352 if not proceed:
4353 4353 return 1
4354 4354
4355 4355 # Nudge users into finishing an unfinished operation
4356 4356 unresolvedf = list(ms.unresolved())
4357 4357 driverresolvedf = list(ms.driverresolved())
4358 4358 if not unresolvedf and not driverresolvedf:
4359 4359 ui.status(_('(no more unresolved files)\n'))
4360 4360 cmdutil.checkafterresolved(repo)
4361 4361 elif not unresolvedf:
4362 4362 ui.status(_('(no more unresolved files -- '
4363 4363 'run "hg resolve --all" to conclude)\n'))
4364 4364
4365 4365 return ret
4366 4366
4367 4367 @command('revert',
4368 4368 [('a', 'all', None, _('revert all changes when no arguments given')),
4369 4369 ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
4370 4370 ('r', 'rev', '', _('revert to the specified revision'), _('REV')),
4371 4371 ('C', 'no-backup', None, _('do not save backup copies of files')),
4372 4372 ('i', 'interactive', None,
4373 4373 _('interactively select the changes (EXPERIMENTAL)')),
4374 4374 ] + walkopts + dryrunopts,
4375 4375 _('[OPTION]... [-r REV] [NAME]...'))
4376 4376 def revert(ui, repo, *pats, **opts):
4377 4377 """restore files to their checkout state
4378 4378
4379 4379 .. note::
4380 4380
4381 4381 To check out earlier revisions, you should use :hg:`update REV`.
4382 4382 To cancel an uncommitted merge (and lose your changes),
4383 4383 use :hg:`update --clean .`.
4384 4384
4385 4385 With no revision specified, revert the specified files or directories
4386 4386 to the contents they had in the parent of the working directory.
4387 4387 This restores the contents of files to an unmodified
4388 4388 state and unschedules adds, removes, copies, and renames. If the
4389 4389 working directory has two parents, you must explicitly specify a
4390 4390 revision.
4391 4391
4392 4392 Using the -r/--rev or -d/--date options, revert the given files or
4393 4393 directories to their states as of a specific revision. Because
4394 4394 revert does not change the working directory parents, this will
4395 4395 cause these files to appear modified. This can be helpful to "back
4396 4396 out" some or all of an earlier change. See :hg:`backout` for a
4397 4397 related method.
4398 4398
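For example, to restore a hypothetical ``Makefile`` to its contents
as of revision 42::

hg revert -r 42 Makefile
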
4399 4399 Modified files are saved with a .orig suffix before reverting.
4400 4400 To disable these backups, use --no-backup. It is possible to store
4401 4401 the backup files in a custom directory relative to the root of the
4402 4402 repository by setting the ``ui.origbackuppath`` configuration
4403 4403 option.
4404 4404
4405 4405 See :hg:`help dates` for a list of formats valid for -d/--date.
4406 4406
4407 4407 See :hg:`help backout` for a way to reverse the effect of an
4408 4408 earlier changeset.
4409 4409
4410 4410 Returns 0 on success.
4411 4411 """
4412 4412
4413 4413 if opts.get("date"):
4414 4414 if opts.get("rev"):
4415 4415 raise error.Abort(_("you can't specify a revision and a date"))
4416 4416 opts["rev"] = cmdutil.finddate(ui, repo, opts["date"])
4417 4417
4418 4418 parent, p2 = repo.dirstate.parents()
4419 4419 if not opts.get('rev') and p2 != nullid:
4420 4420 # revert after merge is a trap for new users (issue2915)
4421 4421 raise error.Abort(_('uncommitted merge with no revision specified'),
4422 4422 hint=_("use 'hg update' or see 'hg help revert'"))
4423 4423
4424 4424 ctx = scmutil.revsingle(repo, opts.get('rev'))
4425 4425
4426 4426 if (not (pats or opts.get('include') or opts.get('exclude') or
4427 4427 opts.get('all') or opts.get('interactive'))):
4428 4428 msg = _("no files or directories specified")
4429 4429 if p2 != nullid:
4430 4430 hint = _("uncommitted merge, use --all to discard all changes,"
4431 4431 " or 'hg update -C .' to abort the merge")
4432 4432 raise error.Abort(msg, hint=hint)
4433 4433 dirty = any(repo.status())
4434 4434 node = ctx.node()
4435 4435 if node != parent:
4436 4436 if dirty:
4437 4437 hint = _("uncommitted changes, use --all to discard all"
4438 4438 " changes, or 'hg update %s' to update") % ctx.rev()
4439 4439 else:
4440 4440 hint = _("use --all to revert all files,"
4441 4441 " or 'hg update %s' to update") % ctx.rev()
4442 4442 elif dirty:
4443 4443 hint = _("uncommitted changes, use --all to discard all changes")
4444 4444 else:
4445 4445 hint = _("use --all to revert all files")
4446 4446 raise error.Abort(msg, hint=hint)
4447 4447
4448 4448 return cmdutil.revert(ui, repo, ctx, (parent, p2), *pats, **opts)
4449 4449
4450 4450 @command('rollback', dryrunopts +
4451 4451 [('f', 'force', False, _('ignore safety measures'))])
4452 4452 def rollback(ui, repo, **opts):
4453 4453 """roll back the last transaction (DANGEROUS) (DEPRECATED)
4454 4454
4455 4455 Please use :hg:`commit --amend` instead of rollback to correct
4456 4456 mistakes in the last commit.
4457 4457
4458 4458 This command should be used with care. There is only one level of
4459 4459 rollback, and there is no way to undo a rollback. It will also
4460 4460 restore the dirstate at the time of the last transaction, losing
4461 4461 any dirstate changes since that time. This command does not alter
4462 4462 the working directory.
4463 4463
4464 4464 Transactions are used to encapsulate the effects of all commands
4465 4465 that create new changesets or propagate existing changesets into a
4466 4466 repository.
4467 4467
4468 4468 .. container:: verbose
4469 4469
4470 4470 For example, the following commands are transactional, and their
4471 4471 effects can be rolled back:
4472 4472
4473 4473 - commit
4474 4474 - import
4475 4475 - pull
4476 4476 - push (with this repository as the destination)
4477 4477 - unbundle
4478 4478
4479 4479 To avoid permanent data loss, rollback will refuse to roll back a
4480 4480 commit transaction if it isn't checked out. Use --force to
4481 4481 override this protection.
4482 4482
4483 4483 The rollback command can be entirely disabled by setting the
4484 4484 ``ui.rollback`` configuration setting to false. If you're here
4485 4485 because you want to use rollback and it's disabled, you can
4486 4486 re-enable the command by setting ``ui.rollback`` to true.
4487 4487
4488 4488 This command is not intended for use on public repositories. Once
4489 4489 changes are visible for pull by other users, rolling a transaction
4490 4490 back locally is ineffective (someone else may already have pulled
4491 4491 the changes). Furthermore, a race is possible with readers of the
4492 4492 repository; for example an in-progress pull from the repository
4493 4493 may fail if a rollback is performed.
4494 4494
4495 4495 Returns 0 on success, 1 if no rollback data is available.
4496 4496 """
4497 4497 if not ui.configbool('ui', 'rollback', True):
4498 4498 raise error.Abort(_('rollback is disabled because it is unsafe'),
4499 4499 hint=('see `hg help -v rollback` for information'))
4500 4500 return repo.rollback(dryrun=opts.get(r'dry_run'),
4501 4501 force=opts.get(r'force'))
4502 4502
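The ``ui.rollback`` switch mentioned in the docstring is an ordinary
boolean config knob; for example, to disable the command (an
illustrative hgrc stanza)::

    [ui]
    rollback = false
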
4503 4503 @command('root', [])
4504 4504 def root(ui, repo):
4505 4505 """print the root (top) of the current working directory
4506 4506
4507 4507 Print the root directory of the current repository.
4508 4508
4509 4509 Returns 0 on success.
4510 4510 """
4511 4511 ui.write(repo.root + "\n")
4512 4512
4513 4513 @command('^serve',
4514 4514 [('A', 'accesslog', '', _('name of access log file to write to'),
4515 4515 _('FILE')),
4516 4516 ('d', 'daemon', None, _('run server in background')),
4517 4517 ('', 'daemon-postexec', [], _('used internally by daemon mode')),
4518 4518 ('E', 'errorlog', '', _('name of error log file to write to'), _('FILE')),
4519 4519 # use string type, then we can check if something was passed
4520 4520 ('p', 'port', '', _('port to listen on (default: 8000)'), _('PORT')),
4521 4521 ('a', 'address', '', _('address to listen on (default: all interfaces)'),
4522 4522 _('ADDR')),
4523 4523 ('', 'prefix', '', _('prefix path to serve from (default: server root)'),
4524 4524 _('PREFIX')),
4525 4525 ('n', 'name', '',
4526 4526 _('name to show in web pages (default: working directory)'), _('NAME')),
4527 4527 ('', 'web-conf', '',
4528 4528 _("name of the hgweb config file (see 'hg help hgweb')"), _('FILE')),
4529 4529 ('', 'webdir-conf', '', _('name of the hgweb config file (DEPRECATED)'),
4530 4530 _('FILE')),
4531 4531 ('', 'pid-file', '', _('name of file to write process ID to'), _('FILE')),
4532 4532 ('', 'stdio', None, _('for remote clients (ADVANCED)')),
4533 4533 ('', 'cmdserver', '', _('for remote clients (ADVANCED)'), _('MODE')),
4534 4534 ('t', 'templates', '', _('web templates to use'), _('TEMPLATE')),
4535 4535 ('', 'style', '', _('template style to use'), _('STYLE')),
4536 4536 ('6', 'ipv6', None, _('use IPv6 in addition to IPv4')),
4537 4537 ('', 'certificate', '', _('SSL certificate file'), _('FILE'))]
4538 4538 + subrepoopts,
4539 4539 _('[OPTION]...'),
4540 4540 optionalrepo=True)
4541 4541 def serve(ui, repo, **opts):
4542 4542 """start stand-alone webserver
4543 4543
4544 4544 Start a local HTTP repository browser and pull server. You can use
4545 4545 this for ad-hoc sharing and browsing of repositories. It is
4546 4546 recommended to use a real web server to serve a repository for
4547 4547 longer periods of time.
4548 4548
4549 4549 Please note that the server does not implement access control.
4550 4550 This means that, by default, anybody can read from the server and
4551 4551 nobody can write to it. Set the ``web.allow_push``
4552 4552 option to ``*`` to allow everybody to push to the server. You
4553 4553 should use a real web server if you need to authenticate users.
4554 4554
4555 4555 By default, the server logs accesses to stdout and errors to
4556 4556 stderr. Use the -A/--accesslog and -E/--errorlog options to log to
4557 4557 files.
4558 4558
4559 4559 To have the server choose a free port number to listen on, specify
4560 4560 a port number of 0; in this case, the server will print the port
4561 4561 number it uses.
4562 4562
4563 4563 Returns 0 on success.
4564 4564 """
4565 4565
4566 4566 opts = pycompat.byteskwargs(opts)
4567 4567 if opts["stdio"] and opts["cmdserver"]:
4568 4568 raise error.Abort(_("cannot use --stdio with --cmdserver"))
4569 4569
4570 4570 if opts["stdio"]:
4571 4571 if repo is None:
4572 4572 raise error.RepoError(_("there is no Mercurial repository here"
4573 4573 " (.hg not found)"))
4574 4574 s = sshserver.sshserver(ui, repo)
4575 4575 s.serve_forever()
4576 4576
4577 4577 service = server.createservice(ui, repo, opts)
4578 4578 return server.runservice(opts, initfn=service.init, runfn=service.run)
4579 4579
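Combining the logging and port options described in the docstring, an
illustrative invocation that lets the server pick a free port and log
to files (the file names here are arbitrary)::

    $ hg serve -p 0 -A access.log -E error.log
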
4580 4580 @command('^status|st',
4581 4581 [('A', 'all', None, _('show status of all files')),
4582 4582 ('m', 'modified', None, _('show only modified files')),
4583 4583 ('a', 'added', None, _('show only added files')),
4584 4584 ('r', 'removed', None, _('show only removed files')),
4585 4585 ('d', 'deleted', None, _('show only deleted (but tracked) files')),
4586 4586 ('c', 'clean', None, _('show only files without changes')),
4587 4587 ('u', 'unknown', None, _('show only unknown (not tracked) files')),
4588 4588 ('i', 'ignored', None, _('show only ignored files')),
4589 4589 ('n', 'no-status', None, _('hide status prefix')),
4590 4590 ('C', 'copies', None, _('show source of copied files')),
4591 4591 ('0', 'print0', None, _('end filenames with NUL, for use with xargs')),
4592 4592 ('', 'rev', [], _('show difference from revision'), _('REV')),
4593 4593 ('', 'change', '', _('list the changed files of a revision'), _('REV')),
4594 4594 ] + walkopts + subrepoopts + formatteropts,
4595 4595 _('[OPTION]... [FILE]...'),
4596 4596 inferrepo=True)
4597 4597 def status(ui, repo, *pats, **opts):
4598 4598 """show changed files in the working directory
4599 4599
4600 4600 Show status of files in the repository. If names are given, only
4601 4601 files that match are shown. Files that are clean or ignored or
4602 4602 the source of a copy/move operation are not listed unless
4603 4603 -c/--clean, -i/--ignored, -C/--copies or -A/--all are given.
4604 4604 Unless options described with "show only ..." are given, the
4605 4605 options -mardu are used.
4606 4606
4607 4607 Option -q/--quiet hides untracked (unknown and ignored) files
4608 4608 unless explicitly requested with -u/--unknown or -i/--ignored.
4609 4609
4610 4610 .. note::
4611 4611
4612 4612 :hg:`status` may appear to disagree with diff if permissions have
4613 4613 changed or a merge has occurred. The standard diff format does
4614 4614 not report permission changes and diff only reports changes
4615 4615 relative to one merge parent.
4616 4616
4617 4617 If one revision is given, it is used as the base revision.
4618 4618 If two revisions are given, the differences between them are
4619 4619 shown. The --change option can also be used as a shortcut to list
4620 4620 the changed files of a revision from its first parent.
4621 4621
4622 4622 The codes used to show the status of files are::
4623 4623
4624 4624 M = modified
4625 4625 A = added
4626 4626 R = removed
4627 4627 C = clean
4628 4628 ! = missing (deleted by non-hg command, but still tracked)
4629 4629 ? = not tracked
4630 4630 I = ignored
4631 4631 = origin of the previous file (with --copies)
4632 4632
4633 4633 .. container:: verbose
4634 4634
4635 4635 Examples:
4636 4636
4637 4637 - show changes in the working directory relative to a
4638 4638 changeset::
4639 4639
4640 4640 hg status --rev 9353
4641 4641
4642 4642 - show changes in the working directory relative to the
4643 4643 current directory (see :hg:`help patterns` for more information)::
4644 4644
4645 4645 hg status re:
4646 4646
4647 4647 - show all changes including copies in an existing changeset::
4648 4648
4649 4649 hg status --copies --change 9353
4650 4650
4651 4651 - get a NUL separated list of added files, suitable for xargs::
4652 4652
4653 4653 hg status -an0
4654 4654
4655 4655 Returns 0 on success.
4656 4656 """
4657 4657
4658 4658 opts = pycompat.byteskwargs(opts)
4659 4659 revs = opts.get('rev')
4660 4660 change = opts.get('change')
4661 4661
4662 4662 if revs and change:
4663 4663 msg = _('cannot specify --rev and --change at the same time')
4664 4664 raise error.Abort(msg)
4665 4665 elif change:
4666 4666 node2 = scmutil.revsingle(repo, change, None).node()
4667 4667 node1 = repo[node2].p1().node()
4668 4668 else:
4669 4669 node1, node2 = scmutil.revpair(repo, revs)
4670 4670
4671 4671 if pats or ui.configbool('commands', 'status.relative'):
4672 4672 cwd = repo.getcwd()
4673 4673 else:
4674 4674 cwd = ''
4675 4675
4676 4676 if opts.get('print0'):
4677 4677 end = '\0'
4678 4678 else:
4679 4679 end = '\n'
4680 4680 copy = {}
4681 4681 states = 'modified added removed deleted unknown ignored clean'.split()
4682 4682 show = [k for k in states if opts.get(k)]
4683 4683 if opts.get('all'):
4684 4684 show += ui.quiet and (states[:4] + ['clean']) or states
4685 4685 if not show:
4686 4686 if ui.quiet:
4687 4687 show = states[:4]
4688 4688 else:
4689 4689 show = states[:5]
4690 4690
4691 4691 m = scmutil.match(repo[node2], pats, opts)
4692 4692 stat = repo.status(node1, node2, m,
4693 4693 'ignored' in show, 'clean' in show, 'unknown' in show,
4694 4694 opts.get('subrepos'))
4695 4695 changestates = zip(states, pycompat.iterbytestr('MAR!?IC'), stat)
4696 4696
4697 4697 if (opts.get('all') or opts.get('copies')
4698 4698 or ui.configbool('ui', 'statuscopies')) and not opts.get('no_status'):
4699 4699 copy = copies.pathcopies(repo[node1], repo[node2], m)
4700 4700
4701 4701 ui.pager('status')
4702 4702 fm = ui.formatter('status', opts)
4703 4703 fmt = '%s' + end
4704 4704 showchar = not opts.get('no_status')
4705 4705
4706 4706 for state, char, files in changestates:
4707 4707 if state in show:
4708 4708 label = 'status.' + state
4709 4709 for f in files:
4710 4710 fm.startitem()
4711 4711 fm.condwrite(showchar, 'status', '%s ', char, label=label)
4712 4712 fm.write('path', fmt, repo.pathto(f, cwd), label=label)
4713 4713 if f in copy:
4714 4714 fm.write("copy", ' %s' + end, repo.pathto(copy[f], cwd),
4715 4715 label='status.copied')
4716 4716 fm.end()
4717 4717
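The per-state loop above leans on the positional pairing of three
parallel sequences in ``changestates``; a minimal standalone sketch of
that idiom, with a hypothetical tuple standing in for the real
``repo.status()`` result::

    states = 'modified added removed deleted unknown ignored clean'.split()
    stat = (['a.txt'], [], ['b.txt'], [], [], [], [])  # hypothetical
    for state, char, files in zip(states, 'MAR!?IC', stat):
        for f in files:
            print('%s %s' % (char, f))  # prints "M a.txt" and "R b.txt"
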
4718 4718 @command('^summary|sum',
4719 4719 [('', 'remote', None, _('check for push and pull'))], '[--remote]')
4720 4720 def summary(ui, repo, **opts):
4721 4721 """summarize working directory state
4722 4722
4723 4723 This generates a brief summary of the working directory state,
4724 4724 including parents, branch, commit status, phase and available updates.
4725 4725
4726 4726 With the --remote option, this will check the default paths for
4727 4727 incoming and outgoing changes. This can be time-consuming.
4728 4728
4729 4729 Returns 0 on success.
4730 4730 """
4731 4731
4732 4732 opts = pycompat.byteskwargs(opts)
4733 4733 ui.pager('summary')
4734 4734 ctx = repo[None]
4735 4735 parents = ctx.parents()
4736 4736 pnode = parents[0].node()
4737 4737 marks = []
4738 4738
4739 4739 ms = None
4740 4740 try:
4741 4741 ms = mergemod.mergestate.read(repo)
4742 4742 except error.UnsupportedMergeRecords as e:
4743 4743 s = ' '.join(e.recordtypes)
4744 4744 ui.warn(
4745 4745 _('warning: merge state has unsupported record types: %s\n') % s)
4746 4746 unresolved = 0
4747 4747 else:
4748 4748 unresolved = [f for f in ms if ms[f] == 'u']
4749 4749
4750 4750 for p in parents:
4751 4751 # label with log.changeset (instead of log.parent) since this
4752 4752 # shows a working directory parent *changeset*:
4753 4753 # i18n: column positioning for "hg summary"
4754 4754 ui.write(_('parent: %d:%s ') % (p.rev(), p),
4755 4755 label=cmdutil._changesetlabels(p))
4756 4756 ui.write(' '.join(p.tags()), label='log.tag')
4757 4757 if p.bookmarks():
4758 4758 marks.extend(p.bookmarks())
4759 4759 if p.rev() == -1:
4760 4760 if not len(repo):
4761 4761 ui.write(_(' (empty repository)'))
4762 4762 else:
4763 4763 ui.write(_(' (no revision checked out)'))
4764 4764 if p.obsolete():
4765 4765 ui.write(_(' (obsolete)'))
4766 4766 if p.troubled():
4767 4767 ui.write(' ('
4768 4768 + ', '.join(ui.label(trouble, 'trouble.%s' % trouble)
4769 4769 for trouble in p.troubles())
4770 4770 + ')')
4771 4771 ui.write('\n')
4772 4772 if p.description():
4773 4773 ui.status(' ' + p.description().splitlines()[0].strip() + '\n',
4774 4774 label='log.summary')
4775 4775
4776 4776 branch = ctx.branch()
4777 4777 bheads = repo.branchheads(branch)
4778 4778 # i18n: column positioning for "hg summary"
4779 4779 m = _('branch: %s\n') % branch
4780 4780 if branch != 'default':
4781 4781 ui.write(m, label='log.branch')
4782 4782 else:
4783 4783 ui.status(m, label='log.branch')
4784 4784
4785 4785 if marks:
4786 4786 active = repo._activebookmark
4787 4787 # i18n: column positioning for "hg summary"
4788 4788 ui.write(_('bookmarks:'), label='log.bookmark')
4789 4789 if active is not None:
4790 4790 if active in marks:
4791 4791 ui.write(' *' + active, label=bookmarks.activebookmarklabel)
4792 4792 marks.remove(active)
4793 4793 else:
4794 4794 ui.write(' [%s]' % active, label=bookmarks.activebookmarklabel)
4795 4795 for m in marks:
4796 4796 ui.write(' ' + m, label='log.bookmark')
4797 4797 ui.write('\n', label='log.bookmark')
4798 4798
4799 4799 status = repo.status(unknown=True)
4800 4800
4801 4801 c = repo.dirstate.copies()
4802 4802 copied, renamed = [], []
4803 4803 for d, s in c.iteritems():
4804 4804 if s in status.removed:
4805 4805 status.removed.remove(s)
4806 4806 renamed.append(d)
4807 4807 else:
4808 4808 copied.append(d)
4809 4809 if d in status.added:
4810 4810 status.added.remove(d)
4811 4811
4812 4812 subs = [s for s in ctx.substate if ctx.sub(s).dirty()]
4813 4813
4814 4814 labels = [(ui.label(_('%d modified'), 'status.modified'), status.modified),
4815 4815 (ui.label(_('%d added'), 'status.added'), status.added),
4816 4816 (ui.label(_('%d removed'), 'status.removed'), status.removed),
4817 4817 (ui.label(_('%d renamed'), 'status.copied'), renamed),
4818 4818 (ui.label(_('%d copied'), 'status.copied'), copied),
4819 4819 (ui.label(_('%d deleted'), 'status.deleted'), status.deleted),
4820 4820 (ui.label(_('%d unknown'), 'status.unknown'), status.unknown),
4821 4821 (ui.label(_('%d unresolved'), 'resolve.unresolved'), unresolved),
4822 4822 (ui.label(_('%d subrepos'), 'status.modified'), subs)]
4823 4823 t = []
4824 4824 for l, s in labels:
4825 4825 if s:
4826 4826 t.append(l % len(s))
4827 4827
4828 4828 t = ', '.join(t)
4829 4829 cleanworkdir = False
4830 4830
4831 4831 if repo.vfs.exists('graftstate'):
4832 4832 t += _(' (graft in progress)')
4833 4833 if repo.vfs.exists('updatestate'):
4834 4834 t += _(' (interrupted update)')
4835 4835 elif len(parents) > 1:
4836 4836 t += _(' (merge)')
4837 4837 elif branch != parents[0].branch():
4838 4838 t += _(' (new branch)')
4839 4839 elif (parents[0].closesbranch() and
4840 4840 pnode in repo.branchheads(branch, closed=True)):
4841 4841 t += _(' (head closed)')
4842 4842 elif not (status.modified or status.added or status.removed or renamed or
4843 4843 copied or subs):
4844 4844 t += _(' (clean)')
4845 4845 cleanworkdir = True
4846 4846 elif pnode not in bheads:
4847 4847 t += _(' (new branch head)')
4848 4848
4849 4849 if parents:
4850 4850 pendingphase = max(p.phase() for p in parents)
4851 4851 else:
4852 4852 pendingphase = phases.public
4853 4853
4854 4854 if pendingphase > phases.newcommitphase(ui):
4855 4855 t += ' (%s)' % phases.phasenames[pendingphase]
4856 4856
4857 4857 if cleanworkdir:
4858 4858 # i18n: column positioning for "hg summary"
4859 4859 ui.status(_('commit: %s\n') % t.strip())
4860 4860 else:
4861 4861 # i18n: column positioning for "hg summary"
4862 4862 ui.write(_('commit: %s\n') % t.strip())
4863 4863
4864 4864 # all ancestors of branch heads - all ancestors of parent = new csets
4865 4865 new = len(repo.changelog.findmissing([pctx.node() for pctx in parents],
4866 4866 bheads))
4867 4867
4868 4868 if new == 0:
4869 4869 # i18n: column positioning for "hg summary"
4870 4870 ui.status(_('update: (current)\n'))
4871 4871 elif pnode not in bheads:
4872 4872 # i18n: column positioning for "hg summary"
4873 4873 ui.write(_('update: %d new changesets (update)\n') % new)
4874 4874 else:
4875 4875 # i18n: column positioning for "hg summary"
4876 4876 ui.write(_('update: %d new changesets, %d branch heads (merge)\n') %
4877 4877 (new, len(bheads)))
4878 4878
4879 4879 t = []
4880 4880 draft = len(repo.revs('draft()'))
4881 4881 if draft:
4882 4882 t.append(_('%d draft') % draft)
4883 4883 secret = len(repo.revs('secret()'))
4884 4884 if secret:
4885 4885 t.append(_('%d secret') % secret)
4886 4886
4887 4887 if draft or secret:
4888 4888 ui.status(_('phases: %s\n') % ', '.join(t))
4889 4889
4890 4890 if obsolete.isenabled(repo, obsolete.createmarkersopt):
4891 4891 for trouble in ("unstable", "divergent", "bumped"):
4892 4892 numtrouble = len(repo.revs(trouble + "()"))
4893 4893 # We write all the possibilities to ease translation
4894 4894 troublemsg = {
4895 4895 "unstable": _("unstable: %d changesets"),
4896 4896 "divergent": _("divergent: %d changesets"),
4897 4897 "bumped": _("bumped: %d changesets"),
4898 4898 }
4899 4899 if numtrouble > 0:
4900 4900 ui.status(troublemsg[trouble] % numtrouble + "\n")
4901 4901
4902 4902 cmdutil.summaryhooks(ui, repo)
4903 4903
4904 4904 if opts.get('remote'):
4905 4905 needsincoming, needsoutgoing = True, True
4906 4906 else:
4907 4907 needsincoming, needsoutgoing = False, False
4908 4908 for i, o in cmdutil.summaryremotehooks(ui, repo, opts, None):
4909 4909 if i:
4910 4910 needsincoming = True
4911 4911 if o:
4912 4912 needsoutgoing = True
4913 4913 if not needsincoming and not needsoutgoing:
4914 4914 return
4915 4915
4916 4916 def getincoming():
4917 4917 source, branches = hg.parseurl(ui.expandpath('default'))
4918 4918 sbranch = branches[0]
4919 4919 try:
4920 4920 other = hg.peer(repo, {}, source)
4921 4921 except error.RepoError:
4922 4922 if opts.get('remote'):
4923 4923 raise
4924 4924 return source, sbranch, None, None, None
4925 4925 revs, checkout = hg.addbranchrevs(repo, other, branches, None)
4926 4926 if revs:
4927 4927 revs = [other.lookup(rev) for rev in revs]
4928 4928 ui.debug('comparing with %s\n' % util.hidepassword(source))
4929 4929 repo.ui.pushbuffer()
4930 4930 commoninc = discovery.findcommonincoming(repo, other, heads=revs)
4931 4931 repo.ui.popbuffer()
4932 4932 return source, sbranch, other, commoninc, commoninc[1]
4933 4933
4934 4934 if needsincoming:
4935 4935 source, sbranch, sother, commoninc, incoming = getincoming()
4936 4936 else:
4937 4937 source = sbranch = sother = commoninc = incoming = None
4938 4938
4939 4939 def getoutgoing():
4940 4940 dest, branches = hg.parseurl(ui.expandpath('default-push', 'default'))
4941 4941 dbranch = branches[0]
4942 4942 revs, checkout = hg.addbranchrevs(repo, repo, branches, None)
4943 4943 if source != dest:
4944 4944 try:
4945 4945 dother = hg.peer(repo, {}, dest)
4946 4946 except error.RepoError:
4947 4947 if opts.get('remote'):
4948 4948 raise
4949 4949 return dest, dbranch, None, None
4950 4950 ui.debug('comparing with %s\n' % util.hidepassword(dest))
4951 4951 elif sother is None:
4952 4952 # there is no explicit destination peer, but source one is invalid
4953 4953 return dest, dbranch, None, None
4954 4954 else:
4955 4955 dother = sother
4956 4956 if (source != dest or (sbranch is not None and sbranch != dbranch)):
4957 4957 common = None
4958 4958 else:
4959 4959 common = commoninc
4960 4960 if revs:
4961 4961 revs = [repo.lookup(rev) for rev in revs]
4962 4962 repo.ui.pushbuffer()
4963 4963 outgoing = discovery.findcommonoutgoing(repo, dother, onlyheads=revs,
4964 4964 commoninc=common)
4965 4965 repo.ui.popbuffer()
4966 4966 return dest, dbranch, dother, outgoing
4967 4967
4968 4968 if needsoutgoing:
4969 4969 dest, dbranch, dother, outgoing = getoutgoing()
4970 4970 else:
4971 4971 dest = dbranch = dother = outgoing = None
4972 4972
4973 4973 if opts.get('remote'):
4974 4974 t = []
4975 4975 if incoming:
4976 4976 t.append(_('1 or more incoming'))
4977 4977 o = outgoing.missing
4978 4978 if o:
4979 4979 t.append(_('%d outgoing') % len(o))
4980 4980 other = dother or sother
4981 4981 if 'bookmarks' in other.listkeys('namespaces'):
4982 4982 counts = bookmarks.summary(repo, other)
4983 4983 if counts[0] > 0:
4984 4984 t.append(_('%d incoming bookmarks') % counts[0])
4985 4985 if counts[1] > 0:
4986 4986 t.append(_('%d outgoing bookmarks') % counts[1])
4987 4987
4988 4988 if t:
4989 4989 # i18n: column positioning for "hg summary"
4990 4990 ui.write(_('remote: %s\n') % (', '.join(t)))
4991 4991 else:
4992 4992 # i18n: column positioning for "hg summary"
4993 4993 ui.status(_('remote: (synced)\n'))
4994 4994
4995 4995 cmdutil.summaryremotehooks(ui, repo, opts,
4996 4996 ((source, sbranch, sother, commoninc),
4997 4997 (dest, dbranch, dother, outgoing)))
4998 4998
4999 4999 @command('tag',
5000 5000 [('f', 'force', None, _('force tag')),
5001 5001 ('l', 'local', None, _('make the tag local')),
5002 5002 ('r', 'rev', '', _('revision to tag'), _('REV')),
5003 5003 ('', 'remove', None, _('remove a tag')),
5004 5004 # -l/--local is already there, commitopts cannot be used
5005 5005 ('e', 'edit', None, _('invoke editor on commit messages')),
5006 5006 ('m', 'message', '', _('use text as commit message'), _('TEXT')),
5007 5007 ] + commitopts2,
5008 5008 _('[-f] [-l] [-m TEXT] [-d DATE] [-u USER] [-r REV] NAME...'))
5009 5009 def tag(ui, repo, name1, *names, **opts):
5010 5010 """add one or more tags for the current or given revision
5011 5011
5012 5012 Name a particular revision using <name>.
5013 5013
5014 5014 Tags are used to name particular revisions of the repository and are
5015 5015 very useful to compare different revisions, to go back to significant
5016 5016 earlier versions or to mark branch points as releases, etc. Changing
5017 5017 an existing tag is normally disallowed; use -f/--force to override.
5018 5018
5019 5019 If no revision is given, the parent of the working directory is
5020 5020 used.
5021 5021
5022 5022 To facilitate version control, distribution, and merging of tags,
5023 5023 they are stored as a file named ".hgtags" which is managed similarly
5024 5024 to other project files and can be hand-edited if necessary. This
5025 5025 also means that tagging creates a new commit. The file
5026 5026 ".hg/localtags" is used for local tags (not shared among
5027 5027 repositories).
5028 5028
5029 5029 Tag commits are usually made at the head of a branch. If the parent
5030 5030 of the working directory is not a branch head, :hg:`tag` aborts; use
5031 5031 -f/--force to force the tag commit to be based on a non-head
5032 5032 changeset.
5033 5033
5034 5034 See :hg:`help dates` for a list of formats valid for -d/--date.
5035 5035
5036 5036 Since tag names have priority over branch names during revision
5037 5037 lookup, using an existing branch name as a tag name is discouraged.
5038 5038
5039 5039 Returns 0 on success.
5040 5040 """
5041 5041 opts = pycompat.byteskwargs(opts)
5042 5042 wlock = lock = None
5043 5043 try:
5044 5044 wlock = repo.wlock()
5045 5045 lock = repo.lock()
5046 5046 rev_ = "."
5047 5047 names = [t.strip() for t in (name1,) + names]
5048 5048 if len(names) != len(set(names)):
5049 5049 raise error.Abort(_('tag names must be unique'))
5050 5050 for n in names:
5051 5051 scmutil.checknewlabel(repo, n, 'tag')
5052 5052 if not n:
5053 5053 raise error.Abort(_('tag names cannot consist entirely of '
5054 5054 'whitespace'))
5055 5055 if opts.get('rev') and opts.get('remove'):
5056 5056 raise error.Abort(_("--rev and --remove are incompatible"))
5057 5057 if opts.get('rev'):
5058 5058 rev_ = opts['rev']
5059 5059 message = opts.get('message')
5060 5060 if opts.get('remove'):
5061 5061 if opts.get('local'):
5062 5062 expectedtype = 'local'
5063 5063 else:
5064 5064 expectedtype = 'global'
5065 5065
5066 5066 for n in names:
5067 5067 if not repo.tagtype(n):
5068 5068 raise error.Abort(_("tag '%s' does not exist") % n)
5069 5069 if repo.tagtype(n) != expectedtype:
5070 5070 if expectedtype == 'global':
5071 5071 raise error.Abort(_("tag '%s' is not a global tag") % n)
5072 5072 else:
5073 5073 raise error.Abort(_("tag '%s' is not a local tag") % n)
5074 5074 rev_ = 'null'
5075 5075 if not message:
5076 5076 # we don't translate commit messages
5077 5077 message = 'Removed tag %s' % ', '.join(names)
5078 5078 elif not opts.get('force'):
5079 5079 for n in names:
5080 5080 if n in repo.tags():
5081 5081 raise error.Abort(_("tag '%s' already exists "
5082 5082 "(use -f to force)") % n)
5083 5083 if not opts.get('local'):
5084 5084 p1, p2 = repo.dirstate.parents()
5085 5085 if p2 != nullid:
5086 5086 raise error.Abort(_('uncommitted merge'))
5087 5087 bheads = repo.branchheads()
5088 5088 if not opts.get('force') and bheads and p1 not in bheads:
5089 5089 raise error.Abort(_('working directory is not at a branch head '
5090 5090 '(use -f to force)'))
5091 5091 r = scmutil.revsingle(repo, rev_).node()
5092 5092
5093 5093 if not message:
5094 5094 # we don't translate commit messages
5095 5095 message = ('Added tag %s for changeset %s' %
5096 5096 (', '.join(names), short(r)))
5097 5097
5098 5098 date = opts.get('date')
5099 5099 if date:
5100 5100 date = util.parsedate(date)
5101 5101
5102 5102 if opts.get('remove'):
5103 5103 editform = 'tag.remove'
5104 5104 else:
5105 5105 editform = 'tag.add'
5106 5106 editor = cmdutil.getcommiteditor(editform=editform,
5107 5107 **pycompat.strkwargs(opts))
5108 5108
5109 5109 # don't allow tagging the null rev
5110 5110 if (not opts.get('remove') and
5111 5111 scmutil.revsingle(repo, rev_).rev() == nullrev):
5112 5112 raise error.Abort(_("cannot tag null revision"))
5113 5113
5114 5114 tagsmod.tag(repo, names, r, message, opts.get('local'),
5115 5115 opts.get('user'), date, editor=editor)
5116 5116 finally:
5117 5117 release(lock, wlock)
5118 5118
5119 5119 @command('tags', formatteropts, '')
5120 5120 def tags(ui, repo, **opts):
5121 5121 """list repository tags
5122 5122
5123 5123 This lists both regular and local tags. When the -v/--verbose
5124 5124 switch is used, a third column "local" is printed for local tags.
5125 5125 When the -q/--quiet switch is used, only the tag name is printed.
5126 5126
5127 5127 Returns 0 on success.
5128 5128 """
5129 5129
5130 5130 opts = pycompat.byteskwargs(opts)
5131 5131 ui.pager('tags')
5132 5132 fm = ui.formatter('tags', opts)
5133 5133 hexfunc = fm.hexfunc
5134 5134 tagtype = ""
5135 5135
5136 5136 for t, n in reversed(repo.tagslist()):
5137 5137 hn = hexfunc(n)
5138 5138 label = 'tags.normal'
5139 5139 tagtype = ''
5140 5140 if repo.tagtype(t) == 'local':
5141 5141 label = 'tags.local'
5142 5142 tagtype = 'local'
5143 5143
5144 5144 fm.startitem()
5145 5145 fm.write('tag', '%s', t, label=label)
5146 5146 fmt = " " * (30 - encoding.colwidth(t)) + ' %5d:%s'
5147 5147 fm.condwrite(not ui.quiet, 'rev node', fmt,
5148 5148 repo.changelog.rev(n), hn, label=label)
5149 5149 fm.condwrite(ui.verbose and tagtype, 'type', ' %s',
5150 5150 tagtype, label=label)
5151 5151 fm.plain('\n')
5152 5152 fm.end()
5153 5153
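The ``fmt`` expression above pads with spaces so that the ``rev:node``
column lines up at display cell 30 whatever the tag width; illustrative
plain output (hashes invented, widths approximate)::

    tip                                5:f3a9b1c2d4e5
    stable                             3:9c8d7e6f5a4b
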
5154 5154 @command('tip',
5155 5155 [('p', 'patch', None, _('show patch')),
5156 5156 ('g', 'git', None, _('use git extended diff format')),
5157 5157 ] + templateopts,
5158 5158 _('[-p] [-g]'))
5159 5159 def tip(ui, repo, **opts):
5160 5160 """show the tip revision (DEPRECATED)
5161 5161
5162 5162 The tip revision (usually just called the tip) is the changeset
5163 5163 most recently added to the repository (and therefore the most
5164 5164 recently changed head).
5165 5165
5166 5166 If you have just made a commit, that commit will be the tip. If
5167 5167 you have just pulled changes from another repository, the tip of
5168 5168 that repository becomes the current tip. The "tip" tag is special
5169 5169 and cannot be renamed or assigned to a different changeset.
5170 5170
5171 5171 This command is deprecated, please use :hg:`heads` instead.
5172 5172
5173 5173 Returns 0 on success.
5174 5174 """
5175 5175 opts = pycompat.byteskwargs(opts)
5176 5176 displayer = cmdutil.show_changeset(ui, repo, opts)
5177 5177 displayer.show(repo['tip'])
5178 5178 displayer.close()
5179 5179
5180 5180 @command('unbundle',
5181 5181 [('u', 'update', None,
5182 5182 _('update to new branch head if changesets were unbundled'))],
5183 5183 _('[-u] FILE...'))
5184 5184 def unbundle(ui, repo, fname1, *fnames, **opts):
5185 5185 """apply one or more bundle files
5186 5186
5187 5187 Apply one or more bundle files generated by :hg:`bundle`.
5188 5188
5189 5189 Returns 0 on success, 1 if an update has unresolved files.
5190 5190 """
5191 5191 fnames = (fname1,) + fnames
5192 5192
5193 5193 with repo.lock():
5194 5194 for fname in fnames:
5195 5195 f = hg.openpath(ui, fname)
5196 5196 gen = exchange.readbundle(ui, f, fname)
5197 5197 if isinstance(gen, streamclone.streamcloneapplier):
5198 5198 raise error.Abort(
5199 5199 _('packed bundles cannot be applied with '
5200 5200 '"hg unbundle"'),
5201 5201 hint=_('use "hg debugapplystreamclonebundle"'))
5202 5202 url = 'bundle:' + fname
5203 5203 if isinstance(gen, bundle2.unbundle20):
5204 5204 with repo.transaction('unbundle') as tr:
5205 5205 try:
5206 5206 op = bundle2.applybundle(repo, gen, tr,
5207 5207 source='unbundle',
5208 5208 url=url)
5209 5209 except error.BundleUnknownFeatureError as exc:
5210 5210 raise error.Abort(
5211 5211 _('%s: unknown bundle feature, %s') % (fname, exc),
5212 5212 hint=_("see https://mercurial-scm.org/"
5213 5213 "wiki/BundleFeature for more "
5214 5214 "information"))
5215 5215 changes = [r.get('return', 0)
5216 5216 for r in op.records['changegroup']]
5217 5217 modheads = changegroup.combineresults(changes)
5218 5218 else:
5219 5219 txnname = 'unbundle\n%s' % util.hidepassword(url)
5220 5220 with repo.transaction(txnname) as tr:
5221 modheads = gen.apply(repo, tr, 'unbundle', url)
5221 modheads, addednodes = gen.apply(repo, tr, 'unbundle', url)
5222 5222
5223 5223 return postincoming(ui, repo, modheads, opts.get(r'update'), None, None)
5224 5224
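The bundle2 branch above is where the per-changegroup records are
consumed: each entry of ``op.records['changegroup']`` is a dict that may
carry the part's return value under the ``'return'`` key. A minimal
sketch of that aggregation step, with hypothetical record contents::

    records = [{'return': 1}, {'return': 3}]          # hypothetical
    changes = [r.get('return', 0) for r in records]   # -> [1, 3]
    # changegroup.combineresults(changes) then folds these into the
    # single modheads value handed to postincoming()
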
5225 5225 @command('^update|up|checkout|co',
5226 5226 [('C', 'clean', None, _('discard uncommitted changes (no backup)')),
5227 5227 ('c', 'check', None, _('require clean working directory')),
5228 5228 ('m', 'merge', None, _('merge uncommitted changes')),
5229 5229 ('d', 'date', '', _('tipmost revision matching date'), _('DATE')),
5230 5230 ('r', 'rev', '', _('revision'), _('REV'))
5231 5231 ] + mergetoolopts,
5232 5232 _('[-C|-c|-m] [-d DATE] [[-r] REV]'))
5233 5233 def update(ui, repo, node=None, rev=None, clean=False, date=None, check=False,
5234 5234 merge=None, tool=None):
5235 5235 """update working directory (or switch revisions)
5236 5236
5237 5237 Update the repository's working directory to the specified
5238 5238 changeset. If no changeset is specified, update to the tip of the
5239 5239 current named branch and move the active bookmark (see :hg:`help
5240 5240 bookmarks`).
5241 5241
5242 5242 Update sets the working directory's parent revision to the specified
5243 5243 changeset (see :hg:`help parents`).
5244 5244
5245 5245 If the changeset is not a descendant or ancestor of the working
5246 5246 directory's parent and there are uncommitted changes, the update is
5247 5247 aborted. With the -c/--check option, the working directory is checked
5248 5248 for uncommitted changes; if none are found, the working directory is
5249 5249 updated to the specified changeset.
5250 5250
5251 5251 .. container:: verbose
5252 5252
5253 5253 The -C/--clean, -c/--check, and -m/--merge options control what
5254 5254 happens if the working directory contains uncommitted changes.
5255 5255 At most one of them can be specified.
5256 5256
5257 5257 1. If no option is specified, and if
5258 5258 the requested changeset is an ancestor or descendant of
5259 5259 the working directory's parent, the uncommitted changes
5260 5260 are merged into the requested changeset and the merged
5261 5261 result is left uncommitted. If the requested changeset is
5262 5262 not an ancestor or descendant (that is, it is on another
5263 5263 branch), the update is aborted and the uncommitted changes
5264 5264 are preserved.
5265 5265
5266 5266 2. With the -m/--merge option, the update is allowed even if the
5267 5267 requested changeset is not an ancestor or descendant of
5268 5268 the working directory's parent.
5269 5269
5270 5270 3. With the -c/--check option, the update is aborted and the
5271 5271 uncommitted changes are preserved.
5272 5272
5273 5273 4. With the -C/--clean option, uncommitted changes are discarded and
5274 5274 the working directory is updated to the requested changeset.
5275 5275
5276 5276 To cancel an uncommitted merge (and lose your changes), use
5277 5277 :hg:`update --clean .`.
5278 5278
5279 5279 Use null as the changeset to remove the working directory (like
5280 5280 :hg:`clone -U`).
5281 5281
5282 5282 If you want to revert just one file to an older revision, use
5283 5283 :hg:`revert [-r REV] NAME`.
5284 5284
5285 5285 See :hg:`help dates` for a list of formats valid for -d/--date.
5286 5286
5287 5287 Returns 0 on success, 1 if there are unresolved files.
5288 5288 """
5289 5289 if rev and node:
5290 5290 raise error.Abort(_("please specify just one revision"))
5291 5291
5292 5292 if ui.configbool('commands', 'update.requiredest'):
5293 5293 if not node and not rev and not date:
5294 5294 raise error.Abort(_('you must specify a destination'),
5295 5295 hint=_('for example: hg update ".::"'))
5296 5296
5297 5297 if rev is None or rev == '':
5298 5298 rev = node
5299 5299
5300 5300 if date and rev is not None:
5301 5301 raise error.Abort(_("you can't specify a revision and a date"))
5302 5302
5303 5303 if len([x for x in (clean, check, merge) if x]) > 1:
5304 5304 raise error.Abort(_("can only specify one of -C/--clean, -c/--check, "
5305 5305 "or -m/merge"))
5306 5306
5307 5307 updatecheck = None
5308 5308 if check:
5309 5309 updatecheck = 'abort'
5310 5310 elif merge:
5311 5311 updatecheck = 'none'
5312 5312
5313 5313 with repo.wlock():
5314 5314 cmdutil.clearunfinished(repo)
5315 5315
5316 5316 if date:
5317 5317 rev = cmdutil.finddate(ui, repo, date)
5318 5318
5319 5319 # if we defined a bookmark, we have to remember the original name
5320 5320 brev = rev
5321 5321 rev = scmutil.revsingle(repo, rev, rev).rev()
5322 5322
5323 5323 repo.ui.setconfig('ui', 'forcemerge', tool, 'update')
5324 5324
5325 5325 return hg.updatetotally(ui, repo, rev, brev, clean=clean,
5326 5326 updatecheck=updatecheck)
5327 5327
5328 5328 @command('verify', [])
5329 5329 def verify(ui, repo):
5330 5330 """verify the integrity of the repository
5331 5331
5332 5332 Verify the integrity of the current repository.
5333 5333
5334 5334 This will perform an extensive check of the repository's
5335 5335 integrity, validating the hashes and checksums of each entry in
5336 5336 the changelog, manifest, and tracked files, as well as the
5337 5337 integrity of their crosslinks and indices.
5338 5338
5339 5339 Please see https://mercurial-scm.org/wiki/RepositoryCorruption
5340 5340 for more information about recovery from corruption of the
5341 5341 repository.
5342 5342
5343 5343 Returns 0 on success, 1 if errors are encountered.
5344 5344 """
5345 5345 return hg.verify(repo)
5346 5346
5347 5347 @command('version', [] + formatteropts, norepo=True)
5348 5348 def version_(ui, **opts):
5349 5349 """output version and copyright information"""
5350 5350 opts = pycompat.byteskwargs(opts)
5351 5351 if ui.verbose:
5352 5352 ui.pager('version')
5353 5353 fm = ui.formatter("version", opts)
5354 5354 fm.startitem()
5355 5355 fm.write("ver", _("Mercurial Distributed SCM (version %s)\n"),
5356 5356 util.version())
5357 5357 license = _(
5358 5358 "(see https://mercurial-scm.org for more information)\n"
5359 5359 "\nCopyright (C) 2005-2017 Matt Mackall and others\n"
5360 5360 "This is free software; see the source for copying conditions. "
5361 5361 "There is NO\nwarranty; "
5362 5362 "not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n"
5363 5363 )
5364 5364 if not ui.quiet:
5365 5365 fm.plain(license)
5366 5366
5367 5367 if ui.verbose:
5368 5368 fm.plain(_("\nEnabled extensions:\n\n"))
5369 5369 # format names and versions into columns
5370 5370 names = []
5371 5371 vers = []
5372 5372 isinternals = []
5373 5373 for name, module in extensions.extensions():
5374 5374 names.append(name)
5375 5375 vers.append(extensions.moduleversion(module) or None)
5376 5376 isinternals.append(extensions.ismoduleinternal(module))
5377 5377 fn = fm.nested("extensions")
5378 5378 if names:
5379 5379 namefmt = " %%-%ds " % max(len(n) for n in names)
5380 5380 places = [_("external"), _("internal")]
5381 5381 for n, v, p in zip(names, vers, isinternals):
5382 5382 fn.startitem()
5383 5383 fn.condwrite(ui.verbose, "name", namefmt, n)
5384 5384 if ui.verbose:
5385 5385 fn.plain("%s " % places[p])
5386 5386 fn.data(bundled=p)
5387 5387 fn.condwrite(ui.verbose and v, "ver", "%s", v)
5388 5388 if ui.verbose:
5389 5389 fn.plain("\n")
5390 5390 fn.end()
5391 5391 fm.end()
5392 5392
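The ``namefmt`` line above uses two-stage %-formatting: the first pass
bakes the widest extension name into the format string, the second pass
formats each name. A tiny sketch::

    names = ['rebase', 'mq']
    namefmt = " %%-%ds " % max(len(n) for n in names)  # -> " %-6s "
    print(namefmt % 'mq')  # 'mq' left-aligned in a 6-character field
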
5393 5393 def loadcmdtable(ui, name, cmdtable):
5394 5394 """Load command functions from specified cmdtable
5395 5395 """
5396 5396 overrides = [cmd for cmd in cmdtable if cmd in table]
5397 5397 if overrides:
5398 5398 ui.warn(_("extension '%s' overrides commands: %s\n")
5399 5399 % (name, " ".join(overrides)))
5400 5400 table.update(cmdtable)
@@ -1,2012 +1,2013 b''
1 1 # exchange.py - utility to exchange data between repos.
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import errno
11 11 import hashlib
12 12
13 13 from .i18n import _
14 14 from .node import (
15 15 hex,
16 16 nullid,
17 17 )
18 18 from . import (
19 19 bookmarks as bookmod,
20 20 bundle2,
21 21 changegroup,
22 22 discovery,
23 23 error,
24 24 lock as lockmod,
25 25 obsolete,
26 26 phases,
27 27 pushkey,
28 28 pycompat,
29 29 scmutil,
30 30 sslutil,
31 31 streamclone,
32 32 url as urlmod,
33 33 util,
34 34 )
35 35
36 36 urlerr = util.urlerr
37 37 urlreq = util.urlreq
38 38
39 39 # Maps bundle version human names to changegroup versions.
40 40 _bundlespeccgversions = {'v1': '01',
41 41 'v2': '02',
42 42 'packed1': 's1',
43 43 'bundle2': '02', #legacy
44 44 }
45 45
46 46 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
47 47 _bundlespecv1compengines = {'gzip', 'bzip2', 'none'}
48 48
49 49 def parsebundlespec(repo, spec, strict=True, externalnames=False):
50 50 """Parse a bundle string specification into parts.
51 51
52 52 Bundle specifications denote a well-defined bundle/exchange format.
53 53 The content of a given specification should not change over time in
54 54 order to ensure that bundles produced by a newer version of Mercurial are
55 55 readable from an older version.
56 56
57 57 The string currently has the form:
58 58
59 59 <compression>-<type>[;<parameter0>[;<parameter1>]]
60 60
61 61 Where <compression> is one of the supported compression formats
62 62 and <type> is (currently) a version string. A ";" can follow the type and
63 63 all text afterwards is interpreted as URI encoded, ";" delimited key=value
64 64 pairs.
65 65
66 66 If ``strict`` is True (the default) <compression> is required. Otherwise,
67 67 it is optional.
68 68
69 69 If ``externalnames`` is False (the default), the human-centric names will
70 70 be converted to their internal representation.
71 71
72 72 Returns a 3-tuple of (compression, version, parameters). Compression will
73 73 be ``None`` if not in strict mode and a compression isn't defined.
74 74
75 75 An ``InvalidBundleSpecification`` is raised when the specification is
76 76 not syntactically well formed.
77 77
78 78 An ``UnsupportedBundleSpecification`` is raised when the compression or
79 79 bundle type/version is not recognized.
80 80
81 81 Note: this function will likely eventually return a more complex data
82 82 structure, including bundle2 part information.
83 83 """
84 84 def parseparams(s):
85 85 if ';' not in s:
86 86 return s, {}
87 87
88 88 params = {}
89 89 version, paramstr = s.split(';', 1)
90 90
91 91 for p in paramstr.split(';'):
92 92 if '=' not in p:
93 93 raise error.InvalidBundleSpecification(
94 94 _('invalid bundle specification: '
95 95 'missing "=" in parameter: %s') % p)
96 96
97 97 key, value = p.split('=', 1)
98 98 key = urlreq.unquote(key)
99 99 value = urlreq.unquote(value)
100 100 params[key] = value
101 101
102 102 return version, params
103 103
104 104
105 105 if strict and '-' not in spec:
106 106 raise error.InvalidBundleSpecification(
107 107 _('invalid bundle specification; '
108 108 'must be prefixed with compression: %s') % spec)
109 109
110 110 if '-' in spec:
111 111 compression, version = spec.split('-', 1)
112 112
113 113 if compression not in util.compengines.supportedbundlenames:
114 114 raise error.UnsupportedBundleSpecification(
115 115 _('%s compression is not supported') % compression)
116 116
117 117 version, params = parseparams(version)
118 118
119 119 if version not in _bundlespeccgversions:
120 120 raise error.UnsupportedBundleSpecification(
121 121 _('%s is not a recognized bundle version') % version)
122 122 else:
123 123 # Value could be just the compression or just the version, in which
124 124 # case some defaults are assumed (but only when not in strict mode).
125 125 assert not strict
126 126
127 127 spec, params = parseparams(spec)
128 128
129 129 if spec in util.compengines.supportedbundlenames:
130 130 compression = spec
131 131 version = 'v1'
132 132 # Generaldelta repos require v2.
133 133 if 'generaldelta' in repo.requirements:
134 134 version = 'v2'
135 135 # Modern compression engines require v2.
136 136 if compression not in _bundlespecv1compengines:
137 137 version = 'v2'
138 138 elif spec in _bundlespeccgversions:
139 139 if spec == 'packed1':
140 140 compression = 'none'
141 141 else:
142 142 compression = 'bzip2'
143 143 version = spec
144 144 else:
145 145 raise error.UnsupportedBundleSpecification(
146 146 _('%s is not a recognized bundle specification') % spec)
147 147
148 148 # Bundle version 1 only supports a known set of compression engines.
149 149 if version == 'v1' and compression not in _bundlespecv1compengines:
150 150 raise error.UnsupportedBundleSpecification(
151 151 _('compression engine %s is not supported on v1 bundles') %
152 152 compression)
153 153
154 154 # The specification for packed1 can optionally declare the data formats
155 155 # required to apply it. If we see this metadata, compare against what the
156 156 # repo supports and error if the bundle isn't compatible.
157 157 if version == 'packed1' and 'requirements' in params:
158 158 requirements = set(params['requirements'].split(','))
159 159 missingreqs = requirements - repo.supportedformats
160 160 if missingreqs:
161 161 raise error.UnsupportedBundleSpecification(
162 162 _('missing support for repository features: %s') %
163 163 ', '.join(sorted(missingreqs)))
164 164
165 165 if not externalnames:
166 166 engine = util.compengines.forbundlename(compression)
167 167 compression = engine.bundletype()[1]
168 168 version = _bundlespeccgversions[version]
169 169 return compression, version, params
170 170
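Following the grammar in the docstring, two illustrative calls
(``repo`` stands for any local repository object; the internal names
come from the mapping tables above)::

    comp, version, params = parsebundlespec(repo, 'gzip-v2')
    # comp == 'GZ', version == '02', params == {}

    comp, version, params = parsebundlespec(
        repo, 'none-packed1;requirements=generaldelta', externalnames=True)
    # comp == 'none', version == 'packed1',
    # params == {'requirements': 'generaldelta'}
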
171 171 def readbundle(ui, fh, fname, vfs=None):
172 172 header = changegroup.readexactly(fh, 4)
173 173
174 174 alg = None
175 175 if not fname:
176 176 fname = "stream"
177 177 if not header.startswith('HG') and header.startswith('\0'):
178 178 fh = changegroup.headerlessfixup(fh, header)
179 179 header = "HG10"
180 180 alg = 'UN'
181 181 elif vfs:
182 182 fname = vfs.join(fname)
183 183
184 184 magic, version = header[0:2], header[2:4]
185 185
186 186 if magic != 'HG':
187 187 raise error.Abort(_('%s: not a Mercurial bundle') % fname)
188 188 if version == '10':
189 189 if alg is None:
190 190 alg = changegroup.readexactly(fh, 2)
191 191 return changegroup.cg1unpacker(fh, alg)
192 192 elif version.startswith('2'):
193 193 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
194 194 elif version == 'S1':
195 195 return streamclone.streamcloneapplier(fh)
196 196 else:
197 197 raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))
198 198
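The dispatch above keys entirely off the 4-byte magic header; restated
compactly (a summary of the code, not an exhaustive table)::

    # 'HG10' -> changegroup.cg1unpacker (compression in the next 2 bytes)
    # 'HG2x' -> bundle2.getunbundler    (any version starting with '2')
    # 'HGS1' -> streamclone.streamcloneapplier
    # '\0..' -> headerless stream, fixed up and treated as 'HG10' + 'UN'
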
199 199 def getbundlespec(ui, fh):
200 200 """Infer the bundlespec from a bundle file handle.
201 201
202 202 The input file handle is seeked and the original seek position is not
203 203 restored.
204 204 """
205 205 def speccompression(alg):
206 206 try:
207 207 return util.compengines.forbundletype(alg).bundletype()[0]
208 208 except KeyError:
209 209 return None
210 210
211 211 b = readbundle(ui, fh, None)
212 212 if isinstance(b, changegroup.cg1unpacker):
213 213 alg = b._type
214 214 if alg == '_truncatedBZ':
215 215 alg = 'BZ'
216 216 comp = speccompression(alg)
217 217 if not comp:
218 218 raise error.Abort(_('unknown compression algorithm: %s') % alg)
219 219 return '%s-v1' % comp
220 220 elif isinstance(b, bundle2.unbundle20):
221 221 if 'Compression' in b.params:
222 222 comp = speccompression(b.params['Compression'])
223 223 if not comp:
224 224 raise error.Abort(_('unknown compression algorithm: %s') % comp)
225 225 else:
226 226 comp = 'none'
227 227
228 228 version = None
229 229 for part in b.iterparts():
230 230 if part.type == 'changegroup':
231 231 version = part.params['version']
232 232 if version in ('01', '02'):
233 233 version = 'v2'
234 234 else:
235 235 raise error.Abort(_('changegroup version %s does not have '
236 236 'a known bundlespec') % version,
237 237 hint=_('try upgrading your Mercurial '
238 238 'client'))
239 239
240 240 if not version:
241 241 raise error.Abort(_('could not identify changegroup version in '
242 242 'bundle'))
243 243
244 244 return '%s-%s' % (comp, version)
245 245 elif isinstance(b, streamclone.streamcloneapplier):
246 246 requirements = streamclone.readbundle1header(fh)[2]
247 247 params = 'requirements=%s' % ','.join(sorted(requirements))
248 248 return 'none-packed1;%s' % urlreq.quote(params)
249 249 else:
250 250 raise error.Abort(_('unknown bundle type: %s') % b)
251 251
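The round trip with ``parsebundlespec`` is the point of this helper:
each branch above emits a spec string the parser accepts. Illustrative
outputs per unpacker type::

    # changegroup.cg1unpacker -> 'bzip2-v1' (for a 'BZ' bundle)
    # bundle2.unbundle20      -> 'gzip-v2'  (compression from b.params)
    # streamclone applier     -> 'none-packed1;requirements=generaldelta'
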
252 252 def _computeoutgoing(repo, heads, common):
253 253 """Computes which revs are outgoing given a set of common
254 254 and a set of heads.
255 255
256 256 This is a separate function so extensions can have access to
257 257 the logic.
258 258
259 259 Returns a discovery.outgoing object.
260 260 """
261 261 cl = repo.changelog
262 262 if common:
263 263 hasnode = cl.hasnode
264 264 common = [n for n in common if hasnode(n)]
265 265 else:
266 266 common = [nullid]
267 267 if not heads:
268 268 heads = cl.heads()
269 269 return discovery.outgoing(repo, common, heads)
270 270
271 271 def _forcebundle1(op):
272 272 """return true if a pull/push must use bundle1
273 273
274 274 This function is used to allow testing of the older bundle version"""
275 275 ui = op.repo.ui
276 276 forcebundle1 = False
277 277 # The goal of this config is to allow developers to choose the bundle
278 278 # version used during exchange. This is especially handy during tests.
279 279 # Value is a list of bundle version to be picked from, highest version
280 280 # should be used.
281 281 #
282 282 # developer config: devel.legacy.exchange
283 283 exchange = ui.configlist('devel', 'legacy.exchange')
284 284 forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
285 285 return forcebundle1 or not op.remote.capable('bundle2')
286 286
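Per the comment above, the knob is a plain configuration list; an
illustrative test setup that forces bundle1 regardless of what the
remote advertises::

    # in an hgrc used by the test:
    #   [devel]
    #   legacy.exchange = bundle1
    # 'bundle1' is in the list and 'bundle2' is not, so _forcebundle1()
    # returns True even for a bundle2-capable remote
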
287 287 class pushoperation(object):
288 288 """A object that represent a single push operation
289 289
290 290 Its purpose is to carry push related state and very common operations.
291 291
292 292 A new pushoperation should be created at the beginning of each push and
293 293 discarded afterward.
294 294 """
295 295
296 296 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
297 297 bookmarks=()):
298 298 # repo we push from
299 299 self.repo = repo
300 300 self.ui = repo.ui
301 301 # repo we push to
302 302 self.remote = remote
303 303 # force option provided
304 304 self.force = force
305 305 # revs to be pushed (None is "all")
306 306 self.revs = revs
307 307 # bookmark explicitly pushed
308 308 self.bookmarks = bookmarks
309 309 # allow push of new branch
310 310 self.newbranch = newbranch
311 311 # did a local lock get acquired?
312 312 self.locallocked = None
313 313 # step already performed
314 314 # (used to check what steps have been already performed through bundle2)
315 315 self.stepsdone = set()
316 316 # Integer version of the changegroup push result
317 317 # - None means nothing to push
318 318 # - 0 means HTTP error
319 319 # - 1 means we pushed and remote head count is unchanged *or*
320 320 # we have outgoing changesets but refused to push
321 321 # - other values as described by addchangegroup()
322 322 self.cgresult = None
323 323 # Boolean value for the bookmark push
324 324 self.bkresult = None
325 325 # discover.outgoing object (contains common and outgoing data)
326 326 self.outgoing = None
327 327 # all remote topological heads before the push
328 328 self.remoteheads = None
329 329 # Details of the remote branch pre and post push
330 330 #
331 331 # mapping: {'branch': ([remoteheads],
332 332 # [newheads],
333 333 # [unsyncedheads],
334 334 # [discardedheads])}
335 335 # - branch: the branch name
336 336 # - remoteheads: the list of remote heads known locally
337 337 # None if the branch is new
338 338 # - newheads: the new remote heads (known locally) with outgoing pushed
339 339 # - unsyncedheads: the list of remote heads unknown locally.
340 340 # - discardedheads: the list of remote heads made obsolete by the push
341 341 self.pushbranchmap = None
342 342 # testable as a boolean indicating if any nodes are missing locally.
343 343 self.incoming = None
344 344 # phases changes that must be pushed along side the changesets
345 345 self.outdatedphases = None
346 346 # phases changes that must be pushed if changeset push fails
347 347 self.fallbackoutdatedphases = None
348 348 # outgoing obsmarkers
349 349 self.outobsmarkers = set()
350 350 # outgoing bookmarks
351 351 self.outbookmarks = []
352 352 # transaction manager
353 353 self.trmanager = None
354 354 # map { pushkey partid -> callback handling failure}
355 355 # used to handle exception from mandatory pushkey part failure
356 356 self.pkfailcb = {}
357 357
358 358 @util.propertycache
359 359 def futureheads(self):
360 360 """future remote heads if the changeset push succeeds"""
361 361 return self.outgoing.missingheads
362 362
363 363 @util.propertycache
364 364 def fallbackheads(self):
365 365 """future remote heads if the changeset push fails"""
366 366 if self.revs is None:
367 367 # no target to push, all common heads are relevant
368 368 return self.outgoing.commonheads
369 369 unfi = self.repo.unfiltered()
370 370 # I want cheads = heads(::missingheads and ::commonheads)
371 371 # (missingheads is revs with secret changeset filtered out)
372 372 #
373 373 # This can be expressed as:
374 374 # cheads = ( (missingheads and ::commonheads)
375 375 # + (commonheads and ::missingheads)
376 376 # )
377 377 #
378 378 # while trying to push we already computed the following:
379 379 # common = (::commonheads)
380 380 # missing = ((commonheads::missingheads) - commonheads)
381 381 #
382 382 # We can pick:
383 383 # * missingheads part of common (::commonheads)
384 384 common = self.outgoing.common
385 385 nm = self.repo.changelog.nodemap
386 386 cheads = [node for node in self.revs if nm[node] in common]
387 387 # and
388 388 # * commonheads parents on missing
389 389 revset = unfi.set('%ln and parents(roots(%ln))',
390 390 self.outgoing.commonheads,
391 391 self.outgoing.missing)
392 392 cheads.extend(c.node() for c in revset)
393 393 return cheads
394 394
395 395 @property
396 396 def commonheads(self):
397 397 """set of all common heads after changeset bundle push"""
398 398 if self.cgresult:
399 399 return self.futureheads
400 400 else:
401 401 return self.fallbackheads
402 402
403 403 # mapping of message used when pushing bookmark
404 404 bookmsgmap = {'update': (_("updating bookmark %s\n"),
405 405 _('updating bookmark %s failed!\n')),
406 406 'export': (_("exporting bookmark %s\n"),
407 407 _('exporting bookmark %s failed!\n')),
408 408 'delete': (_("deleting remote bookmark %s\n"),
409 409 _('deleting remote bookmark %s failed!\n')),
410 410 }
411 411
412 412
413 413 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
414 414 opargs=None):
415 415 '''Push outgoing changesets (limited by revs) from a local
416 416 repository to remote. Return an integer:
417 417 - None means nothing to push
418 418 - 0 means HTTP error
419 419 - 1 means we pushed and remote head count is unchanged *or*
420 420 we have outgoing changesets but refused to push
421 421 - other values as described by addchangegroup()
422 422 '''
423 423 if opargs is None:
424 424 opargs = {}
425 425 pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
426 426 **opargs)
427 427 if pushop.remote.local():
428 428 missing = (set(pushop.repo.requirements)
429 429 - pushop.remote.local().supported)
430 430 if missing:
431 431 msg = _("required features are not"
432 432 " supported in the destination:"
433 433 " %s") % (', '.join(sorted(missing)))
434 434 raise error.Abort(msg)
435 435
436 436 # there are two ways to push to remote repo:
437 437 #
438 438 # addchangegroup assumes local user can lock remote
439 439 # repo (local filesystem, old ssh servers).
440 440 #
441 441 # unbundle assumes local user cannot lock remote repo (new ssh
442 442 # servers, http servers).
443 443
444 444 if not pushop.remote.canpush():
445 445 raise error.Abort(_("destination does not support push"))
446 446 # get local lock as we might write phase data
447 447 localwlock = locallock = None
448 448 try:
449 449 # bundle2 push may receive a reply bundle touching bookmarks or other
450 450 # things requiring the wlock. Take it now to ensure proper ordering.
451 451 maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
452 452 if (not _forcebundle1(pushop)) and maypushback:
453 453 localwlock = pushop.repo.wlock()
454 454 locallock = pushop.repo.lock()
455 455 pushop.locallocked = True
456 456 except IOError as err:
457 457 pushop.locallocked = False
458 458 if err.errno != errno.EACCES:
459 459 raise
460 460 # source repo cannot be locked.
461 461 # We do not abort the push, but just disable the local phase
462 462 # synchronisation.
463 463 msg = 'cannot lock source repository: %s\n' % err
464 464 pushop.ui.debug(msg)
465 465 try:
466 466 if pushop.locallocked:
467 467 pushop.trmanager = transactionmanager(pushop.repo,
468 468 'push-response',
469 469 pushop.remote.url())
470 470 pushop.repo.checkpush(pushop)
471 471 lock = None
472 472 unbundle = pushop.remote.capable('unbundle')
473 473 if not unbundle:
474 474 lock = pushop.remote.lock()
475 475 try:
476 476 _pushdiscovery(pushop)
477 477 if not _forcebundle1(pushop):
478 478 _pushbundle2(pushop)
479 479 _pushchangeset(pushop)
480 480 _pushsyncphase(pushop)
481 481 _pushobsolete(pushop)
482 482 _pushbookmark(pushop)
483 483 finally:
484 484 if lock is not None:
485 485 lock.release()
486 486 if pushop.trmanager:
487 487 pushop.trmanager.close()
488 488 finally:
489 489 if pushop.trmanager:
490 490 pushop.trmanager.release()
491 491 if locallock is not None:
492 492 locallock.release()
493 493 if localwlock is not None:
494 494 localwlock.release()
495 495
496 496 return pushop
497 497
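The integer convention in the docstring (mirrored on
``pushoperation.cgresult``) can be decoded mechanically; a small
hypothetical helper, for illustration only::

    def describecgresult(cgresult):
        # interpret the push result per the convention documented above
        if cgresult is None:
            return 'nothing to push'
        if cgresult == 0:
            return 'HTTP error'
        if cgresult == 1:
            return 'pushed; remote head count unchanged (or push refused)'
        return 'pushed; addchangegroup() result %d' % cgresult
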
498 498 # list of steps to perform discovery before push
499 499 pushdiscoveryorder = []
500 500
501 501 # Mapping between step name and function
502 502 #
503 503 # This exists to help extensions wrap steps if necessary
504 504 pushdiscoverymapping = {}
505 505
506 506 def pushdiscovery(stepname):
507 507 """decorator for function performing discovery before push
508 508
509 509 The function is added to the step -> function mapping and appended to the
510 510 list of steps. Beware that decorated functions will be added in order (this
511 511 may matter).
512 512
513 513 You can only use this decorator for a new step; if you want to wrap a step
514 514 from an extension, change the pushdiscovery dictionary directly."""
515 515 def dec(func):
516 516 assert stepname not in pushdiscoverymapping
517 517 pushdiscoverymapping[stepname] = func
518 518 pushdiscoveryorder.append(stepname)
519 519 return func
520 520 return dec
521 521
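As the docstring spells out, the decorator both registers the function
and appends the step name in definition order; a minimal sketch of an
extension adding a hypothetical step (names invented)::

    @pushdiscovery('mystep')
    def _pushdiscoverymystep(pushop):
        # runs after the built-in steps, in registration order
        pushop.ui.debug('extra discovery for my extension\n')
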
522 522 def _pushdiscovery(pushop):
523 523 """Run all discovery steps"""
524 524 for stepname in pushdiscoveryorder:
525 525 step = pushdiscoverymapping[stepname]
526 526 step(pushop)
527 527
528 528 @pushdiscovery('changeset')
529 529 def _pushdiscoverychangeset(pushop):
530 530 """discover the changeset that need to be pushed"""
531 531 fci = discovery.findcommonincoming
532 532 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
533 533 common, inc, remoteheads = commoninc
534 534 fco = discovery.findcommonoutgoing
535 535 outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
536 536 commoninc=commoninc, force=pushop.force)
537 537 pushop.outgoing = outgoing
538 538 pushop.remoteheads = remoteheads
539 539 pushop.incoming = inc
540 540
541 541 @pushdiscovery('phase')
542 542 def _pushdiscoveryphase(pushop):
543 543 """discover the phase that needs to be pushed
544 544
545 545 (computed for both success and failure case for changesets push)"""
546 546 outgoing = pushop.outgoing
547 547 unfi = pushop.repo.unfiltered()
548 548 remotephases = pushop.remote.listkeys('phases')
549 549 publishing = remotephases.get('publishing', False)
550 550 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
551 551 and remotephases # server supports phases
552 552 and not pushop.outgoing.missing # no changesets to be pushed
553 553 and publishing):
554 554 # When:
555 555 # - this is a subrepo push
556 556 # - and the remote supports phases
557 557 # - and no changesets are to be pushed
558 558 # - and the remote is publishing
559 559 # We may be in the issue 3871 case!
560 560 # We drop the phase synchronisation that would otherwise be
561 561 # done as a courtesy, to publish changesets possibly still
562 562 # draft on the remote.
563 563 remotephases = {'publishing': 'True'}
564 564 ana = phases.analyzeremotephases(pushop.repo,
565 565 pushop.fallbackheads,
566 566 remotephases)
567 567 pheads, droots = ana
568 568 extracond = ''
569 569 if not publishing:
570 570 extracond = ' and public()'
571 571 revset = 'heads((%%ln::%%ln) %s)' % extracond
572 572 # Get the list of all revs draft on remote but public here.
573 573 # XXX Beware that the revset breaks if droots are not strictly
574 574 # XXX roots; we may want to ensure they are, but that is costly.
575 575 fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
576 576 if not outgoing.missing:
577 577 future = fallback
578 578 else:
579 579 # adds changesets we are going to push as draft
580 580 #
581 581 # should not be necessary for a publishing server, but because of an
582 582 # issue fixed in xxxxx we have to do it anyway.
583 583 fdroots = list(unfi.set('roots(%ln + %ln::)',
584 584 outgoing.missing, droots))
585 585 fdroots = [f.node() for f in fdroots]
586 586 future = list(unfi.set(revset, fdroots, pushop.futureheads))
587 587 pushop.outdatedphases = future
588 588 pushop.fallbackoutdatedphases = fallback
589 589
590 590 @pushdiscovery('obsmarker')
591 591 def _pushdiscoveryobsmarkers(pushop):
592 592 if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
593 593 and pushop.repo.obsstore
594 594 and 'obsolete' in pushop.remote.listkeys('namespaces')):
595 595 repo = pushop.repo
596 596 # very naive computation that can be quite expensive on big repos.
597 597 # However, evolution is currently slow on them anyway.
598 598 nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
599 599 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
600 600
601 601 @pushdiscovery('bookmarks')
602 602 def _pushdiscoverybookmarks(pushop):
603 603 ui = pushop.ui
604 604 repo = pushop.repo.unfiltered()
605 605 remote = pushop.remote
606 606 ui.debug("checking for updated bookmarks\n")
607 607 ancestors = ()
608 608 if pushop.revs:
609 609 revnums = map(repo.changelog.rev, pushop.revs)
610 610 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
611 611 remotebookmark = remote.listkeys('bookmarks')
612 612
613 613 explicit = set([repo._bookmarks.expandname(bookmark)
614 614 for bookmark in pushop.bookmarks])
615 615
616 616 remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
617 617 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
618 618
619 619 def safehex(x):
620 620 if x is None:
621 621 return x
622 622 return hex(x)
623 623
624 624 def hexifycompbookmarks(bookmarks):
625 625 for b, scid, dcid in bookmarks:
626 626 yield b, safehex(scid), safehex(dcid)
627 627
628 628 comp = [hexifycompbookmarks(marks) for marks in comp]
629 629 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
630 630
631 631 for b, scid, dcid in advsrc:
632 632 if b in explicit:
633 633 explicit.remove(b)
634 634 if not ancestors or repo[scid].rev() in ancestors:
635 635 pushop.outbookmarks.append((b, dcid, scid))
636 636 # search for added bookmarks
637 637 for b, scid, dcid in addsrc:
638 638 if b in explicit:
639 639 explicit.remove(b)
640 640 pushop.outbookmarks.append((b, '', scid))
641 641 # search for overwritten bookmark
642 642 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
643 643 if b in explicit:
644 644 explicit.remove(b)
645 645 pushop.outbookmarks.append((b, dcid, scid))
646 646 # search for bookmark to delete
647 647 for b, scid, dcid in adddst:
648 648 if b in explicit:
649 649 explicit.remove(b)
650 650 # treat as "deleted locally"
651 651 pushop.outbookmarks.append((b, dcid, ''))
652 652 # identical bookmarks shouldn't get reported
653 653 for b, scid, dcid in same:
654 654 if b in explicit:
655 655 explicit.remove(b)
656 656
657 657 if explicit:
658 658 explicit = sorted(explicit)
659 659 # we should probably list all of them
660 660 ui.warn(_('bookmark %s does not exist on the local '
661 661 'or remote repository!\n') % explicit[0])
662 662 pushop.bkresult = 2
663 663
664 664 pushop.outbookmarks.sort()
665 665
666 666 def _pushcheckoutgoing(pushop):
667 667 outgoing = pushop.outgoing
668 668 unfi = pushop.repo.unfiltered()
669 669 if not outgoing.missing:
670 670 # nothing to push
671 671 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
672 672 return False
673 673 # something to push
674 674 if not pushop.force:
675 675 # if repo.obsstore == False --> no obsolete
676 676 # then, save the iteration
677 677 if unfi.obsstore:
678 678 # these messages are stored in variables for 80-char limit reasons
679 679 mso = _("push includes obsolete changeset: %s!")
680 680 mst = {"unstable": _("push includes unstable changeset: %s!"),
681 681 "bumped": _("push includes bumped changeset: %s!"),
682 682 "divergent": _("push includes divergent changeset: %s!")}
683 683 # If we are to push and there is at least one
684 684 # obsolete or unstable changeset in missing, at
685 685 # least one of the missingheads will be obsolete or
686 686 # unstable. So checking heads only is ok.
687 687 for node in outgoing.missingheads:
688 688 ctx = unfi[node]
689 689 if ctx.obsolete():
690 690 raise error.Abort(mso % ctx)
691 691 elif ctx.troubled():
692 692 raise error.Abort(mst[ctx.troubles()[0]] % ctx)
693 693
694 694 discovery.checkheads(pushop)
695 695 return True
696 696
697 697 # List of names of steps to perform for an outgoing bundle2, order matters.
698 698 b2partsgenorder = []
699 699
700 700 # Mapping between step name and function
701 701 #
702 702 # This exists to help extensions wrap steps if necessary
703 703 b2partsgenmapping = {}
704 704
705 705 def b2partsgenerator(stepname, idx=None):
706 706 """decorator for function generating bundle2 part
707 707
708 708 The function is added to the step -> function mapping and appended to the
709 709 list of steps. Beware that decorated functions will be added in order
710 710 (this may matter).
711 711
712 712 You can only use this decorator for new steps; if you want to wrap a step
713 713 from an extension, change the b2partsgenmapping dictionary directly."""
714 714 def dec(func):
715 715 assert stepname not in b2partsgenmapping
716 716 b2partsgenmapping[stepname] = func
717 717 if idx is None:
718 718 b2partsgenorder.append(stepname)
719 719 else:
720 720 b2partsgenorder.insert(idx, stepname)
721 721 return func
722 722 return dec
723 723
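# A hedged sketch of the generator protocol used below (the step name is
# hypothetical; 'output' is an advisory part type, and idx=0 would make
# this step run first):
#
#     @b2partsgenerator('example', idx=0)
#     def _pushb2example(pushop, bundler):
#         part = bundler.newpart('output', data='example', mandatory=False)
#         def handlereply(op):
#             # server replies are available via op.records.getreplies(part.id)
#             pass
#         return handlereply  # callables returned here become reply handlers
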
724 724 def _pushb2ctxcheckheads(pushop, bundler):
725 725 """Generate race condition checking parts
726 726
727 727 Exists as an independent function to aid extensions
728 728 """
729 729 # * 'force' does not check for push races,
730 730 # * if we don't push anything, there is nothing to check.
731 731 if not pushop.force and pushop.outgoing.missingheads:
732 732 allowunrelated = 'related' in bundler.capabilities.get('checkheads', ())
733 733 if not allowunrelated:
734 734 bundler.newpart('check:heads', data=iter(pushop.remoteheads))
735 735 else:
736 736 affected = set()
737 737 for branch, heads in pushop.pushbranchmap.iteritems():
738 738 remoteheads, newheads, unsyncedheads, discardedheads = heads
739 739 if remoteheads is not None:
740 740 remote = set(remoteheads)
741 741 affected |= set(discardedheads) & remote
742 742 affected |= remote - set(newheads)
743 743 if affected:
744 744 data = iter(sorted(affected))
745 745 bundler.newpart('check:updated-heads', data=data)
746 746
747 747 @b2partsgenerator('changeset')
748 748 def _pushb2ctx(pushop, bundler):
749 749 """handle changegroup push through bundle2
750 750
751 751 addchangegroup result is stored in the ``pushop.cgresult`` attribute.
752 752 """
753 753 if 'changesets' in pushop.stepsdone:
754 754 return
755 755 pushop.stepsdone.add('changesets')
756 756 # Send known heads to the server for race detection.
757 757 if not _pushcheckoutgoing(pushop):
758 758 return
759 759 pushop.repo.prepushoutgoinghooks(pushop)
760 760
761 761 _pushb2ctxcheckheads(pushop, bundler)
762 762
763 763 b2caps = bundle2.bundle2caps(pushop.remote)
764 764 version = '01'
765 765 cgversions = b2caps.get('changegroup')
766 766 if cgversions: # 3.1 and 3.2 ship with an empty value
767 767 cgversions = [v for v in cgversions
768 768 if v in changegroup.supportedoutgoingversions(
769 769 pushop.repo)]
770 770 if not cgversions:
771 771 raise ValueError(_('no common changegroup version'))
772 772 version = max(cgversions)
773 773 cg = changegroup.getlocalchangegroupraw(pushop.repo, 'push',
774 774 pushop.outgoing,
775 775 version=version)
776 776 cgpart = bundler.newpart('changegroup', data=cg)
777 777 if cgversions:
778 778 cgpart.addparam('version', version)
779 779 if 'treemanifest' in pushop.repo.requirements:
780 780 cgpart.addparam('treemanifest', '1')
781 781 def handlereply(op):
782 782 """extract addchangegroup returns from server reply"""
783 783 cgreplies = op.records.getreplies(cgpart.id)
784 784 assert len(cgreplies['changegroup']) == 1
785 785 pushop.cgresult = cgreplies['changegroup'][0]['return']
786 786 return handlereply
787 787
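# The version negotiation above, in miniature: if the server advertises
# changegroup versions ('01', '02') and we support ('01', '02', '03'),
# the filtered list is ['01', '02'] and max() picks '02'. A sketch:
#
#     cgversions = [v for v in ('01', '02')
#                   if v in ('01', '02', '03')]    # ['01', '02']
#     version = max(cgversions)                    # '02'
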
788 788 @b2partsgenerator('phase')
789 789 def _pushb2phases(pushop, bundler):
790 790 """handle phase push through bundle2"""
791 791 if 'phases' in pushop.stepsdone:
792 792 return
793 793 b2caps = bundle2.bundle2caps(pushop.remote)
794 794 if not 'pushkey' in b2caps:
795 795 return
796 796 pushop.stepsdone.add('phases')
797 797 part2node = []
798 798
799 799 def handlefailure(pushop, exc):
800 800 targetid = int(exc.partid)
801 801 for partid, node in part2node:
802 802 if partid == targetid:
803 803 raise error.Abort(_('updating %s to public failed') % node)
804 804
805 805 enc = pushkey.encode
806 806 for newremotehead in pushop.outdatedphases:
807 807 part = bundler.newpart('pushkey')
808 808 part.addparam('namespace', enc('phases'))
809 809 part.addparam('key', enc(newremotehead.hex()))
810 810 part.addparam('old', enc(str(phases.draft)))
811 811 part.addparam('new', enc(str(phases.public)))
812 812 part2node.append((part.id, newremotehead))
813 813 pushop.pkfailcb[part.id] = handlefailure
814 814
815 815 def handlereply(op):
816 816 for partid, node in part2node:
817 817 partrep = op.records.getreplies(partid)
818 818 results = partrep['pushkey']
819 819 assert len(results) <= 1
820 820 msg = None
821 821 if not results:
822 822 msg = _('server ignored update of %s to public!\n') % node
823 823 elif not int(results[0]['return']):
824 824 msg = _('updating %s to public failed!\n') % node
825 825 if msg is not None:
826 826 pushop.ui.warn(msg)
827 827 return handlereply
828 828
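# For reference, each pushkey part generated above effectively carries the
# following parameters (the node value is illustrative; '1' is phases.draft
# and '0' is phases.public):
#
#     part.addparam('namespace', 'phases')
#     part.addparam('key', '<40-hex changeset node>')
#     part.addparam('old', '1')
#     part.addparam('new', '0')
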
829 829 @b2partsgenerator('obsmarkers')
830 830 def _pushb2obsmarkers(pushop, bundler):
831 831 if 'obsmarkers' in pushop.stepsdone:
832 832 return
833 833 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
834 834 if obsolete.commonversion(remoteversions) is None:
835 835 return
836 836 pushop.stepsdone.add('obsmarkers')
837 837 if pushop.outobsmarkers:
838 838 markers = sorted(pushop.outobsmarkers)
839 839 bundle2.buildobsmarkerspart(bundler, markers)
840 840
841 841 @b2partsgenerator('bookmarks')
842 842 def _pushb2bookmarks(pushop, bundler):
843 843 """handle bookmark push through bundle2"""
844 844 if 'bookmarks' in pushop.stepsdone:
845 845 return
846 846 b2caps = bundle2.bundle2caps(pushop.remote)
847 847 if 'pushkey' not in b2caps:
848 848 return
849 849 pushop.stepsdone.add('bookmarks')
850 850 part2book = []
851 851 enc = pushkey.encode
852 852
853 853 def handlefailure(pushop, exc):
854 854 targetid = int(exc.partid)
855 855 for partid, book, action in part2book:
856 856 if partid == targetid:
857 857 raise error.Abort(bookmsgmap[action][1].rstrip() % book)
858 858 # we should not be called for parts we did not generate
859 859 assert False
860 860
861 861 for book, old, new in pushop.outbookmarks:
862 862 part = bundler.newpart('pushkey')
863 863 part.addparam('namespace', enc('bookmarks'))
864 864 part.addparam('key', enc(book))
865 865 part.addparam('old', enc(old))
866 866 part.addparam('new', enc(new))
867 867 action = 'update'
868 868 if not old:
869 869 action = 'export'
870 870 elif not new:
871 871 action = 'delete'
872 872 part2book.append((part.id, book, action))
873 873 pushop.pkfailcb[part.id] = handlefailure
874 874
875 875 def handlereply(op):
876 876 ui = pushop.ui
877 877 for partid, book, action in part2book:
878 878 partrep = op.records.getreplies(partid)
879 879 results = partrep['pushkey']
880 880 assert len(results) <= 1
881 881 if not results:
882 882 pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
883 883 else:
884 884 ret = int(results[0]['return'])
885 885 if ret:
886 886 ui.status(bookmsgmap[action][0] % book)
887 887 else:
888 888 ui.warn(bookmsgmap[action][1] % book)
889 889 if pushop.bkresult is not None:
890 890 pushop.bkresult = 1
891 891 return handlereply
892 892
893 893
894 894 def _pushbundle2(pushop):
895 895 """push data to the remote using bundle2
896 896
897 897 The only currently supported type of data is changegroup, but this will
898 898 evolve in the future."""
899 899 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
900 900 pushback = (pushop.trmanager
901 901 and pushop.ui.configbool('experimental', 'bundle2.pushback'))
902 902
903 903 # create reply capability
904 904 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
905 905 allowpushback=pushback))
906 906 bundler.newpart('replycaps', data=capsblob)
907 907 replyhandlers = []
908 908 for partgenname in b2partsgenorder:
909 909 partgen = b2partsgenmapping[partgenname]
910 910 ret = partgen(pushop, bundler)
911 911 if callable(ret):
912 912 replyhandlers.append(ret)
913 913 # do not push if nothing to push
914 914 if bundler.nbparts <= 1:
915 915 return
916 916 stream = util.chunkbuffer(bundler.getchunks())
917 917 try:
918 918 try:
919 919 reply = pushop.remote.unbundle(
920 920 stream, ['force'], pushop.remote.url())
921 921 except error.BundleValueError as exc:
922 922 raise error.Abort(_('missing support for %s') % exc)
923 923 try:
924 924 trgetter = None
925 925 if pushback:
926 926 trgetter = pushop.trmanager.transaction
927 927 op = bundle2.processbundle(pushop.repo, reply, trgetter)
928 928 except error.BundleValueError as exc:
929 929 raise error.Abort(_('missing support for %s') % exc)
930 930 except bundle2.AbortFromPart as exc:
931 931 pushop.ui.status(_('remote: %s\n') % exc)
932 932 if exc.hint is not None:
933 933 pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
934 934 raise error.Abort(_('push failed on remote'))
935 935 except error.PushkeyFailed as exc:
936 936 partid = int(exc.partid)
937 937 if partid not in pushop.pkfailcb:
938 938 raise
939 939 pushop.pkfailcb[partid](pushop, exc)
940 940 for rephand in replyhandlers:
941 941 rephand(op)
942 942
943 943 def _pushchangeset(pushop):
944 944 """Make the actual push of changeset bundle to remote repo"""
945 945 if 'changesets' in pushop.stepsdone:
946 946 return
947 947 pushop.stepsdone.add('changesets')
948 948 if not _pushcheckoutgoing(pushop):
949 949 return
950 950 pushop.repo.prepushoutgoinghooks(pushop)
951 951 outgoing = pushop.outgoing
952 952 unbundle = pushop.remote.capable('unbundle')
953 953 # TODO: get bundlecaps from remote
954 954 bundlecaps = None
955 955 # create a changegroup from local
956 956 if pushop.revs is None and not (outgoing.excluded
957 957 or pushop.repo.changelog.filteredrevs):
958 958 # push everything,
959 959 # use the fast path, no race possible on push
960 960 bundler = changegroup.cg1packer(pushop.repo, bundlecaps)
961 961 cg = changegroup.getsubset(pushop.repo,
962 962 outgoing,
963 963 bundler,
964 964 'push',
965 965 fastpath=True)
966 966 else:
967 967 cg = changegroup.getchangegroup(pushop.repo, 'push', outgoing,
968 968 bundlecaps=bundlecaps)
969 969
970 970 # apply changegroup to remote
971 971 if unbundle:
972 972 # the local repo finds heads on the server and finds out what
973 973 # revs it must push. Once revs are transferred, if the server
974 974 # finds it has different heads (someone else won the
975 975 # commit/push race), the server aborts.
976 976 if pushop.force:
977 977 remoteheads = ['force']
978 978 else:
979 979 remoteheads = pushop.remoteheads
980 980 # ssh: return remote's addchangegroup()
981 981 # http: return remote's addchangegroup() or 0 for error
982 982 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
983 983 pushop.repo.url())
984 984 else:
985 985 # we return an integer indicating the remote head
986 986 # count change
987 987 pushop.cgresult = pushop.remote.addchangegroup(cg, 'push',
988 988 pushop.repo.url())
989 989
990 990 def _pushsyncphase(pushop):
991 991 """synchronise phase information locally and remotely"""
992 992 cheads = pushop.commonheads
993 993 # even when we don't push, exchanging phase data is useful
994 994 remotephases = pushop.remote.listkeys('phases')
995 995 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
996 996 and remotephases # server supports phases
997 997 and pushop.cgresult is None # nothing was pushed
998 998 and remotephases.get('publishing', False)):
999 999 # When:
1000 1000 # - this is a subrepo push
1001 1001 # - and the remote supports phases
1002 1002 # - and no changeset was pushed
1003 1003 # - and the remote is publishing
1004 1004 # We may be in the issue 3871 case!
1005 1005 # We drop the phase synchronisation that would otherwise be
1006 1006 # done as a courtesy, to publish changesets possibly still
1007 1007 # draft on the remote.
1008 1008 remotephases = {'publishing': 'True'}
1009 1009 if not remotephases: # old server, or public-only reply from non-publishing
1010 1010 _localphasemove(pushop, cheads)
1011 1011 # don't push any phase data as there is nothing to push
1012 1012 else:
1013 1013 ana = phases.analyzeremotephases(pushop.repo, cheads,
1014 1014 remotephases)
1015 1015 pheads, droots = ana
1016 1016 ### Apply remote phase on local
1017 1017 if remotephases.get('publishing', False):
1018 1018 _localphasemove(pushop, cheads)
1019 1019 else: # publish = False
1020 1020 _localphasemove(pushop, pheads)
1021 1021 _localphasemove(pushop, cheads, phases.draft)
1022 1022 ### Apply local phase on remote
1023 1023
1024 1024 if pushop.cgresult:
1025 1025 if 'phases' in pushop.stepsdone:
1026 1026 # phases already pushed through bundle2
1027 1027 return
1028 1028 outdated = pushop.outdatedphases
1029 1029 else:
1030 1030 outdated = pushop.fallbackoutdatedphases
1031 1031
1032 1032 pushop.stepsdone.add('phases')
1033 1033
1034 1034 # filter heads already turned public by the push
1035 1035 outdated = [c for c in outdated if c.node() not in pheads]
1036 1036 # fallback to independent pushkey command
1037 1037 for newremotehead in outdated:
1038 1038 r = pushop.remote.pushkey('phases',
1039 1039 newremotehead.hex(),
1040 1040 str(phases.draft),
1041 1041 str(phases.public))
1042 1042 if not r:
1043 1043 pushop.ui.warn(_('updating %s to public failed!\n')
1044 1044 % newremotehead)
1045 1045
1046 1046 def _localphasemove(pushop, nodes, phase=phases.public):
1047 1047 """move <nodes> to <phase> in the local source repo"""
1048 1048 if pushop.trmanager:
1049 1049 phases.advanceboundary(pushop.repo,
1050 1050 pushop.trmanager.transaction(),
1051 1051 phase,
1052 1052 nodes)
1053 1053 else:
1054 1054 # repo is not locked, do not change any phases!
1055 1055 # Informs the user that phases should have been moved when
1056 1056 # applicable.
1057 1057 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
1058 1058 phasestr = phases.phasenames[phase]
1059 1059 if actualmoves:
1060 1060 pushop.ui.status(_('cannot lock source repo, skipping '
1061 1061 'local %s phase update\n') % phasestr)
1062 1062
1063 1063 def _pushobsolete(pushop):
1064 1064 """utility function to push obsolete markers to a remote"""
1065 1065 if 'obsmarkers' in pushop.stepsdone:
1066 1066 return
1067 1067 repo = pushop.repo
1068 1068 remote = pushop.remote
1069 1069 pushop.stepsdone.add('obsmarkers')
1070 1070 if pushop.outobsmarkers:
1071 1071 pushop.ui.debug('try to push obsolete markers to remote\n')
1072 1072 rslts = []
1073 1073 remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
1074 1074 for key in sorted(remotedata, reverse=True):
1075 1075 # reverse sort to ensure we end with dump0
1076 1076 data = remotedata[key]
1077 1077 rslts.append(remote.pushkey('obsolete', key, '', data))
1078 1078 if [r for r in rslts if not r]:
1079 1079 msg = _('failed to push some obsolete markers!\n')
1080 1080 repo.ui.warn(msg)
1081 1081
1082 1082 def _pushbookmark(pushop):
1083 1083 """Update bookmark position on remote"""
1084 1084 if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
1085 1085 return
1086 1086 pushop.stepsdone.add('bookmarks')
1087 1087 ui = pushop.ui
1088 1088 remote = pushop.remote
1089 1089
1090 1090 for b, old, new in pushop.outbookmarks:
1091 1091 action = 'update'
1092 1092 if not old:
1093 1093 action = 'export'
1094 1094 elif not new:
1095 1095 action = 'delete'
1096 1096 if remote.pushkey('bookmarks', b, old, new):
1097 1097 ui.status(bookmsgmap[action][0] % b)
1098 1098 else:
1099 1099 ui.warn(bookmsgmap[action][1] % b)
1100 1100 # discovery can have set the value from an invalid entry
1101 1101 if pushop.bkresult is not None:
1102 1102 pushop.bkresult = 1
1103 1103
1104 1104 class pulloperation(object):
1105 1105 """A object that represent a single pull operation
1106 1106
1107 1107 It purpose is to carry pull related state and very common operation.
1108 1108
1109 1109 A new should be created at the beginning of each pull and discarded
1110 1110 afterward.
1111 1111 """
1112 1112
1113 1113 def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
1114 1114 remotebookmarks=None, streamclonerequested=None):
1115 1115 # repo we pull into
1116 1116 self.repo = repo
1117 1117 # repo we pull from
1118 1118 self.remote = remote
1119 1119 # revisions we try to pull (None is "all")
1120 1120 self.heads = heads
1121 1121 # bookmarks pulled explicitly
1122 1122 self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
1123 1123 for bookmark in bookmarks]
1124 1124 # do we force pull?
1125 1125 self.force = force
1126 1126 # whether a streaming clone was requested
1127 1127 self.streamclonerequested = streamclonerequested
1128 1128 # transaction manager
1129 1129 self.trmanager = None
1130 1130 # set of common changesets between local and remote before pull
1131 1131 self.common = None
1132 1132 # set of pulled heads
1133 1133 self.rheads = None
1134 1134 # list of missing changesets to fetch remotely
1135 1135 self.fetch = None
1136 1136 # remote bookmarks data
1137 1137 self.remotebookmarks = remotebookmarks
1138 1138 # result of changegroup pulling (used as return code by pull)
1139 1139 self.cgresult = None
1140 1140 # list of steps already done
1141 1141 self.stepsdone = set()
1142 1142 # Whether we attempted a clone from pre-generated bundles.
1143 1143 self.clonebundleattempted = False
1144 1144
1145 1145 @util.propertycache
1146 1146 def pulledsubset(self):
1147 1147 """heads of the set of changeset target by the pull"""
1148 1148 # compute target subset
1149 1149 if self.heads is None:
1150 1150 # We pulled everything possible
1151 1151 # sync on everything common
1152 1152 c = set(self.common)
1153 1153 ret = list(self.common)
1154 1154 for n in self.rheads:
1155 1155 if n not in c:
1156 1156 ret.append(n)
1157 1157 return ret
1158 1158 else:
1159 1159 # We pulled a specific subset
1160 1160 # sync on this subset
1161 1161 return self.heads
1162 1162
1163 1163 @util.propertycache
1164 1164 def canusebundle2(self):
1165 1165 return not _forcebundle1(self)
1166 1166
1167 1167 @util.propertycache
1168 1168 def remotebundle2caps(self):
1169 1169 return bundle2.bundle2caps(self.remote)
1170 1170
1171 1171 def gettransaction(self):
1172 1172 # deprecated; talk to trmanager directly
1173 1173 return self.trmanager.transaction()
1174 1174
1175 1175 class transactionmanager(object):
1176 1176 """An object to manage the life cycle of a transaction
1177 1177
1178 1178 It creates the transaction on demand and calls the appropriate hooks when
1179 1179 closing the transaction."""
1180 1180 def __init__(self, repo, source, url):
1181 1181 self.repo = repo
1182 1182 self.source = source
1183 1183 self.url = url
1184 1184 self._tr = None
1185 1185
1186 1186 def transaction(self):
1187 1187 """Return an open transaction object, constructing if necessary"""
1188 1188 if not self._tr:
1189 1189 trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
1190 1190 self._tr = self.repo.transaction(trname)
1191 1191 self._tr.hookargs['source'] = self.source
1192 1192 self._tr.hookargs['url'] = self.url
1193 1193 return self._tr
1194 1194
1195 1195 def close(self):
1196 1196 """close transaction if created"""
1197 1197 if self._tr is not None:
1198 1198 self._tr.close()
1199 1199
1200 1200 def release(self):
1201 1201 """release transaction if created"""
1202 1202 if self._tr is not None:
1203 1203 self._tr.release()
1204 1204
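# Illustrative use of transactionmanager (a sketch; the repo and remote
# objects are assumed to exist), mirroring the pull() function below:
#
#     trmanager = transactionmanager(repo, 'pull', remote.url())
#     try:
#         tr = trmanager.transaction()  # created lazily, then reused
#         # ... apply incoming data within tr ...
#         trmanager.close()             # commit, if one was created
#     finally:
#         trmanager.release()           # roll back anything left open
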
1205 1205 def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
1206 1206 streamclonerequested=None):
1207 1207 """Fetch repository data from a remote.
1208 1208
1209 1209 This is the main function used to retrieve data from a remote repository.
1210 1210
1211 1211 ``repo`` is the local repository to clone into.
1212 1212 ``remote`` is a peer instance.
1213 1213 ``heads`` is an iterable of revisions we want to pull. ``None`` (the
1214 1214 default) means to pull everything from the remote.
1215 1215 ``bookmarks`` is an iterable of bookmarks requesting to be pulled. By
1216 1216 default, all remote bookmarks are pulled.
1217 1217 ``opargs`` are additional keyword arguments to pass to ``pulloperation``
1218 1218 initialization.
1219 1219 ``streamclonerequested`` is a boolean indicating whether a "streaming
1220 1220 clone" is requested. A "streaming clone" is essentially a raw file copy
1221 1221 of revlogs from the server. This only works when the local repository is
1222 1222 empty. The default value of ``None`` means to respect the server
1223 1223 configuration for preferring stream clones.
1224 1224
1225 1225 Returns the ``pulloperation`` created for this pull.
1226 1226 """
1227 1227 if opargs is None:
1228 1228 opargs = {}
1229 1229 pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
1230 1230 streamclonerequested=streamclonerequested, **opargs)
1231 1231 if pullop.remote.local():
1232 1232 missing = set(pullop.remote.requirements) - pullop.repo.supported
1233 1233 if missing:
1234 1234 msg = _("required features are not"
1235 1235 " supported in the destination:"
1236 1236 " %s") % (', '.join(sorted(missing)))
1237 1237 raise error.Abort(msg)
1238 1238
1239 1239 wlock = lock = None
1240 1240 try:
1241 1241 wlock = pullop.repo.wlock()
1242 1242 lock = pullop.repo.lock()
1243 1243 pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
1244 1244 streamclone.maybeperformlegacystreamclone(pullop)
1245 1245 # This should ideally be in _pullbundle2(). However, it needs to run
1246 1246 # before discovery to avoid extra work.
1247 1247 _maybeapplyclonebundle(pullop)
1248 1248 _pulldiscovery(pullop)
1249 1249 if pullop.canusebundle2:
1250 1250 _pullbundle2(pullop)
1251 1251 _pullchangeset(pullop)
1252 1252 _pullphase(pullop)
1253 1253 _pullbookmarks(pullop)
1254 1254 _pullobsolete(pullop)
1255 1255 pullop.trmanager.close()
1256 1256 finally:
1257 1257 lockmod.release(pullop.trmanager, lock, wlock)
1258 1258
1259 1259 return pullop
1260 1260
1261 1261 # list of steps to perform discovery before pull
1262 1262 pulldiscoveryorder = []
1263 1263
1264 1264 # Mapping between step name and function
1265 1265 #
1266 1266 # This exists to help extensions wrap steps if necessary
1267 1267 pulldiscoverymapping = {}
1268 1268
1269 1269 def pulldiscovery(stepname):
1270 1270 """decorator for function performing discovery before pull
1271 1271
1272 1272 The function is added to the step -> function mapping and appended to the
1273 1273 list of steps. Beware that decorated functions will be added in order (this
1274 1274 may matter).
1275 1275
1276 1276 You can only use this decorator for a new step; if you want to wrap a step
1277 1277 from an extension, change the pulldiscoverymapping dictionary directly."""
1278 1278 def dec(func):
1279 1279 assert stepname not in pulldiscoverymapping
1280 1280 pulldiscoverymapping[stepname] = func
1281 1281 pulldiscoveryorder.append(stepname)
1282 1282 return func
1283 1283 return dec
1284 1284
1285 1285 def _pulldiscovery(pullop):
1286 1286 """Run all discovery steps"""
1287 1287 for stepname in pulldiscoveryorder:
1288 1288 step = pulldiscoverymapping[stepname]
1289 1289 step(pullop)
1290 1290
1291 1291 @pulldiscovery('b1:bookmarks')
1292 1292 def _pullbookmarkbundle1(pullop):
1293 1293 """fetch bookmark data in bundle1 case
1294 1294
1295 1295 If not using bundle2, we have to fetch bookmarks before changeset
1296 1296 discovery to reduce the chance and impact of race conditions."""
1297 1297 if pullop.remotebookmarks is not None:
1298 1298 return
1299 1299 if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
1300 1300 # all known bundle2 servers now support listkeys, but let's be nice with
1301 1301 # new implementations.
1302 1302 return
1303 1303 pullop.remotebookmarks = pullop.remote.listkeys('bookmarks')
1304 1304
1305 1305
1306 1306 @pulldiscovery('changegroup')
1307 1307 def _pulldiscoverychangegroup(pullop):
1308 1308 """discovery phase for the pull
1309 1309
1310 1310 Currently handles changeset discovery only; it will handle all discovery
1311 1311 at some point."""
1312 1312 tmp = discovery.findcommonincoming(pullop.repo,
1313 1313 pullop.remote,
1314 1314 heads=pullop.heads,
1315 1315 force=pullop.force)
1316 1316 common, fetch, rheads = tmp
1317 1317 nm = pullop.repo.unfiltered().changelog.nodemap
1318 1318 if fetch and rheads:
1319 1319 # If a remote head is filtered locally, let's drop it from the unknown
1320 1320 # remote heads and put it back in common.
1321 1321 #
1322 1322 # This is a hackish solution to catch most "common but locally
1323 1323 # hidden" situations. We do not perform discovery on the unfiltered
1324 1324 # repository because it ends up doing a pathological number of round
1325 1325 # trips for a huge number of changesets we do not care about.
1326 1326 #
1327 1327 # If a set of such "common but filtered" changesets exists on the server
1328 1328 # but does not include a remote head, we'll not be able to detect it.
1329 1329 scommon = set(common)
1330 1330 filteredrheads = []
1331 1331 for n in rheads:
1332 1332 if n in nm:
1333 1333 if n not in scommon:
1334 1334 common.append(n)
1335 1335 else:
1336 1336 filteredrheads.append(n)
1337 1337 if not filteredrheads:
1338 1338 fetch = []
1339 1339 rheads = filteredrheads
1340 1340 pullop.common = common
1341 1341 pullop.fetch = fetch
1342 1342 pullop.rheads = rheads
1343 1343
1344 1344 def _pullbundle2(pullop):
1345 1345 """pull data using bundle2
1346 1346
1347 1347 For now, the only supported data are changegroup."""
1348 1348 kwargs = {'bundlecaps': caps20to10(pullop.repo)}
1349 1349
1350 1350 # At the moment we don't do stream clones over bundle2. If that is
1351 1351 # implemented then here's where the check for that will go.
1352 1352 streaming = False
1353 1353
1354 1354 # pulling changegroup
1355 1355 pullop.stepsdone.add('changegroup')
1356 1356
1357 1357 kwargs['common'] = pullop.common
1358 1358 kwargs['heads'] = pullop.heads or pullop.rheads
1359 1359 kwargs['cg'] = pullop.fetch
1360 1360 if 'listkeys' in pullop.remotebundle2caps:
1361 1361 kwargs['listkeys'] = ['phases']
1362 1362 if pullop.remotebookmarks is None:
1363 1363 # make sure to always include bookmark data when migrating
1364 1364 # `hg incoming --bundle` to using this function.
1365 1365 kwargs['listkeys'].append('bookmarks')
1366 1366
1367 1367 # If this is a full pull / clone and the server supports the clone bundles
1368 1368 # feature, tell the server whether we attempted a clone bundle. The
1369 1369 # presence of this flag indicates the client supports clone bundles. This
1370 1370 # will enable the server to treat clients that support clone bundles
1371 1371 # differently from those that don't.
1372 1372 if (pullop.remote.capable('clonebundles')
1373 1373 and pullop.heads is None and list(pullop.common) == [nullid]):
1374 1374 kwargs['cbattempted'] = pullop.clonebundleattempted
1375 1375
1376 1376 if streaming:
1377 1377 pullop.repo.ui.status(_('streaming all changes\n'))
1378 1378 elif not pullop.fetch:
1379 1379 pullop.repo.ui.status(_("no changes found\n"))
1380 1380 pullop.cgresult = 0
1381 1381 else:
1382 1382 if pullop.heads is None and list(pullop.common) == [nullid]:
1383 1383 pullop.repo.ui.status(_("requesting all changes\n"))
1384 1384 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1385 1385 remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
1386 1386 if obsolete.commonversion(remoteversions) is not None:
1387 1387 kwargs['obsmarkers'] = True
1388 1388 pullop.stepsdone.add('obsmarkers')
1389 1389 _pullbundle2extraprepare(pullop, kwargs)
1390 1390 bundle = pullop.remote.getbundle('pull', **pycompat.strkwargs(kwargs))
1391 1391 try:
1392 1392 op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
1393 1393 except bundle2.AbortFromPart as exc:
1394 1394 pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
1395 1395 raise error.Abort(_('pull failed on remote'), hint=exc.hint)
1396 1396 except error.BundleValueError as exc:
1397 1397 raise error.Abort(_('missing support for %s') % exc)
1398 1398
1399 1399 if pullop.fetch:
1400 1400 results = [cg['return'] for cg in op.records['changegroup']]
1401 1401 pullop.cgresult = changegroup.combineresults(results)
1402 1402
1403 1403 # processing phases change
1404 1404 for namespace, value in op.records['listkeys']:
1405 1405 if namespace == 'phases':
1406 1406 _pullapplyphases(pullop, value)
1407 1407
1408 1408 # processing bookmark update
1409 1409 for namespace, value in op.records['listkeys']:
1410 1410 if namespace == 'bookmarks':
1411 1411 pullop.remotebookmarks = value
1412 1412
1413 1413 # bookmark data were either already there or pulled in the bundle
1414 1414 if pullop.remotebookmarks is not None:
1415 1415 _pullbookmarks(pullop)
1416 1416
1417 1417 def _pullbundle2extraprepare(pullop, kwargs):
1418 1418 """hook function so that extensions can extend the getbundle call"""
1419 1419 pass
1420 1420
1421 1421 def _pullchangeset(pullop):
1422 1422 """pull changeset from unbundle into the local repo"""
1423 1423 # We delay opening the transaction as late as possible so we
1424 1424 # don't open a transaction for nothing, or break a future useful
1425 1425 # rollback call
1426 1426 if 'changegroup' in pullop.stepsdone:
1427 1427 return
1428 1428 pullop.stepsdone.add('changegroup')
1429 1429 if not pullop.fetch:
1430 1430 pullop.repo.ui.status(_("no changes found\n"))
1431 1431 pullop.cgresult = 0
1432 1432 return
1433 1433 tr = pullop.gettransaction()
1434 1434 if pullop.heads is None and list(pullop.common) == [nullid]:
1435 1435 pullop.repo.ui.status(_("requesting all changes\n"))
1436 1436 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
1437 1437 # issue1320, avoid a race if remote changed after discovery
1438 1438 pullop.heads = pullop.rheads
1439 1439
1440 1440 if pullop.remote.capable('getbundle'):
1441 1441 # TODO: get bundlecaps from remote
1442 1442 cg = pullop.remote.getbundle('pull', common=pullop.common,
1443 1443 heads=pullop.heads or pullop.rheads)
1444 1444 elif pullop.heads is None:
1445 1445 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
1446 1446 elif not pullop.remote.capable('changegroupsubset'):
1447 1447 raise error.Abort(_("partial pull cannot be done because "
1448 1448 "other repository doesn't support "
1449 1449 "changegroupsubset."))
1450 1450 else:
1451 1451 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
1452 pullop.cgresult = cg.apply(pullop.repo, tr, 'pull', pullop.remote.url())
1452 pullop.cgresult, addednodes = cg.apply(pullop.repo, tr, 'pull',
1453 pullop.remote.url())
1453 1454
1454 1455 def _pullphase(pullop):
1455 1456 # Get remote phases data from remote
1456 1457 if 'phases' in pullop.stepsdone:
1457 1458 return
1458 1459 remotephases = pullop.remote.listkeys('phases')
1459 1460 _pullapplyphases(pullop, remotephases)
1460 1461
1461 1462 def _pullapplyphases(pullop, remotephases):
1462 1463 """apply phase movement from observed remote state"""
1463 1464 if 'phases' in pullop.stepsdone:
1464 1465 return
1465 1466 pullop.stepsdone.add('phases')
1466 1467 publishing = bool(remotephases.get('publishing', False))
1467 1468 if remotephases and not publishing:
1468 1469 # remote is new and non-publishing
1469 1470 pheads, _dr = phases.analyzeremotephases(pullop.repo,
1470 1471 pullop.pulledsubset,
1471 1472 remotephases)
1472 1473 dheads = pullop.pulledsubset
1473 1474 else:
1474 1475 # Remote is old or publishing all common changesets
1475 1476 # should be seen as public
1476 1477 pheads = pullop.pulledsubset
1477 1478 dheads = []
1478 1479 unfi = pullop.repo.unfiltered()
1479 1480 phase = unfi._phasecache.phase
1480 1481 rev = unfi.changelog.nodemap.get
1481 1482 public = phases.public
1482 1483 draft = phases.draft
1483 1484
1484 1485 # exclude changesets already public locally and update the others
1485 1486 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
1486 1487 if pheads:
1487 1488 tr = pullop.gettransaction()
1488 1489 phases.advanceboundary(pullop.repo, tr, public, pheads)
1489 1490
1490 1491 # exclude changesets already draft locally and update the others
1491 1492 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
1492 1493 if dheads:
1493 1494 tr = pullop.gettransaction()
1494 1495 phases.advanceboundary(pullop.repo, tr, draft, dheads)
1495 1496
1496 1497 def _pullbookmarks(pullop):
1497 1498 """process the remote bookmark information to update the local one"""
1498 1499 if 'bookmarks' in pullop.stepsdone:
1499 1500 return
1500 1501 pullop.stepsdone.add('bookmarks')
1501 1502 repo = pullop.repo
1502 1503 remotebookmarks = pullop.remotebookmarks
1503 1504 remotebookmarks = bookmod.unhexlifybookmarks(remotebookmarks)
1504 1505 bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
1505 1506 pullop.remote.url(),
1506 1507 pullop.gettransaction,
1507 1508 explicit=pullop.explicitbookmarks)
1508 1509
1509 1510 def _pullobsolete(pullop):
1510 1511 """utility function to pull obsolete markers from a remote
1511 1512
1512 1513 `gettransaction` is a function that returns the pull transaction, creating
1513 1514 one if necessary. We return the transaction to inform the calling code that
1514 1515 a new transaction has been created (when applicable).
1515 1516
1516 1517 Exists mostly to allow overriding for experimentation purpose"""
1517 1518 if 'obsmarkers' in pullop.stepsdone:
1518 1519 return
1519 1520 pullop.stepsdone.add('obsmarkers')
1520 1521 tr = None
1521 1522 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1522 1523 pullop.repo.ui.debug('fetching remote obsolete markers\n')
1523 1524 remoteobs = pullop.remote.listkeys('obsolete')
1524 1525 if 'dump0' in remoteobs:
1525 1526 tr = pullop.gettransaction()
1526 1527 markers = []
1527 1528 for key in sorted(remoteobs, reverse=True):
1528 1529 if key.startswith('dump'):
1529 1530 data = util.b85decode(remoteobs[key])
1530 1531 version, newmarks = obsolete._readmarkers(data)
1531 1532 markers += newmarks
1532 1533 if markers:
1533 1534 pullop.repo.obsstore.add(tr, markers)
1534 1535 pullop.repo.invalidatevolatilesets()
1535 1536 return tr
1536 1537
1537 1538 def caps20to10(repo):
1538 1539 """return a set with appropriate options to use bundle20 during getbundle"""
1539 1540 caps = {'HG20'}
1540 1541 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
1541 1542 caps.add('bundle2=' + urlreq.quote(capsblob))
1542 1543 return caps
1543 1544
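# The returned set has the following shape; the quoted blob is
# repo-dependent and the value shown is illustrative only:
#
#     {'HG20', 'bundle2=HG20%0Achangegroup%3D01%2C02'}
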
1544 1545 # List of names of steps to perform for a bundle2 for getbundle, order matters.
1545 1546 getbundle2partsorder = []
1546 1547
1547 1548 # Mapping between step name and function
1548 1549 #
1549 1550 # This exists to help extensions wrap steps if necessary
1550 1551 getbundle2partsmapping = {}
1551 1552
1552 1553 def getbundle2partsgenerator(stepname, idx=None):
1553 1554 """decorator for function generating bundle2 part for getbundle
1554 1555
1555 1556 The function is added to the step -> function mapping and appended to the
1556 1557 list of steps. Beware that decorated functions will be added in order
1557 1558 (this may matter).
1558 1559
1559 1560 You can only use this decorator for new steps; if you want to wrap a step
1560 1561 from an extension, change the getbundle2partsmapping dictionary directly."""
1561 1562 def dec(func):
1562 1563 assert stepname not in getbundle2partsmapping
1563 1564 getbundle2partsmapping[stepname] = func
1564 1565 if idx is None:
1565 1566 getbundle2partsorder.append(stepname)
1566 1567 else:
1567 1568 getbundle2partsorder.insert(idx, stepname)
1568 1569 return func
1569 1570 return dec
1570 1571
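# A hedged sketch of registering a getbundle part generator (the step name
# and part content are hypothetical; note the different signature from the
# push-side generators above):
#
#     @getbundle2partsgenerator('example')
#     def _getbundleexamplepart(bundler, repo, source, bundlecaps=None,
#                               b2caps=None, **kwargs):
#         bundler.newpart('output', data='example', mandatory=False)
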
1571 1572 def bundle2requested(bundlecaps):
1572 1573 if bundlecaps is not None:
1573 1574 return any(cap.startswith('HG2') for cap in bundlecaps)
1574 1575 return False
1575 1576
1576 1577 def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
1577 1578 **kwargs):
1578 1579 """Return chunks constituting a bundle's raw data.
1579 1580
1580 1581 Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
1581 1582 passed.
1582 1583
1583 1584 Returns an iterator over raw chunks (of varying sizes).
1584 1585 """
1585 1586 kwargs = pycompat.byteskwargs(kwargs)
1586 1587 usebundle2 = bundle2requested(bundlecaps)
1587 1588 # bundle10 case
1588 1589 if not usebundle2:
1589 1590 if bundlecaps and not kwargs.get('cg', True):
1590 1591 raise ValueError(_('request for bundle10 must include changegroup'))
1591 1592
1592 1593 if kwargs:
1593 1594 raise ValueError(_('unsupported getbundle arguments: %s')
1594 1595 % ', '.join(sorted(kwargs.keys())))
1595 1596 outgoing = _computeoutgoing(repo, heads, common)
1596 1597 bundler = changegroup.getbundler('01', repo, bundlecaps)
1597 1598 return changegroup.getsubsetraw(repo, outgoing, bundler, source)
1598 1599
1599 1600 # bundle20 case
1600 1601 b2caps = {}
1601 1602 for bcaps in bundlecaps:
1602 1603 if bcaps.startswith('bundle2='):
1603 1604 blob = urlreq.unquote(bcaps[len('bundle2='):])
1604 1605 b2caps.update(bundle2.decodecaps(blob))
1605 1606 bundler = bundle2.bundle20(repo.ui, b2caps)
1606 1607
1607 1608 kwargs['heads'] = heads
1608 1609 kwargs['common'] = common
1609 1610
1610 1611 for name in getbundle2partsorder:
1611 1612 func = getbundle2partsmapping[name]
1612 1613 func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
1613 1614 **pycompat.strkwargs(kwargs))
1614 1615
1615 1616 return bundler.getchunks()
1616 1617
1617 1618 @getbundle2partsgenerator('changegroup')
1618 1619 def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
1619 1620 b2caps=None, heads=None, common=None, **kwargs):
1620 1621 """add a changegroup part to the requested bundle"""
1621 1622 cg = None
1622 1623 if kwargs.get('cg', True):
1623 1624 # build changegroup bundle here.
1624 1625 version = '01'
1625 1626 cgversions = b2caps.get('changegroup')
1626 1627 if cgversions: # 3.1 and 3.2 ship with an empty value
1627 1628 cgversions = [v for v in cgversions
1628 1629 if v in changegroup.supportedoutgoingversions(repo)]
1629 1630 if not cgversions:
1630 1631 raise ValueError(_('no common changegroup version'))
1631 1632 version = max(cgversions)
1632 1633 outgoing = _computeoutgoing(repo, heads, common)
1633 1634 cg = changegroup.getlocalchangegroupraw(repo, source, outgoing,
1634 1635 bundlecaps=bundlecaps,
1635 1636 version=version)
1636 1637
1637 1638 if cg:
1638 1639 part = bundler.newpart('changegroup', data=cg)
1639 1640 if cgversions:
1640 1641 part.addparam('version', version)
1641 1642 part.addparam('nbchanges', str(len(outgoing.missing)), mandatory=False)
1642 1643 if 'treemanifest' in repo.requirements:
1643 1644 part.addparam('treemanifest', '1')
1644 1645
1645 1646 @getbundle2partsgenerator('listkeys')
1646 1647 def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
1647 1648 b2caps=None, **kwargs):
1648 1649 """add parts containing listkeys namespaces to the requested bundle"""
1649 1650 listkeys = kwargs.get('listkeys', ())
1650 1651 for namespace in listkeys:
1651 1652 part = bundler.newpart('listkeys')
1652 1653 part.addparam('namespace', namespace)
1653 1654 keys = repo.listkeys(namespace).items()
1654 1655 part.data = pushkey.encodekeys(keys)
1655 1656
1656 1657 @getbundle2partsgenerator('obsmarkers')
1657 1658 def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
1658 1659 b2caps=None, heads=None, **kwargs):
1659 1660 """add an obsolescence markers part to the requested bundle"""
1660 1661 if kwargs.get('obsmarkers', False):
1661 1662 if heads is None:
1662 1663 heads = repo.heads()
1663 1664 subset = [c.node() for c in repo.set('::%ln', heads)]
1664 1665 markers = repo.obsstore.relevantmarkers(subset)
1665 1666 markers = sorted(markers)
1666 1667 bundle2.buildobsmarkerspart(bundler, markers)
1667 1668
1668 1669 @getbundle2partsgenerator('hgtagsfnodes')
1669 1670 def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
1670 1671 b2caps=None, heads=None, common=None,
1671 1672 **kwargs):
1672 1673 """Transfer the .hgtags filenodes mapping.
1673 1674
1674 1675 Only values for heads in this bundle will be transferred.
1675 1676
1676 1677 The part data consists of pairs of 20 byte changeset node and .hgtags
1677 1678 filenodes raw values.
1678 1679 """
1679 1680 # Don't send unless:
1680 1681 # - changesets are being exchanged,
1681 1682 # - the client supports it.
1682 1683 if not (kwargs.get('cg', True) and 'hgtagsfnodes' in b2caps):
1683 1684 return
1684 1685
1685 1686 outgoing = _computeoutgoing(repo, heads, common)
1686 1687 bundle2.addparttagsfnodescache(repo, bundler, outgoing)
1687 1688
1688 1689 def _getbookmarks(repo, **kwargs):
1689 1690 """Returns bookmark to node mapping.
1690 1691
1691 1692 This function is primarily used to generate `bookmarks` bundle2 part.
1692 1693 It is a separate function in order to make it easy to wrap it
1693 1694 in extensions. Passing `kwargs` to the function makes it easy to
1694 1695 add new parameters in extensions.
1695 1696 """
1696 1697
1697 1698 return dict(bookmod.listbinbookmarks(repo))
1698 1699
1699 1700 def check_heads(repo, their_heads, context):
1700 1701 """check if the heads of a repo have been modified
1701 1702
1702 1703 Used by peer for unbundling.
1703 1704 """
1704 1705 heads = repo.heads()
1705 1706 heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
1706 1707 if not (their_heads == ['force'] or their_heads == heads or
1707 1708 their_heads == ['hashed', heads_hash]):
1708 1709 # someone else committed/pushed/unbundled while we
1709 1710 # were transferring data
1710 1711 raise error.PushRaced('repository changed while %s - '
1711 1712 'please try again' % context)
1712 1713
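# A client-side sketch of producing the 'hashed' form accepted above,
# mirroring the digest computed in check_heads():
#
#     heads_hash = hashlib.sha1(''.join(sorted(repo.heads()))).digest()
#     their_heads = ['hashed', heads_hash]
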
1713 1714 def unbundle(repo, cg, heads, source, url):
1714 1715 """Apply a bundle to a repo.
1715 1716
1716 1717 This function makes sure the repo is locked during the application and has
1717 1718 a mechanism to check that no push race occurred between the creation of the
1718 1719 bundle and its application.
1719 1720
1720 1721 If the push was raced, a PushRaced exception is raised."""
1721 1722 r = 0
1722 1723 # need a transaction when processing a bundle2 stream
1723 1724 # [wlock, lock, tr] - needs to be an array so nested functions can modify it
1724 1725 lockandtr = [None, None, None]
1725 1726 recordout = None
1726 1727 # quick fix for output mismatch with bundle2 in 3.4
1727 1728 captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture',
1728 1729 False)
1729 1730 if url.startswith('remote:http:') or url.startswith('remote:https:'):
1730 1731 captureoutput = True
1731 1732 try:
1732 1733 # note: outside bundle1, 'heads' is expected to be empty and this
1733 1734 # 'check_heads' call will be a no-op
1734 1735 check_heads(repo, heads, 'uploading changes')
1735 1736 # push can proceed
1736 1737 if not isinstance(cg, bundle2.unbundle20):
1737 1738 # legacy case: bundle1 (changegroup 01)
1738 1739 txnname = "\n".join([source, util.hidepassword(url)])
1739 1740 with repo.lock(), repo.transaction(txnname) as tr:
1740 r = cg.apply(repo, tr, source, url)
1741 r, addednodes = cg.apply(repo, tr, source, url)
1741 1742 else:
1742 1743 r = None
1743 1744 try:
1744 1745 def gettransaction():
1745 1746 if not lockandtr[2]:
1746 1747 lockandtr[0] = repo.wlock()
1747 1748 lockandtr[1] = repo.lock()
1748 1749 lockandtr[2] = repo.transaction(source)
1749 1750 lockandtr[2].hookargs['source'] = source
1750 1751 lockandtr[2].hookargs['url'] = url
1751 1752 lockandtr[2].hookargs['bundle2'] = '1'
1752 1753 return lockandtr[2]
1753 1754
1754 1755 # Do greedy locking by default until we're satisfied with lazy
1755 1756 # locking.
1756 1757 if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
1757 1758 gettransaction()
1758 1759
1759 1760 op = bundle2.bundleoperation(repo, gettransaction,
1760 1761 captureoutput=captureoutput)
1761 1762 try:
1762 1763 op = bundle2.processbundle(repo, cg, op=op)
1763 1764 finally:
1764 1765 r = op.reply
1765 1766 if captureoutput and r is not None:
1766 1767 repo.ui.pushbuffer(error=True, subproc=True)
1767 1768 def recordout(output):
1768 1769 r.newpart('output', data=output, mandatory=False)
1769 1770 if lockandtr[2] is not None:
1770 1771 lockandtr[2].close()
1771 1772 except BaseException as exc:
1772 1773 exc.duringunbundle2 = True
1773 1774 if captureoutput and r is not None:
1774 1775 parts = exc._bundle2salvagedoutput = r.salvageoutput()
1775 1776 def recordout(output):
1776 1777 part = bundle2.bundlepart('output', data=output,
1777 1778 mandatory=False)
1778 1779 parts.append(part)
1779 1780 raise
1780 1781 finally:
1781 1782 lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
1782 1783 if recordout is not None:
1783 1784 recordout(repo.ui.popbuffer())
1784 1785 return r
1785 1786
1786 1787 def _maybeapplyclonebundle(pullop):
1787 1788 """Apply a clone bundle from a remote, if possible."""
1788 1789
1789 1790 repo = pullop.repo
1790 1791 remote = pullop.remote
1791 1792
1792 1793 if not repo.ui.configbool('ui', 'clonebundles', True):
1793 1794 return
1794 1795
1795 1796 # Only run if local repo is empty.
1796 1797 if len(repo):
1797 1798 return
1798 1799
1799 1800 if pullop.heads:
1800 1801 return
1801 1802
1802 1803 if not remote.capable('clonebundles'):
1803 1804 return
1804 1805
1805 1806 res = remote._call('clonebundles')
1806 1807
1807 1808 # If we call the wire protocol command, that's good enough to record the
1808 1809 # attempt.
1809 1810 pullop.clonebundleattempted = True
1810 1811
1811 1812 entries = parseclonebundlesmanifest(repo, res)
1812 1813 if not entries:
1813 1814 repo.ui.note(_('no clone bundles available on remote; '
1814 1815 'falling back to regular clone\n'))
1815 1816 return
1816 1817
1817 1818 entries = filterclonebundleentries(repo, entries)
1818 1819 if not entries:
1819 1820 # There is a thundering herd concern here. However, if a server
1820 1821 # operator doesn't advertise bundles appropriate for its clients,
1821 1822 # they deserve what's coming. Furthermore, from a client's
1822 1823 # perspective, no automatic fallback would mean not being able to
1823 1824 # clone!
1824 1825 repo.ui.warn(_('no compatible clone bundles available on server; '
1825 1826 'falling back to regular clone\n'))
1826 1827 repo.ui.warn(_('(you may want to report this to the server '
1827 1828 'operator)\n'))
1828 1829 return
1829 1830
1830 1831 entries = sortclonebundleentries(repo.ui, entries)
1831 1832
1832 1833 url = entries[0]['URL']
1833 1834 repo.ui.status(_('applying clone bundle from %s\n') % url)
1834 1835 if trypullbundlefromurl(repo.ui, repo, url):
1835 1836 repo.ui.status(_('finished applying clone bundle\n'))
1836 1837 # Bundle failed.
1837 1838 #
1838 1839 # We abort by default to avoid the thundering herd of
1839 1840 # clients flooding a server that was expecting expensive
1840 1841 # clone load to be offloaded.
1841 1842 elif repo.ui.configbool('ui', 'clonebundlefallback', False):
1842 1843 repo.ui.warn(_('falling back to normal clone\n'))
1843 1844 else:
1844 1845 raise error.Abort(_('error applying bundle'),
1845 1846 hint=_('if this error persists, consider contacting '
1846 1847 'the server operator or disable clone '
1847 1848 'bundles via '
1848 1849 '"--config ui.clonebundles=false"'))
1849 1850
1850 1851 def parseclonebundlesmanifest(repo, s):
1851 1852 """Parses the raw text of a clone bundles manifest.
1852 1853
1853 1854 Returns a list of dicts. The dicts have a ``URL`` key corresponding
1854 1855 to the URL; the other keys are the attributes for the entry.
1855 1856 """
1856 1857 m = []
1857 1858 for line in s.splitlines():
1858 1859 fields = line.split()
1859 1860 if not fields:
1860 1861 continue
1861 1862 attrs = {'URL': fields[0]}
1862 1863 for rawattr in fields[1:]:
1863 1864 key, value = rawattr.split('=', 1)
1864 1865 key = urlreq.unquote(key)
1865 1866 value = urlreq.unquote(value)
1866 1867 attrs[key] = value
1867 1868
1868 1869 # Parse BUNDLESPEC into components. This makes client-side
1869 1870 # preferences easier to specify since you can prefer a single
1870 1871 # component of the BUNDLESPEC.
1871 1872 if key == 'BUNDLESPEC':
1872 1873 try:
1873 1874 comp, version, params = parsebundlespec(repo, value,
1874 1875 externalnames=True)
1875 1876 attrs['COMPRESSION'] = comp
1876 1877 attrs['VERSION'] = version
1877 1878 except error.InvalidBundleSpecification:
1878 1879 pass
1879 1880 except error.UnsupportedBundleSpecification:
1880 1881 pass
1881 1882
1882 1883 m.append(attrs)
1883 1884
1884 1885 return m
1885 1886
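# For illustration, a manifest line such as this (URL and attributes are
# made up):
#
#     https://example.com/full.hg BUNDLESPEC=gzip-v2 REQUIRESNI=true
#
# parses to a dict like the following, with COMPRESSION and VERSION
# derived from BUNDLESPEC:
#
#     {'URL': 'https://example.com/full.hg', 'BUNDLESPEC': 'gzip-v2',
#      'COMPRESSION': 'gzip', 'VERSION': 'v2', 'REQUIRESNI': 'true'}
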
1886 1887 def filterclonebundleentries(repo, entries):
1887 1888 """Remove incompatible clone bundle manifest entries.
1888 1889
1889 1890 Accepts a list of entries parsed with ``parseclonebundlesmanifest``
1890 1891 and returns a new list consisting of only the entries that this client
1891 1892 should be able to apply.
1892 1893
1893 1894 There is no guarantee we'll be able to apply all returned entries because
1894 1895 the metadata we use to filter on may be missing or wrong.
1895 1896 """
1896 1897 newentries = []
1897 1898 for entry in entries:
1898 1899 spec = entry.get('BUNDLESPEC')
1899 1900 if spec:
1900 1901 try:
1901 1902 parsebundlespec(repo, spec, strict=True)
1902 1903 except error.InvalidBundleSpecification as e:
1903 1904 repo.ui.debug(str(e) + '\n')
1904 1905 continue
1905 1906 except error.UnsupportedBundleSpecification as e:
1906 1907 repo.ui.debug('filtering %s because unsupported bundle '
1907 1908 'spec: %s\n' % (entry['URL'], str(e)))
1908 1909 continue
1909 1910
1910 1911 if 'REQUIRESNI' in entry and not sslutil.hassni:
1911 1912 repo.ui.debug('filtering %s because SNI not supported\n' %
1912 1913 entry['URL'])
1913 1914 continue
1914 1915
1915 1916 newentries.append(entry)
1916 1917
1917 1918 return newentries
1918 1919
1919 1920 class clonebundleentry(object):
1920 1921 """Represents an item in a clone bundles manifest.
1921 1922
1922 1923 This rich class is needed to support sorting since sorted() in Python 3
1923 1924 doesn't support ``cmp`` and our comparison is complex enough that ``key=``
1924 1925 won't work.
1925 1926 """
1926 1927
1927 1928 def __init__(self, value, prefers):
1928 1929 self.value = value
1929 1930 self.prefers = prefers
1930 1931
1931 1932 def _cmp(self, other):
1932 1933 for prefkey, prefvalue in self.prefers:
1933 1934 avalue = self.value.get(prefkey)
1934 1935 bvalue = other.value.get(prefkey)
1935 1936
1936 1937 # Special case for b missing attribute and a matches exactly.
1937 1938 if avalue is not None and bvalue is None and avalue == prefvalue:
1938 1939 return -1
1939 1940
1940 1941 # Special case for a missing attribute and b matches exactly.
1941 1942 if bvalue is not None and avalue is None and bvalue == prefvalue:
1942 1943 return 1
1943 1944
1944 1945 # We can't compare unless attribute present on both.
1945 1946 if avalue is None or bvalue is None:
1946 1947 continue
1947 1948
1948 1949 # Same values should fall back to next attribute.
1949 1950 if avalue == bvalue:
1950 1951 continue
1951 1952
1952 1953 # Exact matches come first.
1953 1954 if avalue == prefvalue:
1954 1955 return -1
1955 1956 if bvalue == prefvalue:
1956 1957 return 1
1957 1958
1958 1959 # Fall back to next attribute.
1959 1960 continue
1960 1961
1961 1962 # If we got here we couldn't sort by attributes and prefers. Fall
1962 1963 # back to index order.
1963 1964 return 0
1964 1965
1965 1966 def __lt__(self, other):
1966 1967 return self._cmp(other) < 0
1967 1968
1968 1969 def __gt__(self, other):
1969 1970 return self._cmp(other) > 0
1970 1971
1971 1972 def __eq__(self, other):
1972 1973 return self._cmp(other) == 0
1973 1974
1974 1975 def __le__(self, other):
1975 1976 return self._cmp(other) <= 0
1976 1977
1977 1978 def __ge__(self, other):
1978 1979 return self._cmp(other) >= 0
1979 1980
1980 1981 def __ne__(self, other):
1981 1982 return self._cmp(other) != 0
1982 1983
1983 1984 def sortclonebundleentries(ui, entries):
1984 1985 prefers = ui.configlist('ui', 'clonebundleprefers')
1985 1986 if not prefers:
1986 1987 return list(entries)
1987 1988
1988 1989 prefers = [p.split('=', 1) for p in prefers]
1989 1990
1990 1991 items = sorted(clonebundleentry(v, prefers) for v in entries)
1991 1992 return [i.value for i in items]
1992 1993
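# For illustration, with a configuration such as:
#
#     [ui]
#     clonebundleprefers = VERSION=v2, COMPRESSION=gzip
#
# entries whose VERSION is 'v2' sort first, ties are broken by
# COMPRESSION being 'gzip', and remaining ties fall back to manifest
# order.
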
1993 1994 def trypullbundlefromurl(ui, repo, url):
1994 1995 """Attempt to apply a bundle from a URL."""
1995 1996 with repo.lock(), repo.transaction('bundleurl') as tr:
1996 1997 try:
1997 1998 fh = urlmod.open(ui, url)
1998 1999 cg = readbundle(ui, fh, 'stream')
1999 2000
2000 2001 if isinstance(cg, bundle2.unbundle20):
2001 2002 bundle2.applybundle(repo, cg, tr, 'clonebundles', url)
2002 2003 elif isinstance(cg, streamclone.streamcloneapplier):
2003 2004 cg.apply(repo)
2004 2005 else:
2005 2006 cg.apply(repo, tr, 'clonebundles', url)
2006 2007 return True
2007 2008 except urlerr.httperror as e:
2008 2009 ui.warn(_('HTTP error fetching bundle: %s\n') % str(e))
2009 2010 except urlerr.urlerror as e:
2010 2011 ui.warn(_('error fetching bundle: %s\n') % e.reason)
2011 2012
2012 2013 return False